White House AI ‘Bill of Rights’ may risk national security

The White House Office of Science and Technology Policy (OSTP) proposes guidelines for the use of artificial intelligence in its Blueprint for an AI Bill of Rights. While the blueprint highlights the basic rights and principles of our democracy and lists examples of harm that AI can cause, it fails to address how to put those principles into practice without upsetting one of the most powerful parts of the US high-tech economy: its innovation ecosystem.

Compared to the European Union and China, America has a fundamentally different economic relationship with its technology landscape: the US innovates, the EU regulates, and China is determined to lead. The US significantly outperforms Europe in almost every AI metric, from scientific paper citations to venture capital dollars to commercial activity. Meanwhile, AI is a key domain of economic and military competition between the US and the Chinese Communist Party.

In an attempt to come out ahead in this competition, China is investing heavily in AI and making it a linchpin of both its commercial and national security sectors. The CCP has closed the gap with the US along many dimensions through the clever deployment of government guidance funds and whole-of-nation industrial policies, including Made in China 2025 (2015) and the New Generation AI Development Plan (2017).

Legislating AI simply because it’s the zeitgeist is dangerous to American competitiveness and national security. Following the EU’s lead into a regulatory quagmire could hamper the speed of American innovation in AI, limiting the country’s ability to compete economically and militarily with China.


The OSTP document has glaring flaws in four areas: algorithmic fairness, data privacy, regulation, and overly broad and vague definitions.

Fair algorithms

How do you define a “fair algorithm?” The blueprint focuses on protection from algorithmic discrimination but fails to provide a workable empirical definition. Equity as an abstract concept seems easy to understand, but as a quantitative definition it is much murkier. Princeton computer scientist Arvind Narayanan has catalogued 21 different definitions of fairness. Simply put, how can automated systems be evaluated, and the OSTP guidelines implemented, without a clear target metric?
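To see how slippery the target is, consider a toy hiring model scored against two widely used criteria, demographic parity and equal opportunity. The sketch below uses invented numbers purely for illustration and is not drawn from the blueprint; the point is that the same model can pass one metric while failing the other.

```python
# Illustrative only: a toy classifier scored against two common fairness
# metrics. The applicant data are hypothetical; the point is that the
# two metrics disagree about the same set of decisions.
import numpy as np

# Hypothetical outcomes for two groups (1 = qualified, 0 = not qualified).
y_true_a = np.array([1] * 60 + [0] * 40)   # 60% of group A are qualified
y_true_b = np.array([1] * 30 + [0] * 70)   # 30% of group B are qualified

# The model selects 50 applicants from each group of 100.
y_pred_a = np.array([1] * 50 + [0] * 50)
y_pred_b = np.array([1] * 50 + [0] * 50)

# Demographic parity: equal selection rates across groups.
print(f"selection rates: A={y_pred_a.mean():.2f}, B={y_pred_b.mean():.2f}")
# -> 0.50 vs 0.50: "fair" by this definition.

# Equal opportunity: equal true-positive rates among the qualified.
tpr_a = y_pred_a[y_true_a == 1].mean()
tpr_b = y_pred_b[y_true_b == 1].mean()
print(f"true-positive rates: A={tpr_a:.2f}, B={tpr_b:.2f}")
# -> roughly 0.83 vs 1.00: "unfair" by this definition.
```

Whichever definition OSTP intends, the blueprint never says, and the choice changes which systems count as discriminatory.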

By contrast, China’s forays into AI regulation, such as the Internet Information Service Algorithmic Recommendation Management Provisions (2022), do place some restrictions on the use of AI. While this policy has surface-level parallels with EU law, its restrictions do not apply to the Chinese government. As Russell Wald, director of policy at Stanford’s Institute for Human-Centered Artificial Intelligence, says, this “regulation [is] aimed at the benefit of the regime.”

Data privacy

The White House suggests that companies should allow users to withdraw consent for the use of their data, and that when a user does so, companies should remove that data from any machine learning models built with it.

Retraining every AI algorithm in every product and service each time a user requests it is not economically feasible. Does this mean that companies like Amazon, Netflix and social media networks will have to rebuild their recommendation systems every time someone deletes their data? The blueprint is not clear on this point. The economic and operational impact of these questions is likely to be significant, and mismanaging them could allow Chinese competitors to overtake American companies.


Regulation

Given the increasing prevalence and ubiquity of AI and automated systems, the OSTP blueprint would place a heavy burden on a large part of the existing economy, stifling innovation. AI startups could take years, rather than weeks or months, to launch. Given declining R&D spending by the US federal government (now roughly one-third of Cold War levels), this has direct implications for the innovation ecosystem underpinning American national security.

Ambiguous definitions

OSTP’s definition of automated systems covers “any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.” What modern electronic product or service is not covered by this definition?

This implies a dystopian nightmare of pre-deployment consultation on data use, community input, pre-deployment testing and assessment, ongoing monitoring and reporting, independent evaluation, opt-out and data removal, and timely human alternatives. It would create a huge burden not only for the tech industry, but for any sector that plausibly touches AI or “automated” systems. It is, in effect, de-automating automation at enormous administrative and economic cost.

Privacy-preserving machine learning

The goals of the White House blueprint are noble. AI must be used responsibly, in a way that benefits society and avoids the illiberal purposes for which China is deploying the technology. But instead of accomplishing this through procedural checks, the US government could promote non-regulatory protections such as privacy-preserving machine learning, or PPML. This family of methods, including synthetic data generation, differential privacy, federated learning and edge processing, would address some of the blueprint’s core concerns without slowing the pace of innovation.
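Differential privacy, for example, works by adding carefully calibrated statistical noise so that a published statistic or model output reveals almost nothing about any single person’s record. The minimal sketch below shows the basic idea with a Laplace mechanism on a toy dataset; the data and privacy budget are hypothetical, and real deployments should rely on audited libraries such as OpenDP rather than hand-rolled noise.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# Hypothetical data and epsilon; production systems should use vetted
# libraries, not this toy.
import numpy as np

rng = np.random.default_rng(0)

ages = np.array([23, 35, 41, 29, 52, 47, 31, 38])  # made-up user records
epsilon = 0.5          # privacy budget: smaller = stronger privacy, more noise
lower, upper = 18, 90  # assumed clipping bounds on each record

# Sensitivity of the mean: the most any one person can shift it.
sensitivity = (upper - lower) / len(ages)

true_mean = np.clip(ages, lower, upper).mean()
noisy_mean = true_mean + rng.laplace(scale=sensitivity / epsilon)

print(f"true mean:  {true_mean:.2f}")
print(f"noisy mean: {noisy_mean:.2f}  (the value safe to release)")
```

Federated learning and edge processing apply the same principle further upstream: raw data stays on the user’s device, and only aggregated or noised model updates are shared.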


PPML certainly does not address every issue highlighted in the Blueprint for an AI Bill of Rights, but it offers a plausible alternative to legislation that could hobble American AI. In doing so, it could become a template for non-regulatory mechanisms that protect the public while mitigating the national security risks of slowing AI innovation in our continued competition with China.

If the Biden administration is serious about demonstrating thought leadership equal to America’s technological leadership, it must move beyond idealistic principles. Otherwise, the next generation of leading AI and automated systems will be built by our great power competitors.

Jonah Cader is a graduate fellow at Stanford University’s Institute for Human-Centered Artificial Intelligence. As a management consultant at McKinsey & Company, he works across the US and China, leading strategy projects for companies throughout the high-tech value chain.

Have an Opinion?

This article is an Op-Ed and the opinions expressed are those of the author. If you would like to respond, or have an editorial of your own that you would like to submit, please email C4ISRNET and Federal Times Senior Managing Editor Cary O’Reilly.
