White House Takes Extraordinary Measures to Regulate Burgeoning US AI Sector
Fact Checked by Robin Hackney
Note: This post is an update. To understand the basics of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, see this blog post.
Ninety days ago the Biden White House issued a groundbreaking Executive Order designed to provide a roadmap for AI regulation across the US and to govern the adoption of AI within US government agencies. At the 90-day milepost, the White House released a progress report on the AI efforts outlined in the original Executive Order, as well as additional actions the US government is taking to tackle AI regulation. It’s good to see the US government invested in the subject, but there are definitely areas where executive powers are being extended in ways that call into question the White House’s overall approach.
So what do you need to know today on this important action? Let’s dive in.
White House Executive Order on AI: Use of Emergency Powers Under the Defense Production Act (DPA) to Regulate AI
In a move many are already questioning, the administration invoked the Defense Production Act (DPA) to compel US tech companies to provide proprietary details on their AI models and model development programs to the Commerce Department. The controversy centers on the use of what is supposed to be an emergency power, but one that Presidents have recently begun to invoke as part of a broadening of executive authority (most recently by both Donald Trump and Joe Biden to speed up the federal Covid-19 response).
“The Defense Production Act is about production — it has it in the title — and not restriction,” Adam Thierer, a senior fellow at the free-market R Street Institute, told POLITICO in their reporting.
The use of these emergency powers will face pushback from technology companies and their trade groups, so it’s not certain that this executive action will remain in effect long-term. Still, companies developing ‘powerful AI systems’ will need to be aware of these new requirements, particularly because a recent article from Wired suggests that this regulation could be enforced as early as this week. Note that ‘powerful AI systems’ is in quotes because the administration hasn’t yet defined exactly which systems qualify for this requirement. The exact quote from the administration:
“Used Defense Production Act authorities to compel developers of the most powerful AI systems to report vital information, especially AI safety test results, to the Department of Commerce. These companies now must share this information on the most powerful AI systems, and they must likewise report large computing clusters able to train these systems.”
What’s also important to note here is that this action likely telegraphs the administration’s assumption that Congress won’t be able to move on a comprehensive AI bill of rights or other regulation, even on important matters such as those related to national defense. Given the low likelihood of comprehensive federal action, the administration is relying on what it considers its best option for taking some action at this time.
What is good to see here, though, is the continued focus on compelling rigorous safety testing of all newly developed AI systems, further ensuring the responsible advancement of AI technology. As safety-standard testing becomes embedded in the culture of AI development, we should see more uniform and improved approaches that make it easier and more affordable for all enterprises to conduct safety testing in their AI model development efforts.
Foreign National Use of US AI Cloud Computing Coming Under the Umbrella of Know Your Customer (KYC)
The other big proposal from the White House is that cloud companies must determine whether foreign entities or nationals are accessing US AI data centers and related training environments, and disclose which ones are. Reuters reported that Commerce Secretary Gina Raimondo said, “We can’t have non-state actors or China or folks who we don’t want accessing our cloud to train their models.”
To enforce this proposal, the administration published an updated proposal on Know Your Customer (KYC) requirements, which are typically applied to financial and other sensitive verticals. The updated proposal would require cloud computing companies to adhere to many of the same identity-confirmation standards that financial institutions must follow to prevent illegal activity like money laundering. Going forward, foreign entities and nationals using US cloud computing would be required to provide the same level of documentation that current KYC programs require, and cloud computing companies would have to certify annual compliance with the program.
US Agencies Met Their Timeline Benchmarks Set Forth Under the Executive Order
In a bit of self-congratulation, the administration reported that US agencies and entities completed all of their assigned tasks within the timelines set in the Executive Order. Most notably, nine critical agencies covering AI’s use in critical infrastructure sectors, including the Department of Defense, the Department of Transportation, the Department of the Treasury, and the Department of Health and Human Services, completed their AI risk assessments and reported them to the Department of Homeland Security.
Other notable US agency activity completed and reported by the White House included:
Launched a pilot of the National AI Research Resource — catalyzing broad-based innovation, competition, and more equitable access to AI research. The pilot, managed by the U.S. National Science Foundation (NSF), is the first step toward a national infrastructure for delivering computing power, data, software, access to open and proprietary AI models, and other AI training resources to researchers and students.
Launched an AI Talent Surge to accelerate hiring AI professionals across the federal government, including through a large-scale hiring action for data scientists. The Office of Personnel Management has granted flexible hiring authorities for federal agencies to hire AI talent, including direct hire authorities and excepted service authorities.
Began the EducateAI initiative to help fund educators creating high-quality, inclusive AI educational opportunities at the K-12 through undergraduate levels. The initiative’s launch helps fulfill the Executive Order’s charge for NSF to prioritize AI-related workforce development—essential for advancing future AI innovation and ensuring that all Americans can benefit from the opportunities that AI creates.
Announced the funding of new Regional Innovation Engines (NSF Engines), including with a focus on advancing AI. For example, with an initial investment of $15 million over two years and up to $160 million over the next decade, the Piedmont Triad Regenerative Medicine Engine will tap the world’s largest regenerative medicine cluster to create and scale breakthrough clinical therapies, including by leveraging AI.
Established an AI Task Force at the Department of Health and Human Services to develop policies to provide regulatory clarity and catalyze AI innovation in health care. The Task Force will, for example, develop methods of evaluating AI-enabled tools and frameworks for AI’s use to advance drug development, bolster public health, and improve health care delivery.