What business leaders need to know about the Biden Administration's Executive Order 14110 on AI
Going beyond the press releases.
Much fanfare accompanied the Biden Administration’s recently released executive order (EO) 14110 on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
In this post, I examine its immediate and long-term impacts for business leaders.
I have organized this post around the EO’s “policy and principles,” each of which is essentially a section. I’ll gloss over only those sections that aren’t immediately applicable to business leaders.
(Update: 3 December 2023) For a detailed breakdown of all the EO requirements in a structured format, check out this spreadsheet Stanford University is maintaining.
Artificial Intelligence must be safe and secure
Developing Guidelines, Standards, and Best Practices for AI Safety and Security
The EO directs the National Institute of Standards and Technology (NIST) to build out the existing AI RMF with additional guidance focused on:
Auditing
Red-teaming
Generative AI
Secure software development
Expect all of these things to trickle out over the next year or so and become part of the emerging “industry standards” as they relate to AI security.
The order directs the Department of Energy and National Science Foundation to look into privacy-enhancing technologies (PET) as they relate to AI systems. The former is also required to look into “AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards,” which I consider to be a good use of government resources.
Dual use models
This is a big one, because it uses existing legislation (the Defense Production Act) to require companies intending to develop “dual-use foundation models” to report to the federal government on:
Physical and cyber security measures
Ownership of model weights
Red team test results
The EO defines a “dual-use foundation model” as one meeting all of the criteria below (see the sketch after this list for one way a team might encode them):
Trained on “broad data”
Containing 10B+ parameters
That could pose national security risks by:
Making it easier to build weapons of mass destruction
Facilitating offensive cyber operations through automated vulnerability discovery and exploitation
Permitting evasion of human oversight
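For teams wondering whether they fall under this reporting requirement, here is a minimal sketch, with entirely hypothetical names, of how the trigger might be encoded. The 10-billion-parameter threshold is my paraphrase of the EO’s “tens of billions,” not a figure from the order itself:

```python
from dataclasses import dataclass

# Hypothetical model of the EO's dual-use reporting trigger.
# All field names are illustrative, not drawn from the order.
@dataclass
class ModelProfile:
    trained_on_broad_data: bool
    parameter_count: int
    wmd_uplift_risk: bool        # makes WMD development easier
    offensive_cyber_risk: bool   # automated vuln discovery/exploitation
    oversight_evasion_risk: bool # permits evasion of human oversight

def may_require_reporting(m: ModelProfile) -> bool:
    """All criteria must hold: broad training data, roughly 10B+
    parameters, and at least one enumerated national-security risk."""
    national_security_risk = (
        m.wmd_uplift_risk
        or m.offensive_cyber_risk
        or m.oversight_evasion_risk
    )
    return (
        m.trained_on_broad_data
        and m.parameter_count >= 10_000_000_000
        and national_security_risk
    )
```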
Computing clusters
The EO also requires reporting to the federal government of the acquisition of any “potential large-scale computing cluster,” but doesn’t define the term. Rather, it requires the Commerce, Energy, Defense, and Intelligence departments to come up with a definition.
IaaS provider reporting
This is an interesting one, similar to the above, because it requires the Department of Commerce to come up with a reporting framework to identify if foreign individuals or entities are using U.S.-based Infrastructure-as-a-Service (IaaS) providers to train AI models with any potential use in offensive cyber operations.
I can see the beginnings of a “know your customer” (KYC) regime for technology providers. As American technological infrastructure itself increasingly becomes part of the digital battlefield, I understand the objective here.
With that said, being an entrepreneur has shown me that the regulatory friction involved in starting a new business, or even a project, is massive. I hope this KYC regime won’t be too onerous.
Managing AI in Critical Infrastructure and in Cybersecurity
This mainly directs Departments and Agencies to develop reports and guidelines (partially based on the NIST AI RMF). But hidden within it is an important requirement:
the Assistant to the President for National Security Affairs and the Director of OMB, in consultation with the Secretary of Homeland Security, shall coordinate work by the heads of agencies with authority over critical infrastructure to develop and take steps for the Federal Government to mandate such guidelines.
Thus, if you are in an industry meeting the definition of “critical infrastructure,” pay close attention to these standards and rule-making processes.
Additionally, the order requires the Secretary of Homeland Security to deploy a pilot program for finding and fixing vulnerabilities in U.S. government networks. To that, I would advise you to read these articles on federal vulnerability management practices and look at this meme:
That about sums it up for me.
Reducing Risks at the Intersection of AI and CBRN Threats
This will mainly apply to biotech companies. It requires government agencies to establish a framework for understanding the intersection of AI and the development of weapons of mass destruction, and it will specifically require any organization receiving government grants to adhere to this framework.
Reducing the Risks Posed by Synthetic Content
According to the EO, the
Administration will help develop effective labeling and content provenance mechanisms, so that Americans are able to determine when content is generated using AI and when it is not.
I’ve already expressed my skepticism about watermarking and similar efforts, but this section does mention “digital content authentication,” which I believe to be the correct approach.
There is almost zero chance of a watermarking regime emerging during the Biden Administration (even assuming it continues to 2029), or ever.
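To illustrate why I prefer authentication: instead of tagging AI output and hoping the tag survives editing, you cryptographically sign known-good content at creation time, so any later tampering is detectable. Here’s a minimal sketch of the sign/verify core, assuming Python’s third-party cryptography package; real provenance standards such as C2PA embed signed metadata in the file itself:

```python
# Minimal sketch of signature-based content authentication.
# Requires the third-party package: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # held by the content creator
public_key = private_key.public_key()       # distributed to verifiers

content = b"Authentic photo bytes go here"
signature = private_key.sign(content)       # shipped alongside the content

try:
    public_key.verify(signature, content)   # raises if content was altered
    print("Content is authentic")
except InvalidSignature:
    print("Content was modified or is unsigned")
```

The design point: verifiers need only the public key, and changing a single byte of signed content breaks verification, which is a far stronger guarantee than hoping a watermark survives re-encoding or cropping.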
Promoting Safe Release and Preventing the Malicious Use of Federal Data for AI Training
Looks like Uncle Sam is taking the risk of sensitive data generation seriously, which he should. This basically requires security reviews of publicly available federal government data to prevent it from being used in the development of offensive weapons.
Good move.
Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges
Attracting AI Talent to the United States
The goal here is to open up immigration for non-Americans with AI skills, which could relieve some of the wage pressure tech companies face when hiring AI talent. And because it relies exclusively on executive authority, it will be faster, but less comprehensive, than Congress changing the law.
Promoting Innovation
This requires establishing a variety of new initiatives, such as a “pilot program implementing the National AI Research Resource (NAIRR).” Much more could be done by slashing existing regulation and red tape than by creating new programs, so I would have liked to see more on that side of things.
What is useful, however, is the requirement that the U.S. Patent and Trademark Office (USPTO) address:
inventorship and the use of AI, including generative AI, in the inventive process, including illustrative examples in which AI systems play different roles in inventive processes and how, in each example, inventorship issues ought to be analyzed;
Additionally, the USPTO will need to:
consult with the Director of the United States Copyright Office and issue recommendations to the President on potential executive actions relating to copyright and AI. The recommendations shall address any copyright and related issues discussed in the United States Copyright Office’s study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.
Clarity around the IP ownership of AI-generated or -enabled products will be a key issue for businesses, and I would love to see some resolution here. Considering that it took almost a year for a one-word trademark (StackAware) to be approved, though, I’m not optimistic that the 120- and 270-day timelines (for each requirement, respectively) are realistic.
And frankly, I think the copyright issue is going to get resolved de facto by the indemnifications that big AI companies are offering. I’m not a lawyer, but I think this article lays out an interesting framework for how this question will end up.
Promoting Competition
This will be important for “the Seven” and anyone else on their scale. Specifically, it directs the Federal Trade Commission (FTC) to “consider…whether to exercise the Commission’s existing authorities…to ensure fair competition in the AI marketplace.” Considering how active (but not necessarily successful) the FTC under Lina Khan has been in antitrust matters, I expect a lot of activity here.
And some industry experts aren’t necessarily happy about this.
The responsible development and use of AI require a commitment to supporting American workers
All motherhood and apple pie here, but on a larger scale, I view automation in general, and AI in particular, as a way to essentially replace labor with capital. As the recent Hollywood writers’ strikes have shown, labor is becoming more aware of this problem.
Expect disruptions and stronger demands by labor related to IP assignment as a result. The order also puts employers on notice that they need to comply with existing law and regulations when it comes to worker surveillance and hiring.
Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights
Aside from pointing out that most federal government statements on this topic talk nonsensically about eliminating “bias” from AI models (which is impossible) and addressing “discrimination” (without defining it), there is very little actionable guidance here for business leaders.
As with many things where an appointee or bureaucrat “knows it when he sees it,” regulation through enforcement is going to be how things get clarified here. Stay on the lookout for court decisions or regulatory actions on this topic, and adjust accordingly.
The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected
A key issue, which receives little attention amid all the talk of AI-related harm, is balancing risk against reward. The EO alludes to it but doesn’t get into details. As a heuristic for the Administration’s view: the word “risk” appears 101 times in the order, while the words “reward,” “balance,” and “tradeoff” don’t appear at all.
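If you want to reproduce this heuristic, here’s a minimal sketch, assuming you’ve saved the order’s text locally (the eo-14110.txt path is hypothetical); exact counts will vary with how you treat plurals and derived forms like “risks”:

```python
import re

# Count risk/reward vocabulary in the EO's text.
# "eo-14110.txt" is a hypothetical local copy of the order.
text = open("eo-14110.txt", encoding="utf-8").read().lower()

for word in ["risk", "reward", "balance", "tradeoff"]:
    # \b word boundaries count whole-word matches only
    count = len(re.findall(rf"\b{word}\b", text))
    print(f"{word}: {count}")
```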
For a detailed analysis of how to consider this problem, check out this piece on the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF).
Americans’ privacy and civil liberties must be protected as AI continues advancing
This is going to be a big one, as AI is already having major privacy implications.
While the European Union, using the General Data Protection Regulation (GDPR), will be the most active player in this respect, I can see U.S. federal, state, or even local government action when it comes to AI and privacy.
The EO itself, though, focuses almost entirely on governmental actions, rather than regulating how the private sector should capture or process data. That’s all well and good, but I think there is a lot more work to be done in this space.
Due to legislative gridlock in the U.S., though, it will mostly play out in the form of regulatory action or court decisions, the least predictable and least desirable way to answer important societal problems.
Manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans
It would be great to see the federal government become more efficient by using AI, but I think this is going to be an extremely slow process. I predict it will be led by the Department of Defense, with other agencies lagging behind.
The key question will be whether the federal workforce, which has an extremely high concentration of white-collar workers, will shrink at all. Optimally there would be some efficiencies generated here, resulting in a lower tax and regulatory burden, but I don’t see that happening.
The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change
This hasn’t really happened since the Manhattan Project (if you can call that “progress”), and I think the best thing the government can do is stay out of the way. More programs and projects just consume tax dollars and create self-perpetuating bureaucracies. Cutting red tape and focusing mainly on existential risk is the best way to achieve this goal.