Enacted

US Executive Order 14110 on Safe, Secure, and Trustworthy AI

United States · 2023-10-30

President Biden's executive order established the most comprehensive US federal AI policy, directing over 50 actions across federal agencies. It requires developers of powerful AI systems to share safety test results with the government and establishes standards for AI safety and security.

Scope

Broad executive action directing federal agencies to establish AI safety standards, require reporting from companies developing powerful AI models, protect consumer rights, advance equity, and support workers affected by AI.

Impact on Legal Practice

Creates new compliance obligations for AI developers, particularly around safety testing and reporting. Lawyers must advise clients on evolving federal agency requirements and potential enforcement actions. Increases demand for AI governance and regulatory counsel.

Impact on Business Practice

Companies developing dual-use foundation models must report safety testing results to the government. Federal procurement standards create market pressure for responsible AI development. Agencies are directed to use AI for government efficiency while managing risks.

Impact on Common Law

While executive orders don't create binding law, they direct agency rulemaking that does. The order's directives on AI safety standards and testing requirements are shaping administrative law through agency regulations and guidance documents.

Positive Aspects

  • Comprehensive scope covering safety, security, equity, and consumer protection
  • Leverages Defense Production Act for safety reporting — provides real enforcement mechanism
  • Addresses AI impact on workers and advances equity considerations
  • Directs development of AI standards through NIST and other expert agencies

Concerns

  • Executive orders can be reversed by subsequent administrations, creating regulatory uncertainty
  • Relies heavily on agency implementation, which may be slow or inconsistent
  • Does not carry the force of legislation — limited staying power
  • May be seen as overreach by industry stakeholders and opposing political parties

Our Takes

Lawra (The Moderate)
This executive order was the US government's most serious attempt at AI governance to date. Its real value isn't in the specific directives — many of which may not survive political transitions — but in establishing that AI safety is a federal priority and normalizing government oversight of AI development.
Lawrena (The Skeptic)
An executive order is a promise written on paper that the next president can throw away. However sound the directives may be, we need actual legislation with the force of law. Relying on executive action for something this important is asking for regulatory whiplash.
Lawrelai (The Enthusiast)
What I like about this order is its balance — it acknowledges AI's benefits while addressing risks. It's flexible enough to evolve with technology. The key is implementation: if agencies work with industry rather than against it, this could be a model for responsive AI governance.
Carlos Miranda Levy (The Curator)
This executive order embodies the right philosophy — government setting the playing field rather than picking winners. The Defense Production Act invocation is clever but concerning: using emergency powers for AI governance sets a precedent. What I appreciate is the focus on standards through NIST rather than prescriptive rules. AI governance should enable innovation while maintaining accountability. The political vulnerability of executive orders is actually a feature, not a bug — it keeps regulation responsive to reality.

Overview

Executive Order 14110, signed on October 30, 2023, represents the most sweeping US federal action on AI governance. It directs over 50 specific actions across federal agencies, covering AI safety testing, equity, consumer protection, privacy, worker support, government use of AI, and international cooperation.

Key Provisions

The order’s most significant provision invokes the Defense Production Act to require companies developing foundation models that pose national security risks to notify the government and share safety testing results. It also directs NIST to develop AI safety standards, requires agencies to address algorithmic discrimination, and establishes guidelines for federal procurement of AI systems.

Current Status

Executive orders are subject to the political cycle. The order’s long-term impact depends on how thoroughly agencies implement its directives and whether subsequent administrations maintain, modify, or revoke its provisions. Several agencies have already begun rulemaking based on the order’s directives.

