A short primer on President Trump’s Executive Order: “Ensuring a National Policy Framework for Artificial Intelligence”

On December 11, 2025, President Trump signed an Executive Order aimed at limiting state governments’ powers to regulate artificial intelligence.

Reduced regulation does not mean reduced risk

With a second Executive Order (EO), United States President Donald Trump has doubled down on AI deregulation, reinforcing a hands-off approach that prioritizes speed and innovation over guardrails. The EO, “Ensuring a National Policy Framework for Artificial Intelligence,” signed on December 11, 2025, aims to limit and preempt state-level regulation of artificial intelligence (AI). It lays the groundwork for the federal government to challenge state AI laws, to pursue new preemptive regulations and legislation, and to influence state action by withholding federal funds from states whose AI laws are inconsistent with the EO’s policy.

The stated purpose of this EO is to create a national policy framework. The EO alleges that:

  • A 50-state “patchwork of different regulatory regimes” creates compliance challenges and stifles innovation.
  • The anti-discrimination provisions in some state laws will “embed ideological bias within models.”
  • State AI laws violate the Commerce Clause when they “impermissibly regulate beyond State borders.”

The EO directs the following actions by Executive Branch agencies:

  1. The Attorney General shall create an AI Litigation Task Force within 30 days. This Task Force is directed to challenge state laws inconsistent with the EO policy.
  2. The Secretary of Commerce shall publish, within 90 days, an evaluation of existing state AI laws that conflict with the EO.
  3. The Secretary of Commerce shall issue a Policy Notice within 90 days making states with “onerous AI laws” ineligible for remaining federal grants. The EO links AI policy to eligibility under a major federal broadband funding program, the Broadband Equity, Access, and Deployment (BEAD) Program.
  4. All other executive departments and agencies must review their discretionary grant programs to determine whether grants may be conditioned on states not maintaining AI laws that conflict with the EO’s policy.
  5. The Federal Communications Commission (FCC) must start a proceeding within 90 days to determine whether to adopt a federal reporting and disclosure standard for AI models that preempts state laws.
  6. The Federal Trade Commission (FTC) must issue a policy statement within 90 days explaining when state laws are preempted because they “require alterations to truthful outputs of AI Models.”
  7. Presidential Advisors must prepare legislative recommendations to establish a uniform federal policy framework for AI.

The EO cites Colorado’s “algorithmic discrimination” law, which takes effect in June 2026, as a specific example of a state law at odds with the EO’s policy, arguing that the law could pressure models to produce “false results” to avoid differential treatment or impact.

Lastly, the EO includes carve-outs: it does not seek to preempt state AI laws relating to child safety, AI data center infrastructure, state government use of AI, and “other topics as determined.”

What does this mean? 

Executive Orders issued by the U.S. President do not themselves preempt existing state laws governing AI; all state and local laws remain enforceable. States will likely challenge the actions of the relevant executive agencies with arguments grounded in the 10th Amendment, Spending Clause coercion, and the limits of FCC and FTC administrative authority. Federal lawmakers and state governors also remain divided over this kind of sweeping federal preemption of AI laws. For example, in December 2025, congressional lawmakers killed a National Defense Authorization Act (NDAA) provision that would have stopped states from enforcing their own AI laws.

How should organizations navigate the uncertainty around AI while the federal government contemplates federal preemption?

Organizations should not confuse deregulation with reduced risk. AI-related risk or harm will not arrive labeled as an “AI claim” or “AI issue.” Instead, it will surface through familiar legal pathways such as product liability, IP infringement, professional liability, consumer protection, and privacy complaints. Well-established legal frameworks already exist under which regulators and private plaintiffs can impose sanctions or pursue enforcement actions:

| Legal framework | Example of potential AI-related liability |
| --- | --- |
| Product Liability (State Law) | Autonomous vehicle AI fails to detect pedestrians due to sensor or model design flaws, causing an accident. |
| UDAP State Laws | AI-driven pricing or sales tools unfairly target vulnerable consumers with higher prices or manipulative tactics. |
| FTC Act, Section 5 – Deceptive Practices | AI vendor misrepresents how consumer data is collected, used, or shared to train AI models. |
| Antitrust Laws – Exclusionary Conduct | Dominant platform uses AI ranking or recommendation algorithms to favor its own products and suppress rivals. |
| Antitrust Laws – Algorithmic Discrimination | AI systems systematically disadvantage certain businesses or market entrants, distorting competition. |
| Privacy & Data Protection Laws (State & Federal) | AI system collects or processes personal data without proper notice, consent, or lawful purpose. |
| Biometric Privacy Laws (e.g., facial recognition) | AI uses facial recognition or voice data without required consent, leading to statutory liability. |
| Data Security & Breach Laws | Poorly secured AI training data exposes personal information through data leaks or model outputs. |
| IP Infringement | AI model is trained on copyrighted works without authorization and produces substantially similar content. |
| Trade Secret Misappropriation | AI is trained on confidential business data improperly obtained from a former employee or partner. |

These examples are merely illustrative, but the point stands: the ongoing uncertainty surrounding AI regulation does not eliminate legal risk. Existing regulatory authorities and private plaintiffs may still pursue enforcement actions or litigation, and the absence of AI-specific rules does not shield companies from liability.