What to Watch as White House Moves to Federalize AI Regulation
Highlights
- The White House has issued an executive order (EO) aiming to preempt state laws regulating artificial intelligence (AI) on the grounds that such laws inhibit innovation by creating a complex compliance landscape and impermissibly regulating interstate commerce.
- The EO creates new federal mechanisms, including an AI Litigation Task Force and federal funding restrictions, to challenge and deter state laws deemed "onerous" or inconsistent with federal AI policy.
- The EO takes specific aim at the Colorado AI Act, scheduled to go into effect on June 30, 2026, claiming that the law will "force AI models to produce false results" through its obligation to protect against algorithmic discrimination.
The White House issued an executive order (EO) on "Ensuring a National Policy Framework for Artificial Intelligence" on Dec. 11, 2025, establishing a framework for the federal regulation of artificial intelligence (AI) and creating an AI Litigation Task Force to challenge state laws that are inconsistent with federal AI policy objectives. The EO further directs federal agencies to eliminate regulatory obstacles that the administration believes threaten U.S. competitiveness in the global AI race and to restrict funding for states with "onerous AI laws."
Formalization of Federal Policy Objective
The EO underscores AI development as critical to national and economic security, positioning the U.S. in direct competition with adversaries for global AI supremacy. Stating that a 50-state "patchwork" stifles AI innovation, the EO establishes a formal effort to create policy "to sustain and enhance the United States' global AI dominance through a minimally burdensome national policy framework for AI." The EO is the first formal signal that the president will take punitive action against states that maintain and enforce laws governing AI.
Key Provisions and Requirements
1. Federal Preemption and Litigation Task Force. The EO directs the U.S. Attorney General (AG) to establish an AI Litigation Task Force within 30 days. This task force is empowered to challenge state AI laws that conflict with the EO's policy, including on the grounds that such laws:
- unconstitutionally regulate interstate commerce
- are preempted by existing federal regulations
- are otherwise unlawful in the AG's judgment, including because they may require AI models to alter truthful outputs or compel developers or deployers to disclose or report information in a manner that would violate the First Amendment
2. Evaluation and Identification of State AI Laws. The U.S. Department of Commerce secretary must publish an evaluation of existing state AI laws within 90 days, identifying those that are "onerous" or conflict with the EO's policy. The evaluation will focus on laws that require AI models to alter truthful outputs, that compel developers or deployers to disclose or report information in a manner that would violate the First Amendment, or that otherwise conflict with the EO's policy of promoting AI innovation.
3. Federal Funding Restrictions. States with identified "onerous" AI laws may become ineligible for certain federal funds, including non-deployment funds under the Broadband Equity, Access and Deployment (BEAD) Program. Federal agencies are also directed to consider conditioning discretionary grants on states refraining from enforcing conflicting AI laws.
4. Federal Reporting and Disclosure Standards. The EO directs the Federal Communications Commission (FCC) to consider, within 90 days, adopting a federal reporting and disclosure standard for AI models that would preempt conflicting state requirements. Recall that the Trump Administration revoked the Biden Administration's 2023 AI EO, which created reporting requirements for models above a certain size that met other metrics.
5. Federal Trade Commission (FTC) Policy on Deceptive Conduct. The FTC is directed to issue a policy statement clarifying that state laws requiring AI models to alter truthful outputs may be preempted by the FTC Act's prohibition on deceptive acts or practices.
6. Legislative Recommendations. The EO calls for the Special Advisor for AI and Crypto, along with the Assistant to the President for Science and Technology, to jointly prepare a legislative recommendation to establish a uniform AI policy framework, with limited exceptions for "otherwise lawful" state laws on child safety, AI infrastructure, state government procurement and utilization, and other specified topics to be determined.
Implications
The short EO does not define AI or provide details as to what laws, beyond the Colorado AI Act, would be considered "onerous." For the time being, this likely adds more – not less – complication to the current landscape. For example, the impact on existing state privacy laws that do not use the phrase "algorithmic discrimination" but require businesses to offer opt-outs from certain automated decisions is unclear. All 18 of these state privacy laws are already in effect or will be as of Jan. 1, 2026, as will a new set of regulations in California that create extensive disclosure requirements around use of AI for certain decisions.
In addition, if the AI Litigation Task Force must pursue lengthy court challenges to state laws, businesses could face a prolonged period of uncertainty about whether they need to take steps to comply with those laws.
Also unclear is the impact on laws that require risk assessments where AI is used in the employment context (e.g., NYC Local Law 144 and recent amendments to Illinois' Human Rights Act (775 ILCS 5/2-101)), laws requiring disclosure of interactions with AI (such as California's Unlawful Use of Bots Act (Cal. Bus. and Prof. Code Section 17941(a))), laws pertaining to state-specific interests (such as Tennessee's Ensuring Likeness Voice and Image Security (ELVIS) Act (Tenn. Code Section 47-25-1103(a)) and New York's amendments to its State Fashion Workers Act (SB 9832)) or sector-specific laws, such as the regulation of AI in healthcare. As to the latter category, it is notable that several states have enacted legislation restricting health insurers' ability to use AI in payment decisions. Whether the AI Litigation Task Force will target these laws is unclear. However, the EO does create an opportunity for health insurers and other affected industries to lobby the administration to preempt what they view as burdensome state laws.
Next Steps to Watch
The EO follows two unsuccessful attempts by lawmakers to include federal preemption of state AI laws in major legislative vehicles: the One Big Beautiful Bill Act and National Defense Authorization Act. Both efforts faced bipartisan resistance at federal and state levels, signaling that litigation to enforce preemption under this EO may encounter significant legal and political challenges. States, especially those with AI laws, will likely attempt to block unilateral federal preemption, which could further complicate implementation of the EO and state AI laws.
For AI developers, deployers and safety groups, it is critical to closely monitor the administration's progress toward a federal legislative framework for AI. Currently, there is no leading bipartisan bill in Congress that comprehensively addresses the risks and regulatory gaps targeted by state laws – a key point of contention in the preemption debate. In the last Congress, Senate Majority Leader John Thune (R-S.D.) and Sen. Amy Klobuchar (D-Minn.) introduced the Artificial Intelligence Research, Innovation, and Accountability Act of 2024, a comprehensive proposal that may be reintroduced in the coming year.
The White House's forthcoming legislative framework will likely shape congressional negotiations, providing clarity on the administration's priorities and serving as a starting point for bipartisan discussions. Ultimately, any federal AI legislation will require support from both parties to become law. Stakeholders should engage with policymakers to ensure that the unique impacts of the technology are understood.
For additional questions or assistance with developing your advocacy or compliance strategy, please contact the authors or another member of Holland & Knight's Public Policy & Regulation Group or the Data Strategy, Security & Privacy Team.
Information contained in this alert is for the general education and knowledge of our readers. It is not designed to be, and should not be used as, the sole source of information when analyzing and resolving a legal problem, and it should not be substituted for legal advice, which relies on a specific factual analysis. Moreover, the laws of each jurisdiction are different and are constantly changing. This information is not intended to create, and receipt of it does not constitute, an attorney-client relationship. If you have specific questions regarding a particular fact situation, we urge you to consult the authors of this publication, your Holland & Knight representative or other competent legal counsel.