March 27, 2026

White House Releases a National Policy Framework for Artificial Intelligence

A Look at Scope, Priorities and Policy Implications
Holland & Knight Alert
Sarah Starling Crossan | Miranda A. Franco | Marissa C. Serafino

Highlights

  • The White House released a National Policy Framework for Artificial Intelligence (AI) (Framework), outlining nonbinding legislative recommendations to inform congressional consideration of a unified federal approach to AI regulation.
  • The Framework prioritizes child safety, community protections, free speech, innovation, workforce readiness and targeted federal preemption, and cautions against vague standards, open‑ended liability and fragmented state regulation.
  • This Holland & Knight alert reviews the Framework's core policy positions and situates them alongside related congressional proposals, including U.S. Sen. Marsha Blackburn's updated TRUMP AMERICA AI Act.

The White House released a National Policy Framework for Artificial Intelligence (Framework) on March 20, 2026, outlining legislative recommendations intended to guide U.S. Congress as it considers federal artificial intelligence (AI) legislation. The Framework is not a binding document and does not, on its own, create new legal obligations or direct agencies to undertake specific regulatory actions.

The legislative recommendations reflect President Donald Trump's long-standing position that the growing patchwork of state AI laws is creating barriers to innovation, underscoring the need for a unified national standard, and that Congress should take targeted action in specific areas to mitigate potential individual and economic harms associated with the continued adoption of AI technologies. Last summer, the Trump Administration urged Congress to adopt a temporary federal "moratorium" preempting certain state AI laws, but Congress ultimately declined to pursue that approach.

Executive Order (EO) 14365: Removing Barriers to American Leadership in AI

The Trump Administration issued "Removing Barriers to American Leadership in Artificial Intelligence" (EO 14365) on December 11, 2025, which initiated a coordinated federal review of state-level AI laws and directed agencies to develop policy recommendations, standards and legislative proposals to inform a national approach.

Notably, EO 14365 required the U.S. Department of Commerce Secretary to submit, within 90 days, an evaluation identifying "onerous" state AI laws and recommending potential referrals to a newly established AI Litigation Task Force tasked with challenging unconstitutional or preempted state measures.

Though the Task Force was announced on January 9, 2026, within the required 30-day timeframe, the Commerce Department has not yet publicly released the evaluation, which was due by March 11, 2026. This delay introduces some uncertainty regarding the Trump Administration's near-term posture on state preemption enforcement, even as the Framework reinforces a clear preference for national uniformity. Further, the Framework's endorsement of federal preemption is particularly notable when viewed against the current state-led approach to AI regulation.

Though the Framework outlines a high-level policy direction, congressional efforts are beginning to operationalize these priorities in more detailed and prescriptive ways. On March 18, 2026, just two days prior to the Framework's release, U.S. Sen. Marsha Blackburn (R-Tenn.) released an updated discussion draft of The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry (TRUMP AMERICA AI Act), building on her original December 2025 proposal. The 291-page legislative draft reflects a more prescriptive governance approach, seeking to codify elements of the Trump Administration's AI‑related EOs, impose new requirements on AI developers and constrain states' ability to regulate AI systems.

Legislative Recommendations

Released shortly thereafter, the four-page Framework reflects a more discrete policy posture. Rather than attempting to address the full universe of AI‑related issues, it focuses on identifying priority areas for federal coordination and signaling where restraint and flexibility are warranted. This contrast underscores a challenge for Republicans to balance a light-touch, innovation-forward framework with increasing political pressure to address safety and consumer protection risks.

The Framework organizes its legislative recommendations across seven thematic areas:

  • protecting children and empowering parents
  • safeguarding and strengthening communities
  • respecting intellectual property (IP) and supporting creators
  • preventing censorship and protecting free speech
  • enabling innovation and U.S. AI dominance
  • educating an AI‑ready workforce
  • establishing a federal policy framework with targeted preemption of state AI laws

Across these areas, the Framework consistently emphasizes national uniformity, reliance on existing legal and regulatory structures, and avoidance of prescriptive or open‑ended approaches that could introduce compliance burdens without clear benefit. This approach has been embraced by many congressional Republicans as a starting point for legislative action and a signal toward a unified national strategy.

This approach differs from proposals that seek to construct a comprehensive AI governance architecture in a single step. Sen. Blackburn's TRUMP AMERICA AI Act, for example, spans 17 titles and addresses a wide range of issues, including:

  • developer duties of care
  • liability and enforcement regimes
  • labor reporting requirements
  • product liability
  • research infrastructure
  • advanced AI risk evaluation
  • content provenance

By contrast, the Framework narrows its focus to setting federal priorities, identifying areas of consensus and highlighting risks that may warrant congressional attention – leaving questions of statutory design, enforcement mechanisms and institutional structure to subsequent legislative deliberation.

Children's Privacy and Protections

The Framework's most detailed recommendations concern children, parents and youth protection. It urges Congress to empower parents with tools to manage children's privacy, screen time, content exposure and account controls. It recommends commercially reasonable and privacy‑protective age assurance for AI platforms and services likely to be accessed by minors, along with safeguards to reduce risks of sexual exploitation and self‑harm. It also affirms that existing child privacy protections apply to AI systems, including limits on data collection for training and targeted advertising. At the same time, the Framework cautions against ambiguous content standards and open‑ended liability that could drive excessive litigation and explicitly preserves states' authority to enforce generally applicable child protection laws.

Sen. Blackburn's legislation addresses many of these same concerns through enforceable mandates. The TRUMP AMERICA AI Act imposes a duty of care on AI chatbot developers, establishes extensive online safety requirements for children and creates criminal prohibitions under the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act. The comparison highlights the Framework's role as a moderating policy document that treats child safety as a central federal concern while emphasizing the importance of tailoring guardrails to avoid unintended consequences.

Safeguarding and Strengthening Communities

Beyond child protection, the Framework devotes significant attention to community and infrastructure impacts associated with AI deployment. It recommends that Congress codify the Trump Administration's ratepayer protection pledge, requiring technology companies to supply or pay for the electricity used by AI data centers. It also urges streamlined federal permitting to support on‑site or behind‑the‑meter power generation and improve grid reliability.

In addition, the Framework highlights the need to strengthen enforcement against AI‑enabled fraud and impersonation scams and to provide grants, tax incentives and technical assistance to support small-business AI adoption. For many of these concepts, legislative proposals already exist in Congress, including the Artificial Intelligence Scam Prevention Act, the Disrupt Explicit Forged Images And Non-Consensual Edits (DEFIANCE) Act, the AI Fraud Accountability Act and, most recently, the AI Data Center Moratorium Act.

IP Protections

Regarding IP, the Framework adopts a deliberately incremental approach. It states the Trump Administration's view that training AI models on copyrighted material does not violate copyright law, acknowledges that courts may reach different conclusions and supports allowing litigation to resolve fair use questions. Rather than recommending mandatory licensing requirements, it suggests that Congress consider voluntary licensing or collective rights mechanisms. Separately, it recommends a narrowly tailored federal framework to protect individuals from unauthorized AI‑generated replicas of voice or likeness, with clear First Amendment exceptions.

Free Speech and Content Moderation

The Framework's treatment of free speech and content moderation similarly reflects a focus on limiting unintended consequences. It identifies government coercion of platforms as a primary risk and recommends barring federal agencies from pressuring AI or technology providers to suppress lawful content. It also calls for mechanisms to allow individuals to seek redress when government actors attempt to influence platform outputs. Unlike more expansive proposals that would significantly restructure platform liability, the Framework emphasizes restraint and seeks to avoid incentives for over‑moderation.

AI Oversight

With respect to innovation, research infrastructure and oversight, the Framework recommends AI regulatory sandboxes, improved access to federal datasets in AI‑ready formats and reliance on existing federal agencies rather than creation of a new standalone AI regulator. In healthcare and other regulated sectors, this approach points to continued reliance on agencies such as the U.S. Food and Drug Administration, the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology and the Centers for Medicare & Medicaid Services. Sen. Blackburn's legislation goes further by proposing new institutions, testbeds, curated datasets and a National AI Research Resource, reinforcing the Framework's role as policy guidance rather than comprehensive implementation.

The Framework takes a similar approach to workforce and labor impacts, recommending integration of AI training into existing education and workforce programs, expanded research into task‑level workforce realignment, and enhanced support for land‑grant institutions' technical assistance and youth development efforts. The TRUMP AMERICA AI Act supplements these priorities with mandatory AI‑related job effects reporting and U.S. Department of Labor data collection.

National Security Concerns

On safety and national security, the Framework remains at an intentionally high level. It recommends that national security agencies develop the technical capacity needed to understand frontier AI systems and establish mitigation plans in consultation with developers. By contrast, Sen. Blackburn's legislation proposes a more formalized evaluation program for advanced AI systems, reflecting a broader governance approach.

Federal and State AI Governance

Finally, the Framework suggests clearly delineating the respective roles of the federal government and states in AI governance. It recommends a federal AI policy that avoids a fragmented state patchwork by preempting state AI laws that impose undue burdens while preserving state police powers, zoning authority and rules governing states' own use of AI. It treats AI development as inherently interstate and closely tied to national security and foreign policy considerations. Additionally, the Framework calls for limiting states' ability to regulate AI model development and restricting the imposition of liability on AI developers for unlawful conduct carried out by third parties using their systems, which could include deployers and consumers.

Despite growing alignment among Republicans, Democrats remain more skeptical of the Framework and represent a critical bloc for any bipartisan legislative pathway. Members such as Reps. Yvette Clarke (D-N.Y.) and Don Beyer (D-Va.), along with Sen. Brian Schatz (D-Hawaii), have raised concerns regarding federal preemption, accountability and oversight, and Senate Committee on Commerce Ranking Member Maria Cantwell (D-Wash.) continues to advocate for a more structured approach grounded in standards, testing and public infrastructure investment.

Legislative Action

These concerns are already translating into legislative action. On March 20, 2026, Rep. Beyer, alongside Reps. Doris Matsui (D-Calif.), Ted Lieu (D-Calif.), Sara Jacobs (D-Calif.) and April McClain Delaney (D-Md.), introduced the Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards (GUARDRAILS) Act, which would repeal the Trump Administration's EO establishing a national AI policy framework and effectively block efforts to impose a moratorium on state-level AI regulation. Sen. Schatz is expected to introduce companion legislation in the Senate, further underscoring Democratic opposition to broad federal preemption.

Substantively, Democratic proposals are coalescing around limiting broad federal preemption, strengthening oversight mechanisms, addressing workforce disruption and establishing safeguards against harmful or deceptive AI deployment. Political dynamics further suggest that comprehensive bipartisan legislation may remain challenging in the near term, particularly given broader electoral considerations.

What's Next

Legislative progress is more likely to occur in discrete areas such as child safety, transparency, fraud prevention and government use of AI rather than through comprehensive statutory reform. As a result, stakeholders should anticipate continued incremental federal action alongside sustained state-level activity, reinforcing a hybrid regulatory environment.

In healthcare in particular, AI policy is likely to continue evolving through agency action and congressional oversight, including targeted hearings on data use, prior authorization, rural access and administrative efficiency, rather than through comprehensive, healthcare‑specific AI legislation. For stakeholders, this environment underscores the importance of a dual-track engagement strategy – shaping emerging federal standards while remaining responsive to state-level developments and oversight activity.

Our HK Health AI Navigator tracks and interprets AI-related laws and policies impacting the healthcare industry.

Holland & Knight's Artificial Intelligence Policy & Regulation Team continues to follow developments and will share additional analysis. If you have any questions, please contact the authors.


Information contained in this alert is for the general education and knowledge of our readers. It is not designed to be, and should not be used as, the sole source of information when analyzing and resolving a legal problem, and it should not be substituted for legal advice, which relies on a specific factual analysis. Moreover, the laws of each jurisdiction are different and are constantly changing. This information is not intended to create, and receipt of it does not constitute, an attorney-client relationship. If you have specific questions regarding a particular fact situation, we urge you to consult the authors of this publication, your Holland & Knight representative or other competent legal counsel.

