HHS Releases Strategy Positioning Artificial Intelligence at the Core of Health Innovation
The U.S. Department of Health and Human Services (HHS or Department) on Dec. 4, 2025, released its 21-page artificial intelligence (AI) strategy as a continuation of its nearly year-long AI effort and a follow-up to several directives, including the AI Action Plan, Executive Order 14179 ("Removing Barriers to American Leadership in AI") and Office of Management and Budget (OMB) memoranda M-25-21 and M-25-22. The strategy represents the next phase of HHS' push to make AI an integral part of the federal workforce's toolkit, integrating AI across internal operations, scientific research and public health programs. It fulfills HHS' commitment to leverage cutting-edge technologies to enhance efficiency, foster American innovation and improve patient outcomes in alignment with the Trump Administration's AI policy goals.
Led by HHS Acting Chief AI Officer (ACIO) Clark Minor, the plan puts a finer point on the Trump Administration's vision for AI by expanding its use throughout the Department in a responsible, mission-driven manner. In Minor's words, the strategy is about "harnessing AI to empower our workforce and drive innovation across the Department." The HHS AI strategy is built on five core pillars that together provide a comprehensive road map for AI integration: 1) ensure governance and risk management for public trust, 2) design infrastructure and platforms for user needs, 3) promote workforce development and burden reduction for efficiency, 4) foster health research and reproducibility through gold-standard science, and 5) enable care and public health delivery modernization for better outcomes. To keep pace with rapid technological advances, these pillars will be revisited and updated as needed to maximize AI's impact across HHS.
It's worth noting how this HHS-wide plan dovetails with parallel AI initiatives in its sub-agencies, particularly the U.S. Food and Drug Administration (FDA). Just days before HHS rolled out this strategy, the FDA made headlines with a major AI deployment of its own. On Dec. 1, 2025, the FDA announced it had launched a secure, agency-wide "agentic AI" platform available to all FDA employees. The platform allows staff to use AI agents for complex, multistep tasks (for example, managing regulatory meeting schedules, assisting in pre-market product reviews or automating parts of post-market surveillance and inspections). Importantly, FDA's announcement underscored that the platform has built-in human oversight and that its use is optional for employees. The HHS strategy explicitly states that its "OneHHS" approach includes every division, naming the FDA alongside the Centers for Disease Control and Prevention (CDC), Centers for Medicare & Medicaid Services (CMS), National Institutes of Health (NIH) and others. The "OneHHS" approach includes sharing AI code and models across divisions and releasing them publicly as open source where legally permissible. FDA's agentic AI deployment is likely just the first of many concrete steps within HHS agencies as the strategy's guidance takes hold.
The five core pillars (and their focus areas) are outlined below:
Governance and Risk Management
- ensure robust oversight and risk controls
- establish structures and policies for AI governance to maintain public trust
- emphasize ethics, transparency, security and adherence to civil rights and privacy laws in all AI uses
- the strategy promises annual public reporting of AI use cases and risk assessments
- manage risks and set clear rules so AI is trustworthy and accountable in HHS programs
Infrastructure and Platform Design
- build modern, unified AI platforms
- design a "OneHHS" AI infrastructure that all divisions can use
- invest in cloud and data platforms, scalable tools and cybersecurity measures
- enable AI solutions to be developed, shared and deployed efficiently across HHS, tailored to user needs, while protecting sensitive data
Workforce Development and Burden Reduction
- empower staff with skills and tools; automate rote tasks
- train and recruit personnel so that HHS has the talent to develop and use AI
- the plan includes creation of new AI-specific roles (data scientists, machine learning (ML) engineers, project managers) and an AI Talent Lead to coordinate hiring and training
- deploy AI to reduce administrative burdens
- upskill employees and introduce AI assistants to improve productivity (i.e., enabling staff to spend less time on tedious tasks and more on high-value work)
Research and Reproducibility
- advance science with reliable AI
- use AI to accelerate biomedical research and data analysis
- uphold gold-standard scientific rigor
- ensure AI-driven research produces reproducible, validated results that can be trusted by the scientific community
Modernization of Care and Public Health Delivery
- improve health services through AI
- apply AI in clinical care, public health and program delivery to drive better outcomes
- use AI for decision support in clinics and public health surveillance
- modernize how HHS delivers services using AI: improving accuracy, accessibility and impact of healthcare and human services programs
Together, these pillars outline a plan to embed AI into HHS' culture and operations. For the first time in HHS history, the "OneHHS" approach is emphasized: All divisions, including major agencies such as CDC, CMS, FDA and NIH, are invited to collaborate in building a robust, department-wide AI ecosystem. In practice, this means sharing data, code and best practices across the Department so that an AI solution developed by one agency can be leveraged by others.
The strategy's initial focus is on improving internal efficiency and decision-making within HHS. At the same time, the initiative sends a clear signal to industry: Show us what you can do. It lays the foundation for external partnerships and highlights that this groundwork will enable future collaboration with private-sector innovators to amplify AI's benefits in health and human services delivery.
Key Directives and Actions for HHS Divisions
To implement these pillars, the AI Strategy outlines concrete directives for HHS departments and offices. Each division of HHS is expected to take specific actions in governance, development and oversight of AI, ensuring a coordinated and accountable rollout. Some of the major directives and department-level actions include:
- Establishing Robust Governance. HHS has formed a high-level AI Governance Board, led by Deputy Secretary Jim O'Neill, to oversee AI activities across the Department. This board, comprising leaders from across HHS (information technology (IT), cybersecurity, data, privacy, etc.), will meet regularly to guide major AI decisions and policies. It works in tandem with a cross-division AI Community of Practice, a working group of senior staff from each operating division, to drive implementation from the ground up. Together, these bodies ensure that top-down governance (aligning AI efforts with executive directives and ethical standards) and bottom-up innovation (surfacing use cases and needs from each agency) are synchronized. HHS is also updating internal policies to support AI. The Office of the Chief Information Officer is reviewing IT policies to streamline any that unnecessarily impede AI adoption. For example, the strategy calls for accelerating the security Authority to Operate (ATO) process for AI systems so new tools can be deployed faster while still complying with National Institute of Standards and Technology (NIST)-aligned cybersecurity controls. The goal is to remove outdated bureaucratic barriers and instill an agile, innovation-friendly governance structure that still upholds safety and ethics.
- Cataloguing AI Use Cases and "OneHHS" Sharing. To track progress and encourage reuse, HHS will maintain an enterprise AI use case inventory. In fiscal year (FY) 2024, HHS had 271 active or planned AI use cases across its divisions. This number is expected to grow significantly (roughly a 70 percent increase in new use cases for FY 2025) as the inventory is updated. Such growth underscores the rapid expansion of AI experimentation within HHS. To manage this, the strategy mandates development of standard operating procedures for cataloguing AI projects and a common taxonomy to categorize them. Each HHS division must contribute to this inventory annually, reporting what AI applications they are piloting or deploying. Importantly, HHS is embracing a culture of open sharing: Divisions are expected to proactively share any custom-developed AI code or models with the rest of the Department (in line with OMB policy). Where legally permissible, HHS will also release such code publicly as open source. This means an AI tool created to, say, analyze public health data at CDC could be reused by CMS for fraud detection or by FDA for inspection analytics without each agency reinventing the wheel. By taking a "OneHHS" approach to AI development, HHS aims to reduce duplication, accelerate innovation through reuse and ensure that successful solutions are scaled department-wide rather than confined to one program.
- Managing Risks and Ensuring Compliance. In line with OMB's M-25-21 guidance and to foster public trust, HHS is instituting strict risk management and compliance checks for AI, especially for high-impact uses. Every division must identify which of its AI systems might be considered high-impact (i.e., those that could significantly affect health outcomes, rights or sensitive data). For any such system, divisions are required to implement minimum risk management practices (covering elements such as bias mitigation, outcome monitoring, security and human oversight) by April 3, 2026. This date aligns with OMB's timeline (one year from the memos) for agencies to have risk controls in place. If an AI tool cannot meet the required safeguards by the deadline, it must be stopped or phased out until compliance is achieved. HHS' ACIO will work closely with each division on this mandate, with the authority to grant case-by-case waivers or demand pauses in use. For example, if FDA is using an AI system for drug review that lacks adequate bias testing, it will need to strengthen those controls or halt the system. This "governance with teeth" approach ensures no AI deployment undermines patient safety, civil rights or data privacy. HHS currently bases its internal AI risk management guidance on the NIST AI Risk Management Framework. The Department monitors updates to this framework and evolving best practices and is developing its own risk management approach through internal assessments, especially for potential high-impact AI use cases. HHS is effectively aiming to strike a balance between innovation and responsible use. Additionally, the strategy emphasizes ongoing adherence to established laws and frameworks such as the Federal Information Security Modernization Act and NIST guidelines.
By weaving these requirements into the process (often via HHS' existing ATO security review), the Department will continuously monitor AI projects for compliance even after deployment, not just at a single checkpoint.
- Investing in Workforce and Talent. Acknowledging that people are as crucial as technology, the strategy places heavy emphasis on workforce development. HHS wants to cultivate an "AI-ready" workforce across all its agencies. Concretely, this involves new training and hiring initiatives. HHS will create new roles or positions for AI specialists (i.e., data scientists, ML engineers and AI project managers) and recruit aggressively to fill them. At the same time, current employees will receive expanded AI training programs. Government-wide AI training (offered through OMB and the U.S. General Services Administration) is being complemented by bespoke training within HHS divisions to meet specific needs. From basic AI literacy (such as learning to interact with AI tools or understanding their outputs) to advanced skills (such as developing AI models or implementing AI in program workflows), the goal is to upskill staff at all levels. By doing so, HHS hopes to equip employees to identify AI opportunities in their work and collaborate effectively with technical experts. Moreover, the strategy encourages a culture of continuous learning and collaboration: It mentions creating platforms for knowledge sharing such as internal AI forums, webinars, "lunch and learn" sessions and even exploring talent exchange programs where staff members might rotate through a stint in another agency or an outside organization to broaden their AI experience.
- Measuring Impact and Ensuring Transparency. To keep all these efforts on track, HHS is embedding metrics and accountability into the strategy. Each division will incorporate AI initiatives into their annual performance plans, meaning leadership will be evaluating progress on AI just as they do for other strategic goals. Key performance indicators might include, for instance, how many processes have been improved by AI, cost or time savings achieved, percentage of staff trained in AI or improvements in health outcomes attributable to AI projects. The AI Governance Board will regularly review these metrics to identify what's working and where adjustments are needed. Importantly, HHS commits to transparency in reporting AI activities. Per the strategy, the Department will publicly share an updated AI use case inventory and publish evaluations or risk assessments of major AI systems. This public-facing accountability is meant to build trust: Both Congress and the general public should be able to see that HHS is using AI prudently and effectively. The strategy's emphasis on transparency also aligns with the fostering public trust pillar. By openly communicating the benefits and safeguards of HHS' AI work, the Department aims to reassure the public that AI will be used to help deliver services (better, faster, more cost-effectively), not to harm or make unchecked decisions.
Conclusion
Ultimately, the strategy document signals a decisive shift in how HHS approaches AI: It aims to move the Department away from a fragmented collection of isolated pilot projects and toward a coordinated, department-wide AI capability. By aligning with the Trump Administration's earlier AI directives and laying out clear responsibilities, HHS sets an innovation-forward AI agenda for the coming years. The strategy's core message is that AI will be harnessed to augment the workforce, not replace it, improving how HHS employees do their jobs. The plan is ambitious in scope (topically ranging from IT infrastructure to ethics) and implementation timeline but also grounded in pragmatic steps.
The plan's accelerated timeline gives industry a unique opportunity to move quickly. While the strategy emphasizes risk assessment and governance, its actions suggest a parallel approach: advancing risk assessment and deployment simultaneously, rather than completing risk evaluation before initiating data collection and implementation.
This strategy positions AI as a central driver in improving the nation's health and well-being while establishing HHS as a leader in deploying this technology rapidly and with a clear focus on patient outcomes.