November 6, 2023

New Artificial Intelligence Executive Order Contains Numerous Healthcare Implications

Holland & Knight
Robert S. Hill | Isaac F. Fuhrman | Miranda A. Franco

Highlights

  • President Joe Biden on Oct. 30, 2023, signed an executive order (EO) to establish the first set of standards for using artificial intelligence (AI) in healthcare and other industries.
  • The EO seeks to balance managing the potential risks of AI with encouraging innovation that can benefit consumers.
  • Execution of the EO's many directives depends heavily on the agencies and companies that have been called to assist with their development and rollout.

President Joe Biden on Oct. 30, 2023, signed a sweeping executive order (EO) and invoked the Defense Production Act to establish the first set of standards for using artificial intelligence (AI) in healthcare and other industries. The rise of the internet dramatically changed healthcare, and AI is poised to do the same. The rapid expansion of AI across healthcare holds the promise of dramatically altering diagnosis and treatment, research, risk assessment, drug development and even payment systems. This Holland & Knight alert examines AI's potential for the healthcare industry and timelines for establishing government oversight.

Wide Scope

AI is still in its relative infancy, but multiple applications are already poised to disrupt healthcare's status quo. These include patient wearables that generate real-time biometric data for advanced analysis and AI-powered techniques that predict the three-dimensional shape of protein molecules from amino acid sequence data, a challenge widely known as the "protein folding problem." Approaches to AI are shaped by emerging legislation, regulatory oversight, civil liability law and industry standards. Accordingly, the need for a more unified approach to AI governance provided the impetus for the Biden Administration to set forth principles to guide federal agencies in advancing, using and overseeing AI.

Notably, the EO uses the definition of "artificial intelligence," found at 15 U.S.C. 9401(3): "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments." Therefore, the scope of the EO is not limited to generative AI or even machine learning technologies more generally. Instead, the EO potentially impacts any machine-based system that makes predictions, recommendations or decisions.
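To illustrate how broadly that statutory language could reach, consider the following minimal, hypothetical sketch. The scenario, function names and thresholds are illustrative only and are not drawn from the EO or any actual product; the point is simply that a system with no machine learning at all can still be a "machine-based system" that "makes recommendations" for a human-defined objective.

```python
# Hypothetical, illustrative example only -- not from the EO or any real product.
# A trivial rule-based triage helper: no machine learning and no generative AI,
# yet it is a machine-based system that makes a "recommendation" in service of
# a human-defined objective (flagging patients for follow-up). On a plain
# reading, such a tool is not obviously outside the 15 U.S.C. 9401(3) definition.

from dataclasses import dataclass


@dataclass
class Vitals:
    heart_rate: int   # beats per minute
    systolic_bp: int  # mm Hg


def follow_up_recommendation(v: Vitals) -> str:
    """Return a recommendation based on fixed, human-written thresholds."""
    if v.heart_rate > 120 or v.systolic_bp < 90:
        return "recommend clinician review within 1 hour"
    return "routine monitoring"


if __name__ == "__main__":
    print(follow_up_recommendation(Vitals(heart_rate=130, systolic_bp=110)))
```

Whether agencies will, in practice, treat such simple decision-support logic as "AI" remains to be seen, but the text of the definition underscores how far beyond generative AI the EO's scope could extend.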

The EO and HHS

Generally, the EO seeks to strike a balance between controlling the risks of AI and encouraging innovation that may benefit consumers. To ensure the safe and responsible use of AI in the healthcare industry, the EO advances healthcare-specific directives for the U.S. Department of Health and Human Services (HHS) over the next year.

Within 90 Days

Establish an HHS AI Task Force (Section 8.b.i.). The HHS secretary is directed to work with the U.S. Secretary of Defense and Secretary of Veterans Affairs to establish an HHS AI Task Force and, within one year, develop a strategic plan that includes policies and frameworks, possibly including regulatory action, on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector (including research and discovery, drug and device safety, health care delivery and financing, and public health).

Within 180 Days

Issue a Strategy on Whether AI Technologies in the Health and Human Services Sector Maintain Appropriate Levels of Quality (Section 8.b.ii.). The HHS secretary is required to develop a strategy, in consultation with relevant agencies, to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality. This work includes developing AI assurance policy – to evaluate essential aspects of AI-enabled healthcare tools' performance – and infrastructure needs for enabling pre-market assessment and post-market oversight of AI-enabled healthcare technology algorithmic system performance against real-world data.

Ensure that Healthcare Providers Who Receive Federal Funding Comply with Nondiscrimination Requirements When Utilizing AI Technology (Section 8.b.iii.). The HHS secretary, in consultation with relevant agencies that the HHS secretary deems appropriate, is required to consider appropriate actions to advance the prompt understanding of, and compliance with, federal nondiscrimination laws by health and human services providers that receive federal financial assistance, as well as how those laws relate to AI.

Within 365 Days

Establish an AI Safety Program (Section 8.b.iv.). The HHS secretary, in consultation with the Secretary of Defense and Secretary of Veterans Affairs, is required to establish an AI safety program that, in partnership with voluntary federally listed patient safety organizations, establishes a framework for approaches for identifying and capturing clinical errors resulting from AI deployed in healthcare settings, among other things.

Develop a Strategy for Regulating the Use of AI or AI-Enabled Tools in the Drug Development Process (Section 8.b.v.). The HHS secretary is required to develop a strategy to define the objectives, goals and principles required for appropriate regulation throughout each phase of the drug development process, and to identify areas where future rulemaking authority may be necessary, as well as the existing budget and resources for new public-private partnerships needed for such a regulatory system.

The EO and the NSF

The EO further directs the National Science Foundation (NSF) to take certain steps to promote innovation in the AI space, including the following.

Within 45 Days

Coordinate the Launch of the National AI Research Resource (NAIRR) (Section 5.2.a.i). Heads of agencies identified by the NSF Director to coordinate the launch of the NAIRR pilot program, consistent with past recommendations of the NAIRR Task Force, shall submit to the NSF Director a report identifying the agency resources that could be developed and integrated into such a pilot program.

Within 90 Days

Launch the NAIRR (Section 5.2.a.i). The program shall pursue the infrastructure, governance mechanisms and user interfaces to pilot an initial integration of distributed computational, data, model and training resources to be made available to the research community in support of AI-related research and development.

Within 120 Days

Enhance Existing Training Programs (Section 5.2.b). With the Secretary of the U.S. Department of Energy, establish a pilot program to enhance existing successful training programs for scientists, with the goal of training, by 2025, 500 new researchers capable of meeting the rising demand for AI talent.

Within 150 Days

Support Regional Innovation (Section 5.2.a.ii.). Fund and launch at least one NSF Regional Innovation Engine that prioritizes AI-related work, such as AI-related research or work addressing societal or workforce needs.

Within 540 Days

New National AI Research Institutes (Section 5.2.a.iii). Establish at least four new National AI Research Institutes, in addition to the 25 funded as of the date of the order.

The EO, the USPTO and the Library of Congress

The EO recognizes that 1) the level of protection that AI systems can receive under U.S. patent law and 2) the treatment of AI system output under intellectual property law will have a major impact on the development of this technology. The EO appears to implicitly recognize that many believe Alice Corp. Pty. Ltd. v. CLS Bank Intern., 573 U.S. 208 (2014), and subsequent patent eligibility jurisprudence have created potential headwinds and uncertainty for AI development. The EO further shows awareness of precedent that precludes copyright protection for content generated by AI. Therefore, to promote innovation and clarify issues related to AI and inventorship of patentable subject matter, the Under Secretary of Commerce for Intellectual Property and Director of the U.S. Patent and Trademark Office (USPTO Director) are directed to take certain actions. These action items will have an enormous effect on the development and ownership of key AI technologies and outputs in the healthcare context, among many other issues.

Within 120 Days

Publish Guidance to Patent Examiners and Applicants (Section 5.2.c.i). Publish guidance to USPTO patent examiners and applicants addressing inventorship and the use of AI, including generative AI, in the inventive process, including illustrative examples in which AI systems play different roles in inventive processes and how, in each example, inventorship issues ought to be analyzed.

Within 270 Days

Issue Additional and Updated Guidance (Section 5.2.c.ii). Issue additional guidance to USPTO patent examiners and applicants to address other considerations at the intersection of AI and IP, which could include, as the USPTO Director deems necessary, updated guidance on patent eligibility to address innovation in AI and critical and emerging technologies.

Issue Recommendations to the President on Potential Executive Actions (Section 5.2.c.iii). Within 270 days of the EO or 180 days after the U.S. Copyright Office of the Library of Congress publishes its forthcoming AI study, whichever comes later, consult with the Director of the U.S. Copyright Office and issue recommendations to the president on potential executive actions relating to copyright and AI. The recommendations shall address any copyright and related issues discussed in the U.S. Copyright Office's study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.

Conclusion 

The execution of many of the EO's directives depends heavily on the agencies and companies that have been called to action. As noted above – in a summary that captures only a subset of the EO's sweeping scope – many of these items involve the interaction of multiple stakeholders, adding to the complexity of what is about to unfold, especially for the healthcare industry. Accordingly, it is likely that the White House and agencies will begin operationalizing these directives, engaging the industry and seeking the input necessary to roll out AI-related systems and processes.

Already following the EO, the White House Office of Management and Budget (OMB) issued a draft memorandum to executive agency heads stating that each agency must designate a Chief AI Officer (CAIO) within 60 days. The designated CAIO will be tasked with "advancing responsible AI innovation" and "managing risks from the use of AI." The draft memo is subject to public comment until Dec. 5, 2023, through the OMB website.

At the same time, given that executive deference issues appear to be top of mind for the U.S. Supreme Court, litigation challenges might also play a significant role in determining how the EO is actually implemented. See, e.g., Loper Bright Enterprises v. Raimondo, 45 F.4th 359 (D.C. Cir. 2022), cert. granted, 216 L. Ed. 2d 414 (May 1, 2023) (No. 22-451). This is especially true for provisions of the EO that create new obligations, such as Section 4.2(a)'s requirement that certain AI activities be reported to the federal government.

The EO also follows increased scrutiny of AI by Congress. In the 118th Congress, at least 40 bills have been introduced that either focus on AI or contain AI-focused provisions. There have also been numerous congressional roundtables and hearings to help inform lawmakers of potential legislative and regulatory needs around the use of AI. Congress is considering whether current federal mechanisms are sufficient for AI oversight and policymaking, the role of the federal government in supporting AI research and development, the potential impact of AI technologies on the workforce, the disclosure of AI use, and the testing and validation of AI systems. For the Biden Administration, there is recognition that Congress may be slow to act and that significant federal agency action is needed in the interim.

 


Information contained in this alert is for the general education and knowledge of our readers. It is not designed to be, and should not be used as, the sole source of information when analyzing and resolving a legal problem, and it should not be substituted for legal advice, which relies on a specific factual analysis. Moreover, the laws of each jurisdiction are different and are constantly changing. This information is not intended to create, and receipt of it does not constitute, an attorney-client relationship. If you have specific questions regarding a particular fact situation, we urge you to consult the authors of this publication, your Holland & Knight representative or other competent legal counsel.

