
Sweeping Executive Order Sets the Stage for Federal Oversight of AI in Healthcare


On October 30, 2023, the White House released a sweeping executive order entitled Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “EO”), which sets out an ambitious plan to support the responsible development and use of artificial intelligence (“AI”).

As addressed in our prior article, The Rise of Artificial Intelligence: Navigating Healthcare Regulatory Considerations, existing laws applicable to healthcare also apply to the development and deployment of AI, but to date there has been no overarching federal oversight to balance the potential of these tools with management of their risks. Given the more recent rise of interest in, and expansion of, AI, we have seen efforts to implement an AI-specific regulatory scheme to provide more oversight (for example, with respect to healthcare, the HTI-1 proposed rule from the Office of the National Coordinator for Health Information Technology (“ONC”) includes provisions outlining a series of standards that predictive decision support interventions, or DSIs, would need to satisfy to obtain ONC Health IT Certification).[1] This EO, however, takes these regulatory efforts to an entirely new level, imposing sweeping mandates for implementation across multiple federal agencies.

The EO reflects the Biden Administration’s (the “Administration’s”) view that there is “the highest urgency on governing the development and use of AI safely and responsibly,” calling for a “coordinated, Federal Government-wide approach” to do so. The EO then sets out eight guiding principles and priorities applicable to AI across various sectors:

  1. Safety & Security: To ensure the safety and security of AI, the EO calls for (a) “robust, reliable, repeatable, and standardized” evaluation of AI; (b) appropriate policies and institutions; and (c) the development of effective labeling and source-identifying mechanisms that will enable the public to readily identify AI-generated content.
  2. Promoting Responsible Innovation, Competition, & Collaboration: The Administration will advance American leadership in AI by supporting programs which promote AI-related education, training, development, research, and capacity. The EO reveals the Administration’s intention to pursue antitrust efforts to attain its goal of a “fair, open, and competitive ecosystem and marketplace” for AI and related technologies.
  3. Supporting American Workers: The use of AI should protect worker and labor rights, positively augment human work, and allow everyone to benefit from its innovation.
  4. Advancing Equity & Civil Rights: Recognizing the potential of AI to deepen discrimination and bias, the Administration will seek to ensure that AI use complies with all federal laws. This will involve technical evaluation, oversight, community engagement, and “rigorous regulation.”
  5. Consumer Protection: The Administration will enforce consumer protection laws and enact additional safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI. The EO specifically mentions that consumer protection is important in healthcare “where mistakes by or misuse of AI could harm patients.”
  6. Privacy & Civil Liberties: To combat the exploitation and exposure of personal data, the Administration will ensure that the collection, use, and retention of data is lawful, secure, and mitigates privacy and confidentiality risks.
  7. Responsible Government Use of AI: The Administration will attract, retain, and develop AI professionals to help the federal government harness and govern AI. The federal government will train its workforce on the benefits, risks, and limitations of AI for their jobs, and will ensure that the federal government adopts, deploys, and uses “safe and rights-respecting” AI.
  8. Global Leadership: The Administration will engage with international allies and partners to develop a framework for the responsible and beneficial use of AI. Additionally, the Administration will seek to promote AI safety and security principles and actions around the world, including with the United States’ competitors.

With respect to healthcare, the EO issues multiple specific directives to the U.S. Department of Health and Human Services (“HHS”), with associated deadlines. A summary of the key requirements and related timelines follows.

Within 90 days:

  • Establish an HHS AI Task Force

Within 180 days:

  • Secretary of HHS to direct HHS components, as deemed appropriate, to develop a strategy to maintain appropriate levels of quality in AI-enabled healthcare solutions (e.g., safety, equity, privacy, security, transparency, and workplace efficiency), including development of an AI assurance policy and infrastructure needs for pre-market assessment and post-market oversight.
  • Secretary of HHS to consider appropriate actions to advance understanding of, and compliance with, federal nondiscrimination laws by providers that receive federal financial assistance.

Within 365 days:

  • HHS AI Task Force to develop a strategic plan that includes policies and frameworks – possibly including regulatory action, as appropriate – on the responsible deployment and use of AI and AI-enabled technologies, and to identify appropriate guidance and resources to promote such deployment.
  • Secretary of HHS, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to establish an AI safety program in partnership with voluntary federally listed Patient Safety Organizations.
  • Secretary of HHS to develop a strategy for regulating the use of AI or AI-enabled tools in drug-development processes (e.g., identifying areas of future rule making and resources needed for such a regulatory system).

In addition, in an effort to advance responsible AI innovation by healthcare technology developers, HHS is directed to identify and prioritize grantmaking and other awards. To advance the development of AI systems improving care furnished to veterans, the EO also requires the Secretary of Veterans Affairs to host two 3-month “AI Tech Sprint” competitions.

We are clearly in the “early days” of regulating AI in healthcare in the U.S. Early signs suggest that AI-enabled healthcare solutions may be vetted before they go to market, that dedicated resources will support ongoing oversight, and that the Administration expects innovators to invest significant time and care in ensuring that such solutions are safe and otherwise developed in ways that manage risk. Industry stakeholders should take note that, through this EO, the Administration has established a clear expectation that AI solutions introduced in the healthcare sector be developed responsibly and with patient safety front of mind. Developers of healthcare technology and provider organizations exploring the use of AI-enabled solutions must think carefully about patient perspectives and consumer protection.

In parallel, the EO also suggests that the Administration will invest in promoting innovation involving AI in healthcare, and that the Administration recognizes the immense potential of AI as a force of advancement in healthcare delivery. As state medical boards and other regulators begin to consider their positions on the use of AI in patient care, the EO is a helpful reference point which makes clear that safely and appropriately utilizing AI to advance healthcare delivery is a national priority.

Much like the development of AI itself, the regulatory landscape will continue to evolve at a rapid pace. Already, the Office of Management and Budget (“OMB”) has issued a draft policy for the use of AI by the federal government. The OMB policy expands upon the EO’s prioritization of responsible government use of AI by, among other things, requiring each federal agency to: (a) designate a Chief AI Officer; and (b) follow minimum risk-mitigation practices when using “rights-impacting and safety-impacting” AI. In addition, last month, the FDA established a new digital health advisory committee whose responsibilities will include providing recommendations on the use of AI. The FDA is currently soliciting nominations for the committee, which it plans to have operational in 2024.

Simultaneously with, but mostly separate from, the Administration’s efforts, AI has been discussed a great deal in Congress over the last several months without much fruitful movement forward. Most activity has been generated in the Senate, where Majority Leader Schumer (D-NY) has hosted a series of bipartisan AI Insight Forums to collect information to inform legislative drafting and has noted an opportunity to advance legislation before the end of the year. No legislative text has come out of those sessions at this point, but any resulting legislation would likely be more general in scope, setting certain standards across industries.

There are, however, efforts specific to regulating AI in healthcare underway in the Senate Health, Education, Labor and Pensions (HELP) Committee. Ranking Member Bill Cassidy (R-LA) released a white paper on the uses of AI in healthcare, with a look at opportunities to reduce administrative burden on providers. The paper included questions for stakeholders to provide input for possible draft legislation, and the Committee will hold its first hearing on the topic on November 8. Congressional efforts specific to healthcare regulation have tended to focus on which agency will regulate AI, what authorities are needed, and who is liable for AI-driven decisions, with a particular focus on diagnosis and treatment decisions.

This article showcases the many simultaneous moving pieces that must be considered as these efforts move forward. HLB’s digital health practice and Government Relations and Public Policy Department will continue to monitor developments involving AI-enabled solutions impacting the American healthcare industry in the months to come.

[1] Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, 88 Fed. Reg. 23746 (Apr. 18, 2023).


For more information, please contact Amy Joseph, Jeremy Sherer, or Melania Jankowski in Boston, Stephen Phillips or Michael Shimada in San Francisco, Monica Massaro in Washington, D.C., or your regular HLB contact.
