On October 30, 2023, the White House released a sweeping executive order entitled Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “EO”), which sets out an ambitious plan to support the responsible development and use of artificial intelligence (“AI”). As addressed in our prior article, The Rise of Artificial Intelligence: Navigating Healthcare Regulatory Considerations, while existing laws applicable to healthcare also apply to the development and deployment of AI, to date there has been no overarching federal oversight to balance the potential of these tools against management of their risks. Given the recent surge of interest in, and expansion of, AI, we have seen efforts to implement an AI-specific regulatory scheme to provide more oversight (for example, with respect to healthcare, the Office of the National Coordinator for Health Information Technology’s (“ONC’s”) HTI-1 proposed rule included provisions outlining a series of standards that predictive decision support interventions, or DSIs, would need to satisfy to obtain ONC Health IT Certification).[1] This EO, however, takes these regulatory efforts to an entirely new level, imposing sweeping mandates for implementation across multiple federal agencies.

The EO reflects the Biden Administration’s (the “Administration’s”) view that there is “the highest urgency on governing the development and use of AI safely and responsibly,” calling for a “coordinated, Federal Government-wide approach” to do so. The EO sets out eight guiding principles and priorities applicable to AI across various sectors.

With respect to healthcare, the EO issues multiple specific directives to the U.S. Department of Health and Human Services (“HHS”) with associated deadlines. A summary of the key requirements and related timelines follows.
Within 90 days:

Within 180 days:

Within 365 days:

In addition, in an effort to advance responsible AI innovation by healthcare technology developers, HHS is directed to identify and prioritize grantmaking and other awards. To advance the development of AI systems that improve care furnished to veterans, the EO also requires the Secretary of Veterans Affairs to host two 3-month “AI Tech Sprint” competitions.

We are clearly in the “early days” of regulating AI in healthcare in the U.S. The early signs suggest that AI-enabled healthcare solutions may be vetted before they go to market, that resources will be dedicated to ongoing oversight, and that the Administration expects innovators to invest significant time and care in ensuring that such solutions are safe and otherwise developed in ways that manage risk. Industry stakeholders should take note that, through this EO, the Administration has established a clear expectation that AI solutions introduced in the healthcare sector be developed responsibly and with patient safety front of mind. Developers of healthcare technology and provider organizations exploring the use of AI-enabled solutions must think carefully about patient perspectives and consumer protection. In parallel, the EO also suggests that the Administration will invest in promoting AI-related innovation in healthcare, and that it recognizes the immense potential of AI as a force for advancement in healthcare delivery. As state medical boards and other regulators begin to consider their positions on the use of AI in patient care, the EO is a helpful reference point, making clear that safely and appropriately utilizing AI to advance healthcare delivery is a national priority.

Much like the development of AI itself, the regulatory landscape will continue to evolve at a rapid pace. Already, the Office of Management and Budget (“OMB”) has issued a draft policy for the use of AI by the federal government.
The OMB policy expands upon the EO’s prioritization of responsible government use of AI by, among other things, requiring each federal agency to: (a) designate a Chief AI Officer; and (b) follow minimum risk-mitigation practices when using “rights-impacting and safety-impacting” AI. In addition, last month, the FDA established a new digital health advisory committee whose charge will include providing recommendations on the use of AI. The FDA is currently soliciting nominations for the committee, which it plans to have operational in 2024.

Simultaneously with, but mostly separate from, the Administration’s efforts, AI has been discussed a great deal in Congress over the last several months, without much fruitful movement forward. With most activity generated in the Senate, Majority Leader Schumer (D-NY) has hosted a series of bipartisan AI Insight Forums to collect information to inform legislative drafting and has noted an opportunity to advance legislation before the end of the year. No legislative text has come out of those sessions at this point, but any resulting legislation would likely be more general, setting certain standards across industries. There are, however, efforts specific to regulating AI in the healthcare space in the Senate Health, Education, Labor and Pensions (HELP) Committee. Ranking Member Bill Cassidy (R-LA) released a white paper specific to the uses of AI in healthcare, with a look at opportunities to reduce administrative burden on providers. The paper included questions for stakeholders to provide input for possible draft legislation, and the Committee will hold its first hearing on the topic on November 8. Congressional efforts specific to healthcare regulation have tended to focus on which agency will regulate AI and what authorities are needed, as well as who is liable for AI decisions, particularly diagnosis and treatment decisions. This article showcases the many simultaneous moving pieces that must be considered as these efforts move forward.
HLB’s digital health practice and Government Relations and Public Policy Department will continue to monitor developments involving AI-enabled solutions impacting the American healthcare industry in the months to come.

[1] Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, 88 Fed. Reg. 23746 (Apr. 18, 2023).

For more information, please contact Amy Joseph, Jeremy Sherer, or Melania Jankowski in Boston, Stephen Phillips or Michael Shimada in San Francisco, Monica Massaro in Washington, D.C., or your regular HLB contact.

Sweeping Executive Order Sets the Stage for Federal Oversight of AI in Healthcare