
Federal Regulation of AI in Health Care


On Wednesday, September 4, the House Energy and Commerce Health Subcommittee held a hearing titled “Examining Opportunities to Advance American Health Care through the Use of Artificial Intelligence Technologies.”

This hearing offered a first look at how legislators may address AI health policy in the 119th Congress. Its themes stood in stark contrast to those of the health care AI hearings held during the previous Congress, which included calls for federal guardrails that would create detailed standards across the AI sector, advocacy for equitable health outcomes when AI is used, predictable payment approaches, and robust oversight of product development.

With the new administration favoring a hands-off approach to AI regulation while actively using AI tools within government agencies, the bipartisan focus on guardrails that was prominent over the past two years has largely faded. The Trump Administration has taken a decisive deregulatory stance on artificial intelligence, aiming to position the United States as the global leader in AI innovation. In January 2025, President Trump signed an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” which revoked prior regulations seen as obstacles to AI development. The executive order laid the foundation for America’s AI Action Plan, released in July 2025, which promotes a hands-off federal stance and prioritizes private-sector-led growth.

The shift away from the bipartisan focus on regulating AI development was evident not only in the witnesses who sat before the Committee, who included health care providers using AI, researchers, and a representative from the American Psychological Association, but also in lines of questioning that reflected a more deregulatory and partisan approach. Where there was once a Senate Bipartisan Working Group on AI led by then Majority Leader Chuck Schumer and a House Bipartisan AI Task Force actively seeking policy solutions, there is now debate over whether Congress should undertake legislative guardrails related to AI at all.

During the hearing, many Republican members noted their support for the Administration’s approach to AI and focused on positive use cases. Democrats challenged Republicans on the contradiction of promoting innovation while cutting millions from the health care sector, and specifically Medicaid, in the One Big Beautiful Bill Act, leaving hospitals and health systems without the resources needed to adopt or implement new technologies, including AI. Democrats argued that innovation cannot thrive in a system where basic access to care is under threat and called for a more balanced approach that supports both technological advancement and health care equity.

Despite these contrasting views, it was apparent during the hearing that Members of Congress from both sides of the aisle shared the belief that the unique nature of health care uses of AI may require a regulatory approach that departs from broader efforts to regulate (or not regulate) AI more generally. The Committee and witnesses explored several health care-specific areas, offering a preview of where federal efforts to regulate AI may focus in the coming months.

Mental Health

Although no general consensus was established during the hearing, Members of Congress seemed to find some bipartisan common ground where patient safety and ethical concerns are most pressing. There is shared interest in addressing mental health applications, especially around liability when AI tools are used in diagnosis or treatment. The hearing raised major concerns following recent news and lawsuits involving chatbots that counseled suicide to individuals seeking mental health support. Members also raised concerns about AI tools that lead individuals to believe they are receiving help and counseling from health care providers, only to then deliver inappropriate advice.

Additionally, although not solely focused on mental health, there seems to be a growing consensus around protecting children’s use of AI in health care, recognizing the unique vulnerabilities of minors and the need for age-appropriate guardrails. These areas of agreement offer a foundation for future legislation, even as broader regulatory frameworks remain contested. The Energy and Commerce Committee had previously worked on the Kids Online Safety Act, which the Senate passed overwhelmingly in July 2024 but which stalled in committee in the House.

Data Privacy

During the recent hearing, lawmakers raised significant concerns about data privacy in the context of artificial intelligence in health care. A central theme was the lack of comprehensive federal privacy legislation to protect Americans’ health data as AI tools become more prevalent. Witnesses emphasized that many health care organizations and insurers do little vetting or monitoring of AI tools before deployment, leaving patient data vulnerable to misuse or breaches. In the last Congress, there was bipartisan work on the American Privacy Rights Act, which has not yet been reintroduced in the current session. Data privacy has proven to be a challenging area to legislate despite interest in doing so, and data privacy legislation may not come to fruition without the pressure of further data breaches affecting the health care system.

Denials of Care

Lawmakers also agreed on the need to curb denials of care driven by opaque or biased algorithms, ensuring that AI does not become a barrier to access. The Improving Seniors’ Timely Access to Care Act (H.R. 3514/S. 1816), which addresses payer prior authorization practices, includes provisions on payer transparency regarding denials determined through the use of AI. The bill would also direct the Centers for Medicare & Medicaid Services (CMS) to review the impact of AI-driven prior authorization practices on patient access and on health care disparities for rural and low-income beneficiaries. The legislation has broad bipartisan support and has been discussed for inclusion in larger health care legislative packages for some time.

Payment

The hearing brought to light the increased costs associated with AI utilization absent any mechanism to pay for these tools, leaving providers to determine whether the efficiencies gained are worth the cost to purchase, train, and maintain them. Without clear reimbursement structures, health care providers may be reluctant to invest in or deploy AI technologies, especially those that support diagnostics or administrative efficiency. The costs have been widely acknowledged, but given the current financial pressures on Congress and the health care industry, new funding for reimbursement is unlikely at this time, leaving a fragmented landscape that could stall progress despite growing interest and need.

Staying Ahead in Uncertain Times

As the debate over the proper roles of the federal and state governments continues, and as states diverge in their approaches to regulating health care AI and explore regulatory sandboxes, the near-term regulatory environment will likely consist of a patchwork of AI laws. Complying with that patchwork will increase costs and require companies to establish and maintain robust AI governance and compliance programs.

Although the Department of Health and Human Services (HHS) may not formally propose regulations on the use of AI, there are initiatives that could affect its use, including efforts to enhance interoperability and transparency. This summer, the White House and HHS announced the establishment of a new Health Tech Ecosystem, based on voluntary commitments from over 60 data networks, health systems and providers, app developers, and payers to adopt and align on a new interoperability framework, with the goal of making medical information easily accessible and shareable across newly created CMS Aligned Networks. The agency subsequently announced the launch of a major enforcement initiative, adding resources to stop health data blocking and ensure that patients and their providers have easier access to their electronic health information.

Additionally, the Federal Trade Commission (FTC) has launched an inquiry into seven companies that offer AI-powered chatbots, requesting details on how they assess and manage the risks these technologies may pose to children and teens. The targets of the inquiry include companies with some of the most popular consumer-facing applications: Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap, and X. The FTC’s study, which is fact-finding in nature and not an enforcement action, focuses on understanding how these firms test, monitor, and mitigate potential harms, especially as chatbots increasingly simulate human-like interactions. The study aims to shed light on industry practices and inform future policy as the FTC seeks to balance consumer protection with support for innovation. Chairman Brett Guthrie (R-KY-02) and Ranking Member Frank Pallone, Jr. (D-NJ-06) have already issued a statement applauding the effort and indicating bipartisan interest in legislation on this issue.

As federal agencies look to work within their individual authority, some states will embrace their mandate to police harmful conduct, prevent patient harm, and continue efforts to regulate the use of AI. State efforts will likely include additional bills addressing notice to and consent from patients regarding how AI is being used, such as AB 3030, which became law in California in 2024, and HB 149, which passed the Texas legislature this year. States may also move to prohibit certain uses of AI in patient-facing encounters, such as Illinois’ recent prohibition on chatbots that provide mental and behavioral health advice. However, as exemplified by the recent postponement of Colorado’s AI Act (“CAIA”), one of the nation’s first comprehensive laws regulating artificial intelligence systems used in high-stakes decisions such as health care, broader efforts to regulate AI may face headwinds from those concerned about uncertain liability, compliance costs, and related potential negative impacts on innovation.

While most states focus on guardrails, it is worth noting that Utah has taken a different approach, creating a regulatory framework that promotes innovation and seeks to use practical experience as a guide for tailored regulation. The Utah Office of Artificial Intelligence Policy, established as part of 2024’s Utah AI Policy Act, is authorized to provide developers two years of “regulatory mitigation” to develop pilot AI programs and receive feedback from key stakeholders, including industry experts, academics, regulators, and community members. The mitigation period provides exemptions from applicable state regulations and laws, cure periods to address compliance issues, and limitations on civil penalties. The enactment of HB 452, which prohibits certain uses of personal information by a mental health chatbot and requires certain disclosures to users, was influenced by a project originating out of Utah.

When considering the adoption of AI in health care, stakeholders should take a proactive and informed approach. Heading into 2026, it will be essential to monitor state activity and prepare for possible federal action addressing health care-specific use cases as well as initiatives focused on AI generally. Concurrently, stakeholders should become familiar with the guidelines and best practices released by standard-setting organizations and provider trade associations in the absence of clear and consistent laws and regulations. Health care providers must evaluate their own liability risks and ethical obligations, especially when AI tools are involved in clinical decision-making or patient interactions. Payers should prepare for evolving requirements and expectations, including how AI-driven services may affect reimbursement models and coverage policies. Additionally, all stakeholders need a baseline of technical literacy to understand what information an AI tool uses, how it functions, and what its limitations are. Understanding how AI tools function and affect clinical and business processes is crucial for making responsible decisions, maintaining compliance, and building trust with patients and the public.

Professionals

Eric M. Fish
Partner
Washington, D.C.
Monica Massaro
Director, Government Relations & Public Policy
Washington, D.C.

For more information, please contact Eric Fish, Stephen Phillips, Andrea Frey, Monica Massaro, or your regular Hooper, Lundy & Bookman contact.

© 2025 Hooper Lundy & Bookman PC
