The AI Landscape: California and Other State Legislative Efforts to Regulate the Use of AI in Health Care
As the use of artificial intelligence (AI) in health care continues to accelerate, state legislatures are taking steps to regulate such uses, reflecting a growing recognition of both the potential benefits and the inherent risks of this technology. In particular, newly enacted laws in California aim to establish clear guidelines for AI applications in clinical settings, promoting transparency and fairness in patient interactions while safeguarding against biases that could affect care delivery. Similarly, laws enacted in Colorado and Utah seek to mitigate discrimination and harmful uses of AI, require disclosure when health care providers use generative AI (a type of AI system that uses the information it receives to generate new content, known as “GenAI”), and foster AI innovation. This article explores the key provisions of these new state laws, their implications for health care stakeholders, and the importance of navigating AI’s complex regulatory environment amid ongoing federal policy discussions surrounding AI in health care.
California
Although Governor Gavin Newsom vetoed SB 1047, California’s controversial AI safety bill, he signed approximately 15 other AI-specific measures into law this year, underscoring the state’s commitment to establishing robust guardrails for AI. Two of these laws – AB 3030 and SB 1120 – focus specifically on the responsible use of AI tools by payers and health care providers, as detailed below.
- Disclosures around use of GenAI for Patient Communications: AB 3030, which takes effect January 1, 2025, promotes patient transparency by imposing disclosure requirements on health care providers that use GenAI. Specifically, the law requires a range of health care providers – including hospitals, clinics, medical groups, and individual licensed providers – that use GenAI to generate patient communications relating to a patient’s clinical information (i.e., the patient’s health status, as opposed to administrative matters such as scheduling or billing), whether sent electronically or over the phone, to include:
- a disclaimer accompanying the communication indicating that it was generated using GenAI and was not reviewed by a medical professional, and
- clear instructions describing how the patient can use the entity’s website or other platform to communicate with a human health care provider, rather than receiving responses generated by GenAI.
The disclosure requirements apply to every AI-generated communication with the patient. For example, for any video or written communications involving continuous online interactions with patients, such as a chat-based telehealth interaction, the provider must prominently display the disclaimer throughout the entire interaction; for any audio communications, the provider must make the disclaimer verbally at both the start and the end of the interaction.
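To illustrate how these placement rules might be operationalized, below is a minimal Python sketch of a messaging-layer wrapper that attaches a disclaimer according to the communication channel. The disclaimer wording, the `Channel` enum, and the `wrap_genai_message` function are hypothetical illustrations only, not statutory language, and any actual implementation should be reviewed against the text of AB 3030.

```python
from enum import Enum

# Hypothetical disclaimer wording; actual language should be drafted
# and reviewed by counsel against the text of AB 3030.
DISCLAIMER = (
    "This message was generated by artificial intelligence and was not "
    "reviewed by a licensed health care professional. To reach a human "
    "provider, please use our patient portal."
)

class Channel(Enum):
    WRITTEN = "written"  # e.g., portal messages or email
    CHAT = "chat"        # continuous chat-based telehealth interactions
    AUDIO = "audio"      # phone or other voice interactions

def wrap_genai_message(body: str, channel: Channel) -> str:
    """Attach the disclaimer to a GenAI-drafted patient communication,
    positioned according to the communication channel."""
    if channel is Channel.WRITTEN:
        # Written communications: display the disclaimer with the message.
        return f"{DISCLAIMER}\n\n{body}"
    if channel is Channel.CHAT:
        # Continuous interactions: the disclaimer must remain visible
        # throughout, so prepend it to every outgoing message.
        return f"[{DISCLAIMER}]\n{body}"
    # Audio: the disclaimer is stated at both the start and the end.
    return f"{DISCLAIMER} {body} {DISCLAIMER}"
```

For example, a chat-based telehealth frontend could call `wrap_genai_message(reply, Channel.CHAT)` on every outgoing GenAI-drafted message so that the disclaimer remains visible for the duration of the interaction.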
- Use of AI during utilization review/management by payers: SB 1120 requires health plans and disability insurers that use algorithms, artificial intelligence (including GenAI), or other software tools for utilization review or management functions (or that contract with vendors using such tools) to comply with certain specified requirements. Among other things, the tools must base any determination on specified information and must be applied fairly and equitably in accordance with all applicable federal guidance and regulations (such as the recently updated federal Section 1557 Final Rule, amended in July to prevent AI tools and algorithms used for clinical care and administrative activities from discriminating against underrepresented or marginalized patients). SB 1120, which also takes effect January 1, 2025, stipulates that only licensed physicians or other qualified licensed health care professionals may evaluate the specific clinical issues involved in health care services requested by a provider and make determinations of medical necessity. The rationale behind SB 1120 is that AI tools, which are typically trained on existing content, may reproduce inaccuracies and biases documented in that content, leading to improper or even discriminatory clinical recommendations.
While not directly related to health care, SB 942 requires entities operating in California with more than one million monthly website visitors or users to, beginning January 1, 2026, clearly and conspicuously disclose whether and what content was generated by AI, and to create a free AI detection tool that allows users to determine whether content (audio, video, image, or any combination thereof) was created or altered by AI. Regulated entities will need to comply by embedding “provenance data” into such content’s metadata – for instance, by tagging or watermarking an image’s metadata to indicate that it was generated by AI.
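As a rough sketch of what embedding “provenance data” in metadata could look like, the following Python example uses the Pillow imaging library to write a provenance tag into a PNG file’s text metadata and to check for it afterward. The tag name and value are hypothetical assumptions for illustration; SB 942 does not prescribe a particular format, and real-world deployments would more likely adopt an industry standard such as C2PA Content Credentials.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_ai_provenance(src_path: str, dst_path: str) -> None:
    """Embed a simple AI-provenance disclosure into a PNG's text metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    # Preserve any text chunks already present in the source file.
    for key, value in getattr(img, "text", {}).items():
        meta.add_text(key, str(value))
    # Hypothetical tag; SB 942 does not prescribe a specific key or value.
    meta.add_text("ai-provenance", "generated-by=example-genai-model")
    img.save(dst_path, pnginfo=meta)

def detect_ai_provenance(path: str) -> bool:
    """A trivial 'detection tool': report whether the provenance tag exists."""
    return "ai-provenance" in getattr(Image.open(path), "text", {})
```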
Finally, as noted above, Governor Newsom vetoed the contentious SB 1047, which would have imposed stringent safety standards – intended to prevent “critical harm” – on companies developing AI models that cost more than $100 million to develop. In his veto message, the governor noted that the bill, although well-intentioned, could thwart the “promise of this technology to advance the public good” by applying even to the most basic AI functions, rather than taking into account whether an AI system will be deployed in high-risk settings, rely on sensitive data, or involve critical decision-making. SB 1047 would not have directly impacted health care providers, but rather the developers of the AI tools that health care providers use.
Colorado & Utah
Beyond California, other state legislatures have enacted AI laws that implicate the use of AI technologies by health care providers.
For example, earlier this year the Colorado governor signed into law SB 24-205, which requires developers of “high-risk” AI models to demonstrate that measures were taken to mitigate the risks of unlawful discrimination and harmful use; explain the model’s intended use, purpose, and benefits; disclose the model’s known or foreseeable limitations; and report how the model was trained. Deployers of “high-risk” AI models must maintain risk management protocols and governance measures to manage preventable or foreseeable discrimination risks; complete annual impact assessments revealing the model’s real-world benefits, risks, and performance metrics; and provide notice and explanations to consumers regarding how the AI model will affect them, including Colorado consumers’ right to opt out of having their data processed by the model. If any high-risk AI system perpetuates unlawful discrimination or violates state or federal data privacy or copyright laws, the deployer must report the violation to the state Attorney General.
SB 24-205, which goes into effect February 1, 2026, defines a “high-risk” AI model as “[a]ny artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision” – that is, a decision that affects a consumer’s access to, among other opportunities, health care services. For example, if an AI system can be used to determine whether health care services should be provided or denied to a particular individual, the developer or deployer must ensure that it complies with SB 24-205’s requirements.
In March 2024, Utah enacted the Artificial Intelligence Policy Act, establishing disclosure obligations and other requirements for companies using GenAI systems. Importantly for health care providers, this law took effect on May 1, 2024, and imposes specific disclosure obligations on those in “regulated occupations” (i.e., occupations regulated by the state that require a person to obtain a license or state certification to practice, such as the health care professions). Those in regulated occupations must “prominently disclose” when a consumer or patient is interacting with GenAI content during the provision of regulated services. The prominent disclosure must be provided (1) verbally at the start of an oral exchange or conversation and (2) through electronic messaging before a written exchange. Entities and individuals outside of regulated occupations but subject to state consumer protection laws remain responsible for statements made by GenAI tools, and “shall clearly and conspicuously disclose” to the consumer with whom the GenAI interacts that the consumer is interacting with GenAI and not a human, if asked or prompted by the consumer. Utah Code Section 13-2-12. Utah’s AI Policy Act does not provide a private right of action, but the State Attorney General, the Utah Division of Consumer Protection, or a court may impose fines and/or civil penalties for violations. The Act also creates an Office of Artificial Intelligence Policy and an AI Learning Laboratory Program aimed at encouraging AI innovation in the state and informing future AI policies.
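For the “if asked or prompted” trigger, a consumer-facing chatbot might screen incoming messages for questions about whether the user is talking to AI. The following is a minimal Python sketch under that assumption; the regular expression and disclosure wording are purely illustrative and are not the statute’s required language.

```python
import re

# Hypothetical patterns for recognizing when a consumer asks whether
# they are interacting with AI (the "if asked or prompted" trigger).
AI_QUERY = re.compile(
    r"\b(are\s+you\s+(a\s+|an\s+)?(ai|bot|robot|human)"
    r"|is\s+this\s+(a\s+|an\s+)?(ai|bot|real\s+person))\b",
    re.IGNORECASE,
)

DISCLOSURE = (
    "You are interacting with generative artificial intelligence, "
    "not a human."
)

def disclosure_if_asked(user_message: str) -> str | None:
    """Return the disclosure when the consumer's message asks whether
    they are talking to AI; otherwise return None."""
    return DISCLOSURE if AI_QUERY.search(user_message) else None
```

A chat handler would call `disclosure_if_asked` on each inbound message and, whenever it returns text, send that disclosure before (or alongside) any GenAI-generated reply.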
Key Takeaways
Although there is no uniform framework regulating the use of AI systems by health care organizations, state and federal guidance trends toward regulating AI systems through consumer safety, insurance, and data privacy protections. In the absence of federal legislation, federal guidance concerning AI use currently aims to bring AI under the umbrella of existing federal nondiscrimination regulations. This past year, the Office for Civil Rights (OCR) issued a Final Rule expanding the scope of regulations promulgated under Section 1557 of the Affordable Care Act, which prohibit covered entities from discriminating in health programs or activities, to include the use of “clinical algorithms in decision-making.” Similarly, the Assistant Secretary for Technology Policy’s (formerly the Office of the National Coordinator for Health Information Technology) HTI-1 Final Rule established specific reporting requirements for AI developers and transparency requirements for predictive decision support interventions used by certified health IT products. The Centers for Medicare & Medicaid Services’ (CMS’) Final Rule on the CY 2024 Policy and Technical Changes to the Medicare Advantage Program allows Medicare Advantage (MA) organizations to use AI tools to assist in coverage determinations but requires all medical necessity determinations to be based on an evaluation of each individual’s specific circumstances. CMS recognizes that such products and their software may be proprietary in nature; however, CMS does not absolve MA plans from understanding, and making publicly available, the external clinical evidence relied upon in developing these products and tools. These rules, although narrowly scoped, demonstrate the federal government’s burgeoning interest in regulating the use of AI in health care settings. Federal regulation of AI in health care is expected to continue, no matter the outcome of the 2024 elections, although the areas of regulatory focus may differ.
The United States Congress has a number of committees and working groups dedicated to exploring AI regulation across industries, including in health care. Despite a flurry of hearings and activity throughout the year, these groups have yet to produce any legislation. As states continue to regulate AI individually, the resulting patchwork of requirements – particularly for companies and health systems providing services across state lines – will be an important driver for federal legislative standards.
As both federal and state legislative efforts like those in California, Colorado, and Utah accelerate to establish guardrails around AI technology, health care organizations that use, or seek to use, AI tools must take action now to ensure compliance with these requirements, where applicable. Health care organizations interested in implementing AI technology should consider adopting governance controls and frameworks, both to help mitigate current institutional risks and to promote the organization’s stakeholder interests in future legislative efforts. Such compliance efforts must also include following state and federal updates, given the continuously evolving regulatory landscape governing the use of AI in health care.
Hooper, Lundy & Bookman, P.C.’s Digital Health Practice and Government Relations department are monitoring developments closely as federal agencies issue further guidance and states enact new laws concerning the use of AI. For more information, please contact any member of the Digital Health Practice, including Andrea Frey, Monica Massaro, Stephen Phillips, Sunaya Padmanabhan, Kelly Carroll, Sheryl B. Xavier, or any other member of the Hooper, Lundy & Bookman team.