The Rise of Artificial Intelligence: Navigating Healthcare Regulatory Considerations

Artificial intelligence (AI) has recently been receiving significant attention in the media and from lawmakers alike.  AI has been used in healthcare in some capacity for years, one key example being imaging-related AI tools that support radiologists.  Lately, however, there has been an exponential increase in interest and investment in AI.  AI tools have advanced healthcare delivery and operations in various ways, including virtual assistants that help patients schedule appointments and manage billing, symptom checkers that help patients access information relevant to their symptoms, and clinical decision support tools used by practitioners.  Such tools may be particularly important given ongoing workforce shortages in the health care industry.  At the same time, the potential risks of AI, especially those involving bias, reliability, transparency, and patient privacy, are important to address given the high stakes in the delivery of health care.

Historically, there has been no overarching federal scheme regulating AI.  However, given the recent rise in interest and the expansion of AI tools (including, of course, the explosion of ChatGPT since its release in late 2022), efforts to implement such a scheme are starting to emerge.  Efforts to regulate AI in healthcare are progressing, although that progress is decidedly slower than the breakneck speed at which AI technology itself is developing.  The area is not devoid of regulatory oversight, however.  Many of the existing healthcare laws and regulations that healthcare attorneys apply on a daily basis apply equally to AI.  This article summarizes some of the key existing laws and regulations that apply to AI in healthcare, as well as emerging efforts to implement a more comprehensive regulatory framework in the future.

Key Healthcare Regulatory Considerations

As stated above, though new technologies are emerging, many of the same healthcare regulatory rules and limitations continue to apply.  A few key areas that are well-traversed by healthcare attorneys follow.

Practice of Medicine.  Harnessing the power of AI has galvanized, and will continue to galvanize, material advancements in medical practice.  From the perspective of the professional boards regulating the practice of medicine and other health care professions, solutions leveraging AI and/or machine learning (ML) must preserve the role of the practitioner in medical practice.  In essence, the practitioner must make all clinical decisions involving patient treatment.  Technological solutions identify patterns, analyze data, and compile information in powerful ways that empower practitioners to provide superior care to their patients.  To avoid allegations that the technology has engaged in the unlicensed practice of a health care profession, the practitioner must actively review the information produced by the technology, evaluate that information, and decide on the most appropriate next steps for treatment.

Patient Privacy.  Developers of AI require large amounts of data, and any use or disclosure of patient data should be closely analyzed under applicable federal and state data privacy and security laws.  As one example, under the Health Insurance Portability and Accountability Act and its implementing regulations, key questions for covered entities will include whether only de-identified data is contemplated or whether protected health information (PHI) is involved, and whether the purpose of a covered entity’s use or disclosure is for “health care operations,” for “research” (such as a developer’s research and development of a tool that will be commercialized over time), or for other purposes.[1]  In short, covered entities should understand how PHI is being used and disclosed, and for what purposes, so that they can put appropriate protections in place in accordance with applicable laws.  They should also determine whether, and to what extent, they are willing to contractually permit a business associate that is also an AI developer to create and use de-identified information.

Anti-Kickback Statute.  The Office of the National Coordinator for Health Information Technology (ONC) within the U.S. Department of Health and Human Services (HHS), in a recent proposed rule (addressed further below), flagged the potential for violation of the federal anti-kickback statute in the context of AI.  The ONC stated that where a third party provides remuneration to a health IT developer to integrate or enable AI software (referred to as predictive decision support interventions), with one purpose being to increase sales of that party’s products or services, the federal anti-kickback statute could be implicated.[2]  In particular, the ONC referenced pharmaceutical manufacturers and clinical laboratories as entities that may financially sponsor the deployment of AI and, in doing so, promote AI solutions that recommend or influence a health care provider to order a particular item or service from the sponsor.[3]  The potential kickback risk raised by the ONC echoes risks previously identified by the HHS Office of Inspector General regarding arrangements between electronic health record (EHR) vendors and their customers, such as “a provider or supplier paying an EHR vendor to recommend – through its software – that provider or supplier for items or services reimbursable by a Federal health care program.”[4]

FDA Oversight – Clinical Decision Support Software.  In September 2022, the U.S. Food and Drug Administration (FDA) published final guidance clarifying its views on which clinical decision support software (CDSS) functions are (and are not) medical devices under the Federal Food, Drug, and Cosmetic Act (FDCA).[5]  FDA clarified the types of CDSS that are excluded from the definition of “device” by certain criteria set forth in the FDCA (the Non-Device CDS Criteria).  That determination is important because of the significant effort required to obtain FDA approval to market a medical device.

Section 3060(a) of the 21st Century Cures Act added Section 520(o) to the FDCA, which excludes certain software functions from the definition of “device” in Section 201(h) of the FDCA.  Certain CDS software functions are excluded from the definition of device under Section 520(o)(1)(E) of the FDCA if they meet all four of the following criteria:

(1) not intended to acquire, process, or analyze a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system;

(2) intended to display, analyze, or print medical information;

(3) intended to support or provide recommendations to the treating clinician about prevention, diagnosis, or treatment of a patient; and

(4) intended to enable the clinician to independently review the basis for the software’s recommendations, such that it is not the intent that the clinician rely primarily on those recommendations to diagnose or treat a patient.[6]

Medical Malpractice Claims and Other Tort Risk.  Another area of risk involves medical malpractice and other tort claims in circumstances where a practitioner uses AI as part of the clinical decision-making process and a patient experiences an adverse outcome (health systems or other employers of practitioners could also be subject to vicarious liability).  As referenced elsewhere in this article, a practitioner ultimately should determine the plan of care based on their independent clinical judgment rather than relying on the AI tool, and questions could arise as to whether a practitioner’s reliance on AI deviates from the standard of care or otherwise constitutes tortious conduct.[7]

Unlawful Discrimination and Other Applicable Laws.  Outside of healthcare-specific laws and regulations, other existing legal regimes apply to AI.  On April 25, 2023, the Federal Trade Commission, the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau, and the U.S. Equal Employment Opportunity Commission released a Joint Statement outlining their commitment to enforcing their respective existing laws and regulations “to promote responsible innovation” in automated systems, emphasizing that “[e]xisting legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.”[8]  The Joint Statement summarizes guidance and other activities each agency has previously issued that reflect its “concern about potentially harmful uses of automated systems.”  The Joint Statement focuses in particular on the potential for AI tools to produce outcomes that result in unlawful discrimination, including, without limitation, skewed outcomes based on imbalanced or unrepresentative datasets, automated systems correlating data with protected classes, “black box” systems that do not allow for review to determine fairness, and flawed assumptions about users or the context in which a tool will be used.

Legislative and Regulatory Efforts Underway

Calls for regulatory oversight of AI have grown at both the state and federal level in the past few years, as various government agencies and other stakeholders seek to strike the right balance between fostering innovation and establishing guardrails for consumer protection.

As just a few examples, in October 2022 the White House issued the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,[9] a non-binding white paper which sets forth principles regarding safety, efficacy, non-discrimination, data privacy, transparency, and the right of an individual to opt out of an automated system.

In January 2023, the National Institute of Standards and Technology (NIST) released a voluntary guide entitled the AI Risk Management Framework.[10]  The NIST guide identifies the risks of AI as well as the characteristics of “trustworthy” AI, with a focus on validity, reliability, accountability, and transparency, before identifying the “core” functions of the risk management framework: govern, map, measure, and manage.  Much like risk assessment and management plans in other areas, these functions include establishing a culture of risk management (govern), identifying the risks (map), assessing the risks (measure), and prioritizing risks to implement mitigating actions (manage).

Separately, on April 13, 2023, the U.S. Senate Majority Leader announced the launch of an effort to develop broad federal legislation on the topic.[11]

There are also a number of state legislative efforts underway.[12]  In Massachusetts, for example, Bill S.31, which was itself drafted with the help of ChatGPT, would generally regulate AI models like ChatGPT, and Bill H.1974 would regulate the use of AI in providing mental health services, requiring a licensed mental health professional to obtain approval from the relevant professional licensing board and informed consent from patients prior to using AI, among other things.  Whether or not these particular bills pass, they and other pending bills across multiple states indicate a strong interest at the state level in implementing more oversight of AI.

Federal regulatory efforts specific to healthcare are also starting to emerge.  For example, HHS’s proposed rule updating the implementing regulations for Section 1557 of the Affordable Care Act, which prohibits discrimination in certain health programs and activities, addresses bias in AI.[13]  As proposed, 45 C.F.R. § 92.210 would state that a “covered entity must not discriminate against any individual on the basis of race, color, national origin, sex, age, or disability through the use of clinical algorithms in its decision-making.”  This rule is not limited to AI, as an algorithm could take many forms, but its applicability to AI is clear.  This is a new provision that HHS felt was critical to address, given “recent research demonstrating the prevalence of clinical algorithms that may result in discrimination,” including studies of Crisis Standards of Care plans used during the COVID-19 pandemic.[14]  HHS states that although covered entities are not liable for algorithms they did not develop, “they may be held liable under this provision for their decisions made in reliance on clinical algorithms,” noting that such algorithms are a tool to supplement, not supplant, individual clinical judgment.

More recently, the ONC issued a proposed rule on Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (the HTI-1 Proposed Rule).[15]  Among other things, the HTI-1 Proposed Rule outlines a series of standards that AI and ML technology (which the ONC calls predictive decision support interventions, or DSIs, rather than CDS) must satisfy to obtain the voluntary ONC Health IT Certification.  The ONC describes the goal of that certification as being to “assist in addressing the gaps between the promise and peril of AI in health.”[16]

The HTI-1 Proposed Rule emphasizes the importance of transparency in AI technology, which the ONC articulates through a commitment to FAVES solutions, i.e., those that are fair, appropriate, valid, effective, and safe.[17]  The requirements that DSIs must satisfy to obtain the ONC Health IT Certification broadly fall into three categories: 1) requiring developers of DSIs to provide technical and performance information to users; 2) requiring developers to follow a range of risk management practices; and 3) requiring developers to perform real-world testing of their technology solutions.

The requirement to provide information to users of DSIs is intended to enable those users to “make informed decisions about whether and how to use predictive DSIs.”[18]  Risk management practices are expected to include risk analysis, risk mitigation tactics, and governance strategies.  This suggests that developers of DSIs will be judged holistically and will be expected to develop programs, policies, and procedures that ensure comprehensive and proactive risk management.  In particular, the ONC anticipates that developers will invest significant effort in counteracting the potential bias that DSIs may create.

Notably, though these legislative and regulatory efforts are more recent, trade associations have previously issued guidance to support stakeholders in the healthcare sector seeking to develop, deploy, and use AI.  The American Medical Association issued a Policy for Augmented Intelligence in 2018,[19] and the Consumer Technology Association issued ANSI-accredited standards, in 2020 and 2021 respectively, defining terms and addressing core requirements for determining trustworthy AI solutions in health care.  Although it is important for health care providers and health IT developers alike to track state and federal legislative and regulatory developments, in the interim such industry-led efforts to promote innovation balanced with accountability and consumer protection can be useful resources.

Conclusion

The energy and excitement surrounding the promise of AI and ML in the healthcare industry in 2023 are palpable.  But as is the case with any material advancement involving the provision of care, there are numerous regulatory issues to consider as we begin to understand how this technology will interact with healthcare’s challenging regulatory landscape.

Melania Jankowski also contributed to this article.

[1] See, e.g., 45 C.F.R. § 164.501 (definitions of “health care operations” and “research”); 45 C.F.R. § 164.514 (providing standards for de-identification of PHI). The HHS webpage Resources for Mobile Health Developers, available at HIPAA & Health Apps | HHS.gov (last visited June 6, 2023), also provides useful resources, though not specific to developers of apps that incorporate AI.

[2] Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, 88 Fed. Reg. 23746, 23777 (Apr. 18, 2023).

[3] Id.

[4] See HHS Office of Inspector General, General Questions Regarding Certain Fraud and Abuse Authorities, FAQ #6 (hhs.gov) (last visited June 6, 2023).

[5] 21 U.S.C. § 301 et seq.

[6] FD&C Act § 520(o)(1)(E).

[7] Though outside the scope of this article, organizations and clinicians utilizing AI clinical support tools should adopt safeguards to mitigate tort liability risk when incorporating AI into their clinical tool sets, including requiring clear documentation of the practitioner’s AI-independent medical decision-making, obtaining patient informed consent to the use of such AI tools, establishing clinical procedures, policies, and protocols on responsible use of AI, and obtaining insurance coverage for such use.

[8] See Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, EEOC-CRT-FTC-CFPB-AI-Joint-Statement(final).pdf (last visited June 6, 2023).

[9] See The White House, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (whitehouse.gov) (last visited June 3, 2023).

[10] See NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (nist.gov) (last visited June 6, 2023).

[11] See The Senate Democratic Caucus, Schumer Launches Major Effort to Get Ahead of Artificial Intelligence (last visited June 6, 2023).

[12] See Bill S.31 (malegislature.gov); Bill H.1974 (malegislature.gov) (last visited June 3, 2023).  The National Conference of State Legislatures provides a listing of state-level AI legislative efforts, last updated April 18, 2023, at Artificial Intelligence 2023 Legislation (ncsl.org) (last visited June 6, 2023).

[13] 87 Fed. Reg. 47824, 47880-84, 47918 (Aug. 4, 2022).

[14] Id. at 47880.

[15] Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, 88 Fed. Reg. 23746 (Apr. 18, 2023).

[16] 88 Fed. Reg. at 23780.

[17] Id.

[18] Id.

[19] See American Medical Association, Advancing health care AI through ethics, evidence and equity (ama-assn.org) (last visited June 6, 2023); Consumer Technology Association, CTA Launches New Trustworthiness Standard for AI in Health Care (last visited June 6, 2023).

For more information, please contact Amy Joseph, Jeremy Sherer, or Melania Jankowski in Boston, Steve Phillips in San Francisco, or your regular HLB contact.