Digital Health Blog

Legal & Policy Insight to Empower the Evolution of Health Care


01.14.26

OpenAI and Anthropic Launch Health-Focused Versions of Consumer LLMs

Leading artificial intelligence companies OpenAI and Anthropic have both unveiled major health care initiatives, marking a significant push by foundational model makers into the medical sector. OpenAI announced two products: ChatGPT Health, a consumer-facing platform that allows users to upload medical records and connect health and wellness apps for personalized insights, and OpenAI for Healthcare, an enterprise suite designed to help health care providers with administrative tasks like prior authorization and coding. Anthropic followed with Claude for Healthcare, which blends enterprise and consumer tools in a unified platform offering HIPAA-ready infrastructure for handling protected health information. Both companies are targeting patients, providers, and researchers with AI tools that promise to reduce administrative burden, improve care coordination, and help individuals better understand their medical information.

However, the creation of these patient-centric tools has raised significant concerns among consumer privacy advocates and health care professionals. Medical information that patients enter into these kinds of tools often falls outside of HIPAA’s protections because the tool is not functioning as a covered entity under the federal health privacy framework. That said, the FTC’s health breach notification regulations and state consumer health data privacy laws may still be implicated (e.g., Washington’s My Health My Data Act). Additionally, critics highlight the risk of AI hallucinations producing inaccurate medical information, potential data breaches, and the possibility that de-identified health records could be re-identified when combined with other datasets.

While OpenAI and Anthropic have indicated that the tools are designed with privacy protections and have committed not to use personal health data to train future models, the regulatory landscape impacting these initiatives remains unclear. State and federal laws do not yet adequately address the unique challenges posed by AI systems processing vast amounts of medical data in real time, requirements for disclaimers in patient communications, or clear instructions for reaching human health care providers. Both companies acknowledge their systems can make mistakes and emphasize that qualified health care professionals must review AI-generated content before clinical decisions are made. California Assemblywoman Mia Bonta, sponsor of the recently enacted AB 489, a law that prohibits AI systems from functioning as licensed health care professionals and restricts marketing language suggesting clinical expertise, responded to these announcements by emphasizing that these tools warrant increased scrutiny around consumer protection and highlight the importance of compliance with emerging regulations.

01.14.26

Utah Launches Nation’s First State-Approved AI Prescription Refill Program

Utah has partnered with Doctronic to become the first state to approve artificial intelligence for prescription refills, marking a significant development in health care innovation and in the use of regulatory sandboxes to foster technological advancement while maintaining patient safety through controlled testing environments. The project allows patients to interact with an AI agent to renew routine prescriptions for chronic conditions across approximately 190 commonly prescribed medications, including blood pressure drugs, diabetes medications, and thyroid treatments. The program includes human oversight mechanisms, with physicians reviewing the first 250 AI-generated prescriptions in each drug class before full automation proceeds. The AI agent also includes multiple safeguards such as identity verification, contraindication screening, and automatic escalation to human clinicians when uncertainties arise. State officials will track clinical safety protocols, patient satisfaction, medication adherence, and cost impacts. Controlled substances and high-risk medications are excluded from the pilot project.

This project may catalyze broader adoption of regulatory sandboxes for high-stakes AI applications across the country, balancing innovation with accountability. States such as Texas, Arizona, and Delaware have already created sandbox frameworks of their own, and legislation is being introduced in other states for consideration in the 2026 legislative sessions. In September 2025, Senator Ted Cruz introduced the SANDBOX Act, which would create a federal regulatory sandbox program allowing AI developers to obtain temporary waivers from federal regulations for up to ten years to test and deploy AI technologies.

01.14.26

DEA Issues Fourth Extension of Telemedicine Flexibilities for Prescribing Controlled Medications through 2026

The DEA, in coordination with HHS, issued another temporary extension of the COVID-19 era telemedicine flexibilities. The fourth temporary extension allows DEA-registered practitioners to prescribe Schedule II-V controlled substances remotely without a prior in-person evaluation through December 31, 2026, avoiding a potential disruption to telehealth-based care while permanent rules are finalized. Under the extension, clinicians may continue prescribing controlled substances via telemedicine (subject to applicable federal and state laws), as the DEA continues work on long-term regulations, including a proposed special registration framework. Additional details are available in the DEA’s official announcement here, HHS’s related press materials here, and HHS’s overview of controlled substance prescribing via telehealth here.

01.14.26

FDA’s New Digital Health Guidance Signals Shift for Wellness Devices and CDS

On January 6, 2026, the Food and Drug Administration (FDA) released updated guidance on Clinical Decision Support Software, as well as General Wellness: Policy for Low Risk Devices, providing clarity to an industry grappling with innovation and regulatory expectations. Both guidance documents contain additional explanation from the FDA of the definitions and regulations that apply to general wellness products and clinical decision support (CDS) software. Read more here.

01.14.26

National AI Policy Framework Introduced

Senator Marsha Blackburn (R-TN) has proposed The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry (TRUMP AMERICA AI Act), a draft bill creating a unified federal rulebook for artificial intelligence. The framework focuses on child safety, creator rights, bias audits, and transparency in AI’s impact on jobs and infrastructure, aiming to harmonize regulations and strengthen U.S. leadership in AI. While the bill is not specific to health care and is partisan in nature, Senator Blackburn has played a significant role in AI policymaking, including bipartisan efforts to protect children interacting with AI through the Kids Online Safety Act, which is a component of this framework.

12.22.25

HHS Releases HTI-5 Proposed Rule

The U.S. Department of Health and Human Services (HHS), through the Assistant Secretary for Technology Policy and Office of the National Coordinator for Health Information Technology (ASTP/ONC), has released the Health Data, Technology, and Interoperability: ASTP/ONC Deregulatory Actions to Unleash Prosperity (HTI-5) Proposed Rule. ASTP/ONC identifies three primary objectives in HTI-5, each aimed at streamlining compliance while strengthening data access and interoperability. These include (i) reducing burden in the Health IT Certification Program, (ii) revisiting the information blocking regulatory framework by proposing to revise or remove certain definitions, conditions, and exceptions that have been susceptible to misuse or overbroad interpretation, and (iii) reorienting the Certification Program around FHIR®-based application programming interfaces (APIs) and modern interoperability standards.

The HTI-5 Proposed Rule will be open for public comment for 60 days following publication in the Federal Register. ASTP/ONC has encouraged health IT developers, providers, digital health companies, and other stakeholders to review the proposal in full, given the breadth of changes contemplated. ASTP/ONC’s press release and fact sheet about the proposed rule can be found here.

In tandem with HTI-5, ASTP/ONC is also withdrawing certain proposals not yet finalized from the HTI-2 proposed rule.

12.19.25

CMS Announces Winner of AI Fraud Detection Competition

On December 15, the Centers for Medicare and Medicaid Services (“CMS”) announced the winner of its “Crushing Fraud Chili Cook-Off Competition.” The competition amounted to a hack-a-thon wherein ten competitors submitted proposals in a “market-based research challenge” to develop machine learning models that detect indicators of fraud in Medicare claims data. The winner was Milliman, Inc., an actuarial firm, whose proposal leveraged explainable AI to “flag statistical anomalies in provider billing” and explain “the underlying factors, empowering investigators to make informed decisions while maintaining human oversight.” The model’s output includes “a single risk score: a composite metric that combines behavioral, network, and financial anomalies into an actionable score for CMS officials.” CMS asserts that the competition details and submissions are exempt from the Freedom of Information Act but plans to release a white paper in the near future to summarize takeaways from the competition and potential next steps to implement similar features to support oversight of federal health care programs.

12.19.25

ONC Releases New Information Blocking FAQs

The Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (ASTP/ONC) has released a new set of Information Blocking FAQs, offering clarification on how the Information Blocking Rule applies to evolving health IT practices. One newly issued FAQ specifically states that the regulations may be implicated when an actor’s practices interfere with automation technologies’ ability to access, exchange, or use electronic health information (EHI), such as robotic process automation or agentic artificial intelligence. This guidance reinforces ONC’s expectation that certified health IT and related practices should support automated data access and exchange, a critical foundation for APIs, population health tools, and AI-enabled workflows. In addition to automation, ONC also published new FAQs addressing revenue sharing and when certain financial arrangements may raise information blocking concerns, the role of the requester under the Manner Exception, and the scope of EHI that must be made available to satisfy the Manner Exception.

12.19.25

Report Calls for Governance Standards to Address Bias in Health Care AI

A new report, “Building a Healthier Future: Designing AI for Health Equity,” authored by the NAACP in partnership with Sanofi through the ACE Your Health Initiative, warns that artificial intelligence tools in health care risk deepening racial inequities without stronger oversight and governance. The report cautions that algorithms used for diagnostics, treatment, and insurance decisions can perpetuate bias and cultural blind spots if developed without input and oversight. To address these risks, the report proposes a three-tier governance framework to guide equitable AI implementation and calls for bias audits and “equity-first” standards as hospitals, technology companies, and regulators adopt AI solutions.

12.19.25

Shadow AI Poses Greater Risks Than Most Health Care Organizations Realize, Report Says

Shadow AI—the unauthorized use of AI tools by employees—is emerging as a major compliance and governance challenge for health care organizations, according to Wolters Kluwer Health’s 2026 predictions report. Shadow AI introduces significant concerns around data privacy, security, and regulatory compliance, particularly as generative AI tools become widely accessible. The report warns this issue is larger than most health care organizations realize, and failure to address it could lead to operational, legal, and ethical challenges.

While AI adoption in clinical decision-making and operational workflows is accelerating, many health systems are playing catch-up on oversight, leaving gaps in policies and procedures. As employees become more comfortable using general consumer tools in the health care setting, the legal and reputational risk created by insufficient governance compounds.

Recognizing this, providers and health systems should prioritize AI governance frameworks, update compliance policies, and educate staff on approved AI tools to mitigate shadow AI risks. HLB attorneys are actively helping organizations develop and evaluate compliance programs to strategically address these emerging challenges posed by the greater integration of AI into the health care environment.

Key Contacts

Andrea Frey
Partner
San Francisco
San Diego
Stephen K. Phillips
Partner
San Francisco
Eric M. Fish
Partner
Washington, D.C.
Monica Massaro
Principal of Government Relations & Public Policy
Washington, D.C.
Claire Ernst
Director, Government Relations & Public Policy
Washington, D.C.


© 2026 Hooper Lundy & Bookman PC
