OpenAI and Anthropic Launch Health-Focused Versions of Consumer LLMs
Leading artificial intelligence companies OpenAI and Anthropic have both unveiled major health care initiatives, marking a significant push by foundation model makers into the medical sector. OpenAI announced two products: ChatGPT Health, a consumer-facing platform that allows users to upload medical records and connect health and wellness apps for personalized insights, and OpenAI for Healthcare, an enterprise suite designed to help health care providers with administrative tasks like prior authorization and coding. Anthropic followed with Claude for Healthcare, which blends enterprise and consumer tools in a unified platform offering HIPAA-ready infrastructure for handling protected health information. Both companies are targeting patients, providers, and researchers with AI tools that promise to reduce administrative burden, improve care coordination, and help individuals better understand their medical information.
However, the creation of these patient-centric tools has raised significant concerns among consumer privacy advocates and health care professionals. Medical information entered by patients into these tools often falls outside HIPAA’s protections, since the tool provider is typically not functioning as a covered entity under the federal health privacy framework. That said, the FTC’s health breach notification regulations and state consumer health data privacy laws may still be implicated (e.g., Washington’s My Health My Data Act). Additionally, critics highlight the risk of AI hallucinations producing inaccurate medical information, potential data breaches, and the possibility that de-identified health records could be re-identified when combined with other datasets.
While OpenAI and Anthropic have indicated that the tools are designed with privacy protections and have committed not to use personal health data for training future models, the regulatory landscape impacting these initiatives remains unclear. State and federal laws do not yet adequately address the unique challenges posed by AI systems processing vast amounts of medical data in real time, requirements for disclaimers in patient communications, or clear instructions for reaching human health care providers. Both companies acknowledge their systems can make mistakes and emphasize that qualified health care professionals must review AI-generated content before clinical decisions are made. California Assemblywoman Mia Bonta, sponsor of the recently enacted AB489, a law that prohibits AI systems from functioning as licensed health care professionals and restricts marketing language suggesting clinical expertise, responded to these announcements by emphasizing that these tools warrant increased scrutiny around consumer protection and highlight the importance of compliance with emerging regulations.
Utah Launches Nation’s First State-Approved AI Prescription Refill Program
Utah has partnered with Doctronic to become the first state to approve artificial intelligence for prescription refills, marking a significant development in health care innovation and in the use of regulatory sandboxes to foster technological advancement while maintaining patient safety through controlled testing environments. This project allows patients to interact with an AI agent to renew routine prescriptions for chronic conditions, covering approximately 190 commonly prescribed medications, including blood pressure drugs, diabetes medications, and thyroid treatments. The program includes human oversight mechanisms, with physicians reviewing the first 250 AI-generated prescriptions in each drug class before full automation proceeds. The AI agent also includes multiple safeguards such as identity verification, contraindication screening, and automatic escalation to human clinicians when uncertainties arise. State officials will track clinical safety protocols, patient satisfaction, medication adherence, and cost impacts. Controlled substances and high-risk medications are excluded from the pilot project.
This project may catalyze broader adoption of regulatory sandboxes for high-stakes AI applications across the country, balancing innovation with accountability. States such as Texas, Arizona, and Delaware have already created sandbox frameworks of their own, and legislation is being introduced in other states for consideration in the 2026 legislative sessions. In September 2025, Senator Ted Cruz introduced the SANDBOX Act, which would create a federal regulatory sandbox program allowing AI developers to obtain temporary waivers from federal regulations for up to ten years to test and deploy AI technologies.
DEA Issues Fourth Extension of Telemedicine Flexibilities for Prescribing Controlled Medications through 2026
The DEA, in coordination with HHS, issued another temporary extension of the COVID-19-era telemedicine flexibilities. The fourth temporary extension allows DEA-registered practitioners to prescribe Schedule II-V controlled substances remotely without a prior in-person evaluation through December 31, 2026, avoiding a potential disruption to telehealth-based care while permanent rules are finalized. Under the extension, clinicians may continue prescribing controlled substances via telemedicine (subject to applicable federal and state laws) as the DEA continues work on long-term regulations, including a proposed special registration framework. Additional details are available in the DEA’s official announcement here, HHS’s related press materials here, and HHS’s overview of controlled substance prescribing via telehealth here.
FDA’s New Digital Health Guidance Signals Shift for Wellness Devices and CDS
On January 6, 2026, the Food and Drug Administration (FDA) released updated guidance on Clinical Decision Support Software, as well as General Wellness: Policy for Low Risk Devices, providing clarity to an industry grappling with innovation and regulatory expectations. Both guidance documents contain additional explanation from the FDA on the definitions and regulations that apply to general wellness products and clinical decision support (CDS) software. Read more here.
National AI Policy Framework Introduced
Senator Marsha Blackburn (R-TN) has proposed The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry (TRUMP AMERICA AI Act), a draft bill creating a unified federal rulebook for artificial intelligence. The framework focuses on child safety, creator rights, bias audits, and transparency in AI’s impact on jobs and infrastructure, aiming to harmonize regulations and strengthen U.S. leadership in AI. While the bill is not specific to health care and is partisan in nature, Senator Blackburn has played a significant role in AI policymaking, including bipartisan efforts to protect children interacting with AI through the Kids Online Safety Act, which is a component of this framework.
