OpenAI and Anthropic Launch Health-Focused Version of Consumer LLMs

Leading artificial intelligence companies OpenAI and Anthropic have both unveiled major health care initiatives, marking a significant push by foundational model makers into the medical sector. OpenAI announced two products: ChatGPT Health, a consumer-facing platform that allows users to upload medical records and connect health and wellness apps for personalized insights, and OpenAI for Healthcare, an enterprise suite designed to help health care providers with administrative tasks like prior authorization and coding. Anthropic followed with Claude for Healthcare, which blends enterprise and consumer tools in a unified platform offering HIPAA-ready infrastructure for handling protected health information. Both companies are targeting patients, providers, and researchers with AI tools that promise to reduce administrative burden, improve care coordination, and help individuals better understand their medical information.

However, the creation of these patient-centric tools has raised significant concerns among consumer privacy advocates and health care professionals. Medical information that patients enter into these tools often falls outside HIPAA’s protections because the tool is not functioning as a covered entity under the federal health privacy framework. That said, the FTC’s health breach notification regulations and state consumer health data privacy laws may still be implicated (e.g., Washington’s My Health My Data Act). Additionally, critics highlight the risk of AI hallucinations producing inaccurate medical information, potential data breaches, and the possibility that de-identified health records could be re-identified when combined with other datasets.

While OpenAI and Anthropic have indicated that the tools are designed with privacy protections and have committed not to use personal health data to train future models, the regulatory landscape governing these initiatives remains unclear. State and federal laws do not yet adequately address the unique challenges posed by AI systems processing vast amounts of medical data in real time, nor do they establish requirements for disclaimers in patient communications or clear instructions for reaching human health care providers. Both companies acknowledge that their systems can make mistakes and emphasize that qualified health care professionals must review AI-generated content before clinical decisions are made. California Assemblywoman Mia Bonta, sponsor of the recently enacted AB 489, a law that prohibits AI systems from functioning as licensed health care professionals and restricts marketing language suggesting clinical expertise, responded to the announcements by emphasizing that these tools warrant increased scrutiny around consumer protection and highlight the importance of compliance with emerging regulations.


© 2026 Hooper Lundy & Bookman PC