HHS Seeks Input on AI in Clinical Care
On December 19, the Department of Health and Human Services released a Request for Information (RFI) on Accelerating the Adoption and Use of Artificial Intelligence as part of Clinical Care. Specifically, HHS is seeking feedback on how clinicians currently use AI and the barriers they face in doing so. HHS intends to use the feedback to inform its approaches to regulation, reimbursement, and research and development for future rulemaking. While the agency has been engaging with stakeholders and utilizing its own AI broadly, this is its first major effort focused on clinical care and reimbursement, which has been a significant barrier to adoption. Comments are due February 21, 2026. For more information or support in commenting, please contact our Hooper, Lundy and Bookman Digital Health Team.
UnitedHealthcare Postpones Remote Monitoring Coverage Changes
UnitedHealthcare will delay planned restrictions on remote patient monitoring (RPM) coverage until later in 2026, following significant industry feedback and allowing time for further review.
Originally, UnitedHealthcare intended to implement the changes on January 1, 2026, limiting RPM reimbursement to chronic heart failure and hypertensive disorders during pregnancy. The policy would have applied across Medicare Advantage and commercial plans, excluding coverage for commonly monitored conditions such as type 2 diabetes, COPD, and general hypertension.
The insurer cited concerns about insufficient clinical evidence supporting RPM for most chronic conditions. However, advocates argue the policy disregards established data and could disrupt care for patients who rely on RPM for disease management.
Providers and digital health companies should monitor updates closely and consider engaging in advocacy efforts to preserve RPM access for chronic care populations. HLB can help organizations navigate payer policy changes and support advocacy efforts to maintain access to remote monitoring services.
Shadow AI Poses Greater Risks Than Most Health Care Organizations Realize, Report Says
Shadow AI—the unauthorized use of AI tools by employees—is emerging as a major compliance and governance challenge for health care organizations, according to Wolters Kluwer Health’s 2026 predictions report. Shadow AI introduces significant concerns around data privacy, security, and regulatory compliance, particularly as generative AI tools become widely accessible. The report warns this issue is larger than most health care organizations realize, and failure to address it could lead to operational, legal, and ethical challenges.
While AI adoption in clinical decision-making and operational workflows is accelerating, many health systems are playing catch-up on oversight, leaving gaps in policies and procedures. As employees grow more comfortable using general consumer AI tools in health care settings, insufficient governance compounds the legal and reputational risks.
Recognizing this, providers and health systems should prioritize AI governance frameworks, update compliance policies, and educate staff on approved AI tools to mitigate shadow AI risks. HLB attorneys are actively helping organizations develop and evaluate compliance programs to strategically address these emerging challenges posed by the greater integration of AI into the health care environment.
Report Calls for Governance Standards to Address Bias in Health Care AI
A new report, “Building a Healthier Future: Designing AI for Health Equity,” authored by the NAACP in partnership with Sanofi through the ACE Your Health Initiative, warns that artificial intelligence tools in health care risk deepening racial inequities without stronger oversight and governance. The report cautions that algorithms used for diagnostics, treatment, and insurance decisions can perpetuate bias and cultural blind spots if developed without adequate input and oversight. To address these risks, the report proposes a three-tier governance framework to guide equitable AI implementation and calls for bias audits and “equity-first” standards as hospitals, technology companies, and regulators adopt AI solutions.
ONC Releases New Information Blocking FAQs
The Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (ASTP/ONC) has released a new set of Information Blocking FAQs, offering clarification on how the Information Blocking Rule applies to evolving health IT practices. One newly issued FAQ specifically states that the regulations may be implicated when an actor’s practices interfere with automation technologies’ ability to access, exchange, or use electronic health information (EHI), such as robotic process automation or agentic artificial intelligence. This guidance reinforces ONC’s expectation that certified health IT and related practices should support automated data access and exchange, a critical foundation for APIs, population health tools, and AI-enabled workflows. In addition to automation, ONC also published new FAQs addressing revenue sharing and when certain financial arrangements may raise information blocking concerns, the role of the requester under the Manner Exception, and the scope of EHI that must be made available to satisfy the Manner Exception.
CMS Announces Winner of AI Fraud Detection Competition
On December 15, the Centers for Medicare and Medicaid Services (“CMS”) announced the winner of its “Crushing Fraud Chili Cook-Off Competition.” The competition amounted to a hackathon in which ten competitors submitted proposals in a “market-based research challenge” to develop machine learning models that detect indicators of fraud in Medicare claims data. The winner was Milliman, Inc., an actuarial firm, whose proposal leveraged explainable AI to “flag statistical anomalies in provider billing” and explain “the underlying factors, empowering investigators to make informed decisions while maintaining human oversight.” The model’s output includes “a single risk score: a composite metric that combines behavioral, network, and financial anomalies into an actionable score for CMS officials.” CMS asserts that the competition details and submissions are exempt from the Freedom of Information Act, but it plans to release a white paper in the near future summarizing takeaways from the competition and potential next steps for implementing similar features to support oversight of federal health care programs.