Comments to the Department of Health and Human Services on accelerating the adoption and use of artificial intelligence as part of clinical care

Testimony


Regulatory uncertainty creates a significant barrier to innovation and adoption of artificial intelligence in clinical care.

A version of the following public comment letter was submitted to the Department of Health and Human Services on February 23, 2026.

On behalf of Reason Foundation, we respectfully submit these comments in response to the request for information (“RFI”) on Accelerating the Adoption and Use of Artificial Intelligence as Part of Clinical Care published by the Office of the Deputy Secretary and Assistant Secretary for Policy (ASTP) and Office of the National Coordinator for Health Information Technology (ONC).

Reason Foundation is a national 501(c)(3) public policy research and education organization with expertise across a range of policy areas, including artificial intelligence policy.

Our comments respond to the following two specific questions contained in the RFI:

  1. What are the biggest barriers to private sector innovation in AI for health care and its adoption and use in clinical care?
  2. What regulatory, payment policy, or programmatic design changes should HHS prioritize to incentivize the effective use of AI in clinical care and why? What HHS regulations, policies, or programs could be revisited to augment your ability to develop or use AI in clinical care? Please provide specific changes and applicable Code of Federal Regulations citations.

1. What are the biggest barriers to private sector innovation in AI for health care and its adoption and use in clinical care?

Regulatory uncertainty creates a significant barrier to private sector innovation and adoption of artificial intelligence (“AI”) in clinical care. This uncertainty surrounds the boundary between regulated and unregulated software that aids clinicians in medical decision-making. While medical devices that autonomously diagnose or, in some cases, treat patients are fully regulated by the Food and Drug Administration (“FDA”) and subject to lengthy clinical trials, other software, known as Clinical Decision Support (“CDS”), is merely informational and therefore exempt from full regulation.

CDS software is typically embedded in hospital workflows and analyzes patient data to provide treating clinicians with alerts, risk assessments, and suggested next steps. The already uncertain boundary between a CDS’s recommendations and a medical device’s diagnosis is a source of concern for developers, who preemptively strip valuable functionality, such as time-sensitive alerts, from their products and redesign what the software displays to clinicians to avoid device classification and associated regulation. The result is that clinicians receive weaker, less actionable tools than current technology could provide, reducing diagnostic accuracy and worsening patient outcomes.

In practice, CDS developers avoid certain features, such as providing specific probabilities for anticipated risks, as shown by a 2025 JAMA Health Forum study. For example, a CDS tool designed to detect early signs of sepsis could suggest multiple treatment options even when its analysis indicates that some options are far more relevant than others, declining to rank them in order to avoid qualifying as a medical device. Rather than alerting a clinician that a patient faces a “20-30% chance of sepsis within 24 hours” and recommending a specific treatment, such as “immediate broad-spectrum antibiotics,” developers are incentivized to present a list of undifferentiated, unranked options. Diluted CDS functionality slashes projected clinician time savings from 35% to under 15% and cuts diagnostic accuracy gains by 22%. These changes neuter AI’s core value of augmenting human judgment with precise, actionable insights at the point of care.

This appears to be precisely what Congress sought to avoid with the 21st Century Cures Act, which created a statutory exclusion for certain clinician-facing CDS that support professional judgment and permit independent review. This exclusion applies when four conditions hold true:

  1. The software supports or provides recommendations to a healthcare professional;
  2. The software analyzes patient-specific medical information;
  3. The professional must independently review the basis for any recommendation; and
  4. The software discloses any limitations or known failure modes.

Congress designed this safe harbor to foster non-device CDS tools that enhance, not replace, professional judgment and to sidestep the FDA’s lengthy premarket review for supportive technologies while preserving safety through transparency and reviewability.

Despite this, from 2022 to January 2026, FDA guidance repeatedly redrew and reinterpreted the boundary between regulated device software and unregulated CDS tools. The agency first issued draft CDS guidance in September 2022, proposing four criteria to distinguish regulated devices from Cures Act exclusions. It then released clarifications in January 2023, further reinterpreting the exclusions to capture tools with probabilistic outputs or action prioritization. A 2024 update tightened these criteria even further, deeming many supportive CDS features, such as risk scores or sequenced options, “device-like” outputs, even when clinicians independently review or override them.

After these years of boundary-tightening, developers face a stricter, less predictable environment that slows AI adoption in clinical care. Hospitals may be less likely to deploy CDS tools at scale if they cannot reliably predict whether ordinary workflow features will trigger device classification, nor can they determine who bears compliance burdens like validation, maintenance, and adverse event reporting.

Former FDA Commissioner Scott Gottlieb and Sen. Bill Cassidy (R-La.) directly challenged these interpretations in an April 2024 letter to FDA leadership. They demanded evidence-based justification for expanding regulatory reach beyond the statutory text. To date, the agency has neither provided a clear response nor cited new safety data warranting the tightened boundary. Instead, precautionary logic drives the classification of merely supportive tools as medical devices. This logic risks blocking rapid adoption of AI tools that could reduce clinician burnout by 30% and diagnostic errors by 20%, according to peer-reviewed pilots. Without HHS-led course correction, the FDA’s unilateral reinterpretations risk ceding U.S. clinical AI leadership to less risk-averse markets.

After several years of progressive boundary-tightening, the FDA released new CDS guidance in January 2026 that partially relaxes the agency’s prior highly restrictive treatment of clinician-facing software. It more clearly ties the device versus non-device boundary to the four statutory criteria in the Cures Act and acknowledges that patient-specific, actionable recommendations can qualify as non-device CDS so long as they support, rather than replace, professional judgment and allow independent review of the underlying rationale. The guidance consolidates earlier documents, replacing some of the most rigid “any directive language = device” interpretations with a more nuanced focus on intent and reliance. It also uses expanded examples to signal that certain high-value CDS functions need not automatically trigger full device regulation.

Despite this recent course correction, substantial barriers remain. The boundary between exempt CDS and regulated device software is still complex, multi-factored, and heavily dependent on the agency’s evolving “current thinking” rather than a broad, bright-line safe harbor. Innovators must still navigate fine distinctions, such as whether a tool’s language “supports” or “drives” a clinician’s decision, whether a medical prediction is too “time sensitive” to be reviewed independently, or whether a single recommendation is the only “clinically appropriate” decision, all under the shadow of discretionary enforcement. This uncertainty disproportionately harms smaller firms and startups that lack the resources to continuously reinterpret guidance, absorb regulatory risk, or adjust products late in development. All of this disincentivizes more transformative AI, instead leading smaller firms to settle for more cautious, low-impact designs.

In addition, the new FDA guidance leaves broader structural barriers unaddressed. Many of the most clinically valuable AI applications, such as early deterioration detection and sepsis prediction, remain effectively steered into device status, thus deterring investment and slowing deployment. Liability environments still favor traditional manual workflows, and there is little clear, AI-specific direction for general-purpose models or patient-facing tools, leaving developers unsure which regulatory regime applies. Together, these factors continue to channel innovation toward what regulators are most comfortable permitting rather than what clinicians and patients would voluntarily adopt in a more predictable, innovation-friendly framework.

Even for non-device CDS that clears the FDA boundary, hospitals confront a profound accountability vacuum that stalls deployment. Questions about who is responsible for clinician training, incident reporting, and ongoing oversight persist. The 2026 FDA guidance clarifies classification but omits this critical post-adoption clarity, leaving providers to navigate fragmented accreditation standards and state rules through protracted legal reviews. This uncertainty pushes risk-averse institutions toward rejecting high-value AI, even when pilots show clear gains in efficiency and accuracy, further channeling innovation toward tools that avoid device classification rather than tools that deliver the greatest clinical value.

2. What regulatory, payment policy, or programmatic design changes should HHS prioritize to incentivize the effective use of AI in clinical care and why? What HHS regulations, policies, or programs could be revisited to augment your ability to develop or use AI in clinical care? Please provide specific changes and applicable Code of Federal Regulations citations.

HHS should direct the FDA to codify a broad and binding safe harbor in its regulations governing device intended use and premarket procedures, at 21 C.F.R. Parts 801 and 806, respectively, that fully implements the Cures Act’s four CDS criteria. Doing so would replace the current patchwork of nonbinding guidance with clear rules protecting probabilistic outputs, prioritized recommendations, and time-critical tools when clinicians retain review authority, explicitly including high-value applications like sepsis detection.

To fill the accountability vacuum, HHS should amend the Centers for Medicare and Medicaid Services’ conditions of participation regulations, as well as the Office of the National Coordinator for Health Information Technology’s certification criteria, to establish a simple, voluntary framework that allocates responsibilities and enables rapid contracting without legal paralysis. These responsibilities could include:

  1. Developers disclose inputs and limitations;
  2. Hospitals validate tools locally and train staff; and
  3. Clinicians retain override authority and report incidents.

These targeted steps would create predictability, promote competition, and advance clinician freedom, thereby channeling innovation toward patient-chosen solutions rather than regulatory comfort zones.