Study: Regulatory Policy Changes Needed to Bring AI Into Clinical Decision Making

A new white paper from the Duke-Margolis Center for Health Policy outlines the policy changes needed to incorporate artificial intelligence (AI) into diagnostic and other types of clinical decision support software while supporting effective innovation, regulation, and patient protections.

“AI is now poised to disrupt health care, with the potential to improve patient outcomes, reduce costs, and enhance work-life balance for healthcare providers,” said Greg Daniel, PhD, MPH, Deputy Director for Policy at Duke-Margolis. Realizing that potential, however, will require a deliberate policy process.

“Integrating AI into healthcare safely and effectively will need to be a careful process, requiring policymakers and stakeholders to strike a balance between the essential work of safeguarding patients while ensuring that innovators have access to the tools they need to succeed in making products that improve the public health,” Daniel said.

The Food and Drug Administration (FDA) released its updated Software Precertification (Pre-Cert) Pilot Program Working Model 1.0 on Jan. 7; the program aims to regulate digital health technologies in ways that foster innovation while protecting patient safety.

The Duke-Margolis paper, which was developed with the input of a multi-stakeholder working group, addresses the major challenges currently hindering safe, effective AI health care innovation, including these near-term priorities:

Regulatory clarity – The 21st Century Cures Act removed certain types of clinical decision support software from FDA authority, depending on whether the software explains how it analyzes input data to arrive at a care recommendation. Software that directly diagnoses or treats patients is considered higher risk than software that serves as a support or resource for a clinician’s decision-making. Greater regulatory clarity is needed on how FDA would assess risks to patients when providers use so-called “black box” software versus software that gives more information about how the product uses input data to reach a recommendation.

Data access and privacy – Software innovators need access to large volumes of clinical data to “train” the software. But that training data must be consistent in quality with the input data the software will encounter in the real world: one would not train an autonomous vehicle on an empty racetrack and then expect it to drive down crowded city streets. It will be critical to improve data standards and increase the interoperability of data while upholding patient privacy protections.

Demonstrating value – Public and private coverage and reimbursement to providers will drive adoption and increase the return on investment for these technologies. But AI-enabled clinical decision support software must demonstrate improvements in provider-system efficiency and enable providers to meet key outcome and cost measures. A useful first step would be to establish which clinical decision support software features and performance outcomes payers will value most, as well as the types of evidence that will be required to prove performance gains.

“AI-enabled clinical decision support software has the potential to help clinicians arrive at a correct diagnosis faster, while enhancing public health and improving clinical outcomes,” said Christina Silcox, PhD, managing associate at Duke-Margolis and co-author of the white paper. “To realize AI’s potential in health care, the regulatory, legal, data, and adoption challenges that are slowing safe and effective innovation need to be addressed.”

The white paper, “Current State and Near-Term Priorities for AI-Enabled Diagnostic Support Software in Health Care,” includes input and guidance from a stakeholder working group and was funded through a grant from the Gordon and Betty Moore Foundation.