AI-Enabled Medical Software: How to Harness Benefits and Mitigate Risks
The Duke-Margolis Center for Health Policy and The Center for Innovation Policy (CIP) at Duke University School of Law have issued a new guide on the use of artificial intelligence (AI) in health care. The report, Trust, but Verify: Informational Challenges Surrounding AI-Enabled Clinical Decision Software, funded by the Greenwall Foundation, is a resource for software developers, regulators, clinicians, policy makers, and other stakeholders on how to incentivize innovation in safe, effective AI-enabled medical products while communicating clearly about how and when to use them.
“Stakeholders require substantial information about AI-enabled software to effectively harness its benefits and mitigate risk,” said report authors Christina Silcox, managing associate, Duke-Margolis; Arti Rai, Elvin R. Latty Professor of Law and Co-Director, CIP, Duke Law; and Isha Sharma, senior research assistant, Duke-Margolis. “Unique business concerns and technical challenges may at times create mismatches between the information regulators and adopters desire and the information developers are willing or able to provide. Our work examined where these mismatches may exist.”
The report covers a broad range of issues, addressing the unique challenges of using AI in health care, key questions and answers about AI-enabled clinical decision software, and the patent status, regulation, and adoption of this software. The report comes as the U.S. Food and Drug Administration (FDA) has issued an update on its continuing work to reimagine the regulation of medical software, “Developing the Software Precertification Program: Summary of Learnings and Ongoing Activities.”
Some of the key findings from the report’s co-authors include:
- Recent Supreme Court decisions on medical diagnostic and software patenting have not deterred venture capital investment in AI-enabled software. To the contrary, as with AI-enabled health technology generally, venture capital investment in AI-enabled clinical decision software has risen in recent years.
- Developers and adopters view FDA clearance and/or approval of their AI-enabled medical products as an important indicator of value.
- Almost all the stakeholders interviewed for this report stressed that AI-enabled clinical decision software can enhance workflows, positively influence care decisions, and improve outcomes.
- How health systems monitor AI-enabled software for degrading performance will become a growing issue as they adopt such software and tailor it to their environments.
- Clear, accessible user guides that describe the intended patient populations and the limitations of AI-enabled software will be crucial, both for clinicians to consult during use and for training other health care providers on the software.
- The data on patient acceptance of AI-enabled medical products are mixed.
- Neither FDA nor adopters have yet required the types of information about AI-enabled software that developers consider most valuable as trade secrets, so conflicts have not yet become a critical concern; the emphasis instead has been on performance data.
The co-authors offer recommendations on information that should be shared as stakeholders explore, evaluate, adopt, use, and monitor emerging AI-enabled products, including:
- Provider systems should be open about their internal process challenges and informational needs so that manufacturers can develop products that solve real problems and fit into health system workflows. Manufacturers need to bring in experts who are well-versed in health system workflows and be prepared to show evidence of the clinical utility of their product, not just the accuracy of its results.
- As products with higher risk profiles emerge, the parties involved will need to create procedures that allow information developers consider trade secrets (e.g., training data and model details) to be shared safely with trusted third parties (e.g., the FDA) that can evaluate it.
- Information about the intended use of AI-enabled decision support software, including the clinical context and how the recommendations should be used, should always be disclosed publicly.
- Stakeholders should develop a set of best practices and recommendations for evaluating new AI-enabled software products, including guidelines for vetting them thoroughly.
- Because AI-enabled software can fail or break in unexpected ways, manufacturers and health systems should work together to monitor system performance after implementation, including updating as needed, and share information about product limitations and adverse or near-miss events.
“AI has the potential to streamline workflows, increase job satisfaction, reduce spending, and improve health outcomes,” noted the co-authors. “Estimates show that AI can help address about 20 percent of unmet clinical demand. However, to achieve this goal and long-term success, ensuring that the right information is shared with the right stakeholder at the right time will be essential.”
This white paper is released in conjunction with an upcoming law journal article that details the research methodology and presents findings on the adequacy of disclosures in patents, peer-reviewed publications, and FDA documentation.