Trust, But Verify: Informational Challenges Surrounding AI-Enabled Clinical Decision Software

White Paper

This report explores how, in cases where certain information cannot be shared, alternative information could be used to satisfy stakeholder needs. The report is meant to serve as a resource for developers, regulators, clinicians, policy makers, and other stakeholders as they strive to develop, evaluate, adopt, and use AI-enabled medical products. We offer insight into how to incentivize the innovation of safe and effective products while communicating information on how and when to use those products. Specific themes include the:

  • Ways in which AI-enabled software in health care may differ from traditional medical products;
  • Categories of information surrounding AI-enabled clinical software;
  • Informational needs and governance structure around AI-enabled clinical software during the total product life cycle; and
  • Role that regulatory incentives protecting developer investment, such as patents and trade secrecy, play in information flow.

The discussion of informational needs and governance structures is based on a literature review, database searches, perspectives shared during meetings hosted by the Center for Innovation Policy at Duke Law and the Duke-Margolis Center for Health Policy, and individual stakeholder interviews.

This white paper was funded by the Greenwall Foundation. Any opinions expressed in this paper are solely those of the authors and do not represent the views or policies of organizations external to Duke.

Authors

Christina Silcox, PhD

Research Director, Digital Health
Adjunct Assistant Professor
Senior Team Member
Margolis Core Faculty

Arti K. Rai, JD

Elvin R. Latty Professor of Law
Margolis Core Faculty