Preventing Bias and Inequities in AI-Enabled Health Tools

White Paper

Published July 6, 2022

While artificial intelligence (AI) has shown great potential within the health care system, it is susceptible to reproducing and even scaling existing biases and inequities. A new Duke-Margolis paper explores how bias enters an AI tool throughout various stages of the development and implementation process, identifies mitigation and testing practices that can reduce the likelihood of building a tool that is biased or inequitable, describes gaps where more research is needed, and offers recommendations to AI developers, AI purchasers, data originators, and the FDA.

 

The paper explores four key ways AI can become biased and inequitable:

  • inequitable framing of the health care challenge or of the next steps to be taken by users;
  • the use of unrepresentative data;
  • the use of biased training data; or
  • insufficient care with choices in data selection, curation, preparation, and model development.

The paper identifies key responsibilities for stakeholders, which include:

  • Developers should create teams with diverse expertise and a deep understanding of the problem being solved, the data being used, the differences that can occur across population subgroups, and how the AI tool's output is likely to be used.
  • Purchasers need to test tools within their own patient subpopulations (for example, by comparing performance across subgroups, as sketched after this list) and use their purchasing power to demand the use of good machine learning practices, testing in diverse populations, and transparency in the results.
  • Data originators should ensure that their data is recorded in just and equitable ways, to allow for the creation of more equitable AI tools and to facilitate testing of these tools.
  • The FDA and other federal agencies should ensure that AI-enabled medical devices perform well across subgroups, require clear and accessible labeling of products regarding subgroup testing and intended use populations, and work to build systems that can monitor medical products for biased performance.
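
As a rough illustration of what subgroup testing can look like in practice, the sketch below compares a model's discrimination and sensitivity across two patient subgroups. The column names, subgroup labels, choice of metrics, and 0.5 decision threshold are illustrative assumptions and are not drawn from the paper itself.

    # Minimal sketch of subgroup performance testing (illustrative only).
    # Column names ("subgroup", "y_true", "y_score") and the 0.5 threshold are assumptions.
    import pandas as pd
    from sklearn.metrics import roc_auc_score, recall_score

    def subgroup_report(df: pd.DataFrame) -> pd.DataFrame:
        """Summarize discrimination (AUROC) and sensitivity for each subgroup."""
        rows = []
        for group, sub in df.groupby("subgroup"):
            y_true = sub["y_true"]
            y_pred = (sub["y_score"] >= 0.5).astype(int)  # assumed decision threshold
            rows.append({
                "subgroup": group,
                "n": len(sub),
                "auroc": roc_auc_score(y_true, sub["y_score"]),
                "sensitivity": recall_score(y_true, y_pred),
            })
        return pd.DataFrame(rows)

    # Toy scores from a hypothetical risk model, evaluated on locally collected data.
    data = pd.DataFrame({
        "subgroup": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "y_true":   [1, 0, 1, 0, 1, 0, 1, 0],
        "y_score":  [0.9, 0.2, 0.8, 0.4, 0.55, 0.3, 0.45, 0.1],
    })
    print(subgroup_report(data))

In this toy data, both subgroups show similar discrimination, but one subgroup has lower sensitivity at the fixed threshold. This is the kind of gap that local testing by purchasers, and monitoring by regulators, is meant to surface.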

Duke-Margolis Authors

Trevan Locke, PhD

Assistant Research Director

Christina Silcox, PhD

Research Director, Digital Health
Adjunct Assistant Professor
Senior Team Member
Margolis Core Faculty

Andrea Thoumi, MPP, MSc

Area Lead, Community Health and Equity
Faculty Director of Health Equity Educational Programming
Senior Team Member
Anti-Racism and Equity Committee Member
Core Faculty Member
Adjunct Assistant Professor
2020 Intern Mentor