
Can AI solve America’s maternal health crisis? 3 ways to prevent bias in healthcare


Maternal health is an important conversation this election season. Three women, along with their loved ones, took the stage during the first night of the Democratic National Convention and shared their harrowing experiences with reproductive health care. After being denied care while suffering a miscarriage, “I was in pain and bleeding so much that my husband feared for my life,” recalled Kaitlyn Joshua of Baton Rouge, Louisiana.

Childbirth in the United States poses a greater risk than in other high-income countries. As much as 80% of pregnancy-related deaths are preventable, according to the Centers for Disease Control and Prevention. Artificial intelligence is being used to address disparities in maternal health by predicting pregnancy complications, monitoring fetal abnormalities, identifying high-risk pregnancies, and improving access to care.

The problem with using AI in maternal health care, however, is that the technology is often designed without patients of color in mind, meaning that quality of care, access to care, and treatment can suffer and even cause harm. For example, Harvard researchers found that an algorithm predicted Black and Latina women were less likely than white women to have a successful vaginal birth after cesarean section (VBAC). The algorithm’s bias could lead doctors to perform more C-sections on women of color. After years of research, the algorithm was updated so that it no longer considers race or ethnicity when predicting the risk of VBAC complications.

It is impractical to simply remove race and ethnicity from every AI algorithm. These demographic factors play a critical role in addressing persistent inequalities within healthcare systems. Researchers must be aware of how and when race and ethnicity data are used in creating AI.

AI algorithms for maternal health care rely on data. When that data does not represent our most vulnerable populations, or is rooted in racist practices by healthcare providers, bias can emerge in AI-driven maternal care. When marginalized patients, caregivers from their communities, and healthcare professionals who have undergone inclusive training collaborate on AI innovation, it opens the door to addressing bias and sparking an equitable revitalization of maternal care in the United States.

Doctors at Cedars-Sinai recognized that, because of provider bias, Black women are less likely to receive low-dose aspirin treatment to prevent preeclampsia, a dangerous hypertensive complication of pregnancy that can cause illness or death. The doctors used AI to identify patients at risk for preeclampsia and to automate decisions about prescribing aspirin. This technology increased appropriate aspirin treatment and eliminated racial disparities in care.

Black women are two to three times more likely to die from pregnancy-related causes than white, Asian and Latina women, regardless of their income and education. Joshua’s story marks the steady drumbeat in the long line of Black women who often feel unseen, undervalued and unsupported when seeking maternal health care. It’s an experience that even Beyoncé and Serena Williams couldn’t escape.

The use of AI in maternal health care continues to evolve, and preventing AI bias is critical not only to the equitable advancement of AI but also to addressing persistent disparities in maternal health in the United States. AI cannot solve the maternal and reproductive health crisis in the United States, but it can pave the way for equitable care for our vulnerable populations.

Here are three ways to avoid AI bias in maternal healthcare technology:

  • Provide diverse, representative data to avoid bias: The data used to train AI systems should represent all demographics, which means collecting comprehensive data across diverse racial, socioeconomic, and geographic backgrounds so that existing biases are not perpetuated. By integrating a wide range of data points, AI systems can deliver more accurate and fair health assessments and recommendations (a minimal audit sketch follows this list).
  • Embrace a multidisciplinary approach involving healthcare experts, ethicists, and community advocates: This means involving not only data scientists and engineers, but also healthcare professionals, ethicists, and community advocates in the design and implementation process. Such collaboration ensures that AI systems are developed with a thorough understanding of the real-world implications and nuances of maternal health.
  • Establish transparent AI governance with regulatory oversight for monitoring and continuous improvement: Ensure ethical standards are upheld and encourage open communication among patients, healthcare providers and AI developers so that systems keep improving. This builds confidence in AI for improving maternal health care.
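
For teams putting the first recommendation into practice, one concrete habit is to audit a model’s performance separately for each demographic group before it reaches the clinic. The sketch below is a minimal, hypothetical illustration of such an audit in Python: the group labels, outcomes, and predictions are invented placeholders, not real patient data or any specific hospital’s algorithm, and a real audit would use held-out clinical records and additional fairness metrics.

```python
# Minimal sketch of a subgroup audit for a hypothetical risk model.
# All values below are illustrative placeholders, not real patient data.
from collections import defaultdict

# Hypothetical held-out records: (demographic_group, truly_high_risk, model_flagged_high_risk)
records = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, True), ("group_b", True, True), ("group_b", False, True),
]

# Count, per group, the high-risk patients the model caught (true positives)
# and the ones it missed (false negatives).
caught = defaultdict(int)
missed = defaultdict(int)
for group, actual, predicted in records:
    if actual:
        if predicted:
            caught[group] += 1
        else:
            missed[group] += 1

# Recall (sensitivity) per group: the share of truly high-risk patients the model flags.
for group in sorted(set(caught) | set(missed)):
    total = caught[group] + missed[group]
    recall = caught[group] / total if total else float("nan")
    print(f"{group}: recall = {recall:.2f} ({caught[group]}/{total} high-risk patients flagged)")
```

Large gaps in recall between groups, as in this toy example, are exactly the kind of signal that should send developers back to their training data before a system is deployed.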