The everyday use of AI-assisted diagnostics shows promise and perils

While AI-enabled diagnosis may seem futuristic, a radiologist who says “AI is something I use every day” presented compelling examples of both its promise and its dangers at a recent National Academy of Medicine workshop.

AI-assisted analysis of diagnostic images has been commonplace for more than a decade. It “impacts every patient encounter I have,” says Dr. Jason Poff, a practicing radiologist in Greensboro, NC, and director of innovation implementation at Radiology Partners, whose own and affiliated practices read approximately 10% of all images nationally.

On the plus side, AI can “weave a story about something from a decade ago,” pulling disparate data from the patient record into a structured overview. It can detect abnormalities a radiologist may miss; for example, a 56-year-old woman with left-sided chest pain whose rib fracture the radiologist overlooked. And unlike human radiologists, who might stop after a certain number of findings in a complex case, the AI can flag the full range of possibilities.

But, Poff warned, “the benefits don’t happen automatically. Nothing is guaranteed here. We spend a lot of time diving into all the failure modes, the ways the AI can lead you astray.”

AI can produce both false positives, where radiologists sometimes have to override the AI to prevent unnecessary surgical procedures, and false negatives, for example overlooking an important finding that was not part of its training data. Diagnostic accuracy can also vary by condition.

Uncertainty “is something that AI struggles with all the time,” Poff added, tactfully omitting similar issues that can affect human doctors.

The key is how people interact with the AI. When evaluating a patient in real time, the question becomes, “How much should I trust this AI?” Poff suggested a series of warning lights indicating whether the patient’s potential diagnosis fell within an area the AI was trained for, possibly outside of it, or definitely outside of it.

Then of course there is the matter of money, as Dr. Yvonne Lui, associate professor of artificial intelligence in the Department of Radiology at NYU Langone, noted. AI tools can be expensive, and “the true benefits and costs to society are unknown,” she said. For example, when her group tried to use AI to reduce unnecessary recalls for additional images of patients scanned for possible breast cancer, the number of recalls – and the associated medical costs and patient anxiety – actually increased.

“We need to find the specific use cases that will benefit from these AI tools,” she said.

Similarly, Poff’s group tried using AI to detect pneumothoraxes (collapsed lungs). All the real cases the AI found had already been detected by radiologists, and the AI also produced false positives.

Despite the challenges, radiologists predicted that AI use would inevitably increase to keep pace with the overwhelming number of images ordered and read.

Perhaps most crucial to proper adoption is recent research showing how variable human-AI interaction can be. A study published in March in Nature Medicine found that AI improved the accuracy of some radiologists while hurting the performance of others. In the latter camp, some doctors who should have ignored the AI’s recommendations deferred to them, while others who could have benefited from those recommendations brushed them aside. Physicians’ differing levels of experience, expertise, and decision-making styles were key.

As a senior researcher put it in a press release from Harvard Medical School: “Our research reveals the nuanced and complex nature of machine-human interactions.”

The “machine” itself is also nuanced. In a brief overview of the evolution of AI from rule-based models to deep learning to large language models, Dr. Michael Powell, Chief Clinical Officer of Google Health, cautioned that “the real world is messy. The technical details are important. If you put different types of AI together, you may not get effectiveness or safety.”

But, he added, “there is an incredible opportunity. We know what the future will look like, we just don’t know whether it will happen in 10 years or 100 years.”