The impact of AI on medicine: development, implementation and use

As AI increasingly makes its way into our daily lives, there is no doubt that its impact on healthcare and medicine will affect everyone, regardless of whether they choose to use AI themselves. So how can we ensure that we implement AI responsibly, in a way that maximizes the benefits while minimizing the potential downsides?

At the 2024 SXSW Conference and Festivals, held in March, Dr. Jesse Ehrenfeld, president of the American Medical Association (AMA), spoke on the topic ‘AI, healthcare and the strange future of medicine’. In a follow-up interview on the AI Today podcast, Dr. Ehrenfeld elaborated on his talk and shared additional insights for this article.

Q: How do you see the impact of AI on medicine, and why did AMA recently release a set of AI principles?

Dr. Jesse Ehrenfeld: I’m a practicing physician, an anesthesiologist, and I saw patients earlier this week. I work in Milwaukee, Wisconsin, at the Medical College of Wisconsin, and have been in practice for about twenty years. I am the current president of the AMA, which is a household name and the largest, most influential group representing physicians across the country. Founded in 1847, it is the steward of the Code of Medical Ethics and provides many resources to help physicians practice medicine in America today. I am certified in both anesthesiology and clinical informatics, a relatively new specialty designation, and I am the first physician board-certified in clinical informatics to serve as president of the AMA. I also spent ten years in the Navy. Fundamentally, everything I do comes down to understanding how we can support the delivery of quality medical care to our patients, grounded in my own active practice.

It won’t surprise you, but doctors have been saddled with a lot of technology that was bad, that didn’t work, and that was a burden rather than an asset. We just don’t want that anymore, especially not with AI. That’s why the AMA released a set of principles for the development, deployment and use of AI in November 2023, in response to concerns heard from both physicians and the public.

The public has many questions about these AI systems. What do they mean? How can they trust them? Security, all that stuff. Our principles guide all of our work and our engagement with the federal government, Congress, and industry around how we put these technologies to work as they are developed, deployed and ultimately used in the healthcare delivery system.

We have been working on AI policy since 2018, but in the latest version we call for a comprehensive government approach to AI. We need to reduce risk to patients and maximize benefit. These principles came out of a lot of work bringing together subject matter experts, physicians, computer scientists and national specialty groups, and there’s a lot in them.

Q: Can you provide an overview of these AI principles?

Dr. Jesse Ehrenfeld: Above all, we want to ensure that AI in healthcare is designed, developed and deployed in a way that is ethical, fair, responsible and transparent. Our view is that compliance with a national governance policy is necessary to develop AI in an ethical and responsible manner. Voluntary agreements and voluntary compliance are not enough. We need regulation, and we need to take a risk-based approach: the level of oversight and validation should be proportionate to the potential harm or impact that an AI system could cause. Using AI to support a diagnosis may require a different level of oversight than using it to plan an operation.

We’ve done a lot of research with physicians across the country to understand what’s happening in practice today as these technologies are increasingly adopted. The results are exciting, but they should probably also serve as a warning to developers and regulators. Doctors are generally very excited about the potential of AI in healthcare: 65% of US physicians in a nationally representative sample see some benefit in using AI in their practice, whether helping with documentation, assisting with translation of documents, assisting with diagnoses, or removing administrative burdens through automation of tasks such as prior authorization.

But they also have concerns. 41% of doctors say they are as excited about AI as they are concerned, and there are additional worries about patient privacy and the impact on the patient-physician relationship. Ultimately, we want safe and reliable products on the market. That is how we will gain the trust of doctors and consumers, and of course all our work to support the development of high-quality, clinically validated AI comes back to these principles.

Q: What data and health privacy concerns are you seeing?

Dr. Jesse Ehrenfeld: What I see from patients and consumers are more questions than answers about data and AI. For example, what does a healthcare app do with my data? Where does the data go? Who can use or share that information? And unfortunately, the federal government hasn’t really ensured that there is transparency about where your data goes. The worst example of this is a company or developer labeling an app as “HIPAA compliant.” In the average person’s eyes, “HIPAA compliant” means their data is safe, private and secure. Well, HIPAA only applies to covered entities, and most consumer applications are not covered entities. So saying you are “HIPAA compliant” when you are not covered by HIPAA at all is completely misleading, and we simply should not allow that to happen.

There is also a lot of concern about where health data is going, and that obviously includes the use of AI with patients. 94% of patients tell us they want strong laws governing the use of their healthcare data. Patients are hesitant to use digital tools if they do not understand the privacy considerations surrounding them. There is a lot we need to do in terms of regulation. But there is also a lot that AI developers can do, even if not legally required, to strengthen trust in the use of AI data.

Choose your favorite big tech company. Do you trust them with your healthcare data? What if there is a data breach? Would you upload a sensitive photo of a body part to their server so that it can provide you with information about possible conditions you may be concerned about? What do you do if there is a problem? Who do you call? So I think there should be more transparency about where collected data is going, and about how you can opt out of the pooling and sharing of your data.

Unfortunately, HIPAA doesn’t solve any of this. Many of these applications are not even covered by HIPAA. More needs to be done to ensure the security and privacy of healthcare data.

Q: Where and how do you see AI having the most positive impact on healthcare and medicine?

Dr. Jesse Ehrenfeld: We need to use technologies like AI, and we will need to embrace them if we want to solve the healthcare workforce crisis. This is a global problem; it is not limited to the US. 83 million Americans do not have access to primary care, and we simply don’t have enough doctors in America today. We could never open enough medical schools and residency programs to meet demand if we continue to operate and provide care in the same ways.

When we talk about AI from an AMA lens, we actually like to use the term augmented intelligence, not artificial intelligence, because it comes back to the fundamental principle that these tools should be just that: tools to enhance the capabilities of our healthcare teams, physicians, nurses and everyone involved, so they can be more effective and efficient in delivering care. What we need, however, are platforms. Right now we have a lot of one-off solutions that don’t integrate with each other, and I think we’re starting to see companies move quickly in that direction. It’s clear that we want this to happen in the medical world.

We pursue many different routes to ensure we have a voice at the table during the design and development process. We have our Physician Innovation Network, a free online platform that brings physicians and entrepreneurs together to drive change and innovation and bring better products to market. Companies are looking for clinical input, and doctors want to connect with entrepreneurs. We also have a technology incubator in Silicon Valley called Health2047, which has spun off a dozen companies, driven by the insights we as physicians have at the AMA.

Ultimately, it comes down to having a regulatory framework that ensures only clinically validated products are brought to market. And we need to ensure that these tools truly deliver on their promise and that they are an asset, not a burden.

I don’t think AI will replace doctors, but I do think doctors who use AI will replace those who don’t. AI products have enormous potential to alleviate the administrative burden experienced by physicians and practices, and ultimately I expect a lot of success in using AI directly in patient care. There’s a lot of excitement there, but we need to make sure we have tools and technologies that can address challenges around racial bias, errors that could cause harm, security and privacy concerns, and threats to health information. Physicians need to understand how to manage these risks and the associated liability before we come to rely on more and more of these tools.

(Disclosure: I co-host the AI Today podcast.)