As AI increasingly permeates our daily lives, there is no doubt that its impact on healthcare and medicine will affect everyone, whether or not they choose to use AI themselves. So how can we ensure that we implement AI responsibly, maximizing its benefits while minimizing its potential downsides?
At the 2024 SXSW Conference and Festival, held in March, Dr. Jesse Ehrenfeld, President of the American Medical Association (AMA), spoke on the topic “AI, Health Care, and the Strange Future of Medicine”. In a subsequent interview for the AI Today podcast, Dr. Ehrenfeld expanded on his talk and shared additional insights for this article.
Q: How do you see the medical implications of AI and why did the AMA recently release a set of AI principles?
Dr. Jesse Ehrenfeld: I’m a practicing physician, an anesthesiologist, and I actually saw a bunch of patients earlier this week. I work in Milwaukee, WI, at the Medical College of Wisconsin, and have been practicing for about 20 years. I am the current president of the AMA, which is a household name and the largest, most influential group representing physicians across the nation. Founded in 1847, the AMA is the purveyor of the Code of Medical Ethics and many other resources that help doctors practice health care in America today. I am board certified in both anesthesiology and clinical informatics, a relatively new specialty designation, and I am the first AMA president board certified in informatics. I also spent ten years in the Navy. Basically, everything I do is based on an understanding of how we can support the delivery of high-quality medical care to our patients, informed by my work and active practice.
It won’t surprise you, but doctors have been saddled with a lot of technology that just sucked, didn’t work, and was a burden rather than an asset. We just don’t want that anymore, especially with artificial intelligence. So in November 2023 the AMA released a set of principles for the development, deployment, and use of artificial intelligence, which came in response to concerns we’re hearing from both doctors and the public.
The public has many questions about these AI systems. What do they do? Can they be trusted? What about security? All of it. Our principles guide all of our work, including our engagement with the federal government, Congress, the administration, and industry, on how we regulate these technologies so that they work as they are developed, deployed, and ultimately used in the care delivery system.
We’ve been working on AI policy since 2018, but in the latest iteration we’re calling for a whole-of-government approach to AI. We need to make sure we mitigate the risks to patients and maximize the utility. These principles came from a lot of work bringing together subject matter experts, doctors, informaticists, and national specialty groups, and there’s a lot to them.
Q: Can you give an overview of these AI principles?
Dr. Jesse Ehrenfeld: Above all, we want to ensure that healthcare AI is designed, developed, and deployed in an ethical, fair, responsible, and transparent manner. Our perspective is that compliance with a national governance policy is essential to developing artificial intelligence ethically and responsibly. Voluntary agreements and voluntary compliance are not enough. We need regulation, and we should take a risk-based approach: the level of control, oversight, and validation should be commensurate with the potential for harm or the consequences that an AI system may introduce. So using AI to support diagnostics, versus a programming function, might require a different level of oversight.
We’ve done a lot of surveying of doctors across the country to understand what’s happening in practice today as use of these technologies increases. The results of our research are exciting, but they should probably also serve as a warning to developers and regulators. Doctors in general are very enthusiastic about the potential of artificial intelligence in healthcare: 65% of US physicians in a nationally representative sample see some benefit in using AI in their practice, helping with documentation, translating documents, helping with diagnoses, and relieving administrative burdens through automation, such as prior authorization.
But they also have concerns. 41% of doctors say they are just as excited about AI as they are scared of it. And there are additional concerns about patient privacy and the impact on the doctor-patient relationship. At the end of the day, we want safe and reliable products on the market. That’s how we’ll earn the trust of doctors and consumers, and obviously all our work to support the development of high-quality, clinically validated AI goes back to those principles.
Q: What are some of these health data and privacy concerns that you focus on?
Dr. Jesse Ehrenfeld: What I see from patients and consumers are more questions than answers about data and AI. For example, with a healthcare application: What does it do? Where does the data go? Can it use or share this information? And unfortunately, the federal government hasn’t really made sure that there’s transparency around where your data goes. The worst example of this is a company or developer labeling an app as “HIPAA Compliant”. In the average person’s mind, “HIPAA Compliant” implies that their data is safe, private, and secure. But apps are not covered entities under HIPAA, and HIPAA only applies to covered entities. So saying you are “HIPAA Compliant” when you are not covered by HIPAA is completely misleading, and we simply must not allow it.
There is also a lot of concern about where health data is going, and this obviously extends to the use of AI with patients. 94% of patients say they want strong laws to govern the use of their healthcare data, and patients are reluctant to use digital tools if they do not understand the privacy factors surrounding them. There is a lot to do in the regulatory space. But there is also much that AI developers can do, even if not required by law, to foster trust in the use of AI and data.
Choose your favorite big tech company. Do you trust them with your healthcare data? What if there is a data breach? Would you upload a sensitive photo of a body part to their server so they can give you information about possible conditions that may concern you? What do you do when there is a problem? Who do you call? So I think there should be more transparency about where collected data goes, how you can opt out of having your data collected and shared, and so on.
Unfortunately, HIPAA doesn’t solve all of this. In fact, many of these applications are not covered by HIPAA. More needs to be done to guarantee the security and privacy of healthcare data.
Q: Where and how do you see AI having the most positive impact on healthcare and medicine?
Dr. Jesse Ehrenfeld: We need to use technologies like artificial intelligence, and we should embrace them, if we are going to solve the workforce crisis that exists in healthcare today. This is a global problem, not one limited to the US. 83 million Americans do not have access to primary care, and we don’t have enough doctors in America today. We could never open enough medical schools and residencies to meet these demands if we continue to work and provide care in the same way.
When we talk about AI from an AMA lens, we actually like to use the term augmented intelligence, not artificial intelligence. It goes back to that fundamental principle that tools should be just that: tools to enhance the capabilities of healthcare teams, doctors, nurses, everyone involved, to be more effective and more efficient in delivering care. What we need, however, are platforms. Right now, we have a lot of standalone solutions that don’t mesh together, and that’s a direction I think we’re starting to see companies move in quickly. Obviously, we look forward to this happening in the medical field.
We pursue many different paths to ensure we have a voice at the table throughout the design and development process. We have our Physician Innovation Network, a free online platform that brings together physicians and entrepreneurs to help drive change and innovation and bring better products to market; companies are looking for clinical insight, and clinicians are looking to connect with entrepreneurs. We also have a technology incubator in Silicon Valley called Health2047. About a dozen companies have been spun out of it, powered by the insights we have as physicians at the AMA.
At the end of the day, we need a regulatory framework that ensures only clinically validated products are brought to market. And we need to ensure that these tools really live up to their promise and are an asset, not a burden.
I don’t think AI will replace doctors, but I do think doctors who use AI will replace those who don’t. AI products have enormous potential and promise to ease the administrative burdens faced by physicians and practices, and ultimately I expect there will be a lot of success in ways we directly use AI in patient care. There is a lot of excitement, but we obviously need to make sure that our tools and technologies address challenges around racial bias, mistakes that can cause harm, security and privacy issues, and threats to health information. Physicians need to understand how to manage these risks and the associated liability before we come to rely on these tools more and more.
(disclosure: I co-host the AI Today podcast)