VIENNA — At the European Respiratory Society (ERS) 2024 Congress, experts discussed the benefits and risks of artificial intelligence (AI) in medicine and explored ethical implications and practical challenges.
With over 600 AI-enabled medical devices registered with the US Food and Drug Administration since 2020, AI is rapidly pervading healthcare systems. But like any other medical device, AI tools must be thoroughly assessed and follow strict regulations.
Joshua Hatherley, PhD, a postdoctoral fellow at the School of Philosophy and History of Ideas at Aarhus University in Denmark, said the traditional bioethical principles — autonomy, beneficence, nonmaleficence, and justice — remain a crucial framework for assessing the ethics of using AI tools in medicine. However, he said the emerging fifth principle of “explainability” has gained attention due to the unique characteristics of AI systems.
“Everyone is excited about AI right now, but there are many open questions about how much we can trust it and to what extent we can use it,” Ana Catalina Hernandez Padilla, a clinical researcher at the Université de Limoges, France, told Medscape Medical News.
Joseph Alderman, MBChB, an AI and digital health clinical research fellow at the Institute of Inflammation and Ageing at the University of Birmingham, UK, said these are undoubtedly exciting times to work in AI and health, but he believes clinicians should be “part of the story” and advocate for AI that is safe, effective, and equitable.
The Pros
Alderman said AI has huge potential to improve healthcare and patients’ experiences.
One interesting area in which AI is being applied is the informed consent process. Conversational AI models, like large language models, can provide patients with a time-unlimited platform to discuss risks, benefits, and recommendations, potentially improving understanding and patient engagement. AI systems can also predict the preferences of noncommunicative patients by analyzing their social media and medical data, which may improve surrogate decision-making and ensure treatment aligns with patient preferences, Hatherley explained.
Another significant benefit is AI’s capacity to improve patient outcomes through better resource allocation. For example, AI can help optimize the allocation of hospital beds, leading to more efficient use of resources and improved patient health outcomes.
AI systems can reduce medical errors and enhance diagnosis or treatment plans through large-scale data analysis, leading to faster and more accurate decision-making. They can also handle administrative tasks, reducing clinician burnout and allowing healthcare professionals to focus more on patient care.
AI also promises to advance health equity by improving access to quality care in underserved areas. In rural hospitals or developing countries, AI can help fill gaps in clinical expertise, potentially leveling the playing field in access to healthcare.
The Cons
Despite its potential, AI in medicine presents several risks that require careful ethical considerations. One primary concern is the possibility of embedded bias in AI systems.
For example, advice from an AI agent may prioritize certain outcomes, such as survival, based on broad standards rather than unique patient values, potentially misaligning with the preferences of patients who value quality of life over longevity. “That may interfere with patients’ autonomous decisions,” Hatherley said.
AI systems also have limited generalizability. Models trained on a specific patient population may perform poorly when applied to different groups due to changes in demographic or clinical characteristics. This can result in less accurate or inappropriate recommendations in real-world settings. “These technologies work on the very narrow population on which the tool was developed but might not necessarily work in the real world,” said Alderman.
Another significant risk is algorithmic bias, which can worsen health disparities. AI models trained on biased datasets may perpetuate or exacerbate existing inequities in healthcare delivery, leading to suboptimal care for marginalized populations. “We have evidence of algorithms directly discriminating against people with certain characteristics,” Alderman said.
AI’s Black Box
AI systems, particularly those utilizing deep learning, often function as “black boxes,” meaning their internal decision-making processes are opaque and difficult to interpret. Hatherley said this lack of transparency raises significant concerns about trust and accountability in clinical decision-making.
While explainable AI methods have been developed to offer insights into how these systems generate their recommendations, these explanations frequently fail to capture the reasoning process entirely. Hatherley explained that this is similar to using a pharmaceutical medicine without a clear understanding of the mechanisms by which it works.
This opacity in AI decision-making can lead to mistrust among clinicians and patients, limiting its effective use in healthcare. “We don’t really know how to interpret the information it provides,” Hernandez said.
She said while younger clinicians might be more open to testing the waters with AI tools, older practitioners still prefer to trust their own senses while looking at a patient as a whole and observing the evolution of their disease. “They are not just ticking boxes. They interpret all these variables together to make a medical decision,” she said.
“I am really optimistic about the future of AI,” Hatherley concluded. “There are still many challenges to overcome, but, ultimately, it’s not enough to talk about how AI should be adapted to human beings. We also need to talk about how humans should adapt to AI.”
Hatherley, Alderman, and Hernandez have reported no relevant financial relationships.
Manuela Callari is a freelance science journalist specializing in human and planetary health. Her words have been published in The Medical Republic, Rare Disease Advisor, The Guardian, MIT Technology Review, and others.