Generative artificial intelligence has taken the world by storm. At the snap of a finger, or a few clacks on the keyboard, anyone with internet access can conjure up academic essays, legal documents, computer code, and even works of art and videos. These technologies are seeping into the media, law, finance and even education sectors. But if we can get generative AI right, it has the power to transform the lives of millions of people – in healthcare.
Ageing populations, unhealthy modern lifestyles, the lingering effects of the Covid-19 pandemic, and the potential threat of other zoonotic diseases such as bird flu are overwhelming healthcare systems globally. Throw in ever-increasing reports of burnout among medical providers and workforce shortages, and we have a compelling case for an AI-powered healthcare revolution.
Currently, AI is being deployed across different areas of medical research and healthcare. A famous example is DeepMind’s AlphaFold, which has been lauded as a breakthrough in computational biology. By predicting the structure of proteins with incredible accuracy, AlphaFold has aided the discovery and development of new drugs.
In clinical practice, diagnostic AIs trained to inspect medical images can help doctors spot conditions more quickly, improving patient outcomes while also reducing the workload of healthcare professionals. A Microsoft collaboration with researchers at the University of Cambridge, for example, has yielded Osairis, an AI tool that can help doctors prepare radiotherapy images for analysis in just a few minutes.
Beyond driving progress, medical AI is becoming more lucrative. The global medical AI market was estimated to be worth $19.27 billion in 2023 and is projected to jump nearly 10-fold to $187.7 billion by the end of the decade. According to a report from International Data Corporation and Microsoft, just under 80 per cent of healthcare organizations in the US already report using AI technology.
Among those currently leading the way are the Silicon Valley tech giants. Microsoft has announced a spate of new AI programs and partnerships with healthcare organizations. One example is ‘AI for Health,’ which aims to support nonprofits and researchers working on global health challenges by providing AI tools and expertise in population health, imaging analytics, genomics, and proteomics.
Last year, Amazon Web Services launched HealthScribe, a HIPAA-eligible service that lets healthcare software providers build clinical applications that use speech recognition and generative AI to draft documentation for clinicians. Similarly, Google’s Med-PaLM was the first large language model to exceed the pass mark on US medical licensing exam-style questions.
While all this progress and investment is promising, we need to make sure that tech companies don’t inadvertently cause harm.
Foundation models – machine learning models trained on broad, largely unlabeled data – form the basis of many of these generative AIs. They can perform a wide variety of tasks, such as understanding language, generating text and images, and conversing in natural language.
But training AI on unrepresentative or small amounts of data can introduce bias. You may already be familiar with the bias that is rife in tools such as OpenAI’s ChatGPT and DALL-E, which can unwittingly spew out racist or sexist responses or images. Similar prejudices against disadvantaged groups, such as the working class and People of Color, are also widespread in healthcare, so it is critical that new medical AIs do not exacerbate them.
So, how can we solve this? The best solution is to ensure that AI developers train their algorithms on medical datasets that are as large and as diverse as possible. However, accessing enough high-quality data is another challenge entirely. Navigating the regulatory and ethical requirements of different medical data providers across many different countries, as well as safeguarding patient privacy, is a mammoth task that requires extra resources and expertise.
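To make the idea of dataset diversity a little more concrete, here is a minimal, purely illustrative Python sketch of the kind of representativeness check a developer might run on training metadata before building a model. The file name, the column names, and the 10 per cent threshold are hypothetical assumptions for illustration, not a description of any particular company’s data or tooling.

```python
# Illustrative sketch: audit a (hypothetical) medical imaging metadata export
# for demographic representation before training. Column names and the
# threshold are assumptions, not a reference to any real vendor's dataset.
from collections import Counter
import csv


def representation_report(path, column, min_share=0.10):
    """Return each group's share of rows in `column`, flagging groups below `min_share`."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    counts = Counter(row[column] for row in rows if row.get(column))
    total = sum(counts.values()) or 1  # avoid division by zero on an empty file
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = (share, "UNDER-REPRESENTED" if share < min_share else "ok")
    return report


if __name__ == "__main__":
    # e.g. a metadata export with one row per imaging study and a 'patient_sex' column
    for group, (share, flag) in representation_report("studies.csv", "patient_sex").items():
        print(f"{group}: {share:.1%} {flag}")
```

In practice, such checks would span many more attributes – age, ethnicity, imaging equipment, care setting – and the hard part is assembling data broad enough to pass them, which is exactly where the regulatory and privacy hurdles above come in.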
That’s why I believe it is as important to invest in data providers, who can organize and aggregate sensitive and high-quality medical information, as it is to invest in the companies that use them.
The potential of medical AI is tantalizing, but it is ultimately up to us to develop and implement these technologies responsibly. In my view, this is an opportunity to pave the way to better lives for millions around the world, while leaving the vestiges of medical discrimination behind.
Editor’s Note: The author has no financial relationship with any of the companies / products mentioned.
Joshua Miller is the CEO and co-founder of Gradient Health and holds a BS/BSE in Computer Science and Electrical Engineering from Duke University. He has spent his career building companies, first founding FarmShots, a Y Combinator-backed startup that grew to an international presence and was acquired by Syngenta in 2018. He has since served on the boards of a number of companies and made angel investments in more than 10 companies across envirotech, medicine, and fintech.
This post appears through the MedCity Influencers program.