After OpenAI introduced ChatGPT in late 2022, talk of AI transforming medicine intensified, drawing comparisons to the fictional technology of "Star Trek." Yet efforts to use technology for clinical decision-making are nothing new.
The practice dates back centuries, to early innovators like Pierre-Charles-Alexandre Louis, who analysed structured patient data in post-revolutionary France. Louis's research revealed that leeching, a common treatment of that era, did more harm than good. His work laid the groundwork for drawing medical insights from large datasets, paving the way for today's AI developments.
By the 20th century, with the advent of computing, thinkers like George Bernard Shaw predicted a future where diagnosis would be automated. Early AI models were built on this vision.
In the 1950s, pioneers like Robert Ledley and Lee Lusted designed systems that attempted to automate diagnosis from patient symptoms. These early systems worked well in narrow domains, such as diagnosing appendicitis, but the grand vision of an AI doctor never materialised. Their shortcomings, combined with a poor understanding of how doctors actually think, led to what is now referred to as an "AI winter."
As research into clinical reasoning advanced, cognitive psychologists demonstrated that doctors often rely on quick, intuitive judgment (System 1 thinking) rather than slow, deliberate reasoning (System 2). AI systems designed to mimic the latter therefore fell short of replicating how physicians actually diagnose.
Fast forward to the early 2020s, and AI began to emerge from its winter. New machine-learning algorithms, including large language models (LLMs) like GPT-4, started gaining attention for their clinical potential. Initially, models like GPT-3 were seen as impressive for creative writing, but not as useful for clinical tasks. However, GPT-4 changed that perception.
Upon testing, it became clear that GPT-4 could handle complex medical cases, suggesting accurate diagnoses in a way that mimicked expert-level reasoning. Unlike earlier systems, it did not rely on the rigid, mechanical rules that had previously limited AI in healthcare.
Surprisingly, GPT-4 demonstrated an ability to offer accurate differential diagnoses, sometimes outperforming human physicians. It was capable of analysing a case and providing a list of potential diagnoses, even pinpointing correct diagnoses that some doctors, including myself, had initially missed. This emergent ability to make useful diagnostic suggestions, without additional medical training, marked a turning point in AI's role in healthcare.
A growing body of studies confirmed that GPT-4 and similar models could outperform humans in certain aspects of diagnostic reasoning. These AI systems excelled in recognising patterns across vast amounts of medical literature, providing real-time insights that could support—and sometimes improve—clinical decision-making.
Despite these breakthroughs, the core question remains: Can AI truly make medicine more human? While there is anxiety about AI's integration into clinical settings, there is also hope. AI has the potential to free doctors from excessive data management, allowing them to focus more on the uniquely human aspects of care—empathy, communication, and building trust with patients.
The history of technology in medicine makes clear that AI is not replacing doctors; rather, it can serve as an advanced tool that enhances their ability to make better-informed decisions. As the technology evolves, its role in medicine will likely expand, with the potential to make the field more human-centred than ever before.
Source: The Business Standard
Bd-pratidin English/ Afia