
Introduction
As an experienced clinician, my delivery of patient care has been dramatically enhanced by technology. The advent of Artificial Intelligence (AI) towards the latter part of my career is exciting. But I also have considerable experience in clinical safety, particularly as it applies to software products and solutions, and I believe that a careful examination of the pros and cons of AI is required. This article is an exploration of the current position.
Yes! No! Wait! (sorry)
I am a keen follower of cricket. Those who have played and watched will understand that confusion between batsmen when running between the wickets is disastrous.
The current situation, with proponents (a confident “Yes!”), opposition (a “No!”, even if the military must be deployed to enforce it) and those asking for a moratorium (“Wait!”), leaves us likely to be “run out”. We then find ourselves back in the pavilion, playing no further part in the match.
Several industry leaders and analysts have underscored the inevitability of Artificial Intelligence’s progression and impact on various sectors, including healthcare.

May be Unstoppable
Mo Gawdat, former Chief Business Officer at Google X, argues that AI’s development is beyond the point of reversal. He stresses that resisting AI advancements is not an option: AI will continue to evolve and reshape industries, driven by global investment and technological innovation. Gawdat likens AI’s progression to a “hurricane” that cannot be halted but must be managed carefully to ensure ethical outcomes. I think other metaphors may also apply, such as having “a tiger by the tail” or “surfing a massive wave”. I have to say that the latter is more appealing: it implies that, with skill and fitness, we can survive and have an exhilarating experience!
Whilst outside of healthcare, Lloyds Bank highlights the rapid rise of generative AI and identifies its transformative potential in business operations. The bank notes that the technology is now “unstoppable”, emphasising that businesses must integrate AI into their strategies or risk falling behind in the competitive landscape. As in banking, this must apply to healthcare, where competition remains a driving force in many systems.
Both sources align in recognising that AI is not just a technological trend but a force that will redefine industries, including healthcare, over the coming years. However, with this inevitability come significant concerns, such as ethical challenges, transparency issues, and the potential for bias; key issues that Gawdat and other thought leaders suggest must be addressed through responsible development and regulation.
Stop

As a counter to this, there are prominent voices who advocate halting, or at least pausing, AI development. They express particular concern regarding advanced AI systems.
Eliezer Yudkowsky, a prominent AI theorist, has been one of the more extreme proponents of stopping AI development. He argues that highly advanced AI systems could pose existential threats to humanity if they surpass human intelligence, with the potential for catastrophic outcomes. Yudkowsky advocates a complete halt to the development of powerful AI systems, enforcing such a ban with military intervention if necessary, in order to prevent rogue actors from developing advanced AI systems unchecked. He believes that current safeguards are insufficient, and that humanity risks losing control over AI systems as they evolve beyond our ability to regulate them.
Pause

For those in the “wait” camp, an open letter signed by over 1,200 technologists, including Elon Musk and Steve Wozniak, called for a six-month moratorium on the development of AI systems more powerful than GPT-4.
The letter argues that AI labs are in an “out-of-control race”, developing increasingly complex AI models without adequately considering the risks. The signatories emphasise the need for shared safety protocols and more robust AI governance before further AI advancements are made.
The letter reflects growing concerns about AI’s potential to disrupt labour markets, spread misinformation, and create undetectable biases that could exacerbate social inequalities.
Collateral Damage

Critics argue that the pace of AI development should be slowed until broader societal and environmental impacts are better understood. They express concerns about the economic and social inequalities that AI could exacerbate.
AI’s ability to automate jobs could widen the wealth gap. The wealthiest might benefit disproportionately from AI-driven economies, leaving much of the population behind.

Furthermore, AI is criticised for its environmental impact, as its energy consumption could soon surpass that of many smaller countries.
These arguments for halting or pausing AI reflect fears that, without sufficient oversight, advanced AI systems could have unpredictable and potentially disastrous consequences. Many also acknowledge, however, the complexity of enforcing such a halt, given the competitive race among global powers and the massive investments already poured into AI research.
BUT LOOK AT THE PRIZE

The integration of artificial intelligence (AI) into healthcare offers transformative potential for both patients and clinicians, promising to reshape diagnostics, treatment, and the overall delivery of care. In the near term, AI’s capabilities are expected to enhance efficiency and accuracy in clinical decision-making, while in the longer term, it may revolutionise personalised medicine and population health management.
Near-Term Benefits

In the short term, AI is already providing tangible benefits, particularly in areas like medical imaging and diagnostics. AI algorithms can analyse scans such as X-rays, MRIs, and CTs with remarkable speed and precision, often identifying early signs of diseases such as cancer or neurological conditions that might be missed by the human eye. For example, AI-driven imaging tools have demonstrated accuracy levels comparable to, or in certain cases even exceeding, those of human radiologists. This allows clinicians to make faster, more accurate diagnoses, enabling early intervention and potentially improving patient outcomes.
AI is also being used to streamline administrative tasks, reducing the time clinicians spend on paperwork and enabling them to focus more on direct patient care. For instance, AI-powered systems can automate the handling of patient records, billing, and appointment scheduling. This helps alleviate clinician burnout, a growing concern in many healthcare systems.
More Distant Future Potential

Looking further ahead, AI holds the promise of driving the next frontier in personalised medicine. It has the capability to leverage vast datasets from electronic health records, genomics, and wearable devices, creating the potential to offer tailored treatment plans based on a patient’s unique genetic makeup, lifestyle, and real-time health metrics. In turn, this could lead to more effective, individualised treatments, improving long-term outcomes and reducing the trial-and-error nature of some current medical therapies.
AI-supported clinical decision support (CDS) tools can transform healthcare by assisting clinicians in making more accurate and efficient decisions. These systems use algorithms to analyse vast amounts of patient data, medical records, and clinical guidelines to provide real-time insights at the point of care. By leveraging AI, CDS can identify patterns, suggest diagnoses, recommend treatments, and even predict potential complications. This not only helps reduce human error but also improves patient outcomes by providing clinicians with evidence-based recommendations tailored to individual patient profiles. However, it is essential for clinicians to critically evaluate AI-generated suggestions to avoid over-reliance, particularly given risks like “AI hallucinations” where systems may produce inaccurate or misleading information.
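To make the mechanism a little more concrete, below is a deliberately minimal, rule-based sketch in Python of how a CDS alert might be raised at the point of care. The fields, thresholds, and alert wording are illustrative assumptions only, not validated clinical rules; real systems draw on far richer data and, increasingly, on learned models.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    """Point-of-care observations (illustrative fields only)."""
    heart_rate: int          # beats per minute
    systolic_bp: int         # mmHg
    spo2: float              # oxygen saturation, %
    on_anticoagulant: bool   # from the medication record

def cds_alerts(v: Vitals) -> list[str]:
    """Return advisory alerts. Thresholds are invented for illustration,
    not validated clinical values."""
    alerts = []
    if v.heart_rate > 130:
        alerts.append("Marked tachycardia: review patient urgently.")
    if v.systolic_bp < 90:
        alerts.append("Hypotension: consider fluid/sepsis assessment.")
    if v.spo2 < 92.0:
        alerts.append("Low oxygen saturation: check airway and oxygen.")
    if v.on_anticoagulant and v.systolic_bp < 90:
        alerts.append("Anticoagulated and hypotensive: consider occult bleeding.")
    return alerts

# The clinician remains the decision-maker: alerts are prompts, not orders.
for alert in cds_alerts(Vitals(heart_rate=140, systolic_bp=85,
                               spo2=95.0, on_anticoagulant=True)):
    print(alert)
```

Even this trivial example shows why oversight matters: the value of every alert depends entirely on the quality of the rules and data behind it.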
Additionally, AI could enable predictive analytics, identifying patients at risk of certain diseases before symptoms appear. AI models may predict the onset of chronic conditions by analysing trends in patient data. This may allow for preventative measures to be initiated that reduce hospitalisations and improve quality of life. On a population level, AI could also contribute to managing public health challenges. It can potentially predict disease outbreaks or track epidemiological trends, helping healthcare systems respond more proactively.
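As a sketch of what “predictive analytics” can mean in practice, the toy example below trains a logistic regression on an entirely synthetic cohort and flags a hypothetical patient for preventative follow-up. Every feature, coefficient, and threshold here is invented for illustration; a real model would be trained and validated on genuine, properly governed clinical data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic cohort: age (years), BMI, systolic BP (mmHg) -- invented values.
n = 1000
X = np.column_stack([
    rng.normal(55, 12, n),   # age
    rng.normal(27, 4, n),    # BMI
    rng.normal(130, 15, n),  # systolic blood pressure
])

# Fabricated outcome: risk rises with age and blood pressure (a toy
# generative rule, not an epidemiological claim).
logit = 0.04 * (X[:, 0] - 55) + 0.03 * (X[:, 2] - 130) - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)

# Score a new, hypothetical patient and flag them for preventative review.
patient = np.array([[68, 31, 150]])
risk = model.predict_proba(patient)[0, 1]
print(f"Predicted risk: {risk:.0%}")
if risk > 0.3:  # threshold chosen arbitrarily for this sketch
    print("Flag for preventative follow-up.")
```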
Addressing the Challenge

The future of AI in healthcare looks incredibly promising to many, offering enhanced diagnostic precision, personalised treatments, and predictive analytics. However, concerns around transparency, data security, and human oversight need to be carefully managed. We must ensure AI is implemented in a way that benefits clinicians and patients alike.
Transparency
Alongside its benefits, the arrival of AI in healthcare has prompted significant ethical and practical concerns. A primary issue is transparency: many AI systems, particularly those using deep learning, operate as “black boxes”, making it difficult to explain how decisions are made. This lack of explainability raises questions about accountability, particularly in high-stakes clinical settings where patient outcomes depend on AI-driven recommendations.
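One way to see what “explainability” means in practice: a simple model such as a logistic regression can state directly how each input moves its prediction, whereas a deep network’s millions of entangled weights offer no such direct reading. The sketch below, on invented data, prints that reading.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data: the outcome is driven almost entirely by the first feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.2 * rng.normal(size=500)) > 0

model = LogisticRegression().fit(X, y)

# An interpretable model can be "opened": each coefficient says how a
# feature shifts the predicted log-odds. Deep "black box" models cannot
# be read off this way, which is the transparency concern discussed above.
for name, coef in zip(["feature_0", "feature_1"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```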
Data Privacy
Data privacy represents another concern as AI relies on vast amounts of patient data. Ensuring this data is securely stored and ethically used is a top priority for healthcare providers. Fears around data breaches and misuse of sensitive health information are real, particularly as AI systems are increasingly integrated with cloud-based platforms.
Loss of clinical judgement
There is also a broader concern about human oversight. While AI can assist in decision-making, there are worries about clinicians becoming overly reliant on algorithms, potentially compromising their own clinical judgement. Ensuring that AI serves as an augmentation tool, rather than replacing human decision-making, is critical to maintaining trust between clinicians and patients.
Loss of humanity and the human touch
Alongside any AI usage, human input into clinical care is essential in ensuring that patients receive holistic, personalised treatment that extends beyond their physical symptoms.
As most General Practitioners know, and most aspirant GPs articulate in professional examinations, an essential part of providing care is actively listening to patients. We must understand their concerns, ideas, and expectations. Understanding their worries and anxieties, while also acknowledging their expectations of care, helps build trust and ensures that treatments align with the patient’s goals and preferences. This empathic and communicative approach helps clinicians not only address immediate health issues but also support the patient’s broader well-being.
Clinicians must care for individuals by considering their physiological, psychological, and social contexts—a core tenet of patient-centred care. Physiologically, clinicians evaluate patients’ health through diagnosis and treatment plans tailored to their unique conditions. Psychologically, understanding a patient’s emotional well-being is key, as mental health can significantly impact recovery and overall health outcomes. Socially, clinicians consider factors like the patient’s family support, work environment, and socioeconomic status, all of which influence their ability to adhere to treatment plans and maintain long-term well-being. This sophisticated equation and formulation of care planning is unlikely to be delivered solely by technology although might well benefit from AI input.
Another important element of clinical care is touching the patient. Obviously this needs to be both appropriate and carried out with consent, but it is a significant element of the clinical process. The “laying on of hands” remains an important part of clinical practice and is reassuring to patients.
Regulation
When discussing the challenges of regulating AI as a medical device, it’s essential to look at not only the approaches taken by the EU and the UK, but also the evolving regulatory landscape in the US. Key thought leaders such as John Halamka, president of the Mayo Clinic Platform, and Eric Schmidt, former Google CEO, have provided valuable insights into the complexities and potential drawbacks of current regulations in these regions.
Different Approaches to Surfing the AI Wave
EU and UK Approaches
The EU’s AI Act classifies medical devices incorporating AI into a “high-risk” category, placing additional regulatory requirements on manufacturers, such as transparency, data governance, and human oversight. The EU’s Medical Device Regulation (MDR) already imposes stringent obligations on medical devices, but the new AI regulations require further layers of compliance, especially concerning algorithm transparency and error-tracking.
Critics like Eric Schmidt (former Google CEO and then Chair) argue that these transparency rules could stifle innovation, particularly for advanced machine learning systems that cannot fully explain their decision-making processes. While the EU emphasises protection of fundamental rights, the extensive conformity assessments and the complexity of its framework could delay product releases and make it harder for companies, particularly smaller companies, to innovate in the MedTech space.
In contrast, the UK’s MHRA is taking a more agile approach, attempting to balance safety with the need to foster technological development. The UK framework is more flexible in its handling of continuous learning algorithms (AI systems that evolve as they process more data), seeking to ensure regulatory requirements don’t overly constrain innovation. The MHRA’s proposals are aimed at supporting real-world testing and clinical integration of AI, with an emphasis on healthcare-specific risk management.
US Approaches
In the United States, the Food and Drug Administration (FDA) is actively working to regulate AI and machine learning-based software as a medical device (SaMD). Unlike the EU, which has a top-down regulatory framework, the US is gradually updating its regulations through pilot programmes and industry engagement.
The FDA’s Digital Health Innovation Action Plan includes a pre-certification programme that evaluates software developers’ processes and culture of quality, rather than focusing solely on individual products. This more dynamic approach is designed to enable companies to bring AI-driven solutions to market more rapidly, while maintaining safety and efficacy.
The FDA is also focused on adaptive AI systems, recognising that these systems will evolve as they learn from new data. This stands in contrast to the more rigid frameworks in the EU, which require a high degree of transparency and explainability from AI models. The US approach is more permissive, allowing for adaptive learning as long as developers demonstrate effective oversight mechanisms.
Insights from Thought Leaders: John Halamka and Eric Schmidt
John Halamka, a leading voice in healthcare technology, has often pointed out the need for global harmonisation in regulating AI-based medical devices. He advocates for regulatory frameworks that can adapt to the rapid evolution of AI while still ensuring patient safety. Halamka has highlighted that overly rigid frameworks, such as the one proposed in the EU, could stifle innovation, particularly when dealing with machine learning systems that adapt over time. He believes regulators should focus on outcome-based evaluations, where the performance and safety of AI systems are assessed in “real-world” environments rather than imposing burdensome pre-market regulations.
Eric Schmidt has been a vocal critic of the EU’s approach, arguing that the transparency requirements could set Europe back significantly in the AI race. He contends that AI systems, especially those involving deep learning, often cannot explain their decision-making processes in a way that meets these transparency mandates. Schmidt also stresses that the EU’s emphasis on regulation before innovation is a mistake, pointing out that this could hinder Europe’s ability to compete with the US and China, both of which are more focused on innovation-driven AI development.
Other Views: Meet the Experts
The preparation for this article identified an array of global experts on AI in healthcare, including both strong proponents and sceptics. Their views are briefly summarised below.
Proponents of AI in Healthcare:
- John Halamka (Mayo Clinic, USA):
- “We need a global harmonisation of AI standards to ensure that AI tools are safe, effective, and focused on improving patient care, not just technology for technology’s sake.”
- Eric Topol (Scripps Research, USA):
- “AI can bring back the humanism in medicine by freeing clinicians from the data clerk role and allowing them to focus on their patients.”
- Fei-Fei Li (Stanford University, USA):
- “The future of AI should be about human-centred design. It should augment human capabilities and benefit society, especially in sensitive fields like healthcare.”
- Juergen Schmidhuber (IDSIA, Switzerland):
- “AI’s ability to solve problems that elude human experts, such as early-stage cancer detection, makes it an essential tool for the future of medicine.”
- Andrew Ng (Coursera, USA):
- “AI is the new electricity. Its transformative power in healthcare will democratise access to quality diagnostics and treatment, especially in underserved areas.”
- Hua Zhang (China AI Alliance, China):
- “China’s healthcare system will be revolutionised by AI, using big data and automation to improve efficiency and manage public health challenges.”
- Regina Barzilay (MIT, USA):
- “AI should be used to amplify the abilities of physicians and ensure early detection of life-threatening diseases like cancer.”
- Daniel Kraft (Exponential Medicine, USA):
- “AI is the superpower that will transform healthcare by making it proactive, predictive, and personalised.”
Critics or Cautious Advocates:
- Eliezer Yudkowsky (MIRI, USA):
- “If somebody builds a too-powerful AI, I expect every single member of the human species and all biological life on Earth to die shortly thereafter.”
- Max Tegmark (MIT, USA):
- “We are racing ahead with AI systems that nobody—not even their creators—can understand, predict, or reliably control.”
- Stuart Russell (UC Berkeley, USA):
- “We must ensure AI is used for the benefit of humanity, but without control mechanisms, it could become dangerous, particularly in areas like healthcare.”
So What?
Thought leaders offer diverse perspectives on AI in healthcare, with many emphasising the technology’s transformative potential and others urging caution due to ethical and safety concerns.
What is clear is that a globally consistent approach is needed. This needs to recognise that control and regulation are imperative, yet the regulation must also create a permissive climate that fosters development. There is a danger that, in our attempt to gain a tight grip, the AI soap slips out of our hands and out of control. This is an immense challenge to regulators and to clinicians more widely. We need to see ourselves as being in partnership with AI, understanding its strengths and weaknesses and our joint potential to deliver better, safer care to our patients.