David Lee

Healthcare AI: An Impossible Conundrum?


Introduction

As an experienced clinician, my delivery of patient care has been dramatically helped by technology. The advent of Artificial Intelligence (AI) towards the latter part of my career is exciting. But I also have considerable experience in clinical safety, particularly as it applies to software products and solutions. I believe that a careful examination of the pros and cons of AI is required. This article reflects an exploration of the current position.

Yes! No! Wait! (sorry)

I am a keen follower of cricket. Those who have played and watched will understand that confusion between batsmen when running between the wickets is disastrous.

The current situation, with the proponents (a confident “Yes!”), the opposition (“No!”, even if we must deploy the military) and those asking for a moratorium (“Wait!”), leaves us likely to be “run out”. We then find ourselves back in the pavilion playing no further part in the match.

Several industry leaders and analysts have underscored the inevitability of Artificial Intelligence’s progression and impact on various sectors, including healthcare.

Maybe Unstoppable

Mo Gawdat, former Chief Business Officer at Google X, argues that AI’s development is beyond the point of reversal. He stresses that resisting AI advancements is not an option. It will continue to evolve and reshape industries, driven by global investments and technological innovation. Gawdat likens AI’s progression to a “hurricane” that cannot be halted but must be managed carefully to ensure ethical outcomes. I think there are other metaphors that may be applicable. These include having “a tiger by its tail” or “surfing a massive wave”. I have to say that the latter is more appealing. It implies that with skill and fitness we can survive and have an exhilarating experience!

Whilst outside of healthcare, Lloyds Bank highlights the rapid rise of generative AI and identifies its transformative potential in business operations. The bank notes that the technology is now “unstoppable”, emphasising that businesses must integrate AI into their strategies or risk falling behind in the competitive landscape. As in banking, this must apply to healthcare, where competition remains a driving force in many systems.

Both sources align in recognising that AI is not just a technological trend but a force that will redefine industries, including healthcare, over the coming years. However, with this inevitability come significant concerns, such as ethical challenges, transparency issues, and the potential for bias; key issues that both Gawdat and other thought leaders suggest must be addressed through responsible development and regulation.

Stop


As a counter to this, there are prominent voices who advocate halting, or at least pausing, AI development. They express particular concern regarding advanced AI systems.

Eliezer Yudkowsky, a prominent AI theorist, has been one of the more extreme proponents of stopping AI development. He argues that highly advanced AI systems could pose existential threats to humanity if they surpass human intelligence, with the potential to create catastrophic outcomes. Yudkowsky advocates a complete halt to the development of powerful AI systems, enforcing such a ban with military intervention if necessary, in order to prevent rogue actors from developing advanced AI systems unchecked. He believes that current safeguards are insufficient, and that humanity risks losing control over AI systems as they evolve beyond our ability to regulate them.

Pause


For those in the “wait” camp, an open letter signed by over 1,200 technologists, including Elon Musk and Steve Wozniak, called for a six-month moratorium on the development of AI systems more powerful than GPT-4.

The letter argues that AI labs are in an “out-of-control race”, developing increasingly complex AI models without adequately considering the risks. The signatories emphasise the need for shared safety protocols and more robust AI governance before further AI advancements are made.

The letter reflects growing concerns about AI’s potential to disrupt labour markets, spread misinformation, and create undetectable biases. These could exacerbate social inequalities.

Collateral Damage


Critics argue that the pace of AI development should be slowed until broader societal and environmental impacts are better understood. They express concerns about the economic and social inequalities that AI could exacerbate.

AI’s ability to automate jobs could widen the wealth gap. The wealthiest might benefit disproportionately from AI-driven economies, leaving much of the population behind.


Furthermore, AI is criticised for its environmental impact, as its energy consumption could soon surpass that of many smaller countries.

These arguments for halting or pausing AI reflect fears that, without sufficient oversight, advanced AI systems could have unpredictable and potentially disastrous consequences. Many also acknowledge, however, the complexity of enforcing such a halt, given the competitive race among global powers and the massive investments already poured into AI research.

BUT LOOK AT THE PRIZE


The integration of artificial intelligence (AI) into healthcare offers transformative potential for both patients and clinicians, promising to reshape diagnostics, treatment, and the overall delivery of care. Soon, AI’s capabilities are expected to enhance efficiency and accuracy in clinical decision-making, while in the longer term, it may revolutionise personalised medicine and population health management.

Near-Term Benefits


In the short term, AI is already providing tangible benefits, particularly in areas like medical imaging and diagnostics. AI algorithms can analyse scans such as X-rays, MRIs, and CTs with remarkable speed and precision, often identifying early signs of diseases such as cancer or neurological conditions that might be missed by the human eye. For example, AI-driven imaging tools have demonstrated accuracy levels comparable to or even exceeding those of human radiologists in certain cases. This allows clinicians to make faster, more accurate diagnoses, enabling early intervention and potentially improving patient outcomes.

AI is also being used to streamline administrative tasks, reducing the time clinicians spend on paperwork and enabling them to focus more on direct patient care. For instance, AI-powered systems can automate the handling of patient records, billing, and appointment scheduling. This helps alleviate clinician burnout, a growing concern in many healthcare systems.

Longer-Term Potential


Looking further ahead, AI holds the promise of driving the next frontier in personalised medicine. It has the capability to leverage vast datasets from electronic health records, genomics, and wearable devices, creating the potential to offer tailored treatment plans based on a patient’s unique genetic makeup, lifestyle, and real-time health metrics. In turn this could lead to more effective, individualised treatments, improving long-term outcomes and reducing the trial-and-error nature of some current medical therapies.

AI-supported clinical decision support (CDS) tools can transform healthcare by assisting clinicians in making more accurate and efficient decisions. These systems use algorithms to analyse vast amounts of patient data, medical records, and clinical guidelines to provide real-time insights at the point of care. By leveraging AI, CDS can identify patterns, suggest diagnoses, recommend treatments, and even predict potential complications. This not only helps reduce human error but also improves patient outcomes by providing clinicians with evidence-based recommendations tailored to individual patient profiles. However, it is essential for clinicians to critically evaluate AI-generated suggestions to avoid over-reliance, particularly given risks like “AI hallucinations” where systems may produce inaccurate or misleading information.
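To make this concrete, below is a minimal sketch of the kind of rule-based check a CDS system might run. It is illustrative only: the record fields, drug names and thresholds are simplified stand-ins and should not be read as clinical guidance.

```python
# Illustrative sketch only: a toy rule-based CDS check. Field names and
# thresholds are simplified for demonstration and are not clinical guidance.
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    age: int
    egfr: float                              # estimated glomerular filtration rate
    medications: list = field(default_factory=list)

def cds_alerts(patient: PatientRecord) -> list:
    """Return a list of advisory alerts for the clinician to review."""
    alerts = []
    # Example rule: flag a renally excreted drug when kidney function is low.
    if "metformin" in patient.medications and patient.egfr < 30:
        alerts.append("Review metformin: eGFR below 30 suggests dose review.")
    # Example rule: flag a known interacting pair.
    if {"warfarin", "aspirin"} <= set(patient.medications):
        alerts.append("Warfarin and aspirin co-prescribed: bleeding risk.")
    return alerts

patient = PatientRecord(age=74, egfr=25, medications=["metformin", "warfarin", "aspirin"])
print(cds_alerts(patient))
```

The point such a sketch illustrates is that CDS output is advisory: the system surfaces alerts, but the clinician remains the final decision-maker.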

Additionally, AI could enable predictive analytics, identifying patients at risk of certain diseases before symptoms appear. AI models may predict the onset of chronic conditions by analysing trends in patient data. This may allow for preventative measures to be initiated that reduce hospitalisations and improve quality of life. On a population level, AI could also contribute to managing public health challenges. It can potentially predict disease outbreaks or track epidemiological trends, helping healthcare systems respond more proactively.
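As an illustration of the predictive analytics idea, here is a minimal sketch of training a risk-stratification model on synthetic data with scikit-learn. The features, the label construction and the model choice are assumptions for demonstration, not real epidemiology.

```python
# Illustrative sketch: training a toy risk model on synthetic data.
# Features, labels, and model choice are for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic features: age, BMI, systolic BP, HbA1c (invented distributions).
X = rng.normal(loc=[60, 28, 135, 45], scale=[12, 5, 15, 8], size=(1000, 4))
# Synthetic label loosely correlated with age and HbA1c (not real epidemiology).
risk = 0.03 * X[:, 0] + 0.05 * X[:, 3] - 4.5
y = (risk + rng.normal(scale=1.0, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Estimated probability of the adverse outcome for an unseen patient profile.
print(model.predict_proba([[70, 31, 150, 60]])[0, 1])
```

A real system would of course need validated features, governance around the training data and ongoing monitoring, but the shape of the workflow is the same: learn from historical records, then score new patients so preventative measures can be targeted.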

Addressing the Challenge


The future of AI in healthcare looks incredibly promising to many, offering enhanced diagnostic precision, personalised treatments, and predictive analytics. However, concerns around transparency, data security, and human oversight need to be carefully managed. We must ensure AI is implemented in a way that benefits both clinicians and patients alike.

Transparency

Alongside its benefits, the arrival of AI in healthcare has prompted significant ethical and practical concerns. A primary issue is transparency: many AI systems, particularly those using deep learning, operate as “black boxes”, making it difficult to explain how decisions are made. This lack of explainability raises questions about accountability, particularly in high-stakes clinical settings where patient outcomes depend on AI-driven recommendations.

Data Privacy

Data privacy represents another concern as AI relies on vast amounts of patient data. Ensuring this data is securely stored and ethically used is a top priority for healthcare providers. Fears around data breaches and misuse of sensitive health information are real, particularly as AI systems are increasingly integrated with cloud-based platforms.

Loss of clinical judgement

There is also a broader concern about human oversight. While AI can assist in decision-making, there are worries about clinicians becoming overly reliant on algorithms, potentially compromising their own clinical judgement. Ensuring that AI serves as an augmentation tool, rather than replacing human decision-making, is critical to maintaining trust between clinicians and patients.

Loss of humanity and the human touch

Alongside any AI usage, human input into clinical care is essential in ensuring that patients receive holistic, personalised treatment that extends beyond their physical symptoms.

As most General Practitioners know, and most aspirant GPs articulate in professional examinations, an essential part of providing care is actively listening to patients: we must understand their concerns, ideas, and expectations. Understanding their worries and anxieties, while also acknowledging their expectations of care, helps build trust and ensures that treatments align with the patient’s goals and preferences. This empathic and communicative approach helps clinicians not only address immediate health issues but also support the patient’s broader well-being.

Clinicians must care for individuals by considering their physiological, psychological, and social contexts, a core tenet of patient-centred care. Physiologically, clinicians evaluate patients’ health through diagnosis and treatment plans tailored to their unique conditions. Psychologically, understanding a patient’s emotional well-being is key, as mental health can significantly impact recovery and overall health outcomes. Socially, clinicians consider factors like the patient’s family support, work environment, and socioeconomic status, all of which influence their ability to adhere to treatment plans and maintain long-term well-being. This sophisticated equation and formulation of care planning is unlikely to be delivered solely by technology, although it might well benefit from AI input.

Another important element of clinical care is touching the patient. Obviously this needs to be both appropriate and with consent, but it is a significant element of the clinical process. The “laying on of hands” remains an important element of clinical practice and is reassuring to patients.

Regulation

When discussing the challenges of regulating AI as a medical device, it’s essential to look at not only the approaches taken by the EU and the UK, but also the evolving regulatory landscape in the US. Key thought leaders such as John Halamka, president of the Mayo Clinic Platform, and Eric Schmidt, former Google CEO, have provided valuable insights into the complexities and potential drawbacks of current regulations in these regions.

Different Approaches to Surfing the AI Wave

EU and UK Approaches

The EU’s AI Act classifies medical devices incorporating AI into a “high-risk” category, placing additional regulatory requirements on manufacturers, such as transparency, data governance, and human oversight. The EU’s Medical Device Regulation (MDR) already imposes stringent obligations on medical devices, but the new AI regulations require further layers of compliance, especially concerning algorithm transparency and error-tracking.

Critics like Eric Schmidt (former Google CEO and then Chair) argue that these transparency rules could stifle innovation, particularly for advanced machine learning systems that cannot fully explain their decision-making processes. While the EU emphasises protection of fundamental rights, the extensive conformity assessments and the complexity of its framework could delay product releases and make it harder for companies, particularly smaller companies, to innovate in the MedTech space.

In contrast, the UK’s MHRA is taking a more agile approach, attempting to balance safety with the need to foster technological development. The UK framework is more flexible in its handling of continuous learning algorithms (AI systems that evolve as they process more data), seeking to ensure regulatory requirements don’t overly constrain innovation. The MHRA’s proposals are aimed at supporting real-world testing and clinical integration of AI, with an emphasis on healthcare-specific risk management.

US Approaches

In the United States, the Food and Drug Administration (FDA) is actively working to regulate AI and machine learning-based software as a medical device (SaMD). Unlike the EU, which has a top-down regulatory framework, the US is gradually updating its regulations through pilot programmes and industry engagement.

The FDA’s Digital Health Innovation Action Plan includes a pre-certification programme that evaluates software developers’ processes and culture of quality, rather than focusing solely on individual products. This more dynamic approach is designed to enable companies to bring AI-driven solutions to market more rapidly, while maintaining safety and efficacy.

The FDA is also focused on adaptive AI systems, recognising that these systems will evolve as they learn from new data. This stands in contrast to the more rigid frameworks in the EU, which require a high degree of transparency and explainability from AI models. The US approach is more permissive, allowing for adaptive learning as long as developers demonstrate effective oversight mechanisms.

Insights from Thought Leaders: John Halamka and Eric Schmidt

John Halamka, a leading voice in healthcare technology, has often pointed out the need for global harmonisation in regulating AI-based medical devices. He advocates for regulatory frameworks that can adapt to the rapid evolution of AI while still ensuring patient safety. Halamka has highlighted that overly rigid frameworks, such as the one proposed in the EU, could stifle innovation, particularly when dealing with machine learning systems that adapt over time. He believes regulators should focus on outcome-based evaluations, where the performance and safety of AI systems are assessed in “real-world” environments rather than imposing burdensome pre-market regulations.

Eric Schmidt has been a vocal critic of the EU’s approach, arguing that the transparency requirements could set Europe back significantly in the AI race. He contends that AI systems, especially those involving deep learning, often cannot explain their decision-making processes in a way that meets these transparency mandates. Schmidt also stresses that the EU’s emphasis on regulation before innovation is a mistake, pointing out that this could hinder Europe’s ability to compete with the US and China, both of which are more focused on innovation-driven AI development.

Other Views: Meet the Experts

The preparation for this article identified an array of global experts on AI in healthcare, including both strong proponents and cynics. Their views are briefly summarised below.

Proponents of AI in Healthcare:

  • John Halamka (Mayo Clinic, USA): “We need a global harmonisation of AI standards to ensure that AI tools are safe, effective, and focused on improving patient care, not just technology for technology’s sake.”
  • Eric Topol (Scripps Research, USA): “AI can bring back the humanism in medicine by freeing clinicians from the data clerk role and allowing them to focus on their patients.”
  • Fei-Fei Li (Stanford University, USA): “The future of AI should be about human-centred design. It should augment human capabilities and benefit society, especially in sensitive fields like healthcare.”
  • Juergen Schmidhuber (IDSIA, Switzerland): “AI’s ability to solve problems that elude human experts, such as early-stage cancer detection, makes it an essential tool for the future of medicine.”
  • Andrew Ng (Coursera, USA): “AI is the new electricity. Its transformative power in healthcare will democratise access to quality diagnostics and treatment, especially in underserved areas.”
  • Hua Zhang (China AI Alliance, China): “China’s healthcare system will be revolutionised by AI, using big data and automation to improve efficiency and manage public health challenges.”
  • Regina Barzilay (MIT, USA): “AI should be used to amplify the abilities of physicians and ensure early detection of life-threatening diseases like cancer.”
  • Daniel Kraft (Exponential Medicine, USA): “AI is the superpower that will transform healthcare by making it proactive, predictive, and personalised.”

Critics or Cautious Advocates:

  • Eliezer Yudkowsky (MIRI, USA): “If somebody builds a too-powerful AI, I expect every single member of the human species and all biological life on Earth to die shortly thereafter.”
  • Max Tegmark (MIT, USA): “We are racing ahead with AI systems that nobody—not even their creators—can understand, predict, or reliably control.”
  • Stuart Russell (UC Berkeley, USA): “We must ensure AI is used for the benefit of humanity, but without control mechanisms, it could become dangerous, particularly in areas like healthcare.”

So What?

Thought leaders offer diverse perspectives on AI in healthcare, with many emphasising the technology’s transformative potential and others urging caution due to ethical and safety concerns.

What is clear is that a globally consistent approach is needed. This needs to recognise that control and regulation are imperative, yet the regulation must also create a permissive climate to foster development. There is a danger that, in our attempt to gain a tight grip, the AI soap slips out of our hands and out of control. This is an immense challenge to regulators and to clinicians more widely. We need to consider ourselves in a partnership with AI, understanding its strengths and weaknesses and our joint potential to deliver better, safer care to our patients.

No Patient Data, Reduced Clinical Safety: Massive Heart’s Reflections

System Resilience and Cyber Security are Essential Pillars of Clinical Safety

As healthcare increasingly relies on digital technology, the availability and security of patient data have become critical components of clinical safety. IT system failures and cyber-attacks pose significant risks to this data availability, potentially leading to severe consequences for patient care. This blog post explores the clinical safety impacts of data unavailability caused by such incidents. The key message is that data security, system resilience and cyber security are foundational elements of clinical safety. As the blog post indicates, this is not a theoretical threat.

The Critical Role of Data in Healthcare

Access to accurate data at the point of care is essential

Healthcare providers depend on electronic health records (EHRs), imaging systems, and other digital tools to deliver timely and effective care. These systems store vital patient information, including medical histories, test results, treatment plans, and medication records. Effectively integrated systems operating within organisations support clinicians in making informed decisions, coordinating care, and responding swiftly to patient needs.

Available, legible records enhance the safety of patient care within health services. Shared details (interoperability) then secure further safety gains. Sophisticated clinical decision support further enhances safety.

Where data is unavailable, these safety benefits are lost. Put simply, care becomes substantially riskier for patients and places increased pressure on busy clinicians.

Implications of IT System Failures

Where systems fail, there are a number of patient, clinician and organisational consequences. These include:

  1. Delayed Treatments: When IT systems fail, accessing crucial patient information becomes difficult or impossible. Healthcare provision is contingent upon this information being available. Under attack, clinicians may resort to manual processes that are slower and more error-prone.
  2. Medication Errors: EHRs are instrumental in managing and tracking patient medications. If these records become inaccessible, the risk of medication errors increases significantly. Healthcare professionals may not have access to the most recent prescribing or drug administration information, leading to potential over- or under-dosing.
  3. Diagnostic Delays: Imaging systems and lab results rely on digital management systems. IT failures can prevent the timely review of diagnostic tests, delaying diagnoses and subsequent treatment plans.
  4. Operational Disruptions: Hospital operations, including scheduling, admissions, and discharges, rely heavily on IT systems. System failures can disrupt these processes, leading to inefficiencies and reduced capacity to care for patients.

Consequences of Cyber Attacks

Cyber-attacks, such as ransomware, can have even more profound impacts on data availability and clinical safety. These attacks typically involve malicious actors gaining unauthorised access to healthcare systems, encrypting data, and demanding a ransom for its release.

  1. Data Breaches: Cyber-attacks often result in data breaches in which sensitive patient information is stolen. This not only compromises patient privacy but also undermines trust in the healthcare system and compromises the safety of care.
  2. Extended Downtime: Cyber-attacks can cause prolonged downtime compared to conventional IT failures. Recovering from such incidents requires significant time and resources, during which patient care suffers severe impacts.
  3. Compromised Care Coordination: Effective healthcare delivery relies on coordinated efforts among various departments and professionals. Cyber-attacks disrupt these communication channels, leading to fragmented care and increased risk of medical errors.
  4. Financial Strain: The financial cost of cyber-attacks, including ransom payments, system restoration, and potential legal liabilities, can strain healthcare resources. This financial burden can divert funds away from patient care initiatives.

Loss of confidence

A secondary issue with system failures is that people lose confidence in the security of their data. They become understandably more reticent about sanctioning the sharing of their medical information, even where this might secure compelling clinical safety benefits.

It is also an issue when clinicians lose confidence in technology solutions. The threat of “going back to paper” sends a chill through me. Paper may be reassuringly familiar to senior clinicians and provide easy “data entry”, but it is not acceptable as a safe system for patients, because it increases safety risks through:

  • legibility issues;
  • retrieval issues; and
  • an absence of alerting and clinical decision support.

Notable Examples of Incidents

Globally there have already been a significant number of cyber-attacks, including providers in the US (MedStar, UCLA Health, and Anthem), Singapore and Germany (Düsseldorf University Hospital), and there are ongoing issues in the London system with Synnovis, a significant provider of pathology and testing services. The following are more specific examples which have had huge impacts on systems. I have chosen these having lived through them!

WannaCry Attack (2017):

The WannaCry ransomware attack occurred worldwide in May 2017, targeting PCs running Windows. Attackers encrypted data and demanded a ransom for its release. Microsoft had released a security patch addressing the underlying vulnerability before the attack; organisations that failed to install the patch, despite Microsoft’s recommendation, became targets. The WannaCry ransomware attack infected 200,000 PCs across 156 countries. It severely impacted healthcare systems globally, particularly affecting the UK’s National Health Service (NHS):

  • Hospital Services: Numerous hospitals and GP surgeries had to cancel appointments and redirect emergency patients due to the inability to access patient records.
  • Treatment Delays: Critical treatments, such as surgeries and chemotherapy sessions, were delayed, directly affecting patient care and safety.
  • Financial Costs: The NHS incurred significant expenses for system restoration and implementing improved cybersecurity measures to prevent future attacks.

Even where healthcare providers did not suffer a direct attack, their functions were impaired, for example by ambulance services protecting their data by closing access to their networks, with the impacts including:

  • Ambulance handover processes and screens were disabled.
  • The Patient Transport Service booking portal was not available.

Furthermore, specialist tertiary centres protected their data by closing access to their networks, with the main impacts being:

  • Hospitals could not transfer images to specialist centres.
  • The transfer of information within clinical networks was disrupted.

Leeds Laboratory Failure (2016)

In 2016, the Leeds Teaching Hospitals NHS Trust experienced a significant IT failure in its laboratory information management system:

  1. Test Delays: The failure resulted in delays in processing and reporting diagnostic tests, affecting patient diagnoses and treatment plans. GP services were affected as they were asked to prioritise urgent blood requests and to delay routine blood tests, such as those used to monitor patient therapy.
  2. Operational Strain: The disruption led to an increased burden on clinical staff, who had to manage manual processes and mitigate the impact on patient care. It also placed a considerable response burden on technical staff and senior leaders within the organisation.
  3. Patient Safety Risks: The delays and operational challenges posed direct risks to patient safety, particularly for those requiring timely diagnostic results for critical conditions.
  4. Treatment Delays: The lack of blood results and uncertainty about blood cross-matching led to the cancellation of elective operations across both Leeds and Bradford.

HSE Ireland Ransomware Attack (2021):

In May 2021, the Health Service Executive (HSE) of Ireland suffered a major ransomware attack, leading to extensive system outages:

  1. Service Disruption: Hospital services, including outpatient appointments, diagnostic tests, and surgeries, were disrupted or postponed.
  2. Patient Care Impact: The attack severely impacted the ability of healthcare providers to access patient records, leading to delays in care and increasing the risk of medical errors. Patients were potentially turning up for scheduled appointments at organisations that could not access their schedule or the reason for attendance.
  3. Financial and Recovery Efforts: The financial cost of the attack was substantial, encompassing ransom demands, system restoration, and long-term improvements in cybersecurity infrastructure.

Mitigating the Risks: A Prescription for Safety

Healthcare organisations must adopt comprehensive strategies to mitigate the risks associated with IT system failures and cyber-attacks:

  1. Robust Cybersecurity Measures: Implementing strong cybersecurity protocols, including firewalls, encryption, and regular security audits, can help prevent cyber-attacks and minimise their impact. This should involve very careful consideration of the use of large cloud providers, whose security measures are state of the art and who can ensure that appropriate security patches are applied in a timely fashion. The days of “on premises” data storage and system maintenance must be numbered.
  2. Assured Hosting: Hosting arrangements must be supported with appropriate assurance and credentials. ISO standards are a good starting point for this.
  3. Regular Backups: Maintaining regular, secure backups of all critical data ensures that information can be restored quickly in the event of a system failure or cyber-attack (see the backup-and-verify sketch after this list).
  4. Disaster Recovery Plans: Developing and regularly updating disaster recovery and business continuity plans can help organisations respond effectively to data unavailability incidents. This should include, however unpopular, running periodic business continuity exercises.
  5. Staff Training: Educating healthcare staff on cybersecurity best practices and the importance of data security can reduce the risk of human error contributing to data breaches or IT failures.
  6. System Architecture: Implementing redundant systems and network architectures can provide alternative pathways for accessing critical data, ensuring that patient care can continue uninterrupted during technical issues.
  7. Lessons Learned: Health services need to ensure that lessons are learned from organisations and systems that have been affected. The response to cyber-attacks often has clandestine elements to avoid alerting hostile actors to vulnerabilities; despite this, lessons must still be shared and learned.
  8. Partnership: Fundamental to successful working is the establishment of effective partnerships between system providers’ clinical, clinical safety and technical functions and the reciprocal functions within their customers.
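To illustrate the backup point (item 3 above), here is a minimal backup-and-verify sketch in Python. The paths are placeholders, and a real healthcare deployment would add encryption, off-site replication and scheduled restore tests.

```python
# Illustrative sketch: back up a directory and verify the archive's integrity
# with a checksum. Paths are placeholders; real healthcare backups would add
# encryption, off-site copies, and regular restore testing.
import hashlib
import tarfile
from pathlib import Path

def back_up(source: Path, archive: Path) -> str:
    """Create a compressed archive of `source` and return its SHA-256 digest."""
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)
    return hashlib.sha256(archive.read_bytes()).hexdigest()

def verify(archive: Path, expected_digest: str) -> bool:
    """Re-hash the archive and confirm it matches the recorded digest."""
    return hashlib.sha256(archive.read_bytes()).hexdigest() == expected_digest

digest = back_up(Path("patient_records"), Path("backup.tar.gz"))
assert verify(Path("backup.tar.gz"), digest), "Backup integrity check failed"
```

The principle being illustrated is that a backup that has never been verified is a hope, not a control: the checksum (and, more importantly, periodic restore exercises) turns it into assurance.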

Conclusion

The availability of patient data is a cornerstone of modern healthcare. IT system failures and cyber-attacks pose significant threats to this availability, with serious implications for clinical safety. By understanding these risks and implementing robust preventive measures, healthcare organizations can protect patient data, ensure continuous care delivery, and maintain the trust and safety of their patients.

External Sources

https://thebiomedicalscientist.net/science/when-computers-crash-leeds-pathology-lab

https://www.nationalhealthexecutive.com/articles/wannacry-cyber-attack-cost-nhs-ps92m-after-19000-appointments-were-cancelled

https://www.england.nhs.uk/wp-content/uploads/2018/02/lessons-learned-review-wannacry-ransomware-cyber-attack-cio-review.pdf

https://www2.hse.ie/services/cyber-attack/what-happened

AI Tiger by its Tail: Let’s Get a Grip


Call to Action

AI is moving quickly in terms of its capability and sophistication. The requirements of clinical safety assurance with respect to AI software in healthcare are also likely to change quickly. We need to start discussions now! AI can potentially move faster than regulations and research, so we need training, values, and ethics in place. We also need to consider approaches to clinical safety assurance, liability, and peer review.


A Nice Dilemma: Surely We Should Be Happy, Not Fearful?

On one hand we have a significant global clinical safety challenge: people experience harm even in the most developed healthcare systems. Additionally, there are massive inequalities in the delivery of safe care across the globe.

Our aspirations to deliver safer care may be thwarted by workforce challenges, with gaps in rotas and non-ideal staffing levels. Clinicians turning up to work will be more stressed and, at times, focused on self-preservation alongside their professional obligations to patients.

On the other hand, we have the “coming wave” of AI. This offers much around both the delivery of safe care and the collection and analysis of care safety data. But it brings concerns about safety, data protection and its impact on traditional clinical and managerial roles within healthcare.

Mustafa Suleyman poses the question: “How do we keep a grip on the most valuable technologies ever invented as they get cheaper and spread faster than any in history?” But he recognises that the progress of AI is inevitable.

My view as a clinician with a focus on patient safety is that we cannot be digital King Canutes or technology Luddites. Healthcare needs to prepare now to optimise the safety of AI in healthcare. We should seek to educate our end users. We should seek to optimise the human impact on patient care, care planning and safety culture. To mix metaphors, it is better to have a tiger by the tail than to risk the Wild West!

The partnership between human clinicians and AI has the potential to improve the clinical safety of care for all. Furthermore, we can improve the care experience for both patients and clinicians.


Current State of Play in Healthcare: A Brief Overview

Whilst by no means ubiquitous, AI is already used for:

1. Diagnostics and Imaging: AI algorithms are being used to interpret X-rays, MRIs, and other imaging modalities, often with greater accuracy and speed than human radiologists. This improves the detection of diseases such as cancer, pneumonia, and brain anomalies.
2. Personalised Medicine: By analysing vast amounts of genetic data, AI helps tailor treatments to the individual, improving outcomes. This approach is particularly relevant in oncology and chronic disease management.
3. Predictive Analytics: Hospitals and healthcare providers use AI to predict patient admission rates. They also use AI to identify potential disease outbreaks, optimising staffing and resources while improving patient care.
4. Robot-Assisted Surgery: Robots, guided by AI, can perform surgeries with precision and flexibility beyond human capabilities. This leads to less invasive procedures, reduced recovery times, and improved surgical outcomes.
5. Virtual Health Assistants: AI-powered chatbots and virtual assistants provide 24/7 support to patients. They offer medical advice, help to manage chronic conditions, and remind patients to take their medication.


Predictions suggest that during 2024 we will see:


1. Expanded Telemedicine Capabilities: AI will enhance telehealth platforms with more advanced diagnostic tools and personalised treatment options, making healthcare more accessible, especially in remote areas.
2. Integration of AI in Electronic Health Records (EHRs): AI will play a more significant role in EHRs by automating data entry, highlighting relevant patient information, and suggesting treatment options, which can reduce administrative burdens and improve patient care.
3. Advances in Predictive Analytics: With more data and improved algorithms, AI’s ability to predict disease outbreaks, patient admissions, and even individual health risks will become more accurate, enabling preventive measures and personalised health plans.
4. Ethical AI Development: As AI becomes more integrated into healthcare, there will be an increased focus on addressing ethical considerations, such as data privacy, algorithmic bias, and ensuring AI supports rather than replaces human healthcare providers.


As a Primary Care Doctor: The Thrilling Future

As a GP I want to see how my practice will be supported by AI. So, what will I see?

1. Enhanced Diagnostic Support: AI-powered diagnostic tools can help primary care clinicians in interpreting medical images, lab results, and patient symptoms. These tools can provide more accurate and timely diagnoses, allowing better-informed decisions about patient care.
2. Personalised Treatment Plans: AI algorithms can analyse vast amounts of patient data to identify patterns and trends that may not be immediately apparent to human clinicians. This can help tailor treatment plans to individual patients based on their unique characteristics, medical history, and genetic makeup, potentially helping to manage multimorbidity elegantly (see my previous blog).
3. Improved Patient Monitoring: AI-enabled remote monitoring devices can track patients’ vital signs and health metrics in real-time, allowing GP practices to monitor patients’ health status more closely between face-to-face visits. This can help identify potential issues early on and enable intervention before they escalate.
4. Efficient Administrative Tasks: AI can streamline administrative tasks such as patient scheduling and coding, freeing up more time for clinicians to focus on patient care. Automated systems can handle routine administrative duties, allowing clinicians to spend more time with patients and provide better-quality care.
5. Virtual Health Assistants: Virtual health assistants powered by AI can help answer patient queries, provide basic medical advice, and schedule appointments. These assistants can augment practice by handling routine inquiries, allowing focus on more complex patient interactions.
6. Decision Support Tools: AI-driven decision support systems can provide evidence-based recommendations and guidelines at the point of care, helping doctors to make more informed decisions about treatment options, medication choices, and preventive care measures.
7. Continuing Medical Education: AI-powered platforms can provide personalised learning resources and continuing medical education (CME) opportunities tailored to specific areas of interest and professional development needs. These platforms can help professionals to stay updated on the latest medical advancements and best practices in primary care.

Wow! What is not to like? Overall, the introduction of AI in healthcare is likely to enhance the efficiency, accuracy, and quality of primary care services, enabling the provision of better care to patients while also improving clinical workflows and clinician job satisfaction.


But there are Concerns.

It’s essential to embrace these technologies responsibly and ensure that they are integrated into clinical practice in a way that prioritises patient safety, privacy, and ethical considerations.

Achieving these benefits, however, will depend upon successful deployment and adoption, addressing potential issues relating to data privacy, and addressing the digital disparity between different regions and populations.

Governments, healthcare providers, clinicians and patients will all require assurances regarding the safety of AI in supporting care. As a patient, it would be essential to understand that the AI support is accurate, recommending best practice, acting without bias and able to explain its recommended outcomes.


Clinical Safety and Medical Device Regulation: Should We Regulate?

There appears to be consensus that AI should be subject to some degree of regulation. Even those at the cutting edge, such as Mira Murati of OpenAI (the company behind ChatGPT), express the opinion that regulation is required. Asking Inflection AI’s Pi whether AI in healthcare should be regulated gains an affirmative response. The arguments are as follows:

1. Ensuring safety and effectiveness: AI systems in healthcare can have a profound impact on patient care, and any errors or malfunctions could result in serious harm. Regulation can help ensure that AI technologies are developed, tested, and implemented with patient safety and effectiveness as top priorities. This includes verifying that AI systems are accurate, reliable, and perform as intended in real-world clinical settings.
2. Protecting data privacy: AI systems often require large amounts of patient data to function effectively, and this data can be highly sensitive. Regulation can help ensure that patient privacy is protected by establishing standards for data security, anonymisation, and consent processes. This is crucial for maintaining patient trust in AI technologies and encouraging their widespread adoption.
3. Promoting transparency and accountability: AI systems can be complex and opaque, making it difficult for healthcare professionals and patients to understand how decisions are being made. Regulation can promote transparency by requiring that AI algorithms and decision-making processes are explainable, auditable, and subject to oversight (a minimal audit-trail sketch follows this list).
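To illustrate the auditability point, here is a minimal sketch of an audit-trail wrapper around an AI call. The run_model function is a hypothetical stand-in for any AI system, and the logged fields are illustrative; the point is that every recommendation is recorded with enough context to review later.

```python
# Illustrative sketch: recording every AI recommendation with enough context
# to audit it later. `run_model` is a hypothetical stand-in for a real AI call.
import json
import time

def run_model(patient_summary: str) -> str:
    # Placeholder: a real system would query the deployed model here.
    return "Consider chest X-ray to exclude pneumonia."

def audited_recommendation(patient_id: str, patient_summary: str,
                           model_version: str, log_path: str) -> str:
    output = run_model(patient_summary)
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "patient_id": patient_id,
        "model_version": model_version,
        "input": patient_summary,
        "output": output,
    }
    with open(log_path, "a") as log:  # append-only audit trail
        log.write(json.dumps(entry) + "\n")
    return output

print(audited_recommendation("P123", "Cough and fever for 5 days", "v0.1", "audit.log"))
```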


So yes, but how?

I personally do not believe that current regulatory approaches such as EU MDR are sufficiently agile to allow the introduction of AI-based software to rapidly realise benefits for patients and the clinicians providing care for them. Healthcare workforce challenges and health inequalities mean that we need to match the cutting-edge technology with cutting-edge assurance.

It is my belief that the glacial progress of current medical device regulation and the highly conservative approach taken by regulators as well as those interpreting regulations do not offer a way forward for the optimal introduction of this technology. Systems which develop greater assurance agility will gain significant safety advantages for their population.

I say this as this technology has the potential to rapidly improve the accessibility and safety of healthcare. We must start from a viewpoint that current approaches are not as safe for patients as we would like, and that AI software can provide a novel solution which improves safety.

Failure to do this risks replicating the sentiment that healthcare needs to “return to paper” when a software defect with clinical safety issues is identified. There is no room, in my view, for Luddites, because healthcare needs AI if it is going to provide sustainable, equitable care in the future. Current services are delivered by a challenged workforce, are expensive and offer inequitable access at both a global and system level.

That does not mean that there are no risks to be assessed and mitigated. I strongly believe that the safety assurance required by the UK NHS offers a starting point. This places legal requirements on healthcare organisations (and therefore software manufacturers) to retain clinical safety officers and implement a safety assurance process which ensures that potential clinical safety hazards are identified. There is then an imperative to determine and implement appropriate mitigation, with a final assessment of any residual risk. This process is summarised in the development of a Hazard Log and the production of a Safety Case which is updated for every release. This then becomes a transparent, clinically led assurance process.

I believe that this will then enable us to define potential hazards that we can mitigate and describe that mitigation in a safety case for each product and use case.
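As an illustration of what a hazard log entry might look like in code, here is a minimal sketch. The fields echo the process described above, but the structure and the 1 to 5 scoring scale are illustrative assumptions, not the official NHS (DCB0129) format.

```python
# Illustrative sketch: a hazard log entry with a simple risk score.
# Field names and the 1-5 scoring scale are illustrative, not the official
# DCB0129 format.
from dataclasses import dataclass

@dataclass
class Hazard:
    identifier: str
    description: str
    cause: str
    mitigation: str
    severity: int      # 1 (minor) to 5 (catastrophic)
    likelihood: int    # 1 (very rare) to 5 (almost certain)

    @property
    def residual_risk(self) -> int:
        """Simple severity x likelihood score after mitigation is applied."""
        return self.severity * self.likelihood

hazard = Hazard(
    identifier="HAZ-001",
    description="AI suggests an incorrect diagnosis (hallucination)",
    cause="Model generates plausible but unsupported output",
    mitigation="Clinician review of all suggestions; peer-review AI instance",
    severity=4,
    likelihood=2,
)
print(hazard.identifier, "residual risk:", hazard.residual_risk)
```

A real hazard log would track many such entries across releases; the value lies in making the mitigation and residual risk explicit and reviewable.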


Hazards New and Old, and their Mitigation: A Professional View

My experience with currently deployed healthcare software is that there are common high-level risks across products, no matter who manufactures them, and that these are largely independent of the product’s precise role.

An analysis across software including pre-hospital solutions, medications management solutions, full electronic medical records and laboratory information systems suggests that the following high-level hazards apply to most if not all systems:

• system access;
• data accuracy;
• linked systems;
• messaging failure between product components; and
• clinical workflows within the software.

In most instances, the hazards identified will be mitigated through software design and development before being assured through testing. Both manufacturers and customers become concerned where the mitigation of a hazard with clinical safety risk requires training or a change in business processes.

Whilst AI tools will still carry the above hazards, I would suggest that there are additional or more prominent hazards that we might anticipate as AI is introduced.

1. Inaccurate or biased diagnoses: AI systems may produce incorrect or biased diagnoses due to limitations in their training data or algorithms. This could lead to inappropriate treatment plans and potentially harm patients.
2. Data privacy and security breaches: As AI systems rely on large amounts of sensitive patient data, there is a risk of unauthorised access or data breaches, which could compromise patient privacy and trust in the healthcare system.
3. Over-reliance on AI: Healthcare professionals may become overly reliant on AI systems, leading to a decrease in critical thinking and clinical judgment skills. This could result in missed diagnoses or other errors that could harm patients.
4. Lack of transparency: AI systems can be complex and difficult to understand, making it challenging for healthcare professionals and patients to understand the basis for AI-driven decisions. This lack of transparency can erode trust in AI and potentially lead to incorrect decisions.

We need, therefore, to provide assurances that AI systems have not been trained on data with inherent biases, and that their algorithms do not introduce bias. The systems are unlikely to be programmed to be deliberately unfair; they seek to make neutral, objective responses based upon the data available.

It is also essential that we do not develop an over-reliance on these systems. AI should be viewed as a partner, not a “truth engine”: it can get things wrong, and do so in such a subtle way that clinicians need to be vigilant. Furthermore, AI may defend its position when challenged about inaccurate calculations, so the clinician needs to be both vigilant and assertive with respect to potential errors.

When considering mitigating hazards in healthcare software and solutions, training has long been considered a potential mitigation. Where workforces are less stable with more transient staff (agency and locum clinicians for example), the delivery of organisational training is difficult. Consequently, we need training to be delivered much earlier at medical schools and as part of post graduate education at a whole professional level. This represents an imperative. Clinicians need to know how to surf on the coming wave!

One interesting concept is not to introduce a single AI instance at all. Before the Luddites seize on this, the idea would be to introduce two instances that can peer review each other, potentially moderating any bias or inaccuracy. We can, therefore, create a virtuous three-way partnership between the trained, vigilant clinician and the two AIs.
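A minimal sketch of this two-instance idea follows. The two “models” here are stand-in functions invented for illustration; a real implementation would query two independently trained systems and route any disagreement to the clinician.

```python
# Illustrative sketch of the two-instance peer-review idea. The two "models"
# are stand-in functions; a real deployment would query two independently
# trained systems and route disagreements to the clinician.

def model_a(question: str) -> str:
    return "BMI cannot be calculated: height is missing from the record."

def model_b(question: str) -> str:
    return "BMI is 17.5."  # a confident but unsupported answer

def cross_checked_answer(question: str) -> str:
    a, b = model_a(question), model_b(question)
    if a == b:
        return a  # agreement: still subject to clinician review
    # Disagreement: surface both answers rather than silently picking one.
    return f"MODELS DISAGREE - clinician adjudication needed:\n  A: {a}\n  B: {b}"

print(cross_checked_answer("What is the patient's BMI?"))
```

The design choice worth noting is that disagreement is surfaced, not resolved automatically: the pair of AIs narrows where the clinician needs to look, rather than replacing their judgement.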


How Large Language Models Learn: CPD for AI?

Models have been trained on a body of knowledge prior to their release. For example, ChatGPT’s knowledge is currently based upon information up until April 2023; beyond this, ChatGPT switches to using Bing in its search for information. Pi is trained up to September 2021, but its “creators and users” can help it stay up to date by providing the latest information and context with respect to discussions on specific topics. Healthcare will need to assess the risk of AI which is supporting clinical care being some months out of date. This may or may not be clinically relevant.

Estimates suggest that it may take up to 17 years for evidence to be implemented in clinical practice, and for many years the UK British National Formulary provided stalwart support for clinical practice through a twice-yearly updated hard copy. But there are times when information needs to be updated quickly to be available to front-line clinicians (think pandemic or monkeypox).

The problem is that even when we train very capable models, it is still not clear how to keep them up to date or revise them when they make errors. As discussed, we may even have a requirement for additional AI systems “peer reviewing” the main system and identifying known errors. Even so, when do we update the principal system?

Understanding the foundation of an AI system’s learning is a fundamental requirement for safety assurance: if the model was trained on outdated, biased, or incomplete information, its outputs will reflect those limitations. The approach to updating the model is also important, as a robust and reliable model may rapidly become less useful in supporting care.


Let’s Hallucinate

When discussing “hallucinations” in AI within a clinical or healthcare context, it typically refers to instances where AI systems generate incorrect, misleading, or fabricated outputs. These errors can be particularly concerning in healthcare applications, where AI is used for diagnostics, treatment recommendations, disease prediction, and patient monitoring, among other applications. AI hallucinations can pose several potential risks to patients, including:

1. Misdiagnosis: An AI system might incorrectly identify a disease or condition that a patient does not have, based on its analysis of medical images, patient data, or other inputs. This can lead to unnecessary worry, further testing, and potentially harmful treatments for conditions that the patient does not actually have.
2. Missed Diagnosis: Conversely, an AI might fail to identify a condition that is present, leading to a lack of necessary treatment. This can allow a disease to progress unchecked, potentially leading to worsened outcomes for the patient.
3. Inappropriate Treatment Recommendations: AI systems that assist with treatment planning might suggest inappropriate or suboptimal treatments based on incorrect analysis of patient data or misinterpretation of clinical guidelines. This can lead to ineffective treatment, unnecessary side effects, and delays in receiving the correct treatment.
4. Patient Data Privacy and Security: Hallucinations in AI could also be symptomatic of deeper issues with data handling and processing. Incorrect outputs could result from unauthorised access to or manipulation of patient data, leading to breaches of patient privacy and confidentiality.
5. Erosion of Trust: Repeated instances of AI hallucinations can erode trust in healthcare AI applications among both healthcare providers and patients. This could lead to underutilisation of potentially beneficial AI tools, scepticism towards new technologies, and a preference for more traditional, possibly less efficient, methods.

There is an excellent example quoted in The AI Revolution in Medicine, where a request is made for AI assistance in writing a discharge note for a female patient with anorexia. The discharge note includes a highly plausible Body Mass Index which is entirely hallucinated, as the AI did not have the information needed to calculate the index. This is alarmingly subtle and underpins the need for careful checking.
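This kind of hallucination suggests a mechanical cross-check: recompute any derivable value from the structured record and flag figures that cannot be derived. The sketch below is illustrative only; the record fields (height_m, weight_kg) and the 0.5 tolerance are assumptions for demonstration.

```python
# Illustrative sketch: verify that a BMI quoted in AI-drafted text can
# actually be derived from the structured record. Record fields are invented.

def check_bmi(quoted_bmi: float, record: dict) -> str:
    height_m = record.get("height_m")
    weight_kg = record.get("weight_kg")
    if height_m is None or weight_kg is None:
        # The AI had no basis for a BMI: exactly the hallucination scenario.
        return f"FLAG: BMI {quoted_bmi} quoted but height/weight not recorded."
    derived = weight_kg / height_m ** 2
    if abs(derived - quoted_bmi) > 0.5:
        return f"FLAG: quoted BMI {quoted_bmi} differs from derived {derived:.1f}."
    return "BMI consistent with the record."

print(check_bmi(16.8, {"weight_kg": 45.0}))  # height missing: flagged
```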


Key Values

When it comes to AI in healthcare, there are several key values that should be established to ensure responsible and ethical use of AI technologies:

1. Patient safety and well-being: The primary value should always be the safety and well-being of patients. AI systems should be designed, tested, and implemented with the goal of improving patient outcomes and reducing the risk of harm.
2. Data privacy and security: Protecting patient data is essential for maintaining trust in AI systems. Healthcare organisations should prioritise data security and adhere to privacy regulations, such as HIPAA in the United States, to safeguard sensitive patient information.
3. Transparency and accountability: AI systems in healthcare should be transparent, explainable, and auditable to ensure that healthcare professionals and patients can understand the basis for AI-driven decisions. This transparency promotes trust and allows for accountability in the event of errors or malfunctions.
4. Equity and inclusivity: AI systems should be designed and trained to avoid perpetuating existing biases and inequalities in healthcare. This includes ensuring diverse and inclusive training data, as well as ongoing monitoring and auditing to identify and address potential biases.

Expanding on the patient safety value, it is reasonable to ask what precisely needs to be done to assure design, demonstrate effective testing and ensure patient safety in implementation.


Assuring Patient Safety

To assure patient safety in AI-based healthcare systems, several steps should be taken during the design, testing, and implementation phases:

Design

  • Involve multidisciplinary teams, including clinicians, data scientists, and ethicists, to ensure that AI systems align with clinical best practices and ethical principles.
  • Conduct thorough literature reviews and consult clinical guidelines to inform the design of AI algorithms.
  • Use human-centred design principles to ensure that AI systems are intuitive, user-friendly, and aligned with clinical workflows.

Testing

  • Test AI systems using diverse and representative data sets to ensure accuracy and generalisability across different patient populations (a sketch of a simple subgroup check follows this section).
  • Perform rigorous, independent validation studies to assess the safety, efficacy, and reliability of AI systems in real-world clinical settings.
  • Continuously monitor and evaluate AI systems after deployment to identify potential safety issues and opportunities for improvement.

Implementation

  • Develop clear guidelines and protocols for integrating AI systems into clinical workflows, including roles and responsibilities for human oversight and decision-making.
  • Provide comprehensive training and support to healthcare professionals to ensure proper use and interpretation of AI-generated results.
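To make the subgroup testing point concrete, here is a minimal sketch of a per-subgroup accuracy check. The data, the age-based groups and the 80% threshold are synthetic placeholders, not a validated methodology.

```python
# Illustrative sketch: checking a model's accuracy per patient subgroup to
# spot uneven performance. Data and groups are synthetic placeholders.
from collections import defaultdict

# (subgroup, true_label, predicted_label) triples - synthetic examples.
results = [
    ("age<65", 1, 1), ("age<65", 0, 0), ("age<65", 1, 1), ("age<65", 0, 0),
    ("age>=65", 1, 0), ("age>=65", 1, 1), ("age>=65", 0, 0), ("age>=65", 1, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in results:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in total:
    accuracy = correct[group] / total[group]
    marker = "  <-- investigate" if accuracy < 0.8 else ""
    print(f"{group}: accuracy {accuracy:.0%}{marker}")
```

A model that performs well on average can still fail a particular population; reporting results per subgroup, rather than as a single headline figure, is what surfaces that.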


Spotlight on Ethical Considerations

There is an emergent role for ethicists working alongside the development and deployment of AI solutions. Such roles can help seek assurance and develop appropriate ethical boundaries. The ethical considerations reflect some of the clinical safety concerns but may be summarised as:

1. Transparency and accountability: AI systems in healthcare should be transparent, explainable, and auditable to ensure that healthcare professionals and patients can understand the basis for AI-driven decisions. This transparency promotes trust and allows for accountability in the event of errors or malfunctions.
2. Equity and inclusivity: AI systems should be designed and trained to avoid perpetuating existing biases and inequalities in healthcare. This includes ensuring diverse and inclusive training data, as well as ongoing monitoring and auditing to identify and address potential biases.
3. Privacy and security: AI systems rely on large amounts of sensitive patient data, which must be protected from unauthorised access or breaches to maintain patient trust and privacy.
4. Autonomy and informed consent: Patients should be informed about the use of AI systems in their care and given the opportunity to provide informed consent. This includes understanding the potential benefits, risks, and limitations of AI-based interventions.
5. Responsibility and liability: It is important to establish clear guidelines for responsibility and liability in the event of errors or harms caused by AI systems in healthcare.

Responsibility and liability will need more careful consideration. We may train and expect clinicians to use AI professionally as a partner, and to do so vigilantly, but it would be difficult to hold an individual clinician responsible for a subtle hallucination or unrecognised bias. Is this the time for no-fault compensation models rather than approaches that blame individuals, and how should the AI industry contribute to this?

Comprehensive data on the prevalence and impact of AI-related clinical safety issues is currently limited due to the relatively recent introduction of AI in healthcare and the rapidly evolving nature of these technologies. However, as AI adoption in healthcare continues to grow, it will become increasingly important to monitor and assess these issues to ensure the responsible and safe implementation of AI systems.


In Conclusion

AI is an inevitable future component of healthcare services. If we act now, we can direct its development and deployment towards becoming a highly effective force for good for both patients and clinicians. In the words of Star Trek's Borg, "resistance is futile"; more optimistically, partnership is possible.

Safety assurance must be clinically led, multidisciplinary, hazard- and values-driven, and evidence based. It is an ongoing process, supported by training for both the human and the technical partners in this brave new world. AI will give clinicians the ability to focus on what they do best whilst providing support for safe practice. Space will also be created to focus on safety culture, not just on safe clinical transactions.

Healthcare providers and the industry need to collaborate to deliver mechanisms for agile assurance. This may seem a little like laying track in front of a moving train, but that is better than acting as a digital Canute, or standing back to watch the AI train disappear into the distance having figured out how to lay its own tracks.

High Performing Teams- Think Like a Gardener


High Performance with Massive Heart

I strongly believe that the ability to develop, support, and empower high performing teams is fundamental to organisational success. The art and science of high performance at team level is part of Massive Heart Consulting's key offering to client organisations. We understand the theory but, more importantly, we have spent many years doing this. Our learning, training, and insights are now available to consulting clients. I have had the privilege of leading several high performing teams. This article will look at some of the theory but will also give personal reflections.

More Gardening than Sport

I love sport and consequently have a natural inclination to draw examples from it when discussing high performance. Whilst I hope readers will indulge me a little with this preference, I would like to suggest that the development of high performing teams is better described with a gardening metaphor. Teams need to be nurtured; their leaders need to tend, encourage, and develop them. They need to be "planted" in fertile organisational soil.

This means that merely pulling together a bunch of talented people, setting them a task, and expecting high performance is naïve. Teams need leadership, and particular types of leadership at that. They need to be effectively coached, supported, focused, and protected to do their best work. High performance cannot be built by following a recipe, nor will it simply evolve; like a magnificent garden, it needs to be nurtured.

Indulge Me in a Sporting Example

The best description of a high performing team that I have read does come from a sporting background. David Kirk, who captained the New Zealand All Blacks rugby team in the late 1980s and later worked for McKinsey, writes eloquently about his experience of leading high performance, world class rugby teams (David Kirk). He draws the distinction between "good" and "world class". In doing so, he identifies some key attributes. World class team members have a compelling common vision, are consummate in what they do as individuals, are forever seeking improvement, are diverse, and can manage internal team tensions. From the outside they deliver consistently, with apparent ease and enjoyment: a type of collective "flow state".

High Performance Built by Master Craftsmen

Articles about high performance tend to describe attributes observed in high performing teams. There are many entitled "The X (insert a number of your choosing) Features of High Performing Teams". These are useful primers but, in my view, they oversimplify the nurturing process that develops and maintains high performance. They almost suggest that a team can be built in the same way as a piece of flat-pack furniture. In reality, teams are more like fine furniture than self-assembly bookcases, and their leaders must be time-served master craftsmen. This is not just my opinion: David Hanna, in his book "Designing Organisations for High Performance"[1], comments that "developing high performance is not a quick fix activity".

A Compelling Shared Vision

The literature on high performance is consistently supportive of the idea that a high performing team will have a shared vision. This is not just a "nice to have" or a "direction of travel"; for the team it is a raison d'être. It may transcend the goals of the organisation: for example, a world class clinical safety team in a software company may be focused on the patient whose care will be supported by the software, not just on providing safety assurance of the product. High quality work matters to team members, and they believe that their work matters within the business sector in which they operate.

Focus on Continuous Improvement and Learning

High performing teams are also focused on continuous improvement in what they do; they are individually and collectively reflective about their performance and potential improvements. Moreover, they understand that what has worked well historically may need to be adapted or even radically overhauled in the future. Key to the team's function is a focus on the vision and on performance; they are not wedded to a methodology. Indeed, they seek to learn from others, from experience, and through study.

Psychological Safety

Team members have strong psychological safety: they know that they have the support of their colleagues and their leader, and that they are respected within the wider organisation and by its stakeholders.

Diversity

High performance is boosted by diversity. "The gains from diversity in the workplace are not just moral. Top quartile companies for diversity are more likely to financially outperform industry medians," says Stephen Cappello, Senior Manager of Psychology at Thomas International, drawing upon research by McKinsey (diversity and performance). The ability of teams to embrace diversity is indicative of their ability to supercharge their performance.

Leading

Leading high performance is highly rewarding but challenging. As a leader, one must gain the confidence of the team, manage the organisational system in which the team is working, provide "air cover", and deliver resources. There is an art in knowing when to step up, for example in conversations with customers, and when to allow empowered team members to manage situations. Leaders will generally coach, support, advocate, and manage boundaries, with the odd interjection to refocus or make a minor course adjustment. Above all, leaders need to be credible, do their jobs exceptionally, and deliver for the team. High performance leaders know their team members as individuals and as professionals, and are supportive in both capacities. Feedback from high performing teams suggests that leaders need to be approachable, composed, and values driven.

Coaching

There are two significant coaching contributions to high performing teams. The first is to support the leader as a gardener, nurturing the team and either building or sustaining its performance, sometimes within an environment that is volatile, uncertain, complex, and ambiguous (often summarised as VUCA).

The second coaching intervention can be at team level, harnessing the team's determination for constant, never-ending improvement and the requirement for renewal and redesign to perform excellently within changing environments. Coaching sessions at team level must be consensual, with the external coach invited in to support the team in their quest for continued high performance. It cannot be imposed.

What are the symptoms?

Whilst it is likely that turnover within high performing teams will be low, new members will be needed at times, both to replace departing colleagues and to expand the team. New members will be welcomed, inducted, mentored, and supported as they become fully fledged team members.

The team will have confidence in their ability to deliver, stepping up to meet newly imposed deadlines or to address remedial issues within the organisation. Team members may be vociferous about the situation and clearly make their concerns known but will then “do the necessary”.

Within high performing teams, members will collaborate without direction to get work done to the required standard of excellence.

Team members will present new ideas or study topics at meetings to ensure that the team continues to learn. What will their job look like in a year? In five years? With the fast-paced development of AI?

Fertile Organisations for High Performing Teams

Securing the soil to nurture and maintain high performance is partly the responsibility of the team leader. That said, there are wider responsibilities within the organisation, particularly for its senior leadership. This is a key factor in high performance and was recognised by Tannenbaum as one of the 7 Cs of team effectiveness.[2] Teams and their outputs must be explicitly valued by the higher echelons of their organisation. Ultimately, high performance will wither if the seeds of that performance consistently fall on stony ground.

What I don’t agree with

Some literature strays into the territory of reporting structures and locations.

There is a suggestion that teams need to be co-located to achieve excellence. Whilst it is certainly true that a high performing football or basketball team needs to play at the same stadium, I do not believe that co-location is a wider precondition for high performance. Having led global teams distributed across time zones, I believe that dispersed teams can secure high performance. The ability of high performing teams to deliver is, at least in part, a function of the communication within the team, whether face to face or virtual.

I also strongly believe that teams do not necessarily need common line reporting. Given a compelling vision and the correct leadership, a team can form around a project and deliver high performance.

So, What Next?

If you are interested in conversations about high performance, then Massive Heart Consulting may warrant your consideration. We are passionate about high performance and can give you, your leaders, and your teams insights into both the theory and the practice. In the meantime, as you plant your cabbages, broccoli, and cauliflowers[3], think about your role as a cultivator of high performance.


[1] Hanna, David P. Designing Organisations for High Performance. Addison-Wesley Series on Organisational Development. ISBN 0-201-12693-1.

[2] Tannenbaum, S.I. & Salas, E. (2020). Teams That Work: The Seven Drivers of Team Effectiveness. Oxford University Press.

[3] Royal Horticultural Society, Jobs to do in March: Things to plant in March.

Move Over Dr Finlay, Dr FinlAI is the future

Move Over Dr Finlay. The Future may be Dr FinlAI- or is it?

Remembering Dr Finlay

Whilst the famous Dr Finlay's Casebook television programmes may be receding into the mists of time, the character has become a model for the "ideal GP".

Created by A. J. Cronin, the initial TV series was screened between 1962 and 1971.

Dr Finlay is a general practitioner beginning his practice in the fictional Scottish town of Levenford. His early career would have predated the founding of the NHS. In the stories he is charming and becomes well acquainted with his patients over many years. It is his trademark to know them from birth to death and so provide a continuous relationship.

The main characters were Dr Finlay, the junior partner in the practice, played by Bill Simpson; Dr Cameron, the craggy senior partner, played by Andrew Cruickshank; and Janet, their unflappable housekeeper and receptionist at Arden House, played by Barbara Mullen. The impact of the programme was such that Cruickshank was reportedly invited to speak at the BMA's annual dinner. Additionally, a song entitled Dr Finlay, sung by Andy Stewart, enjoyed a position in the UK charts for five weeks!

Impact, Evidence and Concern

The series was set in the 1930s, but it is noticeable that, even when it screened in the 1960s, GPs were expressing concern about it raising unrealistic expectations among patients. Its impact had considerable traction: even in 2011, forty years after the last episode of the original series was screened, the Daily Telegraph ran a headline reading "Farewell Dr Finlay: patients think GPs are rude and rich" (The Telegraph, 5th December 2011)!

Despite all the developments in medicine since the programme was set, the values of continuity of care and access to general practitioners still resonate with patients. They want to see a GP quickly when they perceive it is necessary, but also to establish an ongoing relationship with a trusted GP. It is not just about patients' perceptions of care, though. The Nuffield Trust's publication from 2018[1] sets out the benefits of continuity of care, including improved outcomes, patient experience, and costs.

The Modern Dr Finlay

Delivery of the apparent nirvana where patients have instant access to “their doctor” is challenging. Let’s look at the “Dr Finlay comparators” to illustrate this:

  • The average life expectancy has increased by around 20 years since Dr Finlay's time in Levenford.[2]
  • Healthy life expectancy has also increased over time, but not as much as life expectancy, so more years are spent in poor health. Whilst Dr Finlay would have been focused on Scottish data, it is telling that, although a male in England could expect to live 79.4 years in 2018–20, his average healthy life expectancy was only 63.1 years, i.e. he would have spent 16.3 of those years (20 per cent) in "not good" health. In 2018–20 a female in England could expect to live 83.1 years, of which 19.3 years (23 per cent) would have been spent in "not good" health.[3] And although females live an average of 3.7 years longer than males, most of that time (3 years) is spent in poor health. The modern Dr Finlay is busy providing care to an expanded cohort of older patients with health challenges.
  • The modern-day Dr Finlay will be providing many more preventative care interventions. These may include statins, anticoagulants, ACE inhibitors, and beta blockers. All of these have an evidence-based rationale for prevention but require monitoring with periodic blood tests and consultations, and managing possible side effects or patient concerns about the latest newspaper headline.
  • Whereas the original Dr Finlay would have seen a much higher proportion of his population smoking (82% of men and 41% of women in 1948)[4], the modern Dr Finlay would see fewer smokers but far more obesity. Almost 75% of people aged 45–74 in the UK are living with overweight or obesity, as are over 20% of children before they leave primary school. There are also significant health inequalities: those from lower income areas are significantly more likely to be living with obesity than those from more affluent areas.[5]
  • Since the programme screened in the early 1960s, the number of GPs per 2,000 head of population has increased only slightly. Whilst the NHS has seen the overall number of doctors rise from 1.4 per 2,000 at its inception in 1948 to 5.1 per 2,000, the number of GPs has only increased from 1 per 2,000 in 1961 to 1.6 per 2,000, despite the increased workload.[6]
  • In 2023, the current Dr Finlay is more likely to be a female, salaried GP, working either within a portfolio career or part time.[7] She may have strongly considered a career in hospital medicine before electing to pursue life as a GP.
  • Her practice may well have GP vacancies.[8] A Pulse survey in 2023 suggested that nearly one in five GP positions in the UK were unfilled, the highest rate since the data has been collected: of the total number of GPs needed, 18.5% of positions were unfilled. Only one in ten vacancies were being advertised, with GPs saying they had given up trying to recruit for the remaining 8.5%. Two thirds of practices (66%) said they would hire at least one more GP if there were no problems in recruiting, while 32% said they would hire at least two more.[9]
  • She faces an increasing workload year on year.[10]
  • Whilst recognising that she needs a multidisciplinary team and appropriate premises for modern general practice, she may face challenges in securing resources for these.[11]

The Finlay Manifesto

As a GP, I believe strongly that patients and the wider health system need an effective model of general practice. I think the evidence supports this. It cannot, however, be a "one size fits all" model, as there are strong variations in practice populations and geographical locations.

The future of General Practice should be a priority. A resilient, effective general practice model will provide accessible, cost-effective care for patients with the potential to ameliorate the increases in demand upon hospital care.

Workforce is the single most important element of realising this approach and will require:

  • System respect for clinicians working within primary care settings.
  • Effective models of contracting for healthcare provision which secure high-quality services and support both those working within the service and the organisations responsible for delivering the services.
  • Great training experiences which excite clinicians to pursue a career in primary care.
  • New models of multidisciplinary care which provide safe, accessible, high quality of patient care whilst delivering sustainable services.
  • Technology enabled clinicians and organisations using technical solutions that support effective care which is coordinated across practice teams and wider primary healthcare teams.
  • A focus on developing continuity at a practice or organisation level, which becomes part of patients' "felt experience" of using services repeatedly. In most instances, continuity as delivered by the good Dr Finlay is not achievable.

We cannot have a single model for providing and commissioning services. When I first started practising, there was an initiative called Community Oriented Primary Care, which advocated practices actively recognising local needs and developing services to respond to them. Health visitors were seen as potential public health nurses who could drive some of the local needs assessments and work as part of the primary healthcare team to develop appropriate service responses: empowering primary care to develop services that meet local needs.

The Future Dr FinlAI

So how can technology support future models? My assertion is that it should empower clinicians rather than replace them or dictate care.

We must recognise the value of human interactions in securing holistic care that considers the physical, psychological, and social elements of a person's condition, and that allows us to effectively co-design care with our patients.

At present, primary care systems provide a range of alerts to clinicians: proposing potential interventions, identifying potential gaps in care, and suggesting opportunities to use more cost-effective medication. When in clinical practice, I find these computer interventions frequently ill-timed within the workflow, with the effect that I must cancel, check whether the alert is relevant, and then restart, simmering with outrage. Alerting is also unsophisticated, raising a flag about a drug whether or not it is relevant to that patient. In addition to poor calibration and timing, alerts are often frankly rude, documenting in the record that the awful Dr Lee had ignored the computer's advice. All of this must improve as we move forward.
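To illustrate the direction of travel, the sketch below shows patient-context filtering of drug alerts: an alert fires only when it is relevant to the patient in front of the clinician. This is a hypothetical illustration, not any vendor's actual alerting engine; the rule structure, condition labels, and thresholds are all assumptions.

```python
# A hypothetical sketch of patient-context alert filtering: only raise
# a drug alert when it is actually relevant to this patient. Condition
# codes and thresholds are illustrative, not a real terminology.
from dataclasses import dataclass, field

@dataclass
class Patient:
    conditions: set[str]        # coded problem-list entries (illustrative)
    egfr: float | None = None   # renal function, if known

@dataclass
class DrugAlertRule:
    drug: str
    message: str
    relevant_conditions: set[str] = field(default_factory=set)
    min_egfr: float | None = None  # fire when renal function is below this

def relevant_alerts(patient: Patient, drug: str,
                    rules: list[DrugAlertRule]) -> list[str]:
    """Return only the alerts that apply to this patient's context,
    rather than flagging the drug for every patient."""
    fired = []
    for rule in rules:
        if rule.drug != drug:
            continue
        condition_hit = bool(rule.relevant_conditions & patient.conditions)
        renal_hit = (rule.min_egfr is not None
                     and patient.egfr is not None
                     and patient.egfr < rule.min_egfr)
        if condition_hit or renal_hit:
            fired.append(rule.message)
    return fired

rules = [DrugAlertRule("metformin",
                       "Review metformin dose: impaired renal function",
                       min_egfr=45.0)]
patient = Patient(conditions={"type-2-diabetes"}, egfr=38.0)
print(relevant_alerts(patient, "metformin", rules))  # fires: eGFR below 45
```

Even this trivial relevance check would suppress the blanket drug flags I describe above, and better timing within the workflow would follow from the same principle: evaluate context before interrupting the clinician.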

There is no doubt that Artificial Intelligence will have the ability to deliver more sophisticated assistance, but it must be developed and deployed to address specific pain points. In a world, for example, where specialist services increasingly demand templated referrals rather than elegant "Dear Dr……… Kind Regards" letters, systems could do a lot more to help GPs navigate the local system.

Decision support will also need to evolve to help GPs and other clinicians hold the complex discussions required by patients with multimorbidity. I have previously blogged on this (multimorbidity). Effective technology support for these discussions, and the ability to document their outcome in a system-wide care plan, will, I believe, become essential.

Patient engagement in their own care is also an important element of future general practice. How can we activate patients to become active participants in their care: submitting home readings directly into their records, adding to their care plans, and reviewing their records? I am sure that this will substantially improve the safety of care and personalise the way in which care is delivered (a minimal sketch of such a submission follows). There will, no doubt, be some technophobic objections, but I am personally struck by the fact that nearly all telephone consultations are now to mobile numbers, and digital confidence is clearly growing.
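As an illustration of how a home reading might flow into the record, the sketch below posts a blood pressure reading as an HL7 FHIR R4 Observation, a widely used interoperability standard. The endpoint URL and patient reference are placeholders, and a real deployment would require authentication, consent, and clinical review of incoming data.

```python
# A minimal sketch of a patient-submitted home blood pressure reading,
# expressed as an HL7 FHIR R4 Observation. The endpoint and patient
# reference are placeholders; authentication is omitted for brevity.
import requests

FHIR_BASE = "https://fhir.example.org/r4"  # placeholder endpoint

observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/observation-category",
        "code": "vital-signs"}]}],
    # LOINC 85354-9 is the standard code for a blood pressure panel.
    "code": {"coding": [{"system": "http://loinc.org", "code": "85354-9",
                         "display": "Blood pressure panel"}]},
    "subject": {"reference": "Patient/example"},  # placeholder patient
    "effectiveDateTime": "2024-03-01T08:30:00Z",
    "component": [
        {"code": {"coding": [{"system": "http://loinc.org", "code": "8480-6",
                              "display": "Systolic blood pressure"}]},
         "valueQuantity": {"value": 132, "unit": "mmHg"}},
        {"code": {"coding": [{"system": "http://loinc.org", "code": "8462-4",
                              "display": "Diastolic blood pressure"}]},
         "valueQuantity": {"value": 84, "unit": "mmHg"}},
    ],
}

response = requests.post(f"{FHIR_BASE}/Observation", json=observation,
                         headers={"Content-Type": "application/fhir+json"})
print(response.status_code)  # 201 Created on success
```

The value of a standards-based submission like this is that the reading lands in the record as structured, coded data, available to decision support and care planning, rather than as free text in a message.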

Healthcare data has the potential to move us towards better bespoke care for individual people, improved care for cohorts and better service planning at a system level. For people with complex needs, we can use interoperability and coordination to help provide a concierge service to navigate the healthcare system, potentially reducing admissions, improving care experiences, and improving outcomes.

I personally feel that the future of primary care will see significant technological developments which support better care. The future Dr FinlAI will still be a human, technologically supported to provide and participate in excellent care, but bringing in turn the very important interpersonal skills and "gut feelings" that make for brilliant care.

It may be that Dr Finlay of Levenford would thoroughly approve of the care delivered by Dr FinlAI.


[1] https://www.nuffieldtrust.org.uk/sites/default/files/2018-11/improving-access-and-continuity-in-general-practice-summary-final.pdf

[2] https://www.statista.com/statistics/1040159/life-expectancy-united-kingdom-all-time/

[3] What Is Happening to Life Expectancy in England? | The King’s Fund 2022

[4] https://ash.org.uk/uploads/Smoking-Statistics-Fact-Sheet.pdf?v=1697728811

[5] https://www.rcplondon.ac.uk/news/world-obesity-day-let-s-talk-about-drugs

[6] https://www.nuffieldtrust.org.uk/resource/the-nhs-workforce-in-numbers#toc-header-6

[7] https://assets.publishing.service.gov.uk/media/5a7ff981ed915d74e33f7b37/CfWI_GP_in-depth_review.pdf

[8] https://assets.publishing.service.gov.uk/media/5a7ff981ed915d74e33f7b37/CfWI_GP_in-depth_review.pdf

[9] https://www.pulsetoday.co.uk/news/breaking-news/one-in-five-gp-positions-unfilled-as-vacancy-rates-reach-record-levels/

[10] https://www.rcgp.org.uk/getmedia/3613990d-2da8-458a-b812-ed2cf6d600a6/RCGP-Brief_GP-Shortages-in-England.pdf

[11] https://www.rcgp.org.uk/getmedia/3613990d-2da8-458a-b812-ed2cf6d600a6/RCGP-Brief_GP-Shortages-in-England.pdf

Clinical Safety- Is it driving improved patient safety?

Clinical Safety is a big issue

About 1 in 10 patients are harmed in healthcare, according to the World Health Organisation (WHO). Half of this harm is classed as preventable, and 50% of harm is attributed to medications, justifying a focus on the safety of medication management systems. https://www.who.int/news-room/fact-sheets/detail/patient-safety

This level of patient harm, WHO suggests, reduces global economic growth by 0.7% per annum, amounting to trillions of dollars. The same body suggests that investment in reducing patient harm can lead to better patient outcomes and significant cost savings.

Investment in clinical software is a key pillar for securing future patient safety. This article suggests that healthcare should look beyond current clinical safety assurance of such technologies and consider how best to secure safety improvements through the design, development, and deployment of healthcare software. To ensure that new opportunities for supporting safer care are fully realised, manufacturers of clinical software need to provide leadership.

Key Areas for Consideration

I have concluded that there are at least four important elements to software development as a component of safer patient care. These are:

  1. Support for clinicians at the point of care to deliver safe interventions. At a basic level this includes ensuring that the clinician has access to patient records and can easily navigate the system to prescribe, order tests and interventions, and track progress. It also extends to providing alerts, for example for allergies and interactions, prescribed "best practice" workflows, and clinical decision support.
  2. Systems must support the delivery of coordinated care. This includes care within the hospital, clinic, or primary care centre and extends to the wider system. Key questions are: How can software systems better support multidisciplinary team working? Can they underpin more effective transitions for clinicians, such as shift handovers? Can continuity of care be improved for patients through better discharges? Do new systems provide the level of interoperability needed to ensure more comprehensive access to records and plans across local healthcare systems?
  3. Software also offers the opportunity to empower people within the delivery of their own care. This includes access to records and the ability to contribute readings and preferences for care. The patient is a much-ignored layer of safety: research suggests that providing patients with access to their electronic health records can improve medication management safety. This should not be a surprise, as individual patients are likely to be more "eagle eyed" than the busy clinician.
  4. Finally, software analytics that allow healthcare providers to examine key trends and support quality improvement and performance management. For example, does a new staffing model on a ward influence the number of falls, pressure injuries, or healthcare acquired infections? (A minimal sketch of this kind of analysis follows this list.)
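As a simple illustration of this kind of analytics, the sketch below compares a ward's fall rate per 1,000 occupied bed days before and after a staffing change. The data and the change date are entirely hypothetical; a real analysis would also need to consider case mix, seasonality, and statistical significance.

```python
# A minimal sketch of ward-level safety analytics: fall rate per
# 1,000 occupied bed days, before vs after a staffing change.
# The daily figures below are invented for illustration.
import pandas as pd

incidents = pd.DataFrame({
    "date": pd.date_range("2023-11-01", periods=120, freq="D"),
    "falls": [0, 1, 0, 2] * 30,          # hypothetical daily fall counts
    "occupied_bed_days": [28] * 120,     # hypothetical daily occupancy
})

CHANGE_DATE = pd.Timestamp("2024-01-01")  # date the new model began (assumed)
incidents["period"] = (incidents["date"] >= CHANGE_DATE).map(
    {True: "after", False: "before"})

# Aggregate, then express falls as a rate per 1,000 occupied bed days
# so that periods of different lengths and occupancy are comparable.
summary = incidents.groupby("period")[["falls", "occupied_bed_days"]].sum()
summary["rate_per_1000_bed_days"] = (
    1000 * summary["falls"] / summary["occupied_bed_days"])
print(summary)
```

Expressing incidents as a rate per 1,000 occupied bed days, rather than a raw count, is what allows a fair before-and-after comparison when occupancy or time periods differ.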

A change in focus

To date, the main clinical safety focus has been on the safety of software products. In a sense this obeys the dictum "first do no harm" and is understandable; new technologies, like new drugs, must prove their safety. The means to provide that assurance are relatively well established, particularly in the NHS. Safety considerations now need to move to a much more proactive approach, one that seeks to ensure that new opportunities to improve safety are fully realised as technology develops. Furthermore, approaches need to ensure that software development focuses on functionality that will make a real-world difference to patient safety. Development organisations must not waste time on low impact functionality.

How to Change the Focus?- A Dynamic Approach

I believe that this requires:

  • Clinical safety clinicians who are very familiar with the clinical workflows being supported by the software. It is vital, for example, that medicines administration software is supported by clinicians who have undertaken drug rounds and understand the process in the real world.
  • Clinicians with safety expertise being embedded within development teams. In terms of agile software development this means, at a minimum, attending sprint reviews during the design and development process. Clinical safety must not be an "end of project" assurance process.
  • Greater collaboration between customer and manufacturer clinicians to seek opportunities for further safety improvements. I am personally a strong advocate of secondments that bring clinicians from healthcare organisations into development organisations for six to twelve months. This has all sorts of career benefits but, in this regard, will improve clinical input into software development. I also strongly support clinicians whose "day job" is technology safety and development maintaining some clinical practice.
  • Digital clinical safety leadership having a high profile within both customer and manufacturer organisations. We are in the foothills of this development, but Chief Clinical Information Officers are now emerging as senior healthcare figures and, in some cases, becoming board members.

Conclusion

This clinically driven, diverse partnership approach will, I believe, lead to the development of software that substantially improves the safety of healthcare for patients. It will allow us collectively to realise fully the opportunities offered by Artificial Intelligence, Machine Learning, and analytics, as well as a host of other opportunities yet to arrive.

Assuring software safety will continue to be essential but let us also focus on designing and developing for safety.

© 2025 Massive Heart Blog