
Call to Action
AI is moving quickly in terms of its capability and sophistication. The requirements of clinical safety assurance with respect to AI software in healthcare are also likely to change quickly. We need to start discussions now! AI can potentially move faster than regulations and research, so we need training, values, and ethics in place. We also need to consider approaches to clinical safety assurance, liability, and peer review.
A Nice Dilemma – Surely we should be happy not fearful?
On one hand we have a significant global clinical safety challenge. People experience harm even in the most developed healthcare systems. Additionally, there are massive inequalities in the delivery of safe care across the globe.
Our aspirations to deliver safer care may be thwarted by workforce challenges, with gaps in rotas and non-ideal staffing levels. Clinicians turn up to work more stressed and, at times, focused on self-preservation alongside their professional obligations to patients.
On the other hand, we have the “coming wave” of AI. This offers much around both the delivery of safe care and the collection and analysis of care safety data. But it brings concerns about safety, data protection and its impact on traditional clinical and managerial roles within healthcare.
Mustafa Suleyman poses the question: “How do we keep a grip on the most valuable technologies ever invented as they get cheaper and spread faster than any in history?” But he recognises that the progress of AI is inevitable.
My view as a clinician with a focus on patient safety is that we cannot be digital King Canutes or technology Luddites. Healthcare needs to prepare now to optimise the safety of AI in healthcare. We should seek to educate our end users. We should seek to optimise the human impact on patient care, care planning and safety culture. To mix metaphors, it is better to have a tiger by the tail than risk the wild wild west!
The partnership between human clinicians and AI has the potential to improve the clinical safety of care for all. Furthermore, we can improve the care experience for both patients and clinicians.
Current State of Play in Healthcare – A Brief Overview
Whilst by no means ubiquitous, AI is already used for:
1. Diagnostics and Imaging: AI algorithms are being used to interpret X-rays, MRIs, and other imaging modalities, often with greater accuracy and speed than human radiologists. This improves the detection of diseases such as cancer, pneumonia, and brain anomalies.
2. Personalised Medicine: By analysing vast amounts of genetic data, AI helps tailor treatments to the individual, improving outcomes. This approach is particularly relevant in oncology and chronic disease management.
3. Predictive Analytics: Hospitals and healthcare providers use AI to predict patient admission rates. They also use AI to identify potential disease outbreaks, optimising staffing and resources while improving patient care.
4. Robot-Assisted Surgery: Robots, guided by AI, can perform surgeries with precision and flexibility beyond human capabilities. This leads to less invasive procedures, reduced recovery times, and improved surgical outcomes.
5. Virtual Health Assistants: AI-powered chatbots and virtual assistants provide 24/7 support to patients. They offer medical advice, help to manage chronic conditions, and remind patients to take their medication.
Predictions suggest that during 2024 we will see:
1. Expanded Telemedicine Capabilities: AI will enhance telehealth platforms with more advanced diagnostic tools and personalized treatment options, making healthcare more accessible, especially in remote areas.
2. Integration of AI in Electronic Health Records (EHRs): AI will play a more significant role in EHRs by automating data entry, highlighting relevant patient information, and suggesting treatment options, which can reduce administrative burdens and improve patient care.
3. Advances in Predictive Analytics: With more data and improved algorithms, AI’s ability to predict disease outbreaks, patient admissions, and even individual health risks will become more accurate, enabling preventive measures and personalized health plans.
4. Ethical AI Development: As AI becomes more integrated into healthcare, there will be an increased focus on addressing ethical considerations, such as data privacy, algorithmic bias, and ensuring AI supports rather than replaces human healthcare providers.
As a Primary Care Doctor – The Thrilling Future
As a GP I want to see how my practice will be supported by AI. So, what will I see?
1. Enhanced Diagnostic Support: AI-powered diagnostic tools can help primary care clinicians in interpreting medical images, lab results, and patient symptoms. These tools can provide more accurate and timely diagnoses, allowing better-informed decisions about patient care.
2. Personalised Treatment Plans: AI algorithms can analyse vast amounts of patient data to identify patterns and trends that may not be immediately apparent to human clinicians. This can help tailor treatment plans to individual patients based on their unique characteristics, medical history, and genetic makeup. It also offers potential help for managing multimorbidity elegantly, as discussed in my previous blog.
3. Improved Patient Monitoring: AI-enabled remote monitoring devices can track patients’ vital signs and health metrics in real-time, allowing GP practices to monitor patients’ health status more closely between face-to-face visits. This can help identify potential issues early on and enable intervention before they escalate.
4. Efficient Administrative Tasks: AI can streamline administrative tasks such as patient scheduling and coding, freeing up more time for clinicians to focus on patient care. Automated systems can handle routine administrative duties, allowing clinicians to spend more time with patients and provide better-quality care.
5. Virtual Health Assistants: Virtual health assistants powered by AI can help answer patient queries, provide basic medical advice, and schedule appointments. These assistants can augment practice by handling routine inquiries, allowing focus on more complex patient interactions.
6. Decision Support Tools: AI-driven decision support systems can provide evidence-based recommendations and guidelines at the point of care, helping doctors to make more informed decisions about treatment options, medication choices, and preventive care measures.
7. Continuing Medical Education: AI-powered platforms can provide personalized learning resources and continuing medical education (CME) opportunities tailored to specific areas of interest and professional development needs. These platforms can help professionals to stay updated on the latest medical advancements and best practices in primary care.
Wow! What is not to like? Overall, the introduction of AI in healthcare is likely to enhance the efficiency, accuracy, and quality of primary care services, enabling the provision of better care to patients while also improving clinical workflows and clinician job satisfaction.
But there are Concerns.
It’s essential to embrace these technologies responsibly and ensure that they are integrated into clinical practice in a way that prioritises patient safety, privacy, and ethical considerations.
Achieving these benefits, however, will depend upon successful deployment and adoption, addressing potential issues relating to data privacy, and addressing the digital disparity between different regions and populations. Governments, healthcare providers, clinicians and patients will all require assurances regarding the safety of AI in supporting care. As a patient, it would be essential to know that the AI support is accurate, recommends best practice, acts without bias and can explain its recommendations.
Clinical Safety and Medical Device Regulation – Should we regulate?
There appears to be consensus that AI should be subject to some degree of regulation. Even those at the cutting edge, such as Mira Murati of OpenAI, the company behind ChatGPT, express the opinion that regulation is required. Asking Inflection AI’s Pi whether AI in healthcare should be regulated elicits an affirmative response. The arguments are as follows:
1. Ensuring safety and effectiveness: AI systems in healthcare can have a profound impact on patient care, and any errors or malfunctions could result in serious harm. Regulation can help ensure that AI technologies are developed, tested, and implemented with patient safety and effectiveness as top priorities. This includes verifying that AI systems are accurate, reliable, and perform as intended in real-world clinical settings.
2. Protecting data privacy: AI systems often require large amounts of patient data to function effectively, and this data can be highly sensitive. Regulation can help ensure that patient privacy is protected by establishing standards for data security, anonymization, and consent processes. This is crucial for maintaining patient trust in AI technologies and encouraging their widespread adoption.
3. Promoting transparency and accountability: AI systems can be complex and opaque, making it difficult for healthcare professionals and patients to understand how decisions are being made. Regulation can promote transparency by requiring that AI algorithms and decision-making processes are explainable, auditable, and subject to oversight.
So yes, but how?
I personally do not believe that current regulatory approaches such as EU MDR are sufficiently agile to allow the introduction of AI based software to rapidly realise benefits for patients and the clinicians providing care for them. Healthcare workforce challenges and health inequalities mean that we need to match the cutting-edge technology with cutting-edge assurance.
It is my belief that the glacial progress of current medical device regulation and the highly conservative approach taken by regulators as well as those interpreting regulations do not offer a way forward for the optimal introduction of this technology. Systems which develop greater assurance agility will gain significant safety advantages for their population.
I say this as this technology has the potential to rapidly improve the accessibility and safety of healthcare. We must start from a viewpoint that current approaches are not as safe for patients as we would like, and that AI software can provide a novel solution which improves safety.
Failure to do this risks replicating the sentiment that healthcare needs to “return to paper” when a software defect with clinical safety issues is identified. There is no room, in my view, for Luddites, because healthcare needs AI if it is going to provide sustainable, equitable care in the future. Current services are delivered by a challenged workforce, are expensive and offer inequitable access at both a global and system level.
That does not mean that there are not risks that need to be assessed and mitigated. I strongly believe that the safety assurance required in the UK NHS offers a starting point. This places legal requirements on healthcare organisations (and therefore software manufacturers) to retain clinical safety officers and implement a safety assurance process which ensures that potential clinical safety hazards are identified. There is then an imperative to ensure appropriate mitigation is determined and implemented, with a final assessment of any residual risk. This process is summarised in the development of a Hazard Log and the production of a Safety Case which is updated for every release. This then becomes a transparent, clinically led assurance process.
I believe that this will then enable us to define potential hazards that we can mitigate and describe that mitigation in a safety case for each product and use case.
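To make this concrete, here is a minimal sketch of what a single hazard log entry might look like in code, with a simple likelihood-times-severity scoring. The field names, the scoring bands and the example hazard are illustrative assumptions, not the exact schema prescribed by the NHS standards.

```python
from dataclasses import dataclass, field
from datetime import date


def risk_rating(likelihood: int, severity: int) -> str:
    """Illustrative 5x5 scoring: likelihood x severity bucketed into bands."""
    score = likelihood * severity
    if score <= 4:
        return "acceptable"
    if score <= 12:
        return "tolerable - mitigate and monitor"
    return "unacceptable - reduce before release"


@dataclass
class HazardLogEntry:
    hazard_id: str
    description: str
    cause: str
    clinical_effect: str
    existing_controls: list[str]
    likelihood: int            # 1 (very low) .. 5 (very high)
    severity: int              # 1 (minor) .. 5 (catastrophic)
    further_mitigation: str
    owner: str                 # e.g. the named Clinical Safety Officer
    reviewed: date = field(default_factory=date.today)

    @property
    def residual_risk(self) -> str:
        return risk_rating(self.likelihood, self.severity)


# Example entry for an AI-specific hazard discussed later in this post.
entry = HazardLogEntry(
    hazard_id="HAZ-014",
    description="AI drafts a note containing a plausible but fabricated value",
    cause="Model hallucination when source data is missing",
    clinical_effect="Incorrect information carried forward into ongoing care",
    existing_controls=["Clinician review of every AI-drafted note"],
    likelihood=3,
    severity=4,
    further_mitigation="Automated check that numeric values exist in the source record",
    owner="Clinical Safety Officer",
)
print(entry.hazard_id, entry.residual_risk)  # HAZ-014 tolerable - mitigate and monitor
```

Each release of the product would then revisit entries like this one, with the Safety Case recording the residual risk that remains after mitigation.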
Hazards New and Old and their Mitigation – A Professional View
My experience with respect to currently deployed healthcare software is that there are common high-level risks across products no matter who manufactures them and that these are largely independent of the product’s precise role.
An analysis across software including pre-hospital solutions, medications management solutions, full electronic medical records and laboratory information systems suggests that the following high-level hazards apply to most if not all systems:
• system access;
• data accuracy;
• linked systems;
• messaging failure between product components; and
• clinical workflows within the software.
In most instances, the hazards identified will be mitigated through software design and development before being assured through testing. Both manufacturers and customers become concerned where the mitigation of a hazard with clinical safety risk requires training or a change in business processes.
Whilst AI tools will still have the above hazards, I would suggest that there are additional or more prominent hazards that we might anticipate as AI is introduced.
1. Inaccurate or biased diagnoses: AI systems may produce incorrect or biased diagnoses due to limitations in their training data or algorithms. This could lead to inappropriate treatment plans and potentially harm patients.
2. Data privacy and security breaches: As AI systems rely on large amounts of sensitive patient data, there is a risk of unauthorized access or data breaches, which could compromise patient privacy and trust in the healthcare system.
3. Over-reliance on AI: Healthcare professionals may become overly reliant on AI systems, leading to a decrease in critical thinking and clinical judgment skills. This could result in missed diagnoses or other errors that could harm patients.
4. Lack of transparency: AI systems can be complex and difficult to understand, making it challenging for healthcare professionals and patients to understand the basis for AI-driven decisions. This lack of transparency can erode trust in AI and potentially lead to incorrect decisions.
We need, therefore, to provide assurances that AI systems have not been trained on data, or built on algorithms, that carry inherent biases. The systems are unlikely to be programmed to be unfair; they seek to make neutral, objective responses based upon the data available.
It is also essential that we do not develop an over-reliance on the systems. AI should be viewed as a partner, not a “truth engine”: it can get things wrong, and do so in such a subtle way that clinicians need to be vigilant. Furthermore, AI will also defend its position when challenged about inaccurate calculations, so the clinician needs to be both vigilant and assertive with respect to potential errors.
When considering mitigating hazards in healthcare software and solutions, training has long been considered a potential mitigation. Where workforces are less stable with more transient staff (agency and locum clinicians for example), the delivery of organisational training is difficult. Consequently, we need training to be delivered much earlier at medical schools and as part of post graduate education at a whole professional level. This represents an imperative. Clinicians need to know how to surf on the coming wave!
One interesting concept is not to introduce a single AI instance at all. Before the Luddites seize on this, the idea would be to introduce two instances that can peer review each other, potentially moderating any bias or inaccuracy. We can, therefore, create a virtuous three-way partnership between the trained, vigilant clinician and the two AIs, as sketched below.
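A minimal sketch of how that three-way partnership might be wired up in software, assuming two independently configured model instances; `query_model` is a hypothetical placeholder rather than any specific vendor’s API.

```python
# Sketch of a dual-instance "peer review" step. `query_model` is a hypothetical
# placeholder for whichever model API is actually deployed.
def query_model(model_name: str, prompt: str) -> str:
    raise NotImplementedError("replace with a real model API call")


def peer_reviewed_answer(question: str) -> dict:
    # Ask one instance for a draft answer to the clinical question.
    draft = query_model("model_a", question)
    # Ask a second, independently configured instance to critique that draft.
    critique = query_model(
        "model_b",
        "Act as a cautious clinical reviewer. Identify any errors, omissions "
        f"or unsupported claims in this draft answer.\n\nQuestion: {question}\n"
        f"Draft answer: {draft}",
    )
    # Both the draft and the critique go to the clinician, who remains the
    # final decision-maker; nothing is auto-accepted.
    return {"draft": draft, "peer_review": critique, "requires_clinician_signoff": True}
```

The design point is that neither instance is trusted on its own: the clinician sees the draft alongside an independent critique and retains sign-off.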
How Large Language Models Learn – CPD for AI?
Models have been trained on a body of knowledge prior to their release. For example, ChatGPT’s knowledge is currently based upon information up until April 2023; beyond this, ChatGPT switches to using Bing in its search for information. Pi is trained up to September 2021, but its “creators and users” can help it stay up to date by providing the latest information and context with respect to discussions on specific topics. Healthcare will need to assess the risk of AI which is supporting clinical care being some months out of date. This may or may not be clinically relevant.
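One way to manage the cutoff risk is to hand the model current, locally curated guidance as context at query time rather than trusting its training data. Below is a minimal sketch of that idea; the guidance store and `query_model` call are hypothetical placeholders.

```python
# Sketch of supplying post-cutoff information as explicit context, rather than
# relying on what the model learned during training.
CURRENT_GUIDANCE = {
    "mpox": "Local guidance v3, updated this month: ...",  # curated, dated excerpts
}


def query_model(model_name: str, prompt: str) -> str:
    raise NotImplementedError("replace with a real model API call")


def ask_with_current_context(question: str, topic: str) -> str:
    guidance = CURRENT_GUIDANCE.get(topic, "No local guidance on file.")
    prompt = (
        "Answer using ONLY the guidance supplied below. If it is insufficient, "
        "say so rather than falling back on older training data.\n\n"
        f"Guidance (with date): {guidance}\n\nQuestion: {question}"
    )
    return query_model("clinical_assistant", prompt)
```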
Estimates suggest that it may take up to 17 years for evidence to be implemented in clinical practice, and for many years the UK’s British National Formulary provided stalwart support for clinical practice as a twice-yearly updated hard copy. But there are times when information needs to be updated quickly to be available to front-line clinicians (think pandemic or monkeypox).
The problem is that even when we train very capable models, it is still not clear how to keep them up to date or revise them when they make errors. As discussed, we may even have a requirement for additional AI systems “peer reviewing” the main system and identifying known errors. Even so, when do we update the principal system?
Understanding the foundation of an AI system’s learning is a fundamental requirement for safety assurance. If the model was trained on outdated, biased, or incomplete information, the care it supports may be compromised. The approach to updating the model is also important, as a robust and reliable model may rapidly become less useful in supporting care.
Let’s Hallucinate
When discussing “hallucinations” in AI within a clinical or healthcare context, it typically refers to instances where AI systems generate incorrect, misleading, or fabricated outputs. These errors can be particularly concerning in healthcare applications, where AI is used for diagnostics, treatment recommendations, disease prediction, and patient monitoring, among other applications. AI hallucinations can pose several potential risks to patients, including:
1. Misdiagnosis: An AI system might incorrectly identify a disease or condition that a patient does not have, based on its analysis of medical images, patient data, or other inputs. This can lead to unnecessary worry, further testing, and potentially harmful treatments for conditions that the patient does not actually have.
2. Missed Diagnosis: Conversely, an AI might fail to identify a condition that is present, leading to a lack of necessary treatment. This can allow a disease to progress unchecked, potentially leading to worsened outcomes for the patient.
3. Inappropriate Treatment Recommendations: AI systems that assist with treatment planning might suggest inappropriate or suboptimal treatments based on incorrect analysis of patient data or misinterpretation of clinical guidelines. This can lead to ineffective treatment, unnecessary side effects, and delays in receiving the correct treatment.
4. Patient Data Privacy and Security: Hallucinations in AI could also be symptomatic of deeper issues with data handling and processing. Incorrect outputs could result from unauthorized access to or manipulation of patient data, leading to breaches of patient privacy and confidentiality.
5. Erosion of Trust: Repeated instances of AI hallucinations can erode trust in healthcare AI applications among both healthcare providers and patients. This could lead to underutilization of potentially beneficial AI tools, scepticism towards new technologies, and a preference for more traditional, possibly less efficient, methods.
There is an excellent example quoted in The AI Revolution in Medicine, where a request is made for AI assistance in writing a discharge note for a female patient with anorexia. The discharge note includes a highly plausible Body Mass Index which is entirely hallucinated, as the AI did not have the information to calculate the index. This is alarmingly subtle. It underlines the need for careful checking.
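Since BMI is simply weight in kilograms divided by the square of height in metres, this particular hallucination is also checkable in software. Below is a minimal sketch of such a guard; the record field names are assumptions for illustration.

```python
# Guard against a hallucinated BMI in an AI-drafted note: accept the value only
# if the source record holds the measurements needed to compute it and the
# recomputed figure matches. Field names are illustrative.
def verify_bmi(note_bmi: float, record: dict, tolerance: float = 0.5) -> bool:
    weight_kg = record.get("weight_kg")
    height_m = record.get("height_m")
    if weight_kg is None or height_m is None:
        # The AI could not have calculated this value from the record:
        # treat it as a potential hallucination and flag for clinician review.
        return False
    recomputed = weight_kg / (height_m ** 2)  # BMI = weight (kg) / height (m)^2
    return abs(recomputed - note_bmi) <= tolerance


# A note claims a BMI of 17.2 but the record has no recorded weight.
print(verify_bmi(17.2, {"height_m": 1.65}))  # False -> flag for review
```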
Key Values
When it comes to AI in healthcare, there are several key values that should be established to ensure responsible and ethical use of AI technologies:
1. Patient safety and well-being: The primary value should always be the safety and well-being of patients. AI systems should be designed, tested, and implemented with the goal of improving patient outcomes and reducing the risk of harm.
2. Data privacy and security: Protecting patient data is essential for maintaining trust in AI systems. Healthcare organizations should prioritize data security and adhere to privacy regulations, such as HIPAA in the United States, to safeguard sensitive patient information.
3. Transparency and accountability: AI systems in healthcare should be transparent, explainable, and auditable to ensure that healthcare professionals and patients can understand the basis for AI-driven decisions. This transparency promotes trust and allows for accountability in the event of errors or malfunctions.
4. Equity and inclusivity: AI systems should be designed and trained to avoid perpetuating existing biases and inequalities in healthcare. This includes ensuring diverse and inclusive training data, as well as ongoing monitoring and auditing to identify and address potential biases.
Expanding on the patient safety value, it is reasonable to ask what precisely needs to be done to assure design, demonstrate effective testing and ensure patient safety during implementation.
Assuring Patient Safety
To assure patient safety in AI-based healthcare systems, several steps should be taken during the design, testing, and implementation phases:
Design
• Involve multidisciplinary teams, including clinicians, data scientists, and ethicists, to ensure that AI systems align with clinical best practices and ethical principles.
• Conduct thorough literature reviews and consult clinical guidelines to inform the design of AI algorithms.
• Use human-centred design principles to ensure that AI systems are intuitive, user-friendly, and aligned with clinical workflows.
Testing
• Test AI systems using diverse and representative data sets to ensure accuracy and generalisability across different patient populations (see the sketch after this list).
• Perform rigorous, independent validation studies to assess the safety, efficacy, and reliability of AI systems in real-world clinical settings.
• Continuously monitor and evaluate AI systems after deployment to identify potential safety issues and opportunities for improvement.
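As a minimal sketch of what “diverse and representative” testing can mean in practice, the snippet below reports accuracy per patient subgroup rather than as a single overall figure, so that a gap affecting one group stays visible; the groups and records are illustrative placeholders, not real evaluation data.

```python
from collections import defaultdict


# Stratified accuracy check: an overall figure can hide poor performance in a
# particular subgroup, so report each group separately.
def accuracy_by_group(results: list[dict]) -> dict:
    totals, correct = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["ground_truth"])
    return {group: correct[group] / totals[group] for group in totals}


results = [
    {"group": "age_80_plus",  "prediction": "pneumonia", "ground_truth": "pneumonia"},
    {"group": "age_80_plus",  "prediction": "normal",    "ground_truth": "pneumonia"},
    {"group": "age_under_40", "prediction": "normal",    "ground_truth": "normal"},
]
print(accuracy_by_group(results))  # {'age_80_plus': 0.5, 'age_under_40': 1.0}
```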
Implementation
• Develop clear guidelines and protocols for integrating AI systems into clinical workflows, including roles and responsibilities for human oversight and decision-making.
• Provide comprehensive training and support to healthcare professionals to ensure proper use and interpretation of AI-generated results.
Spotlight on Ethical Considerations
There is an emergent role for ethicists working alongside the development and deployment of AI solutions. Such roles can help seek assurance and develop appropriate ethical boundaries. The ethical considerations reflect some of the clinical safety concerns but may be summarised as:
1. Transparency and accountability: AI systems in healthcare should be transparent, explainable, and auditable to ensure that healthcare professionals and patients can understand the basis for AI-driven decisions. This transparency promotes trust and allows for accountability in the event of errors or malfunctions.
2. Equity and inclusivity: AI systems should be designed and trained to avoid perpetuating existing biases and inequalities in healthcare. This includes ensuring diverse and inclusive training data, as well as ongoing monitoring and auditing to identify and address potential biases.
3. Privacy and security: AI systems rely on large amounts of sensitive patient data, which must be protected from unauthorized access or breaches to maintain patient trust and privacy.
4. Autonomy and informed consent: Patients should be informed about the use of AI systems in their care and given the opportunity to provide informed consent. This includes understanding the potential benefits, risks, and limitations of AI-based interventions.
5. Responsibility and liability: It is important to establish clear guidelines for responsibility and liability in the event of errors or harms caused by AI systems in healthcare.
Responsibility and liability will need more careful consideration. We may train and expect clinicians to use AI professionally as a partner and to do so vigilantly, but it would be difficult to hold an individual clinician responsible for a subtle hallucination or unrecognised bias. Is this the time for no-fault compensation models rather than approaches that blame individuals, and how does the AI industry contribute to this?
Comprehensive data on the prevalence and impact of AI-related clinical safety issues is currently limited due to the relatively recent introduction of AI in healthcare and the rapidly evolving nature of these technologies. However, as AI adoption in healthcare continues to grow, it will become increasingly important to monitor and assess these issues to ensure the responsible and safe implementation of AI systems.
In Conclusion
AI is an inevitable future component of healthcare services. If we act now, we can direct its development and deployment towards becoming a highly effective force for good for both patients and clinicians. In the words of Star Trek’s Borg, “resistance is futile”, but, more optimistically, partnership is possible.
Safety assurance must be clinically led, multidisciplinary, hazard- and values-driven, and evidence-based. This is an ongoing process, supported by training for both the human and the technical partners in the brave new world. AI will give clinicians the ability to focus on what they do best whilst providing support to help safe practice. Space will also be created to focus on safety culture, not just on safe clinical transactions.
Healthcare providers and the industry need to collaborate to deliver mechanisms for securing agile assurance. This may seem a little like laying the tracks down in front of a moving train but that is better than trying to act as a digital Canute or to stand back and watch the AI train disappear into the distance having figured out how to lay its own tracks.