As artificial intelligence (AI) becomes increasingly embedded in routine healthcare – supporting tasks such as triage, documentation, interpretation of investigations, diagnosis and patient communication – it introduces new patient safety risks through incorrect outputs (“hallucinations”) that should be treated as safety errors rather than technical glitches. In our article in the Journal of Patient Safety, we argue that primary care must extend its established safety culture to AI by systematically detecting, classifying, reporting, and learning from AI-related errors using principles already applied to human error, such as audit, governance, and incident reporting.
We highlight evidence that AI-generated clinical text can contain omissions, fabrications, or unsafe recommendations that may not be apparent to clinicians and patients and that risk becoming “silent errors” in electronic health records. These errors can then contribute to cognitive offloading if clinicians over-trust AI outputs. To mitigate these risks, we call for routine AI oversight in practice (including review, sampling, and escalation), explicit clinician accountability for AI-influenced outputs, patient engagement in spotting discrepancies, and closer collaboration with AI developers.
Ultimately, AI errors are inevitable, and embedding AI safety as a core, proactive design feature – rather than an afterthought – is essential to ensure AI enhances rather than compromises patient safety in primary care.
On 28 September 2017, I attended the Annual Institute of Global Health Innovation Lecture: Artificial General Intelligence and Healthcare, delivered by Dr Demis Hassabis, co-founder and CEO of Google DeepMind. Artificial intelligence is the science of making machines smart, argued Dr Hassabis, so how can we use it to improve the healthcare sector? Dr Hassabis went on to describe the work that DeepMind was carrying out in healthcare in areas such as organising information, deep learning to support the reporting of medical images (such as scans and pathology slides), and biomedical science. He also discussed the challenges of applying techniques such as reinforcement learning in healthcare. He concluded that artificial intelligence has great scope for improving healthcare; for example, by prioritising the tasks that clinicians have to carry out and by providing decision support aids for both patients and doctors.