Co-authored by Christian Lowden, Tom Hester (Medical Indemnity Broker; Tysers) and David Bodansky (Consultant Hand and Wrist Surgeon).
The evolution and adoption of artificial intelligence (AI) within healthcare are accelerating rapidly.
Views on the current impact of AI in healthcare, and on what it can achieve, vary considerably: from sceptics who believe its impact will be negligible, to those who already rely upon it in their practice and consider it will transform the future of services.
The UK government’s ambitious ‘Plan for Change’ relies heavily on AI as a tool for the delivery of healthcare. AI in healthcare forms part of a £14 billion investment in “supercharging” the UK towards an AI-focussed future.
Can AI act as a pressure valve for the NHS?
We are, undoubtedly, seeing the potential across the healthcare system to improve quality of care, reduce staff workload, enhance patient safety, increase productivity, and support education and training.
Machine-learning AI models have already shown success in helping to alleviate pressures and can be expected to continue to evolve. A scoping review of emergency departments utilising triage-based AI (Use of Artificial Intelligence in Triage in Hospital Emergency Departments: A Scoping Review) consistently demonstrated improvements in triage efficiency, resource allocation, prediction of hospital admission, identification of critical conditions, and alleviation of overflow and workload.
In a more local example, Calderdale and Huddersfield NHS Foundation Trust has, since 2021, been using predictive analytics to assess likely adult intensive care unit occupancy, and can do so with up to 90% confidence. Guy’s and St Thomas’ Hospital has developed a platform that can identify high-risk patients with diabetes who are likely to deteriorate whilst waiting for surgery, suggesting prioritisation over those who can withstand waiting a little longer.
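To make the idea concrete, the sketch below shows the general shape of such an occupancy forecaster. It is purely illustrative and is not the Trust’s actual system: the features, the synthetic data, the simple linear model, and the residual-based “90% band” are all assumptions chosen to echo the confidence figure quoted above.

```python
# Illustrative sketch of next-day ICU occupancy forecasting.
# NOT the Calderdale and Huddersfield system: features, data and the
# choice of a simple linear model are assumptions for demonstration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=0)

# Synthetic one-year history: today's occupancy plus emergency and
# elective admissions, used to predict tomorrow's occupancy.
n_days = 365
occupancy = rng.integers(10, 30, size=n_days).astype(float)
emergency_adm = rng.integers(0, 8, size=n_days).astype(float)
elective_adm = rng.integers(0, 5, size=n_days).astype(float)

X = np.column_stack([occupancy[:-1], emergency_adm[:-1], elective_adm[:-1]])
y = occupancy[1:]  # next-day occupancy is the prediction target

model = LinearRegression().fit(X, y)

# Point forecast for a hypothetical day, with a rough ~90% band derived
# from the spread of in-sample residuals (z = 1.645, two-sided 90%).
resid_sd = np.std(y - model.predict(X))
forecast = model.predict([[22.0, 4.0, 2.0]])[0]
print(f"Forecast: {forecast:.1f} beds occupied "
      f"(~90% band: {forecast - 1.645 * resid_sd:.1f} to "
      f"{forecast + 1.645 * resid_sd:.1f})")
```

Production systems of this kind would draw on far richer hospital data and more sophisticated models, but the core pattern (historic features in, a probabilistic occupancy estimate out) is the same.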
In a primary care setting, many GP practices now rely upon website-based ‘chatbots’ and online triage to reduce the burden on phone lines and help direct patients to self-care where appropriate. However, one study (The Impact of Digital-First Consultations on Workload in General Practice: Modeling Study) demonstrated a 25% increase in GP workload as a result of patients utilising the new features.
Concerningly, analysis undertaken by The Royal College of Radiologists revealed that a record 976,000 scans in England breached the one-month NHS target for results in 2024, a 28% increase on 2023. This is despite AI having demonstrated great potential in the analysis of imaging and slides in radiology, fetal ultrasound, pathology, and mammography, not least DeepMind’s breast cancer detection tool which, as reported in January 2020, demonstrated reductions of 1.2-5.7% in false positives and 2.7-9.4% in false negatives versus a human comparator. Does this suggest a reluctance to rely upon AI in diagnostic imaging? Or is it more likely an absence of access? After all, the benefits of adopting AI in this area of healthcare are described as including triaging to enable radiologists to focus on complex cases, improved sensitivity and specificity for subtle abnormalities, reporting times reduced to seconds, and the ability to standardise care across sites, all without suffering cognitive load fatigue.
A recent large study has also shown how AI algorithms can accurately predict patients’ surgical outcomes and appropriate anaesthesia regimens, as well as monitor patients during surgery in real time. The technology can help anaesthetists make more informed decisions pre- and intra-operatively, increase efficiency and safety, and reduce costs.
AI is also being applied to reduce administrative burdens on clinicians. It has been used successfully to summarise care records, draft letters, respond to requests, and convert guidelines into accessible formats. It can review social care plans, identify risks and suggest support.
There is also commentary on the potential role AI could play in supporting the Patient Safety Incident Response Framework through the analysis of thousands of incident reports to identify patterns more quickly, and more sensitively, than human review. The government is driving ambient voice technologies (AVTs) to speed up clinic appointments, cut out associated administrative tasks, and allow clinicians more time to attend to patients.
Providing patients with alternative triage routes, sources of information, and technology capable of diagnostics without human intervention shares and democratises services, effectively moving them out of hospitals and GP surgeries.
Several smartphone apps now carry CE/UKCA certification for melanoma screening. However, the apps show highly variable, and sometimes poor, accuracy in real-world testing. A recent review found true-positive rates across tested apps ranging from 7-73% and true-negative rates from 37-94%, highlighting a very wide spread in ability to detect melanomas.
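To put those ranges in concrete terms, the short sketch below converts the best- and worst-case rates into expected missed melanomas and false alarms per 1,000 lesions. It is a back-of-envelope illustration only: the 2% prevalence figure, and the pairing of the low rates together and the high rates together, are assumptions for illustration rather than data from the review.

```python
# Back-of-envelope illustration of what the reported accuracy ranges mean
# in practice. The 2% prevalence and the low-with-low / high-with-high
# pairing of rates are assumptions for illustration, not study data.

def screening_outcomes(n_lesions, prevalence, sensitivity, specificity):
    """Expected missed melanomas and false alarms when an app screens
    n_lesions photographed lesions."""
    melanomas = n_lesions * prevalence
    benign = n_lesions - melanomas
    missed = melanomas * (1 - sensitivity)     # false negatives
    false_alarms = benign * (1 - specificity)  # false positives
    return missed, false_alarms

# Worst and best ends of the review's ranges: 7-73% true-positive rate
# (sensitivity), 37-94% true-negative rate (specificity).
for sens, spec in [(0.07, 0.37), (0.73, 0.94)]:
    missed, alarms = screening_outcomes(1000, 0.02, sens, spec)
    print(f"sensitivity {sens:.0%}, specificity {spec:.0%}: "
          f"~{missed:.0f} melanomas missed and ~{alarms:.0f} "
          f"false alarms per 1,000 lesions")
```

On these illustrative assumptions, the worst-case app misses roughly 19 of 20 melanomas and flags over 600 benign lesions, while even the best case still misses around 5 melanomas per 1,000 lesions.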
Beyond headline accuracy, important systemic limits reduce the safety of consumer AI. Many studies used high-quality, clinician-taken images under controlled conditions, not the lower-quality patient photos typical of home use, which often inflates reported performance. Algorithms also often ignore the clinical context (history, age, risk factors) that clinicians rely upon. There is also documented bias, including training datasets skewed toward fairer skin, resulting in poorer accuracy for darker skin tones.
Conversely, skin cancer detecting AI designed for use by clinicians, rather than by patients, has proven successful. In the USA, DermaSensor’s AI-powered spectroscopy non-invasively evaluates the characteristics of a lesion and provides immediate results using an FDA-cleared algorithm. It demonstrated a sensitivity of 96% across all 224 skin cancers, false negative results as low as 3%, and a reduction in missed skin cancers from 18% to 9%.*
Plans are in place for high-street opticians to be able to rely upon AI to help identify early signs of dementia by 2026. The technology will allow optometrists to review the patient’s retina and, using AI-powered technology programmed with millions of images and details of the patient’s characteristics, recognise features of early dementia.
The move toward an AI-led or AI-assisted future in healthcare seems inevitable, and it will be essential for clinicians to embrace the evolution. Change is often met with resistance, but statistics show around 25% of UK clinicians have used AI to support their practice in the past 12 months, and this figure can be expected to increase, with 79% considering it will be useful or extremely useful in their field of practice.
Consultant Hand and Wrist Surgeon, Mr David Bodansky, says AI is improving rapidly; the technology has already demonstrated the ability to pass the European Association of Neurological Societies’ board exams and the American Board of Orthopaedic Surgery examination. However, a common theme is that AI works well when information is distilled into a multiple-choice format or presented neatly, not when interacting with patients. An AI system can therefore be a powerful assistant, offering suggestions and highlighting errors in written notes or summaries, but it still struggles to discern a patient’s worries, ideas and concerns, and to use that context to understand their clinical condition in its native, unfiltered form and arrive at a correct diagnosis and tailored treatment. A human can apply instinct and experience; AI cannot. Although it may not yet replace your doctor, AI is increasingly becoming part of daily care.
Risk
An area of concern for insurers, clinicians and patients is the risk associated with implementing or relying on AI for diagnosis and treatment and, not least, who is responsible should it go wrong.
The General Medical Council reports that many doctors consider the ultimate responsibility remains with them as the human in the loop. However, this may be an overly simplistic approach. It perhaps overlooks that multiple cogs must turn at once to successfully deliver AI-based treatment: developer, programmer, manufacturer, operator, and regulator. Where and with whom liability ultimately rests is far from clear.
Currently, clinicians are not required to obtain express consent from a patient for AI to be relied upon in their treatment. However, an interpretation of Montgomery v Lanarkshire Health Board [2015] UKSC 11 may well be that a clinician does, in fact, have a duty to inform patients if AI has been, or will be, utilised. AI is not flawless, and reliance upon its findings or as a tool for treatment could be deemed a “material risk” for the patient. Equally, in time, a clinician failing to utilise AI or offer it as a viable alternative may be found to have fallen below the standard of care a patient can expect to receive.
Regulation
A crucial factor in ensuring the safety of AI and medical devices, but also in delaying their roll-out, is the necessary adherence to regulations and, it follows, an understanding of the liability and exposure position should a device fail.
In the UK, AI software will be classified as a medical device if it is intended by the manufacturer to be used for a medical purpose, such as the diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of a disease. Such devices are regulated by the Medicines and Healthcare products Regulatory Agency (MHRA) under the UK Medical Devices Regulations 2002 (as amended) including amendments introduced by the Medical Devices (Amendment etc.) (EU Exit) Regulations 2019 and must bear a UKCA mark to be placed on the market in Great Britain. CE marking is accepted in Great Britain until 30 June 2030 and is also required for devices placed on the Northern Ireland market under the EU Medical Devices Regulation (MDR) or In Vitro Diagnostic Medical Devices Regulation (IVDR). Manufacturers must register with the MHRA, determine and assign the correct risk classification to their AI device, and ensure conformity assessment has been undertaken by the appropriate conformity assessment body. In addition, NHS procurement processes often require evidence of compliance with the Digital Technology Assessment Criteria (DTAC) or, where applicable, relevant National Institute for Health and Care Excellence (NICE) guidance.
Currently, the UK is behind its EU counterparts in implementing appropriate AI regulations. The EU has implemented the EU AI Act 2024, the first comprehensive regulation of its kind, which is expected to set a global standard for governance. It prescribes “unacceptable”, “high”, “limited”, and “minimal” risk classes of AI, and the obligations associated with each classification. The UK has seen slower parliamentary progress. The UK AI Regulation Bill, a Private Member’s Bill sponsored by Lord Holmes, awaits its second reading in the House of Lords (parliamentary scrutiny having commenced there in March this year). The Bill, in its current form, seeks to establish a central AI Authority to coordinate regulation and auditing across sectors; embed requirements for safety, transparency, fairness, accountability, and compliance with equality, data protection, and consumer laws; mandate sector-specific AI “sandboxes” to allow safe innovation and testing; mandate the appointment of AI Responsible Officers; and ensure clear labelling of AI use and records of training.
Data
Not only do concerns persist over ultimate liability for clinicians adopting AI, but its expansion also inevitably introduces patient confidentiality and data protection issues. Sharing and using confidential and sensitive patient information is subject to obligations under the UK GDPR, and its data protection principles, notably purpose limitation and data minimisation, apply. A key point is that the patient’s personal data must be strictly needed for the AI’s clinical purpose, and not used for unrelated tasks. Given the machine learning and training of AI neural networks, it is challenging for users to ensure the data handled remains strictly for the purpose intended for that patient. ICO guidance confirms that any healthcare data used for a new AI purpose, such as repurposing it for research, requires a lawful basis or patient consent.
The NHS, in particular, has strict data governance rules and the National Data Opt-Out enables patients to block their confidential information being used for most secondary purposes, including research. Around 3.6 million patients have already opted out.
Comment
AI is expected to become deeply embedded across UK healthcare. It is anticipated that primary care will see virtual health assistants managing routine queries, freeing GPs to focus on complex cases, while secondary care will see the application of AI in advanced triaging, diagnostics, treatment plans and surgery. High street health service providers will be able to reduce the burden on primary and secondary care. Researchers will see great advancement in drug design, and novel treatments’ journey to market will be expedited. Administrative tasks across healthcare generally will fall to AI automation. Properly developed and regulated, this constellation of benefits will provide an improved and efficient service to patients, reduce clinician burn-out, and minimise errors and claims.
Now is the time for clinicians, insurers, lawyers and all healthcare stakeholders to understand the impact AI will have in their field and future-proof their activities.
Notably, as Professor Richard Susskind has observed: “…most of the short-term predictions about AI greatly overstate its impact but, crucially, most of the long-term claims hugely underestimate its effect”.
*Merry SP, Chatha K, Croghan I, Nguyen VL, McCormick B, Leffel D. Clinical Performance of Novel Elastic Scattering Spectroscopy (ESS) in Detection of Skin Cancer: A Blinded, Prospective, Multi-Center Clinical Trial. J Clin Aesthet Dermatol. 2023;16(4 Suppl):S16.
If you would like to explore any of the themes in this article further, or have any questions related to AI in healthcare, please connect with the authors via email or LinkedIn.
Christian Lowden - christian.lowden@kennedyslaw.com; LinkedIn
Tom Hester - tom.hester@tysers.com; LinkedIn
David Bodansky - info@davidbodansky.com; davidbodansky.com