Artificial intelligence technology products in healthcare – who is liable?

Artificial intelligence (AI) is prevalent across many areas of our day-to-day life and now increasingly so in medical products. This article focuses on the potential liability issues for such products, particularly in the healthcare sector.

Background

In 2018 Theresa May voiced the government’s ambition for the UK to be at the forefront of the AI revolution in healthcare, calling for industry and charities to work with the NHS. One aim was to develop algorithms that utilise amassed patient data to warn GPs when a patient should be referred to an oncologist or other specialist. It is estimated that, by 2033, early diagnosis via AI technologies could help improve patient outcomes and prevent 22,000 deaths from cancer each year.

Imperial College London is working with DeepMind Health on a project to explore whether AI-based techniques can improve the accuracy of breast cancer screening and the estimation of future risk, with machine learning technology (a form of AI) being applied to anonymised mammograms from approximately 7,500 women. Elsewhere, the use of AI technology in the diagnosis of skin cancer is being researched in France, Germany and the US, with results indicating higher detection accuracy than that achieved by a team of dermatologists.

Who is liable?

Whilst the use of AI-based techniques within medical treatment offers opportunities to enhance diagnosis, it also raises the question of where responsibility falls when a diagnosis fails. For example, if AI diagnostic software fails to detect a cancerous tumour and the patient dies as a result of that failure, who does the estate of the deceased patient sue?

Is it the treating doctor for clinical negligence, the AI technology service provider for a negligent service or the distributor/manufacturer/designer for a defective product? Ultimately, this will depend on the contract between the treating hospital/doctor and those in the supply chain of the AI technology service/product and the relevant contractual warranties, liability, indemnity and limitation clauses in place.

Product or service?

Whether AI technology is to be considered a product/medical device or a service is key. The information produced by the software may not be a product, but the machine/system incorporating the AI may be a product in itself. However, bespoke software designed for a customer is generally considered a service. If the AI software is considered a service, it is arguable that its supplier would only need to show that it exercised reasonable care and skill. Going forward, it could prove problematic for the courts to apply a ‘reasonable computer’ standard in place of the usual ‘reasonable person’ standard.

If the AI technology is considered a product, the issue will then be whether the doctor using it remains in control of the product, or whether the AI fully controls its operation.

An AI product manufacturer may seek to rely on the development risks defence under the Consumer Protection Act 1987, i.e. that the risk posed by the product was not reasonably foreseeable at the time of programming and/or that the programming was in line with the relevant industry standards at the time of development. It will also be crucial for manufacturers to fully inform consumers/users of their products about the risks and limitations of the technology.

If a product’s user does not follow the manufacturer’s instructions, it is arguable that the chain of causation has been broken. However, where AI software has been developed with the specific intention that the user’s use will change it, as the software learns from amassed data to perform a designated task, deciding whether the chain of causation has been broken is more problematic. Complex questions may arise, including:

  • Was sufficient data entered for the AI machine to operate effectively?
  • Did the user misuse the AI machine?
  • Is there a fault in the algorithm?
  • Has the data set and/or the input data been corrupted?
  • Did multiple entities develop the initial code?

An approach to answering these questions will no doubt evolve as both the AI technology and the claims relating to it develop.

Insurance

Once liability is determined, insurers will need to respond. Technology companies usually carry cover for financial loss resulting from technology errors and omissions in the service or product supplied. However, these policies are not designed to cover bodily injury or property damage, which are usually covered under general liability policies.

General liability policies usually exclude professional liability, potentially leaving an AI technology company with no coverage for property damage or bodily injury arising from its services or technology products. In London, we are seeing a limited number of insurance carriers starting to offer bespoke coverage for contingent bodily injury under technology errors and omissions policies to fill this gap.

Comment

Going forward, those in the AI technology and healthcare arena will need to ensure they have adequate policy cover and contractual protection for the provision/use of services and products with AI technology. Furthermore, as AI technology continues to evolve, we anticipate that the existing legislative framework for tort and product liability will also need to do so.
