AI tools have been used by the legal profession for a significant time without difficulty, for example Technology Assisted Review in electronic disclosure. Over recent years, with the advent of large language models (LLMs) such as ChatGPT, the use of generative AI in dispute resolution has increased. Whilst AI can be useful for summarising large bodies of text and performing administrative tasks, care needs to be taken to ensure that the information obtained through AI tools is accurate.
Judicial guidance
The Courts and Tribunals Judiciary has recently issued updated guidance for Judicial Office Holders on the use of AI and the need for independence, impartiality and integrity. Setting out key issues and risks, and suggesting ways to mitigate them, the guidance highlights (among other matters) the need to:
Understand the limitations of AI:
- AI tools “are a poor way of conducting research to find new information you cannot verify”.
- “Even with the best prompts, the information provided may be inaccurate, incomplete, misleading, or biased. It must be borne in mind that “wrong” answers are not infrequent.”
- “The currently available LLMs appear to have been trained on material published on the internet. Their “view” of the law is often based heavily on US and historic law”.
Uphold confidentiality and privacy:
- “Do not enter any information into a public AI chatbot that is not already in the public domain.”
- “Any information that you input into a public AI chatbot should be seen as being published to all the world.”
- “In the event of unintentional disclosure of confidential or private information you should contact your leadership judge and the Judicial Office. If the disclosed information includes personal data, the disclosure should be reported as a data incident.”
Ensure accountability and accuracy:
- “The accuracy of any information you have been provided by an AI tool must be checked before it is used or relied upon.”
- “AI tools may “hallucinate”, which includes … making up fictitious cases, citations or quotes, or refer[ring] to legislation, articles or legal texts that do not exist”.
Take responsibility:
- “Judicial office holders are personally responsible for material which is produced in their name.”
- “Judges must always read the underlying documents. AI tools may assist, but they cannot replace direct judicial engagement with evidence.”
Lawyers
The Law Society issued similar guidance on 1 October 2025 (Generative AI: the essentials). The guidance highlights the need to make sure that any information or documents that a solicitor submits “to the court are accurate and from genuine and verifiable sources.” The guidance also provides that “misuse of any tool, leading to inaccurate information being presented” will breach the SRA Code of Conduct.
Unfortunately, there have been several recent cases where legal professionals have been censured by the court for misusing AI. For example, in Choksi v IPS Law LLP [2025] EWHC 2804 (Ch), a witness statement from the defendant’s managing partner contained references to a number of cases that had “wrong citations, wrong names or which simply did not exist”, and in MS v Secretary of State for the Home Department (Professional Conduct: AI Generated Documents) Bangladesh [2025] UKUT 305 (IAC), the Tribunal found that the barrister “had misused artificial intelligence and attempted to mislead the Tribunal”.
Dame Victoria Sharp, President of the King’s Bench Division, issued a stark warning on 6 June 2025 in R (on the application of Frederick Ayinde) v Haringey London Borough Council; Al-Haroun v Qatar National Bank QPSC and QNB Capital LLC [2025] EWHC 1383 (Admin). In that decision, the Court addressed serious professional misconduct involving the misuse of AI by legal practitioners in two separate cases. Dame Victoria warned that:
“Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT, are not capable of conducting reliable research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect.” [para 6]
“Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example).” [para 7]
“There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused.” [para 9]
Dame Victoria Sharp’s judgment refers to an American case, Kohls v Ellison No 24-cv-3754 (D Minn, 10 January 2025), where the parties relied on expert evidence about AI. She noted that “one of the experts had used generative AI to draft his report and it included citations of non-existent academic articles” and referred to observations made by United States District Judge Laura Provinzino in that case:
“The irony … a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI – in a case that revolves around the dangers of AI, no less.
…
The Court thus adds its voice to a growing chorus around the country declaring the same message: verify AI-generated content in legal submissions!”
Experts
The use of AI by expert witnesses was addressed in the latest Bond Solon Expert Witness Survey, published on 7 November 2025. Of the respondents, 20% stated that they had used artificial intelligence in their role as an expert witness, an increase from 9.31% the previous year but still considerably lower than the national average of 65% across UK workers.
Of the 20% of experts who had used AI, most had done so to assist with research, while others said that they “used AI to rephrase writing, and check grammar and spelling, or to calculate results from data.”
The vast majority of respondents (89%) felt that specific guidance was required for the use of AI by expert witnesses in the UK. The survey states that “it is clear that the relatively low uptake of the technology is likely down to fear of inviting unintended criticism”.
The survey asked whether experts would “accept an instruction where the solicitor insisted on providing the expert witness with a draft expert report for the case, that was generated by AI”. 14% said that they would accept such an instruction. This was raised as a matter of concern by Mr Justice Waksman, head of the Technology and Construction Court, at the Bond Solon Expert Witness Conference on 6 November 2025.
Comment
Whilst there is currently no specific guidance for expert witnesses on the use of AI, they should always comply with their duties under Part 35 of the Civil Procedure Rules (CPR). Expert evidence presented to the court should be the independent product of the expert; an expert witness should provide independent assistance to the court by way of objective, unbiased opinion in relation to matters within their expertise; and an expert witness should state the facts or assumptions on which their opinion is based.
The CPR Part 35 duties make it incumbent upon the expert to check that the information obtained from AI, or from any other research sources, is accurate.
It would be helpful for CPR Part 35 to be updated to include specific guidance for experts on the use of AI. In the meantime, we would suggest that the warnings of Dame Victoria Sharp apply to experts as much as to lawyers, and that experts should bear them in mind, along with the recent judicial guidance. The Law Society guidance on generative AI states that a solicitor bears professional responsibility for the factual accuracy of expert reports, so it is important that solicitors seek to regulate the use of AI by their experts.
The issue of AI could be addressed in the solicitor’s letter of instruction to the expert. For example:
- Enclose a copy of the recent judicial guidance and state that the expert is expected to comply with this if they intend to use AI tools in the preparation of their report.
- Highlight the need to ensure that any material produced by AI is double-checked and verified for accuracy.
- Highlight the need to maintain data security and that confidential or personal information should not be entered into AI tools.
- Suggest that if the expert uses AI, they should document the process clearly, so that they can explain how it was used if questioned by solicitors or by the court. That may include saving and/or reproducing in the body of their report a copy of the specific ‘prompt’ wording used to generate the AI output.
- Give clear instructions that the expert must verify that all decision-making and opinion contained in the report is their own, reached independently of AI.