
AI in the Legal Landscape

The High Court has warned against the use of generative artificial intelligence software when drafting legal documents and proceedings, following multiple instances of fabricated cases being submitted in legal arguments. Not only is this a very costly mistake for both legal professionals and litigants in person, but it may amount to misleading the Court, contempt of Court, or even perverting the course of justice, all of which carry significant consequences.

What is AI and how does it work?

Artificial intelligence is a computer system built to accept prompts from a user and, in turn, generate text, images, videos, and documents.

Large language models such as ChatGPT, currently the most widely used AI platform with more than 700 million weekly users, generate responses through two main channels:

  1. The first is its core knowledge base: a large dataset of information that was built into the system during training. This material is static and does not update in real time. In ChatGPT’s case, the most recent information in that core training only goes up to June 2024.
     
  2. The second channel is its web-search capability, which allows the system to pull information directly from the internet. However, it only has access to material that is publicly available, and it cannot reach subscription databases such as those used by legal professionals.

By default, AI models rely on their core knowledge and will not always carry out a web search unless prompted to do so. Even when a model does search the internet, it does not “understand” the information it finds, nor can it critically analyse it in the way a trained professional would. It can retrieve and summarise materials, but it cannot reason, apply judgment, or draw on lived experience in practice.

AI is simply code; it does not have morals, ethics, or reasoning skills, nor does it follow a code of conduct. Its primary function is to ‘recognise’ patterns in the data it has been trained on. Whilst the term ‘artificial intelligence’, and the responses it provides, may lead you to believe it can think, it is not capable of forming thoughts and is not conscious.

AI Use in the Legal Sector

Publicly accessible AI platforms do not maintain comprehensive databases of legal materials or case law. As their core knowledge is derived from pre-existing datasets, the information they provide, particularly in relation to recent case law, may be outdated.

These AI systems generate responses by predicting what a plausible answer should look like, based on patterns in the data they have been trained on. Even when a system locates a real case online, it cannot reliably cite legal authorities or interpret judgments with the nuance of a trained legal professional. AI is known to fabricate cases or misrepresent existing judgments, which makes verification against authoritative sources essential.

The arguments and citations generated by AI look convincing; they appear to be real cases, but they may not exist or may have been misconstrued. Whether you are representing yourself or are a legal professional, the arguments and evidence you submit to the Court are your responsibility, and informing the Court after the fact that AI incorrectly cited non-existent cases will not amount to an adequate defence.

AI is no replacement for the level of education, training, and experience that legal professionals possess.

Whilst some legal databases have built-in AI systems, these models draw only on the information available within that database. In this sense, they can be a more efficient way of searching the database itself, but their output is only as accurate as the prompts they are given, and all cases, references, and practice notes they provide should be cross-referenced and checked for relevance.

Care should also be taken in non-contentious matters, such as drafting wills, contracts, and agreements. AI may generate documents that are non-compliant, unenforceable, or that contain unfavourable or incorrect terms.

AI Use in Proceedings

Earlier this year, an appellant in a trade mark dispute was criticised for misleading the Court by submitting cases that did not exist. The lawyer for the respondent in the same case also faced issues with his citations, having submitted cases that did not actually support his position.

The High Court has made it clear that citing fictional cases may result in criminal charges or a finding of contempt of Court and, in its judgment, stated that the use of AI when generating legal documents and submissions calls into question the competency and conduct of legal professionals.

The use of AI by legal professionals without “rigorous verification” has been deemed “negligent”. The Bar Council has published guidance raising concerns over the ethics of using AI in legal arguments, and making clear that it does not matter whether a non-existent case was submitted intentionally or not: doing so is “incompetent and grossly negligent”. Legal professionals have a duty not to bring the profession into disrepute, and the Bar Council asserts that relying on false content in a legal setting breaches this duty, risking professional negligence claims amongst other serious ramifications.

This guidance is not exclusive to barristers: the Solicitors Regulation Authority (SRA) has warned that generative AI models such as ChatGPT often produce “hallucinations”, where “a system produces highly plausible but incorrect results”.

Beyond falsely attributed or non-existent case citations, the language generated by AI can also present issues. It may be overly emotive where a submission should remain as neutral as possible, it may reproduce ‘opinions’ learned from subjective material in its training data, it may misunderstand the prompt it is given and produce irrelevant points, and it may conflate legal terms (for example, ‘tenants’ in the context of residential lettings rather than ‘tenants’ in the context of leasehold properties).

Whilst the consequences may be more concerning to legal professionals in terms of the impact on their careers and their ability to practise law, the Court takes the same view of litigants in person with regard to misleading the Court, contempt of Court, and perverting the course of justice.

AI Use in Legal Education

For law students, the consequences of submitting AI-generated work to a university may be just as serious: it constitutes academic misconduct, and many universities impose severe penalties on students caught doing so.

Aside from the academic repercussions, reliance on generative AI to conduct research and write essays will impair students’ ability to develop the skills necessary to succeed in legal academia and legal practice.

It is imperative that students do not rely on AI to research or draft essays, and caution should be taken when using AI as a revision tool given its propensity to produce incorrect information.

The issues with generative AI are numerous and significant, and the consequences of using it in legal settings can be incredibly serious for both lawyers and laypersons. Whilst it can be a useful tool for personal use, it should not be used in lieu of instructing a qualified and competent legal professional, and it should not be relied upon by lawyers.

Preston Redman’s experienced legal professionals are able to assist with a wide range of matters for individuals and businesses, including civil and family litigation, private client, company, and property matters. If you require advice or assistance in relation to any legal issue or query, please contact us on 01202 292 424.