The Use Of Artificial Intelligence In Courts And Tribunals

Author: Courtney Wilbor

Key Contact: Aisha Wardell

With the integration of artificial intelligence becoming ever more common in the world generally, including within the legal sector, it comes as no surprise that the Courts and Tribunals Judiciary has recently issued guidance for Judicial Office Holders, highlighting what it perceives to be the key risks and concerns surrounding the use of AI in the industry.

What is artificial intelligence?

Artificial intelligence (AI) is defined by the Courts and Tribunals Judiciary as “computer systems able to perform tasks normally requiring human intelligence”. Whilst there are a multitude of forms of artificial intelligence and an endless list of seemingly indecipherable lingo, the guidance refers specifically to the following terms when making its points:

  • Generative AI – a form of AI which generates new content
  • Generative AI Chatbot – this includes computer programs such as ChatGPT which aim to mimic human conversation online
  • Large Language Model – this seeks to anticipate the most appropriate next word in a sentence
  • Machine Learning – in a similar manner to a Generative AI Chatbot, this form of AI utilises a plethora of data and algorithms to produce classifications or predictions
  • Technology Assisted Review – this functionality is ever more frequently being used as part of the disclosure process to identify potentially relevant documents from large swathes of data sets.

What risks are identified by the guidance?

The guidance highlights what it deems to be seven key limitations of the use of AI tools in the courts/tribunals as follows:

  1. Understanding AI and its applications

Whilst the aforementioned tools and systems may be useful in terms of producing a “non-definitive confirmation” of a fact, the guidance draws attention to the risks of using such material if its accuracy cannot be independently verified. This is because AI tools generate content in response to both the prompts they receive and the data that was initially used to train them. As a result, users should err on the side of caution when using any such tools if they do not have secondary sources to confirm the accuracy of the content produced.

  2. Uphold confidentiality and privacy

Arguably one of the most significant risks of AI, particularly in the context of Generative AI Chatbots, is that any information entered can be used to respond to queries from other users, therefore putting any private or confidential information at risk of public dissemination.

  3. Ensure accountability and accuracy

AI draws on a combination of the information it receives from users and the data it was trained with to produce content. As such, there is a strong risk that fictitious cases and legislation may be manufactured to provide a basis for the legal reasoning produced. Relying on such information can have far-reaching consequences, particularly for litigants in person.

  4. Be aware of bias

Given that AI tools produce content based on the dataset used to train them, it should come as no surprise that the information subsequently produced can carry bias. When reviewing any such information, it is therefore important to identify and correct any such errors or biases.

  5. Maintain security

When accessing AI tools, it is best practice solely to use work devices and email addresses in order to maintain both your own and the court/tribunals’ security.

  6. Take responsibility

The guidance reiterates that judicial office holders are “personally responsible for material which is produced in their name”. In light of this, the use of any AI tools should be agreed in advance to ensure that any risk mitigation is undertaken and that any findings are appropriately referenced.

  7. Be aware that court/tribunal users may have used AI tools

The guidance takes an optimistic and pragmatic approach to the use of AI tools in the courts/tribunals in that it recognises that, if used responsibly and in the appropriate context, there is no reason why a legal representative should be unable to use such technologies to assist their work. However, in doing so, it is important to be mindful of the duty that legal representatives owe to the court/tribunal to ensure that any material relied upon is accurate and appropriate. It would therefore be wise for any legal representative to independently verify any content produced by AI tools on which they seek to rely in a court/tribunal.

Looking ahead

A growing number of litigants in person are using AI tools to form the basis of their advice, yet such individuals have limited means of verifying the information generated. This combination is a cause for concern for judicial office holders, given the potential for inaccuracies in submissions and other documents produced.

Users of AI in the courts, and in particular litigants in person, can therefore expect to be questioned by the judiciary on the extent to which they have used AI tools in preparing statements and documentation, including how the accuracy of the content produced by those tools has been independently verified.

For lawyers, any use of AI tools to conduct legal research and analysis should be undertaken with the professional obligation to ensure the accuracy and truth of any material put before a court/tribunal kept firmly at the forefront of your mind.
