
From AI lawyers to AI judges, courts are embracing tech for better and worse


WHY THIS MATTERS IN BRIEF

What would happen if your AI lawyer’s case was heard by a biased AI judge? And other questions no one has answers to yet …

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

Imagine finding yourself in court, but rather than a human judge hearing your case, an Artificial Intelligence (AI) is considering your evidence. One example of this happening is in Estonia, whose government is now actively pursuing the automation of small contract disputes. The Estonian Ministry of Justice says it will seek to clear a backlog of cases using 100 so-called ‘AI judges’, the intention being to give human judges more time to deal with the more complex disputes.

 


 

The project could adjudicate small claims disputes under 7,000 euros. In concept, the two parties would upload documents and other relevant information, and the AI system would issue a decision that could be appealed to a human judge. While this implementation of AI has a direct impact on the parties in a case, AI is increasingly seeping into court processes around the world, often via quite mundane tasks, and I’ve already seen numerous trials of AI judges, for example at the European Court of Human Rights, where researchers trialled the use of AI to judge human rights cases, and elsewhere.
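To make the proposed workflow concrete, here is a minimal sketch in Python of an AI-first, human-appeal pipeline. It is purely illustrative: the class name, the stand-in `ai_adjudicate` function, and everything except the 7,000-euro threshold are assumptions, not details of Estonia’s actual system.

```python
# Illustrative sketch of a small-claims workflow with an AI first pass
# and a human appeal route. Entirely hypothetical, not Estonia's system;
# only the 7,000-euro small-claims threshold comes from the article.
from dataclasses import dataclass, field

SMALL_CLAIM_LIMIT_EUR = 7_000

@dataclass
class Dispute:
    amount_eur: float
    documents: list[str] = field(default_factory=list)

def ai_adjudicate(dispute: Dispute) -> str:
    """Stand-in for the AI decision step. A real system would analyse
    the uploaded documents; here we only gate on the claim size."""
    if dispute.amount_eur > SMALL_CLAIM_LIMIT_EUR:
        return "outside AI remit: refer directly to a human judge"
    return "AI decision issued (either party may appeal to a human judge)"

claim = Dispute(amount_eur=4_500, documents=["contract.pdf", "invoice.pdf"])
print(ai_adjudicate(claim))
```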

A joint research project by the Australian Institute for Judicial Administration (AIJA), UNSW Law & Justice, the UNSW Allens Hub for Technology, Law and Innovation, and the Law Society of NSW’s Future of Law and Innovation in the Profession (FLIP Stream) has identified some of the key issues arising from the increasing presence of AI in court systems around the globe.

 


 

The project’s report, “AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators”, identified examples of the use of AI in Australia and overseas, from computer-based dispute resolution software to the use of computer code based directly on rules-driven logic, or ‘AI judges’, to help clear a backlog of cases.

Professor Lyria Bennett Moses, Director of the UNSW Allens Hub and Associate Dean of Research at UNSW Law & Justice, said that despite hesitancy, AI was a growing part of court processes. “AI, as a concept and as practice, is becoming increasingly popular in courts and tribunals internationally. There can be both immense benefits and concerns about compatibility with fundamental values,” she said.

“AI in courts extends from administrative matters, such as automated e-filing of cases, to the use of data-driven inferences about particular defendants in the context of sentencing. Judges, tribunal members and court administrators need to understand the technologies sufficiently well to be in a position to ask the right questions about the use of AI systems,” she said.

 


 

Some of the concerns around AI’s compatibility with legal values have been identified in the US following the use of what is known as the Correctional Offender Management Profiling for Alternative Sanctions tool, or COMPAS, the report says.

The tool is intended to augment the judicial process by assessing the risk that an offender will break the law again. As the research report notes, COMPAS integrates the responses to a 137-question questionnaire.

Questions range from the clearly relevant, “How many times has this person been arrested before as an adult or juvenile?”, to the more opaque, “Do you feel discouraged at times?” The code and processes underlying COMPAS are secret, and so not known to the prosecution, defence, or judge.
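COMPAS’s actual model is secret, but actuarial risk tools of this kind are generally described as weighting questionnaire answers into a single score, in the spirit of a logistic regression. The sketch below is purely hypothetical: the features, weights, and bias term are invented for illustration and bear no relation to COMPAS’s real inputs or code.

```python
# Hypothetical sketch of an actuarial risk score: weighted questionnaire
# answers squashed through a sigmoid into a 0-1 "risk" value.
# Features and weights are invented; COMPAS's real model is secret.
import math

def risk_score(answers: dict, weights: dict, bias: float = -1.0) -> float:
    """Logistic-regression-style score over questionnaire responses."""
    z = bias + sum(weights[q] * answers[q] for q in weights)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> score between 0 and 1

answers = {"prior_arrests": 3.0, "feels_discouraged": 1.0}
weights = {"prior_arrests": 0.4, "feels_discouraged": 0.1}
print(f"risk score: {risk_score(answers, weights):.2f}")  # prints 0.57
```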

The findings of the COMPAS tool have very real consequences, informing the judge’s decisions on whether the alleged offender can or should be granted bail, and whether the accused should be eligible for parole.

 


 

In a 2013 case, Paul Zilly was convicted of stealing a lawnmower. The prosecution and Mr Zilly’s lawyers agreed to a plea deal of one year in a county jail and a subsequent supervision order. But on the basis of a COMPAS score indicating a high risk of reoffending, the judge rejected the plea deal and sentenced Mr Zilly to two years in jail.

In 2016, the non-profit investigative journalism site ProPublica analysed the cases of around 10,000 criminal defendants in Florida. It found that African American defendants were more likely to be wrongly flagged as high risk by the COMPAS software, while white defendants were more likely to be wrongly scored as low risk despite going on to reoffend – just one of many examples of AI bias.
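Disparities like the one ProPublica found are usually quantified by comparing error rates across groups. A minimal sketch of that measurement, using made-up records rather than ProPublica’s actual data:

```python
# Sketch: comparing false positive rates (FPR) across groups.
# A "false positive" here is a defendant flagged high risk who did
# not in fact reoffend. All records below are made up for illustration.
from collections import defaultdict

# (group, predicted_high_risk, reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", False, True),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, predicted_high_risk, reoffended in records:
    if not reoffended:                       # actual non-reoffenders only
        counts[group]["negatives"] += 1
        if predicted_high_risk:              # flagged high risk anyway
            counts[group]["fp"] += 1

for group, c in sorted(counts.items()):
    print(f"group {group}: FPR = {c['fp'] / c['negatives']:.2f}")
# group A: FPR = 0.67  <- far more often wrongly flagged high risk
# group B: FPR = 0.33
```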

Professor Bennett Moses questioned whether similar tools should ever be acceptable in an Australian context.

“Everyone has a right to be treated impartially,” she said. “The use of some tools is in conflict with important legal values. There are tools, frequently deployed in the United States, that ‘score’ defendants on how likely they are to re-offend. This is not based on an individual psychological profile, but rather on analysis of a general pool of data. If people ‘like’ you have reoffended in the past, then you are going to be rated as likely to re-offend as well,” she said.

 


 

“The variables used in this analysis include matters such as whether parents are separated, and, if so, one’s age when that occurred – the kinds of things that might statistically correlate with offending behaviour but are outside one’s own control. The tool is also biased [on some fairness metrics] against certain racial groups. It is important to ask whether the use of such tools would be appropriate in an Australian court,” she said.

Even though the Estonian project could save court resources and improve efficiency, the report raised several concerns around the implementation of AI in courtrooms, including that the secrecy of many AI systems and their code means judges, as well as the parties, may not know how decisions were generated.

Another concern was that if AI models are trained and used on predominantly English-speaking, non-minority datasets, the software could have greater difficulty interpreting accents or working with people from non-English-speaking backgrounds.

 


 

Also, an over-reliance on AI systems in court processes could take an important human element out of justice or, as the report said, remove some of the “moral authority” and discretion used in applying the law. In some cases, judges have overridden their own decisions based on the recommendations of AI, leading to significant differences in outcome, particularly in the US with the use of COMPAS.

While the American experience of AI in the courtroom has raised questions domestically and internationally, the report also identified positive experiences where AI has aided access to justice. Professor Bennett Moses said language barriers were just one key area where AI could be of enormous value.

One practical and non-controversial example of a benefit is the use of natural language processing to convert the audio of what is said in court by judges, witnesses, and counsel into text, she said.

This can make access to court transcripts faster and easier, particularly for those with hearing impairments. In China, some trials are captured ‘in real time’ in Mandarin and translated into English text.
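As a rough illustration of how such a speech-to-text pipeline might look, here is a minimal sketch using OpenAI’s open source Whisper model, which can both transcribe audio and translate it into English text. The file name is hypothetical, and nothing here reflects the systems actually used in Chinese courts.

```python
# Sketch: transcribing and translating court audio with OpenAI's
# open source Whisper model (pip install openai-whisper; needs ffmpeg).
# "hearing_audio.mp3" is a hypothetical file name.
import whisper

model = whisper.load_model("base")  # small general-purpose model

# Transcribe the audio in its original language (e.g. Mandarin).
transcript = model.transcribe("hearing_audio.mp3")
print(transcript["text"])

# Translate the same audio directly into English text.
translated = model.transcribe("hearing_audio.mp3", task="translate")
print(translated["text"])
```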

 


 

“I’ve always believed that interesting legal questions lie on the technological frontier, whether that relates to AI or other new contexts to which the law is called to respond. My main advice here is to tread carefully, to seek to understand how things work before drawing conclusions on what the law should do about it. But we need people to ask the right questions, and help society answer them,” Professor Bennett Moses said.

And, from my perspective, we also need substantial regulatory oversight, as well as solutions such as AI auditing, to ensure that the AI models being developed and deployed are accountable, bias-free, and fair, and that their decision making can be interrogated and is as transparent as possible. But many of those solutions are far behind where they should be in development, let alone deployment, which means that in the meantime people in court could be treated unfairly, with possibly life-altering consequences.
