Two US lawyers just got fined for using ChatGPT to generate fake court cases

WHY THIS MATTERS IN BRIEF

Some people think AI knows everything; it doesn't, and some professions should be very careful about how they use it. The case also sets a dangerous precedent as more professionals lean on AI to do their heavy lifting.

 


In what seems like a world first, a US judge has fined two lawyers and a law firm $5,000 (£3,935) after fake citations generated by ChatGPT were submitted in a court filing. A district judge in Manhattan ordered Steven Schwartz, Peter LoDuca, and their law firm, Levidow, Levidow & Oberman, to pay the fine after fictitious legal research was used in an aviation injury claim.

 


 

Schwartz had admitted that ChatGPT, a chatbot that churns out plausible text responses to human prompts, invented six cases he referred to in a legal brief in a case against the Colombian airline Avianca.

The judge, P Kevin Castel, said in a written opinion there was nothing "inherently improper" about using Artificial Intelligence (AI) to assist in legal work, but lawyers had to ensure their filings were accurate.

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” Castel wrote. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

 


 

The judge said the lawyers and their firm “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the tool, then continued to stand by the fake opinions after judicial orders called their existence into question.”

Levidow, Levidow & Oberman said in a statement on Thursday that its lawyers “respectfully” disagreed with the court that they had acted in bad faith.

“We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” it said.

Lawyers for Schwartz told Reuters he declined to comment. LoDuca did not immediately reply to a request from Reuters for comment, and his lawyer said they were reviewing the decision.

 


 

ChatGPT had suggested several cases involving aviation mishaps that Schwartz had not been able to find through the usual methods used at his law firm. Several of those cases were not real, misidentified judges, or involved airlines that did not exist.

Chatbots such as ChatGPT, developed by the US firm OpenAI, are prone to "hallucinations", or inaccuracies. In one example ChatGPT falsely accused an American law professor of sexual harassment and cited a non-existent Washington Post report in the process. In February a promotional video for Bard, Google's rival to ChatGPT, gave an inaccurate answer to a query about the James Webb Space Telescope, raising concerns that the search company had been too hasty in launching a riposte to OpenAI's breakthrough.

 


 

Chatbots are trained on a vast trove of data taken from the internet, although in many cases the sources are not disclosed. Operating like a predictive text tool, they build a model to predict the likeliest word or sentence to follow a user's prompt. This means factual errors are not only possible but common, yet the human-seeming responses can convince users that the answers are correct.
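The predictive mechanism described above can be illustrated with a deliberately tiny sketch: a bigram counter that always emits the statistically likeliest next word. This is not how a real LLM is built (those use neural networks over far richer context), and the toy corpus below is invented for illustration, but it shows the key point: the model continues text fluently based on word frequencies, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words tend to follow it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the likeliest next word, or None if the word was never seen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Toy corpus: the model will confidently continue "court" with "ruled",
# regardless of whether any such ruling ever existed.
corpus = "the court ruled in favor the court ruled against the judge ruled"
model = train_bigram(corpus)
print(predict_next(model, "court"))  # -> ruled
```

A real chatbot does the same thing at vastly greater scale: it picks plausible continuations, which is why a fabricated case citation can read exactly like a genuine one.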

The judge said one of the fake decisions generated by the chatbot had "some traits that are superficially consistent with actual judicial decisions", but that other portions contained "gibberish" and were "nonsensical".

In a separate written opinion, the judge threw out the underlying aviation claim, saying the statute of limitations had expired.
