WHY THIS MATTERS IN BRIEF Some people think AI knows all; it doesn't, and some professions should be very careful how they use it. This also sets a dangerous precedent as more professionals lean on AI to do their heavy lifting.
In what seems like a world first, a US judge has fined two lawyers and a law firm $5,000 (£3,935) after fake citations generated by ChatGPT were submitted in a court filing. A district judge in Manhattan ordered Steven Schwartz, Peter LoDuca, and their law firm Levidow, Levidow & Oberman to pay the fine after fictitious legal research was used in an aviation injury claim.
Schwartz had admitted that ChatGPT, a chatbot that churns out plausible text responses to human prompts, invented six cases he referred to in a legal brief in a case against the Colombian airline Avianca.
The judge, P Kevin Castel, said in a written opinion that there was nothing "inherently improper" about using Artificial Intelligence (AI) to assist with legal work, but that lawyers had to ensure their filings were accurate.
“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” Castel wrote. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
The judge said the lawyers and their firm “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the tool, then continued to stand by the fake opinions after judicial orders called their existence into question.”
Levidow, Levidow & Oberman said in a statement on Thursday that its lawyers “respectfully” disagreed with the court that they had acted in bad faith.
“We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth,” it said.
Lawyers for Schwartz told Reuters he declined to comment. LoDuca did not immediately reply to a request from Reuters for comment, and his lawyer said they were reviewing the decision.
ChatGPT had suggested several cases involving aviation mishaps that Schwartz had not been able to find through the usual methods used at his law firm. Several of those cases were not real, misidentified judges, or involved airlines that did not exist.
Chatbots such as ChatGPT, developed by the US firm OpenAI, are prone to "hallucinations," or inaccuracies. In one example ChatGPT falsely accused an American law professor of sexual harassment and cited a non-existent Washington Post report in the process. In February a promotional video for Google's rival to ChatGPT, Bard, gave an inaccurate answer to a query about the James Webb Space Telescope, raising concerns that the search company had been too hasty in launching a riposte to OpenAI's breakthrough.
Chatbots are trained on a vast trove of data taken from the internet, although in many cases the sources are not disclosed. Operating like a predictive text tool, they build a model to predict the likeliest word or sentence to come after a user's prompt. This means factual errors are not only possible but common, yet the human-seeming responses can convince users that the answers are correct.
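The mechanism described above can be sketched in miniature. The toy model below is a deliberately simplified illustration, not how ChatGPT actually works: real chatbots use large neural networks trained on billions of documents, whereas this sketch counts word pairs in a tiny made-up legal-sounding corpus. Even so, it shows the key point of the paragraph: the model outputs whatever word is statistically likeliest, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# A tiny hypothetical corpus of legal-sounding text (invented for
# illustration only -- not real case language).
corpus = (
    "the court ruled for the plaintiff . "
    "the court ruled for the airline . "
    "the court dismissed the claim ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically likeliest next word -- likely, not true."""
    return following[word].most_common(1)[0][0]

print(predict_next("court"))  # "ruled" follows "court" most often here
```

The model will happily continue any prompt with its most frequent continuation, which is exactly why fluent output can still be fabricated: plausibility and accuracy are different things.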
The judge said one of the fake decisions generated by the chatbot had "some traits that are superficially consistent with actual judicial decisions" but that other portions contained "gibberish" and were "nonsensical".
In a separate written opinion, the judge threw out the underlying aviation claim, saying the statute of limitations had expired.
Matthew Griffin, described as "The Adviser behind the Advisers" and a "Young Kurzweil," is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the "Codex of the Future" series.
Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew's ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future.
A rare talent, Matthew's recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world's largest organisations envision and ideate the future of their products and services, industries, and countries.
Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.