
ChatGPT’s maker launches a laughably poor tool to detect AI-written text


WHY THIS MATTERS IN BRIEF

Many companies claim they can detect AI-written text, but even the world’s most famous generative AI company is bad at it …

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

OpenAI, the research laboratory behind viral AI program ChatGPT, has released a tool designed to detect whether text has been written by Artificial Intelligence (AI), but warns it’s not completely reliable – yet.

 

In a blog post on Tuesday, OpenAI linked to a new classifier tool trained to distinguish between text written by a human and text written by a variety of AI systems, not just ChatGPT.

OpenAI researchers said that while it was “impossible to reliably detect all AI-written text,” good classifiers could pick up signs that text was written by AI. The tool could be useful in cases where AI was used for “academic dishonesty” and when AI chatbots were positioned as humans, they said.

 

But they admitted the classifier “is not fully reliable” and only correctly identified 26% of AI-written English texts. It also incorrectly labelled human-written texts as probably written by AI tools 9% of the time.
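
To make those two percentages concrete: the 26% figure is the classifier’s true-positive rate (AI-written text correctly flagged) and the 9% figure is its false-positive rate (human-written text wrongly flagged). The sketch below is purely illustrative, using hypothetical counts chosen only to reproduce the reported rates, not OpenAI’s actual evaluation data.

```python
# Illustrative sketch only: what the 26% / 9% figures mean for a binary
# "is this AI-written?" classifier. The counts below are hypothetical and
# chosen simply to reproduce the rates OpenAI reported.

ai_texts_evaluated    = 1000   # texts that really were AI-written
ai_texts_flagged      = 260    # of those, how many the classifier flagged as AI
human_texts_evaluated = 1000   # texts that really were human-written
human_texts_flagged   = 90     # of those, how many were wrongly flagged as AI

true_positive_rate  = ai_texts_flagged / ai_texts_evaluated        # "26% of AI-written text identified"
false_positive_rate = human_texts_flagged / human_texts_evaluated  # "9% of human text mislabelled"

print(f"true-positive rate:  {true_positive_rate:.0%}")   # 26%
print(f"false-positive rate: {false_positive_rate:.0%}")  # 9%
```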

 

“Our classifier’s reliability typically improves as the length of the input text increases. Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems.”

 

Since ChatGPT was opened up to public access, it has sparked a wave of concern among educational institutions and research journals across the world that it could lead to cheating in exams or assessments.

Lecturers in the UK are being urged to review the way in which their courses are assessed, while some universities have banned the technology entirely and returned to pen-and-paper exams to stop students using AI.

 

One lecturer at Australia’s Deakin University said around one in five of the assessments she was marking over the Australian summer period had used AI assistance.

A number of science journals have also banned the use of ChatGPT in text for papers.

OpenAI said the classifier tool had several limitations, including its unreliability on text below 1,000 characters, as well as the misidentification of some human-written text as AI-written. The researchers also said it should only be used for English text, as it performs “significantly worse” in other languages, and is unreliable on checking code.

“It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text,” OpenAI said.
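
As a rough illustration of how those caveats might be respected in practice, the sketch below gates a hypothetical classify_text call on text length and language before trusting its verdict, and treats the result as one signal rather than proof. The function names and thresholds are assumptions drawn from the caveats described above, not OpenAI’s published interface.

```python
# A rough sketch of how a consumer of such a classifier might respect its
# stated limitations. `classify_text` is a hypothetical stand-in, not OpenAI's
# actual API; the threshold simply mirrors the 1,000-character caveat.

from typing import Callable

MIN_CHARS = 1000  # OpenAI says results are unreliable below this length


def should_trust_verdict(text: str, language: str = "en") -> bool:
    """Return True only when the classifier's verdict is worth considering."""
    if len(text) < MIN_CHARS:
        return False   # too short to classify reliably
    if language != "en":
        return False   # performs "significantly worse" outside English
    return True


def review_submission(text: str, classify_text: Callable[[str], str], language: str = "en") -> str:
    """Treat the classifier as one signal among several, never the sole arbiter."""
    if not should_trust_verdict(text, language):
        return "inconclusive - rely on other methods"
    verdict = classify_text(text)  # e.g. "likely AI-written" or "unlikely AI-written"
    return f"classifier verdict: {verdict} (combine with other evidence before acting)"
```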

 

OpenAI has now called upon educational institutions to share their experiences with the use of ChatGPT in classrooms. While most have responded to AI with bans, some have embraced the AI wave. The three main universities in South Australia last month updated their policies to say AI like ChatGPT is allowed to be used so long as it is disclosed.
