
Research shows GPT-4 becomes 30 percent better when it critiques itself


WHY THIS MATTERS IN BRIEF

By getting AI to critique itself, researchers have not only given it a vital human skill but also found new ways to improve AIs without having to recode or redevelop them.

 


Even if the unlikely six-month moratorium on Artificial Intelligence (AI) development called for by Elon Musk and the Future of Life Institute goes ahead, GPT-4 still appears capable of huge leaps forward simply by taking a good hard look at itself: when researchers asked it to critique its own work, they saw a 30% performance boost.

 


 

“It’s not everyday that humans develop novel techniques to achieve state-of-the-art standards using decision-making processes once thought to be unique to human intelligence,” wrote researchers Noah Shinn and Ashwin Gopinath. “But, that’s exactly what we did.”

 

[Video: The Future of AI, a keynote by Matthew Griffin]

 

The “Reflexion” technique takes GPT-4’s already impressive ability to perform various tests and introduces “a framework that allows AI agents to emulate human-like self-reflection and evaluate its performance.” Effectively, it adds extra steps in which GPT-4 designs tests to critique its own answers, looks for errors and missteps, and then rewrites its solutions based on what it finds.
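In outline, the generate-critique-revise loop the article describes can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: `llm` stands in for any callable that maps a prompt string to a model completion, and the prompt wording, stopping rule, and round limit are all invented for the sketch.

```python
def reflexion_loop(llm, task, max_rounds=3):
    """Draft an answer, then repeatedly self-critique and revise it."""
    # Step 1: produce an initial solution.
    answer = llm(f"Solve this task:\n{task}")
    for _ in range(max_rounds):
        # Step 2: ask the same model to critique its own answer.
        critique = llm(
            f"Task:\n{task}\nProposed solution:\n{answer}\n"
            "List any errors or missteps, or reply 'OK'."
        )
        if critique.strip() == "OK":
            break  # the model found nothing to fix
        # Step 3: rewrite the solution using the critique as feedback.
        answer = llm(
            f"Task:\n{task}\nPrevious solution:\n{answer}\n"
            f"Critique:\n{critique}\nWrite a corrected solution."
        )
    return answer
```

The key design point is that no extra training or recoding is involved: the same frozen model plays both solver and critic, and improvement comes purely from the extra inference-time steps.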

The team ran its technique against a few different performance tests. In the HumanEval test, which consists of 164 Python programming problems the model has never seen, GPT-4 scored a record 67%, but with the Reflexion technique its score jumped to a very impressive 88%.
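For programming benchmarks like HumanEval, the self-critique step can amount to executing model-generated unit tests against the candidate solution and feeding any failure back as the critique. The helper below is a hypothetical sketch of that idea (the function name and error format are assumptions, not code from the paper):

```python
def run_self_tests(candidate_src, test_src):
    """Execute candidate code plus self-generated tests.

    Returns (passed, error_message): error_message is None on success,
    otherwise a short string suitable for feeding back as a critique.
    """
    namespace = {}
    try:
        exec(candidate_src, namespace)  # define the candidate function(s)
        exec(test_src, namespace)       # run the generated assertions
        return True, None
    except AssertionError as e:
        return False, f"test failed: {e}"
    except Exception as e:
        return False, f"error while running tests: {e!r}"
```

For example, a buggy `def add(a, b): return a - b` run against the generated test `assert add(2, 2) == 4` would come back as a failure, and that message becomes the critique the model revises against.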

 


 

In the ALFWorld test, which challenges an AI’s ability to make decisions and solve multi-step tasks by executing several different allowable actions in a variety of interactive environments, the Reflexion technique boosted GPT-4’s performance from around 73% to a near-perfect 97%, failing on only 4 of 134 tasks.

In another test, called HotPotQA, the language model was given access to Wikipedia and then 100 of a possible 13,000 question-and-answer pairs that “challenge agents to parse content and reason over several supporting documents.” In this test, GPT-4 alone scored just 34% accuracy, but GPT-4 with Reflexion managed significantly better at 54%.

More and more often, the solution to AI problems appears to be more AI. In some ways, this feels a little like a Generative Adversarial Network (GAN), in which two AIs hone each other’s skills, one trying to generate images, for example, that can’t be distinguished from “real” images, and the other trying to tell the fake ones from the real ones. But in this case, GPT is both the writer and the editor, working to improve its own output.

 


 

The paper is available on arXiv.

Source: Nano Thoughts via AI Explained
