Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, as well as the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
By getting AI to critique itself, researchers have not only given it a vital human skill but also found new ways to improve AIs without having to recode or redevelop them.
Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.
Even if the unlikely six-month moratorium on Artificial Intelligence (AI) development called for by Elon Musk and the Future of Life Institute goes ahead, GPT4 still seems capable of huge leaps forward simply by taking a good hard look at itself: when researchers asked it to critique its own work, they saw a 30% performance boost.
“It’s not everyday that humans develop novel techniques to achieve state-of-the-art standards using decision-making processes once thought to be unique to human intelligence,” wrote researchers Noah Shinn and Ashwin Gopinath. “But, that’s exactly what we did.”
The Future of AI, by keynote Matthew Griffin
The “Reflexion” technique takes GPT4’s already impressive ability to perform various tests, and introduces “a framework that allows AI agents to emulate human-like self-reflection and evaluate its performance.” Effectively, it introduces extra steps in which GPT4 designs tests to critique its own answers, looking for errors and missteps, then rewrites its solutions based on what it’s found.
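The loop described above — answer, self-critique, revise — can be sketched in a few lines of Python. This is a minimal illustration of the general pattern, not the authors' implementation; the `query_model` function is a hypothetical stand-in for a real call to GPT4, stubbed here with canned replies so the control flow can be followed end to end.

```python
def query_model(prompt: str) -> str:
    """Placeholder for a call to a language model such as GPT4.

    A real implementation would send `prompt` to the model's API and
    return its reply; this stub returns canned answers so the loop runs.
    """
    if prompt.startswith("Critique"):
        # The stub "approves" only a revised answer
        return "OK" if "revised" in prompt else "The draft misses an edge case."
    # A prompt that carries a critique yields a revised answer
    return "revised solution" if "Critique:" in prompt else "draft solution"


def reflexion(task: str, max_rounds: int = 3) -> str:
    """Answer a task, then self-critique and rewrite until the critique passes."""
    answer = query_model(f"Solve: {task}")
    for _ in range(max_rounds):
        critique = query_model(f"Critique this answer to '{task}': {answer}")
        if critique.strip() == "OK":  # the model judges its own work acceptable
            break
        # Feed the self-critique back in and ask for a rewrite
        answer = query_model(
            f"Solve: {task}. Previous attempt: {answer}. Critique: {critique}"
        )
    return answer
```

Swapping the stub for a real API client gives the basic shape of a self-reflecting agent: the extra model calls cost tokens, but no retraining or recoding of the underlying model is needed.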
The team put the technique through several different performance benchmarks. In the HumanEval test, which consists of 164 Python programming problems the model has never seen, GPT4 scored a record 67%, but with the Reflexion technique, its score jumped to a very impressive 88%.
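On a code benchmark like HumanEval, the "designs tests to critique its own answers" step becomes concrete: the model writes unit tests for its own candidate solution, and a failing test is the signal to revise. The sketch below assumes that setup; both the candidate code and the self-tests are hard-coded stand-ins for model output, not anything from the paper.

```python
def passes_self_tests(solution_src: str, test_src: str) -> bool:
    """Run model-written assertions against the model's candidate solution."""
    namespace: dict = {}
    try:
        exec(solution_src, namespace)  # define the candidate function
        exec(test_src, namespace)      # run the model's own assertions against it
        return True
    except AssertionError:
        return False                   # a self-test failed: trigger a rewrite


# Hypothetical first draft from the model, with a deliberate bug
candidate = "def add(a, b):\n    return a - b\n"
# A unit test the model wrote for its own solution
self_tests = "assert add(2, 3) == 5\n"

needs_revision = not passes_self_tests(candidate, self_tests)
```

Here `needs_revision` comes back `True`, so the failure (and ideally the traceback) would be fed back into the next prompt, closing the loop without any human in it.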
In the Alfworld test, which challenges an AI’s ability to make decisions and solve multi-step tasks by executing several different allowable actions in a variety of interactive environments, the Reflexion technique boosted GPT4’s performance from around 73% to a near-perfect 97%, failing on only 4 out of 134 tasks.
In another test called HotPotQA, the language model was given access to Wikipedia, and then given 100 out of a possible 13,000 question/answer pairs that “challenge agents to parse content and reason over several supporting documents.” In this test, GPT4 scored just 34% accuracy, but GPT4 with Reflexion managed to do significantly better with 54%.
More and more often, the solution to AI problems appears to be more AI. In some ways, this feels a little like a Generative Adversarial Network (GAN), in which two AIs hone each other’s skills, one trying to generate images, for example, that can’t be distinguished from “real” images, and the other trying to tell the fake ones from the real ones. But in this case, GPT is both the writer and the editor, working to improve its own output.
The paper is available on arXiv.