Rumours still swirl around OpenAI’s human-level AI “Q*”

WHY THIS MATTERS IN BRIEF

OpenAI have been hiding a new development, and until they clarify what it is, the rumour train will keep hurtling through the station.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

Ever since the recent dramatic events at OpenAI, the rumour mill has been in overdrive about why the company’s Chief Scientist, Ilya Sutskever, and its board decided to oust CEO Sam Altman.

 


While we still don’t know all the details, and probably never will, there have been reports that researchers at OpenAI had made a “breakthrough” in Artificial Intelligence (AI) that spooked staff members. Reuters and The Information both report that researchers had come up with a new way to make powerful AI systems and had created a new model, called Q* (pronounced Q star), that was able to perform grade-school-level math. According to the people who spoke to Reuters, some at OpenAI believe this could be a milestone in the company’s quest to build Artificial General Intelligence (AGI), a much-hyped concept referring to an AI system that is smarter than humans and, in OpenAI’s definition, one that can “outperform humans at most economically valuable work.”

 


The company declined to comment on Q*.

In the meantime, social media is full of speculation and excessive hype, so I called some experts to find out how big a deal any breakthrough in math and AI would really be.

 


Researchers have for years tried to get AI models to solve math problems. Language models like ChatGPT and GPT-4 can do some math, but not very well or reliably. We currently don’t have the algorithms, or even the right architectures, to solve math problems reliably using AI, says Wenda Li, an AI lecturer at the University of Edinburgh. Deep learning and transformers – the kind of neural network that language models use – are excellent at recognizing patterns, but that alone is likely not enough, Li adds.
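
To make Li’s point concrete, here is a minimal, purely illustrative Python sketch – my own toy example, not anything from OpenAI or Li – contrasting memorised pattern matching with actual computation. The memoriser answers perfectly on the single-digit additions it has “seen” but has nothing to say about an unseen problem, while genuine computation generalises:

```python
# Illustrative toy only: contrasts memorised pattern matching with exact
# computation. A model that merely recalls (question, answer) pairs it has
# seen cannot generalise to unseen arithmetic; actually computing can.

# "Training data": every single-digit addition problem.
training_data = {f"{a}+{b}": a + b for a in range(10) for b in range(10)}

def pattern_matcher(question):
    """Answer by lookup, the way a pure memoriser would."""
    return training_data.get(question)  # None when the pattern was never seen

def exact_solver(question):
    """Answer by actually doing the arithmetic."""
    left, right = question.split("+")
    return int(left) + int(right)

print(pattern_matcher("3+4"))     # 7 (seen during "training")
print(pattern_matcher("123+45"))  # None (unseen pattern, no answer)
print(exact_solver("123+45"))     # 168 (computation generalises)
```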

Math is a benchmark for reasoning, Li says. A machine that is able to reason about mathematics could, in theory, learn to do other tasks that build on existing information, such as writing computer code or drawing conclusions from a news article. Math is a particularly hard challenge because it requires AI models to have the capacity to reason and to really understand what they are dealing with.

A generative AI system that could reliably do math would need to have a really firm grasp on concrete definitions of particular concepts that can get very abstract. A lot of math problems also require some level of planning over multiple steps, says Katie Collins, a PhD researcher at the University of Cambridge, who specializes in math and AI. Indeed, Yann LeCun, Chief AI Scientist at Meta, posted on X and LinkedIn over the weekend that he thinks Q* is likely to be “OpenAI attempts at planning.”
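
As a toy illustration of what planning over multiple steps means in practice – and emphatically not a claim about how Q* actually works – the hedged Python sketch below searches for a sequence of simple arithmetic moves that reaches a target value, the way a planner searches over sequences of actions rather than pattern-matching a one-shot answer:

```python
# Toy multi-step planner: breadth-first search over arithmetic "moves".
# Purely illustrative of planning over multiple steps; it makes no claim
# about how OpenAI's Q* works.
from collections import deque

def plan(start, target, max_steps=10):
    """Find a shortest sequence of moves (+3 or x2) from start to target."""
    moves = [("add 3", lambda x: x + 3), ("double", lambda x: x * 2)]
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        value, steps = queue.popleft()
        if value == target:
            return steps                 # goal reached: return the plan
        if len(steps) >= max_steps:
            continue                     # depth limit reached on this branch
        for name, op in moves:
            nxt = op(value)
            if nxt not in seen and nxt <= 2 * target:  # prune runaway values
                seen.add(nxt)
                queue.append((nxt, steps + [name]))
    return None                          # no plan found within the limit

print(plan(1, 20))  # ['add 3', 'add 3', 'add 3', 'double']
```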

 


People who worry that AI poses an existential risk to humans, or could one day empower an immortal dictator – one of OpenAI’s founding concerns – fear that such capabilities might lead to rogue AI. Safety concerns might arise if such AI systems are allowed to set their own goals and start to interface with the real physical or digital world in some way, says Collins.

But while math capability might take us a step closer to more powerful AI systems, solving these sorts of math problems doesn’t signal the birth of a superintelligence.

“I don’t think it immediately gets us to AGI or scary situations,” says Collins. It’s also very important to underline what kind of math problems AI is solving, she adds.

“Solving elementary-school math problems is very, very different from pushing the boundaries of mathematics at the level of something a Fields medallist can do,” says Collins, referring to a top prize in mathematics.

 


Machine learning research has focused on solving elementary-school math problems, but state-of-the-art AI systems haven’t fully cracked this challenge yet. Some AI models fail on really simple math problems, yet excel at really hard ones, Collins says. OpenAI has, for example, developed dedicated tools that can solve challenging problems posed in competitions for top high-school math students, but these systems only outperform humans occasionally.

Nevertheless, building an AI system that can solve math equations is a cool development, if that is indeed what Q* can do. A deeper understanding of mathematics could open up applications to help scientific research and engineering, for example. The ability to generate mathematical responses could help us develop better personalized tutoring, or help mathematicians do algebra faster or solve more complicated problems.

This is also not the first time a new model has sparked AGI hype. Just last year, tech folks were saying the same things about Google DeepMind’s Gato, a “generalist” AI model that can play Atari video games, caption images, chat, and stack blocks with a real robot arm. Back then, some AI researchers claimed that DeepMind was “on the verge” of AGI because of Gato’s ability to do so many different things pretty well. Same hype machine, different AI lab.

 


And while it might be great PR, these hype cycles do more harm than good for the entire field by distracting people from the real, tangible problems around AI. Rumors about a powerful new AI model might also be a massive own goal for the regulation-averse tech sector. The EU, for example, is very close to finalizing its sweeping AI Act. One of the biggest fights right now among lawmakers is whether to give tech companies more power to regulate cutting-edge AI models on their own.

OpenAI’s board was designed as the company’s internal kill switch and governance mechanism to prevent the launch of harmful technologies. The past week’s boardroom drama has shown that the bottom line will always prevail at these companies. It will also make it harder to argue that they should be trusted to regulate themselves. Lawmakers, take note.
