
Controlling self-improving artificial super intelligence is probably impossible

WHY THIS MATTERS IN BRIEF

As AI becomes self-improving, it increasingly looks like there’s no way we could control all the variables – or it …

 


After underachieving for decades, Artificial Intelligence (AI) has suddenly become scary good. And if we’re not very careful it may become quite dangerous – so dangerous, some experts say, that it constitutes an “existential risk” to humanity.

 


 

Geoffrey Hinton, one of the more credible “Doomers” because he worked at Google for many years and is known as the “Godfather of AI,” has repeatedly warned that these threats are not just science fiction. He has said, for example: “It is hard to see how you can prevent the bad actors from using [AI] for bad things.” Such as creating malware, ransomware, and all kinds of fraud, as I’ve discussed previously.

The White House, some world leaders, and many AI companies are trying to stop this from happening, but many people, including the likes of Elon Musk, say their efforts won’t be enough, and that if we want to make sure AI doesn’t cause irreparable harm we need to halt the development of AI models until we have a proper discussion about AI safety.

 

The Future of Artificial Intelligence, keynote by Matthew Griffin

 

For many people hearing about AI concerns, there’s a disconnect between today’s chatbots like ChatGPT, Bard and Claude 2, and the potential for a world-dominating evil super intelligent AI. So just how do we get from here to there?

The key idea is exponential improvement in AI, which will likely cross the threshold of better-than-human intelligence quite soon, with some estimates, from the likes of Sam Altman and OpenAI, putting Artificial General Intelligence (AGI) at 2028. AGI is generally described as AI that is as good as, or better than, humans at most or all cognitive tasks, such as language, problem solving, math, reasoning and creativity. And, crucially, once achieved, AGI will be able to build smarter AI, as we’re already seeing, and at a much faster rate than humans could, which means it will be able to improve itself exponentially. When this happens we will probably go through a “Foom!” moment of extremely rapid intelligence growth and then reach what some call Artificial Super Intelligence (ASI) by approximately 2045.

 


 

ASI can be thought of as AI with godlike powers. If the smartest human who ever lived had an IQ of maybe 200, it’s thought that AGI could have an IQ of 1,600 and that ASI could have an IQ of one million or more – but of course any human-created test scale would have no meaning to an entity this intelligent.

AGI and ASI will likely be able to build superhuman robots that will become their bodies. Whether those robots are controlled by AI or by humans, they will, at the least, change everything we do in human societies, and at worst will be used by unscrupulous governments and corporations (or unscrupulous AI) to exert control over humanity.

The more likely near-term risk, however, is not runaway autonomous AI but humans using AGI/ASI for malicious objectives. Great power competition, for example between the US and China, and the AI arms race that is already underway could easily reach the point where autonomous AI takes over almost all aspects of strategy and the waging of war, and humans simply lose control of the plot.

Many efforts are underway to prevent dangerous AI scenarios, as mentioned. The White House issued a far-reaching executive order in November that positions the federal government to respond to AI in many areas. World leaders met in the UK to discuss AI safety and issued the Bletchley Declaration, which sets an international process in motion. Industry leaders such as OpenAI have launched a Superalignment initiative and the Frontier Model Forum. And OpenAI and Anthropic, a competitor founded by former OpenAI employees, were both created specifically to focus on safer AI.

 


 

But none of these efforts will achieve their objective of making AGI safe.

We now know there is no solution to what is called “the control problem” or “the alignment problem” of AI. Computer science professor Roman Yampolskiy detailed why in a 2022 Journal of Cyber Security and Mobility paper. His argument centers on how AI works and on making and verifying predictions about it – something nigh impossible even with today’s AI, often described as a “black box,” let alone the super intelligent AI on the horizon. He found we cannot understand how AI operates now or predict its future actions, even while AI is still far short of superintelligence, which nixes any hope of controlling the technology as it gets smarter and smarter. The bottom line is that AI will become increasingly inscrutable, and thus uncontrollable, to mere humans as it trends toward AGI/ASI.

Imagining we can understand AGI/ASI, let alone control it, is like thinking a strand of a spider’s web could restrain Godzilla. Any solutions we can develop will be only probabilistic, not airtight. And with AGI likely fooming into super intelligence essentially overnight, probabilistic solutions aren’t good enough, because AI will be so smart it will exploit any hole, no matter how small. Has the “foom” already happened? Reports about the infamous Q* AI model, which surfaced in the wake of the bizarre drama at OpenAI in November, suggest it may have.

 


 

If leaky solutions are all we’ll ever have, all efforts to create “safer AI,” from executive orders to industry standards, amount to enabling irresponsible development of ever-more-powerful AI, under the view that someone, somewhere, will have the solutions in time.

But what if real solutions won’t be forthcoming, ever, per the logic I’ve described here? We will then have opened Pandora’s Box with no way to close it.

When OpenAI’s head of AI safety, Jan Leike, was asked in a dialogue on these issues about the need for essentially perfect solutions to AI safety, given the magnitude of the risk, he responded that “there is no ‘perfect’ in the real world, but there is a ‘good enough’ and ‘not good enough.’ What exactly the bar is depends on how the technology develops.”

 


 

When asked what happens if “foom” arrives before there are any strong (probabilistic) solutions to the control problem, he said, “You already know the answer to this question.”

The clear conclusion from this chain of logic, for many people, is that we need to pause “Frontier AI” development – the development of new massive AI language models like GPT-5 – now, globally, while we hold a collective debate on AI safety. But let’s face it, that’s not going to happen, in which case I guess we all have to keep an ear out for that “foom” moment.

 
