Control is an illusion, and historically the smartest species has always wiped out less capable ones, so people worry that is exactly what AI could do to humanity.



Have you ever heard people ask whether AI will destroy the world, or whether we’ll ever be able to control future Artificial Intelligences? If not, then firstly, what rock have you been hiding under, and is there space for one more? And if you have, then you’ll know that no one ever comes up with a decent answer.




That said though, and for what it’s worth, every once in a while Elon Musk tells everyone that one day AI could become an immortal dictator, which suggests he thinks we couldn’t control it, and every once in a while Google announces it still hasn’t succeeded in creating a kill switch that would let it terminate rogue AIs, which, again, suggests more of the same. And let’s not even go anywhere near the “Doomsday Games” event, where hundreds of the world’s top experts and scientists couldn’t figure out how to solve the majority of the world’s doomsday scenarios, or the time Google demonstrated that more powerful AIs get “aggressive” and “kill” weaker ones …

And as for my answer, it’s also no, categorically, especially as we get closer to realising the dawn of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI) – two events which will change our world beyond all recognition. After all, we can’t even control what dumb software does to us, admittedly thanks mostly to bugs, hackers, and robo-hackers, so what chance would we have against “intelligent software” that can design, write, and evolve its own code, sometimes spontaneously – and all at near infinite speed?




Now scientists in the US have just delivered their verdict on whether we’d be able to control a high-level computer super-intelligence. The answer? Almost definitely not.

The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyse. But if we’re unable to comprehend it, it’s impossible to create such a simulation.

Rules such as “Cause no harm to humans” can’t be set if we don’t understand the kinds of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level beyond the scope of its programmers, we can no longer set limits.




“A super-intelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” write the researchers.

“This is because a super-intelligence is multi-faceted, and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning comes from the halting problem, put forward by Alan Turing in 1936. The problem centres on knowing whether or not a computer program will ever reach a conclusion and an answer – so it halts – or simply loop forever trying to find one.




As Turing managed to prove with some smart maths, while we can know the answer for some specific programs, it’s logically impossible to find a general method that tells us the answer for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory all at once.
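Turing’s proof can be sketched in a few lines of Python. This is an illustrative toy, not code from the paper: `halts()` stands in for any claimed universal halting oracle, and `contrarian()` is a program built to do the opposite of whatever that oracle predicts about it, so no oracle can ever be right about every program.

```python
def halts(program):
    """Stand-in for a claimed universal halting oracle.
    Turing proved no correct version of this can exist; this naive
    one simply predicts that every program loops forever."""
    return False

def contrarian():
    """Does the opposite of whatever the oracle predicts about it."""
    if halts(contrarian):
        while True:      # oracle said "halts" -> loop forever
            pass
    return "halted"      # oracle said "loops forever" -> halt at once

print(halts(contrarian))  # False: the oracle predicts it loops ...
print(contrarian())       # ... yet it halts, refuting the oracle
```

Whatever fixed answer `halts()` gives about `contrarian`, running `contrarian()` contradicts it, which is exactly why a guaranteed containment check is off the table.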

Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion and halt – or it may not. It’s mathematically impossible for us to be absolutely sure either way, which means the AI isn’t “containable.”
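In practice the best any real-world check can do is run a program under a step budget and report “halted” or “don’t know” – it can never prove “loops forever.” A minimal sketch of that limitation, with function names that are my own illustration rather than anything from the paper:

```python
def halts_within(program, budget):
    """Run `program` (a zero-argument generator function) for at most
    `budget` steps. Returns True if it finishes in time, or None
    ("unknown") if the budget runs out -- a bounded check can never
    return a definitive "loops forever"."""
    steps = program()
    for _ in range(budget):
        try:
            next(steps)
        except StopIteration:
            return True   # program halted within the budget
    return None           # budget exhausted: halting status unknown

def short_task():
    for _ in range(10):
        yield

def endless_task():
    while True:
        yield

print(halts_within(short_task, 100))    # True
print(halts_within(endless_task, 100))  # None -- notably not "False"
```

However large the budget, the `None` case never becomes a proof, which is the gap the containment algorithm can’t close.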

“In effect, this makes the containment algorithm unusable,” says computer scientist Iyad Rahwan, from the Max Planck Institute for Human Development in Germany.




The alternative to teaching AI some ethics and telling it not to destroy the world – something which no algorithm can be absolutely certain of doing, the researchers say – is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example.

The new study rejects this idea too, suggesting that it would limit the reach of the AI – the argument goes that if we’re not going to use it to solve problems beyond the scope of humans, then why create it in the first place?

If we are going to push ahead with AI then we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we’re going in.




“A super-intelligent machine that controls the world sounds like science fiction,” says computer scientist Manuel Cebrian, from the Max Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

The research has been published in the Journal of Artificial Intelligence Research.

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
