
Will Artificial Intelligence really destroy the world?

WHY THIS MATTERS IN BRIEF

Artificial Intelligence says push the red button, but the ant says “No.”

 

As I write this article I’m travelling on an Airbus A380-800, one of the most advanced passenger planes in the world. Yet despite the fact that we have the technology to operate it fully autonomously, airlines still provide pilots and Airbus still builds a cockpit.

Why? Because they know we are all more comfortable having a human in the loop – despite the fact that, ironically, real world trials at Heathrow have shown fully autonomous aircraft to be safer and more efficient than their 20th century equivalents. UAV manufacturers in the defence industry, on the other hand, have no such compunction about removing the human.

Having a human in the loop gives us comfort and a false assurance that, despite everything, we are still in control. However, over the next twenty years there are going to be fewer and fewer humans in the loop as technologies from Artificial Intelligence to Autonomous Vehicles kick us to the curb.

 

Life without an off switch

Control is an interesting concept or, depending on your view, illusion. It’s this question – who has control in a future with AI – that is fuelling debates at the highest levels on everything from regulation to the safety of our nuclear arsenals.

Ever since the first AI research was conducted in the 1950s, people have been theorising and debating about what happens when the machines finally cross the threshold where they are more intelligent than we are. Or, to put it another way, when they, not us, are the most intelligent entities on the planet.

 

Putting it in Perspective

It’s our intelligence, not our instincts, that helps us control our actions, our behaviours and our environment – unlike so many other species – and it’s this intelligence, combined with the control it brings, that has helped us come as far as we have.

It’s easy to argue that it’s our intelligence that has allowed us to conquer and disrupt nature – to wipe out species, create new ones and change the face of our planet. In a future with AI, people hope that this is a path that doesn’t get repeated, because this time round, if a more intelligent entity ends up dominating less intelligent ones, for once it won’t be us.

If you think for a minute, though, that AI will be confined to just the digital world, think again. 3D and 4D printing, autonomous factories, drones, robots and synthetic biology – and we’re only scratching the surface of the not so distant future – will help AI exert an increasing influence in the physical world as well.

When you behave instinctively your actions are guided only by the need to survive, but intelligence changes that paradigm. Intelligence allows us to override our instinctive behaviours to create new outcomes.

 

Safety in Convergence

Today’s AI systems fall into one of two camps: Artificial Narrow Intelligence (ANI) systems, which are superb – read “better than humans” – at providing answers to specific use cases, and Artificial General Intelligence (AGI) systems, which are good, but not great, at providing answers across a broader range of use cases. Both types are known as Convergent Artificial Intelligence (CAI) systems because they ingest information and filter it down into an answer.

If these systems were biological we would say that their capabilities are the machine equivalents of our primordial, pre-programmed instincts – “If this, then that.” No matter how intelligent they become, these systems will always be guided by their programming. The question then becomes who, or what, is programming them… but that’s another topic.
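To make that “If this, then that” behaviour concrete, here is a minimal sketch of a convergent, rule-driven system. The rules, sensor readings and action names are all hypothetical, chosen purely for illustration:

```python
# A minimal sketch of a convergent, "If this, then that" system.
# The rules, sensors and actions below are illustrative assumptions only.

RULES = [
    # (condition on sensor readings, pre-programmed action)
    (lambda s: s["temperature"] > 90, "open_cooling_valve"),
    (lambda s: s["pressure"] < 10,    "sound_alarm"),
]

def decide(sensors: dict) -> str:
    """Convergent behaviour: filter the inputs down to one fixed answer."""
    for condition, action in RULES:
        if condition(sensors):
            return action
    return "do_nothing"

print(decide({"temperature": 95, "pressure": 40}))  # -> open_cooling_valve
```

However capable such a system becomes, its behaviour never escapes the rule table it was given – which is exactly the point the next section turns on.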

 

And, and, and

So long as we have CAI systems obeying their logical programming – and, of course, provided the people programming those AIs don’t intend to cause harm – then we have nothing to fear from the machines. Why? Because they are still under our control and behaving within the tolerances of their programming, which, to all intents and purposes, is fixed.

So why do so many leaders – people like Stephen Hawking, Bill Gates and Elon Musk – believe that AI could doom the human race? Yes, our world is increasingly inter-connected and reliant on technology to run, but if the machines are acting in accordance with their programming, if civic minded people and organisations, such as Elon Musk’s OpenAI project, are programming them, and if that programming is fixed, then isn’t the likelihood that they’ll send a fleet of ICBMs arcing through the stratosphere nil? It’s easy to argue that the answer is yes, and we can all go back home and feel safe again.

 

Breaking the programming

However – and you knew that was coming – the problems, from our perspective at least, come about when the machines manage to break their programming, whether because we’ve designed them that way, which is the most likely route, or because of some quirk of chaos theory where, for example, a power spike produces an error in an algorithm and kicks off an evolutionary chain reaction, in the same way that UV light can trigger a mutation in DNA that leads to cancer. Unlike in nature, though, evolution in the machine world can happen almost instantaneously, with thousands of new variants being generated in just a few seconds.
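To give a feel for that speed, here is a toy sketch of machine-speed mutation: random bit flips in a parameter string, the loose analogue of a UV-induced DNA mutation. The genome length, mutation rate and variant count are illustrative assumptions, not a claim about any real system:

```python
import random

def mutate(genome: str, rate: float = 0.05) -> str:
    """Flip each bit with a small probability - a toy analogue of a
    UV-induced mutation in DNA (rate and genome are assumptions)."""
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < rate else bit
        for bit in genome
    )

parent = "0" * 64
# In nature a generation takes years; here 10,000 variants take milliseconds.
variants = {mutate(parent) for _ in range(10_000)}
print(f"{len(variants)} distinct variants in a fraction of a second")
```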

For an autonomous AI to become a threat it first needs to accrue particular capabilities. The first capability on the slippery slope is the ability to autonomously self-program, and we’re there today: in 2015 over 20 percent of all algorithms produced were created by machines, not humans. The second is the ability to self-learn, and again we’re there today: recently a “Young AI” in Sweden managed to master speaking without anyone ever programming it to. The third is the ability to act and, with the right programming and machine access, software bots, such as those behind many of the world’s trading systems and those that manage Google’s giant hyperscale cloud platform, can already act and autonomously execute actions.
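A toy sense-decide-act loop shows how these three capabilities might combine. Everything here – the signal, the reward and the single-parameter “program” – is a hypothetical stand-in, not a description of any real trading or cloud system:

```python
import random

# A toy agent that rewrites its own rule (self-programming), updates it
# from feedback (self-learning), and executes decisions with no human in
# the loop (acting). All names and numbers are illustrative assumptions.

threshold = 0.5  # the agent's entire "program": one self-modifiable value

def act(signal: float) -> str:
    return "BUY" if signal > threshold else "HOLD"

for step in range(5):
    signal = random.random()            # sense the environment
    decision = act(signal)              # decide and act autonomously
    reward = random.choice([-1, 1])     # feedback from the environment
    threshold -= 0.01 * reward          # self-learn: nudge its own program
    print(step, f"{signal:.2f}", decision, f"threshold={threshold:.2f}")
```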

Fourthly, for an AI to be genuinely destructive in the way the experts fear, there’s a technological chasm that needs to be crossed: thinking isn’t enough, the machines need to be autonomous and have an agenda. As humans we have a moral compass that guides us, and it’s this – the perceived lack of a moral compass in machines – that’s scaring people.

Creative and innovative thinking machines, such as today’s new Divergent Artificial Intelligence (DAI) systems – seen as the next generation of AI – are already showing that they can be self-aware, self-evolving and self-healing by using “What if” rather than “If this, then that” approaches. But despite these great leaps they still lack a moral compass – of any kind.
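Contrast the rule-table sketch earlier with a “What if” step: rather than matching a fixed condition, the system imagines several candidate actions, simulates each, and keeps the most promising one. The candidate actions and the toy simulator below are assumptions for illustration:

```python
import random

# A sketch of a divergent "What if" step. A real system would roll a
# learned world model forward; here a random score stands in for it.

ACTIONS = ["reroute_power", "throttle_cpu", "spawn_replica", "do_nothing"]

def simulate(action: str) -> float:
    """Stand-in forward model: score the imagined outcome of an action."""
    return random.random()

def what_if() -> str:
    outcomes = {action: simulate(action) for action in ACTIONS}  # imagine each
    return max(outcomes, key=outcomes.get)                       # keep the best

print(what_if())
```

Nothing in that loop asks whether the best-scoring action is a *permissible* one – which is precisely the missing moral compass.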

Fortunately a new branch of AI known as Swarm Intelligence is emerging, and the hope is that by replicating the socio-behavioural characteristics found in insect swarms, such as those of bees and ants, we can build a moral compass into today’s CAI systems and tomorrow’s DAI systems.
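One way to picture that swarm-as-moral-compass idea is quorum voting, modelled loosely on how honeybee swarms reach decisions: many simple agents judge a proposed action locally, and the swarm only permits it once a supermajority agrees. The agents, their judgement and the quorum level below are all illustrative assumptions:

```python
import random

def agent_votes_yes(action: str) -> bool:
    """Each agent applies its own simple, local judgement (a toy stand-in)."""
    if action == "push_red_button":
        return random.random() < 0.05  # almost every agent refuses
    return random.random() < 0.90      # routine actions usually pass

def swarm_permits(action: str, agents: int = 1000, quorum: float = 0.8) -> bool:
    """Permit an action only if a supermajority of the swarm agrees."""
    yes = sum(agent_votes_yes(action) for _ in range(agents))
    return yes / agents >= quorum

print(swarm_permits("push_red_button"))  # -> False: the ants say no
```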

The AI supercomputer of the future says push the big red button and, with no humans left in the loop, only the ant can say no.
