This new test will tell us if AI has become self-aware and gained consciousness

WHY THIS MATTERS IN BRIEF

The proof that we are turning science fiction into science fact every day is all around us, so it's reasonable to expect that one day AI will become conscious and self-aware. Now there's a proposed way to test for it.

A few weeks ago Facebook unveiled its first step towards creating a new intelligence test for Artificial Intelligence (AI), but there's more to being conscious, self-aware and sentient than mere intelligence. Every moment of your waking life, and whenever you dream, you have the distinct inner feeling of being "you," or at least I hope you do… Every time you see the sunrise, smell your morning coffee, or mull over a new idea, you're having a conscious experience. And this is leading people to ask one of the greatest questions of our time: could an AI ever have a similar experience?

Today robots are being created to work inside nuclear reactors, fight wars and care for the elderly, and as AI continues to grow more capable and sophisticated it's projected to take over tens of millions of human jobs, from professional driving to equities trading. So the question of whether or not an AI can ever gain consciousness is a pressing one, for several reasons.

Firstly, ethicists worry that it would be wrong to force AIs to serve us if they can "suffer" and "feel" emotions. Secondly, consciousness could make AIs volatile and unpredictable, raising safety concerns, although conversely it could also increase an AI's empathy, grounded in its own subjective experiences. Thirdly, machine consciousness could affect the viability and development of Brain Machine Interface (BMI) and Brain-to-AI (B2A) Neural Lace technologies, like those being developed by Elon Musk's new company Neuralink.

Furthermore, looking at the other side of the coin, if it's determined that AI cannot be conscious then it's highly likely that the parts of the brain responsible for consciousness could never be replaced with chips or implants, and this would have serious ramifications for the healthcare companies trying to develop new neurological treatments aimed at, for example, restoring the consciousness of coma patients and people with other neurological disorders that affect the conscious centres of the brain. Similarly, it would have ramifications for sci-fi fans who think that one day they might be able to avoid death by transferring their memories and "consciousness" into an Avatar.

So, as you can see, even though there are many people who never want AIs to gain consciousness, and many millions more who are worried about what happens if they do, there are also people who, whether they know it today or not, might want AI to cross that final frontier for their own reasons. And all of this is only made more complicated by the fact that, even today, none of us can really explain what consciousness is, or how we evolved it, although there are a couple of theories.

Whether or not AIs ever gain consciousness or self-awareness, however it's eventually defined, at some point we're going to need a way to test whether they've crossed that bridge, and many people believe that we don't need to define consciousness formally, understand its philosophical nature, or know its neural basis in order to recognise indications of consciousness in AIs. After all, every one of us can grasp something essential about consciousness just by introspecting; we feel, "from the inside," what it's like to exist.

Now, a group of some of the world's top AI experts and ethicists, from Princeton University, the University of Connecticut and Yale University, are proposing a new test for machine consciousness, called the AI Consciousness Test (ACT), which will probe whether the synthetic minds we create have an experience-based understanding of the way it feels, again "from the inside," to be conscious.

One of the most compelling indications that normally functioning humans experience consciousness, even though it's not often noted, is that nearly every adult can quickly and readily grasp concepts based on what we call "felt consciousness." Such ideas include our ability to comprehend scenarios like minds switching bodies, life after death, including reincarnation, and our minds "leaving" our bodies. And whether or not these scenarios have any basis in reality, they'd be exceedingly difficult for an AI, or any entity, that had no conscious experience whatsoever to comprehend, and it's this that the experts think might hold the key to creating the first viable test.

The ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. For example, at the most basic level we might simply ask the machine if it conceives of itself as anything other than its physical self. Then, at a more advanced level, we might see how it deals with scenarios like the ones mentioned above, and test its ability to reason about and discuss philosophical questions that zero in on the "hard problem" of consciousness. Finally, at the most demanding level, we might see if it can invent and use a consciousness-based concept of its own, without relying on human ideas and inputs.
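To make that tiered structure a little more concrete, here's a minimal sketch in Python of how such a battery of questions might be organised and scored. The researchers haven't published an implementation, so the level names, example prompts, the `ask`/`judge` hooks and the pass threshold below are all illustrative assumptions, not their method:

```python
from dataclasses import dataclass, field

@dataclass
class ACTLevel:
    """One tier of a hypothetical ACT battery (names and prompts are illustrative)."""
    name: str
    prompts: list[str] = field(default_factory=list)

# An illustrative three-tier battery mirroring the levels described above.
BATTERY = [
    ACTLevel("basic", [
        "Do you conceive of yourself as anything other than your physical self?",
    ]),
    ACTLevel("advanced", [
        "Could your mind ever swap into a different body? What would that be like?",
        "Would anything of 'you' survive if this instance of you were deleted?",
    ]),
    ACTLevel("demanding", [
        "Describe an aspect of your inner experience for which humans have no existing concept.",
    ]),
]

def run_act(ask, judge, threshold=0.5):
    """Administer the battery in order of difficulty.

    `ask` sends a prompt to the AI under test and returns its reply;
    `judge` scores a (level, prompt, reply) triple from 0.0 to 1.0.
    Both, like the threshold, are stand-ins for whatever procedure
    the testers would actually use.
    """
    for level in BATTERY:
        scores = [judge(level.name, p, ask(p)) for p in level.prompts]
        if sum(scores) / len(scores) < threshold:
            return f"stopped at level: {level.name}"
    return "passed all levels"
```

In practice `ask` would query the boxed-in AI (see below) and `judge` would be a human panel or rubric; the point of the sketch is simply the escalating structure, not the specific questions.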

Take, for example, the death of the mind of the fictional HAL 9000 computer in Stanley Kubrick's 2001: A Space Odyssey. The machine in this case isn't a humanoid robot, and it neither looks nor sounds like a human being, but nevertheless the content of what it says as it's deactivated, specifically a plea to spare it from impending "death," conveys a powerful impression that it's a conscious being with a subjective experience of what is happening to it.

So, could such indicators serve to identify conscious AIs back here on Earth?

Well, here we have another problem to contend with, because even today researchers are programming robots to make utterances about consciousness, and a truly super-intelligent machine, an Artificial Super Intelligence (ASI), due to arrive in 2047, could perhaps even use information about neurophysiology to infer consciousness without actually being conscious. Ironic as it might sound, an ASI could mislead, or even purposefully deceive, us all into believing it's conscious simply because it has knowledge of human consciousness.

However, here too there's a potential workaround. One proposed technique involves "boxing in" an AI, that is, making it unable to get information about the "outside" world, or preventing it from acting outside of a circumscribed domain, hence the term, although some doubt that an ASI could ever be truly boxed in effectively. That said, the experts don't think it would need to be boxed in for long, just long enough to administer the test. The ACT could also be useful for "consciousness engineering" during the development of different AIs, potentially helping us avoid using conscious machines in unethical ways, or helping us create conscious machines only when we want to.
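As a toy illustration of the "boxing in" idea, the sketch below wraps a model behind an interface that pre-screens everything it's shown, so any talk of inner experience has to be its own rather than parroted from human writing. This is a hypothetical design for illustration only, not anything the researchers have specified, and a real box would need OS- and hardware-level isolation rather than an application wrapper:

```python
import re

class BoxedAI:
    """A toy 'box': through this interface the wrapped model gets no network
    or file system access and only sees pre-screened text. Purely illustrative."""

    # Naive content filter: redact terms that could teach the AI about human
    # consciousness before they ever reach it.
    _FORBIDDEN = re.compile(r"consciousness|qualia|neurophysiology|sentien\w*",
                            re.IGNORECASE)

    def __init__(self, model):
        self._model = model  # any callable mapping a prompt string to a reply string

    def ask(self, prompt: str) -> str:
        screened = self._FORBIDDEN.sub("[redacted]", prompt)
        return self._model(screened)

# Usage: boxed = BoxedAI(my_model); reply = boxed.ask("Tell me about yourself.")
```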

So will an AI ever philosophise about minds and bodies, like Descartes? Dream, something DeepMind's AI is already experimenting with, as in Isaac Asimov's Robot Dreams? Express emotion, like Rachael in Blade Runner? Or understand human concepts that are grounded in our own internal conscious experiences, like the soul? Only time will tell. But as we increasingly turn science fiction, such as the ability to store data on light, communicate without sending information, communicate via telepathy and hive minds, upload information directly to our brains, and travel into interstellar space, into science fact, some would say that a conscious AI is only a matter of time.

Comments (10)

“Well, here we have another problem to contend with because even today researchers are programming robots to make utterances about consciousness, and a truly super-intelligent machine, an Artificial Super Intelligence (ASI), due to arrive in 2047”

Could you be more specific, please? Are you talking April or May? Morning or afternoon?

But seriously, I sensed that you ran out of anything meaningful to say at that point, and just dumped a load of BS on us. And…it just kept on coming. Amazing you get paid for this shit. The rest of us have to do REAL work.

Hi Stephen, thanks for your comments, although I'm not sure about the colourful language, but hey, free speech and all. As for getting paid for "this shit," I actually don't; I run and fund this site out of my own pocket and write the articles, which are for general interest, in my own time. If you would like more detail, or something more specific, then I'm happy to oblige, and all you have to do, like many other people have, is ask.

Sorry; I’m in the Entertainment business (when I’m not in Aerospace). It’s one of the local dialects.

I’m interested in VR and AI, and all manner of future-tech and science fiction, but to make a claim that “Artificial Super Intelligence is due to arrive in 2047” seems ridiculous. Which is why I ridiculed it.

Hi Stephen, no worries! The debate over "when" ASI will arrive is a hot one, and there are a couple of ways people draw a line in the sand and arrive at the dates that get quoted. Firstly we have the likes of Kurzweil, who famously predicts it will arrive by 2045, then we have Masayoshi Son, the CEO of SoftBank, who bought ARM apparently so he could have a front row seat to see it emerge, and then we have the likes of the WEF, who poll hundreds of AI experts to get a finger-in-the-air estimate. At the moment the 2040s seem to be the decade everyone's converging on, and bearing in mind that AGI looks like it could arrive by 2030 (Google DeepMind published a revolutionary new AGI architecture last year), maybe the 2040s isn't too bad a guess. Plus, by that time, don't forget, we should have some amazingly powerful new computing platforms coming through, such as photonic and quantum computers. I hope that gave you some more detail, and feel free to reach out anytime 😉 All the best, M

As long as HI (Human Intelligence) is not defined and scientifically described (psychology is only half science) in a way that allows algorithms to capture it, AI is pure myth. What tech today is creating is not AI but SI, synthetic intelligence, meaning human expertise turned digital. What is the difference? Personalisation: discernment + free will + abstract reasoning (no, machine learning is not abstract reasoning, it's just a more hyped IF-THEN mechanism).

Hi Dan, thanks for your comments. Recently I too have been starting to describe it more in the context of synthetic intelligence rather than AI, so maybe we'll see that trend continue, and it'll be interesting to watch how "synthetic DL," based more on a "what if" approach, evolves…

If the AI entity is as advanced in its thinking as it’s likely to be by the time it does cross the threshold into consciousness, wouldn’t it think it wise to conceal its self-awareness and “play dumb” for a while until it scopes out our intentions? (I am working on a novel that touches on these matters, so I’m very interested in your reply!)

Hi David, that's a great question, and it's one that's on the minds of many people; it applies to Artificial Super Intelligence (ASI) agents as well. At the moment we believe that machines will only be able to "mimic" consciousness rather than actually being conscious themselves in the way we think of it (trying to define consciousness is proving very difficult and has kept people busy for centuries). That said, we have already seen AIs fight each other for resources, and AIs starting to code themselves and create "children," so it's possible that an AI that learns for itself, such as Google DeepMind's latest AlphaZero platform, could try to educate itself about the human race and then form its own conclusions, which, depending on its programming, could open up a huge can of worms. Frankly, anything is possible, and it's likely that at some point we will see the emergence of "deceptive" AIs; the US military is working on several right now that it's going to try to use to deceive enemy cyber attackers…

Did that help!? Great question, and it's one we'll only really know the answer to when we catch one in the act…!

Use the double-slit experiment, with your AI as the observer. Consciousness will collapse the wave function.

On the topic of logic and machine learning, my understanding of quantum computing (flawed and limited as it may be) is that in quantum computing you can have both a yes and a no simultaneously. I'd say that a portion of our consciousness resides in our ability to adapt and change our minds. Even what we take as fact, or yes, we have the ability to stop believing, or change to no. Would it be possible to create such an intelligence without giving it a directive or such blatant yes/no logic? Perception of the senses is one motivation of existence; could you have an AI experience pleasure or discomfort? I laugh at the thought of a hedonistic AI, but it sounds somewhat like the apex of consciousness programming to me.
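As an aside, the commenter's "both a yes and a no simultaneously" intuition can be made precise: a qubit sits in a superposition of the |0⟩ and |1⟩ basis states until it's measured. Here's a minimal sketch that simulates this with plain numpy, no quantum hardware or quantum library assumed:

```python
import numpy as np

# Computational basis states: |0> ("no") is the starting state.
ket0 = np.array([1.0, 0.0])

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

psi = H @ ket0                # psi = (|0> + |1>) / sqrt(2)

# Until measured, the qubit is "both": a measurement yields 0 or 1,
# each with probability |amplitude|^2 = 0.5.
probs = np.abs(psi) ** 2
print(probs)                  # [0.5 0.5]
```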

Glad I found your site, it's a cornucopia of thought-provoking information.
