
The Godfather of AI says the technology is already conscious

WHY THIS MATTERS IN BRIEF

We don’t understand human consciousness yet, but many are already saying AI is conscious … how does that work? And is it?

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

In a widely shared video clip, the Nobel-winning computer scientist Geoffrey Hinton told LBC’s Andrew Marr in the UK that current Artificial Intelligence (AI) models are conscious – something we have no agreed test for. Asked if he believes that consciousness has already arrived inside AIs, Hinton replied without qualification, “Yes, I do.”

Hinton appears to believe that systems like ChatGPT and DeepSeek do not just imitate awareness, but have subjective experiences of their own – which isn’t necessarily as far-fetched as it sounds, as researchers recently discovered that AI reasoning models literally talk to themselves to arrive at answers. This is a startling claim coming from a leading authority in the field.

 


 

Many experts will disagree with Hinton. Even so, we have arrived at a historically unprecedented situation in which expert opinion is divided on whether tech companies are inadvertently creating conscious lifeforms – with AI lab Anthropic even hiring an AI Welfare Officer a while ago, just in case. This situation could become a moral and regulatory nightmare.

What makes Hinton believe current AIs are conscious? In the viral clip, he invokes a suggestive line of reasoning. Suppose I replace one neuron in your brain with a silicon circuit that behaves the same way. Are you still conscious? The answer is, surely, yes. Hinton infers that the same will be true if a second neuron is replaced, and a third, and so on.

The outcome of this process, Hinton supposes, would be a person with a circuit board in place of a brain who is nonetheless conscious. Why, then, should we doubt that existing AIs are also conscious? In making this argument, Hinton strays from computer science into philosophy.

The trouble is that this argument proves too much. You would also remain conscious after having one neuron in your brain replaced by a microscopic rubber duck. Likewise for the second neuron, and the third. But somewhere in this process, consciousness would cease. The same might be true of silicon circuits.

 


 

We shouldn’t be too sanguine about this reply, however. For one thing, there exist other arguments for the view that current AIs might have achieved consciousness. An influential 2023 study suggests a 10 percent probability that existing Large Language Models (LLMs) are conscious, rising to 25 percent within the next decade.

Furthermore, many of the serious practical, moral, and legal challenges associated with conscious AI arise just so long as a significant number of experts believe that such a thing exists. The fact that they might be mistaken does not get us out of the woods.

Remember Blake Lemoine, the senior software engineer who announced that Google’s LaMDA model had achieved sentience, and urged the company to seek the program’s consent before running experiments on it?

Google was able to dismiss Lemoine for violating employment and data security policies, thereby shifting the focus from Lemoine’s claims about LaMDA to humdrum matters of employee responsibilities. But companies like Google will not always be able to rely on such policies – or on California’s permissive employment law – to shake off employees who arrive at inconvenient conclusions about AI consciousness.

 


 

As the Lemoine case illustrates, we face an immediate practical problem of perceived AI consciousness. Other examples of this problem are easy to foresee. Imagine the case of someone falling deeply in love with their AI and insisting that it is a sentient partner worthy of marriage. Or consider the prospect of advocates rallying for legal rights on behalf of an AI “friend.”

What should we do about such cases when the people involved are able to back up their beliefs by appealing to experts such as Hinton?

Companies like Google, Microsoft, and OpenAI put enormous resources into AI ethics teams working on such tasks as mitigating bias and curbing harmful content, but few of those resources are directed at the problem of perceived consciousness.

Perhaps we shouldn’t be surprised. Addressing the problem of perceived AI consciousness means taking a stand on profound philosophical puzzles that fall well beyond the ordinary purview of software developers. These companies might well prefer to steer clear of the issue while they can get away with it, and to keep whatever discussions they are having on the subject strictly in-house.

 


 

This approach cannot be maintained indefinitely, however. As Hinton says later on in the LBC interview, “There’s all sorts of things we have only the dimmest understanding of at the present about the nature of people, about what it means to have a self… And they’re becoming crucial to understand.”
