
Google DeepMind has given its AI the ability to imagine and ultimately innovate

WHY THIS MATTERS IN BRIEF

Imagination is thought to be a uniquely human trait, but now that AIs are being taught how to imagine, not only will they get better at understanding the consequences of their actions, they could become better innovators too.

 

Researchers at Google’s infamous DeepMind Artificial Intelligence (AI) labs, who have so far given their AIs a human-like memory and taught them to build new AIs, dream, fight, learn like humans and play games, all while drawing up a new architecture to reach Artificial General Intelligence (AGI), this week announced that they’ve begun developing an AI with an imagination that can think through the consequences of its actions before acting on them, and there could be huge ramifications for the future of innovation.

 


 

The team behind the idea, who are based out of London, think that the ability for AIs to imagine is going to be crucial in helping develop the algorithms of the future, and in helping AIs better adapt to changing conditions.

That said, as we start to see the rise of creative AIs, which are becoming increasingly skilled at iterative innovation, the form of innovation where you take something that already exists and incrementally improve it, and which have so far designed everything from chairs to shoes, it’s also possible that this new technique could be used to help create the first generation of truly “innovative” AIs capable of primary innovation, that is to say creating things no one has ever seen before, or even of disruptive and radical innovation.

 


 

After all, if these AIs can plug into big data sources such as Facebook and Twitter to identify new human needs and gaps in the market, something they can already do today at scale, and then fashion new product recommendations or even build new products, for example new applications, something Microsoft’s DeepCoder platform can already do, albeit at a limited scale, or even new music or movies, then all of a sudden the leap from iterative to primary innovation doesn’t seem as big as it used to.

Furthermore, if we can crack the algorithm for human intelligence, then it’s inevitable that one day, probably sooner rather than later and with the right focus, we’ll crack the algorithm for innovation and creativity too. After all, everything’s an algorithm, and I’m sorry to say that if you’re one of those people holding on to the belief that the last jobs to be automated by AIs will be those involving creativity, dexterity and empathy, then one of those is going to fall sooner than the others. That said, AI researchers are working on every one of those fields, so don’t think any of them will be safe havens for too long, and I’d suggest that people who say those kinds of skills are decades away should probably go and visit their local AI labs and get an update.

Anyway, back to the story.

 


 

“When placing a glass on the edge of a table, for example, humans will likely pause to consider how stable it is and whether it might fall,” said the researchers in a blog post. “On the basis of that imagined consequence we might re-adjust the glass to prevent it from falling and breaking. If our algorithms are to develop equally sophisticated behaviours that we take for granted, they too must have the capability to ‘imagine’ and reason about the future, and beyond that they must be able to construct a plan using this knowledge.”

While the new design thinking has already been quietly trialled in a number of the DeepMind AlphaGo bots that debuted recently and smashed their human opponents, the team go on to say that, because of the complexity of the real world, their new AI must operate “at a different level.”

To realise their goal the researchers combined several existing AI approaches, including reinforcement learning, where a system learns through trial and error, and deep learning, which is loosely modelled on the human brain. What they ended up with is a system that mixes trial and error with simulation capabilities, one that can learn about its environment and think before it acts.
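To make that combination a little more concrete, here’s a minimal sketch in Python of the general idea rather than DeepMind’s actual architecture: an agent that, before committing to an action, rolls a learned model of its environment forward a few imagined steps and picks the action whose simulated future looks best. The toy one-dimensional environment, the reward, and every name in it are illustrative assumptions.

```python
import random

# Illustrative sketch only, not DeepMind's implementation: pair trial-and-error
# action choice with short "imagined" rollouts through a learned model of the
# environment, and prefer the action whose imagined future scores best.

class LearnedModel:
    """Stand-in for a learned environment model that predicts what happens next."""
    def predict(self, state, action):
        next_state = state + action       # hypothetical dynamics: step left or right
        reward = -abs(next_state)         # hypothetical goal: stay close to zero
        return next_state, reward

def imagined_return(model, state, action, depth=3):
    """Roll the model forward a few steps to estimate an action's consequences."""
    total = 0.0
    for _ in range(depth):
        state, reward = model.predict(state, action)
        total += reward
        action = random.choice([-1, 0, 1])  # cheap random policy for the rollout
    return total

def choose_action(model, state, actions=(-1, 0, 1)):
    """'Think before acting': pick the action with the best imagined outcome."""
    return max(actions, key=lambda a: imagined_return(model, state, a))

if __name__ == "__main__":
    model, state = LearnedModel(), 4
    for step in range(6):
        action = choose_action(model, state)
        state, _ = model.predict(state, action)
        print(f"step {step}: action {action:+d}, state {state}")
```

In the real system the imagined rollouts come from learned neural network models and feed into a learned policy rather than a hand-written scoring rule, but the “imagine first, then act” loop is the same basic shape.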

 

[Video] Warning: Imagination at work
 

One of the ways they tested the new algorithms was with a 1980s video game called Sokoban, which you can see in the video above, where players have to push crates around to solve puzzles. Some moves can make a level unsolvable, so advance planning is needed, and, as is the usual practice, the AI wasn’t given the rules of the game beforehand.

The researchers found their new ‘imaginative’ AI solved 85 percent of the levels it was given, compared with 60 percent for AI agents using older approaches.

 


 

“The imagination-augmented agents outperform the imagination-less baselines considerably,” said the researchers. “They learn with less experience and are able to deal with the imperfections in modelling the environment.”

The team also noted a number of improvements in the new bots: they could handle gaps in their knowledge better, they were better at picking out useful information for their simulations, and as a result they could create better strategies. But it’s not just advance planning, it’s advance planning with extra creativity, so potential future actions can be combined or mixed up in different ways in order to identify the most promising paths forward.

 


 

Despite the success of DeepMind’s testing though, it’s still early days for the technique, and these games are still a long way from representing the complexity of the real world. That said, it’s a promising start, and if nothing else, one day soon we’ll be confident that our AI-infused robot overlords won’t go spilling their drinks all over the place.

“Further analysis and consideration is required to provide scalable solutions to rich model-based agents that can use their imaginations to reason about, and plan for, the future,” concluded the researchers.
