
AI robots learned how to play horrible football all by themselves

WHY THIS MATTERS IN BRIEF

We’re developing AI that can learn for itself, and that’s the real news from this story.

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

Recently we saw Artificial Intelligence (AI) beating humans in real sports such as drone racing and whatever this sport is … and now, a year on from when AI robots took on humans at boxing, a team of AI-powered robots developed by Google’s DeepMind has figured out how to play football. Like it’s that hard anyway …

 


 

The miniature humanoid robots, which are about knee height, were able to make tackles, score goals and easily recover from falls when tripped. In order to learn how to play, AI researchers first used DeepMind’s state-of-the-art MuJoCo physics engine to train virtual versions of the robots in decades of match simulations.
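For readers curious what training in simulation actually looks like, here is a minimal sketch of loading and stepping a MuJoCo model from Python. The toy single-joint model and random actions are purely illustrative stand-ins; DeepMind’s actual humanoid models and training code are not reproduced here.

```python
# Minimal MuJoCo sketch: load a (toy) model, apply actions, step the physics.
# The single-hinge "leg" below is an illustrative stand-in, not DeepMind's robot.
import mujoco
import numpy as np

TOY_XML = """
<mujoco>
  <worldbody>
    <body name="leg" pos="0 0 0.5">
      <joint name="hip" type="hinge" axis="0 1 0"/>
      <geom type="capsule" size="0.05" fromto="0 0 0 0 0 -0.4"/>
    </body>
  </worldbody>
  <actuator>
    <motor joint="hip" ctrlrange="-1 1"/>
  </actuator>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(TOY_XML)
data = mujoco.MjData(model)

for _ in range(1000):
    # A trained policy network would produce the action here; random torques
    # stand in for it so the loop runs on its own.
    data.ctrl[:] = np.random.uniform(-1.0, 1.0, size=model.nu)
    mujoco.mj_step(model, data)

print("hip angle after 1000 steps (rad):", float(data.qpos[0]))
```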

The simulated robots were rewarded if their movements led to improved performance, such as winning the ball from an opponent or scoring a goal.
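As a rough illustration of that reward signal, the sketch below combines sparse bonuses (scoring, winning the ball) with a dense term for moving the ball towards the opponent’s goal. The terms and weights are hypothetical, not DeepMind’s actual reward function.

```python
# Hedged sketch of a shaped football reward: sparse bonuses plus a dense
# "progress towards goal" term. All weights are illustrative.
import numpy as np

def shaped_reward(ball_pos, prev_ball_pos, goal_pos, scored, won_possession):
    reward = 0.0
    if scored:
        reward += 10.0          # large sparse bonus for scoring a goal
    if won_possession:
        reward += 1.0           # bonus for winning the ball from the opponent
    # dense shaping: reward for how much closer the ball got to the goal
    prev_dist = np.linalg.norm(np.asarray(goal_pos) - np.asarray(prev_ball_pos))
    dist = np.linalg.norm(np.asarray(goal_pos) - np.asarray(ball_pos))
    reward += 0.5 * (prev_dist - dist)
    return reward

# Example: the ball moved 0.3 m towards the goal and the agent won possession.
print(shaped_reward(ball_pos=[2.0, 0.0], prev_ball_pos=[1.7, 0.0],
                    goal_pos=[4.5, 0.0], scored=False, won_possession=True))
```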

 

It’s horrible but awesome!

 

Once they were sufficiently capable of performing the basic skills, DeepMind researchers then transferred the AI into real-life versions of the bipedal bots, which were able to play one-on-one games of football against each other with no additional training required.
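The article doesn’t say how this zero-shot sim-to-real transfer was achieved, but a commonly used ingredient in such work is domain randomization, where simulator parameters are perturbed between training episodes so the policy can’t overfit to one exact physics model. The sketch below shows what that might look like for a MuJoCo model; it is a generic illustration, not DeepMind’s recipe.

```python
# Generic domain-randomization sketch for a mujoco.MjModel (such as the toy
# model in the earlier sketch). Not DeepMind's actual sim-to-real method.
import numpy as np

def snapshot_nominal(model):
    """Record nominal values once, so repeated randomization does not drift."""
    return {
        "friction": model.geom_friction[:, 0].copy(),
        "mass": model.body_mass.copy(),
        "gain": model.actuator_gainprm[:, 0].copy(),
    }

def randomize_physics(model, nominal, rng):
    """Perturb friction, mass and actuator gains around their nominal values."""
    model.geom_friction[:, 0] = nominal["friction"] * rng.uniform(0.8, 1.2, model.ngeom)
    model.body_mass[:] = nominal["mass"] * rng.uniform(0.9, 1.1, model.nbody)
    model.actuator_gainprm[:, 0] = nominal["gain"] * rng.uniform(0.9, 1.1, model.nu)

# Usage: snapshot once, then randomize at the start of every training episode.
#   nominal = snapshot_nominal(model)
#   rng = np.random.default_rng(0)
#   for episode in range(num_episodes):
#       randomize_physics(model, nominal, rng)
```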

 


 

“The trained soccer players exhibit robust and dynamic movement skills, such as rapid fall recovery, walking, turning, kicking and more,” DeepMind noted in a blog post.

“The agents also developed a basic strategic understanding of the game, and learned, for instance, to anticipate ball movements and to block opponent shots.

“Although the robots are inherently fragile, minor hardware modifications, together with basic regularisation of the behaviour during training led the robots to learn safe and effective movements while still performing in a dynamic and agile way.”
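The “regularisation of the behaviour” mentioned in that quote is often implemented as a penalty added to the task reward, discouraging violent torques and joint speeds so the learned movements stay hardware-friendly. The sketch below is one plausible form; the cost weights are purely illustrative and not taken from DeepMind’s paper.

```python
# Hedged sketch of behaviour regularization: subtract an "effort" penalty from
# the task reward so the policy prefers smooth, safe movements.
import numpy as np

def regularized_reward(task_reward, joint_torques, joint_velocities,
                       torque_cost=1e-3, velocity_cost=1e-4):
    penalty = (torque_cost * np.sum(np.square(joint_torques))
               + velocity_cost * np.sum(np.square(joint_velocities)))
    return task_reward - penalty

# Example: a goal-scoring step (reward 10) achieved with fairly aggressive motion.
print(regularized_reward(10.0,
                         joint_torques=np.full(20, 3.0),
                         joint_velocities=np.full(20, 5.0)))
```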

 


 

A paper detailing the research, titled ‘Learning agile soccer skills for a bipedal robot with deep reinforcement learning’, is currently under peer review.

Previous DeepMind research on football-playing AI has used different team set-ups, increasing the number of players in order to teach simulated humanoids how to work as a team.

The researchers say the work will not only advance coordination between AI systems, AKA Swarm AI, but also offer new pathways towards building Artificial General Intelligence (AGI) that is of an equivalent or superior level to humans.
