Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Coding robots to do things is so yesterday – training them in simulation is the future and it’s fast.
Recently I showed how training a robotic hand in simulation – cramming hundreds of years of “learning” into mere days – produced the world’s most dexterous robotic hand, one that can solve a Rubik’s cube single-handed in under a minute. And that’s impressive – almost as impressive as the fact that it’s now so good it could replace human pickers’ jobs in warehouses, and even one day help with robo-surgeries. All of which is just the beginning.
Now, in a lab at Berkeley, a robot called Cassie has taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.
And, as it turns out, it fared pretty darn well. With no further fine-tuning, the robot, which is basically just a pair of legs, was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces. And the innovation could have future applications in everything from general-purpose robots to exosuits.
It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied to two-legged robots.
For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, running, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.
No coding required …
This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.
But in these cases the companies behind the robots still have to meticulously hand-program and choreograph their robots’ movements. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.
In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is where, it’s hoped, machine learning can help.
Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Put simply, it’s modelled on the way we learn: touch the stove, get burned, don’t touch the thing again, and so on.
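The stove analogy maps onto a concrete update rule. Below is a minimal tabular Q-learning sketch on a toy five-state corridor – an illustration only, and nothing like the large neural-network systems DeepMind or the Berkeley team actually use. The corridor, the reward, and all the hyperparameters are assumptions made up for the example: the agent tries actions, gets rewards, and nudges its value estimates until “move right” emerges as the learned behaviour.

```python
import random

# Toy problem (an assumption for illustration): a five-state corridor,
# states 0..4, with a reward only for reaching the last state.
N_STATES = 5
ACTIONS = [-1, +1]               # step left or step right
ALPHA, GAMMA = 0.5, 0.9          # learning rate and discount factor
EPSILON = 0.3                    # generous exploration so the toy converges fast

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: reward 1.0 only on reaching the last state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # The core Q-learning update: nudge the estimate toward the
        # reward plus the discounted value of the best next action.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy action in every non-terminal state is +1 (move right).
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The “touch the stove” loop is the `while not done` body: act, observe the consequence, and fold it back into the value table. Real robot controllers replace the table with a neural network and the corridor with a physics simulator, but the feedback loop is the same.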
In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But skills learned in simulation don’t always translate to the real world. Subtle differences between the two can literally trip up a fledgling robot as it tries out its sim skills for the first time.
To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
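In code, that pipeline has roughly the following shape. Everything here is a hypothetical stand-in – a one-parameter “policy”, a quadratic reward, random-search “training” – and not the Berkeley team’s actual setup, which trains a neural-network controller on Cassie’s full dynamics. The point is the workflow: optimise in a cheap simulator, then gate deployment on a second, more faithful one.

```python
import random

class ToySim:
    """Hypothetical stand-in for a physics simulator: reward peaks (at 0)
    when the single policy parameter matches the sim's target behaviour."""
    def __init__(self, target):
        self.target = target

    def rollout(self, param):
        return -(param - self.target) ** 2

# Two simulators with subtly different dynamics, mirroring the article's
# fast training environment and its higher-fidelity validation twin.
cheap_sim = ToySim(target=1.00)
hifi_sim = ToySim(target=1.05)

def train(sim, iterations=2000, seed=0):
    """Random-search 'training': keep whichever parameter scores best."""
    rng = random.Random(seed)
    best_param, best_reward = 0.0, sim.rollout(0.0)
    for _ in range(iterations):
        candidate = rng.uniform(-2.0, 2.0)
        reward = sim.rollout(candidate)
        if reward > best_reward:
            best_param, best_reward = candidate, reward
    return best_param

# Stage 1: learn in the cheap simulator.
policy = train(cheap_sim)

# Stage 2: gate deployment on the higher-fidelity simulator; only a policy
# that still scores well under more realistic dynamics goes to hardware.
validation_reward = hifi_sim.rollout(policy)
ready_for_robot = validation_reward > -0.01
print(ready_for_robot)
```

The second stage is what makes the sim-to-real jump less of a leap of faith: a policy that has overfit to quirks of the training simulator fails the validation gate before it ever gets the chance to trip up a physical robot.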
Once the algorithm was good enough, it graduated to Cassie. And amazingly, it didn’t need further polishing. Put another way, when it was born into the physical world it already knew how to walk just fine. It was also quite robust: the researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.
Other labs have also been hard at work applying machine learning to robotics. Last year Google used reinforcement learning to train a simpler four-legged robot to walk by itself. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. Then, new approaches – like this one from MIT which offers continuous learning beyond training – may also move the dial.
It’s still early days, though, and while there’s no telling exactly when machine learning will exceed more traditional methods, based on these results it’s likely to be sooner rather than later. And in the meantime, Boston Dynamics robots, for example, are already helping out in the real world, from assisting the NYPD to cleaning up nuclear plants …