Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Robots that can autonomously identify and skilfully manipulate objects without any prior training could revolutionise the robotics industry.
Human dexterity is an impressive skill that relies on both our body’s fine motor control and our highly efficient visual acuity, and despite recent advances from companies like OpenAI, the AI lab co-founded by Elon Musk, which recently created a robot hand that was as dextrous as a human’s when manipulating a cube, researchers are still finding it difficult to create truly dextrous robots that can tackle a wide range of situations and use cases without a huge amount of training first. OpenAI’s robot hand, for example, was trained using a staggering 800 years’ worth of AI simulations…
While some breakthroughs in recent years have seen the creation of robots capable of making basic distinctions between objects, and then fumbling around to pick them up, advances in the sector overall have been slow and rudimentary.
Now, though, a team at MIT has breathed new life into the field by building a new robot system that lets robots visually recognise objects, importantly without any form of prior training, and do so well enough that they can then complete a multitude of different tasks, such as picking those objects up, faster and more accurately than ever before. And it’s a breakthrough that could have huge implications for robots everywhere.
The researchers have called this crucial development in machine vision Dense Object Nets, or DON for short. DON works by “analysing objects as collections of points on a visual roadmap,” a process that allows the system to understand all of an object’s different components, such as the tongue of a shoe, even if it has never seen that object before.
This means DON can autonomously do very specific tasks such as grab an object from just one of its corners or parts – an ability previous systems lacked.
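To make the “collections of points” idea concrete, here is a minimal sketch of the kind of lookup such a system enables, assuming, as a simplification, that every pixel of an image has already been mapped to a descriptor vector: a point annotated once in a reference image, say the tongue of a shoe, is located in a new image by searching for the nearest descriptor. The function and array names are illustrative, not taken from MIT’s code.

```python
import numpy as np

def match_point(ref_descriptors, target_descriptors, ref_pixel):
    """Find the pixel in the target image whose descriptor is closest
    (Euclidean distance) to the descriptor at ref_pixel in the reference
    image. Descriptor maps have shape (H, W, D)."""
    d = ref_descriptors[ref_pixel]            # (D,) descriptor of the chosen point
    dist = np.linalg.norm(target_descriptors - d, axis=-1)  # (H, W) distance map
    idx = np.unravel_index(np.argmin(dist), dist.shape)
    return tuple(int(i) for i in idx)

# Toy demonstration: embed one distinctive descriptor in each "image"
# and check that matching recovers its new location.
rng = np.random.default_rng(0)
ref = rng.normal(size=(8, 8, 3))
tgt = rng.normal(size=(8, 8, 3))
landmark = np.array([10.0, 10.0, 10.0])       # stand-in for e.g. the shoe tongue
ref[2, 3] = landmark                          # annotated point in the reference
tgt[5, 6] = landmark                          # same point, new pose
print(match_point(ref, tgt, (2, 3)))          # → (5, 6)
```

Because the search keys on the descriptor rather than on position, the same annotated point can be found across different poses of the object, which is exactly the part-level ability the previous systems lacked.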
“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said paper co-author and PhD student Lucas Manuelli in a statement.
The fact that DON has overcome that issue may eventually make the system invaluable on the manufacturing and picking lines of retail giants like Amazon, which are continually looking for smarter robots to help them pick goods faster, or in the agriculture industry, where robots are increasingly being used to harvest food at incredible speed. However, those are just two potential future applications, and as the system continues to evolve it could very well have near unlimited uses.
Since DON doesn’t require data to be labelled by humans, the system can learn and even supervise itself independently. One example of the many tasks DON could one day excel at would be cleaning a messy house, or even sorting out your Lego collection, said the researchers.
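That self-supervision can be illustrated with a small sketch of a pixelwise contrastive objective: pairs of pixels the robot already knows correspond, for instance because its own 3D reconstruction of a scene links two views, are pulled towards the same descriptor, while unrelated pixels are pushed at least a margin apart, with no human labelling involved. This is a simplified stand-in written for illustration, not MIT’s training code; the names and the margin value are assumptions.

```python
import numpy as np

def pixelwise_contrastive_loss(desc_a, desc_b, matches, non_matches, margin=0.5):
    """Pull descriptors of corresponding pixels together; push descriptors
    of non-corresponding pixels at least `margin` apart (hinge loss).
    desc_a, desc_b : (H, W, D) per-pixel descriptor maps of two images.
    matches, non_matches : lists of ((row_a, col_a), (row_b, col_b)) pairs."""
    match_loss = sum(
        float(np.sum((desc_a[pa] - desc_b[pb]) ** 2))
        for pa, pb in matches
    ) / len(matches)
    non_match_loss = sum(
        max(0.0, margin - float(np.linalg.norm(desc_a[pa] - desc_b[pb]))) ** 2
        for pa, pb in non_matches
    ) / len(non_matches)
    return match_loss + non_match_loss

# Toy check: identical descriptors at the matched pair and well-separated
# descriptors at the non-matched pair, so both terms vanish.
a = np.zeros((4, 4, 2))
b = np.zeros((4, 4, 2))
a[0, 0] = b[1, 1] = [1.0, 0.0]   # a known corresponding pair
a[2, 2] = [0.0, 5.0]             # far from b[3, 3] == [0, 0]
loss = pixelwise_contrastive_loss(a, b, [((0, 0), (1, 1))], [((2, 2), (3, 3))])
print(loss)  # → 0.0
```

Minimising a loss of this shape over many image pairs is what lets a descriptor network organise itself without human labels: the supervision signal comes entirely from the geometry the robot observes.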
“As narrow AI applications broaden to consume more human tasks we can imagine a future where a humanoid robot will cook the dinner, clean the kitchen, do the dishes, and fold the laundry,” said Ben Taylor, Chief AI Officer and co-founder of Ziff.AI, who wasn’t involved in the research. “These types of tasks, which felt like science fiction just a few years ago, are moving closer to becoming a reality. The real question I have, is what will we do with the free time?”
Ah, that old chestnut again – where have I heard that before I wonder?
Meanwhile, Andrei Luchici, Director of Dacian Consulting, believes the system may mark the beginning of a revolutionary trend for the industry.
“Previous machine vision systems, albeit very powerful, only recognised what objects were present in an image but [unlike this system] were not able to act on that information,” said Luchici. “DON solves that problem, which means that we can now start to build increasingly complex systems of smart agents that can teach themselves how to recognise and interact with different objects. I believe [MIT’s] results are going to start a new wave of machine vision applications, from robotic manipulation and process control to new intelligent automation solutions.”