Matthew Griffin, described as "The Adviser behind the Advisers" and a "Young Kurzweil," is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the years 2020 and 2070, and is an award-winning futurist and author of the "Codex of the Future" series. Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew's ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew's recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world's largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
- The human brain is good at looking at a scene and figuring out what happens next. Now an AI can look at a photo and generate a video of what it expects to happen, and that opens up some interesting applications
Lots of devices can take a snapshot that records what has just happened, but what if your device could capture and "record" situations that were about to happen? It's not as far-fetched as you might think. MIT CSAIL researchers have created a deep learning algorithm that can generate videos showing what it expects to happen next. The team trained the new system on over two million videos, and the resulting system created new footage by pitting two neural networks against each other.
One neural network generated the scene by figuring out which objects are moving in still frames, while the other performed a quality check, trying to determine whether the resulting videos were real or simulated. The artificial video was deemed a success when the AI checker was fooled into thinking it was real footage.
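The adversarial setup described above can be sketched in code. This is a minimal illustration, not CSAIL's actual model: a toy generator expands one still frame into a short clip, a toy discriminator scores clips as real or generated, and each network trains against the other. All shapes, layer sizes, and hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch of the two-network (adversarial) training described in
# the article. Architecture, resolution, and frame count are assumptions.
import torch
import torch.nn as nn

FRAMES = 8   # frames in a short generated clip (assumption)
H = W = 16   # tiny resolution so the sketch runs quickly

class Generator(nn.Module):
    """Expands one still frame (3, H, W) into a clip (FRAMES, 3, H, W)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * H * W, 64), nn.ReLU(),
            nn.Linear(64, FRAMES * 3 * H * W), nn.Tanh(),
        )
    def forward(self, still):
        return self.net(still).view(-1, FRAMES, 3, H, W)

class Discriminator(nn.Module):
    """Scores a clip: estimated probability that it is real footage."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(FRAMES * 3 * H * W, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )
    def forward(self, clip):
        return self.net(clip)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

still = torch.rand(4, 3, H, W)              # batch of still frames
real_clip = torch.rand(4, FRAMES, 3, H, W)  # stand-in for real video data

# Discriminator step: learn to label real clips real and generated clips fake.
fake_clip = G(still).detach()
loss_d = (bce(D(real_clip), torch.ones(4, 1)) +
          bce(D(fake_clip), torch.zeros(4, 1)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: a generated clip "succeeds" when the discriminator is fooled
# into scoring it as real.
loss_g = bce(D(G(still)), torch.ones(4, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In practice the generator and discriminator would be convolutional networks trained over millions of clips; the point of the sketch is only the feedback loop, in which each network improves by exploiting the other's weaknesses.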
As you'd expect, the technology has its limits. At the moment it can't generate videos longer than 1.5 seconds, and the results aren't mind-blowingly realistic – it isn't aware that objects persist when they move and it tends to exaggerate their sizes. However, it's good enough to predict relatively complicated scenes, like waves on a beach or people walking on grass.
If CSAIL can extend the predictions and make them more realistic, though, the work could have a far-reaching impact – especially as AIs are increasingly able to interpret and understand what it is they're actually seeing, not just the physics of movement.
Self-driving cars could predict where vehicles and pedestrians are going with greater accuracy. For example, if an AI could figure out, just by looking at a snapshot of someone's posture and limb position, that the person might be drunk, it could feed that information back to the car so it takes appropriate action to avoid an accident. Meanwhile, security cameras could spot mismatches in footage based on what they expect to see, and the technology could also be used for relatively mundane tasks like adding animation to still images or building better video compression. And, regardless of the application, predicting the future will help AI understand what's going on right now – and that could help in just about any setting where computer vision matters.