
MIT’s new AI can create videos of the future by looking at a photo



The human brain is good at looking at a scene and figuring out what happens next, now an AI can look at a photo and create a video of the future from it and that opens up some interesting applications.



Lots of devices can take a snapshot that records what has just happened, but what if your device could capture and “record” situations that were about to happen? It’s not as far-fetched as you might think. MIT CSAIL researchers have created a deep learning algorithm that generates short videos showing what it expects to happen next. The team trained the system on more than two million videos, and it creates new footage by pitting two neural networks against each other.




One neural network generated the scene by figuring out which objects were moving in the still frame, while the other acted as a quality check, trying to determine whether the resulting videos were real or simulated. A generated video was deemed a success when it fooled the checker into thinking it was real footage.
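The generator-versus-checker setup described above is the core of a generative adversarial network. The following is a minimal NumPy sketch of that structure only, not CSAIL’s actual model; the frame sizes, the linear layers, and all parameter shapes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy shapes: a "video" is 8 future frames of
# 16x16 grayscale pixels, flattened into one vector.
FRAME = 16 * 16
FRAMES = 8

def generator(still_frame, W):
    """Map a still photo to a predicted video (one linear layer + tanh)."""
    return np.tanh(still_frame @ W)           # shape: (FRAMES * FRAME,)

def discriminator(video, w, b):
    """Score how 'real' a video looks, as a probability in (0, 1)."""
    logit = video @ w + b
    return 1.0 / (1.0 + np.exp(-logit))       # sigmoid

# Random parameters, just to exercise the two-network structure.
W = rng.normal(0, 0.01, size=(FRAME, FRAMES * FRAME))
w = rng.normal(0, 0.01, size=FRAMES * FRAME)
b = 0.0

still = rng.normal(size=FRAME)                # the input photo
fake_video = generator(still, W)
real_video = rng.normal(size=FRAMES * FRAME)  # stand-in for training footage

# Adversarial objectives: the discriminator wants high scores on real
# footage and low scores on generated footage; the generator wants the
# opposite. A generated video "succeeds" when the checker is fooled.
d_loss = (-np.log(discriminator(real_video, w, b))
          - np.log(1.0 - discriminator(fake_video, w, b)))
g_loss = -np.log(discriminator(fake_video, w, b))

fooled = discriminator(fake_video, w, b) > 0.5
```

In real training these two losses would be minimized in alternation with gradient descent, each network improving against the other until the generated clips are hard to tell from real ones.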

As you’d expect, the technology has its limits. At the moment it can’t generate videos longer than about 1.5 seconds, and the results aren’t mind-blowingly realistic – the system doesn’t understand that objects persist as they move, and it tends to exaggerate their sizes. However, it’s good enough to predict relatively complicated scenes like waves on a beach or people walking on grass.



If CSAIL can extend the predictions and make them more realistic, though, the work could have a far-reaching impact – especially as AIs become increasingly able to interpret and understand what it is they’re actually seeing, not just the physics of movement.




Self-driving cars could predict where vehicles and pedestrians are going with greater accuracy. If an AI could tell just from a snapshot of someone’s posture and limb position that the person might be drunk, for example, it could feed that information back to the car so it takes appropriate action to avoid an accident. Meanwhile, security cameras could flag mismatches between the footage they see and the footage they expect, and the technology could also be used for relatively mundane tasks like animating still images or building better video compression. More broadly, predicting the future helps an AI understand what’s going on right now, and that could help in just about any situation where computer vision matters.
