WHY THIS MATTERS IN BRIEF
Most online educational courses are text- and graphics-based, but now Udacity is using deepfake tech to automatically generate educational videos from the content.
Producing content for Massive Open Online Course (MOOC) platforms like Coursera and EdX might be academically rewarding, and potentially lucrative, but it's also hugely time consuming, particularly where videos are involved. So Udacity, in an ode to Soul Machines, who recently created "Will," the world's first avatar teacher who has already taught over 250,000 children about energy, has been looking into ways to get Artificial Intelligence (AI) to produce the videos automatically, something that would be a game changer in the academic world.
After all, professional-level lecture clips require not only a veritable studio's worth of equipment, but also significant resources to transfer, edit, and upload footage of each lesson. That's why research scientists at Udacity, an online learning platform with over 100,000 courses, are investigating a new machine learning framework that automatically generates lecture videos from audio narration alone. And, for now at least, the tech they're developing isn't a million miles away from other so-called synthetic content AI generators, like the ones I've discussed many times before that are being used to create DeepFakes and next generation Text to Video content, among many other things.
An example of the tech
They claim in a preprint paper ("LumièreNet: Lecture Video Synthesis from Audio") on Arxiv.org that their AI system, called LumièreNet, "can synthesise footage of any length by directly mapping between audio and corresponding visuals."
"In current video production an AI that semi (or fully) automates lecture video production at scale would be highly valuable to enable agile video content development (rather than reshooting each new video)," wrote the paper's co-authors. "To [this] end, we propose a new method to synthesise lecture videos from any length of audio narration: … A simple, modular, and fully neural network-based [AI] which produces an instructor's full pose lecture video given the audio narration input, which has not been addressed before from [a] deep learning perspective, as far as we know."
The researchers' model has a pose estimation component, not too dissimilar from Nvidia's latest GauGAN AI or the so-called full-body DeepFake tech that recently came out of Japan, which synthesises body figure images from video frames extracted from a training data set, chiefly by detecting and localising major body points to create detailed surface-based human body representations.
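To make the pose estimation idea a bit more concrete, here's a minimal sketch, in Python, of the kind of preprocessing such a component relies on: flattening per-frame body keypoints from an off-the-shelf pose estimator into a normalised pose vector that a downstream network could learn to predict. The joint count and the (x, y, confidence) layout are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch (not the paper's actual code) of turning per-frame body
# keypoints into a compact, normalised pose vector. The 18-joint,
# (x, y, confidence) layout is an assumption for illustration only.

import numpy as np

NUM_JOINTS = 18  # assumed COCO-style joint count

def normalise_pose(keypoints: np.ndarray, frame_w: int, frame_h: int) -> np.ndarray:
    """Flatten (18, 3) keypoints into one vector, scaling x/y to [0, 1]."""
    kp = keypoints.astype(np.float32).copy()
    kp[:, 0] /= frame_w   # x coordinates
    kp[:, 1] /= frame_h   # y coordinates
    return kp.reshape(-1)  # shape: (18 * 3,)

# Example: fake keypoints for one 1920x1080 frame
frame_keypoints = np.random.rand(NUM_JOINTS, 3) * [1920, 1080, 1]
pose_vector = normalise_pose(frame_keypoints, 1920, 1080)
print(pose_vector.shape)  # (54,)
```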
Meanwhile a second module in the model, a bidirectional long short-term memory (BLSTM) network that processes the sequence in both directions so that each output reflects the context both before and after it, takes audio features as input and attempts to learn the relationship between them and the corresponding visual elements.
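For readers who want a feel for what such a mapping looks like in practice, here's a minimal BLSTM sketch in PyTorch. The feature sizes (80-dimensional audio features, 54-dimensional pose vectors) and layer counts are illustrative assumptions, not the paper's actual configuration.

```python
# A minimal sketch of a bidirectional LSTM that maps a sequence of audio
# feature frames to per-frame pose vectors. All sizes are assumptions.

import torch
import torch.nn as nn

class AudioToPoseBLSTM(nn.Module):
    def __init__(self, audio_dim=80, hidden_dim=256, pose_dim=54):
        super().__init__()
        # bidirectional=True means each timestep sees past and future context
        self.blstm = nn.LSTM(audio_dim, hidden_dim, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, pose_dim)  # 2x for both directions

    def forward(self, audio_feats):          # (batch, time, audio_dim)
        out, _ = self.blstm(audio_feats)     # (batch, time, 2 * hidden_dim)
        return self.head(out)                # (batch, time, pose_dim)

# One forward pass on dummy data: 2 clips, 100 audio frames each
model = AudioToPoseBLSTM()
dummy_audio = torch.randn(2, 100, 80)
print(model(dummy_audio).shape)  # torch.Size([2, 100, 54])
```

Because the LSTM runs in both directions, each predicted pose frame can draw on the audio that comes after it as well as before it, which helps gestures line up with upcoming phrases.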
To test LumièreNet, the researchers filmed an instructor's lecture video for around eight hours at Udacity's in-house studio, which yielded roughly four hours of video and two narrations for training and validation. The researchers report that the trained AI system produces "convincing" clips with smooth body gestures and realistic hair, but note that its creations, two of which are here and here, likely won't fool most observers because the pose estimator can't capture fine details like eye motion, lips, hair, and clothing. Synthesised lecturers rarely blink, and they tend to move their mouths unnaturally. Worse, their eyes sometimes look in different directions and their hands always appear oddly blurry.
The team posits that the addition of "face keypoints" (i.e., fine details) might lead to better synthesis, and they note that, fortunately, their system's modular design allows each component to be trained and improved independently.
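To illustrate what that modularity might look like in code, here's a sketch of how the stages could be chained: audio features go through a pose-prediction module, and a separate image-generation module renders frames from the predicted poses. Every interface here is hypothetical; none of it is the paper's actual API.

```python
# A hypothetical sketch of a modular audio -> pose -> video pipeline.

import torch

def render_frames(pose_vectors: torch.Tensor) -> torch.Tensor:
    """Stub for the image-generation module: pose vectors -> RGB video frames."""
    batch, time, _ = pose_vectors.shape
    return torch.zeros(batch, time, 3, 256, 256)  # placeholder frames

def synthesise_lecture(audio_feats: torch.Tensor, audio_to_pose) -> torch.Tensor:
    poses = audio_to_pose(audio_feats)   # module 1: audio features -> pose codes
    return render_frames(poses)          # module 2: pose codes -> video frames

# Example with a trivial stand-in pose predictor
dummy_audio = torch.randn(1, 50, 80)             # 1 clip, 50 audio frames
toy_pose_net = torch.nn.Linear(80, 54)           # stand-in for a BLSTM predictor
video = synthesise_lecture(dummy_audio, toy_pose_net)
print(video.shape)                               # torch.Size([1, 50, 3, 256, 256])
```

Because the stages only talk to each other through the pose vectors, each one can be retrained or replaced, say, to add face keypoints, without touching the others, which is exactly the benefit the researchers point to.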
“[M]any future directions are feasible to explore,” wrote the researchers. “Even though our approach is developed with primary intents to support agile video content development, which is crucial in current online MOOC courses, we acknowledge there could be potential misuse of the technologies … We hope that our results will catalyse new developments of deep learning technologies for commercial video content production.”
Matthew Griffin, described as "The Adviser behind the Advisers" and a "Young Kurzweil," is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the "Codex of the Future" series.
Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew has an unparalleled ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society. Recognised for the past six years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future.
A rare talent, Matthew's recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world's largest organisations envision and ideate the future of their products, services, industries, and countries.
Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.