When you don’t have real data to work with, why not create some? That’s synthetic data …



Deep learning has pushed the capabilities of Artificial Intelligence (AI) to new levels, but there are still some kinks to straighten out, such as AI bias and the question of how best to train AI models to handle safety-critical applications such as self-driving cars.

If an AI recommendation engine gets its predictions wrong and puts a strange advert in your browser window, you might raise an eyebrow, but no long-term damage is done. Things are very different, though, when algorithms get behind the wheel and encounter something they’ve never seen before.




Unsurprisingly, rare events, or edge cases as they’re known, represent an especially tricky problem for self-driving car developers, so many of them are now using synthetic data – “fake” data generated by AI that’s based on lifelike simulations of real-world events – to help them train their AIs better and faster.

Examples include solutions such as NVIDIA’s Omniverse Replicator, which I’ve spoken about before, that lets developers augment real-world environments with digitally rendered scenarios such as a layer of thick snow covering the road or obscuring street signs. Another illustration of its capabilities is the ability to digitally simulate a child chasing a ball into the road.
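The underlying idea – take a real or rendered scene and programmatically layer in rare conditions – can be sketched in miniature. The snippet below is a toy stand-in, not Omniverse Replicator’s actual API: it scatters white “snow” pixels over an image array, the crude cousin of the physically based weather rendering these tools do at scale.

```python
import numpy as np

def add_synthetic_snow(image, density=0.02, seed=None):
    """Overlay random white 'snow' pixels on an RGB image array.

    A toy stand-in for the weather augmentation that tools like
    Omniverse Replicator perform with full physically based rendering.
    """
    rng = np.random.default_rng(seed)
    out = image.copy()
    # Boolean mask over pixel positions; each pixel is "hit" with
    # probability `density`.
    mask = rng.random(image.shape[:2]) < density
    out[mask] = 255  # paint hit pixels white across all channels
    return out

# A flat grey 64x64 array standing in for a rendered road scene
scene = np.full((64, 64, 3), 90, dtype=np.uint8)
snowy = add_synthetic_snow(scene, density=0.05, seed=0)
```

The same pattern – copy the clean scene, apply a randomised perturbation, keep the original labels – is what lets developers multiply one captured scene into thousands of training variants.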

Developers could, of course, use crash test dummies and various props to achieve the same thing, but the time and expense of training an AI this way generally outweighs any benefits. Plus, if things went wrong, you’d risk damaging the vehicle and its sensors, whereas in a digitally simulated environment everything can be simply refreshed and rerun if things don’t go according to plan.




Elsewhere, firms such as Synthesis AI have shown how synthetic data can be used to test the effectiveness of vehicle driver safety monitoring systems. These tools track the driver’s face to identify signs of drowsiness or distraction, and their output can be linked to Advanced Driver Assistance Systems (ADAS), for example to prime pre-crash mechanisms if the safety monitoring alerts fail to trigger a response from the driver.

Naturally, developers wouldn’t ask a test driver to fall asleep at the wheel on purpose, as a vehicle speeds along the road, just so they could put a facial detection algorithm and its accompanying mitigations to the test.

The result of Synthesis AI’s newest work is a service called FaceAPI. The tool allows users to create millions of unique 3D driver models with different facial expressions. “FaceAPI is already able to produce a wide variety of emotions, including, of course, closed eyes and drowsiness,” write the creators. Now, expanding on the capabilities of the synthetic data-generating software, the model can also represent a driver looking down at their phone or turning to talk to a passenger rather than focusing on the road ahead.
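The workflow this enables – generating balanced, perfectly labelled examples across driver states – can be sketched generically. The state names and feature parameters below are hypothetical illustrations, not Synthesis AI’s actual schema; in a real pipeline the “features” would be rendered face images rather than parameter dictionaries.

```python
import random

# Hypothetical driver states a synthetic-face generator might label
DRIVER_STATES = ["attentive", "eyes_closed", "looking_at_phone", "talking_to_passenger"]

def generate_synthetic_dataset(n, seed=0):
    """Return n (features, label) pairs with balanced driver states.

    Because the data is generated rather than captured, every sample
    comes with a perfect ground-truth label for free.
    """
    rng = random.Random(seed)
    samples = []
    for i in range(n):
        label = DRIVER_STATES[i % len(DRIVER_STATES)]  # balanced label cycle
        features = {
            # Randomised head pose so the detector can't overfit one angle
            "head_yaw_deg": rng.uniform(-60, 60),
            # Closed-eye samples get a fixed low eye-openness value
            "eye_openness": 0.05 if label == "eyes_closed" else rng.uniform(0.5, 1.0),
        }
        samples.append((features, label))
    return samples

dataset = generate_synthetic_dataset(8)
```

The point of the sketch is the free, exact labelling: class balance and ground truth are byproducts of generation, whereas real-world capture would require expensive manual annotation of every frame.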




Undoubtedly, the availability of realistic synthetic data for AI training can give firms a helping hand in entering markets where competitors’ large datasets would otherwise present a high barrier to entry. Making it straightforward for start-ups to generate useful AI training sets from synthetic data gives newer companies the capacity to quickly build momentum without needing to invest large amounts of capital.

Synthetic data also goes beyond recreating scenarios that would be problematic in the real world: it lets developers dig into all manner of areas where training AIs would ordinarily be expensive, time-consuming, or both.

About the author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
