
Deepfakes and AI-generated disinformation confuse AI trading systems


WHY THIS MATTERS IN BRIEF

Deepfakes and fake data, once ingested and analysed by AI trading systems, could send them into a spin and get them to buy and sell stocks they shouldn’t.

 


The other week a deepfake picture of an apparent explosion at the Pentagon briefly sent the S&P 500 down by 0.3% as Artificial Intelligence (AI) quantitative traders, or quants as they’re known, quickly ditched stocks in response.

 


 

While trading firms’ algorithms have grown increasingly adept at filtering out misinformation, the threat landscape is turning into a high-stakes cat-and-mouse game that few expect to be resolved in the near term.

 

Predicted: The Future of Trust and Quants, 2017 keynote by keynote speaker Matthew Griffin

 

As RavenPack data scientist Peter Hafez puts it: “We see quants facing two hurdles: fake images that may fool a journalist, and reports of fake images that fool the algorithm itself.”

Combating deepfakes at scale will likely require significant technological investment to dampen trading mayhem, especially for high-speed trading firms such as quant funds, which make their profits from minute differences in the prices of different financial instruments.

 


 

In the meantime, some quants are turning to data providers that aggregate news from different sources into a single sentiment score, which can be just as risky if lots of fake news gets aggregated in. Others are trading on longer-term trends rather than the sharp price movements driven by social media and breaking news stories, while others still are changing their algorithms to cross-check sources for validity, as the sketch below illustrates.
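To make the cross-checking idea concrete, here’s a minimal, hypothetical Python sketch of what source corroboration might look like before a sentiment score is allowed to move a trading signal. The function, threshold, and data layout are illustrative assumptions, not any real vendor’s pipeline:

```python
# Hypothetical sketch: aggregate per-source sentiment into one score,
# counting a story only if enough independent sources corroborate it.
from collections import defaultdict

CORROBORATION_THRESHOLD = 2  # assumed: min independent sources per story

def aggregate_sentiment(reports):
    """reports: iterable of (story_id, source, sentiment) tuples,
    where sentiment is a float in [-1.0, 1.0]."""
    by_story = defaultdict(dict)
    for story_id, source, sentiment in reports:
        by_story[story_id][source] = sentiment  # one vote per source

    scores = []
    for votes in by_story.values():
        if len(votes) < CORROBORATION_THRESHOLD:
            continue  # uncorroborated story: treated as potential fake news
        scores.append(sum(votes.values()) / len(votes))

    # Overall signal: mean of corroborated story scores, 0.0 if none survive
    return sum(scores) / len(scores) if scores else 0.0

# The "explosion" story comes from a single unverified account, so it is
# dropped; only the corroborated earnings story moves the score.
reports = [
    ("pentagon-explosion", "unverified-account", -0.9),
    ("acme-earnings-beat", "wire-service-a", 0.6),
    ("acme-earnings-beat", "wire-service-b", 0.5),
]
print(aggregate_sentiment(reports))  # ≈ 0.55
```

The point of the design is that a single uncorroborated post, however dramatic, contributes nothing to the final score, which is exactly the failure mode the Pentagon deepfake exploited.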

A recent UN report described how collusion between the private and public sectors could drive disinformation campaigns that advance political or financial agendas, and specifically warned the public that AI-generated deepfakes can stoke civil unrest.

According to the University at Buffalo Media Forensics Lab, generating a deepfake requires only about 500 images or ten seconds of video. However, as MIT Technology Review notes, the prevailing view is that AI works better with more data.

 


 

OpenAI’s GPT-2 model was trained on 40 Gigabytes of data, while GPT-3 used 570 Gigabytes. The firm has not disclosed how much data its GPT-4 model uses, but experts believe it’s in the 2 Petabyte range.

More data can also mean more realistic deepfakes. A recent European Union draft bill would require companies to reveal the data sources their AI algorithms use to generate content.

But as the problem of deepfakes grows, especially as criminals use them to manipulate markets for financial gain, detection tools are emerging: Buffalo’s forensics lab detects deepfakes by identifying abnormal eye movements, while Intel’s FakeCatcher tool analyses human faces for the signs of authentic human blood flow.

 


 

Google’s forthcoming metadata and watermarking tools can also help identify synthetic content. Metadata appended to a file provides context for the content, while watermarking embeds resilient seed data into the content itself that can survive modest edits.
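As a rough illustration of the metadata half of that approach, here’s a short Python sketch using the Pillow library to stamp provenance information into a PNG’s text chunks and read it back downstream. The key names are illustrative assumptions rather than any official standard (schemes like C2PA define far richer, signed records), and this is not Google’s tooling:

```python
# Minimal sketch: tag a generated image with provenance metadata in a
# PNG text chunk, then read it back. Key names are illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Write: attach provenance metadata when saving the image
img = Image.new("RGB", (64, 64), color="gray")  # stand-in for an AI-generated image
meta = PngInfo()
meta.add_text("generator", "example-image-model")  # assumed key name
meta.add_text("synthetic", "true")                 # assumed key name
img.save("generated.png", pnginfo=meta)

# Read: a downstream pipeline can check the tags before trusting the image
loaded = Image.open("generated.png")
print(loaded.text)  # {'generator': 'example-image-model', 'synthetic': 'true'}
```

The catch, and the reason watermarking matters, is that metadata like this is lost with a simple screenshot or re-save, whereas a watermark embedded in the pixels themselves is designed to survive such edits.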
