
You can now try the “world’s most dangerous” fake news AI for yourself


WHY THIS MATTERS IN BRIEF

A basic version of the “world’s most advanced, and dangerous, fake news generator” is now online for you to try for yourself.

 


This spring, the AI research lab OpenAI, co-founded by Elon Musk, made a splash with an Artificial Intelligence (AI) system that generates text – a system so good at producing convincing, realistic articles, poems, and fake news that the secrets behind how it works were deemed “too dangerous to release.”

 


 

Now though, a couple of months on, the public has a chance to give it a try – at least a limited, dumbed-down version of it – and I’d strongly suggest you give it a whirl, as Ollie, an English teacher in the UK, recently did when I showed it off to teachers at a school near Reading during one of my Future of Education presentations.

Initially, OpenAI released an extremely restricted version of the system, citing concerns that it would be abused. Now they’ve released a more powerful version – although one still significantly limited compared to the full model – and you can check it out for yourself.

The way it works is amazingly simple. A user gives the system, called GPT-2, a prompt — a few words, a snippet of text, a passage from an article, what have you. The system has been trained on data drawn from the internet to “predict” the next words of the passage — meaning the AI will turn your prompt into a news article, a short story, or a poem.
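GPT-2 itself is a large neural network, but the core loop it runs – predict a likely next word, append it, repeat – can be sketched with a toy bigram model. This is purely an illustrative sketch, not how GPT-2 is actually implemented; the function names and the miniature “training corpus” below are invented for the example.

```python
import random

def train_bigrams(corpus: str) -> dict:
    """Build a table mapping each word to the words observed to follow it."""
    words = corpus.split()
    table = {}
    for current, nxt in zip(words, words[1:]):
        table.setdefault(current, []).append(nxt)
    return table

def generate(table: dict, prompt: str, length: int = 10, seed: int = 0) -> str:
    """Continue the prompt by repeatedly 'predicting' a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = table.get(out[-1])
        if not candidates:      # dead end: no known continuation
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# A tiny stand-in for "data drawn from the internet".
corpus = ("the storm hit the city overnight and the city woke to deep snow "
          "and the storm moved east across the region")
table = train_bigrams(corpus)
print(generate(table, "the storm", length=8))
```

Feed it a prompt like “the storm” and it rambles onward in the style of its training text. GPT-2 does the same thing, only with a vastly richer model of which words tend to follow which – which is why its continuations read like real articles rather than word salad.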

 


 

The results can be quite sophisticated. When I tested it, I fed GPT-2 the beginnings of stories about snowstorms in the Northwest, about college students, and about GPT-2 itself. The system then took it from there, inventing imaginary scientists to quote and imaginary organizations to cite, and it even enthused about the rapid progress of AI.

OpenAI initially decided not to release the full system to the public, out of fears it could be used by malicious actors to swamp us all with fake news. Instead, they released smaller and less capable versions — a staggered rollout that OpenAI hopes will allow researchers to explore the system and learn from it, while still keeping the potential risks at bay.

 


 

AI is getting more sophisticated — and that’s a big deal. It has the potential to help us tackle some of the biggest problems of our day, from drug development to clean energy. But researchers worry it could have unintended consequences, increase inequality, and, as systems grow powerful enough, even pose real danger. We’re still figuring out how to balance AI’s benefits against its potential hazards.
