OpenAI released the full version of its “dangerous” fake news-spewing AI

WHY THIS MATTERS IN BRIEF

OpenAI has created a text generator so powerful that people fear it will be the ultimate fake news machine, and now the full version has been released.

 

In February OpenAI, the Artificial Intelligence (AI) company backed by Elon Musk, catapulted itself into the public eye when it produced a text-based language model so good at generating fake text and fake news that the organization decided not to release it, although a smaller version of the final product was made available for people to try. Some within the AI research community argued the move was a smart precaution; others wrote it off as a publicity stunt. The lab itself, a small San Francisco-based for-profit that seeks to create Artificial General Intelligence (AGI), and which recently snagged over $1 billion from Microsoft to help it get there, has firmly held that its work is an important experiment in how to handle high-stakes research. And this is among the highest-stakes research of our time.

 

RELATED
Deepfakes and AI generated disinformation confuse AI trading systems

 

Now, just over six months later, the policy team has published a paper examining the impact of the decision so far. Alongside it, the lab has also released to the public a larger version of its model, known as GPT-2, one that’s half the size of the full version.

In May, a few months after GPT-2’s debut, OpenAI revised its stance on withholding the full code to what it calls a “staged release”: the staggered publication of incrementally larger versions of the model in a ramp-up to the full one. In February, as mentioned above, it published a version of the model that was merely 8 percent of the size of the full one. It then published another, roughly a quarter of the size of the full version, before the most recent release. During this process, it also partnered with selected research institutions to study the full model’s societal implications.
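
To make those staged sizes concrete, here’s a minimal sketch of sampling text from one of the publicly released GPT-2 checkpoints. It assumes the open source Hugging Face transformers library rather than OpenAI’s original release code, and the prompt and sampling settings are illustrative choices, not anything OpenAI prescribes.

```python
# Minimal sketch: sampling from a publicly released GPT-2 checkpoint
# via the Hugging Face "transformers" library (not OpenAI's release code).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# The staged-release sizes map to these checkpoint names:
# "gpt2" (124M, the ~8% February model), "gpt2-medium" (355M, ~a quarter),
# "gpt2-large" (774M, half), "gpt2-xl" (1.5B, the full model)
model_name = "gpt2-large"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

prompt = "Scientists announced today that"  # illustrative prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling, a common setting in the widely shared GPT-2 demos
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Swapping model_name between the four checkpoint names steps through the staged-release sizes, from the 8 percent February model up to the full 1.5 billion parameter version.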

 

RELATED
Companies are racing to fill the seas with autonomous drone ships

 

The report is worth reading because it details what OpenAI learned throughout this process. It notes that both the staged release and the research partnership agreements proved to be processes worth replicating in the future, helping OpenAI better understand and anticipate the possible malicious uses of GPT-2. Indeed, the research partners were able to quantify some threats that had previously only been speculative: a study conducted by collaborators at Cornell University, for example, found that readers on average believed GPT-2’s outputs to be genuine news articles nearly as often as New York Times ones. Several researchers outside of OpenAI’s official partnerships have also begun tackling the challenge of detecting machine-generated text.

The authors concluded that, after careful monitoring, OpenAI had not yet found any attempts at malicious use but had seen multiple beneficial applications, including code autocompletion, grammar assistance, and question-answering systems for medical assistance. As a result, the lab felt that releasing the most recent code was ultimately more beneficial. Other researchers argue that several successful efforts to replicate GPT-2 have made OpenAI’s withholding of the code moot anyway.

 

RELATED
Researchers warn we could run out of data to train AIs by 2026

 

Unexpectedly, the report has received a mixed response, with some lauding OpenAI for sparking a discussion and introducing a new set of norms that didn’t previously exist.

“The staged release of GPT-2 […] was a useful experiment,” says Peter Eckersley, the director of research at the Partnership on AI, of which OpenAI is a member. “Through gathering the AI community to debate these matters, we’ve found there are many subtle pieces that need to be gotten right in deciding when and how to publish research that has a risk of unintended consequences or malicious uses.”

Others, however, have remained critical of OpenAI’s decisions. Vanya Cohen, a recent master’s graduate from Brown University who recreated an open-source version of GPT-2, argues that withholding the model did more to slow down research into countermeasures than to prevent replication.

 

RELATED
Google unveiled a new type of AI that spontaneously mutates and evolves all by itself

 

“Large language models like GPT-2 are the best currently available tools for identifying fake text generated by these same models,” he says.
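
To illustrate Cohen’s point, here’s a minimal sketch of the kind of signal such detectors rely on, again assuming the Hugging Face transformers library: score a suspect passage with GPT-2 itself, since text sampled from the model tends to look unusually predictable (low perplexity) to it. Any threshold on that score would be a heuristic assumption, not a production detector.

```python
# Illustrative sketch: using GPT-2 itself to score how "predictable" a passage is.
# Machine-generated text often has lower perplexity under the generating model
# than human-written text does. This is a heuristic signal, not a reliable detector.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means the model finds it more predictable."""
    input_ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss
        loss = model(input_ids, labels=input_ids).loss
    return torch.exp(loss).item()

passage = "The suspect passage to score goes here."  # illustrative input
print(f"GPT-2 perplexity: {perplexity(passage):.1f}")
```

Tools like the GLTR visualiser, which was built on top of GPT-2, apply a similar idea token by token, highlighting how often each word falls within the model’s top predictions.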

Still others were more measured: “I don’t think a staged release was particularly useful in this case because the work is very easily replicable,” says Chip Huyen, a deep learning engineer at Nvidia, which is building its own fascinating creative machines. “But it might be useful in the way that it sets a precedent for future projects. People will see staged release as an alternative option.”

Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, which also adopted a staged release for its language model Grover, echoes the sentiment: “I applaud their intent to design a thoughtful, gradual release process for AI technology but question whether all the fanfare was warranted.”

 

RELATED
Google DeepMind publishes breakthrough Artificial General Intelligence architecture

 

Jack Clark, the policy director of OpenAI, places GPT-2 in the context of the organization’s broader mission.

“If we are successful as an AI community in being able to build [artificial general intelligence], we will need a huge amount of historical examples from within AI” of how to handle high-stakes research, he says. “But what if there aren’t any historical examples? Well, then you have to generate [your own] evidence—which is what we’re doing.”
