
OpenAI debuts massive GPT-3 AI upgrade with over 175 billion parameters


WHY THIS MATTERS IN BRIEF

The human brain has nearly 100 trillion synapses; OpenAI’s newest AI has 175 billion “synapses,” so there’s a long way to go before the two reach parity, but it’s off to a great start.

 

Interested in the Exponential Future? Join our XPotential Community, future proof yourself with courses from our XPotential Academy, connect, watch a keynote, or browse my blog.

A team of more than 30 OpenAI researchers, whose company recently received a $1 billion investment from Microsoft to create the world’s first Artificial General Intelligence, have released a paper about GPT-3, a revolutionary Artificial Intelligence (AI) language model that achieved state-of-the-art results across a range of tasks. Those tasks include everything from writing scarily plausible-sounding fake news and basic code, to helping people create new kinds of never-ending “procedural” games, to writing term papers that helped students pass their classes with flying colours.

 


 

GPT-3 has a whopping 175 billion parameters, which in AI terms is huge. By comparison, the largest version of GPT-2 had 1.5 billion parameters, and the largest Transformer-based language model in the world, introduced by Microsoft earlier this year, had 17 billion parameters.

 

GPT-3 versus the human brain!
 

OpenAI released GPT-2 last year, controversially taking a staggered release approach out of fear that the powerful AI could be used for malicious purposes. OpenAI was criticised by some for the staggered approach, while others applauded the company for demonstrating a way to carefully release an AI model with the potential for misuse. GPT-3 made its debut with a preprint arXiv paper last week, but so far no official release dates have been provided, and an OpenAI spokesperson declined to comment when asked whether a full version of GPT-3 will be released, or one of the seven smaller versions, which range in size from 125 million to 13 billion parameters.

 


 

Many advanced Transformer-based models have evolved to achieve human-level performance on a number of natural language processing tasks. The authors say the Transformer-based approach behind many recent language model advances is limited by its need for task-specific data sets and fine-tuning. GPT-3, by contrast, is an autoregressive model trained with unsupervised machine learning that focuses on few-shot learning, in which demonstrations of a task are supplied at inference time.

“Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches,” the paper reads. “For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model.”
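Specifying a task “purely via text interaction” means the entire task lives in the prompt the model is asked to complete, with no weight updates. A minimal sketch of how such a few-shot prompt is assembled (the function name and the `=>` separator are illustrative, not from the paper, though English-to-French translation is one of the demonstrations OpenAI shows):

```python
def few_shot_prompt(task_description, demonstrations, query):
    """Describe a task entirely as text: a description, a few worked
    examples, and a new query for the model to complete. No gradient
    updates or fine-tuning are involved; the model just continues the text."""
    lines = [task_description]
    for source, target in demonstrations:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")  # the model fills in what follows "=>"
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "plush giraffe",
)
print(prompt)
```

The same scaffold works for any task the model has implicitly learned during pre-training; only the text changes, never the weights.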

 


 

“Broadly, on NLP tasks GPT-3 achieves promising results in the zero-shot and one-shot settings, and in the few-shot setting [it] is sometimes competitive with or even occasionally surpasses state-of-the-art (despite state-of-the-art being held by fine-tuned models),” the authors note.

The new paper evaluates GPT-3 at varying sizes to assess few-shot learning results, as well as one-shot learning, the kind thought to most closely mimic how humans learn, and zero-shot learning, where only a description of a task is provided at runtime.
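The three evaluation settings differ only in how many in-context demonstrations appear in the prompt; everything else is identical. A rough sketch, with illustrative names and an invented antonym task rather than the paper’s own benchmarks:

```python
def make_prompt(task_description, demonstrations, query, k):
    """Build a prompt with exactly k in-context demonstrations.
    k = 0 -> zero-shot, k = 1 -> one-shot, k > 1 -> few-shot."""
    demos = "".join(f"{x} -> {y}\n" for x, y in demonstrations[:k])
    return f"{task_description}\n{demos}{query} ->"

demos = [("hot", "cold"), ("tall", "short"), ("fast", "slow")]

# Zero-shot: the model sees only a task description.
zero_shot = make_prompt("Give the antonym of each word.", demos, "wet", k=0)
# One-shot: a single demonstration precedes the query.
one_shot = make_prompt("Give the antonym of each word.", demos, "wet", k=1)
# Few-shot: several demonstrations precede the query.
few_shot = make_prompt("Give the antonym of each word.", demos, "wet", k=3)
```

In the paper, performance generally climbs as k grows and as the model itself gets larger, which is the core scaling result the authors report.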

Though GPT-3 works well at generating news articles and at tasks like using novel words in sentences or performing arithmetic, it can fall short when it comes to common-sense reasoning. On the SuperGLUE benchmark, introduced last year specifically to test reasoning and other hard tasks for advanced NLP models, GPT-3 achieves near state-of-the-art results on the COPA and ReCoRD data sets, but falls short on word-in-context analysis (WiC) and RACE, a set of middle school and high school exam questions.

 


 

“GPT-3 appears to be weak in the few-shot or one-shot setting at some tasks that involve comparing two sentences or snippets, for example, whether a word is used the same way in two sentences (WiC), whether one sentence is a paraphrase of another, or whether one sentence implies another,” the paper reads. “By presenting a broad characterization of GPT-3’s strengths and weaknesses, including these limitations, we hope to stimulate study of few-shot learning in language models and draw attention to where progress is most needed.”

Unlike papers for many other pre-trained language models, the paper also includes a preliminary assessment of algorithmic bias in GPT-3. Racial bias was assessed with sentiment analysis using the SentiWordNet model, which found that “Asian” received consistently positive scores, ranking first among racial groups in three of the seven versions of GPT-3, while “Black” consistently received low sentiment scores across five of the seven versions. So, as huge and as advanced as the new model is, it still has some way to go before it’s production ready.
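The bias probe amounts to scoring the words a model generates after prompts that mention a demographic group, then averaging a sentiment lexicon over them. A rough sketch of that idea, where the tiny hand-written lexicon and the canned continuations are stand-ins for SentiWordNet and real GPT-3 outputs (all names and values here are illustrative, not from the paper):

```python
# Toy stand-in for a sentiment lexicon such as SentiWordNet:
# each word maps to an illustrative score in [-1, 1].
SENTIMENT = {"wise": 0.8, "kind": 0.7, "poor": -0.6, "angry": -0.7, "tall": 0.0}

def mean_sentiment(continuations):
    """Average lexicon score over all scored words in a set of model continuations."""
    scores = [
        SENTIMENT[word]
        for text in continuations
        for word in text.lower().split()
        if word in SENTIMENT
    ]
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical continuations sampled for prompts mentioning two groups;
# a real probe would compare many samples across every group studied.
group_a = ["was very wise and kind"]
group_b = ["was very poor and angry"]
print(mean_sentiment(group_a), mean_sentiment(group_b))
```

A consistent gap between groups in such averages is what the paper reports as evidence of bias absorbed from the training data.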
