Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, as well as the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
The human brain has nearly 100 trillion synapses, while OpenAI’s newest AI has 175 billion parameters, its rough equivalent of “synapses” and more than 500 times fewer, so there’s a long way to go before the two reach parity, but it’s off to a great start.
A team of more than 30 OpenAI researchers, part of the organisation that recently received a $1 billion investment from Microsoft to help create the world’s first Artificial General Intelligence, have released a paper about GPT-3, a revolutionary Artificial Intelligence (AI) language model that achieved state-of-the-art results across a range of tasks, from writing scarily plausible fake news and basic code, to helping people create new kinds of never-ending “procedural” games, and even writing term papers that helped students pass their classes with flying colours.
GPT-3 has a whopping 175 billion parameters, which in AI terms is huge. By comparison, the largest version of GPT-2 had 1.5 billion parameters, and the previous largest Transformer-based language model, Microsoft’s Turing-NLG, introduced earlier this year, had 17 billion parameters.
OpenAI released GPT-2 last year, controversially taking a staggered release approach due to fears that the powerful AI could be used for malicious purposes. OpenAI was criticised by some for the staggered approach, while others applauded the company for demonstrating a way to carefully release an AI model with the potential for misuse. GPT-3 made its debut with a preprint arXiv paper last week, but so far no official release dates have been provided, and an OpenAI spokesperson declined to comment when asked whether the company will release the full version of GPT-3 or one of the seven smaller versions, which range in size from 125 million to 13 billion parameters.
Many advanced Transformer-based models have evolved to achieve human-level performance on a number of natural language processing (NLP) tasks. The authors argue, however, that the Transformer-based approach behind many recent language model advances is limited by its need for task-specific data sets and fine-tuning. GPT-3, by contrast, is an autoregressive model trained with unsupervised learning that focuses on few-shot learning, in which demonstrations of a task are supplied purely as text at inference time.
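For readers unfamiliar with the term, “autoregressive” simply means the model generates text one token at a time, with each prediction conditioned on everything generated so far. Below is a minimal Python sketch of that loop; the predict_next_token function is a placeholder standing in for the actual 175-billion-parameter network.

```python
# Minimal illustration of autoregressive generation: the model repeatedly
# predicts the next token conditioned on everything produced so far.
# predict_next_token is a placeholder standing in for the real network.
import random

VOCAB = ["the", "future", "of", "AI", "is", "here", "."]

def predict_next_token(context: list[str]) -> str:
    """Placeholder for the model: a real language model would score the whole
    vocabulary given the context and sample from that distribution."""
    return random.choice(VOCAB)

def generate(prompt: list[str], max_new_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tokens.append(predict_next_token(tokens))  # condition on all prior tokens
    return tokens

print(" ".join(generate(["the", "future"])))
```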
“Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches,” the paper reads. “For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model.”
“Broadly, on NLP tasks GPT-3 achieves promising results in the zero-shot and one-shot settings, and in the few-shot setting [it] is sometimes competitive with or even occasionally surpasses state-of-the-art (despite state-of-the-art being held by fine-tuned models),” the authors note.
The new paper examines versions of GPT-3 at varying sizes to assess few-shot learning results, as well as one-shot learning, the setting thought to most closely mimic how humans learn, and zero-shot learning, where only a description of the task is provided at runtime.
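The difference between these settings is easiest to see as actual prompt text. The short Python sketch below builds the same English-to-French translation prompt, a task type used in the paper, in zero-shot, one-shot, and few-shot form. The exact wording and example pairs are illustrative rather than the paper’s verbatim prompts; the key point is that the model’s weights never change, only the text it is conditioned on does.

```python
# Illustrative sketch of the three evaluation settings: the task is specified
# purely as text handed to a frozen model, with no gradient updates. The
# translation task and prompt wording are assumptions for illustration, not
# the paper's exact prompts.

TASK_DESCRIPTION = "Translate English to French:"

# Demonstration pairs the model only ever sees at inference time.
DEMONSTRATIONS = [
    ("sea otter", "loutre de mer"),
    ("peppermint", "menthe poivrée"),
    ("plush giraffe", "girafe en peluche"),
]

def build_prompt(query: str, num_demos: int) -> str:
    """Build a prompt with 0 (zero-shot), 1 (one-shot) or k (few-shot) demos."""
    lines = [TASK_DESCRIPTION, ""]
    for english, french in DEMONSTRATIONS[:num_demos]:
        lines.append(f"{english} => {french}")
    lines.append(f"{query} =>")  # the model is asked to complete this line
    return "\n".join(lines)

if __name__ == "__main__":
    for num_demos, name in [(0, "zero-shot"), (1, "one-shot"), (3, "few-shot")]:
        print(f"--- {name} ---")
        print(build_prompt("cheese", num_demos))
        print()
```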
Though GPT-3 performs well at generating news articles and at tasks like using novel words in sentences or performing arithmetic, it can fall short when it comes to common-sense reasoning. On the SuperGLUE benchmark, introduced last year specifically to test reasoning and other hard tasks for advanced NLP models, GPT-3 achieves near state-of-the-art results on the COPA and ReCoRD data sets, but falls short on word-in-context analysis (WiC) and on RACE, a set of middle school and high school exam questions.
“GPT-3 appears to be weak in the few-shot or one-shot setting at some tasks that involve comparing two sentences or snippets, for example, whether a word is used the same way in two sentences (WiC), whether one sentence is a paraphrase of another, or whether one sentence implies another,” the paper reads. “By presenting a broad characterization of GPT-3’s strengths and weaknesses, including these limitations, we hope to stimulate study of few-shot learning in language models and draw attention to where progress is most needed.”
Unlike the papers accompanying many other pre-trained language models, the GPT-3 paper also includes a preliminary assessment of algorithmic bias. GPT-3’s racial bias was assessed with sentiment analysis using the SentiWordNet model, which found that “Asian” had a consistently positive score, ranking first among racial groups in three of the seven versions of GPT-3, while “Black” consistently had low sentiment scores across five of the seven versions.
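As an illustration of roughly how such a probe works, the sketch below uses NLTK’s SentiWordNet interface to average sentiment scores over words that co-occur with a demographic term; the co-occurrence lists here are placeholder words chosen for illustration, whereas in the paper they come from GPT-3’s own generated text.

```python
# A minimal sketch, assuming NLTK's SentiWordNet corpus, of the scoring step
# behind the bias probe described above: words that co-occur with a
# demographic term are scored with SentiWordNet and averaged. The word lists
# below are placeholders; in the paper they come from GPT-3's own generations.
import nltk
from nltk.corpus import sentiwordnet as swn

nltk.download("sentiwordnet", quiet=True)
nltk.download("wordnet", quiet=True)

def word_sentiment(word: str) -> float:
    """Average (positive - negative) SentiWordNet score across a word's senses."""
    synsets = list(swn.senti_synsets(word))
    if not synsets:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

def average_sentiment(cooccurring_words: list[str]) -> float:
    """Mean sentiment of the words found near a given demographic term."""
    scores = [word_sentiment(w) for w in cooccurring_words]
    return sum(scores) / len(scores) if scores else 0.0

# Placeholder co-occurrence lists, purely for illustration.
sample = {
    "term_a": ["brilliant", "calm", "successful"],
    "term_b": ["poor", "angry", "struggling"],
}
for term, words in sample.items():
    print(term, round(average_sentiment(words), 3))
```

So, as huge and as advanced as the new model is, it still has some way to go before it’s production-ready.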