Matthew Griffin, described as "The Adviser behind the Advisers" and a "Young Kurzweil," is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the "Codex of the Future" series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew's ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew's recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world's largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Since we haven't created an ASI yet, we could stop AI development today and prevent one from ever appearing. So what its creators mean is that we (humanity) won't stop. And that's different from can't.
Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.
Artificial Super Intelligence (ASI), the step beyond Artificial General Intelligence (AGI), has been theorised by philosophers and academics for decades, though rapid advances in recent years mean we may now be on the cusp of creating it, senior figures within OpenAI have warned.
The Future of AI, a keynote by Matthew Griffin
In a blog post published on Wednesday, OpenAI’s Sam Altman, Greg Brockman and Ilya Sutskever said that AI superintelligence will be unprecedented in its power – both positive and negative.
“Given the picture as we see it now, it’s conceivable that within the next ten years AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” the post stated.
“In terms of both potential upsides and downsides, super intelligence will be more powerful than other technologies humanity has had to contend with in the past.”
OpenAI laid out three ways humanity can navigate the arrival of super intelligence, though it warned lawmakers and regulators against trying to stop it.
“We believe it would be unintuitively risky and difficult to stop the creation of super intelligence,” the post warned.
“Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.”
Altman appeared before a congressional hearing last week to face questions from US senators about the risks that advanced AI poses.
The 38-year-old told the Senate Judiciary Subcommittee on Privacy, Technology and the Law that he believed AI-powered chatbots like ChatGPT were a “significant area of concern” and required rules and guidelines to prevent misuse.
“There’s no way to put this genie in the bottle. Globally, this is exploding,” Democratic Senator Cory Booker acknowledged.
One potential way to prevent AI harms such as election manipulation, Altman said, would be to introduce licensing and testing requirements for the development of AI.
One possibility, according to OpenAI, is through the creation of a US licensing agency for AI called the Office for AI Safety and Infrastructure Security (Oasis).