WHY THIS MATTERS IN BRIEF
Since we haven’t yet created an ASI, we could stop AI development today and prevent one from ever appearing. So what the technology’s creators really mean is that we (humanity) won’t stop. And that’s different from can’t.
OpenAI, the Artificial Intelligence (AI) firm behind the hit ChatGPT, has warned that the arrival of AI “that surpasses humans” is unavoidable.
Artificial General Intelligence (AGI), also known as superintelligence, has been theorised by philosophers and academics for decades, though senior figures within OpenAI warned that rapid advances in recent years mean we may now be on the cusp of it.
The Future of AI, a keynote by Matthew Griffin
In a blog post published on Wednesday, OpenAI’s Sam Altman, Greg Brockman and Ilya Sutskever said that AI superintelligence will be unprecedented in its power – both positive and negative.
“Given the picture as we see it now, it’s conceivable that within the next ten years AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” the post stated.
“In terms of both potential upsides and downsides, super intelligence will be more powerful than other technologies humanity has had to contend with in the past.”
OpenAI laid out three ways humanity can navigate the arrival of superintelligence, though it warned lawmakers and regulators against trying to stop it.
“We believe it would be unintuitively risky and difficult to stop the creation of super intelligence,” the post warned.
“Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.”
Altman appeared before a congressional hearing last week to face questions from US senators about the risks that advanced AI poses.
The 38-year-old told the Senate Judiciary Subcommittee on Privacy, Technology and the Law that he believed AI-powered chatbots like ChatGPT were a “significant area of concern” and required rules and guidelines to prevent misuse.
“There’s no way to put this genie in the bottle. Globally, this is exploding,” Democratic Senator Cory Booker acknowledged.
One potential way to prevent AI harms like election manipulation, Altman said, would be to introduce licensing and testing requirements for the development of AI.
One possibility, according to OpenAI, would be the creation of a US licensing agency for AI, dubbed the Office for AI Safety and Infrastructure Security (Oasis).