
OpenAI is building a team to stop super AI from going rogue



As AI gets more powerful, many people worry about how we’ll control it, so teams are being formed to work on exactly that …


Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

If the individuals who are at the very forefront of Artificial Intelligence (AI) technology are commenting about the potentially catastrophic effects of highly intelligent AI systems, then it’s probably wise to sit up and take notice.

Just a couple of months ago, Geoffrey Hinton, a man considered one of the “godfathers” of AI for his pioneering work in the field, said that the technology’s rapid pace of development meant it was “not inconceivable” that Artificial Super Intelligence (ASI), considered to be far superior to all the human minds on the planet combined, could end up wiping out humanity.

And Sam Altman, CEO of OpenAI, the company behind the viral ChatGPT chatbot, has admitted to being “a little bit scared” about the potential effects of advanced AI systems on society.

Altman is so concerned that on Wednesday his company announced it’s setting up a new unit called Superalignment aimed at ensuring that superintelligent AI doesn’t end up causing chaos or something far worse.

“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems,” OpenAI said in a post introducing the new initiative. “But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

OpenAI said that although superintelligent AI may seem a long way off, it believes it could be developed by 2030. And it readily admits that at present no system exists “for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

To deal with the situation, OpenAI wants to build a “roughly human-level automated alignment researcher” that would perform safety checks on a superintelligent AI, adding that managing these risks will also require new institutions for governance as well as a solution to the problem of superintelligence alignment.

For Superalignment to have an effect, OpenAI needs to assemble a crack team of top machine learning researchers and engineers.

The company appears very frank about its effort, describing it as an “incredibly ambitious goal” while also admitting that it’s “not guaranteed to succeed.” But it adds that it’s “optimistic that a focused, concerted effort can solve this problem.”

New AI tools like OpenAI’s ChatGPT and Google’s Bard, among many others, are so revolutionary that experts warn that even at this pre-superintelligence level, the workplace and wider society face fundamental changes in the near term.

It’s why governments around the world are scrambling to play catch-up, hurriedly moving to impose regulations on the rapidly developing AI industry in a bid to ensure the technology is deployed in a safe and responsible manner. However, unless a single global body is formed, each country will have its own views on how best to use the technology, meaning those regulations could vary widely and lead to markedly different outcomes. And it’s these differing approaches that will make Superalignment’s goal all the harder to achieve.



