WHY THIS MATTERS IN BRIEF
Sam Altman wants to develop AIs that outperform human workers in all economically valuable work, and he is likely underestimating the “holy crap” moment many people are having.
The world is changing faster than ever before as powerful emerging technologies such as Artificial Intelligence (AI) develop exponentially fast and are then deployed into the wild just as fast. For some, this furious rate of AI-led global change is scary, in part because a lot of people are now worried about how they protect themselves from AI automation that, in the words of Sam Altman, will soon “outperform humans at all economically valuable work.” But for rich dudes like Altman, the CEO of OpenAI, who have so much money they never have to work again, this worry is likely far down the list of things keeping him up at night – unlike the rest of us mortals.
Securely back at the helm of leading global AI firm OpenAI, Altman said in a recent interview that he sees largely positive trade-offs to technology that is racing to match human intelligence.
The Future of Work and AI, a keynote by Matthew Griffin
“Although we are giving something up here, in some sense we are going to have things that are smarter than us,” he told tech mogul Bill Gates during a recent podcast conversation. “If we can get into this world of post-scarcity, we will find new things to do.”
Altman and Gates engaged in a wide-ranging dialogue that netted some profound insights, peeling back the layers of Altman’s perspectives on AI and its trajectory. But first, Altman sought to reassure industry watchers following the ultimately unsuccessful corporate coup at OpenAI last year.
“A lot of people have remarked on the fact that the team has never felt more productive, or more optimistic, or better,” Altman said.
Altman touched upon the philosophical aspects of AI, contemplating a future where AI surpasses human intelligence. He has previously shared his concerns regarding the socio-economic impact of AI, calling for better regulations to ensure properly aligned AI development.
For Altman, AI “will lead to a society in which workers will be able to do more things for the same amount of money, making them more productive for their employers.”
“If you make a programmer three times more effective, it’s not just that they can do three times more stuff, it’s that they can… think of totally different things,” he said.
For Altman, this shift in work dynamics that AI is causing is inevitable.
“The part that I find potentially a little scary is just the speed with which society is going to have to adapt, and that the labor market will change,” he said – which is what I’ve been saying for years now too.
Altman previously said that “the hypothetical idea that we already have done something really bad by launching ChatGPT” is something that bothers him. However, as worried as he may sound about the future of those affected by AI, OpenAI is relentlessly pushing the boundaries of its GPT models, presenting more powerful LLMs, launching a store for customized agents that could easily replace even more jobs, and partnering with news sites to train its future GPT-5 model on their content.
As for the future of AI, Altman emphasized the need for significant leaps in AI’s cognitive skills. For him, multi-modality (the capacity of a model to understand inputs that go beyond text and include, for example, images or videos) will play a key role in determining which model dominates the AI race.
However, the quality of the outputs will be the differentiating factor, and “the most important areas of progress will be around reasoning ability… and also reliability,” he said.
Altman and Gates discussed the topic of regulatory and ethical constraints, exploring calls for a cohesive global regulatory framework for AI. Considering the expansive impact of advanced AI systems, he once again advocated for a balanced, international governance approach:
“For these… future extraordinarily powerful systems, we have been socializing the idea of a global regulatory body,” he said.
Altman’s reflections present a multifaceted outlook on AI’s future: a mix of hope and prudence, ingenuity and accountability. New players in the AI field are challenging OpenAI’s dominance, and regulators (just like everyday workers) are more worried than ever.
That doesn’t scare Altman.
“It’s, you know, both annoying, motivating and fun,” he said. “But it does push us to be better and do things faster.”