WHY THIS MATTERS IN BRIEF
AI is nearing the point where it is capable of recursive, continuous self-improvement, and that ushers in a new age of AI.
On Thursday, OpenAI announced GPT-5.3-Codex, a new coding model that is purportedly 25 percent faster than its predecessor and posts other impressive benchmark results. The real head-turner, however, is another claim the Sam Altman-led company made about its development: GPT-5.3-Codex is supposedly its first Artificial Intelligence (AI) model “that was instrumental in creating itself,” with its team “blown away” by the results. It isn’t even the first model said to have helped build itself; it’s the second, after Anthropic’s AI Claude Opus 4.5 helped build its successor, Opus 4.6.
Is the Singularity nearly upon us? Is this the long-awaited sign of recursive self-improvement, a point at which the machines are finally able to continually rewrite their own code to transform themselves into superior beings?
Not quite. In less sensational wording, OpenAI describes the AI’s role as being used to “accelerate its own development.” What that looked like, according to the blog post, is that “the Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”
In other words, GPT-5.3-Codex helped the human coders with some of their tasks. Impressive, maybe, but hardly a sign of humankind’s immediate obsolescence.
The imaginations of AI boosters, nonetheless, went into overdrive. The responses to a post on the r/singularity subreddit about the news were filled with a mixture of doom-tinged hype, hype-tinged doom, and a good deal of gallows humor.
“I hope everyone remembers how good a mid level manager I was before the machines came,” one user wrote. The sentiment extended to X. “Holy moly – so it begins!” tweeted another user, who runs an AI newsletter.
Taken together, it’s a sign of how the discourse around AI is still dominated by sci-fi concepts and rhetoric and distorted by hype. AI companies have been more than occasionally guilty of stoking the flames. Last month, Boris Cherny, the head of Anthropic’s Claude Code, also claimed that “pretty much” 100 percent of the company’s code is now AI-generated using its own model. What this actually looks like behind the scenes, though, remains hazy. And it’s still far removed from the idea of AI models building new models completely autonomously.
Anthropic, regardless, isn’t alone. With the new Codex release, Altman – who’s been on an emotional rollercoaster as of late – added to the pile with a sob story about how his coding tool was so good that it made him depressed.
“I built an app with Codex last week. It was very fun,” Altman tweeted. “Then I started asking it for ideas for new features and at least a couple of them were better than I was thinking of. I felt a little useless and it was sad.”
Who said the “vibe” in “vibe-coding” had to be good?