
Google CEO says 25% of all new Google code is AI-generated

WHY THIS MATTERS IN BRIEF

AI is starting to take over the coding world and coding jobs … and it’s only going to get better, fast.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

The first Artificial Intelligence (AI) coding systems emerged back in 2017, and just a few months ago Amazon announced it had saved the equivalent of over 4,500 developer-years and over $260 million in development costs by using AI to upgrade more than 30,000 Java apps. And now Google’s CEO Sundar Pichai has revealed that AI systems generate more than a quarter of the new code for the company’s products, with human programmers overseeing the computer-generated contributions. The statement, made during Google’s Q3 2024 earnings call, shows how AI tools are already having a sizeable impact on software development and even legacy code translation work.

 


 

“We’re also using AI internally to improve our coding processes, which is boosting productivity and efficiency,” Pichai said during the call.

“Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.”

Google developers aren’t the only programmers using AI to assist with coding tasks. It’s difficult to get hard numbers, but according to Stack Overflow’s 2024 Developer Survey, 76 percent of respondents “are using or are planning to use AI tools in their development process this year,” with 62 percent actively using them. A 2023 GitHub survey found that 92 percent of US-based software developers were “already using AI coding tools both in and outside of work.”

AI-assisted coding first emerged in a big way with GitHub Copilot in 2021, and the feature saw a wide release in June 2022. It used a specialized coding AI model from OpenAI called Codex, which was trained both to suggest continuations to existing code and to create new code from scratch from English instructions. Since then, AI-based coding has expanded rapidly, with ever-improving offerings from Anthropic, Meta, Google, OpenAI, and Replit.
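To make that “code from English instructions” idea concrete, here’s a minimal, hypothetical sketch of the Codex-style workflow: the engineer writes a plain-English docstring and a function signature, and the model proposes the body. The function, its name, and the suggested body are illustrative assumptions, not actual Copilot or Codex output.

```python
from collections import Counter

# An engineer writes the English instruction and the signature...
def top_words(path: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n most common words in the file at `path`, ignoring case."""
    # ...and a Codex-style model proposes a continuation like the body
    # below, which the engineer then reviews, edits, and accepts.
    with open(path, encoding="utf-8") as f:
        words = f.read().lower().split()
    return Counter(words).most_common(n)
```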

 


 

GitHub Copilot has expanded in capability as well. Just yesterday, the Microsoft-owned subsidiary announced that developers will be able to use non-OpenAI models, such as Anthropic’s Claude 3.5 Sonnet and Google’s Gemini 1.5 Pro, to generate code within the application for the first time.

While some tout the benefits of AI use in coding, the practice has also attracted criticism from those who worry that future software generated partially or largely by AI could become riddled with difficult-to-detect bugs and errors.

According to a 2023 study by Stanford University, developers using AI coding assistants tended to include more bugs in their code while paradoxically believing their code to be more secure. This finding was highlighted by Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, who told reporters that “there are probably both benefits and risks involved” with AI-assisted coding, emphasizing that “more code isn’t better code.”
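To see why such flaws are easy to accept, consider an illustrative sketch, invented for this article rather than taken from the study, of a plausible AI suggestion next to the version a careful reviewer would demand. The function names are hypothetical.

```python
import random
import secrets
import string

# A Copilot-style suggestion for "generate a random session token" often
# looks like this. It runs, passes tests, and *looks* secure...
def make_token_insecure(length: int = 32) -> str:
    # ...but Python's random module uses the Mersenne Twister PRNG, whose
    # output is predictable, so an attacker who observes enough tokens
    # can reconstruct future ones.
    return "".join(random.choice(string.ascii_letters) for _ in range(length))

# The version a human reviewer should insist on draws from the operating
# system's cryptographically secure random number generator instead.
def make_token(nbytes: int = 32) -> str:
    return secrets.token_urlsafe(nbytes)  # URL-safe token from 32 random bytes
```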

While introducing bugs is certainly a risky side effect of AI coding, the history of software development includes other controversial transitions. The move from assembly language to higher-level languages faced resistance from programmers who worried about loss of control and efficiency, and the adoption of object-oriented programming in the 1990s sparked criticism about code complexity and performance overhead. The shift to AI augmentation in coding may simply be the latest transition to meet resistance from the old guard.

 


 

“Whether you think coding with AI works today or not doesn’t really matter,” former Microsoft VP Steven Sinofsky, whose personal history with coding stretches back to the 1970s, posted in September. “But if you think functional AI helping to code will make humans dumber or isn’t real programming just consider that’s been the argument against every generation of programming tools going back to Fortran.”

Strong preferences about “proper” coding practices have circulated widely among developers over the decades, and some of the more extreme positions may seem silly today, especially those concerning quality-of-life improvements that many programmers now take for granted. Daring Fireball’s John Gruber replied to Sinofsky’s tweet by saying, “I know youngster[s] won’t believe me, but I remember when some programmers argued that syntax coloring in text editors would make people dumber.”

Ultimately, all tools augment or enhance human capability. We use tools to build things faster, and we have always used tools to build newer, more complex tools. It’s the story of technology itself. Draftsmen laid out the first silicon computer chips on paper, and later engineers designed successive chips on computers that used integrated circuits. Today, electronic design automation (EDA) software assists in the design and simulation of semiconductor chips, and companies like Nvidia are now using AI algorithms to design them.

 


 

Does that mean current AI models are capable of generating flawless, high-quality code that developers can just insert and forget? Likely not. For now, skilled humans with experience still need to be in the loop to ensure everything works properly, which seems to be the practice Google’s CEO was touting in the earnings call. Like any tool, AI assistance in skilled hands may significantly accelerate a task – and yet a hammer alone cannot build a house.
