
DeepMind's new AI designed a new algorithm to speed up global computing


WHY THIS MATTERS IN BRIEF

The world needs more computing power, a lot more, so companies are trying to find ways to boost it.

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

A while ago I talked about a Google Artificial Intelligence (AI) that could develop new, better AIs that beat everything the best human AI developers could create. Now another AI has done the same at a more granular scale: it creates new algorithms and optimises old ones.

 

Now, an algorithm used trillions of times a day around the world could run up to 70 per cent faster, thanks to an AI created by the UK-based firm DeepMind that found an improved way for computers to sort data, one that human programmers had overlooked for decades.

“We honestly didn’t expect to achieve anything better: it’s a very short program, these types of programs have been studied for decades,” says Daniel Mankowitz at DeepMind.

Sorting algorithms are one of the workhorses of computation, used to organise data by alphabetising words or ranking numbers from smallest to largest. Many different sorting algorithms exist, but innovation is limited because they have been highly optimised over the decades.
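To make that concrete, here is a minimal C++ sketch of a fixed-size sorting network for three values, the kind of very short, heavily optimised routine at issue here. This is an illustration of the technique, not DeepMind's actual code:

```cpp
#include <cstdio>
#include <utility>  // std::swap

// Sort three values with a fixed "sorting network": a hard-coded
// sequence of compare-and-swap steps, with no loops over the input.
void sort3(int& a, int& b, int& c) {
    if (a > b) std::swap(a, b);  // order the first pair
    if (b > c) std::swap(b, c);  // push the largest to the end
    if (a > b) std::swap(a, b);  // order the remaining pair
}

int main() {
    int x = 3, y = 1, z = 2;
    sort3(x, y, z);
    std::printf("%d %d %d\n", x, y, z);  // prints: 1 2 3
}
```

Routines this short leave almost no room for improvement, which is why a faster version going unnoticed for decades is so surprising.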

 

Now, DeepMind has created an AI model called AlphaDev that is designed to discover new algorithms to complete a given task, with the hope of beating our existing efforts. Rather than tweaking current algorithms, AlphaDev starts from scratch.

It uses assembly code, which is the intermediate computer language that sits between human-written code and sequences of binary instructions encoded in 0s and 1s. Assembly code can be painstakingly read and understood by humans, but most software is written in a higher-level language that is more intuitive before being translated, or “compiled”, into assembly code. DeepMind says that assembly code affords AlphaDev more leeway to create more efficient algorithms.
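To illustrate the gap between the two levels, here is a tiny C++ function alongside the kind of x86-64 assembly a compiler might emit for it. The exact instructions depend on the compiler and optimisation flags, so treat the commented listing as an approximation, not real compiler output:

```cpp
// Branch-free minimum of two ints, a building block of sorting networks.
int min2(int a, int b) { return a < b ? a : b; }

// Roughly what an optimising x86-64 compiler might produce
// (illustrative; real output varies by compiler and flags):
//   mov   eax, edi    ; copy a into the return register
//   cmp   esi, eax    ; compare b with a
//   cmovl eax, esi    ; if b < a, overwrite the result with b
//   ret
```

Working at this level lets an optimiser reorder, swap, or delete individual instructions, choices a higher-level language never exposes.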

The AI is told to build an algorithm one instruction at a time, testing its output against a known correct solution to ensure it is creating an effective method. It is also told to create the shortest possible algorithm. DeepMind says the task grows rapidly more difficult with larger problems, as the number of possible instruction combinations can approach the number of particles in the universe.
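A crude way to picture the correctness check: for a tiny fixed input size, a candidate routine can simply be tested against every possible input ordering. The sketch below does this for the illustrative sort3 routine from earlier; the hard part AlphaDev faces is not this check but searching the astronomically large space of instruction sequences:

```cpp
#include <algorithm>  // std::is_sorted, std::next_permutation
#include <array>

void sort3(int& a, int& b, int& c);  // candidate under test (see sketch above)

// Exhaustively verify the candidate on all 3! = 6 orderings of three
// distinct values. Feasible only because the input size is fixed and tiny.
bool verify_sort3() {
    std::array<int, 3> v{1, 2, 3};
    do {
        auto t = v;                   // copy one permutation
        sort3(t[0], t[1], t[2]);      // run the candidate on it
        if (!std::is_sorted(t.begin(), t.end())) return false;
    } while (std::next_permutation(v.begin(), v.end()));
    return true;                      // every permutation came out sorted
}
```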

 

When asked to create a sorting algorithm, AlphaDev came up with one that was 70 per cent faster than the best existing method for lists of five items, and 1.7 per cent faster for lists of more than 250,000 items.

“We initially thought it made a mistake or there was a bug or something, but, as we analysed the program, we realised that AlphaDev had actually discovered something faster,” says Mankowitz.

Because sorting algorithms are used in a lot of common software, this improvement could have a significant cumulative effect globally. Such algorithms are so vital that they are written into libraries of code that anyone can use rather than writing their own. DeepMind has made its new algorithms open source and contributed them to the widely used libc++ standard library, meaning people can already use them today. This is the first change to this part of the sorting algorithm library in over a decade, says DeepMind.
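In practice, developers pick such library changes up for free: any C++ program that calls std::sort against a libc++ build containing the new routines benefits without any source changes. A minimal example of the call site, which stays identical either way:

```cpp
#include <algorithm>  // std::sort
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v{5, 3, 1, 4, 2};
    // With a libc++ that includes the AlphaDev-derived routines, the
    // small fixed-size sorts used inside std::sort are the improved
    // versions; the calling code does not change at all.
    std::sort(v.begin(), v.end());
    for (int x : v) std::printf("%d ", x);  // prints: 1 2 3 4 5
    std::printf("\n");
}
```

This is what gives a 1.7 per cent gain its weight: it compounds across every program, on every machine, that links the library.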

 

Mankowitz says that Moore's Law – the idea that the computing power of a single chip doubles at regular intervals – is coming to an end because miniaturisation is hitting immutable physical limits, but that AlphaDev might be able to help compensate by improving software efficiency.

“Today these algorithms are being pulled [run in software] we estimate trillions of times every day and [are] able to be used by millions of developers and companies all around the world,” says Mankowitz. “Optimising the code of fundamental functions that get pulled trillions of times a day hopefully will have big enough benefits to encourage people to attempt to do even more of these functions and to have that as one path to unblocking this bottleneck [of Moore’s Law slowing].”

Mark Lee at the University of Birmingham, UK, says AlphaDev is interesting and that even a 1.7 per cent speed boost is useful. But even if similar efficiencies are found in other common algorithms, he is sceptical that this approach can make up for the end of Moore's Law, because it won't be able to make the same gains in more esoteric software.

“I think they’re going to be able to do that to things like sorting algorithms, and standard kind of compute algorithms. But it’s not going to be applied to… complex bits of code,” he says. “I think increases in hardware are still going to outstrip it.”

 

Although, based on what we're already seeing AI achieve, he's likely wrong about that, so we'll see.

Journal reference

Nature, DOI: 10.1038/s41586-023-06004-9
