
Alibaba slashes AI inference GPU needs by 82 percent in world first

WHY THIS MATTERS IN BRIEF

Blocked from buying the latest Nvidia GPUs, Chinese companies are becoming very efficient at training and serving their own LLMs to compete with the West.

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

As US companies line up to spend over $5.7 trillion on new data center build-outs for their Artificial Intelligence (AI) ambitions, and after Chinese researchers became the first to train AI models across distributed GPU clusters, Alibaba Cloud claims its new Aegaeon pooling system reduced the number of Nvidia GPUs required to serve large language models by 82% during a multi-month beta test inside its Model Studio marketplace. The result, published in a peer-reviewed paper presented at the 2025 ACM Symposium on Operating Systems Principles (SOSP) in Seoul, suggests that cloud providers may be able to extract significantly more inference capacity from existing silicon, especially in constrained markets like China, where the supply of Nvidia’s latest H20s remains limited.

 

Unlike training-time breakthroughs that chase model quality or speed, Aegaeon is an inference-time scheduler designed to maximize GPU utilization across many models with bursty or unpredictable demand. Instead of pinning one accelerator to one model, Aegaeon virtualizes GPU access at the token level, allowing it to schedule tiny slices of work across a shared pool. This means one H20 could serve several different models simultaneously, with system-wide “goodput” — a measure of effective output — rising by as much as nine times compared to older serverless systems.
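
The paper is the definitive reference, but the core idea is easy to sketch. The simplified Python below is an illustration rather than Alibaba's code; names like TokenLevelScheduler and GpuSlot are invented for the example. A shared pool of GPUs hands out token-sized slices of decode work to whichever models currently have pending requests, loading a model onto a GPU only when it is needed.

```python
import collections
from dataclasses import dataclass


@dataclass
class Request:
    model: str             # which LLM this request targets
    remaining_tokens: int  # tokens still to be generated


@dataclass
class GpuSlot:
    gpu_id: int
    resident_model: str | None = None  # model weights currently loaded


class TokenLevelScheduler:
    """Hypothetical sketch of token-level GPU pooling.

    Instead of pinning one GPU to one model, each scheduling tick hands a
    single token's worth of decode work to a GPU in the shared pool,
    swapping which model is resident on that GPU only when needed.
    """

    def __init__(self, num_gpus: int):
        self.slots = [GpuSlot(i) for i in range(num_gpus)]
        self.queues: dict[str, collections.deque] = collections.defaultdict(collections.deque)

    def submit(self, req: Request) -> None:
        self.queues[req.model].append(req)

    def step(self) -> None:
        """One scheduling tick: give each GPU one token of work."""
        pending_models = [m for m, q in self.queues.items() if q]
        for slot, model in zip(self.slots, pending_models):
            if slot.resident_model != model:
                # In a real system this swap is the expensive part that
                # token-level scheduling has to keep cheap enough to pay off.
                slot.resident_model = model
            req = self.queues[model][0]
            req.remaining_tokens -= 1  # stand-in for decoding one token
            if req.remaining_tokens == 0:
                self.queues[model].popleft()


# Usage: two GPUs serving bursty traffic for three different models.
sched = TokenLevelScheduler(num_gpus=2)
sched.submit(Request("model-a", remaining_tokens=3))
sched.submit(Request("model-b", remaining_tokens=2))
sched.submit(Request("model-c", remaining_tokens=1))
while any(sched.queues.values()):
    sched.step()
```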

The system was tested in production over several months, according to the paper, which lists authors from both Peking University and Alibaba’s infrastructure division, including CTO Jingren Zhou. During that window, the number of GPUs needed to support dozens of different LLMs — ranging in size up to 72 billion parameters — fell from 1,192 to just 213.
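
For anyone checking the arithmetic behind the headline figure, the 82 percent reduction follows directly from those two GPU counts:

```python
# The GPU counts reported for the beta test.
gpus_before, gpus_after = 1192, 213
reduction = 1 - gpus_after / gpus_before
print(f"Reduction: {reduction:.1%}")  # Reduction: 82.1%
```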

While the paper does not break down which models contributed most to the savings, reporting by the South China Morning Post says the tests were conducted using Nvidia’s H20, one of the few accelerators still legally available to Chinese buyers under current US export controls.

 

Alibaba says the gains came from two main techniques: packing multiple models per GPU, and using a token-level autoscaler to dynamically allocate compute as output is generated rather than reserving resources at the request level. In benchmarks, Aegaeon beat the goodput of ServerlessLLM and MuxServe by margins ranging from 1.5 times to 9 times.
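
Goodput here means useful output rather than raw throughput. As a rough illustration of how such a metric can be read, and not the paper's exact definition, it can be treated as requests completed within their latency target per second:

```python
from dataclasses import dataclass


@dataclass
class Completed:
    latency_s: float  # observed end-to-end latency
    slo_s: float      # latency target for this request


def goodput(completed: list[Completed], window_s: float) -> float:
    """Requests finished within their latency target, per second.

    A simplified stand-in for 'effective output': work that misses its
    latency target counts for nothing, so packing more models onto a GPU
    only helps if the extra sharing does not blow the targets.
    """
    within_slo = sum(1 for c in completed if c.latency_s <= c.slo_s)
    return within_slo / window_s


# Made-up numbers purely to show how a 9x goodput gap would be read.
baseline = goodput([Completed(0.8, 1.0), Completed(1.4, 1.0)], window_s=10)
pooled = goodput([Completed(0.9, 1.0)] * 9, window_s=10)
print(f"goodput gain: {pooled / baseline:.1f}x")  # goodput gain: 9.0x
```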

Whether those savings translate outside Alibaba’s stack remains to be seen. Alibaba Cloud’s paper does not specify the exact network fabric used in the beta test, but the company offers its own eRDMA (elastic RDMA) network and has a record of building highly integrated GPU serving stacks, suggesting the results may depend on an optimized, vertically integrated environment.

Regardless, the result is likely to attract interest from other hyperscalers looking to stretch scarce accelerator fleets as inference demand continues to spike.
