Major Chinese Generative AI training breakthrough runs across distributed data centers

WHY THIS MATTERS IN BRIEF

Hammered by US GPU export sanctions, Chinese companies have, in a major world first, managed to train new Generative AI models across distributed data centers and GPU clusters.

 


As China continues to see many of its largest companies blacklisted and subjected to technology export restrictions, such as the ban on exporting Nvidia’s top-of-the-line H100 and B200 GPUs, which are used to train the latest Artificial Intelligence (AI) models, the country has been forced to find new ways to train its AI models in order to stay competitive with the West. And its companies are coming up with quite a number of groundbreaking new methods to do more with less, or, in Chinese terminology, to “build world class [LLM] models fast, reliably, and cheaply.”

 


 

Recently an industry analyst revealed that China has trained a single Generative AI (GAI) model across multiple data centers, a massive feat considering the complexity of using different GPUs in a single data center, let alone servers in multiple geographic locations. Patrick Moorhead, Chief Analyst at Moor Insights & Strategy, said on X that China was the first country to manage this achievement, and that he discovered it during a conversation about a presumably unrelated NDA meeting.
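The report gives no technical details, but stitching one training run across sites generally means synchronous data parallelism, with gradients exchanged over the links between locations. Purely as a hypothetical illustration of that baseline, and not a description of the Chinese system, here is a minimal PyTorch DistributedDataParallel sketch (the model, loss, and torchrun launcher are all stand-ins):

```python
# A minimal, hypothetical sketch of synchronous multi-node data-parallel
# training with PyTorch. It is NOT the (unpublished) Chinese system: real
# cross-datacenter runs also need tricks such as gradient compression or
# hierarchical all-reduce to hide wide-area network latency.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT; the
    # "gloo" backend runs over plain TCP, so ranks can sit at different sites.
    dist.init_process_group(backend="gloo")
    model = DDP(torch.nn.Linear(1024, 1024))  # toy stand-in for an LLM
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for step in range(10):
        x = torch.randn(8, 1024)       # each rank loads its own data shard
        loss = model(x).pow(2).mean()  # dummy loss
        loss.backward()                # DDP all-reduces gradients here
        opt.step()
        opt.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with something like torchrun --nnodes=2 --nproc_per_node=1 train.py at each site, every optimiser step sees gradients averaged across all participating machines, and it is exactly that averaging step which becomes the bottleneck once the machines sit in different data centers.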

 


 

This technique of training GAIs across different locations and architectures is essential for China to keep its AI dreams of world dominance moving forward, especially as American sanctions have stopped it from acquiring the latest, most powerful chips to drive its research and development. Since Nvidia does not want to lose the Chinese market, it created the less powerful H20 AI chips, which fall within Washington’s restrictive performance parameters. However, there are rumours that even these down-tuned chips might soon be banned, highlighting the uncertainty Chinese tech companies face in the current political climate.

 


 

Because of this uncertainty, Chinese researchers have been working on melding GPUs from different brands into a single training cluster, as well as developing their own competitive chips. By doing so, institutions can combine their limited stocks of sanctioned high-end, high-performance chips, like the Nvidia A100, with less powerful but readily available GPUs, like Huawei’s Ascend 910B or the aforementioned Nvidia H20. This technique could help them combat the high-end GPU shortage within China, although it has historically come with large drops in training efficiency.
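Neither the researchers nor the analysts have disclosed how such mixed fleets are balanced, but one standard way to limit that efficiency loss is to size each device’s share of the global batch to its measured throughput, so a slower Ascend 910B doesn’t leave an A100 idle at every synchronisation point. A toy sketch, with entirely assumed device names and throughput figures:

```python
# Hypothetical sketch: throughput-proportional batch sharding for a mixed
# GPU fleet. The device names and samples/sec figures below are assumptions
# for illustration, not measured values from any real cluster.
def shard_batch(global_batch: int, throughputs: dict[str, float]) -> dict[str, int]:
    """Split a global batch across devices in proportion to their speed."""
    total = sum(throughputs.values())
    shares = {dev: int(global_batch * tput / total)
              for dev, tput in throughputs.items()}
    # Hand any rounding remainder to the fastest device.
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += global_batch - sum(shares.values())
    return shares

# e.g. a cluster mixing scarce A100s with Ascend 910Bs and H20s
fleet = {"A100": 2000.0, "Ascend_910B": 1200.0, "H20": 900.0}
print(shard_batch(4096, fleet))
# {'A100': 1999, 'Ascend_910B': 1198, 'H20': 899}
```

Sizing micro-batches this way lets each device finish its share of a step at roughly the same time, so a synchronous all-reduce is no longer gated by the slowest chip; more sophisticated schemes also place pipeline stages according to device capability, though no details of the Chinese clusters have been published.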

However, it seems that China has found ways to solve this problem, as the news of a single GAI model trained across multiple data centers suggests. Although we don’t have any information on this model yet, it shows the lengths Chinese researchers will go to in order to keep driving China’s AI ambitions forward. As Huawei has said, China will find ways to keep its AI development moving despite American sanctions. After all, necessity is the mother of invention.
