
World’s AI DRAM crisis could be solved with a fiber loop that spans the world

WHY THIS MATTERS IN BRIEF

DRAM and memory prices are through the roof, pushing the cost of every technology and device sky high. But there's a solution.

 

Matthew Griffin is the World’s #1 Futurist Keynote Speaker and Global Advisor for the G7 and Fortune 500, specializing in exponential disruption across 100 countries.

 


 

Anybody can create an X account and speak their mind, but not all minds are worth listening to. However, as the world faces a DRAM and memory crisis thanks in no small part to the multi-trillion dollar commitments to build vast new AI data centers, when John Carmack tweets, folks tend to listen.

His latest musings pertain to using a long fiber loop as an L2 cache of sorts, holding AI model weights with near-zero latency and gigantic bandwidth. Carmack came upon the idea after noting that single-mode fiber speeds have reached 256 Tb/s over a distance of 200 km. With some back-of-the-Doom-box math, he worked out that 32 GB of data are in the fiber cable itself at any one time.
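That back-of-the-envelope figure is easy to reproduce. Assuming light propagates through silica fiber at roughly c/1.5 (a standard approximation; the exact group velocity depends on the fiber), the amount of data in flight works out as:

```python
# Reproducing the in-flight-data estimate for a 256 Tb/s, 200 km fiber loop.
# Assumption: refractive index of silica fiber is approximately 1.5.
C = 299_792_458            # speed of light in vacuum, m/s
FIBER_INDEX = 1.5          # approximate refractive index of silica fiber
link_rate_bps = 256e12     # 256 Tb/s demonstrated single-mode rate
length_m = 200e3           # 200 km loop

propagation_s = length_m / (C / FIBER_INDEX)    # time for light to traverse the loop
bits_in_flight = link_rate_bps * propagation_s  # data "stored" in the fiber at once
gigabytes = bits_in_flight / 8 / 1e9

print(f"{propagation_s * 1e3:.2f} ms loop latency, ~{gigabytes:.0f} GB in flight")
```

The loop traversal time of about a millisecond is also the worst-case wait for any given byte to come back around, which is why sequential access patterns matter so much for this idea.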

 


 

Artificial Intelligence (AI) model weights can be accessed sequentially for inference, and almost sequentially for training. Carmack’s next logical step, then, is using the fiber loop as a data cache that keeps the AI accelerator constantly fed. Think of conventional RAM as just a buffer between SSDs and the processor, and ask how to improve or outright eliminate it.

The discussion spawned a substantial number of replies, many from people at high pay grades. Several pointed out that the concept is akin to delay-line memory, harking back to the middle of the last century, when mercury was the medium and sound waves were the data. Mercury’s mercurialness proved hard to work with, though, and Alan Turing himself proposed using a gin mix as a medium.
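The delay-line principle is the same whether the medium is mercury, gin, or glass: data exists only while in transit, and the bit emerging at the read end must be regenerated and re-injected at the write end to persist. A toy model makes the recirculation explicit:

```python
from collections import deque

class DelayLine:
    """Toy delay-line memory: bits are 'stored' only while in transit.
    Each tick, the bit emerging at the read end is re-injected at the
    write end (regeneration), unless the controller overwrites it."""

    def __init__(self, capacity_bits):
        self.line = deque([0] * capacity_bits, maxlen=capacity_bits)

    def tick(self, write_bit=None):
        out = self.line.popleft()  # bit arrives at the receiver
        # Recirculate the same bit, or overwrite it with new data.
        self.line.append(out if write_bit is None else write_bit)
        return out

# Store the pattern 1,0,1,1 in a 4-bit line, then read it back twice.
dl = DelayLine(4)
for b in [1, 0, 1, 1]:
    dl.tick(write_bit=b)
first_pass = [dl.tick() for _ in range(4)]
second_pass = [dl.tick() for _ in range(4)]
print(first_pass, second_pass)  # pattern survives each circulation
```

Note the access pattern this forces: you get bits back in the order they were written, which is exactly why the fiber-loop idea fits sequentially read model weights rather than random-access workloads.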

The main real-world benefit of a ‘Fiber Line Memory System’ would actually be power savings: keeping DRAM refreshed takes a substantial amount of power, whereas keeping light circulating requires very little. Light is also predictable and easy to work with. Carmack notes that “fiber transmission may have a better growth trajectory than DRAM,” but even setting logistics aside, 200 km of fiber is still likely to be pretty pricey.

Some commenters noted other limitations beyond the sheer amount of fiber Carmack’s proposal would require. Optical amplifiers and DSPs could eat into the energy savings, and DRAM prices will have to come down at some point anyway. Some, like Elon Musk, even suggested vacuum as the medium (space lasers!), though the practicality of such a design is iffy.

 


 

Carmack’s tweet also alluded to a more practical approach: using existing flash memory chips, wiring enough of them directly to the accelerators, with careful attention to timing. That would require a standard interface agreed upon by flash and AI accelerator makers, but given the insane investment in AI, the prospect doesn’t seem far-fetched at all.
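Carmack didn’t specify an interface, but keeping an accelerator fed from slower sequential storage is usually done with prefetching and double buffering. A hypothetical sketch (all function names here are stand-ins, not a real API):

```python
import queue
import threading

def stream_weights(read_chunk, num_chunks, consume, depth=2):
    """Hypothetical sketch: prefetch weight chunks from slow storage
    (read_chunk) on a background thread while the accelerator
    (consume) works on earlier ones. depth=2 is classic double buffering."""
    buf = queue.Queue(maxsize=depth)

    def producer():
        for i in range(num_chunks):
            buf.put(read_chunk(i))  # blocks when `depth` chunks are queued
        buf.put(None)               # sentinel: end of stream

    threading.Thread(target=producer, daemon=True).start()
    while (chunk := buf.get()) is not None:
        consume(chunk)

# Usage with stand-in reader and consumer:
out = []
stream_weights(read_chunk=lambda i: f"chunk{i}",
               num_chunks=4,
               consume=out.append)
print(out)
```

As long as the flash can sustain the sequential read rate of one chunk per compute step, the accelerator never stalls, which is the whole point of wiring the storage directly to it.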

Variations on that idea have actually been explored by several groups of researchers. Approaches include Behemoth, FlashGNN, and FlashNeuron, all from 2021, and more recently, the Augmented Memory Grid. It’s not hard to imagine that one or several of these will be put into practice, assuming they aren’t already.
