WHY THIS MATTERS IN BRIEF
The world’s thirst for internet speed is only going one way, and now Japanese scientists have just smashed the internet speed record.
Ever wonder why the internet, as a whole, didn’t break when Covid-19 hit? In a matter of weeks, online habits changed dramatically. Kids went to school on Zoom; adults followed suit at work. Desperate to escape, people binged on Netflix. Doomscrolling is now a word in the dictionary. All this happened virtually overnight.
Demand for internet bandwidth went through the roof – as much as 60 percent by last May, according to the OECD – and yet the internet seemed… mostly fine. Sure, there were people behind the scenes managing the traffic increases, as well as new forms of Artificial Intelligence (AI) built specially to help handle the onslaught, but generally, the infrastructure needed to absorb the surge was already in place. There were no headlines of mass outages or server farms catching fire. The reason? Good planning, in some cases years in advance.
The basic assumption, and it’s proven to be a good one, is that more people will want to send more stuff over the internet tomorrow, or in ten years. We may not know how many people or what stuff exactly, but “growth” has generally been a good guess.
To meet tomorrow’s demands, we have to start building a more capable internet today. And by we, I mean researchers in labs around the world. So it is that each year we’re duly notified of a new eye-watering, why-would-we-need-that speed record.
A short history of the internet speed record
In May last year a team hit 44.2 terabits per second, and then in August a University College London (UCL) team set the top mark at 178 terabits per second. Now, a year later, researchers at Japan’s National Institute of Information and Communications Technology (NICT) say they’ve nearly doubled the record with speeds of 319 terabits per second.
It’s worth putting that into perspective for a moment. When the UCL team announced their results last year, they said their tech could download Netflix’s entire catalog in a second. The NICT team has nearly doubled that Netflix-libraries-per-second figure.
And here’s how they did it. The fastest internet signals are made up of data converted into pulses of light and sent flying down bundles of hair-like glass strands called optical fibers. Fiber optic cables enable far faster data transmission, with less loss, than traditional copper wires. Millions of miles of fiber now crisscross continents and traverse oceans. This is the web in its most literal sense.
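At its simplest, the data rides the fiber as light switched on and off. The sketch below is a toy illustration of that idea only – real long-haul systems use far richer modulation formats than simple on-off pulses:

```python
# Toy on-off keying: each bit of data becomes a light pulse (1 = pulse on,
# 0 = pulse off). Purely illustrative -- production systems modulate the
# light's phase and amplitude in far more sophisticated ways.
def to_pulses(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join("▮" if bit == "1" else "·" for bit in bits)

print(to_pulses(b"Hi"))  # ·▮··▮····▮▮·▮··▮
```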
Even with all that infrastructure in place, researchers are always trying to figure out how to jam more and more data into the same basic design – that is, keep things more or less compatible while improving the number of Netflix libraries per second we can download. And they can do that in a few ways.
First, light has wave-like properties. Like a wave on water, you can think of a light wave as a series of peaks and troughs moving through space. The distance between peaks (or troughs) is its wavelength. In visible light, shorter wavelengths correspond to bluer colors and longer wavelengths to redder colors. The internet runs on pulses of infrared light, with wavelengths a bit longer than those in the visible band.
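To put numbers on that, wavelength and frequency are tied together by c = λf. Here’s a minimal sketch; the 1550 nm figure is a typical long-haul telecom wavelength, an assumption for illustration rather than a number from the study:

```python
# Convert an optical wavelength to its carrier frequency via c = lambda * f.
C = 299_792_458  # speed of light in m/s

def wavelength_to_thz(wavelength_nm: float) -> float:
    """Return the optical frequency in terahertz."""
    return C / (wavelength_nm * 1e-9) / 1e12

# 1550 nm is a typical long-haul telecom wavelength (an assumption here);
# visible red light sits around 700 nm for comparison.
print(f"1550 nm -> {wavelength_to_thz(1550):.1f} THz")  # ~193.4 THz
print(f" 700 nm -> {wavelength_to_thz(700):.1f} THz")   # ~428.3 THz
```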
We can code information in different wavelengths – like assigning a different “color” of light for each packet of information – and transmit them simultaneously. Expand the number of wavelengths available and you increase the amount of data you can send at the same time. This is called wavelength division multiplexing.
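As a rough sketch of how those “colors” are organized: dense WDM systems place channels at evenly spaced optical frequencies. The 193.1 THz anchor and 100 GHz spacing below follow a common ITU convention – the study’s actual channel plan isn’t described here, so treat these as illustrative values:

```python
# Sketch of a dense WDM channel grid: each channel sits at its own evenly
# spaced optical frequency and carries an independent data stream.
# The 193.1 THz anchor and 100 GHz spacing follow the common ITU DWDM
# convention -- illustrative values, not the study's actual channel plan.
C = 299_792_458  # speed of light, m/s

ANCHOR_THZ = 193.1  # ITU grid reference frequency
SPACING_GHZ = 100   # assumed channel spacing

for n in range(4):  # first few channels of the grid
    f_thz = ANCHOR_THZ + n * SPACING_GHZ / 1000
    wavelength_nm = C / (f_thz * 1e12) * 1e9
    print(f"channel {n}: {f_thz:.1f} THz ~ {wavelength_nm:.2f} nm")
```

Add more grid slots – or whole new bands of wavelengths, as below – and the fiber’s aggregate rate grows with the channel count.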
That’s the first thing the team did: They expanded the selection of “colors” available by adding a whole band of wavelengths, the S-band, which had previously been demonstrated only for short-range communication. In the study, they showed reliable transmission, S-band included, over a distance of 3,001 kilometers, or nearly 1,900 miles.
The trick to going the distance was two-fold. Fiber cables need amplifiers every so often to propagate the signal over long distances. To accommodate the S-band, the team doped two amplifiers – that is, introduced new substances to change the material’s properties – one with the element erbium, the other with thulium. These, combined with a technique called Raman amplification, which shoots a laser backwards down the line to boost signal strength along its length, kept the signals going over the long haul.
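A back-of-the-envelope link budget shows why all that amplification is non-negotiable. The 0.2 dB/km loss and 70 km amplifier spacing below are typical textbook figures for standard fiber, not values reported by NICT:

```python
# Rough link budget: fiber attenuates light by a fixed number of decibels
# per kilometer, so amplifiers must restore the signal periodically.
# Loss and spacing are typical textbook values, not the study's numbers.
LOSS_DB_PER_KM = 0.2  # typical attenuation of standard single-mode fiber
SPAN_KM = 70          # assumed distance between amplifier stages
DISTANCE_KM = 3001    # the distance demonstrated in the study

total_loss_db = LOSS_DB_PER_KM * DISTANCE_KM
stages = DISTANCE_KM // SPAN_KM
print(f"Total fiber loss over {DISTANCE_KM} km: {total_loss_db:.0f} dB")
print(f"Amplifier stages at {SPAN_KM} km spacing: ~{stages}")
# ~600 dB is a power ratio of 10**60 -- without periodic amplification,
# the light would vanish into the noise long before reaching the far end.
```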
While standard long-distance fiber contains only a single core, the cable here has four cores for increased data flow. The team split the data into 552 channels, or “colors,” with each channel carrying an average of 580 gigabits per second across the four cores.
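Those two figures multiply out to the headline number:

```python
# Sanity check on the record: total throughput is channel count times the
# average per-channel rate (both figures as reported above).
CHANNELS = 552
AVG_RATE_GBPS = 580

total_tbps = CHANNELS * AVG_RATE_GBPS / 1000
print(f"{CHANNELS} channels x {AVG_RATE_GBPS} Gbit/s = {total_tbps:.0f} Tbit/s")
# -> ~320 Tbit/s, consistent with the reported 319 terabits per second.
```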
Crucially, though, the total diameter of the cable is the same as today’s widely used single-core cabling, so it could be plugged into existing infrastructure.
Next steps include further increasing the sheer amount of data their system can transmit and lengthening its range to trans-oceanic distances.
This kind of research is only a first step to experimentally show what’s possible, as opposed to a final step showing what’s practical. Notably, while the NICT team’s four-core cable would fit into existing infrastructure, adopting it would still mean replacing the cables already in the ground.
The prior UCL work, which added S-band wavelengths over shorter distances, focused on maximizing the capacity of existing fiber cables by updating just the transmitters, amplifiers, and receivers. Indeed, that record was set on fiber that first hit the market in 2007. In terms of cost, this strategy would be a good first step.
Eventually, though, old fiber will need replacing as it approaches its limits. Which is when a more complete system, like the one NICT is investigating, would come in. But don’t expect hundred-terabit speeds to enable your gaming habits anytime soon. These kinds of speeds are for high-capacity connections between networks across countries, continents, and oceans, as opposed to the last few feet to your router.
Hopefully, they’ll ensure the internet can handle whatever we throw at it in the future – new data-hungry applications we’re only beginning to glimpse or can’t yet imagine, three billion new users, or both at the same time.