
WHY THIS MATTERS IN BRIEF

To maintain fusion, superheated plasma has to be held in magnetic confinement, and AI is proving to be very good at managing that.

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

If I asked you to name a few things that Artificial Intelligence (AI) can do, you might mention that it can break its own reality, evolve, imagine, spawn child AIs, and create synthetic genomes for future designer humans. But you probably didn’t say control superheated plasma. So far the closest AI has come to helping the world figure out fusion is concluding that cold fusion, in other words fusion at room temperature, is impossible – until it turns out it might not be … but that’s another story.

 


 

Now it turns out that DeepMind’s streak of applying its world-class AI to hard science problems continues. In collaboration with the Swiss Plasma Center at EPFL in Switzerland, the UK-based AI firm has trained a deep reinforcement learning algorithm to control the superheated soup of matter inside a nuclear fusion reactor. The breakthrough, published in the journal Nature, could help physicists better understand how fusion works, and potentially speed up the arrival of an unlimited source of clean energy.

 


 

“This is one of the most challenging applications of reinforcement learning to a real-world system,” says Martin Riedmiller, a researcher at DeepMind.

In nuclear fusion, the atomic nuclei of hydrogen atoms are forced together to form heavier atoms, like helium. This produces a lot of energy relative to a tiny amount of fuel, making it a very efficient source of power. It is far cleaner and safer than fossil fuels or conventional nuclear power, which is produced by fission – splitting nuclei apart. It is also the process that powers stars.

 


 

Controlling nuclear fusion on Earth is hard, however. The problem is that atomic nuclei repel each other. Smashing them together inside a reactor can only be done at extremely high temperatures, often reaching hundreds of millions of degrees – hotter than the center of the sun. At these temperatures, matter is neither solid, liquid, nor gas. It enters a fourth state, known as plasma: a roiling, superheated soup of particles.

 

Courtesy: EPFL

 

The task is to hold the plasma inside a reactor together long enough to extract energy from it. Inside stars, plasma is held together by gravity. On Earth, researchers use a variety of tricks, including lasers and magnets so powerful they could lift an aircraft carrier off the ground. In a magnet-based reactor, known as a tokamak, the plasma is trapped inside an electromagnetic cage, forcing it to hold its shape and stopping it from touching the reactor walls, which would cool the plasma and damage the reactor.

 


 

Controlling the plasma requires constant monitoring and manipulation of the magnetic field. The team trained its reinforcement-learning algorithm to do this inside a simulation. Once it had learned how to control – and change – the shape of the plasma inside a virtual reactor, the researchers gave it control of the magnets in the variable-configuration tokamak (TCV), an experimental reactor in Lausanne. They found that the AI was able to control the real reactor without any additional fine-tuning. In total, the AI controlled the plasma for only two seconds, but that is as long as the TCV reactor can run before getting too hot; otherwise it could have been longer.
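The train-in-simulation, deploy-unchanged workflow described above can be illustrated with a toy control problem: tune a controller entirely against a simulator, then run it as-is on a slightly mismatched “real” plant. Everything here – the one-dimensional dynamics, the proportional controller, the grid search – is a hypothetical stand-in, not the actual TCV physics or DeepMind’s training procedure.

```python
import numpy as np

def rollout(gain, drift, steps=200):
    """Simulate a drifting 1-D 'plasma position' under proportional control,
    returning the accumulated squared tracking error (lower is better)."""
    x, cost = 1.0, 0.0
    for _ in range(steps):
        u = -gain * x                   # controller pushes back toward zero
        x = x + 0.1 * (drift * x + u)   # simple unstable open-loop dynamics
        cost += x * x
    return cost

# "Train" entirely in simulation: pick the gain with the lowest cost
# on the simulator's model of the dynamics.
sim_drift = 0.5
gains = np.linspace(0.5, 5.0, 50)
best_gain = min(gains, key=lambda g: rollout(g, sim_drift))

# Deploy unchanged on the "real" plant, whose drift differs slightly
# from the simulator's - no fine-tuning on the real system.
real_cost = rollout(best_gain, drift=0.6)
print(best_gain, real_cost)
```

The point of the toy is the shape of the workflow: all the trial and error happens against the model, and the resulting controller still behaves acceptably on dynamics it never saw during tuning.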

Ten thousand times a second, the trained neural network takes in 90 different measurements describing the shape and position of the plasma and adjusts the voltage in 19 magnets in response. This feedback loop is far faster than previous reinforcement-learning algorithms have had to deal with. To speed things up, the AI was split into two neural networks. A large network, called a critic, learned via trial and error how to control the reactor inside the simulation. The critic’s ability was then encoded in a smaller, faster network, called an actor, that runs on the reactor itself.
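As a rough sketch of that deployed feedback loop, the snippet below maps 90 measurements to 19 voltage commands through a small stand-in “actor” network. The layer sizes, the `TinyActor` class, and the function names are illustrative assumptions; DeepMind’s real actor network, its critic-based training, and its interface to the reactor hardware are far more involved.

```python
import numpy as np

N_MEASUREMENTS = 90   # plasma shape/position observations per tick (from the article)
N_MAGNETS = 19        # magnet-voltage commands per tick (from the article)

rng = np.random.default_rng(0)

class TinyActor:
    """Small two-layer MLP standing in for the fast, deployed actor network.
    Weights are random here; in the real system they would be distilled
    from the large critic trained in simulation."""
    def __init__(self, n_in, n_hidden, n_out):
        self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

    def act(self, obs):
        h = np.tanh(obs @ self.w1)
        return np.tanh(h @ self.w2)  # voltages squashed into [-1, 1]

actor = TinyActor(N_MEASUREMENTS, 64, N_MAGNETS)

def control_step(measurements):
    """One tick of the ~10 kHz loop: observe the plasma, command the magnets."""
    return actor.act(measurements)

obs = rng.normal(size=N_MEASUREMENTS)
voltages = control_step(obs)
print(voltages.shape)  # (19,)
```

The design reason for the two-network split is latency: the big critic can afford slow trial-and-error learning offline, while the small actor must produce a fresh set of 19 commands every 100 microseconds on the reactor itself.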

Source: DeepMind

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society, is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
