WHY THIS MATTERS IN BRIEF
First AI helped control aircraft avionics, then it took on missions, and it won’t be long before it replaces military pilots altogether.
Recently, Artificial Intelligence (AI) beat human top gun pilots in simulated air battles and dogfights, and last month, in a world first, the US Air Force successfully flew an AI copilot on a U-2 spy plane in California, marking the first time AI had ever been given control of an active US military system. The computerised U-2 copilot, affectionately named Artuµ, brought back memories of R2-D2 co-piloting a rebel X-Wing in Star Wars. That said, the odd spelling was actually a nod to something that isn’t from a galaxy far, far away: the µZero algorithm developed by the AI Jedis at DeepMind.
DeepMind’s commercial algorithm was designed to dominate games like chess and Go; the USAF retrained it to operate the spy plane’s radar in under five weeks. And after a million simulated missions, Artuµ wasn’t a typical Airman on its first flight with the 9th Reconnaissance Wing: it was the actual mission commander.
The results were so promising that the USAF has now added electronic warfare to Artuµ’s next mission, a role which means it’s very likely to see real-world missions very soon.
The era of algorithmic warfare has indeed begun, and as scary as that sounds it’s taking place on a much larger battlefield whose “weaponry,” of all things, hails from commercial technology and startup companies.
Whether played on boards or computers, games were perfect candidates for developing deep reinforcement learning, or the process by which AI intuits the rules, rewards, and penalties of unfamiliar systems through large-scale trial and error.
As a new startup in 2010, DeepMind cut its teeth training AI to play old-school video games without access to the underlying code. By 2013, its AI could beat humans at Pong, and by 2020, all humans at any Atari game. But DeepMind’s crowning achievement came in 2016 when AlphaGo, µZero’s great-great-grand-algorithm, dethroned the reigning world Go champion and ended human pre-eminence in strategy games – a true “Sputnik” moment for gaming, computer science, and military futurists alike.
Gamifying warfare for computer applications is nothing new. Franchises like Call of Duty, for example, make a pun-intended killing at it. U-2 reconnaissance operations are no different: finding enemy targets is the reward; missing targets, or worse, being shot down, is the penalty; and the laws of physics are essentially the rules. But to grasp this high-stakes “game” Artuµ had to learn everything the hard way, including even basic tactics no human would try – like expecting enemy air defenses also to shoot down enemy aircraft. They don’t. Lesson learned. Repeat.
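That reward-penalty-rules framing can be made concrete with a toy sketch. Nothing about Artuµ’s actual training setup is public, so the sector layout, reward values, and the learning rule below (a simple epsilon-greedy bandit, the most basic form of reinforcement learning by trial and error) are all illustrative assumptions:

```python
import random

# Toy stand-in for the gamified U-2 "game": five search sectors, one hiding
# a target (the reward) and one covered by enemy air defences (the penalty).
# All names and numbers here are invented for illustration.
SECTORS = 5
TARGET, SAM_SITE = 3, 1

def reward(sector):
    if sector == TARGET:
        return 10    # found the target: the mission reward
    if sector == SAM_SITE:
        return -50   # flew into air defences: the penalty
    return -1        # time/fuel cost of a fruitless search

def train(episodes=2000, epsilon=0.1, alpha=0.5, seed=0):
    rng = random.Random(seed)
    q = [0.0] * SECTORS  # one learned value per "search this sector" action
    for _ in range(episodes):
        # Epsilon-greedy trial and error: mostly exploit the best-known
        # sector, occasionally explore a random one.
        if rng.random() < epsilon:
            a = rng.randrange(SECTORS)
        else:
            a = q.index(max(q))
        q[a] += alpha * (reward(a) - q[a])  # nudge the estimate toward what happened
    return q

q = train()
print(q.index(max(q)))  # prints 3: after trial and error, the TARGET sector wins
```

Early on the agent blunders straight into the simulated air defences, collects the -50, and learns never to do that again; that is the “lesson learned, repeat” loop in miniature.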
But playing against a simulation takes Artuµ’s lessons only so far. To achieve beyond-human capabilities, just as the µZero family did for strategy games, Artuµ played against itself. However, unlike playing the opposite side of a board game, an opposing military operation, like denying reconnaissance during a simulated missile strike, needs unique algorithm training for its unique reward-penalty scheme. Consequently, the USAF had no “enemy Artuµ” to train against for Artuµ’s first flight.
To operationalise electronic warfare, which Winston Churchill dubbed the “Wizard War,” the USAF team gamified it by creating an opponent for Artuµ, called Cetuµ, in their U-2 FedLab, then ran millions of self-training simulations to teach them both sides of electronic warfare: the good side, sensing, and the dark side, jamming.
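The Artuµ-versus-Cetuµ idea can be sketched as a zero-sum self-play loop. The FedLab simulations themselves are not public, so everything below is an assumption for illustration: a handful of radio channels, a sensor that wants a clear channel, a jammer that wants to block it, and a simple multiplicative-weights learning rule standing in for whatever the real agents use:

```python
import random

# Toy self-play sketch: a sensor ("Artuµ") and a jammer ("Cetuµ") fight over
# a few radio channels in a zero-sum game. Channel count, payoffs, and the
# learning rule are all illustrative assumptions, not the real system.
CHANNELS = 4

def sample_proportional(weights, rng):
    # Pick an index with probability proportional to its weight.
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def self_play(rounds=5000, lr=0.05, seed=1):
    rng = random.Random(seed)
    sensor = [1.0] * CHANNELS  # wants a channel the jammer isn't on
    jammer = [1.0] * CHANNELS  # wants the channel the sensor picks
    for _ in range(rounds):
        s = sample_proportional(sensor, rng)
        j = sample_proportional(jammer, rng)
        payoff = -1.0 if s == j else 1.0  # sensor jammed vs sensing in the clear
        # Multiplicative-weights updates: each side reinforces what worked.
        # The zero-sum payoff means every gain for one agent is pressure on
        # the other, which is what makes self-play a moving target.
        sensor[s] *= (1 + lr * payoff)
        jammer[j] *= (1 - lr * payoff)
    return sensor, jammer

sensor, jammer = self_play()
```

The point of the design is the same one the article makes: with no human-built “enemy Artuµ” available, the opponent is learned into existence, and both agents improve by exploiting each other’s weaknesses.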
Now, given modern military reliance on the electromagnetic spectrum, that’s a big step: not only could it change the balance of power, but it will also likely one day lead to the realisation of Churchill’s Wizard War.