
For the first time, the UN believes a hunter killer drone autonomously killed humans in battle


WHY THIS MATTERS IN BRIEF

We already have the technology to create fully autonomous weapons systems that can hunt and kill targets, as well as fully autonomous kill chains, but this is the first example of one being used. Possibly.

 


It’s the sort of thing that can almost pass for background noise these days – recently a number of publications tentatively declared, based on a UN report on the Libyan civil war, that hunter killer robots, like the ones the US Marines want to “lead them into battle,” may have hunted down humans autonomously for the first time. As one headline put it: “The Age of Autonomous Killer Robots May Already Be Here.” But is it? As you might guess, it’s a hard question to answer.

 


The new coverage has sparked a debate among experts that goes to the heart of the problems we face in confronting the rise of autonomous robots in war. Some said the stories were wrongheaded and sensational, while others suggested there was a nugget of truth to the discussion. Diving into the topic doesn’t reveal that the world quietly experienced the opening salvos of the Terminator timeline in 2020. But it does point to a more prosaic and perhaps much more depressing truth: that no one can agree on what a killer robot is, and that if we wait for consensus before acting, their presence in war will have long since been normalized.

 

A rousing advert for the autonomous HK Drone …

 

It’s cheery stuff, isn’t it? It’ll take your mind off the global pandemic at least. Let’s jump in …

The source of all these stories is a 548-page report from the United Nations Security Council that details the tail end of the Second Libyan Civil War, covering the period from October 2019 to January 2021. The report was published in March, and you can read it in full here. To save you time: it is an extremely thorough account of an extremely complex conflict, detailing the troop movements, weapon transfers, raids, and skirmishes that took place among the war’s many factions, both foreign and domestic.

 


The paragraph we’re interested in, though, describes an offensive near Tripoli in March 2020, in which forces supporting the UN-backed Government of National Accord (GNA) routed troops loyal to Khalifa Haftar’s Libyan National Army (referred to in the report as the Haftar Affiliated Forces, or HAF). Here’s the relevant passage in full:

“Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘Fire, Forget and Find’ capability.”

The Kargu-2 system that’s mentioned here is a quadcopter built in Turkey: it’s essentially a consumer drone that’s used to dive-bomb targets. It can be manually operated or steer itself using machine vision. A second paragraph in the report notes that retreating forces were “subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems” and that the HAF “suffered significant casualties” as a result.

 


But that’s it. That’s all we have. What the report doesn’t say – at least not outright – is that human beings were killed by autonomous robots acting without human supervision. It says humans and vehicles were attacked by a mix of drones, quadcopters, and “loitering munitions” (we’ll get to those later), and that “the quadcopters had been programmed to work offline.” This latter statement is key, because if the systems were in fact working offline then they would indeed have been operating autonomously.

However – and here’s the rub – whether the attacks took place without connectivity is unclear.

These two paragraphs made their way into the mainstream press via a New Scientist story with the headline: “Drones may have attacked humans fully autonomously for the first time.”

New Scientist is careful to caveat that the military drones might have acted autonomously and that humans might have been killed, but later reports lost this nuance. “Autonomous drone attacked soldiers in Libya all on its own,” read one headline. “For the First Time, Drones Autonomously Attacked Humans,” said another.

 


Let’s be clear: the UN report does not say for certain that drones autonomously attacked humans in Libya last year, though it certainly suggests this could have happened. The problem is that even if it did happen, for many experts, it’s just not news.

Some experts took issue with these stories because they followed the UN report’s wording, which doesn’t distinguish clearly between loitering munitions and lethal autonomous weapons systems, or LAWS – policy jargon for killer robots.

Loitering munitions, for the uninitiated, are the weapon equivalent of seagulls at the beachfront. They hang around a specific area, float above the masses, and wait to strike their target — usually military hardware of one sort or another, though it’s not impossible that they could be used to target individuals.

The classic example is Israel’s IAI Harpy, which was developed in the 1980s to target anti-air defenses. The Harpy looks like a cross between a missile and a fixed-wing drone, and is fired from the ground into a target area where it can linger for up to nine hours. It scans for telltale radar emissions from anti-air systems and drops onto any it finds. The loitering aspect is crucial as troops will often turn these radars off, given they act like homing beacons.

 


“The thing is, how is this the first time of anything?” tweeted Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations. “Loitering munitions have been on the battlefield for a while – most notably in Nagorno-Karabakh. It seems to me that what’s new here isn’t the event, but that the UN report calls them lethal autonomous weapon systems.”

Jack McDonald, a lecturer at the department of war studies at King’s College London, says the distinction between the two terms is controversial and constitutes an unsolved problem in the world of arms regulation.

“There are people who call ‘loitering munitions’ ‘lethal autonomous weapon systems’ and people who just call them ‘loitering munitions,’” he says. “This is a huge, long-running thing. And it’s because the line between something being autonomous and being automated has shifted over the decades.”

 


So, is the Harpy a lethal autonomous weapons system? A killer robot? It depends on who you ask. IAI’s own website describes it as such, calling it “an autonomous weapon for all weather,” and the Harpy certainly fits a makeshift definition of LAWS as “machines that target combatants without human oversight.” But if this is your definition, then you’ve created a very broad church for killer robots. Indeed, under this definition a land mine is a killer robot, as it, too, autonomously targets combatants in war without human oversight.
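
To make that definitional blur concrete, here’s a toy sketch – purely illustrative, not any real system’s logic, and every name in it is hypothetical – showing why the broad definition can’t tell a land mine from a loitering munition: both reduce to a sense-decide-act loop that never consults a human.

```python
# Toy illustration only: no real system's logic; all names are hypothetical.
# Under the broad definition of LAWS - "machines that target combatants
# without human oversight" - both devices below qualify, because both are
# a sense-decide-act loop with no human in it.

def landmine_step(pressure_kg: float) -> bool:
    """'Automated': one hard-wired trigger condition, fixed at emplacement."""
    return pressure_kg > 50.0  # fires on anything heavy enough to trip it

def loitering_munition_step(detections: list[str]) -> bool:
    """'Autonomous': a sensor feed feeding the same yes/no decision."""
    return "radar_emitter" in detections  # fires on whatever the sensor flags

# Neither function asks a human before returning True.
for decision in (landmine_step(60.0), loitering_munition_step(["radar_emitter"])):
    print("engage" if decision else "hold")
```

Everything the arms-control debate actually turns on is what this sketch leaves out: how sophisticated the sensing is, how wide a class of targets the decision step accepts, and whether a human can intervene before “engage.”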

However, as Schwarz says: “With this news story having made the rounds, now is a great time to mobilize the international community toward awareness and action.”

But with the UN’s debate on the issue plagued by delays – some countries have literally been quibbling over who pays the expenses for meals at the meetings – and with others, such as China, Russia, the UK, and the US, outright saying they won’t sign up to a charter banning the development of autonomous weapons systems, I think the only thing we can guarantee is this: if this wasn’t the first case of an autonomous weapons system “hunting down and killing people,” then we won’t have to wait long until one actually does.
