
US proposes to regulate AI the same way it regulates weapons exports


WHY THIS MATTERS IN BRIEF

AI can be hugely beneficial, but in the wrong hands it can also be very dangerous, and the US is considering regulating it in the same way it regulates military hardware.


Artificial Intelligence (AI) has the potential to be one of the most impactful technologies ever created, if not the most impactful, and at the moment the US government, like almost every other government on the planet, has no idea how to regulate it properly. What the US does know is that it doesn’t want other countries using its own AI technology against it, especially in the event of war, as we continue to see the emergence of autonomous AI-powered weapons systems, from Chinese “fire and forget” cruise missiles to fully autonomous Russian nuclear submarines.


As a result, a new proposal published recently by the Department of Commerce lists a wide range of AI technologies that could potentially require a license to sell to certain countries, with categories of restricted “exports” as diverse as machine vision and natural language processing. As you’d expect, though, it also lists military-specific products such as adaptive camouflage and surveillance technology.

The small number of countries these regulations would target includes one of the biggest names in AI: China, which last year announced its ambition to be the world leader in AI by 2030. Donald Trump, who has placed tariffs on hundreds of billions of dollars of Chinese goods as part of a simmering trade war, has long railed against China’s alleged theft of US intellectual property, and the DOC’s proposals look like a warning shot across China’s bow.

“This is intended to be a shot across the bow, directed specifically at Beijing, in an attempt to flex their muscles on just how broad these restrictions could be,” says David Edelman, a former adviser to President Barack Obama who leads research on technology and public policy issues at MIT.

On two occasions this year, the White House has moved to stop China from receiving tech exports on national security grounds, and the US already regulates certain exports to China: products capable of military use must be licensed before they can be exported, as is the case with North Korea, Syria, and Iran. But since AI isn’t a device or a physical product, restricting how the technology flows out of the country could prove a difficult task, says Jack Clark, policy director at the non-profit OpenAI, one of the world’s leaders in AI development. He argues that AI, as a dual-use technology, can be used as either a weapon or a tool, and because it isn’t tethered to a specific physical device, any regulation must address how a broad technology could run on any computer.


“It’s like trying to restrict math,” Edelman says.

In addition, tech companies like Google and Microsoft regularly publish open-source AI software and tools on the internet, both to get more people using their paid services and to expand the reach of AI tools in general, and it’s still unclear whether open-source code would be, or even could be, treated as an export. Publicly available code was exempt, for example, when the US first started regulating the export of encryption technologies.

These are the kinds of hard questions that sensible regulation has to grapple with, Clark says: “I’m happy to see this because it’s a conversation that needs to be had. It’s going to be a difficult and frustrating process because it’s a complicated issue.”

Negotiations will be further complicated by the 30-day window offered for comments, which Edelman and Clark say is unusually short. And as for enforcement, well, how the US does that is anyone’s guess.
