Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures and deep futures consultancy working between 2020 and 2070, and an award-winning futurist and author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, GEMS, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
AI can be hugely beneficial, but in the wrong hands it can also be very dangerous, and the US is thinking about regulating it in the same way it regulates military hardware.
Artificial Intelligence (AI) technology has the capability to be one of the most impactful technologies ever, if not the most, and at the moment, like almost every other government on the planet, the US government has no idea how to properly regulate it. But what the US does know is that it doesn’t want other countries using its own AI technology against it, especially in the event of war, as we continue to see the emergence of autonomous AI-powered weapons systems, from Chinese “fire and forget” cruise missiles to fully autonomous Russian nuclear submarines.
As a result, a new proposal published recently by the Department of Commerce lists a wide range of AI technologies that could potentially require a license to sell to certain countries, and the categories of restricted “exports” are as diverse as machine vision and natural language processing. As you’d expect, though, it also lists military-specific products such as adaptive camouflage and surveillance technology.
The small number of countries these regulations would target includes one of the biggest names in AI – China, which last year announced its ambition to be the world leader in AI by 2030. Donald Trump, who has placed tariffs on hundreds of billions of dollars of Chinese goods as part of a simmering trade war, has long railed against China’s alleged theft of US intellectual property, and the DOC’s proposals look like a warning shot across China’s bow.
“This is intended to be a shot across the bow, directed specifically at Beijing, in an attempt to flex their muscles on just how broad these restrictions could be,” says David Edelman, a former adviser to president Barack Obama who leads research on technology and public policy issues at MIT.
On two occasions this year the White House has moved to stop China from receiving tech exports on national security grounds, and the US already regulates certain exports to China: products capable of military use must be licensed before they can be exported, as is the case with North Korea, Syria, and Iran. But since AI isn’t a device or a physical product, restricting how the technology flows out of the country could prove difficult, says Jack Clark, policy director at the non-profit OpenAI, one of the world leaders in the development of AI. He argues that AI, as a dual-use technology, can be used as both a weapon and a tool, and because it is not tethered to a specific physical device, any regulation must address how a broad technology could function on any computer.
“It’s like trying to restrict math,” Edelman says.
In addition, tech companies like Google and Microsoft regularly post open source AI software and tools on the internet, both to get more people using their paid services and to expand the reach of AI tools in general, and it’s still unclear whether open source code would be, or even could be, classed as an export. For example, publicly available code was exempt when the US first started regulating the export of encryption technologies.
Grappling with these kinds of hard questions is necessary for sensible regulation, Clark says: “I’m happy to see this because it’s a conversation that needs to be had. It’s going to be a difficult and frustrating process because it’s a complicated issue.”
Negotiations will be further complicated by the 30-day window offered for comments during the consultation, which Edelman and Clark both said is unusually short. And as for enforcing the rules – well, how the US does that is anyone’s guess.