Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, as well as the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Artificial Intelligence holds great promise in both the civilian and military realms, and as other countries weaponise it, the US doesn’t want to be left behind.
The Pentagon, which last year embedded the world’s best robo-hacker into its critical systems to protect them from cyber attacks, and which also recently announced a program to create an Artificial Intelligence (AI) capable of monitoring the whole world, has made public for the first time the outlines of its “master plan for speeding the injection of AI into military equipment.” It is a move the department hopes will close the gap with China, which is ramping up its efforts to become the world leader in AI by 2030 and increasingly embedding AI into everything from cruise missiles to submarines.
By declassifying key elements of a strategy it had adopted last summer, the US Department of Defense appeared to be trying to address two disparate lines of criticism: that it was not heedful enough of the risks of using AI in its weaponry, and that it was not aggressive enough in the face of rival nations’ efforts to embrace AI.
The 17-page strategy summary said that AI, a shorthand term for machine-driven learning and decision making, held out great promise for military applications, and that it “is expected to impact every corner of the Department, spanning operations, training, sustainment, force protection, recruiting, healthcare, and many others.”
It depicted the embrace of AI in solely positive terms, asserting that “with the application of AI to defense, we have an opportunity to improve support for and protection of US service members, safeguard our citizens, defend our allies and partners, and improve the affordability and speed of our operations.”
Stepping back from AI in the face of aggressive AI research efforts by potential rivals would have dire, even apocalyptic, consequences, it further warned. It would “result in legacy systems irrelevant to the defense of our people, eroding cohesion among allies and partners, reduced access to markets that will contribute to a decline in our prosperity and standard of living, and growing challenges to societies that have been built upon individual freedoms.”
The publication of the Pentagon strategy’s core concepts comes eight months after a Silicon Valley revolt against the military’s premier AI research program. After thousands of Google employees signed a petition protesting the company’s involvement in an effort known as Project Maven, meant to speed up the analysis of videos taken by a drone so that military personnel could more readily identify potential targets, Google announced on June 1 that it would back out of it.
But the release of the strategy makes clear that the Trump administration isn’t having second thoughts about the utility of AI. It says the focus of the Defense Department’s Joint Artificial Intelligence Center (JAIC), created last June, will be on “near-term execution and AI adoption.” And in a section describing image analysis, the document suggests there are some things machines can do better than humans can. It says that “AI can generate and help commanders explore new options so that they can select courses of action that best achieve mission outcomes, minimizing risks to both deployed forces and civilians.”
The JAIC is still adding staff, and its new director, Lt. Gen. Jack Shanahan, was confirmed by the Senate only two months ago. Shanahan’s last posting before taking over the JAIC was running Project Maven. While the Center’s own budget in 2019 was only $90 million, it is responsible for overseeing hundreds of AI programs, each costing more than $15 million, and total Defense Department spending on AI over the next five years has been projected at $1.7 billion.
The summary repeatedly states that the military has an ethical obligation to use AI conscientiously, by publicly discussing guidelines for its use and by ensuring that it’s employed only when safe. But that safety benchmark is not precisely defined in the unclassified summary, which instead reiterates an earlier, vague policy that the department will require “appropriate levels of human judgment over the use of force” by machines.
The strategy also calls for the development of new defense “principles” to guide how the military will use AI, mirroring what companies like Google have done in announcing a set of ethics for the use of its own technology. The Pentagon has said it will develop these principles through the Defense Innovation Board, an advisory group made up of outside technology experts, including some top executives from Silicon Valley, which will conduct meetings across the country as part of its outreach. The board is due to give the secretary of defense recommendations for principles this summer.
During his two years in office, former Secretary of Defense James Mattis repeatedly said that his main goal was to make the military “more lethal,” including through the use of AI. But groups like the Campaign to Stop Killer Robots have been working to promote the idea of an arms control ban for autonomous technologies in weapons and have been working to increase public support. The group sponsored a poll released in January that found 52 percent of Americans opposed the idea of armed weapons systems that could choose to kill.
Although the strategy summary describes other countries, particularly Russia and China, as investing heavily in AI and “eroding” the US technical advantage, some observers warn that the US is already behind.
“I think that both Russia and China are in a better position than we are. I think they’re ahead of us,” Senate Armed Services Committee Chairman James Inhofe, R-Okla., told reporters before the release of the strategy.
Meanwhile, China’s State Council released a report in 2017 calling for the country to become the global leader in AI by 2030. That plan includes broad applications of AI and the development of a domestic AI industry targeted to be worth $150 billion.
Despite his concern, Inhofe, who shapes defense spending through Congress’s annual defense policy bill, said that AI wasn’t his top priority. “There are other things that need to be done first,” he said.
The summary was released a day after President Trump announced the American AI Initiative, which focuses on broader commercial interest in AI, but neither of the two documents outlined any new proposed funding.