Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
The increasing weaponisation and automation of new, advanced technologies such as artificial intelligence, drones and robots has the UN worried, and now it wants a debate about banning them before it’s too late.
This week the UN announced that next year the more than 100 nations that are party to the international Convention on Certain Conventional Weapons (CCW) will formally debate banning nation states from developing and deploying so-called “killer robots”.
Personally, and splitting hairs here, I’d argue that the UN has its nomenclature wrong, and while it might be a moot point, I’d suggest it focus more of its attention on “autonomous killer platforms,” not just robots. After all, is a drone a robot? Is a missile? Is a ship? Is a submarine? Sitting where I do, and as readers of this site can attest, we’re seeing the militarisation of a broad range of systems and platforms, not just robots or “robotic” variants, and when you’re having a debate, how you frame it often affects the outcome.
At the recent UN CCW Fifth Review Conference in Geneva, the 123 nations that are party to the convention agreed to “formalise” their efforts to tackle the challenges and concerns associated with the increasing weaponisation and militarisation of artificial intelligence, drones and robots. Many of these systems, such as the US Navy’s USS Zumwalt and Russia’s latest nuclear-capable drone submarine, which the Pentagon recently found flexing its smarts off the East coast of the US, as well as a proliferation of other, smaller autonomous military platforms, can already seek out, select and attack targets without meaningful human control.
Arguably we are already at the point where we can take the human out of the “kill chain” decision-making loop. The technology, and the command and control systems, are already there, albeit not yet fully mature. And while military leaders such as Deputy US Defense Secretary Robert Work have gone on record to say that there will always be a human in the loop, we all know that if one nation removes that human from the loop, the others will follow, like dominoes.
The call to implement a worldwide ban on autonomous killer systems comes as experts warn that the time left to develop “meaningful systems” to control the technology is already running out.
“The international meeting in Geneva was an important step toward stemming the development of killer robots, but there is no time to lose,” said Steve Goose, arms director at Human Rights Watch, a co-founder of the Campaign to Stop Killer Robots. “Once these weapons exist, there will be no stopping them, and the time to act on a pre-emptive ban is now.”
And Goose isn’t the only one concerned about the path that some nation states are travelling down, with Bill Gates, Elon Musk and Stephen Hawking being just a few of the international luminaries who have also voiced their concerns.
The nations will come together again in Geneva in April or August 2017 to formally discuss the possibility of a wholesale ban on the development of these technologies, and for the first time China said in Geneva that it, too, sees a need for a new international instrument on lethal autonomous weapons systems.
“As artificial intelligence advances, the possibility that machines could independently select and fire on targets is fast approaching,” warned Bonnie Docherty, a director at Human Rights Watch.
In other words, act now before it’s too late.