Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures and deep futures consultancy working between 2020 and 2070, an award-winning futurist, and the author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, GEMS, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
No one wants war, and arguably no one should want autonomous war in which AIs and robots make the decisions, but for the fifth year in a row the UN has cancelled its scheduled debate on the subject over the funding of the meetings.
Share your opinion: is this an important debate to have, or are there bigger things to worry about?
You would think that the debate over how future warfare will be conducted, particularly when it comes to the development and use of the new “smart” Artificial Intelligence (AI) infused semi-autonomous and fully autonomous “robo” weapons platforms, like the ones I discuss on my blog, that are now proliferating around the world, would be important.
For those of you who would like a quick catch-up on today’s state of play: Russia has autonomous nuclear subs and smart hypersonic missiles, and China has smart cruise missiles. From a US perspective, the Commanding General of the US Marines has announced they’re going all in on hunter-killer drones, while the US Air Force is embracing drone swarms that are controlled and deployed from F-35s and F/A-18s, building F-16 drones and an unmanned hypersonic SR-72, and might turn the F-35 into the world’s largest autonomous drone. Meanwhile, the US Navy is standing up its first autonomous drone squadron out of Jacksonville, and it has already launched the world’s first autonomous mine hunters, semi-autonomous destroyers, and subs. The list goes on, and this is just a small fraction of the developments, but suffice to say the firepower in that short list alone is “significant.”
Today, every one of these systems that I know of “comfortingly” has what we call a human in the loop – an individual who makes the final “kill” decision – but one day, when one military complex decides to push the firmware update that takes him or her out of the loop, we’ll see the advent of a new form of warfare, one that sci-fi films have been predicting for decades: autonomous warfare.
As if that wasn’t scary enough, today there isn’t a single expert on the planet, by their own admission, whether at Baidu, Facebook, Google, Microsoft, Nvidia, or OpenAI, who really understands how their Deep Learning platforms, which will one day be embedded into and run these systems, “do what they do,” for example, spontaneously learn and create secret languages.
AI is still very much a black box that we haven’t decoded, though admittedly we one day might, and that’s despite a number of attempts, by the likes of MIT, to get AIs to explain their decisions, and Nvidia trying to peer into its AI’s brain. Furthermore, despite attempts from Google and pals, we still haven’t found a way to create a viable kill switch for rogue AIs, a problem that could be further compounded by the fact that AIs are designing and building new AIs, as well as starting to figure out how to code themselves. And all that’s just the tip of a gnarly iceberg that we could all spend days discussing and debating…
A future potentially full of black box, rogue, self-evolving, self-replicating AIs that we can’t turn off, plus autonomous weapons platforms, sounds fun, right? Anyway, back to the point. You’d think that discussing these platforms and their future role and place in warfare would be important. However, this week The Campaign to Stop Killer Robots, yes, that’s their name, and it’s actually apt under the circumstances, announced that they were “deeply disappointed” that the committee of the United Nations (UN) Convention on Conventional Weapons (CCW), a framework treaty that prohibits or restricts certain weapons, had cancelled a crucial week of formal discussions on fully autonomous weapons that, at the start of the year, the UN announced would take place this August.
In the five years since these platforms emerged there has been next to no debate about them or their impact, and, okay, fair enough, in my mind meetings get cancelled all the time, but what actually got my goat about this one was the reason it was cancelled. Can you guess?
Uh-uh, no that wasn’t it.
The UN is not debating the future role and development of autonomous weapons and their impact this calendar year “because of the failure of several states, most notably Brazil, to pay their assessed dues for the [CCW] convention’s meetings.”
In other words, Brazil and a few other countries owe the UN money for the meetings. Maybe they should come to London; I can hire them a meeting room on the cheap, and I could probably stretch my budget to include donuts. Call me. Anyway…
“The collective failure of countries to find a solution to their financial woes doesn’t mean they can stop addressing concerns over weapons that would select and attack targets without further human intervention,” said Mary Wareham of Human Rights Watch, coordinator of the Campaign to Stop Killer Robots. “If the CCW is unable to act, nations must find other ways to maintain the momentum toward a ban.”
“Countries that agree with the need to retain human control of weapons systems should move swiftly to adopt national policies and laws and to negotiate a new international treaty prohibiting fully autonomous weapons,” she added.
So far the call for a pre-emptive ban on fully autonomous weapons has been endorsed by 19 countries, and dozens more have affirmed the need to retain human control over the selection of targets and the use of force. Last December, China became the first permanent member of the UN Security Council to suggest that a new international law is needed to cover these weapons, with France, the UK, and the US likewise supporting the initiative, albeit in lukewarm fashion, and Russia opposing it.
Hopefully the UN will resolve its meeting room funding crisis soon and figure out a way forward, although as the pace of development of these platforms, and of AI, accelerates, one hopes those decisions don’t take too long.