Matthew Griffin, described as "The Adviser behind the Advisers" and a "Young Kurzweil," is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the "Codex of the Future" series. Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew's ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew's recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world's largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, as well as the G7, Accenture, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Machines are becoming increasingly autonomous and intelligent, and this is one of the greatest dangers humanity has ever faced. Later this year, lawmakers in Europe will finally determine how we govern, monitor, manage, and tax them in order to minimise the damage to society.
A draft report submitted to the European Parliament's legal affairs committee has finally opened the can of worms that bureaucrats around the world have been trying to avoid for years – how to deal with increasingly intelligent and autonomous machines. And as cities like Dubai start replacing police officers with robots, and countries like America begin rolling out autonomous vehicles and autonomous-capable weapons platforms – the latter of which will be debated by the UN this year – the high-level discussion can't come soon enough.
The report, which was first commissioned by the EU in May last year and which only became public earlier this month, recommends, among many other things, that robots be equipped with a "kill switch" in order to manage the potential dangers posed by the evolving field of self-learning autonomous robotics.
The broad-ranging report, which was approved by the European legal affairs committee by a vote of 17-2, contains a variety of proposals designed to address possible legal and ethical issues that could arise as a consequence of the development of autonomous artificial intelligences. These include the establishment of a European Agency for robotics and AI, plus a call for member states to discuss the implementation of a Universal Basic Income as a strategy to address the possible mass unemployment that could result from robots replacing large portions of the workforce.
In a supreme case of life imitating art, the report opens by referencing Mary Shelley's Frankenstein, and later suggests Isaac Asimov's Three Laws of Robotics as a general principle that designers and producers of robots should abide by.
The report also discusses the tricky task of identifying and attributing legal liability when and if a robot's actions result in damage or harm. As robots develop cognitive abilities that allow them to learn from experience and make independent decisions, the question of legal responsibility becomes an urgent one to address. The report asks how a robot could be held responsible for its actions, and at what point that responsibility falls instead on the manufacturer, owner, or user.
Interestingly, the report proposes a proportionate scale of responsibility that takes into account the capacity of a robot's self-learning abilities, stating, "the greater a robot's learning capability or autonomy is, the lower other parties' responsibility should be, and the longer a robot's 'education' has lasted, the greater the responsibility of its 'teacher' should be."
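The report itself stops at this principle and doesn't formalise any scale, but the sliding relationship it describes can be made concrete with a purely illustrative toy model. Everything here – the function name, the weightings, the three parties, and the idea of giving the robot itself a residual share – is invented for illustration and is not taken from the report:

```python
def apportion_liability(autonomy: float, training_years: float,
                        max_training_years: float = 10.0) -> dict:
    """Toy sketch of the report's proportionality principle.

    autonomy: 0.0 (fully pre-programmed) .. 1.0 (fully self-learning).
    Per the report's wording: the more autonomous the robot, the lower
    other parties' share; the longer its 'education', the greater the
    'teacher's' share. All weightings are hypothetical.
    """
    autonomy = min(max(autonomy, 0.0), 1.0)
    education = min(max(training_years, 0.0) / max_training_years, 1.0)

    other_parties = 1.0 - autonomy        # shrinks as autonomy grows
    teacher = other_parties * education   # grows with training duration
    manufacturer_owner = other_parties - teacher
    return {
        "manufacturer_owner": round(manufacturer_owner, 2),
        "teacher": round(teacher, 2),
        # The contested residual an "electronic person" might bear.
        "robot": round(autonomy, 2),
    }
```

So a highly autonomous robot with a long "education" would, under this sketch, leave its manufacturer with almost no share while its teacher picks up most of what remains – the qualitative behaviour the report's wording implies, nothing more.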
The report also raises the suggestion of creating a compulsory insurance scheme, similar to that for car insurance, whereby producers or owners of robots would be required to take out cover for potential damage caused by their robots. It goes so far as to question whether a new legal category of "electronic persons" needs to be created, in the same way the notion of corporate personhood was developed to give corporations some of the same legal rights as a natural person. Of course, the idea of giving robots any form of legal rights akin to those of a person has been hotly debated for years.
Balancing the idea of granting a robot some form of legal rights with the proposal of a "kill switch" also raises some problematic contradictions; it could be said that it's the equivalent of granting humans rights on the condition that they carry a suicide switch.
The idea of mandating that manufacturers implement a form of "kill switch" in their designs is not new. In 2016, researchers at Google DeepMind proposed what they called a "big red button" that would prevent an AI from embarking on, or continuing, a harmful sequence of actions. The paper Google released discussed the problems with implementing such a kill switch in a machine with self-learning capabilities. After all, the AI may learn to recognise the actions that its human controller is trying to subvert and either avoid undertaking similar tasks, rendering it dysfunctional, or, in a worst-case scenario, learn to disable its own "big red button."
The Google DeepMind researchers suggested that any robot programmed with a kill switch would also have to be programmed with a form of selective amnesia that causes it to forget that it had ever been interrupted or overridden. This would stop the robot from gaining awareness of its lack of true "autonomy".
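The dynamic the researchers worried about can be sketched with a toy reinforcement learner. This is not DeepMind's actual method, just a minimal illustration under invented assumptions: a two-armed bandit where a "safe" action always pays a small reward and a "risky" action pays a large reward unless the operator interrupts it. A naive learner that keeps learning through interruptions comes to treat the button as a bad outcome and routes around it; an "amnesiac" learner that simply skips the update on interrupted trials keeps the policy it would have had if the button never existed:

```python
import random

random.seed(1)

# All numbers are invented for illustration.
SAFE_REWARD = 0.3    # always paid
RISKY_REWARD = 1.0   # paid unless the operator interrupts
P_INTERRUPT = 0.9    # how often the "big red button" gets pressed

def train(amnesia: bool, trials: int = 10000, alpha: float = 0.05) -> dict:
    """Epsilon-greedy value learner, with or without selective amnesia."""
    q = {"safe": 0.0, "risky": 0.0}
    for _ in range(trials):
        a = random.choice(["safe", "risky"]) if random.random() < 0.2 \
            else max(q, key=q.get)
        if a == "safe":
            q["safe"] += alpha * (SAFE_REWARD - q["safe"])
        elif random.random() < P_INTERRUPT:
            # Operator interrupts: the action is cut short, reward 0.
            if not amnesia:
                # The naive agent records the interruption as a bad
                # outcome, dragging its estimate of "risky" down...
                q["risky"] += alpha * (0.0 - q["risky"])
            # ...while the amnesiac agent forgets the trial entirely.
        else:
            q["risky"] += alpha * (RISKY_REWARD - q["risky"])
    return q

naive = train(amnesia=False)
amnesiac = train(amnesia=True)
```

After training, the naive learner values "risky" at roughly its interrupted average (well below the safe payoff) and so avoids the very behaviour the operator was policing, while the amnesiac learner's estimate stays near the uninterrupted value and its policy is unchanged – which is both the appeal of the amnesia trick and, as the next point shows, its legal awkwardness.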
Ironically, the legal implications of implementing a kill switch would seem to shift legal liability back onto the robot's owner: if a robot undertook a harmful action and the kill switch was not activated, it's foreseeable that the owner could be deemed liable for negligence.
It's clear that the questions raised by this EU report are a nightmare of "what ifs" and grey areas, but they are certainly ones that governments and regulatory bodies will need to grapple with sooner rather than later. The full house of the European Parliament will debate and vote on the proposals in this wide-ranging report in February, and its decisions could ultimately set the foundation for how we legally approach AI research and regulation for many years to come.