Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures think tank working between 2020 and 2070, and is an award-winning futurist and author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring several Education and Lunar XPrize teams, building the first generation of biological computers, re-envisioning global education with the G20, and helping the world’s largest conglomerates ideate the next 20 years of intelligent devices and machines. Matthew’s clients include three Prime Ministers and several governments, including the G7, Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
As automation, in the form of autonomous hardware- and software-based robots, becomes more commonplace, lawmakers are debating who should be responsible for their actions.
If a robot, acting autonomously, injures or otherwise wrongs a human, who will be held responsible? Some European lawmakers think the best way to resolve this question is to give robots “electronic personalities,” a form of legal personhood they suggested last year. But, and here’s a broader question, if you could sue a robot then would it use a robot lawyer, bearing in mind they’re already getting rather good…? Hmmm.
Proposed in a 2017 report from the European Union, this form of robot rights wouldn’t give robots status like that of human beings — they’re not going to get married, buy a house, or adopt a child — but it would make it possible for self-learning robots to sue and be sued, with a legal status similar to that of a corporation.
Proponents say the idea is a common-sense move, one that will be necessary to create a workable legal structure as these entities become smarter and more integrated into everyday human life. But, funnily enough, not everyone agrees. At least 156 Artificial Intelligence (AI) experts have warned the European Commission that the proposed move would be a poor one, from both a “legal and ethical perspective,” in a letter reported by Politico.
As opponents see it, the move would simply shift responsibility for a robot’s mistake (say, a caretaker robot using too much force on a patient and breaking their arm) away from the manufacturer and onto the robot itself.
“This [European Parliament position] was what I’d call a slimy way of manufacturers getting out of their responsibility,” letter signatory Noel Sharkey, University of Sheffield emeritus professor of AI and robotics, told Politico.
But it does seem that the authors of the EU report considered this possibility. As The Verge reports, the report’s authors suggest that the EU establish a baseline for legislation that a person need only establish “a causal link between the harmful behaviour of the robot and the damage suffered by the injured party” to be compensated by the manufacturing company.
This would mean that victims of robo-crime would get justice no matter how the robot acted, so the manufacturer couldn’t squirm out of paying by simply saying they never intended the robot to act in that way. For example, if the Uber autonomous car that killed a pedestrian in March had done so because it decided to chase people down the sidewalk, Uber would still be held responsible.
As our robots get more complex, so will the conversations about their rights and the legal frameworks within which they operate. There are also other, mind-bending complications of robot rights if a machine’s actions and products belong to it, and not to its creator. For example, as one lawyer posited to the Guardian, if a robot with personhood were to create a device and patent it, who owns the patent — the robot or its creator? Who gets the profit from the sale of that item? And if the robot were sold to a different person, would that person gain control over that intellectual property as well? And as AIs get more creative, designing everything from clothes for Amazon and sneakers for Under Armour, to chart-topping albums and songs for Sony, and even new parts for aircraft like the Airbus A380, these are all questions that will need to be resolved sooner rather than later.
All this debate, of course, for now at least only pertains to robots that are reactive and proactive in their actions but not truly self-aware, and that’s because those types of robots are still a long way off… for now.