Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, a global futures think tank, is described as "The Adviser behind the Advisers." Regularly featured on AP, CNBC, Discovery and RT, his ability to identify and track hundreds of game-changing emerging technologies, and explain their impact on global culture, industry and society, is unparalleled. Recognised for the past five years running as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive future. A rare talent, Matthew sits on the Technology and Innovation Committee (TIAC) for Centrica, one of Europe's largest energy companies, and his recent work includes mentoring XPRIZE teams, building the first generation of biocomputers, helping the world's largest manufacturers envision the next five generations of smartphones and devices, and what comes next, and helping companies including Qualcomm envision the next twenty years of semiconductors. Matthew's clients are the who's who of industry and include Accenture, Bain & Co, BOA, Blackrock, Bloomberg, Booz Allen Hamilton, BCG, Bentley, Dell EMC, Dentons, Deloitte, Deutsche Bank, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JPMorgan Chase, KPMG, Lloyds Banking Group, McKinsey, Monsanto, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroders, Sequoia Capital, Sopra Steria, UBS, the UK's HM Treasury, the USAF and many others.
WHY THIS MATTERS IN BRIEF
As automation, in the form of autonomous hardware- and software-based robots, becomes more commonplace, lawmakers are debating who should be responsible for their actions.
If a robot, acting autonomously, injures or otherwise wrongs a human, who will be held responsible? Some European lawmakers think that the best way to resolve this question will be to give robots "electronic personalities," a form of legal personhood they first suggested last year. But, and here's a broader question, if you could sue a robot then would it use a robot lawyer, bearing in mind they're already getting rather good…? Hmmm.
Proposed in a 2017 report from the European Union, this form of robot rights wouldn't give robots status like that of human beings (they're not going to get married, buy a house, or adopt a child), but it would make it possible for self-learning robots to sue and be sued, with a legal status similar to that of a corporation.
Proponents say that the idea is a common-sense move, one that will be necessary to create a workable legal structure as these entities become smarter and more integrated into everyday human life. But funnily enough not everyone agrees. At least 156 Artificial Intelligence (AI) experts have warned the European Commission that this proposed move would be a poor one, both from a “legal and ethical perspective,” in a letter reported by Politico.
As opponents see it, the move would simply shift responsibility for a robot's mistake, for example a caretaker robot using too much force on a patient and breaking their arm, away from the manufacturer and onto the robot itself.
“This [European Parliament position] was what I’d call a slimy way of manufacturers getting out of their responsibility,” letter signatory Noel Sharkey, University of Sheffield emeritus professor of AI and robotics, told Politico.
But it does seem that the authors of the EU report considered this possibility. As The Verge reports, the report’s authors suggest that the EU establish a baseline for legislation that a person need only establish “a causal link between the harmful behaviour of the robot and the damage suffered by the injured party” to be compensated by the manufacturing company.
This would mean that victims of robo-crime would be given justice no matter how the robot acted, so that the manufacturer couldn't squirm out of paying by simply saying they never intended the robot to act in that way. For example, if the autonomous Uber car that killed a pedestrian in March had done so because it decided to chase people down the sidewalk, Uber would still be held responsible.
As our robots get more complex, so will the conversations about their rights and the legal frameworks within which they operate. There are also other, mind-bending complications of robot rights if a machine's actions and products belong to it, and not to its creator. For example, as one lawyer posited to the Guardian, if a robot with personhood were to create a device and patent it, who owns the patent, the robot or its creator? Who gets the profit from the sale of that item? And if the robot were sold to a different person, would that person gain control over that intellectual property as well? And as AIs especially begin getting more creative, designing everything from clothes for Amazon and sneakers for Under Armour to chart-topping albums and songs for Sony, as well as new parts for aircraft like the Airbus A380, these are all questions that are going to need to be resolved sooner rather than later.
All this debate, of course, for now at least only pertains to robots that are reactive and proactive in their actions, but not truly self-aware, and that's because those types of robots are still a long way off… for now.