Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, is described as "The Adviser behind the Advisers." Recognised for the past five years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an author, entrepreneur, and international speaker who helps investors, multi-nationals, regulators and sovereign governments around the world envision, build and lead the future. Today, aside from being a member of Centrica's prestigious Technology and Innovation Committee and mentoring XPrize teams, Matthew's accomplishments, among others, include playing the lead role in helping the world's largest smartphone manufacturers ideate the next five generations of mobile devices, and what comes beyond, and helping the world's largest high-tech semiconductor manufacturers envision the next twenty years of intelligent machines. Matthew's clients include Accenture, Bain & Co, Bank of America, Blackrock, Bloomberg, Booz Allen Hamilton, Boston Consulting Group, Dell EMC, Dentons, Deloitte, Deutsche Bank, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JP Morgan Chase, KPMG, Lloyds Banking Group, McKinsey & Co, Monsanto, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroeder's, Sequoia Capital, Sopra Steria, UBS, the UK's HM Treasury, the USAF and many others.
WHY THIS MATTERS IN BRIEF
As automation, in the form of autonomous hardware- and software-based robots, becomes more commonplace, lawmakers are debating who should be responsible for their actions.
If a robot, acting autonomously, injures or otherwise wrongs a human, who will be held responsible? Some European lawmakers think the best way to resolve this question is to give robots “electronic personalities,” a form of legal personhood they first suggested last year. But, and here’s a broader question, if you could sue a robot then would it use a robot lawyer, bearing in mind they’re already getting rather good…? Hmmm.
Proposed in a 2017 report from the European Union, this form of robot rights wouldn’t give them status like that of human beings — they’re not going to get married, or buy a house, or adopt a child, but it would make it possible for self-learning robots to sue and be sued, with legal status like a corporation.
Proponents say that the idea is a common-sense move, one that will be necessary to create a workable legal structure as these entities become smarter and more integrated into everyday human life. But, funnily enough, not everyone agrees. At least 156 Artificial Intelligence (AI) experts have warned the European Commission that the proposed move would be a poor one, from both a “legal and ethical perspective,” in a letter reported by Politico.
As opponents see it, the move would simply shift responsibility for a robot’s mistake, say, if a caretaker robot used too much force on a patient and broke their arm, away from the manufacturer and onto the robot itself.
“This [European Parliament position] was what I’d call a slimy way of manufacturers getting out of their responsibility,” letter signatory Noel Sharkey, University of Sheffield emeritus professor of AI and robotics, told Politico.
But it does seem that the authors of the EU report considered this possibility. As The Verge reports, the report’s authors suggest that the EU establish a baseline for legislation that a person need only establish “a causal link between the harmful behaviour of the robot and the damage suffered by the injured party” to be compensated by the manufacturing company.
This would mean that victims of robo-crime would be given justice no matter how the robot acted, so that the manufacturer couldn’t squirm out of paying by simply saying they never intended the robot to act in that way. For example, if the Uber autonomous car that killed a cyclist in March had done so because it decided to chase people down the sidewalk, Uber would still be held responsible.
As our robots get more complex, so will the conversations about their rights and the legal frameworks within which they operate. There are also other, mind-bending complications of robot rights if a machine’s actions and products belong to it, and not to its creator. For example, as one lawyer posited to the Guardian, if a robot with personhood were to create a device and patent it, who owns the patent: the robot or its creator? Who gets the profit from the sale of that item? And if the robot were sold to a different person, would that person gain control over that intellectual property as well? And as AIs become increasingly creative, designing everything from clothes for Amazon and sneakers for Under Armour to chart-topping albums and songs for Sony, as well as new parts for aircraft like the Airbus A380, these are all questions that will need to be resolved sooner rather than later.
All this debate, of course, for now at least, only pertains to robots that are reactive and proactive in their actions, but not truly self-aware, and that’s because those types of robots are still a long way off… for now.