Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, a global futures think tank working between the dates of 2020 and 2070, is described as "The Adviser behind the Advisers." Regularly featured on AP, CNBC, Discovery and RT, his ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past five years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive future. A rare talent, Matthew sits on the Technology and Innovation Committee (TIAC) for Centrica, Europe's largest utility company, and his recent work includes mentoring XPrize teams, building the first generation of biocomputers, re-inventing global education, and helping the world's largest manufacturers envision, design and build the next 20 years of devices, smartphones and intelligent machines. Matthew's clients are the who's who of industry and include Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, the USAF and many others.
WHY THIS MATTERS IN BRIEF
Now that cars are more software than hardware, bugs in the code, as well as how that software is tuned, will have a big impact on safety, and that's before cybercriminals start targeting future fleets.
Uber has reportedly found that a software problem likely caused a fatal accident involving one of its self-driving cars in Tempe, Arizona in March. That software is meant to determine how the car should react to detected objects, two people familiar with the matter told The Information.
Although the car’s sensors reportedly detected the pedestrian, Uber’s software determined that it didn’t need to immediately react because of how it was tuned.
The software is supposed to ignore what are known as "false positives," objects that pose no real threat to the vehicle, such as a plastic bag or a piece of paper, and executives believe something like this might have happened here, although they are still investigating. Company executives told The Information that they believe the system had been tuned in a way that made it react less to these objects, meaning it reportedly didn't react fast enough when the pedestrian crossed the street. As for why the software's "sensitivity" might have been set so low, or tuned in this particular way, it's alleged the aim was to create a smoother ride for the car's passengers: self-driving cars are, at the moment at least, still notorious for their jerky ride quality, with most of them braking at the slightest sign of danger and erring on the side of caution.
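The trade-off described above can be sketched as a single confidence threshold. This is a hypothetical illustration only, not Uber's actual perception code; the class names, function, and numbers are all invented to show how raising an "ignore" threshold smooths the ride at the cost of reaction to low-confidence hazards.

```python
# Hypothetical sketch (not Uber's real system): one "ignore" threshold
# trading ride smoothness against reaction to uncertain detections.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier's best guess, e.g. "plastic_bag", "pedestrian"
    confidence: float  # how sure the perception stack is that this object matters

def should_brake(detections, ignore_threshold):
    """Brake only for detections scored at or above the threshold.

    A higher ignore_threshold means fewer phantom stops for bags and
    paper (a smoother ride), but it also discards genuine hazards that
    the classifier happens to score with low confidence.
    """
    return any(d.confidence >= ignore_threshold for d in detections)

# A pedestrian the classifier is only 40% sure "matters":
scene = [Detection("unknown_object", 0.4)]

print(should_brake(scene, ignore_threshold=0.3))  # cautious tuning: brakes
print(should_brake(scene, ignore_threshold=0.6))  # "smooth ride" tuning: ignores it
```

The point of the sketch is that nothing in the code is "buggy" in the usual sense; the same logic produces either outcome depending on how one number is tuned.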
At the time of the collision, an operator was behind the wheel but the car was in autonomous mode. The operator was not looking at the road in the moments before the car hit 49-year-old Elaine Herzberg at around 40 mph. Uber settled with the victim's family later that month. This was the first known fatality specifically caused by an autonomous vehicle on a public road.
For now, Uber has halted its self-driving operations in all the cities where it has been testing its vehicles, including Tempe, Phoenix, Pittsburgh, San Francisco and Toronto.
An Uber spokeswoman said the company has initiated a top-to-bottom safety review of its autonomous vehicle program and hired the former chair of the US National Transportation Safety Board, Christopher Hart, to advise the company on its overall safety culture.
“Our review is looking at everything from the safety of our system to our training processes for vehicle operators,” the spokeswoman said.
Meanwhile, the Tempe police are working with Uber representatives, the National Transportation Safety Board (NTSB) and the US Department of Transportation's National Highway Traffic Safety Administration in their investigation to determine who, or what, was at fault for the accident. Uber declined to say whether the tuned-down software was responsible for the crash.
"We're actively cooperating with the NTSB in their investigation," the Uber spokeswoman said. "Out of respect for that process and the trust we've built with NTSB, we can't comment on the specifics of the incident."
While new Artificial Intelligence (AI) powered coding platforms are emerging, such as Blackberry's Jarvis, along with robo-hackers that can hunt for software bugs in self-driving cars, other forms of autonomous vehicles, and even the Pentagon's own mission critical systems, tuning this kind of software is still done largely by trial and error. Moving one step on from this awful accident, it also shouldn't be lost on anyone that as more self-driving cars emerge, the threat from cyber criminals and terrorists will escalate exponentially. Imagine, for example, a cyber criminal, human or AI, using this exploit, or creating a similar one, to take control of an entire fleet of self-driving cars and hold the companies that operate them to ransom... Ransomware on steroids?
All of this is still relatively uncharted territory, of course, but as self-driving cars begin to be rolled out in earnest, by companies like Google, which has just debuted its first fleet, and Ford, which plans to from 2019 onwards, we need to ask sterner questions and put more controls in place. While there is some progress in that department, based on what's coming and what we've witnessed so far, lest we also forget Tesla's own catalogue of crashes, I'd say that not enough detailed questions are being asked about tomorrow's emerging problems, and not enough solutions are being found.