Software in Uber’s fatal self-driving car crash reportedly mischaracterised pedestrian as a bag

WHY THIS MATTERS IN BRIEF

Now that cars are more software than hardware, bugs in the code, as well as how that software is tuned, will have a big impact on safety, and that’s before cybercriminals start targeting future fleets.

 

Uber has reportedly found that a software problem likely caused a fatal accident involving one of its self-driving cars in Tempe, Arizona, in March. That software is meant to determine how the car should react to detected objects, two people familiar with the matter told The Information.

 

Although the car’s sensors reportedly detected the pedestrian, Uber’s software determined that it didn’t need to immediately react because of how it was tuned.

The software is supposed to ignore what are known as “false positives,” or objects that wouldn’t be an issue for the vehicle, like a plastic bag or a piece of paper, and executives believe that might be what happened in this case, although they are still investigating. Company executives told The Information that they believe the system was tuned in a way that made it react less to these objects, meaning it reportedly didn’t react fast enough when the pedestrian crossed the street. As for why the software’s “sensitivity” might have been set so low, or tuned in this particular way, it’s alleged the aim was to create a smoother ride for the car’s passengers: self-driving cars are, at the moment at least, still notorious for their jerky ride quality, with most of them jittering at the slightest sign of danger and erring on the side of caution.
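To make that trade-off concrete, here’s a minimal sketch of how a single “sensitivity” threshold in a perception pipeline might decide whether a detection triggers braking. Everything in it, the `Detection` class, the `REACTION_THRESHOLD` value and the `should_brake` function, is an illustrative assumption, not Uber’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian" or "plastic_bag"
    confidence: float  # classifier's confidence that the object is a hazard, 0..1

# Hypothetical tuning parameter: a higher value means fewer phantom brakes
# for bags and paper, but slower reactions to hazards the classifier is
# unsure about.
REACTION_THRESHOLD = 0.8

def should_brake(detection: Detection) -> bool:
    """Treat the object as a hazard only if its confidence clears the threshold."""
    return detection.confidence >= REACTION_THRESHOLD

print(should_brake(Detection("plastic_bag", 0.30)))  # False: correctly ignored
print(should_brake(Detection("pedestrian", 0.65)))   # False: the dangerous case
print(should_brake(Detection("pedestrian", 0.95)))   # True
```

Raise the threshold and the car brakes less often for bags and paper, giving a smoother ride; but a genuine hazard the classifier is only moderately confident about can slip under the same bar.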

At the time of the collision, an operator was behind the wheel but the car was in autonomous mode. The operator was not looking at the road in the moments before the car hit 49-year-old Elaine Herzberg at around 40 mph. Uber settled with the victim’s family later that month. This was the first known fatality caused by an autonomous vehicle on a public road.

 

Uber has temporarily halted its self-driving operations in all of the cities where it’s been testing its vehicles, including Tempe, Phoenix, Pittsburgh, San Francisco and Toronto.

An Uber spokeswoman said the company has initiated a top-to-bottom safety review of its autonomous vehicle program and hired the former chair of the US National Transportation Safety Board, Christopher Hart, to advise the company on its overall safety culture.

“Our review is looking at everything from the safety of our system to our training processes for vehicle operators,” the spokeswoman said.

Meanwhile, the Tempe police are working with Uber representatives, the National Transportation Safety Board (NTSB) and the US Department of Transportation’s National Highway Traffic Safety Administration in their investigation to determine who, or what, was at fault for the accident. Uber declined to say whether the tuned-down software was responsible for the crash.

 

“We’re actively cooperating with the NTSB in their investigation,” the Uber spokeswoman said. “Out of respect for that process and the trust we’ve built with NTSB, we can’t comment on the specifics of the incident.”

While new Artificial Intelligence (AI) powered coding platforms are emerging, such as BlackBerry’s Jarvis, along with robo-hackers that can hunt for software bugs in self-driving cars and other autonomous vehicles, as well as in the Pentagon’s own mission-critical systems, tuning the software is still done largely by trial and error. Moving one step on from this awful accident, it also shouldn’t be lost on anyone that as more self-driving cars emerge, the threats from cyber criminals and terrorists will escalate exponentially. Imagine, for example, a cyber criminal, human or AI, using this exploit, or creating a similar one, to take control of an entire fleet of self-driving cars and hold the companies that operate them to ransom… Ransomware on steroids?
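To illustrate what “trial and error” tuning can look like in practice, here’s a hypothetical sweep over the same kind of reaction threshold sketched above. The labeled detections and candidate thresholds are invented for the example:

```python
# Invented labeled detections: (classifier confidence, was it a real hazard?)
labeled = [
    (0.30, False), (0.45, False), (0.55, True),
    (0.65, True),  (0.80, False), (0.95, True),
]

# Sweep candidate thresholds and count the two kinds of error.
for threshold in (0.4, 0.6, 0.8):
    phantom_brakes = sum(c >= threshold and not hazard for c, hazard in labeled)
    missed_hazards = sum(c < threshold and hazard for c, hazard in labeled)
    print(f"threshold={threshold:.1f}: "
          f"phantom brakes={phantom_brakes}, missed hazards={missed_hazards}")
```

Even on toy data the tension is visible: each threshold that suppresses more phantom brakes also misses more genuine hazards, which is exactly the trade-off the tuning reportedly pushed too far towards ride comfort.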

 

All of this is still relatively uncharted territory, of course, but as self-driving cars begin to get rolled out in earnest, by companies like Google, which has just debuted its first fleet, and by companies like Ford from 2019 onwards, we need to be asking sterner questions and putting more controls in place. And while there is some progress in that department, I’d say, based on what’s coming and what we’ve witnessed so far, lest we also forget Tesla’s own catalogue of crashes, that not enough detailed questions are being asked, or solutions found, for tomorrow’s emerging problems.
