WHY THIS MATTERS IN BRIEF
While these incidents have been put down to human error, cyber attacks on fleets of driverless cars could make scenes like these commonplace unless companies come together to develop new security standards.
Uber's self-driving fleet has already run into trouble after two of the company's vehicles were spotted running red lights in San Francisco. Uber blamed both incidents on "human error".
A video of the first incident, recorded by a taxi company's dashboard camera, was uploaded to YouTube. It was first spotted by the San Francisco Examiner, which confirmed the details with the Luxor Cab company's operations manager, Charles Rotter.
The video shows a vehicle passing through a red light while a pedestrian is already crossing the road.
The same thing apparently occurred in another area of the city that same morning, as spotted and tweeted about by a writer, Annie Gaus. She told the Examiner the Uber almost hit the Lyft car she was in: "It was close enough that both myself and the driver reacted and were like, 'Shit'. It stopped suddenly and stayed like that."
(Not enough time to get a good shot, but…whoops!) pic.twitter.com/XK49nMF2Q4
— Annie Gaus (@AnnieGaus) December 14, 2016
Uber, which had only just begun trialling cars in the city the day of the incident, has refused to obtain permits for testing the cars in the state of California, arguing there is no need because human drivers will always be present for the test.
In a blog announcing the San Francisco trials, posted the same day as the incident, Uber said: “We understand there is a debate over whether or not we need a testing permit to launch self-driving Ubers in San Francisco. We have looked at this issue carefully and we don’t believe we do. Before you think, ‘there they go again’ let us take a moment to explain… the rules apply to cars that can drive without someone controlling or monitoring them. For us, it’s still early days and our cars are not yet ready to drive without a person monitoring them.”
Now, however, it’s blaming the red light incidents on the very people who are meant to be doing the monitoring.
"These incidents were due to human error," said a spokesperson. "This is why we believe so much in making the roads safer by building self-driving Ubers. These vehicles were not part of the pilot and were not carrying customers. The drivers involved have been suspended while we continue to investigate."
It's unclear whether the human drivers were operating the vehicles or should have overridden the computer system, but either way it's a fairly moot point: whether or not the cars were carrying passengers, running a red light is a threat to every car, driver, passenger and pedestrian in the vicinity.
If neither the cars nor their human drivers can be relied upon to follow one of the simplest and most important rules of the road, it's a pretty big problem for Uber's test drives in densely populated cities. The ride-hailing company had already begun trials in Pittsburgh earlier in 2016, where members of the public reported seeing one Uber driving the wrong way down a road and another involved in a potential incident. There are no incidents on record with the city's authorities, though.
The point of self-driving cars, of course, is to vastly improve road safety as well as efficiency. But they are in their infancy, and these incidents are two of a handful to have been reported among those developing the technology.
In February, a Google car hit a bus in California and was deemed to be at fault, while another was involved in a wreck in the same state. Earlier this year, two Tesla Model S drivers were killed while using the Autopilot beta feature, one in Florida and one in the Netherlands, in incidents that were blamed on speeding.