Human Driver Crashes Google’s Self-Driving Car
Several days ago, news accounts reported that Google’s self-driving Prius had caused a chain-reaction crash near Google’s Mountain View, CA, headquarters: it hit another Prius, which hit a Honda Accord, which struck another Accord, which hit yet another Prius. If so, that would have been the first car accident ever caused by a driverless car. But according to Google, a human was driving the car using its manual controls, so the crash was not the fault of the driverless-car technology after all. Online commentators (among them Brian Caulfield, writing for Forbes’ blog Shiny Objects, and Jalopnik) remain skeptical, saying they won’t believe a human was at the controls until they can see the accident report.
Although the cars can drive themselves, the law requires them to have a human at the wheel in case the technology fails. The self-driving cars have logged more than 160,000 miles so far in California, where there is no legal ban on them. They are equipped with a video camera that detects traffic lights, pedestrians, and other moving objects, and a rotating sensor that builds a three-dimensional map of the car’s surroundings. Nevada recently became the first state to pass a law allowing driverless cars.
The topic of driverless cars is fraught with issues. For one thing, if the driverless-car technology had indeed caused the crash, whom would the police have blamed in their accident report? For another, will driverless cars encourage drivers to pay even less attention to the road because they are relying on their robotic co-drivers?
As Jalopnik writes:
Google can’t be hoping to have its software legally blamed for a slice of the traffic crashes that cost more than $160 billion a year in this country. Yet if the operators of Google’s self-driving cars retain all legal responsibility, simply turning the system on would be seen in court as a sign they weren’t paying attention.
The biggest battle in auto safety today involves keeping drivers focused on driving. Google’s self-driving car seems like the ultimate distracted driving machine.
It appears, for example, that the aim of the current system is to save the life of the driver in an accident. But why is that the best outcome to aim for? If the car finds itself in a situation where a bad accident (a pile-up, say) seems likely, it might be preferable to sacrifice the driver in order to save others in the vicinity.
In driver education, we do not train drivers to make these sorts of calculations. Instead, folklore suggests that drivers instinctively protect themselves. A computerized co-driver with lots of information about the situation, however, may well have the opportunity to decide who lives and who does not. So we need to think about how that decision should be made.
In the opinion of Edward Tenner, a historian of technology and culture, writing in The Atlantic, the main issue involving driverless cars is whether Americans would even buy them:
Automated driving’s biggest problems, though, are social, not legal or technological. It will eventually work well in homogeneous, prosperous nations with strict checks, like those of Germany’s rigorous nonprofit inspectorate, the TÜV, and Japan’s private garages. The smaller, richer, more disciplined and more homogeneous the country, the better the prospects. The shaky financial state of many American drivers and the notoriously high cost of electronic component replacements (safety systems need multiple redundant versions of key hardware and software) would make the automated car an exotic techie luxury here, the 21st-century Segway. Google should be thinking Singapore and Canton Zurich, not Reno.
Image by Google, used under Fair Use: Reporting.