A Stanford University engineering professor, whose self-driving race car named Shelley has made 153 turns on 12.4 miles of Colorado’s Pikes Peak course, is “causing a great deal of discomfort to automakers and tech giants,” writes Keith Naughton for Bloomberg Business. The professor, J. Christian “Chris” Gerdes, is director of the Center for Automotive Research at Stanford (CARS). Shelley is an Audi TT-S that can run a competitive lap on a race track without a human driver, according to Gerdes’ faculty page.
In a 2010 TED talk, Gerdes explains why he wanted to develop a self-driving race car:
First, we believe that before people turn over control to an autonomous car, that autonomous car should be at least as good as the very best human drivers. Now, if you’re like me, and the other 70 percent of the population who know that we are above-average drivers, you understand that’s a very high bar. There’s another reason as well. Just like race car drivers can use all of the friction between the tire and the road, all of the car’s capabilities to go as fast as possible, we want to use all of those capabilities to avoid any accident we can. […] Now, you may push the car to the limits not because you’re driving too fast, but because you’ve hit an icy patch of road, conditions have changed. In those situations, we want a car that is capable enough to avoid any accident that can physically be avoided.
Naughton writes that Gerdes, originally viewed as an enthusiast, is now more often seen as a conscience by the auto and tech industries. That is because “[h]e is raising questions about ethical choices that must inevitably be programmed into the robotic minds expected one day soon to be driving along the nation’s highways.”
Industry leaders are listening to Gerdes because he is not anti-technology. Describing Gerdes as being like Switzerland — neutral — Patrick Lin, a Cal Poly philosophy professor who worked with Gerdes for a year, said Gerdes is asking the hard questions, such as how the industry needs to go further than simply obeying the law.
Naughton gives the example of a self-driving vehicle programmed to cross a double yellow line to avoid a road crew, even though doing so is technically against the law. Gerdes said, “We need to think about traffic codes reflecting actual behavior to avoid putting the programmer in a situation of deciding what is safe versus what is legal.”
Ethics of Autonomous Vehicles
It was hearing about the book “Robot Ethics,” by Lin and George Bekey, that started Gerdes thinking about the ethics of autonomous vehicles. In a recent presentation at Mercedes-Benz’s North American R&D facility near Sunnyvale, California, Gerdes said that Stanford’s Revs program — which he directs — had been partnering with Stanford’s philosophy department on these ethical issues, reports Doug Newcomb for PC. Stanford is also running tests to figure out what kinds of decisions self-driving cars should make in thorny situations, Newcomb writes.
Gerdes told Naughton that the technology of self-driving vehicles is at the peak of hype right now, and that there may be a valley ahead before society sees the many benefits of such vehicles. In addition to a possible valley, the future holds at least one new vehicle for Gerdes to use in his research: in a nod to Back to the Future, later this month Gerdes will unveil a self-driving DeLorean nicknamed “Marty.”
Image courtesy Stanford University Engineering.