Artist’s rendering of Google’s self-driving car with Lidar sensor on its roof.

A researcher has found that a tool similar to a laser pointer, built from $60 worth of parts, can confuse a self-driving car that uses lidar sensors, Samuel Gibbs writes for The Guardian. With such a laser, hackers can attack self-driving cars from in front, behind, or the side without passengers knowing about it. The hack can make an autonomous vehicle behave as though another car, a wall, or a person were in front of it, potentially paralyzing it or forcing it to take evasive action.

Car makers whose prototype self-driving cars use lidar (laser ranging systems) include Google, Lexus, and Mercedes-Benz, Gibbs writes. Although most such prototypes rely on multiple sensors, lidar has been the most effective at creating a 360-degree detection grid around the vehicle. “Lidar, usually mounted on the car’s roof, uses spinning lasers in a similar manner to radar, detecting objects and building a 3D image of the world around the car,” Gibbs writes.
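The idea Gibbs describes can be sketched in a few lines: a spinning lidar reports pairs of (azimuth angle, range), and converting those returns to Cartesian coordinates is what builds up the picture of the world that detection software reasons about. This is an illustrative simplification, not any vendor's API; the function name and data layout are assumptions.

```python
import math

def returns_to_points(returns):
    """Convert (azimuth_degrees, range_meters) lidar returns to (x, y) points.

    A real sensor also measures elevation for a full 3D cloud; this sketch
    keeps a single horizontal scan plane for clarity.
    """
    points = []
    for azimuth_deg, rng in returns:
        theta = math.radians(azimuth_deg)
        points.append((rng * math.cos(theta), rng * math.sin(theta)))
    return points

# One sweep: an object roughly 10 m ahead, spanning a few degrees of azimuth.
sweep = [(359.0, 10.0), (0.0, 10.0), (1.0, 10.0)]
for x, y in returns_to_points(sweep):
    print(round(x, 2), round(y, 2))
```

A laser-pointer attack works at exactly this level: by firing timed pulses back at the sensor, an attacker injects fake (angle, range) returns, and the geometry above dutifully places a phantom object in the point cloud.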

Gibbs quotes Dr. Jonathan Petit, on his research:

‘If a self-driving car has poor inputs, it will make poor driving decisions,’ Petit, a former research fellow in University College Cork’s Computer Security Group, told technology news outlet IEEE Spectrum. ‘I can spoof thousands of objects and basically carry out a denial-of-service attack on the tracking system so it’s not able to track real objects,’ he said.
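The denial-of-service Petit describes can be illustrated with a toy tracker that has a fixed budget of tracks per frame, so a flood of spoofed detections crowds out the real obstacle. The capacity limit, names, and FIFO-drop policy here are assumptions for illustration, not Petit's actual system or any production tracker.

```python
# Toy model of a tracking system with a fixed per-frame track budget.
MAX_TRACKS = 100

def update_tracks(detections, max_tracks=MAX_TRACKS):
    """Keep only the first max_tracks detections in a frame, dropping the rest."""
    return detections[:max_tracks]

real_obstacle = ("pedestrian", 12.0)       # (label, range in meters)
spoofed = [("ghost", 5.0)] * 1000          # attacker-injected returns
frame = spoofed + [real_obstacle]          # spoofs arrive first in the frame

tracked = update_tracks(frame)
print(real_obstacle in tracked)            # False: the real object was dropped
```

The specific drop policy doesn't matter much: however the tracker triages thousands of phantom objects, it spends its capacity on fakes instead of the pedestrian actually in the road.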

Petit, now with the Boston-based cybersecurity firm Security Innovation, will present his research later this year at the Black Hat Europe security conference, Andrea Peterson reports in an article appearing in the Los Angeles Times. Petit said this vulnerability in self-driving cars ought to be a “wake-up call” to the companies developing and testing them, Peterson writes. Experts have said that the risks posed by such vulnerabilities will only grow as self-driving technology takes more control away from drivers, Peterson writes. She adds that even fixed-gear bicycles appear to have confused some prototype autonomous cars.

Security Innovation’s blog writes that owners, drivers, and buyers of Internet-connected vehicles need to be aware that although technology such as voice-activated systems and other safety features can help reduce car accidents, their vehicles can also be hacked in ways similar to computers and mobile phones. Automakers need to work more closely with security companies, the blog post says. The post says that the SPY Car Act of 2015, introduced by Senators Edward J. Markey (D-MA) and Richard Blumenthal (D-CT) on July 21, directs the National Highway Traffic Safety Administration and the Federal Trade Commission to establish security standards for connected cars. Under the act, all vehicles with accessible data or control signals would be required to “be capable of detecting, reporting, and stopping attempts to intercept such driving data or control the vehicle.” Any violator would be liable for a civil penalty of up to $5,000 per violation.

A commenter named Milton posted the following response to The Guardian piece:

It will require some ingenuity, but not unprecedented genius or major new technology, to secure lidar against the type of spoofing described. My guess is that this will include approaches such as frequency-hopping and an encrypted pulse stream, so that a lidar system can discriminate its own echoes from fakes.
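One way to see how the encrypted-pulse idea Milton suggests would work is a sketch in which the lidar tags each outgoing pulse with an unpredictable code and rejects any echo whose code it never transmitted. All names here are hypothetical, and real lidar hardware would encode this at the optical level (pulse timing or modulation), not with Python objects.

```python
import secrets

class CodedLidar:
    """Toy lidar that authenticates echoes against the pulses it actually sent."""

    def __init__(self):
        self._outstanding = set()  # codes of pulses still awaiting an echo

    def emit_pulse(self):
        """Transmit a pulse tagged with a fresh, unpredictable 32-bit code."""
        code = secrets.randbits(32)
        self._outstanding.add(code)
        return code

    def receive_echo(self, code):
        """Accept an echo only if its code matches an outstanding pulse."""
        if code in self._outstanding:
            self._outstanding.discard(code)  # each pulse yields one echo
            return True
        return False  # spoofed or replayed pulse: discard

lidar = CodedLidar()
real = lidar.emit_pulse()
print(lidar.receive_echo(real))   # True: genuine echo accepted
print(lidar.receive_echo(real))   # False: a replay of the same code is rejected
print(lidar.receive_echo(999))    # False: attacker-injected pulse rejected
```

An attacker with a laser pointer cannot predict the next code, so fabricated echoes fail the check; combined with frequency-hopping, this forces the spoofer to guess both when and how the next legitimate pulse will look.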
