
University of Cambridge Tech Will Help Cars See

Image: a navigation system in an autonomous car.

Google's self-driving cars and other autonomous vehicles might one day benefit from two new systems developed by University of Cambridge researchers.

New technology invented by researchers at the University of Cambridge in the United Kingdom may eventually make it possible for driverless cars to see the road more effectively than current technology does, according to news reports. The researchers have developed two complementary systems, as the University of Cambridge describes:

Two newly-developed systems for driverless cars can identify a user’s location and orientation in places where GPS does not function, and identify the various components of a road scene in real time on a regular camera or smartphone, performing the same job as sensors costing tens of thousands of pounds.

The systems are not yet ready to control a driverless car, but the functions they perform are an essential part of developing autonomous vehicles and robots.

Object Recognition

One of the systems, SegNet, recognizes objects in real time and is more skilled at that task than the most advanced radar systems on semi-autonomous cars currently on the market, Aaron Turpen wrote for Gizmag. It can look at a street scene and identify its components, such as roads, street signs, pedestrians and buildings, as belonging to one of 12 categories, and it can do this in almost all lighting conditions, including at night. Although it was designed to work in urban environments, it operates via “deep learning,” which allows it to learn as it goes, so it should eventually be able to recognize people and things in rural areas and in other weather and climates, such as snow and desert environments. Although SegNet is not yet ready to control a vehicle, it could be used in vehicles as a warning system to prevent accidents.
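
At its core, this kind of scene labeling assigns every pixel of the camera image to one of the 12 categories. The sketch below illustrates that idea in Python/PyTorch; the tiny network, image size and category names are assumptions for demonstration only, not the actual SegNet architecture.

```python
# Minimal sketch (not the researchers' code) of per-pixel scene labeling:
# each pixel gets a score for every category, and the highest-scoring
# category becomes that pixel's label.

import torch
import torch.nn as nn

NUM_CLASSES = 12  # e.g. road, building, pedestrian, street sign, ...

# Toy encoder-decoder: shrink the image, then grow it back to full size so
# the output carries one 12-way score per pixel.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(32, NUM_CLASSES, kernel_size=1),
)

frame = torch.rand(1, 3, 360, 480)    # one RGB street-scene frame
scores = model(frame)                 # shape (1, 12, 360, 480)
labels = scores.argmax(dim=1)         # per-pixel category index, shape (1, 360, 480)
```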

Training the System

Cambridge undergraduate students trained SegNet by showing the system 5,000 images of street scenes in which every pixel had been manually labeled. Each image took about 30 minutes to label. Once the labeling was completed, the researchers spent two days training the system.
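
In code terms, that training process amounts to repeatedly comparing the network's per-pixel predictions against the hand-labeled pixels and adjusting the network to reduce the disagreement. The sketch below illustrates the idea with placeholder data and a placeholder model; it is not the Cambridge pipeline.

```python
# Illustrative training loop for pixel-wise labeling. The random "dataset"
# stands in for the roughly 5,000 hand-labeled street scenes described above.

import torch
import torch.nn as nn

NUM_CLASSES = 12
model = nn.Sequential(                        # placeholder segmentation network
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, kernel_size=1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()               # compares scores to the labeled pixels

def labelled_scenes(n=4):
    """Stand-in for the manually labeled images: an RGB frame plus an
    integer category (0-11) for every pixel."""
    for _ in range(n):
        yield torch.rand(1, 3, 360, 480), torch.randint(0, NUM_CLASSES, (1, 360, 480))

for epoch in range(2):                        # the real training ran for about two days
    for frame, target in labelled_scenes():
        optimizer.zero_grad()
        loss = loss_fn(model(frame), target)  # averaged over every labeled pixel
        loss.backward()
        optimizer.step()
```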

According to the university:

There are three key technological questions that must be answered to design autonomous vehicles: where am I, what’s around me and what do I do next. SegNet addresses the second question, while a separate but complementary system answers the first by using images to determine both precise location and orientation.
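
Put another way, the two systems answer the first two questions and hand their answers to whatever decides what to do next. A purely hypothetical pipeline, with placeholder function names, might look like this:

```python
# Hypothetical glue code, for illustration only: the localizer answers
# "where am I", the segmenter answers "what's around me", and a separate
# planner (not part of the Cambridge work) decides "what do I do next".

def drive_step(camera_frame, localizer, segmenter, planner):
    position, orientation = localizer(camera_frame)       # where am I?
    scene_labels = segmenter(camera_frame)                 # what's around me?
    return planner(position, orientation, scene_labels)    # what do I do next?
```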

More Accurate Than GPS

The other system was designed by Cambridge researchers Alex Kendall and Roberto Cipolla. It is far more accurate than GPS and works in places where GPS does not, such as in tunnels and in cities without reliable GPS signals. It can tell, for example, whether it is looking at the east or west side of a building, even if both sides look the same.
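
In rough terms, the system takes a single photo and works out both where the camera is and which way it is pointing. The sketch below shows that shape of problem in PyTorch; the architecture and numbers are illustrative assumptions, not the published model.

```python
# Illustrative image-based localizer: one photo in, a 3-D position and an
# orientation (unit quaternion) out. Not the published Cambridge model.

import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # collapse the feature map to one vector
        )
        self.position = nn.Linear(64, 3)      # x, y, z
        self.orientation = nn.Linear(64, 4)   # quaternion: which way the camera faces

    def forward(self, photo):
        f = self.features(photo).flatten(1)
        q = self.orientation(f)
        return self.position(f), q / q.norm(dim=1, keepdim=True)

model = PoseRegressor()
photo = torch.rand(1, 3, 224, 224)            # a single street-level photo
position, orientation = model(photo)          # shapes (1, 3) and (1, 4)
```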

Cambridge invites readers to try the localization system out online, and a demo of the SegNet system is also available. The researchers are presenting details about both systems at the International Conference on Computer Vision in Santiago, Chile.

The university has also released a video about the two systems.

