An algorithm developed by University of California San Diego researchers could one day make roads safer by improving how autonomous cars recognize pedestrians, Aaron Turpen wrote for Gizmag. Electrical engineers at UCSD’s Jacobs School of Engineering have taken a step toward the goal of having computers recognize objects as well as humans can.
Nuno Vasconcelos, the electrical engineering professor who led the research, said real-time vision is a prime goal for self-driving cars. The pedestrian detection algorithm his team created is a mix of traditional computer vision classification, called cascade detection, and so-called deep learning models.
As Turpen explained it:
Most pedestrian detection systems divide an image into small sections (referred to as ‘windows’) that are processed by a classification program to determine the presence of a human form. This can be challenging for engineers because humans come in various shapes and sizes and distance changes the perspective and size of objects. In a typical real-time application, this involves processing millions of these windows at 5-30 frames per second.
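The window-scanning step described above can be sketched in a few lines. This is an illustrative toy, not the UCSD system: the window size, stride, and image dimensions are assumptions chosen to show why the window count climbs so quickly at real-time frame rates.

```python
import numpy as np

def sliding_windows(image, window_size=(128, 64), stride=32):
    """Yield (row, col, crop) for every window covering the image.

    In a real detector this scan is repeated at several image scales,
    which multiplies the window count further.
    """
    h, w = window_size
    rows, cols = image.shape[:2]
    for r in range(0, rows - h + 1, stride):
        for c in range(0, cols - w + 1, stride):
            yield r, c, image[r:r + h, c:c + w]

frame = np.zeros((480, 640))  # stand-in for one camera frame
count = sum(1 for _ in sliding_windows(frame))
print(count)  # 228 windows at a single scale and a coarse stride
```

Even at this coarse stride and a single scale, one frame yields hundreds of windows; finer strides, multiple scales, and 5–30 frames per second push the total into the millions.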
Cascade detection includes an early stage in which the algorithm quickly identifies and discards windows that clearly do not contain a person (such as ones showing only sky). Subsequent stages process windows that are harder to classify, such as ones containing a tree whose shape, color, or contours resemble a person's. The final stages must differentiate between a pedestrian and objects that merely look like one. Because those final stages process only the few windows that survive the earlier ones, the overall complexity stays low.
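The cascade idea can be illustrated with a toy example. Here each "window" is reduced to a single number standing in for image content, and each stage is a threshold test; the thresholds and values are invented for illustration. The point is structural: cheap early stages reject most windows, so only a handful reach the expensive final stage.

```python
def cascade(windows, stages):
    """Run windows through a chain of tests; a window survives
    only if it passes every stage in order."""
    survivors = list(windows)
    for test in stages:
        survivors = [w for w in survivors if test(w)]
    return survivors

# Toy windows: a scalar stands in for each window's content.
windows = [0.05, 0.1, 0.4, 0.6, 0.9, 0.95]

stages = [
    lambda w: w > 0.2,  # stage 1: cheap test, rejects empty windows (e.g. sky)
    lambda w: w > 0.5,  # stage 2: rejects weak candidates (e.g. tree-like shapes)
    lambda w: w > 0.8,  # final stage: costly, but sees only a few windows
]
print(cascade(windows, stages))  # [0.9, 0.95]
```

Because each stage only inspects the survivors of the previous one, the expensive final test runs on two windows here rather than six, which is the source of the cascade's low overall complexity.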
But traditional cascade detection is not powerful enough to handle the harder-to-analyze windows that reach the final stages. To solve this problem, the researchers developed an original algorithm that uses deep-learning models in the final stages of the cascaded detector. Deep-learning models handle complex pattern recognition better because they have been trained on hundreds or even thousands of example images that either do or do not contain a person.
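The hybrid structure can be sketched as follows. This is a minimal stand-in, not the researchers' detector: a cheap hand-tuned test plays the role of the early cascade stages, and a tiny fixed-weight two-layer network stands in for the learned deep model that scores only the surviving windows.

```python
import numpy as np

def tiny_net(x, W1, b1, W2, b2):
    """Two-layer network: ReLU hidden layer, sigmoid output score.
    A stand-in for the much larger deep model used in practice."""
    h = np.maximum(0.0, x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8,));   b2 = 0.0

windows = rng.normal(size=(100, 4))              # 100 toy feature vectors
cheap_pass = np.abs(windows).sum(axis=1) > 4.0   # fast early-stage filter
hard = windows[cheap_pass]                       # only these reach the net
scores = np.array([tiny_net(w, W1, b1, W2, b2) for w in hard])
pedestrians = hard[scores > 0.5]
print(len(hard), len(pedestrians))
```

The design choice the article describes is visible here: the expensive learned model is called only on the windows the cheap stages could not resolve, keeping the per-frame cost close to that of a plain cascade.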
Turpen said the algorithm the team developed can be applied to delivery robots and low-flying drones, as well as self-driving vehicles, enabling all to detect pedestrians and avoid accidents.
Image used with permission from the UC San Diego Jacobs School of Engineering.