With the next generation of ADAS, techniques such as machine learning and neural networks are entering cars. Higher processing power, higher sensor resolution and larger memories make it possible for techniques hitherto restricted to stationary systems to run in future vehicles.
Daimler introduced a software technology called ‘scene labelling’ that helps camera-based driver assistance systems to better detect critical situations and thus enables them to take action sooner and with higher precision. The technique classifies unknown situations and thus automatically identifies objects relevant to the ADAS, from bicyclists to pedestrians or wheelchair users. The developers “showed” the ADAS thousands of images of cities in which 25 object classes were labelled manually, such as vehicles, pedestrians, roads, sidewalks, buildings or even trees. With this image material as input, machine learning techniques enabled the ADAS to correctly classify completely new, previously unseen camera images and to make driving decisions on this basis. These algorithms run on deep neural networks, computational models whose layered, interconnected units are loosely modelled on the connections between neurons in the human brain. In this respect, the system is comparable to human vision.
Besides scene labelling, Daimler showed a test vehicle with radar capabilities that exceed those of today’s vehicle radar systems: it can resolve not only dynamic objects but static ones as well. It also utilises what Daimler calls micro-Doppler, which provides a signature of moving objects and thus unambiguously identifies pedestrians and cyclists.
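The idea behind a micro-Doppler signature can be sketched in a few lines: a pedestrian's torso produces one main Doppler line, while the periodic motion of the limbs phase-modulates the radar return and adds characteristic sidebands around it. All parameters below (sample rate, Doppler frequencies, modulation depth) are invented for illustration and bear no relation to Daimler's actual radar.

```python
import numpy as np

fs = 1000.0                  # assumed slow-time sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)

f_body = 100.0               # assumed Doppler shift of the torso (Hz)
f_limb = 8.0                 # assumed limb swing rate (Hz)
beta = 1.0                   # assumed modulation depth from limb motion

# Radar return: carrier at f_body, phase-modulated by the limb motion.
signal = np.exp(1j * (2 * np.pi * f_body * t
                      + beta * np.sin(2 * np.pi * f_limb * t)))

spectrum = np.abs(np.fft.fft(signal))
freqs = np.fft.fftfreq(len(t), 1 / fs)

peak = freqs[np.argmax(spectrum)]
print(f"dominant Doppler line: {peak:.0f} Hz")   # torso line near 100 Hz
# Sidebands at f_body ± n*f_limb (92 Hz, 108 Hz, ...) are the
# micro-Doppler signature that distinguishes a walker from, say, a pole.
```

A rigid static object would show no such sidebands, which is why the signature separates pedestrians and cyclists so cleanly from other returns.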
A third Daimler test vehicle was equipped with a system that can detect and identify the intentions of pedestrians and cyclists. Based on features such as head posture and body position, the system predicts whether the detected person will cross the street or stay on the sidewalk. The system can cut reaction time by the one critical second that decides between a crash and a near miss, Daimler claims.
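Stripped to its core, such intention prediction is a binary classification over body-pose features. The sketch below trains a logistic-regression model on synthetic data: the two features (head orientation, body lean), the labels and the model itself are all invented for illustration, standing in for whatever the production system actually uses.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 400
head_toward_road = rng.uniform(0, 1, n)   # 1.0 = head turned toward the road
body_lean = rng.uniform(-1, 1, n)         # positive = leaning toward the kerb
X = np.column_stack([head_toward_road, body_lean, np.ones(n)])  # bias column

# Toy ground truth: crossing intent (1) grows with both features, plus noise.
y = (2.0 * head_toward_road + body_lean
     + rng.normal(0, 0.3, n) > 1.5).astype(float)

# Logistic regression fitted by gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))        # predicted crossing probability
    w -= 1.0 * X.T @ (p - y) / n          # average cross-entropy gradient

pred = (1 / (1 + np.exp(-(X @ w))) > 0.5).astype(float)
print(f"toy accuracy: {(pred == y).mean():.2f}")
```

Because the prediction is available before the person steps off the kerb, the downstream emergency-braking logic gains exactly the kind of lead time the article describes.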