After eyes, cars get ears for the environment

February 03, 2020 //By Christoph Hammerschmidt
Modern cars assess their environment with "sense organs" such as radar, lidar or cameras. In the future, they may also have a sense of hearing. Researchers at the Fraunhofer Institute for Digital Media Technology IDMT in Oldenburg (Germany) have developed prototypes for detecting external sounds such as sirens.

Driver assistance systems in today's cars take the strain off the driver. Camera, lidar and radar detect relevant objects in the environment, acting as eyes, so to speak. What automobiles still lack is the sense of hearing - systems that can perceive and classify external sounds. In the future, such "ears" will form the basis for autonomous driving in combination with intelligent radar and camera sensor technology. To this end, researchers at the Fraunhofer IDMT in Oldenburg are developing AI-based technologies for acoustic event recognition.

External acoustic perception systems do not yet exist for autonomous vehicles, despite their high application potential. Such a system could, for example, signal within a fraction of a second that an emergency vehicle is approaching with its siren on, so the autonomous vehicle knows it must give way. There are numerous other scenarios in which an acoustic early warning system would help: when entering residential streets where children may be playing, or when detecting dangerous situations or faults. Such a system could also monitor the condition of the vehicle or, via speech recognition, serve as an emergency call point.

The challenges in developing an acoustic sense organ for vehicles include optimal signal capture through sensor positioning, signal pre-processing and enhancement, and noise suppression. Special beamforming algorithms developed at Fraunhofer IDMT enable the dynamic localization of moving sound sources, such as the siren of an emergency vehicle. IDMT's event detectors are trained in advance, using machine learning methods, on the acoustic signatures of the relevant sounds; dedicated acoustic libraries were compiled for this purpose. The result is intelligent sensor platforms with effective detection performance. AI-based algorithms for audio analysis separate target sounds from interfering noise. "We apply methods of machine learning. We train our algorithms with a wide variety of previously collected sounds," says Danilo Hollosi, group leader for acoustic event detection at Fraunhofer IDMT. The first prototypes have already been realized together with industrial partners and should be ready for the market by the middle of the next decade.
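To illustrate the kind of localization such beamforming systems build on, the sketch below estimates the direction of arrival of a sound from the time delay between two microphones, using cross-correlation. This is a minimal textbook example, not Fraunhofer IDMT's actual algorithm; the microphone spacing, sample rate, and signal are assumed values for demonstration.

```python
import numpy as np

def estimate_doa(sig_left, sig_right, fs, mic_distance, c=343.0):
    """Estimate the direction of arrival (degrees) of a sound source
    from the inter-microphone time delay of a two-mic array.
    Positive angles mean the source is closer to the right microphone."""
    # Cross-correlate the channels; the lag of the peak is the delay
    # of the left channel relative to the right one, in samples.
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)
    delay = lag / fs
    # Clamp to the physically possible range before taking the arcsine.
    sin_theta = np.clip(delay * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Synthetic example: a 1 kHz tone reaches the right mic 10 samples
# before the left mic, i.e. the source sits to the right of the array.
fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)
delay_samples = 10
left = np.concatenate([np.zeros(delay_samples), tone])
right = np.concatenate([tone, np.zeros(delay_samples)])

angle = estimate_doa(left, right, fs, mic_distance=0.2)
print(f"estimated direction: {angle:.1f} degrees")
```

A real system would track this angle over time with more than two microphones and combine it with a trained classifier that decides whether the localized sound actually is a siren.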
