Activity detection in the vehicle interior takes privacy into account

September 01, 2021 // By Christoph Hammerschmidt
In partially and highly automated driving at SAE levels 3 and 4, the vehicle electronics hand control back to the driver in particularly complex driving situations. This requires the computer to know how quickly the driver can take over. The driver's state of readiness is therefore continuously monitored by an interior camera. A new system from the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB derives, for the first time, statements about the driver's activities from the image data and analyses how quickly the driver could take over control.

In automated driving, the vehicle decides what it has to do - it steers, brakes and accelerates. However, until vehicles can do without a driver altogether, partially automated vehicles will support the driver and grant him or her increasing freedom. Naturally, partially automated vehicles require handovers between the car and the driver, for example at a construction site on the motorway or during the transition into city traffic after a motorway journey. The vehicle needs to keep a constant eye not only on the surroundings, but also on the driver, to determine how quickly he or she could take control of the vehicle if necessary. Existing driver observation systems are mainly limited to detecting drowsiness and make little further use of the camera images.

Researchers at Fraunhofer IOSB are pursuing a more comprehensive approach in a current project. "With our technology, we not only recognise the face, but also the current poses of the driver and passengers," explains Michael Voit, group leader at Fraunhofer IOSB. From this, the researchers want to reliably determine what the driver and passengers are currently doing.

The core of the development lies in algorithms and machine learning methods. The algorithms analyse the camera data in real time and determine what the driver's attention is focused on. The technology thus goes beyond pure image recognition and interprets activities in context. The researchers first trained the system by annotating numerous camera images by hand: where are the hands, feet and shoulders of the people, and where are objects such as smartphones or books visible? They then evaluated the algorithms on new images and corrected or verified their results.
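The annotate-then-learn workflow described above can be sketched in a few lines. This is a minimal illustration only: the features (normalised keypoint heights), labels and the 1-nearest-neighbour classifier are invented assumptions, not Fraunhofer IOSB's actual pipeline.

```python
# Hedged sketch: hand-annotated frames (keypoint coordinates plus an
# activity label) serve as training data; new frames are labelled by
# their nearest annotated example. All values are illustrative.
import math

# Hand-annotated frames: (head_y, left_hand_y, right_hand_y) -> activity
annotated = [
    ((0.2, 0.8, 0.8), "attentive"),
    ((0.6, 0.9, 0.9), "sleeping"),
    ((0.2, 0.4, 0.8), "using smartphone"),
]

def predict(features):
    """Label a new frame by its nearest annotated example (1-NN)."""
    return min(annotated, key=lambda ex: math.dist(ex[0], features))[1]

print(predict((0.55, 0.85, 0.9)))  # nearest example is "sleeping"
```

In practice such a classifier would of course be replaced by a trained model over far richer pose features; the sketch only shows how hand annotations become supervision.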

The system abstracts images of the driver and passengers into a digital skeleton - an abstract, reduced representation of the person's body pose. From the skeleton's movements, combined with complementary object recognition, it infers the activity. The algorithm thus recognises whether someone is sleeping or looking at the
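The inference step described here can be illustrated with a toy rule-based classifier over such a reduced skeleton. The keypoint layout, thresholds and activity labels below are assumptions made for the sketch, not the institute's actual model.

```python
# Hedged sketch: infer a coarse activity from a reduced "skeleton"
# (a few normalised keypoints) plus detected objects. Coordinates are
# (x, y) in [0, 1], with y growing downwards in the image.
from dataclasses import dataclass

@dataclass
class Skeleton:
    """Abstract, reduced body pose: a few normalised keypoints."""
    head: tuple
    left_hand: tuple
    right_hand: tuple

def classify_activity(skel: Skeleton, objects: list) -> str:
    """Very coarse rule-based activity inference from pose + objects."""
    hands_raised = skel.left_hand[1] < 0.5 or skel.right_hand[1] < 0.5
    if skel.head[1] > 0.4:            # head dropped low: likely asleep
        return "sleeping"
    if "smartphone" in objects and hands_raised:
        return "using smartphone"
    if "book" in objects and hands_raised:
        return "reading"
    return "attentive"

pose = Skeleton(head=(0.5, 0.2), left_hand=(0.3, 0.4), right_hand=(0.7, 0.8))
print(classify_activity(pose, ["smartphone"]))  # -> using smartphone
```

A real system would replace the hand-written rules with a learned model, but the division of labour is the same: pose abstraction first, object recognition second, activity inference on top of both.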
