Simple colour pattern can confuse self-driving vehicles

October 28, 2019 // By Christoph Hammerschmidt
The AI algorithms that convert camera images into control instructions in self-driving cars are easy to deceive. Researchers have now shown how such "optical hacking" works: simple colour patterns can be enough to confuse the autopilots of self-driving vehicles, they warn the auto industry.

A colour pattern on a T-shirt, a bumper sticker or an emblem on a shopping bag - any of these could become a safety risk for autonomous cars. As a team of researchers at the University of Tübingen (Germany) has shown, optical flow algorithms based on deep neural networks are surprisingly easy to dupe. "It took us three, maybe four hours to create the pattern - it was very quick," says Anurag Ranjan, PhD student in the Perceiving Systems Department at the Max Planck Institute for Intelligent Systems (MPI-IS) in Tübingen. He is the first author of the publication "Attacking Optical Flow", a joint research project of the Perceiving Systems Department and the Autonomous Vision Group at the MPI-IS and the University of Tübingen.

The danger to production vehicles currently on the market is low. Nevertheless, as a precaution, the researchers informed several car manufacturers that are currently developing self-driving models, briefing them on the risk so that they can react promptly if necessary.

In their research, Anurag Ranjan and his colleagues tested the robustness of a number of different algorithms for determining the so-called optical flow. Such systems are used in self-driving cars as well as in robotics, medicine, video games and navigation, to name but a few applications. Optical flow describes the motion in a scene as captured by the on-board cameras. Recent advances in machine learning have led to faster and better methods of calculating motion direction and speed. However, the Tübingen researchers' work makes clear that such methods are susceptible to interference - for example, a simple, colourful pattern placed in the scene. Even if the pattern does not move at all, the deep neural networks that dominate this application today can be tricked into incorrect calculations: the network suddenly concludes that parts of the scene are moving in the wrong direction.
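
For readers unfamiliar with the concept, the short Python sketch below computes such a flow field for two consecutive camera frames, using OpenCV's classical Farnebäck method rather than a neural network; the frame file names are placeholders. It only illustrates what the attacked quantity is - the systems discussed here replace this classical step with learned estimators.

```python
# Minimal illustration of an optical-flow field, using OpenCV's classical
# Farnebaeck method rather than a neural network. The file names are
# placeholders for two consecutive camera frames.
import cv2
import numpy as np

frame_prev = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
frame_next = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

# Dense flow: one (dx, dy) motion vector per pixel.
# Arguments: prev, next, flow, pyr_scale, levels, winsize, iterations,
#            poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean apparent motion (pixels per frame):", float(np.mean(magnitude)))
```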

The phenomenon itself is not new. Researchers have shown several times in the past that even tiny patterns can confuse neural networks, causing objects such as stop signs to be misclassified - a serious safety deficiency in its own right. The current research in Tübingen shows for the first time that algorithms for estimating the motion of objects are also susceptible to such attacks. Yet when used in safety-critical applications such as autonomous vehicles, these systems must be robust against precisely this kind of attack.
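
To make the attack idea concrete, the following Python sketch shows, in schematic form, how an adversarial colour patch could be optimised against a differentiable flow network. Everything in it - the tiny stand-in network, the patch size, the frame dimensions - is an illustrative assumption, not the authors' code from "Attacking Optical Flow": the patch is pasted at the same position in two identical frames, so the true motion is zero, and the optimiser adjusts the patch colours until the network nonetheless reports motion.

```python
# Conceptual sketch only - not the authors' code. A small colour patch is
# pasted at the SAME position in two identical frames (true motion is zero),
# and its colours are optimised so the network nevertheless predicts motion.
# "flow_net" is a tiny stand-in for a real trained flow estimator.
import torch
import torch.nn as nn

flow_net = nn.Conv2d(6, 2, kernel_size=3, padding=1)  # stand-in: 2 RGB frames in, (dx, dy) out
frame1 = torch.rand(1, 3, 64, 64)                     # placeholder camera frame
frame2 = frame1.clone()                               # identical frame -> ground-truth flow is zero

patch = torch.rand(1, 3, 16, 16, requires_grad=True)  # the adversarial colour pattern
optimizer = torch.optim.Adam([patch], lr=0.05)

def paste(frame, p, y=24, x=24):
    out = frame.clone()
    out[:, :, y:y + p.shape[2], x:x + p.shape[3]] = p.clamp(0.0, 1.0)
    return out

for step in range(200):
    f1, f2 = paste(frame1, patch), paste(frame2, patch)  # the patch itself does not move
    predicted_flow = flow_net(torch.cat([f1, f2], dim=1))
    loss = -predicted_flow.abs().mean()  # maximise predicted motion although nothing moves
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("mean spurious flow magnitude:", float(predicted_flow.abs().mean()))
```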

