Deep learning method improves environment perception of self-driving cars

May 18, 2020 //By Christoph Hammerschmidt
People, bicycles, cars, or street, sky, grass: which pixels of an image belong to the people and objects in the foreground of a self-driving car's environment, and which represent the urban background scenery? This task, known as panoptic segmentation, is a fundamental problem in many fields, including autonomous driving, robotics, augmented reality, and even biomedical image analysis.
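Panoptic segmentation unifies two outputs: a semantic map that labels every pixel with a class ("stuff" like road and sky, "things" like cars and people) and instance masks that separate individual objects of the same class. A common encoding, used by the COCO panoptic format, packs both into one integer map as `class_id * label_divisor + instance_id`. The following is a minimal illustrative sketch of that merging step, not the actual fusion module of EfficientPS; the function name and the toy class IDs are assumptions for the example.

```python
import numpy as np

def merge_panoptic(semantic, instances, label_divisor=1000):
    """Combine a per-pixel semantic map with instance masks into one
    panoptic map, encoding each pixel as
    class_id * label_divisor + instance_id (instance_id = 0 for stuff).

    `instances` is a list of (class_id, boolean_mask) pairs.
    Illustrative sketch only, not the EfficientPS fusion module.
    """
    # Stuff pixels get instance_id 0.
    panoptic = semantic.astype(np.int64) * label_divisor
    # Overwrite thing pixels with a unique id per instance.
    for inst_id, (class_id, mask) in enumerate(instances, start=1):
        panoptic[mask] = class_id * label_divisor + inst_id
    return panoptic

# Toy 2x3 scene: class 0 = road (stuff), class 1 = car (thing).
semantic = np.array([[0, 0, 1],
                     [0, 1, 1]])
car_mask = semantic == 1
panoptic = merge_panoptic(semantic, [(1, car_mask)])
# Road pixels encode to 0; the single car instance encodes to 1001.
```

Decoding is symmetric: `panoptic // label_divisor` recovers the class and `panoptic % label_divisor` the instance id, which is why a divisor larger than the maximum instance count per image is required.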

Most previous methods that address this problem require large amounts of data and are too computationally intensive for real-world applications such as robotics, where runtime environments are highly resource-constrained. "Our EfficientPS not only achieves high output quality, it is also the most computationally efficient and fastest method," says Valada.

Related articles:

Artificial Intelligence Roadmap: A human-centric approach to AI in aviation

Brainstorm: Report on Artificial Intelligence

Startup improves AI training procedures for autonomous cars
