Autonomous cars learn to drive with foresight

October 29, 2019 //By Christoph Hammerschmidt
Good drivers anticipate dangerous situations and adjust their driving before things become critical. Researchers at the University of Bonn (Germany) now want to teach this ability to autonomous cars.

An empty street, a row of parked cars at the edge: nothing that calls for caution. But wait: doesn't a side street open up ahead, half hidden by the parked cars? Maybe I'd better take my foot off the gas; who knows whether someone will come from the side. When driving, we constantly encounter situations like this that demand special care. Interpreting them correctly and drawing the right conclusions requires a lot of experience. Self-driving cars, in contrast, sometimes behave like a student driver in their first lesson. Scientists now want to teach them a more anticipatory driving style.

Computer scientist Professor Dr. Gall heads the Computer Vision group at the University of Bonn, which is researching a solution to this problem together with colleagues from the Institute of Photogrammetry and the Autonomous Intelligent Systems group. At the International Conference on Computer Vision in Seoul (November 1), the scientists will present a first step towards this goal. "We have further developed an algorithm that completes and interprets lidar data," he explains. "This enables the car to adapt to possible dangers at an early stage."

Lidar is a rotating laser scanner mounted on the roof of most autonomous cars. Per revolution, the system records the distance to around 120,000 points around the vehicle. Because of the high quality of the data it generates, lidar is considered the “gold standard” among surround-sensing technologies for autonomous vehicles.
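
To make the figure of roughly 120,000 points per revolution concrete, here is a minimal sketch of how raw lidar returns (a range per beam and azimuth step) are usually converted into a 3D point cloud. The beam count, angle ranges, and function name are illustrative assumptions, not details from the Bonn system.

```python
import numpy as np

def ranges_to_points(ranges, azimuths, elevations):
    """Convert lidar range readings (meters) and beam angles (radians)
    into 3D points in the sensor frame. All arrays share the same shape."""
    x = ranges * np.cos(elevations) * np.cos(azimuths)
    y = ranges * np.cos(elevations) * np.sin(azimuths)
    z = ranges * np.sin(elevations)
    return np.stack([x, y, z], axis=-1)  # shape (N, 3)

# Hypothetical 64-beam scanner with ~2000 azimuth steps per rotation:
# 64 * 2000 returns, on the order of the 120,000 points quoted above.
beams = np.deg2rad(np.linspace(-24.0, 2.0, 64))          # assumed vertical beam angles
steps = np.linspace(0, 2 * np.pi, 2000, endpoint=False)  # one full rotation
elev, azim = np.meshgrid(beams, steps)
rng = np.full(elev.shape, 20.0)                          # dummy 20 m returns
cloud = ranges_to_points(rng.ravel(), azim.ravel(), elev.ravel())
print(cloud.shape)  # (128000, 3)
```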

The problem is that the spacing between measuring points grows with distance from the sensor. Even for a human, it is therefore hardly possible to obtain a correct picture of the surroundings from a single lidar scan, i.e. the distance measurements of a single revolution. "A few years ago, the Karlsruhe Institute of Technology (KIT) recorded large quantities of lidar data, a total of 43,000 scans," explains Dr. Jens Behley of the Institute of Photogrammetry. "We have now taken sequences of several dozen scans and superimposed them." The data obtained in this way also contains points that the sensor only recorded once the car had driven a few dozen meters further. Put simply, they show not only the present, but also the future.
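
The article does not detail how the superimposition is done, but the standard approach is to transform each scan into a common world frame using the vehicle's pose for that scan and then merge the points. The sketch below assumes poses are already available (e.g. from odometry or SLAM); function and variable names are placeholders.

```python
import numpy as np

def superimpose_scans(scans, poses):
    """Aggregate a sequence of lidar scans into one dense point cloud.

    scans : list of (N_i, 3) arrays, points in each scan's sensor frame
    poses : list of (4, 4) arrays, sensor-to-world transforms per scan
            (assumed to come from the vehicle's odometry / SLAM system)
    """
    merged = []
    for pts, pose in zip(scans, poses):
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # to homogeneous coordinates
        merged.append((homo @ pose.T)[:, :3])                # transform into the world frame
    return np.vstack(merged)  # dense cloud covering the whole driven stretch
```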

"These superimposed point clouds contain important information, such as the geometry of the scene and the spatial extent of the objects it contains, that is not available in a single scan," stresses Martin Garbade, who is currently doing his doctorate at the University of Bonn's Institute of Computer Science. "In addition, we have labeled each individual point in them - for example: there is a sidewalk, there is a pedestrian, and there is a motorcyclist back there." The scientists fed their software with data pairs: a single lidar scan as input and the corresponding superimposed data, including semantic information, as the desired output. They repeated this for several thousand such pairs.
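
To illustrate the training setup described above, here is a minimal sketch of how such input/output pairs could be organized: each example pairs a single sparse scan with the superimposed, per-point-labeled cloud of the same area. The class structure, names, and label encoding are assumptions for illustration, not the researchers' actual code or network architecture.

```python
import numpy as np

class ScanCompletionDataset:
    """Yields (input, target) pairs: a single sparse scan as input and the
    superimposed, per-point-labeled cloud of the same area as the target."""

    def __init__(self, single_scans, dense_clouds, dense_labels):
        assert len(single_scans) == len(dense_clouds) == len(dense_labels)
        self.single_scans = single_scans  # list of (N_i, 3) arrays: one lidar revolution each
        self.dense_clouds = dense_clouds  # list of (M_i, 3) arrays: superimposed sequences
        self.dense_labels = dense_labels  # list of (M_i,) class ids, e.g. 0=road, 1=sidewalk, ...

    def __len__(self):
        return len(self.single_scans)

    def __getitem__(self, idx):
        x = self.single_scans[idx]                            # what the sensor sees right now
        y = (self.dense_clouds[idx], self.dense_labels[idx])  # what the model should infer
        return x, y
```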

