Automotive perception software can deal with the unexpected

January 07, 2019 // By Christoph Hammerschmidt
With an AI-based software solution, Israeli technology company Vayavision is tackling a decisive part of the computational perception of autonomous vehicles (AV): raw data fusion. The software creates an accurate 3D environmental model of the area around the self-driving vehicle. Its distinguishing feature is superior detection of unexpected objects – a challenge well known to any human driver. With this capability, it could significantly increase driving safety.

The vendor claims that VayaDrive 2.0 breaks new ground in several categories of autonomous vehicle (AV) environmental perception, including raw data fusion, object detection, classification, SLAM, and movement tracking. With these enhanced capabilities, the software provides crucial information about dynamic driving environments, enabling safer and more reliable autonomous driving and making better use of cost-effective sensor technologies.

The software combines artificial intelligence (AI), analytics, and computer vision with computational efficiency to scale up the performance of AV sensor hardware. VayaDrive 2.0 is compatible with a wide range of cameras, lidars, and radars.

The software addresses a key challenge facing the industry: the detection of “unexpected” objects. Roads are full of objects that are absent from training data sets, even when those sets are captured over millions of kilometers of driving. As a result, systems based mainly on deep neural networks fail to detect the unexpected.

No single type of sensor is sufficient for object detection: cameras do not see depth, and distance sensors such as lidars and radars have very low resolution. VayaDrive 2.0 upsamples the sparse samples from distance sensors and assigns distance information to every pixel of the high-resolution camera image. This gives the autonomous vehicle crucial information about an object’s size and shape, lets it separate out every small obstacle on the road, and allows it to accurately define the shapes of vehicles, humans, and other objects on the road.
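To illustrate the general idea of this kind of pixel-level fusion, the sketch below projects sparse lidar points into a camera image with a generic pinhole model and interpolates a depth value for every pixel. The function name, the pinhole projection, and the SciPy-based interpolation are illustrative assumptions for a minimal example; they are not details of Vayavision’s implementation.

```python
# Minimal sketch: dense per-pixel depth from sparse lidar returns.
# Assumes the lidar points are already expressed in the camera frame.
import numpy as np
from scipy.interpolate import griddata

def upsample_depth(lidar_xyz, K, image_shape):
    """Assign an interpolated depth value to every camera pixel.

    lidar_xyz   : (N, 3) lidar points in the camera frame (metres)
    K           : (3, 3) camera intrinsic matrix
    image_shape : (height, width) of the camera image
    """
    h, w = image_shape

    # Keep only points in front of the camera.
    pts = lidar_xyz[lidar_xyz[:, 2] > 0.0]

    # Pinhole projection: pixel coordinates = K @ (x, y, z), divided by z.
    uvw = (K @ pts.T).T
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    z = pts[:, 2]

    # Discard projections that fall outside the image.
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[inside], v[inside], z[inside]

    # Interpolate the sparse depth samples over the dense pixel grid;
    # fall back to nearest-neighbour outside the convex hull of the samples.
    grid_u, grid_v = np.meshgrid(np.arange(w), np.arange(h))
    dense = griddata((u, v), z, (grid_u, grid_v), method="linear")
    nearest = griddata((u, v), z, (grid_u, grid_v), method="nearest")
    dense = np.where(np.isnan(dense), nearest, dense)

    return dense  # (h, w) array: one depth estimate per camera pixel
```

A production system would go further, for example by fusing radar returns, respecting depth discontinuities at object edges, and running at frame rate, but the sketch shows how sparse range data can be upsampled so that every camera pixel carries a distance estimate.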

“VayaDrive 2.0’s raw data fusion architecture offers automotive players a viable alternative to inadequate ‘object fusion’ models that are common in the market,” said Youval Nehmadi, CTO and co-founder of Vayavision. “This is critical to increasing detection accuracy and decreasing the high rate of false alarms that prevent self-driving vehicles from reaching the next level of autonomy.”  

More information: https://vayavision.com/

