While everyone agreed on the importance of building a safe Level 4 (L4) autonomous car, many AV developers are still searching for the “North Star” to guide their test methodology. We brought together a broad range of voices—senior leaders from an OEM, suppliers, and a simulation startup—to share their perspectives:
Matthijs Klomp, Vehicle Motion and Climate Solution Architect, Volvo
Thomas Herpel, Test System Development Manager, Zukunft Mobility (a company of ZF)
Paul Borbely, Tools Development Manager, Valeo
Celite Milbrandt, Founder and CEO, monoDrive
Here are the three major takeaways I got from the discussion:
No industry organization has laid out a ratified testing methodology for L4+ autonomous vehicles. The consensus among the panelists is that although regulation lags behind, testing is critical to building safe autonomous cars. Today, automotive companies are very good at testing individual components, whether a camera, radar, or LiDAR. This has proven effective for building L1 or L2 autonomous vehicles.
Such systems have helped human drivers drive more safely. However, if we want truly autonomous vehicles on the road, we need to figure out how to test complex systems in which many advanced sensors share information with a processing system that uses artificial intelligence (AI) to identify every object around the car and make the safest decision. The panelists reiterated the complexity of these systems.
It’s important to understand that AI may interpret information differently based on very subtle changes in the camera image, radar data, or LiDAR point cloud. When the sensors are integrated with a central computing component, OEMs and suppliers will need to develop, test, and validate that “brain” and its AI algorithms so the vehicle can correctly identify roadblocks, pedestrians, other cars, and the surrounding environment, and in turn decide the safest path forward.
However, simply extending the current component-level testing to a full autonomous-driving stack is very difficult for several reasons:
You will have more sensors to test.
You will need to test the system, not just components.
You will need to test and verify the algorithms in different, nearly countless scenarios and environments.
All of this escalates development time and cost exponentially under the current approach; it isn’t scalable. Physical testing alone isn’t enough, so virtual testing will be required.
SIL, HIL, road testing: What’s the split?
Though the industry is starting to recognize the need for virtual testing, physical testing will never go away. The big question is what can be tested in a simulation software environment and what must be left to physical testing.
While the exact split remains uncertain, perception testing starts in simulation because many test scenarios are hard to duplicate in the real world. The model must first be trained to correctly identify objects; then it is tested in different conditions such as rain, varying daylight, etc. If an engineer can run such scenarios in software, it makes sense to move many tests to simulation-based, or software-in-the-loop (SIL), testing.
There’s still much hesitation to shift more testing to SIL, though, because most engineers don’t trust simulation. The key is high-fidelity, realistic simulation tools whose output can be correlated with real road data. With such tools, an engineer can write thousands of scenarios, then test, verify, and validate a subset of them against real-world data.
Hardware-in-the-loop (HIL) testing was also an important thread during the panel discussion (Fig. 2). HIL uses the same control hardware and software that will run in the vehicle, but in a controlled lab environment. HIL testing also lets design teams inject data at different points in the system to increase test coverage, and to include actual hardware timing and behavior to validate sensor faults, bitstream errors, and communication-bus errors in real time. HIL remains a required step in the validation process and must be completed before the vehicle is on the road, and it is perhaps now more important than ever.
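As a rough sketch of the fault-injection idea described above—not any particular HIL tool’s API; the function name and parameters here are purely illustrative—a test harness might corrupt a sensor frame before it reaches the perception stack:

```python
import random

def inject_bit_errors(frame: bytes, error_rate: float, seed: int = 0) -> bytes:
    """Flip each bit of a sensor frame with probability `error_rate`,
    emulating the bitstream corruption a HIL rig might inject.

    Hypothetical helper for illustration; a real rig would inject
    faults in hardware, on the bus, with controlled timing.
    """
    rng = random.Random(seed)  # fixed seed keeps test runs reproducible
    out = bytearray(frame)
    for i in range(len(out)):
        for bit in range(8):
            if rng.random() < error_rate:
                out[i] ^= 1 << bit  # flip this bit
    return bytes(out)

# error_rate=1.0 flips every bit, so the result is the bitwise inverse:
print(inject_bit_errors(b"\x00\x0f", error_rate=1.0))  # b'\xff\xf0'
```

Sweeping `error_rate` lets the same perception-stack test run under progressively harsher corruption, which is the kind of coverage that is hard to produce safely on a real road.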
Test coverage: How much testing is enough?
At some point, a company has to declare, “We’ve done enough testing and we trust that this vehicle is safe.” Most automotive teams are conscientious and skilled, but some companies may face time-to-market pressures or knowledge gaps that lead them to implement an incomplete test plan. How does that sound? Scary, right?
So, how should we approach it? A virtual world that mimics the real-world environment accurately and precisely would let test engineers generate test scenarios, run them, and cover the most critical cases to validate the safety of the cars. Such a high-fidelity simulation environment lets companies increase test coverage by generating training and test data through slight modifications of the environment.
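To illustrate how slight environment modifications multiply coverage, a scenario sweep can be sketched as a parameter product. Everything here is a toy example—the parameter names and values are invented, and a real simulator exposes far richer controls:

```python
from itertools import product

# Hypothetical environment parameters; real tools offer many more knobs.
WEATHER = ["clear", "rain", "fog"]
TIME_OF_DAY = ["dawn", "noon", "dusk", "night"]
PEDESTRIAN_COUNT = [0, 2, 8]

def generate_scenarios():
    """Yield one scenario description per combination of parameters."""
    for weather, time_of_day, pedestrians in product(
        WEATHER, TIME_OF_DAY, PEDESTRIAN_COUNT
    ):
        yield {
            "weather": weather,
            "time_of_day": time_of_day,
            "pedestrians": pedestrians,
        }

scenarios = list(generate_scenarios())
print(len(scenarios))  # 3 * 4 * 3 = 36 scenarios from three short lists
```

Three short lists already yield 36 distinct scenarios; each additional parameter multiplies the count, which is exactly why this kind of sweep is practical in simulation and hopeless on a physical test track.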
Perception is all about identifying everything within “sight.” All elements need to be identified correctly. To do this properly in simulation, you need to simulate not only the visual world, but also the materials of the real world and the behavior of the environment and other intelligent actors (for example, whether people are walking, biking, or driving). We can’t forget about animals and scooters. You also need to simulate the vehicle dynamics and the behavior of all of the sensors in the vehicle.
With all of this in simulation, we can test the perception and planning of the vehicle, but we still need to test the controls. We need to be sure that when a decision is made to steer left, steer right, or stop, the car responds quickly and reliably. That will require a totally different, yet better-understood, set of simulation tools, and yes, more HIL testing.
So, will comprehensive simulation eventually take hold?
Increasing simulation usage, testing sensor fusion, and boosting test coverage in SIL all sound great. So what’s holding the industry back? Much of the industry still perceives simulation as not “good enough” to reproduce the real-world environment. And the lack of collaboration between OEMs and Tier 1 suppliers doesn’t help either side convince the other.
We, as the automotive test community, have an opportunity to educate the entire industry on the latest simulation tools and to encourage collaboration. Everyone wants autonomous vehicles to be safer than human drivers, and we’re well on our way. I was encouraged to hear first-hand that some in the automotive industry see the need to replace the old perception with the new. The test community is coming together and tackling the challenges of getting safe, reliable autonomous vehicles on the road.
About the author:
Jamie Smith is Director of Global Automotive Strategy at National Instruments – www.ni.com
This article first appeared on ElectronicDesign – www.electronicdesign.com