Ensuring Functional Safety Compliance for ISO 26262

Technology News |
By eeNews Europe



This is the Age of Enlightenment for our vehicles. With advances in technology, vehicles are becoming smarter and more autonomous with each generation. For example, advanced driver assistance systems (ADAS) enable vehicles to make intelligent choices in a wide range of driving situations, increasing our safety. Multiple microcontrollers (MCUs), sensors, and other semiconductors work together to make this functionality possible. All of these components must be verified at the intellectual property (IP), semiconductor, electronic control unit (ECU), and OEM levels, and then tested in manufacturing to ensure that everything works when a buyer drives away from the dealership in her new vehicle.

But after 10 years of operation, how will that same automotive system react to permanent faults (boot-time stuck-at faults, SA0/SA1), single event transient (SET) faults, and radiation-based single event upset (SEU) faults? That is the role of the ISO 26262 standard and functional safety verification: to ensure that the automotive system will behave as expected, even in the face of unplanned or unexpected circumstances. This article discusses efficient methodologies for ensuring functional safety compliance.

Safety Verification Classifies Faults

Automotive designs are generally developed with a specified set of safety goals that assure certain functionality of the system in the event of a fault within the system. These faults have to be treated differently from manufacturing faults. Whereas a manufacturing fault is detected on test equipment at the factory, a safety-related fault must be detected while the device is installed in the running system. When a fault occurs in a non-critical portion of the system, it can be considered safe; if it occurs in a critical portion, it may be dangerous. In the latter case, if the fault is both detected by the safety monitors and corrected by the downstream systems, it can be classified as dangerous but detected. However, if it escapes detection, or is detected but still violates a safety goal, the fault is dangerous. The task of safety verification is to classify faults into the safe, dangerous, and dangerous-detected categories, codify this classification in the safety plan, and then execute a verification program to determine the ratio of dangerous-undetected faults to all dangerous faults. The result sets the automotive safety integrity level (ASIL).
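The classification decision described above can be sketched in a few lines. This is a minimal illustration, not part of any ISO 26262 tool flow; the enum names and the two boolean inputs are assumptions made for the sketch.

```python
from enum import Enum

class FaultClass(Enum):
    SAFE = "safe"                                   # fault in non-critical logic
    DANGEROUS_DETECTED = "dangerous-detected"       # critical, but caught by a safety monitor
    DANGEROUS_UNDETECTED = "dangerous-undetected"   # critical and escapes detection

def classify(in_critical_logic: bool, detected_by_monitor: bool) -> FaultClass:
    """Classify one fault following the decision flow in the text."""
    if not in_critical_logic:
        return FaultClass.SAFE
    if detected_by_monitor:
        return FaultClass.DANGEROUS_DETECTED
    return FaultClass.DANGEROUS_UNDETECTED
```

In a real flow the two inputs would come from fault-simulation results rather than being supplied by hand.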

Figure 1: Typical fault detection circuits

Why Functional Safety Verification Mirrors Functional Verification

Functional safety verification can be considered a mirror of functional verification. In a typical functional verification methodology, the design under test (DUT) is held constant while a wide range of stimulus is applied. Conversely, in a typical safety verification methodology, the stimulus is limited to a few typical sequences while a wide range of faults is applied to the DUT. The technical challenge with safety verification is that the DUT logic can’t actually be changed, because doing so would invalidate the concept of verifying faults in the actual design. It would also invalidate the ISO 26262-required tool confidence level (TCL) assessment of the verification tools used. For these reasons, safety verification must share both the testbench and DUT code with functional verification and execute concurrently with it.

The starting point for safety verification is the set of monitoring points for the fault detection circuits. These points are strobed during the actual design execution so the same effect must be modeled in safety verification. A small set of functional test sequences, sometimes referred to as “smoke tests”, stimulate the DUT during safety verification. Once this environment is set up, the design nodes must be automatically discovered and then collapsed to create a fault dictionary for safety verification.
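Fault collapsing can be illustrated with a toy sketch: structurally equivalent nodes (for instance, a buffer's input and output, where stuck-at faults are indistinguishable) are reduced to one representative before the SA0/SA1 faults are enumerated. The node names and the `equiv_of` map below are hypothetical; real tools derive equivalence automatically from the netlist.

```python
def build_fault_dictionary(nodes, equiv_of):
    """Collapse equivalent nodes to one representative, then enumerate
    one SA0 and one SA1 fault per representative node."""
    # Map each node to its equivalence-class representative; nodes with
    # no entry in equiv_of represent themselves.
    reps = {equiv_of.get(n, n) for n in nodes}
    return [(node, fault) for node in sorted(reps) for fault in ("SA0", "SA1")]
```

For example, collapsing a buffer's output onto its input halves the fault count for that pair of nodes, shrinking the dictionary the simulator must iterate over.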

From this point, the safety verification methodology iterates over the fault dictionary, injecting permanent and SEU faults. For both SA0/SA1 and SEU faults, the inputs are the injection node and the injection time; SET faults also take a hold-time parameter. The simulation proceeds normally up to the injection time. At that point, the logic value at the injection point is changed for SEU, or changed and held for SA0/SA1 and SET. As the simulation proceeds, the logic value at the strobe point in the faulted or “bad” machine is compared to the value at the same strobe point in the unfaulted “good” machine until a difference is detected or the simulation finishes. This process is repeated for the whole fault dictionary.
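The inject-and-compare loop can be sketched on a toy design. Here a hypothetical 4-bit shift register stands in for the DUT, its last stage is the strobe point, and a short repeating bit pattern plays the role of the smoke-test stimulus; real fault simulators of course operate on the actual netlist.

```python
def simulate_with_fault(n_cycles, inject_time, inject_bit, fault):
    """Run a good machine and a bad machine in lockstep on a 4-bit shift
    register, inject the fault into the bad machine, and compare the two
    at the strobe point (the last register stage) every cycle."""
    good = [0, 0, 0, 0]
    bad = [0, 0, 0, 0]
    stim = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # "smoke test" stimulus
    for t in range(n_cycles):
        for reg in (good, bad):
            reg.pop()                        # shift toward the strobe point
            reg.insert(0, stim[t % len(stim)])
        if fault == "SEU" and t == inject_time:
            bad[inject_bit] ^= 1             # one-time bit flip
        elif fault in ("SA0", "SA1") and t >= inject_time:
            bad[inject_bit] = 0 if fault == "SA0" else 1  # held stuck-at value
        if good[-1] != bad[-1]:              # strobe-point comparison
            return ("detected", t)
    return ("undetected", None)
```

With this toy, an SA0 fault at the register input is detected once the corrupted value reaches the output stage, while a short simulation that ends before the fault propagates reports it undetected.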

This is where the work in safety verification begins. As indicated in Figure 2, the detection condition for each fault is reported. Faults that are reported as undetected or potentially detected need further debug before they can be classified.

Undetected faults occur when the simulation finishes without any logic difference detected at the strobe point. Potentially detected faults occur when the logic state at the strobe point becomes unknown (X) or high impedance (Z) due to the fault changing the logic value propagation between the injection and strobe points.

Both undetected and potentially detected faults may be dangerous, so the debug task is to identify the reason for the detection condition. For example, a fault may be undetected if the combinational logic masks it. A fault may also be undetected if the fault node is not controllable from the stimulus, or if the stimulus used fails to exercise the logic at the faulted node. In each case, further simulation or formal analysis is needed. If the fault is masked or not controllable, it may be removed from the ratio calculation as safe. If the analysis shows that the stimulus does cause the fault to propagate but the strobe fails to detect it, the fault is designated as dangerous.
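Combinational masking can be shown with the smallest possible example: an SA1 fault on one input of a 2-input AND gate propagates only when the stimulus sensitizes the faulted pin, that is, when the other input is 1 and the faulted input is 0. The function below is an illustrative sketch of that check, not a tool API.

```python
def and_out(a, b):
    """A 2-input AND gate."""
    return a & b

def fault_propagates(stim_pairs):
    """Return True if any (a, b) stimulus pair makes an SA1 fault on
    input `a` visible at the gate output, i.e. the faulted output
    and_out(1, b) differs from the good output and_out(a, b)."""
    return any(and_out(1, b) != and_out(a, b) for a, b in stim_pairs)
```

A stimulus set that never drives `b` high leaves the fault masked, so the fault would be reported undetected even though the node itself is faulted.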

Faults that are detected at the strobe point (the third condition) may also be designated as dangerous, regardless of whether the detection occurs ON_TIME, DELAYED, or PREMATURE. The actual designation depends on the processing between the strobe point and the safety system output. For example, if the ECC fails to correct the fault due to accumulated errors, or if the time between fault injection and detection by the comparator exceeds the safety goal, the fault is dangerous-undetected. A SystemVerilog, PSL, or OVL assertion at the safety system output (see Figure 1) can help automate the designation of detected faults.
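The temporal designations can be sketched as a function of injection time, detection time, and the safety goal's detection deadline (related, in ISO 26262 terms, to the fault-tolerant time interval). The labels and thresholds here are assumptions modeled on the Figure 2 semantics, not the standard's exact definitions.

```python
def temporal_class(inject_time, detect_time, deadline):
    """Classify a fault's detection timing against the safety goal.
    `deadline` is the maximum tolerable latency between injection
    and detection; `detect_time` is None when never detected."""
    if detect_time is None:
        return "dangerous-undetected"
    if detect_time < inject_time:
        return "premature"           # monitor fired before the fault occurred
    latency = detect_time - inject_time
    return "on_time" if latency <= deadline else "delayed"
```

A delayed detection still needs engineering judgment: if the latency exceeds what the safety goal allows, the fault is treated exactly like an undetected one.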

The goal of the safety verification process is to identify the dangerous-detected and dangerous faults. The ratio of dangerous-detected faults to the total of dangerous and dangerous-detected faults in the system is then used to calculate the ASIL.
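The ratio itself is simple arithmetic. The sketch below assumes each fault has already been given one of the three labels from the classification step; mapping the resulting coverage number to an ASIL target follows the metrics tables in ISO 26262, which are outside this sketch.

```python
def diagnostic_coverage(fault_labels):
    """Ratio of dangerous-detected faults to all dangerous faults
    (detected plus undetected). Safe faults are excluded from the ratio."""
    detected = sum(1 for f in fault_labels if f == "dangerous-detected")
    dangerous = sum(1 for f in fault_labels if f.startswith("dangerous"))
    return detected / dangerous if dangerous else 1.0
```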

Figure 2: Temporal fault classification semantics

Build Safety Verification Into Functional Verification Flow

In a relatively small design—for example, under a few hundred thousand logic gates plus any analog circuitry—it may be possible to run safety verification using sampled input for the testbench and to analyze the results manually. As system complexity increases, however, a more efficient methodology is needed. Safety verification should become part of the functional verification flow so that the sophisticated testbenches of modern functional verification can be used to control fault injection and to support the debug process. Similarly, the same simulator should be used, to eliminate the efficiency loss of debugging result differences caused by a modified DUT or a different simulation engine.

Given that the safety simulation process may involve hundreds of thousands or even millions of temporal faults, automated regression verification as established by metric-driven verification can both increase the efficiency of identifying the undetected and potentially detected fault simulations and automatically segregate safe from unsafe faults. Taken together, these techniques can reduce the effort for safety verification by up to 50%.
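Aggregation across regression runs can be sketched as a merge that keeps the best outcome seen per fault, so that only faults remaining undetected or potentially detected in every run surface for manual debug. The outcome labels and their ranking below are illustrative, not a specific tool's report format.

```python
from collections import Counter

def aggregate_runs(run_results):
    """Merge per-fault outcomes from multiple regression runs. A fault
    detected in any run counts as detected overall; otherwise the best
    (lowest-ranked) outcome seen is kept. Returns the merged map and a
    tally of outcomes for the regression dashboard."""
    rank = {"detected": 0, "potentially-detected": 1, "undetected": 2}
    merged = {}
    for run in run_results:
        for fault, outcome in run.items():
            if fault not in merged or rank[outcome] < rank[merged[fault]]:
                merged[fault] = outcome
    return merged, Counter(merged.values())
```

The tally feeds the metric-driven flow: only the residual undetected and potentially detected buckets need engineer attention.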

Figure 3: Functional verification and safety verification flow

Summary

Functional safety verification enables engineers to create more dependable systems. The techniques associated with this new verification requirement are outlined in the ISO 26262 standard, but the skills that a verification team develops by incorporating functional safety are transferable to other applications as well. With complex systems like ADAS being developed today, automotive verification engineers are leading the electronics industry into the age of dependable design.

About the authors:

Adam Sherer is Verification Product Management Group Director at Cadence Design Systems, Inc.
John Rose is Product Engineering Architect at Cadence Design Systems, Inc.
