Robust encapsulation of virtualised ECUs
The approach Renesas takes in its R-Car products is to implement extensive hardware virtualisation support that goes far beyond what ordinary PCs running virtual machines use today.
PC virtualisation works mainly by virtualising the memory used by the various applications: a program accesses a virtual memory address, which dedicated hardware then translates into a physical address. This translation unit is controlled by an operating system (or hypervisor) that knows exactly which access rights the program in question has.
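The translation-plus-rights-check described above can be sketched in a few lines. This is a deliberately simplified model, not the R-Car or any real MMU implementation: the page size, the page-table layout, and the permission strings are all illustrative assumptions.

```python
# Minimal sketch of MMU-style address translation with access rights.
# A page-table entry maps a virtual page number (vpn) to a physical
# page number (ppn) plus the permissions the program holds on it.

PAGE_SIZE = 4096

# Hypothetical page table for one program: vpn -> (ppn, permissions)
page_table = {
    0: (7, {"r", "w"}),   # a read/write data page
    1: (3, {"r"}),        # a read-only page
}

def translate(vaddr, access, table):
    """Translate a virtual address, raising on a rights violation."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in table:
        raise MemoryError("page fault: unmapped page")
    ppn, perms = table[vpn]
    if access not in perms:
        raise PermissionError(f"access violation: no '{access}' right")
    return ppn * PAGE_SIZE + offset

print(translate(0x10, "w", page_table))  # 7 * 4096 + 0x10 = 28688
```

The key point for what follows is that the table itself lives under the control of the operating system or hypervisor, so the program can never grant itself rights it was not given.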
But in a highly integrated SoC like the R-Car, far more hardware can access the RAM independently. A video input, for example, could overwrite any program's RAM if no protection mechanisms were in place, and video decoders and audio DSPs can do the same. And if malware fraudulently obtained the rights to manipulate output addresses, it could export every secret in RAM through an HDMI interface.
The R-Car solves this problem with a multi-level memory protection concept. It includes global access rights for the hardware accelerators concerned, and each of these modules can be placed in a virtual address space. Once there, it is controlled by a system MMU, which translates its memory requests into physical addresses in the same way as the CPU MMU. The same access rights apply as for CPU-based software, and the system MMU is controlled entirely by the secure operating system (Fig. 2). Combined with the comprehensive secure-boot concept, this makes it practically impossible to manipulate the operating system later on.
However, memory protection alone would be insufficient, as the various encapsulated programs still need access to the hardware accelerators in the R-Car. For example, both the infotainment application and the instrument cluster need to display 3D graphics on their respective screens. But since the powerful 3D GPU is by far the largest component on the SoC, it is not feasible simply to double it or to give half a GPU to each application: the first would be too expensive and the second ineffective, especially when the two applications have different performance requirements. And what happens if a third application needs the GPU?