Answering the call for high performance, high data throughput in automotive applications

Technology News
By eeNews Europe



Embedded designers are increasingly tasked with enabling more functionality in embedded applications to deliver feature-rich and highly interactive user experiences. A good example is the evolution of electronics technology within automobiles. In recent years we have seen a complete transformation in which digital dashboards and infotainment systems that control temperature, entertainment and more have become the norm. The next step will be head-up displays and highly advanced instrument clusters that run apps in the connected car, effectively acting as the smartphone in the car. There will be the familiar controls for radio and navigation, along with new features for self-parking, advanced GPS and more. All of this requires crisp 2D and 3D graphics on the display, which in turn requires fast processing.

High performance as well as cost and space saving

Especially in the automotive industry, the growing demand for infotainment and connectivity drives designers not only towards higher-performance solutions but also towards cost and space savings. Until now, they have been able to take advantage of parallel NOR devices for performance. The industry is transitioning to serial peripheral interface (SPI) memories to take advantage of the low signal count, and systems with a Quad SPI (QSPI) NOR memory can achieve up to 80 MB/s of data throughput using a single double-data-rate (DDR) QSPI memory with a so-called data learning pattern (DLP). DLP is a Spansion-patented technology and, currently, only Spansion QSPI devices with DLP achieve these rates. Two QSPI devices would double the data throughput to 160 MB/s. These SPI memories retain compatibility with the original interface specified over 25 years ago. However, as system-level read throughput demands continue to rise, a fresh look at the embedded memory interface offers a solution.
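As a rough sanity check on these throughput figures, the peak sequential read rate is simply bus width times data rate times clock frequency. The short C sketch below illustrates the arithmetic; the 80 MHz clock assumed for the DDR QSPI case is not stated in the text and was chosen because it reproduces the 80 MB/s figure quoted above.

#include <stdio.h>

/* Peak sequential read throughput in MB/s:
 * data lines x transfers per clock x clock (MHz) / 8 bits per byte.
 * Illustrative sketch; the 80 MHz DDR QSPI clock is an assumption
 * chosen to match the 80 MB/s figure in the text. */
static double peak_mb_per_s(unsigned data_lines, unsigned transfers_per_clock,
                            double clock_mhz)
{
    return data_lines * transfers_per_clock * clock_mhz / 8.0;
}

int main(void)
{
    /* Quad SPI, double data rate, 80 MHz: 4 x 2 x 80 / 8 = 80 MB/s */
    printf("Single DDR QSPI: %.0f MB/s\n", peak_mb_per_s(4, 2, 80.0));
    /* Two QSPI devices in parallel double the effective bus width.  */
    printf("Dual DDR QSPI:   %.0f MB/s\n", peak_mb_per_s(8, 2, 80.0));
    return 0;
}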

A new interface accelerates data throughput

Using a high-speed interface such as the Spansion HyperBus Interface used by Spansion's HyperFlash memory, data throughput can be accelerated to up to 333 MB/s. This is more than four times the throughput of the fastest Quad SPI flash currently available, with one-third the pin count of parallel NOR flash.

The HyperFlash memory pinout overlays nicely onto the dual QSPI pinout, which makes the migration from existing QSPI designs to higher performance as easy as possible and offers a straightforward fallback option. It also allows system applications to be scaled to different levels of flash performance when paired with compatible controllers, giving OEMs the ability to offer different product models with a single design. HyperBus implements a low-pin-count bus interface with a simple read/write protocol that is suitable for both memories and peripheral interfaces. Especially for instrument cluster applications and high-resolution displays with instant-on GUI requirements, this technology strikes a balance between system performance, cost and space efficiency. In combination, Spansion HyperFlash Memory can solve some of the bandwidth issues that have confronted NOR users in the past.

The universal footprint of HyperFlash eases the migration from existing QSPI designs to higher performance and allows system applications to be scaled to different levels of flash performance when paired with compatible controllers, so different product models can be offered with a single design.
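As an illustration of what a "simple read/write protocol" means in practice, the sketch below shows how a controller driver might assemble the 48-bit command/address word that opens every HyperBus transaction. The field layout follows the published HyperBus description, but the bit positions, type names and helper function here are illustrative only and should be verified against the device datasheet.

#include <stdint.h>

/* Illustrative sketch of the HyperBus command/address (CA) phase.
 * Field positions follow the public HyperBus description but should
 * be checked against the actual device datasheet before use. */

typedef enum { HB_WRITE = 0, HB_READ = 1 } hb_rw_t;
typedef enum { HB_MEMORY_SPACE = 0, HB_REGISTER_SPACE = 1 } hb_space_t;
typedef enum { HB_WRAPPED_BURST = 0, HB_LINEAR_BURST = 1 } hb_burst_t;

/* Build the 48-bit CA word for a transaction targeting a 16-bit word
 * address. The controller shifts this out as six bytes, MSB first, at
 * double data rate while CS# is asserted. */
static uint64_t hb_build_ca(hb_rw_t rw, hb_space_t space, hb_burst_t burst,
                            uint32_t word_addr)
{
    uint64_t ca = 0;
    ca |= (uint64_t)rw    << 47;            /* CA47: read (1) / write (0)    */
    ca |= (uint64_t)space << 46;            /* CA46: memory / register space */
    ca |= (uint64_t)burst << 45;            /* CA45: wrapped / linear burst  */
    ca |= (uint64_t)(word_addr >> 3) << 16; /* CA44-16: upper address bits   */
    ca |= (uint64_t)(word_addr & 0x7u);     /* CA2-0: lower column address   */
    return ca;
}

/* A linear read burst then proceeds as: assert CS#, clock out the CA,
 * wait the configured initial latency, and stream one 16-bit word per
 * clock edge until the controller deasserts CS#. */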

Comparing pin count and read throughput

Two of the most significant criteria used to evaluate NOR flash devices are sustained read throughput and the number of pins required to implement the bus interface. Comparing different NOR flash devices and their respective active signal counts, we find that all legacy parallel interfaces require between 30 and 40 pins (see figure 1). The SPI interface has evolved into a 6-pin QSPI variant that has gained favour when enhanced read throughput is required. The HyperBus interface uses only 12 pins and marks a significant improvement, delivering higher data throughput than QSPI while using only 6 pins more.

Figure 1: Comparing the pin count of the different flash memory types.

The HyperBus Interface delivers a substantial improvement in read throughput compared with legacy NOR flash interfaces (see figure 2). With an 80 MB/s read throughput, QSPI has reached performance levels comparable with the asynchronous and page-mode interfaces. Parallel NOR burst-mode offerings come in at around 133 MB/s in environments with a mix of wrapped and continuous read transactions. HyperFlash memories leveraging the Spansion HyperBus Interface set a new standard for performance by delivering 333 MB/s over a 12-pin interface. The 333 MB/s is achieved with the 1.8 V version of the interface; the 3 V version runs at 100 MHz and provides 200 MB/s.
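Applying the same bus-width times data-rate times clock arithmetic as in the earlier sketch reproduces both HyperBus figures. Note that the 166 MHz clock assumed below for the 1.8 V part is not stated in the text and is chosen because it yields the quoted ~333 MB/s, while the 100 MHz clock for the 3 V part is stated above.

#include <stdio.h>

/* Same peak-throughput arithmetic as the earlier QSPI sketch, applied to
 * the two HyperBus voltage options. The 166 MHz clock for the 1.8 V part
 * is an assumption that matches the ~333 MB/s figure in the text. */
static double peak_mb_per_s(unsigned data_lines, unsigned transfers_per_clock,
                            double clock_mhz)
{
    return data_lines * transfers_per_clock * clock_mhz / 8.0;
}

int main(void)
{
    /* 8-bit bus, double data rate, 166 MHz: 8 x 2 x 166 / 8 = 332 MB/s (~333) */
    printf("HyperBus 1.8 V: %.0f MB/s\n", peak_mb_per_s(8, 2, 166.0));
    /* 8-bit bus, double data rate, 100 MHz: 8 x 2 x 100 / 8 = 200 MB/s        */
    printf("HyperBus 3.0 V: %.0f MB/s\n", peak_mb_per_s(8, 2, 100.0));
    return 0;
}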

Figure 2: Compared to legacy NOR flash interfaces, the HyperBus Interface offers a significant increase in read throughput.

Spansion’s current NOR flash family of HyperFlash memories includes 128-Mb, 256-Mb and 512-Mb products. These initial offerings are available with either 1.8-V or 3.0-V operating voltages. Engineering samples of the 512-Mb device are available today, with fully qualified parts available in the third quarter of 2014. The 128-Mb and 256-Mb HyperFlash densities will follow in early 2015. Spansion will develop higher or lower densities depending upon market demand.

Conclusion

The HyperBus Interface was developed to satisfy the need for higher performance while remaining sensitive to the pin-count constraints of modern microcontrollers. The philosophy behind it was to create a simple burst mode, read/write interface and transaction protocol that can be used by both memories and peripherals. The IOs are derived from LPDDR1 for the 1.8-V HyperBus Interface and from legacy NOR for the 3.0-V HyperBus Interface. Nothing exotic has been deployed, just an optimal usage of existing, market-tested signaling technology.

The Spansion HyperBus Interface has the ability to satisfy the memory requirements for both volatile and non-volatile memories in a large swath of high-performance applications. Although Spansion’s focus is to place memory on the HyperBus Interface, the bus protocol is intended to be general purpose, leaving open the possibility for the introduction of non-memory peripheral devices.

The HyperBus technology has major implications for the automotive space. This speed allows for much faster boot time, direct execute-in-place from flash and less code shadowing, reducing the amount of RAM needed. For the consumer, this means more functionality, interactivity and performance out of the applications they’re using.

About the author:

Hiro Ino is Senior Director of Product Line Management, Flash Memory Group, Spansion. Mr. Ino’s career in the hi-tech industry started as a system design engineer on a team developing 3D-graphics supercomputers at Evans & Sutherland. Over his career, his contribution has evolved from engineering and engineering management to business management. His most recent roles include Senior Director of Strategic Business Development at SanDisk, VP of Marketing and Business Development at m-systems (which was acquired by SanDisk), VP of Marketing at T-RAM Semiconductor, a high-profile venture-backed start-up developing a novel high-performance RAM technology, and Director of WW Memory Business at Sony Electronics, developing ultra-high-speed memory for CPU cache applications. Mr. Ino received his degrees in Electrical Engineering and Computer Sciences from the University of California at Berkeley.


