FPGAs provide platform for driver assistance

28 April 2011

Figure 1: Projected Growth of Camera Usage in Advanced Driver Assistance Applications

In the last five years, the automotive industry has made remarkable advances in driver assistance (DA) systems that truly enrich the driving experience and provide drivers with invaluable information about the road around them. This article by Paul Zoratti looks at how automotive electronics design teams around the world are leveraging FPGAs to quickly bring new DA innovations to market.

Since the early 1990s, developers of advanced DA systems have striven to provide a safer, more convenient driving experience.

Over the past two decades we’ve seen initial deployment of DA features such as ultrasonic park assist, adaptive cruise control and lane departure warning systems in high-end vehicles.

Recently, automotive manufacturers have added rear-view cameras, blind-spot detection and surround-vision systems as options on a variety of vehicle platforms.

Except for ultrasonic park assist, deployment volumes for DA systems have been limited. However, research firm Strategy Analytics forecasts that DA system deployment will rise dramatically over the next decade.

In addition to government legislation and strong consumer interest in safety features, innovations in remote sensors, and in the processing algorithms that extract and interpret critical information from them, are driving an uptake in DA system deployment.

Over time, these driver assistance systems will become more sophisticated and move from high-end to mainstream vehicles thanks in large part to FPGA-based processing platforms.

DA sensing technology trends

Sensor research and development activities have leveraged adjacent markets such as cell-phone cameras to produce devices that not only perform in the automotive environment, but also meet strict cost limits.

Similarly, developers have refined complex processing algorithms using PC-based tools and are transitioning them to embedded platforms.

While ultrasonic sensing technology has led the market, IMS Research forecasts (see Figure 1) show camera sensors dominating in the coming years. A unique attribute of camera sensors is the value of both their raw and processed outputs. Raw video from a camera can be directly displayed for a driver to identify and assess hazardous conditions, something not possible with other types of remote sensors (for example, radar).

Alternatively (or even simultaneously), the video output can be processed using image analytics to extract key information such as the location and motion of pedestrians. Developers may further expand this “dual-use” concept of camera sensor data by bundling multiple consumer features based on a single set of cameras, as illustrated in Figure 2.

From such applications, it’s possible to draw a number of conclusions regarding the requirements of suitable processing platforms for camera-based DA systems:

• They must support both video processing and image processing.

• They must provide parallel data paths for algorithms associated with features that will run concurrently.

• Given that many new features will require megapixel image resolution, connectivity and memory bandwidth will be just as critical as raw processing power.

FPGAs: meeting DA processing platform requirements

Consider a wide-field-of-view, single-camera system that incorporates a rear cross-path warning feature. The system’s intent is to provide a distortion-corrected image of the area behind the vehicle. In addition, object detection and motion-estimation algorithms will generate an audible warning if an object is entering the projected vehicle path from the side.

Figure 2: Bundling of Multiple Features on a Single Set of Camera Sensors

The functional diagram in Figure 3 illustrates how the camera signal is split between the video- and image-processing functions. The raw processing power needed to perform these functions can quickly exceed what’s available in a serial digital signal processor (DSP). Clearly, parallel processing along with hardware acceleration is a viable solution.

FPGAs offer highly flexible architectures to address various processing strategies. Within the FPGA fabric, it is a simple matter to split the camera signal to feed independent video and image processing IP blocks. Unlike in serial processor implementations, which must time-multiplex resources across functions, the FPGA can execute and clock processing blocks independently.
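As a loose software analogy (Python threads standing in for fabric blocks; this is not FPGA code), the fan-out looks like the sketch below: each consumer works on its own copy of the stream concurrently, instead of time-sharing a single serial processor. All names here are illustrative.

```python
# Conceptual model of the camera-signal split: one source fans frames out to
# a display path and an analytics path that run independently.
import threading
import queue

display_q = queue.Queue()    # stands in for the video-processing data path
analytics_q = queue.Queue()  # stands in for the image-processing data path

def camera_source(num_frames):
    """Fan each incoming frame out to both processing paths."""
    for frame_id in range(num_frames):
        display_q.put(frame_id)
        analytics_q.put(frame_id)
    display_q.put(None)      # end-of-stream sentinels
    analytics_q.put(None)

def display_path():
    while (frame := display_q.get()) is not None:
        pass  # stands in for distortion correction and display formatting

def analytics_path():
    while (frame := analytics_q.get()) is not None:
        pass  # stands in for object detection and motion estimation

workers = [threading.Thread(target=f) for f in (display_path, analytics_path)]
for w in workers:
    w.start()
camera_source(3)
for w in workers:
    w.join()
print("both paths consumed the stream independently")
```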

What’s more, should it become necessary to make a change in the processing architecture, the FPGA’s ability to reprogram hardware blocks surpasses solutions based on specialised application-specific standard products (ASSPs) and application-specific integrated circuits (ASICs), giving FPGA implementations a large advantage in anticipating the future evolution of advanced algorithms.

For computationally intensive processing, FPGA devices such as the new Xilinx Automotive (XA) Spartan-6 family offer up to 132 independent multiply-and-accumulate (MAC) units with pre-adders.

Another benefit of FPGA implementation is device scalability. As OEMs look to bundle more features, the processing needs will rise. For example, the rear-view camera may need to host a monocular ranging algorithm to provide drivers with information on object distance. The added functionality would require yet another parallel-processing path.

Implementing this in a specialised ASIC or ASSP could be problematic, if not impossible, unless the designers made provisions for such expansion ahead of time.

Attempting to add this functionality to a serial DSP could require completely re-architecting the software design even after moving to a more powerful device in the family (if that is feasible at all). By contrast, an FPGA-based implementation allows you to add the new functional block in previously unused FPGA fabric, leaving existing blocks virtually intact. Even if the new function requires more resources than are available in the original device, part/package combinations frequently support moving to a denser device (that is, one with more processing resources) without the need to redesign the circuit board or existing IP blocks.

Finally, the reprogrammable nature of the FPGA offers “silicon reuse” for mutually exclusive DA functions. Going back to the rear-looking camera example, the features discussed are useful while a vehicle is backing up, but an FPGA-based system could leverage the same sensor and processing electronics while the vehicle is moving forward, with a feature like blind-spot detection. In this application, the system analyses the camera image to determine the location and relative motion of detected objects. Since this feature and its associated processing functions are not required at the same time as the backup feature, the system can reconfigure the FPGA fabric within several hundred milliseconds based on the vehicle state. This allows complete re-use of the FPGA device to provide totally different functionality at very little cost.
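A minimal sketch of that state-driven silicon reuse follows, assuming a hypothetical load_bitstream() hook for whatever configuration mechanism the platform actually provides; the file names and functions are placeholders, not a real API.

```python
# Sketch of vehicle-state-driven FPGA reuse. The bitstream file names and the
# load_bitstream() function are hypothetical placeholders, not a real API.
BITSTREAMS = {
    "reverse": "rear_crosspath_warning.bit",  # backup-oriented features
    "forward": "blind_spot_detection.bit",    # forward-driving features
}

current_mode = None

def load_bitstream(path):
    """Placeholder for the platform's actual reconfiguration mechanism."""
    print(f"reconfiguring FPGA from {path} (a few hundred milliseconds)")

def on_gear_change(gear):
    """Reconfigure only when the required feature set actually changes."""
    global current_mode
    mode = "reverse" if gear == "R" else "forward"
    if mode != current_mode:
        load_bitstream(BITSTREAMS[mode])
        current_mode = mode

on_gear_change("R")  # backing up: rear cross-path warning loads
on_gear_change("D")  # moving forward: blind-spot detection loads
```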

Meeting DA external memory bandwidth requirements

In addition to raw processing performance, camera-based DA applications require significant external memory access bandwidth. The most stringent requirements will come from multi-camera systems with centralised processing, for example a four-camera surround-view system.

Assuming four megapixel imagers (1280 x 960 pixels each), 24-bit colour processing and performance of 30 frames per second (fps), just storing the images in external buffers requires 3.6 Gbps of memory access bandwidth.

Considering that images may need to be simultaneously read and written, the requirement doubles to 7.2 Gbps. Assuming an 85% read/write burst efficiency increases the requirement to 8.5 Gbps. Note that this estimate does not include other interim image storage or code access needs.
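As a quick sanity check, the arithmetic behind these figures can be reproduced in a few lines of Python (the text rounds up at each step, so the exact values land slightly below the quoted ones):

```python
# Surround-view memory bandwidth estimate: four cameras, 1280 x 960 imagers,
# 24-bit colour, 30 fps, 85% read/write burst efficiency.
CAMERAS = 4
WIDTH, HEIGHT = 1280, 960
BITS_PER_PIXEL = 24
FPS = 30

# Write-only bandwidth: just storing each incoming frame.
write_gbps = CAMERAS * WIDTH * HEIGHT * BITS_PER_PIXEL * FPS / 1e9
print(f"store only:        {write_gbps:.2f} Gbps")   # 3.54 (3.6 in the text)

# Frames are read back for processing while new ones arrive.
rw_gbps = 2 * write_gbps
print(f"simultaneous R/W:  {rw_gbps:.2f} Gbps")      # 7.08 (7.2 in the text)

# Derate for imperfect burst efficiency.
BURST_EFFICIENCY = 0.85
print(f"at 85% efficiency: {rw_gbps / BURST_EFFICIENCY:.2f} Gbps")  # ~8.3-8.5
```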

Clearly, camera-based DA applications are memory bandwidth-intensive.

These systems also commonly require memory controllers; however, adding one in a cost-effective manner requires efficient system-level design. Again, developers can leverage the FPGA’s flexibility to meet this need. Spartan-6 devices offer two hardened memory controller blocks (MCBs) that designers can configure for 4-, 8- or 16-bit DDR, DDR2, DDR3 or LPDDR memory interfaces. They can clock the MCBs at up to 400 MHz, providing 12.8 Gbps of memory access bandwidth to a 16-bit-wide memory device. Furthermore, with two MCBs, the raw bandwidth doubles to 25.6 Gbps. The two MCBs can run independently or work together, using FPGA fabric to create a virtual 32-bit-wide interface.
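The peak figure follows directly from the clock rate, the double-data-rate transfers and the bus width; a short check:

```python
# Peak MCB bandwidth: a 16-bit DDR-class interface transfers on both edges.
MCB_CLOCK_HZ = 400e6     # 400 MHz memory clock
BUS_WIDTH_BITS = 16      # 16-bit-wide memory device
TRANSFERS_PER_CLOCK = 2  # double data rate

per_mcb_gbps = MCB_CLOCK_HZ * TRANSFERS_PER_CLOCK * BUS_WIDTH_BITS / 1e9
print(f"one MCB:  {per_mcb_gbps:.1f} Gbps")      # 12.8 Gbps
print(f"two MCBs: {2 * per_mcb_gbps:.1f} Gbps")  # 25.6 Gbps
```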

In short, FPGA memory controllers provide customised external memory interface design options to meet DA bandwidth needs and optimise all aspects of the cost equation (memory device type, number of PCB layers, etc.).

Figure 3: Rear-view Camera with Rear Cross-Path Warning Functional Block Diagram

DA image processing

In addition to external memory needs, camera-based DA processing can benefit from on-chip memory that serves as line buffers for processing streaming video or analysing blocks of image data. Bayer transform, lens distortion correction and optical-flow motion analysis are examples of functions that require video line buffering. For a brief quantitative analysis, consider a Bayer transform function using 12-bit-pixel Bayer-pattern intensity information to produce 24-bit colour data. Implemented as a raw streaming process, a bi-cubic interpolation requires buffering four lines of image data. Packing the 12-bit intensity data into 16-bit locations requires approximately 20.5 kbits of storage per 1280-pixel line, or 82 kbits for four lines of data.
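The same buffer sizing can be worked through numerically; the final step, mapping the buffer onto the 18-kbit block RAMs discussed next, is our own rough extrapolation and ignores port-width and organisation constraints:

```python
# Line-buffer sizing for the Bayer (demosaic) example above.
import math

LINE_WIDTH_PIXELS = 1280   # line width from the imager example in the text
BITS_PER_LOCATION = 16     # 12-bit pixels packed into 16-bit locations
LINES_BUFFERED = 4         # bi-cubic interpolation needs four lines

bits_per_line = LINE_WIDTH_PIXELS * BITS_PER_LOCATION
total_bits = bits_per_line * LINES_BUFFERED
print(f"per line: {bits_per_line / 1e3:.1f} kbits")  # ~20.5 kbits
print(f"4 lines:  {total_bits / 1e3:.0f} kbits")     # ~82 kbits

# Rough on-chip RAM count: 18-kbit (18 x 1024 bits) blocks, ignoring
# port-width and organisation constraints.
blocks = math.ceil(total_bits / (18 * 1024))
print(f"18-kbit blocks needed: {blocks}")            # 5 blocks
```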

FPGA devices provide on-chip memory resources in the form of Block RAM (BRAM).

The Spartan-6 family has an increased BRAM-to-logic ratio to support image-processing needs. Spartan-6 devices offer between 216 kbits and 3 Mbits of BRAM, structured as dual-port 18-kbit blocks capable of 250 MHz clocking.

Transporting video data

Another DA processing platform issue relates to transport of video data from remotely mounted cameras to central processing or display-capable modules.

Most of today’s camera installations rely on analogue composite video transport (for example, NTSC). However, this method presents several problems for advanced DA systems. Interlaced fields can reduce the effectiveness of object-recognition and motion-estimation algorithms, and analogue signals are susceptible to electrical noise, which adversely affects image quality. Finally, with the advent of digital imagers, conversion to or from composite video (CVBS) formats can introduce unnecessary system costs.

A preferred method is to use a digital transport mechanism. Transporting 12 bits of data in parallel can be costly in terms of cable and connectors, so serialisation techniques involving low-voltage differential signalling (LVDS) or Ethernet technologies are currently under consideration. Serialising pixel data requires the use of devices with high-speed interfaces. A single 30-fps megapixel imager with 12-bit pixel depth generates data at greater than 500 Mbps.
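That 500 Mbps figure is consistent with the raw pixel rate plus serial line-coding overhead; the 25% (8b/10b-style) overhead below is our assumption, not something the article specifies:

```python
# Serial-link budget for one camera: megapixel-class (1280 x 960) imager,
# 12-bit pixels, 30 fps. Line-coding overhead is an assumed figure.
WIDTH, HEIGHT = 1280, 960
BITS_PER_PIXEL = 12
FPS = 30

raw_mbps = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS / 1e6
print(f"raw pixel data:   {raw_mbps:.0f} Mbps")        # ~442 Mbps

CODING_OVERHEAD = 1.25   # assumed 8b/10b-style encoding (25% overhead)
print(f"with line coding: {raw_mbps * CODING_OVERHEAD:.0f} Mbps")  # >500 Mbps
```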

XA Spartan-6 devices offer differential I/O that can operate at speeds exceeding 1 Gbps, and several members of the family also offer “gigabit transceivers” that run at line rates better than 3 Gbps. It is possible to leverage these high-speed I/O capabilities along with FPGA fabric to implement emerging LVDS SerDes signalling protocols within the FPGA device itself, eliminating external components and reducing system cost.

Functional partitioning of DA processes

In the single-camera rear cross-path warning example, the video- and image-processing functions clearly benefit from parallel processing and hardware acceleration, while the cross-path warning generation is a serial decision process. So a platform that can support both types of processing is clearly an advantage. Today’s FPGA devices support instantiation of soft processors such as the MicroBlaze 32-bit RISC processor, available on Spartan-6 devices. Combining full-function processors with FPGA fabric allows for optimised functional partitioning: the functions that benefit from parallel processing or hardware acceleration are implemented in FPGA fabric, while those more suited to serial processing are implemented in software and executed on the MicroBlaze.

While the MicroBlaze is capable of supporting system-on-chip (SoC) architectures, Xilinx’s next-generation devices will up the ante with an Extensible Processing Platform that includes a hardened ARM-based processor along with a hardened set of peripherals. Xilinx is targeting these next-gen devices for the most complex of DA systems.

System designers working on driver assistance processing platforms must grapple with such design considerations as architectural flexibility, platform scalability, external memory bandwidth, on-chip memory resources, high-speed serial interfaces and parallel/serial process partitioning. The challenge is to strike an appropriate balance between meeting these needs and maintaining a competitive product cost structure.

In this quest, FPGA technology is a viable alternative to standard ASSP and ASIC approaches. As an example, the resource attributes of the Spartan-6 family offer options and capabilities in meeting the DA processing platform requirements. With today’s FPGAs utilising 40 nm process nodes and next-generation devices moving to 28 nm, their competitive position as a DA processing platform of choice promises to be very strong for some time to come.

The author is Automotive Systems Architect, Driver Assistance Platforms Manager, Xilinx Inc.

