Integrating image, radar & time-of-flight sensors in embedded systems
01 July 2020
Growing consumer demand for drones, intelligent vehicles & AR/VR headsets is driving tremendous growth in the sensor market. Research firm Semico has identified automotive (27% CAGR), drone (27% CAGR) & AR/VR headset (166% CAGR) applications as the primary demand drivers for sensors, forecasting that semiconductor OEMs will ship over 1.5 billion image sensors per year by 2022.
This article was originally featured in the July 2020 issue of EPDT magazine.
As Tom Watzka, Mobile Systems Architect at low-power programmable logic specialist Lattice Semiconductor, tells us, these applications require multiple sensors to capture data about their operating environment. For example, an intelligent car might use several high-definition image sensors for its rear-view and surround cameras, a LIDAR sensor for object detection and a radar sensor for blind-spot monitoring (see Figure 1).
This proliferation of sensors presents a problem, since all of these sensors need to send data to the car’s AP (application processor), and the AP has a finite number of I/O (input/output) ports available. The density of wired connections to the AP on the device’s circuit board also increases, which creates design footprint challenges in smaller devices, such as headsets.
One solution to the AP’s shortage of I/O ports is the use of virtual channels, which consolidate video streams from different sensors into a single stream that can be sent to the AP over a single I/O port. A widely adopted standard for connecting camera sensors to an AP is the MIPI Camera Serial Interface-2 (CSI-2) specification, developed by the MIPI Alliance. Using its virtual channel function, CSI-2 can combine up to 16 different data streams into one. However, combining streams from different image sensors into one video stream presents several challenges.
Challenges of enabling virtual channels
Combining data from multiple sensors of the same type into one channel is not a complicated proposition: the sensors can be synchronised and their data streams concatenated, so they can be sent to the AP as one image of twice the width. The challenge arises from the need to combine the data streams of different sensors (see Figure 2). For example, a drone might use a high-resolution image sensor for object detection during daylight operation, and a lower-resolution IR sensor to capture heat patterns for object detection at night. These sensors have different frame rates, resolutions and bandwidths, and cannot simply be synchronised. To keep track of the different video streams, every CSI-2 data packet needs to be tagged with a virtual channel identifier, so the AP can process each packet as needed.
In addition to packet tagging, combining data streams from different types of sensors requires that the sensor data payloads be synchronised. If the sensors operate at different clock speeds, a separate clock domain must be maintained for each sensor, and these domains must be synchronised before the merged stream is output to the AP.
Virtual channels require a dedicated hardware bridge for processing
Implementing a bridge solution that supports virtual channels in hardware addresses the issues described above. With a dedicated virtual channel bridge, all image sensors connect to the bridge’s I/O ports, and the bridge connects to the AP over a single port, freeing up valuable AP ports to support other peripherals. This also addresses the design footprint challenge created by routing multiple connections between sensors and the AP on the device’s circuit board: the bridge consolidates those multiple traces into a single connection to the AP. FPGAs allow a parallel data path to be implemented for each sensor input, with each path in its own clock domain. These domains are synchronised in the virtual channel merge stage, as seen in Figure 3, removing that processing burden from the AP.
Benefits of PLD-based virtual channel hardware
When it comes to implementing virtual channel support in hardware, the most compelling IC platform is an FPGA. FPGAs are ICs with flexible I/O ports that can support a wide variety of interfaces, and with large logic arrays that can be programmed using hardware description languages such as Verilog. Unlike ASICs, which require lengthy design and qualification processes, FPGAs are already qualified for manufacturing, and designs can be implemented in days or weeks. Traditional FPGAs are typically viewed as physically large, power-hungry devices ill-suited to power-constrained embedded applications, but new generations of FPGAs are challenging this perception.
The CrossLink™ family of FPGAs from Lattice Semiconductor provides the right combination of performance, size and power consumption for video bridge applications using virtual channels. CrossLink devices offer two 4-lane MIPI D-PHY transceivers operating at up to 6 Gbps per PHY, in a form factor as small as 6 mm². They support up to 15 programmable source-synchronous differential I/O pairs, including MIPI D-PHY, LVDS, sub-LVDS and even single-ended parallel CMOS, yet consume less than 100 mW in many applications. The CrossLink FPGA family also supports a sleep mode to reduce power usage in standby, and Lattice provides a comprehensive soft IP library to help customers implement different types of bridging solutions more quickly.
Virtual channels, enabled by the MIPI Camera Serial Interface-2 (CSI-2) specification, help embedded engineers consolidate multiple sensor data streams over a single I/O port, reducing overall design footprint and power consumption for applications using large numbers of image sensors. By virtue of their reprogrammability, performance and size, low power FPGAs such as Lattice Semiconductor’s CrossLink family let customers add support for virtual channels to their device designs quickly and easily.