Emerging opportunities and challenges in automotive radar
12 May 2017
An imminent revolution is upon the automotive market. The pursuit of the highly autonomous vehicle is giving rise to dramatic new processing and data networking capabilities in next generation vehicles.
The automobile transformed our society, the layout of our cities, and in many ways had a significant impact on our environment. The advent of the autonomous vehicle could very well have as big an impact as the automobile itself. Not just in terms of how we transport ourselves within and between cities, or the economic and urban impact of the “shared mobility” model, but also in terms of saving lives. It is estimated that driverless cars could reduce traffic fatalities by upwards of 90% by mid-century.
This is a compelling motivation to drive the technology forward, but there are many technological hurdles to solve first. This article focuses on some of those technology fundamentals of the autonomous vehicle, and provides a sampling of the design challenges, especially pertaining to radar sensing, on the path to the driverless vehicle.
The automation spectrum
There is a spectrum of autonomous driving modes that can be roughly partitioned into distinct automation “levels”.
For instance, Figure 1 shows an example of such a set of definitions, captured in an international standard from the SAE (Society of Automotive Engineers), outlining increasing levels of automobile autonomy (Level 0 through Level 5). The SAE standard defines a Level 0, but since it involves no driver-assist functions it is omitted from the summary in Figure 1.
Figure 1: Levels of driving automation (SAE international standard, J3016)
• Level 1 provides some degree of driver assistance, for example adaptive cruise control, Autonomous Emergency Braking (AEB) or lane keeping.
• In Level 2 we see “Partial Automation”, such as automated driving functions for parking or traffic jam assistance.
• In Level 3 the driver experiences true automated driving, but the “Conditional” term is critical. In “Conditional Automation” the driver is expected to take over outside of specific automated driving modes, or even within an automated driving mode when the system requires human intervention.
• This condition disappears in Level 4 where the system no longer assumes that human intervention will be required. Level 4 is truly autonomous driving, although the vehicle retains driver controls (e.g. a steering wheel!) and is available for human control as desired.
• It is with Level 5, “Full Automation”, that the steering wheel and all vestiges of the human control interface are removed and the vehicle is designed specifically for automated driving.
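As a rough illustration, the level definitions above can be captured directly in code. The sketch below uses the SAE J3016 level names summarised here; the `driver_fallback_required` helper is an invented name for illustration, not part of any standard API:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, as summarised above."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def driver_fallback_required(level: SAELevel) -> bool:
    """Below Level 4, the human driver remains the fallback
    and must be ready to intervene."""
    return level < SAELevel.HIGH_AUTOMATION
```

The key boundary the article draws sits between Levels 3 and 4: `driver_fallback_required` is `True` for Conditional Automation and below, and `False` once the system no longer assumes human intervention.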
A key element in the spectrum of safety features (driver assist) versus autonomous driving is the role of New Car Assessment Program (NCAP) initiatives. These have served as building blocks to the ultimate realisation of the autonomous vehicle.
Figure 2 compares the Euro NCAP roadmap with some key milestones in the progress towards the autonomous vehicle. The Euro NCAP roadmap is clearly progressing through the levels of driving automation, from Level 1 to Level 2, for example.
New NCAP features can have a significant influence on the attach rate of sensor technologies. Junction Assist (a new Euro NCAP function for 2020) is a strong example that will have a marked effect on the attach rate of forward-facing short-range radar (SRR). This is due to the requirement for SRR modules on the vehicle’s forward corners to confirm safe conditions at junctions, something new in radar module deployment. Traditionally, automotive radar has been confined to a long-range radar (LRR) module at the very front of the vehicle and/or SRR modules at the rear corners (e.g. for blind spot monitoring).
The positive influence of a 5-star rating on car sales has thus driven the penetration of new functions such as Autonomous Emergency Braking (AEB). In addition, as observed with Junction Assist, new NCAP functions are increasing the attach rate of sensing technologies (e.g. forward-corner SRR for Junction Assist). NCAP is also influencing the sensor fusion processing capabilities of next-generation vehicles.
Beyond the safety features explicitly required by Euro NCAP testing, such as AEB, Euro NCAP introduced a reward scheme in 2010, the “Euro NCAP Advanced Rewards”, for advanced technological solutions that can be shown to improve passenger safety.
New advanced features on the horizon will be semi-autonomous and likely in the Level 3 class of autonomous driving support. This is laying the groundwork in the vehicle, in terms of sensing and data processing infrastructure, for increasing vehicle automation. By looking at the Euro NCAP future roadmap, and other NCAP initiatives, we can see the sensing and data processing building blocks for tomorrow’s autonomous vehicle.
Sensing, perception, and planning
While the system design of autonomous vehicles is likely to vary widely in detail, there is a common high-level view of the overall architecture. Figure 3 illustrates this high-level architecture which falls into roughly three segments: Sensing, Perception, and Planning.
As the figure shows, the sensing technologies are dominated by camera, radar and LiDAR. As discussed below, these sensing technologies present different strengths and weaknesses, but all are required to supply the data underpinning a reliable situational assessment by the system software.
This situational assessment is dependent on processing at the Perception level, where the system identifies objects and their placement, makes predictions of obstacle direction, and constructs a three-dimensional map of the vehicle’s surroundings, leading to a “drivable path”. This determination of drivable path is often seen as the key challenge in the system implementation and is critical to the subsequent decisions based on driving behaviours (or driving policies).
For example, accurate detection is one key to deriving an accurate drivable path and correctly applying the driving behaviours. Indeed, the more resolution available from the sensors the better, even when the sensor output is combined with an existing, pre-configured, high-resolution map. To achieve a high level of confidence in determining the drivable path, it can be anticipated that all the sensors on the left of Figure 3 will be required: camera, radar and LiDAR.
“All our knowledge begins with the senses” - Immanuel Kant
While Kant could not imagine this quote applied to autonomous vehicles, it is no less true. The autonomous vehicle relies on a variety of sensing technologies to construct an accurate 3D map of its surroundings through environmental modelling.
In terms of object detection, cameras are at a slight disadvantage compared to radar due to inherent limitations of 2D computer vision. However, cameras with their far greater resolution (leading to highest data density) are crucial for classification: interpreting the textures and other semantics of the car’s surroundings.
Radar, through its Doppler measurements, can directly detect relative velocities.
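As an illustration of the underlying relation (the article does not spell it out), the relative radial velocity implied by a Doppler shift is v = λ·f_d / 2, where λ is the carrier wavelength. The 77 GHz carrier and 10 kHz shift below are illustrative values typical of automotive radar, not figures from the article:

```python
C = 3.0e8  # speed of light, m/s

def radial_velocity(f_doppler_hz: float, f_carrier_hz: float = 77e9) -> float:
    """Relative radial velocity (m/s) implied by a Doppler shift f_d:
    v = (lambda * f_d) / 2, with lambda = c / f_carrier."""
    wavelength = C / f_carrier_hz
    return wavelength * f_doppler_hz / 2.0

# A 10 kHz Doppler shift at 77 GHz corresponds to roughly 19.5 m/s (~70 km/h)
v = radial_velocity(10e3)
```

This direct velocity measurement is what cameras and LiDAR lack: they must infer speed from successive frames, whereas radar reads it from a single measurement.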
Conversely, cameras have the specialised capacity to read road signs and markings, just as we do.
This will likely change as the infrastructure adapts to non-human sensing technologies (such as the placement of metal flakes in lane markings and signs for radars to detect). Radar also has unique advantages in terms of its range and its ability to detect objects even in the most adverse environmental conditions.
Finally, size and cost play a vital role in the rapid adoption of any sensing technology, and radar modules are becoming more attractive in both respects versus the other sensing technologies.
It is clear that for the autonomous vehicle, all three sensing technologies will likely be required in the beginning to ensure full coverage of the car’s surroundings. However, the next section will cover radar specifically where dramatic trends are underway for next generation systems.
The evolving radar network
There are three key trends in automotive radar:
• an increasing number of small, lower-cost SRR modules positioned around the vehicle for “surround-sense” requirements;
• increasing range and resolution requirements for LRR modules at both the front and rear;
• and the emergence of a “radar network”, in which the outputs of multiple radar modules are combined in a radar-specific fusion ECU. The individual modules and the central radar fusion ECU together form the radar network.
New highly autonomous applications (e.g. traffic jam assist, fully automated parking, urban autonomous driving etc.) require detailed mapping of the environment around the car, especially in lower-speed urban applications.
Radar has specific advantages in terms of small size, low cost and detection capability that are driving its proliferation in small SRR modules around the vehicle. At the same time, radar is being pushed to higher levels of resolution in order to support these autonomous driving features. This increase in resolution enables radar modules to perform the classification and mapping functions used in the perception-level processing described in Figure 3.
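To give a sense of what “higher resolution” means in practice, the range resolution of an FMCW radar (the standard automotive radar waveform, though the article does not name it) is set by the sweep bandwidth: ΔR = c / 2B. The bandwidth figures below are illustrative examples, not values from the article:

```python
C = 3.0e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    """Minimum separation (m) at which an FMCW radar can distinguish
    two targets in range: dR = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

# e.g. a 1 GHz sweep resolves targets ~0.15 m apart;
# a 4 GHz sweep (possible in the 77-81 GHz band) resolves ~0.04 m
dr_1ghz = range_resolution_m(1e9)
```

This is one reason the industry push toward wider-bandwidth operation goes hand in hand with the mapping and classification roles described above.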
Increasing radar resolution requires leaps in performance per power and integration
The ambition to increase resolution in radar is met with various design approaches that in turn result in specific challenges. These include:
• Increasing transmitters (Tx): more transmit channels are required both to better resolve objects in the environment (e.g. adding a transmit antenna for elevation scanning) and to increase resolution (enabling more MIMO – see below). The number of transmitters has a direct and significant impact on radar module power.
• Increasing receiver (Rx) sensitivity: one approach to better receiver sensitivity is to increase the number of receive channels, which directly impacts both the radar processing requirements and the power dissipation.
• Using advanced multiple-input, multiple-output (MIMO) techniques: with MIMO it is possible to create a large number of virtual antennae from a modest number of physical channels, greatly increasing resolution.
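The MIMO point above can be made concrete: each Tx/Rx pair contributes one element of a virtual array, so the virtual channel count is the product of the physical Tx and Rx counts. The 3×4 configuration below is an illustrative example, not a figure from the article:

```python
def virtual_channels(n_tx: int, n_rx: int) -> int:
    """In a MIMO radar, every Tx/Rx antenna pair forms one element of a
    virtual array, so n_tx physical transmitters and n_rx physical
    receivers behave like n_tx * n_rx virtual channels."""
    return n_tx * n_rx

# e.g. 3 Tx and 4 Rx yield 12 virtual channels from only 7 physical ones
n_virtual = virtual_channels(3, 4)
```

This multiplication is why adding even one transmit channel is attractive despite its power cost: angular resolution scales with the virtual aperture, not the physical antenna count alone.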
The automotive market is truly experiencing a revolution in terms of the sensing and processing capabilities in next-generation vehicles. It is an unprecedented acceleration of technology adoption largely driven by the potential to save lives and fundamentally redefine the use-case model for passenger vehicles.