Testing the limits of automated vehicle safety
01 January 2021
Currently, active driver assistance (ADA) does less to assist & more to interfere, concluded an August 2020 study by the American Automobile Association (AAA), which found that, in 4,000 miles of driving, ADA systems encountered an issue every 8 miles on average. Particularly concerning was the inability of many systems to respond appropriately to a slow-moving vehicle.
The full version of this article was originally featured in the January 2021 issue of EPDT magazine [read the digital issue]. Sign up to receive your own copy each month.
Here, Hao Zheng, co-founder & CEO of RoboK, a VC-backed intelligent 3D sensing spinout from the University of Cambridge, explains why the surging number of sensors in vehicles is an opportunity for the automotive electronics industry…
ADA is defined as a system that combines braking, accelerating and steering to actively assist the driver, as opposed to the wider definition of ADAS (advanced driver-assistance systems), which includes assistance that only engages when needed. Currently, only Level 0 (features such as conventional cruise control) to Level 2 (such as adaptive cruise control and lane keeping) are available for public purchase.
More recently, the goal of achieving true driverless mass-market vehicles has been pushed back by safety concerns and by pressure on development budgets post-COVID-19. This has caused the industry to refocus its efforts on introducing L2+ functionality and on making lower levels of ADAS economical enough to include in lower-cost vehicles.
However, there remain many challenges in achieving these goals that are highly relevant to sensor providers and the wider automotive electronics sector. In particular, areas such as perception, rapidly increasing numbers of sensors, overall system complexity and cost of safety testing during development are all of critical importance.
Balancing accuracy & computation complexity in perception
Perception is about ensuring that vehicles can ‘understand’ the information acquired from sensors in real-time, in order to reliably interpret situations and react appropriately. It is not sufficient to recognise an object is a vehicle – the system needs more specific information, equivalent to “there is a vehicle in the adjacent lane, slightly ahead, that is likely to cut into my lane”.
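In code, that richer output might be represented as a structured object rather than a bare class label. The sketch below is illustrative only: the field names, the lane-offset convention and the 0.5 m/s drift threshold are assumptions for this example, not taken from any production ADAS stack.

```python
from dataclasses import dataclass

@dataclass
class PerceivedObject:
    object_class: str            # e.g. "vehicle", "pedestrian"
    lane_offset: int             # 0 = ego lane, +1/-1 = adjacent lanes
    longitudinal_m: float        # distance ahead of the ego vehicle, metres
    lateral_velocity_mps: float  # drift towards the ego lane, m/s

    def likely_cut_in(self, threshold_mps: float = 0.5) -> bool:
        """Flag a vehicle in an adjacent lane drifting towards ours."""
        return (
            self.object_class == "vehicle"
            and abs(self.lane_offset) == 1
            and self.lateral_velocity_mps > threshold_mps
        )

# "A vehicle in the adjacent lane, slightly ahead, likely to cut in":
car = PerceivedObject("vehicle", lane_offset=1,
                      longitudinal_m=12.0, lateral_velocity_mps=0.8)
print(car.likely_cut_in())  # True
```

The point is that downstream planning logic consumes this contextual structure, not just the detection "vehicle".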
Although a single sensor type, such as a radar or camera, may be sufficient for features such as forward collision monitoring, the higher levels of automation offered by more advanced ADAS demand additional accuracy and redundancy through robust environmental perception. This can be achieved in part by processing data from a combination (or fusion) of sensors – such as cameras, radar, Global Positioning System (GPS) receivers, inertial measurement units (IMUs) and, in the case of full AVs (autonomous vehicles), LiDAR.
Fusing information from multiple sensors creates a more complete picture that takes advantage of the strengths of each type – or ‘modality’ – of individual sensor, and offers more contextual information. Additionally, overlap between the sensors improves perception and resilience.
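One common measurement-level fusion scheme, inverse-variance weighting, can be sketched in a few lines. The camera and radar variances below are illustrative assumptions, not real sensor specifications; the idea is simply that the fused range estimate leans towards the more precise modality and has lower variance than either input.

```python
# Combine independent range estimates from different sensor modalities
# by inverse-variance weighting (the optimal linear combination for
# independent Gaussian measurement noise).
def fuse_estimates(measurements):
    """measurements: list of (value, variance) pairs, one per sensor."""
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

camera_range = (25.0, 4.0)   # camera: rich context, noisier range estimate
radar_range = (24.0, 0.25)   # radar: precise range, little context
fused, var = fuse_estimates([camera_range, radar_range])
# The fused estimate sits close to the radar value, with lower
# variance than either sensor alone.
```

This complements, rather than replaces, the contextual strengths of each modality: the camera still supplies classification, while the radar anchors the geometry.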
Such multi-modal sensor systems produce massive amounts of data – an automated vehicle with a diverse sensor package may collect up to a terabyte of data per hour, and processing is highly computationally intensive.
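A back-of-envelope calculation shows how a diverse sensor package reaches that order of magnitude. The per-sensor rates below are illustrative assumptions, not measurements from any specific vehicle:

```python
# Hypothetical raw data rates for an automated vehicle's sensor package,
# in megabytes per second (illustrative figures only).
sensor_rates_mb_per_s = {
    "cameras (x8)": 160.0,
    "radar (x6)": 1.0,
    "lidar": 70.0,
    "gps_imu": 0.1,
}

total_mb_per_s = sum(sensor_rates_mb_per_s.values())
total_gb_per_hour = total_mb_per_s * 3600 / 1000
print(f"{total_gb_per_hour:.0f} GB/hour")  # prints "832 GB/hour"
```

Even under these modest assumptions, the package approaches a terabyte per hour – before any perception processing has taken place.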
Even in development and testing, the extreme computational demands of perception are an issue, typically resulting either in hardware footprints too costly for low-cost vehicles, or in lower-power approaches that cannot support real-time testing and so significantly delay (and increase the cost of) the development process.
So, optimising both the sensor fusion and interpretation steps within perception has the potential to significantly reduce the cost of ADAS – both on a per-vehicle basis and in terms of the cost and duration of testing.