The Importance of Functional Safety & Integrity in Automotive Positioning

Author: Stefania Sesia, Global Head of Application Marketing for Automotive, u-blox

15 November 2023

Figure 1: The possible risks to integrity

As we navigate the pathway toward fully autonomous vehicles, the range of driver assistance functions provided within a new model's feature set will increase with each generation. For example, advances in artificial intelligence (AI) and computer vision will facilitate the emergence of higher levels of autonomy, ultimately taking us to autonomous driving level 5 (ADL5).

By 2030, approximately 70% of new vehicles will provide at least lane-holding and lane-changing functionality. These basic ADL2 automated driving functions are already partly available today in the mid to premium automobile segments. As we reach the end of the decade, the market will have evolved to include higher levels of automated driving in lower-cost segments. In parallel, mobility-as-a-service (MaaS) mass market offerings based on ADL4/ADL5 will become available in the world's metropolises. 

Using sensor fusion, advanced driver assistance system (ADAS) and autonomous driving system (ADS) implementations interpret data derived from multiple sensing devices. Radar, LiDAR, ultrasonic and camera data are processed to create an accurate picture of what surrounds the vehicle and of its position within that environment.

In addition, the global navigation satellite system (GNSS) receiver is the only sensor that provides the vehicle's absolute position, with accuracy down to the decimetre level, which corresponds to lane-level accuracy. GNSS thus enables a vehicle to determine precisely where it is on the map.

An ADAS might use the GNSS information to suggest or decide the best vehicle behaviour, e.g. in path planning. The position can also be shared with nearby cars and traffic infrastructure over wireless communication technology (5G/V2X), or it can be used to confirm that the vehicle is on the appropriate type of road before a specific driving function is engaged (geofencing).
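As a very rough illustration of the geofencing idea, the sketch below checks whether an estimated position falls inside a polygon describing an operational design domain (ODD). It is a toy example in a local metric frame; the polygon, coordinates and function names are hypothetical, and a production system would work with map-matched road attributes rather than a hand-drawn polygon.

```python
# Minimal, illustrative geofencing check: is the estimated position inside a
# polygon describing the operational design domain (ODD), e.g. a stretch of
# road where a given driving function may be engaged?
# Coordinates are assumed to be in a local metric frame (e.g. ENU), not raw lat/lon.

def point_in_polygon(x: float, y: float, polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: returns True if (x, y) lies inside the polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray from (x, y) towards +infinity.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical ODD polygon (metres, local frame) and an estimated position.
odd_road_segment = [(0.0, 0.0), (500.0, 0.0), (500.0, 15.0), (0.0, 15.0)]
vehicle_xy = (120.0, 7.5)

if point_in_polygon(*vehicle_xy, odd_road_segment):
    print("Position is inside the geofenced ODD - feature may be offered")
else:
    print("Outside the ODD - feature must not be engaged")
```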

As soon as a vehicle relies on automated driving, and hence on its estimated position, to make decisions, the trustworthiness, or integrity, of that information must be assured, because human lives are potentially at risk.

This requires an integrity concept, defined as 'the measure of trust that can be placed in the correctness of the information supplied by the positioning system'. Potential risks endangering integrity can be grouped into hardware (HW) or software (SW) malfunctions in the vehicle, environmental factors, and unintentional misuse, as visualised in Figure 1. In the automotive domain, the risk of using unsafe position information must be guaranteed to lie between 10⁻⁸ and 10⁻⁶ per drive (with a drive being a one-hour period).
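To put such figures in perspective, a short back-of-the-envelope calculation (hypothetical fleet size and usage) shows what a target of 10⁻⁶ per one-hour drive means at fleet scale:

```python
# Back-of-the-envelope reading of a target integrity risk (TIR), using the
# figures quoted above: between 1e-8 and 1e-6 per one-hour drive.
fleet_size = 1_000_000          # hypothetical number of equipped vehicles
drives_per_day = 1              # one 1-hour drive per vehicle per day
tir_per_drive = 1e-6            # loose end of the quoted range

expected_events_per_day = fleet_size * drives_per_day * tir_per_drive
print(f"Expected integrity events per day across the fleet: {expected_events_per_day:.2f}")
# -> 1.00 per day at 1e-6; at the strict end (1e-8) the same fleet
#    would see roughly 0.01 events per day.
```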

Figure 2: A Stanford diagram, with Pr = probability, PL = protection level, AE = actual error (in position), AL = alert limit, TTA = time to alert and TIR = target integrity risk

ISO 26262 functional safety
The automotive environment is a harsh one. Many sources of interference, heat and vibration can either damage electronic circuitry or degrade its functionality. The consequences of failure within an autonomous safety system could be a collision, resulting in injury or death. The stakes could not be higher. There are legal ramifications to contend with here as well. The ISO 26262 standard has been created specifically to address the behaviour of automotive systems under fault conditions. Automakers must ensure that their systems have been developed according to safety criteria and provide evidence that the risks of hardware or systematic development faults are acceptably low. 

Through a hazard analysis and risk assessment (HARA), the OEM or the system owner assesses the potential risks from different threats and error sources. Each identified hazard is then classified using a set of indicators to determine its automotive safety integrity level (ASIL), from which safety goals are derived.
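For illustration, the ISO 26262-3 risk graph maps severity (S1-S3), exposure (E1-E4) and controllability (C1-C3) classes to an ASIL, and the table can be reproduced compactly by summing the class indices: sums of 7, 8, 9 and 10 correspond to ASIL A, B, C and D, and anything lower to QM (quality management). The sketch below encodes that shorthand purely for illustration and is not a substitute for the standard's table or for a proper HARA.

```python
# Illustrative ASIL determination following the ISO 26262-3 risk graph.
# Inputs are the class indices from the HARA: severity S1-S3, exposure E1-E4,
# controllability C1-C3 (a class of 0 in any dimension falls outside the
# table and is treated as QM here).
# Summing the indices reproduces the standard's table:
#   sum < 7 -> QM, 7 -> ASIL A, 8 -> ASIL B, 9 -> ASIL C, 10 -> ASIL D.

def determine_asil(s: int, e: int, c: int) -> str:
    if min(s, e, c) == 0:
        return "QM"
    total = s + e + c
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

# Example: a severe (S3), frequently encountered (E4) hazard that most
# drivers cannot control (C3) lands at the highest level.
print(determine_asil(3, 4, 3))  # -> ASIL D
print(determine_asil(3, 3, 2))  # -> ASIL B
```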

In certain cases, in particular for highly automated driving solutions, specific safety goals are also associated with the GNSS receiver, together with a corresponding target ASIL rating (typically ASIL B). The goal is to define requirements so that hardware and software are implemented in a way that prevents malfunctions or hazards.

SOTIF
As a complement to the ISO 26262 automotive functional safety standard, ISO 21448 has been introduced to cover the safety of the intended functionality (SOTIF). It addresses the absence of unreasonable risk due to hazards caused by performance insufficiencies of the electrical and/or electronic (E/E) elements in the system, for instance when triggered by external environmental conditions.

In the case of GNSS, the risks associated with environmental conditions stem from failures in the satellite constellations and, more broadly, in other parts of the GNSS system. They include large errors arising from atmospheric disturbances such as tropospheric and ionospheric storms and ionospheric scintillation, multipath distortion of the direct signal or reception of reflected (non-line-of-sight) signals, as well as threats affecting other sensors (e.g. inertial or wheel-speed sensors).

The protection level (PL) as a measure of integrity
The aviation industry solved the integrity problem with the introduction of the 'protection level' (PL): a statistical bound, calculated in real time, that indicates a region in which the user is located with a guaranteed probability, together with the assurance that a timely alert is generated if the required conditions cannot be guaranteed.
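As a minimal sketch of the underlying idea, assume the horizontal position error is zero-mean Gaussian and isotropic with standard deviation sigma_h; a bound that is exceeded only with a chosen probability then follows directly from the Rayleigh tail. Real integrity engines must additionally bound faults, biases and heavy-tailed effects such as multipath, so this is only a toy illustration with hypothetical numbers.

```python
import math

# Toy illustration of a protection level: bound the 2-D position error with a
# guaranteed probability, assuming the horizontal error is zero-mean Gaussian
# and isotropic with standard deviation sigma_h. Production integrity
# algorithms are far more involved than this sketch.

def horizontal_protection_level(sigma_h: float, integrity_risk: float) -> float:
    """Radius r such that P(|error| > r) = integrity_risk for an isotropic
    2-D Gaussian error, using P(|e| > r) = exp(-r^2 / (2 * sigma^2))."""
    return sigma_h * math.sqrt(-2.0 * math.log(integrity_risk))

# Hypothetical numbers: 10 cm (1-sigma) horizontal error, 1e-7 risk per epoch.
pl = horizontal_protection_level(sigma_h=0.10, integrity_risk=1e-7)
print(f"Protection level: {pl:.2f} m")   # ~0.57 m
```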

Figure 3: (left) Faulty position projection onto stacked street layers; (right) projection onto the correct ODD road type

The PL is used together with an application-dependent threshold called the 'alert limit' (AL). The Stanford diagram in Figure 2 represents the relationship between the PL, the AL and the 'actual error' (AE), and the conditions under which the system is considered to be working properly. In particular, when the actual error exceeds both the PL and the AL, and this situation is not detected and an alert is not issued within the 'time to alert' (TTA), the system is said to be in hazardous and misleading operation (unsafe operation). The probability of this condition occurring must be guaranteed to be lower than the 'target integrity risk' (TIR).
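The classification behind the Stanford diagram can be expressed in a few lines of code. The sketch below (illustrative thresholds, hypothetical function name) labels a single epoch given the actual error, the protection level and the alert limit; note that the actual error is only known in test campaigns against a ground-truth reference.

```python
# Classify one positioning epoch into the regions of the Stanford diagram,
# given the actual error (AE), the reported protection level (PL) and the
# application's alert limit (AL). Values in metres.

def stanford_region(ae: float, pl: float, al: float) -> str:
    if pl >= al:
        # The system flags itself as unusable for this application (alert raised).
        return "system unavailable" if ae <= pl else "misleading information (unavailable)"
    if ae <= pl:
        return "nominal operation"
    if ae <= al:
        return "misleading information"
    # AE > AL while PL < AL and no alert: the dangerous case.
    return "hazardous and misleading operation"

print(stanford_region(ae=0.3, pl=1.0, al=3.0))   # nominal operation
print(stanford_region(ae=5.0, pl=1.0, al=3.0))   # hazardous and misleading operation
```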

One of the main challenges for the integrity concept is to determine sufficiently tight PL bounds coupled with very low target integrity risks, e.g. 10⁻⁶ per drive or lower. This requires accounting for errors that occur with very small probabilities. Research in this field is very active, and the team at u-blox is pioneering advanced techniques.
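A simple allocation exercise (assuming, optimistically, independent epochs and a hypothetical 10 Hz output rate) shows why such targets are demanding:

```python
# Rough allocation of a per-drive integrity risk to individual epochs,
# assuming independent epochs. With a 10 Hz output rate and a one-hour drive
# there are 36,000 epochs, so a 1e-6 per-drive target leaves only ~2.8e-11
# per epoch - which is why the PL must bound extremely rare error events.
tir_per_drive = 1e-6
epochs_per_drive = 10 * 3600          # 10 Hz for one hour
tir_per_epoch = tir_per_drive / epochs_per_drive
print(f"Per-epoch integrity budget: {tir_per_epoch:.2e}")   # ~2.78e-11
```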

To show the importance of integrity, let's take an example. Consider a use case in which the vehicle uses its position to activate specific autonomous driving features. In the first case, illustrated in Figure 3 (left-hand side), the vehicle position is projected onto the wrong street with an error bigger than the AL, while the PL is lower than the AL. If this situation is not detected and no alert is sent, the autonomous driving feature could be wrongly activated and operate on the road type where the vehicle is actually located, even though it is not meant to. In the second example (right-hand side), the position error is within the PL and AL bounds; this is the normal operating condition. It is worth mentioning that in this example a precise and reliable map is also required.
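Tying the example together, a feature-activation gate might only allow engagement when the reported PL is below the alert limit and the map-matched road type belongs to the feature's ODD. The sketch below is purely illustrative; the alert limit, road types and function name are assumptions, not a description of any particular product's implementation.

```python
# Only allow the automated driving feature to (remain) engaged when the
# reported protection level is below the alert limit AND the map-matched
# road type lies inside the feature's operational design domain.

ALERT_LIMIT_M = 1.5          # hypothetical lane-level alert limit

def feature_may_engage(pl_m: float, matched_road_type: str,
                       allowed_road_types: set[str]) -> bool:
    position_trusted = pl_m <= ALERT_LIMIT_M
    inside_odd = matched_road_type in allowed_road_types
    return position_trusted and inside_odd

# If the PL honestly reflects the position uncertainty in the stacked-road
# scenario of Figure 3 (left), it exceeds the alert limit and the feature is
# inhibited rather than wrongly activated.
print(feature_may_engage(pl_m=4.0, matched_road_type="motorway",
                         allowed_road_types={"motorway"}))      # False
print(feature_may_engage(pl_m=0.8, matched_road_type="motorway",
                         allowed_road_types={"motorway"}))      # True
```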

Conclusion
For an autonomous safety system to function correctly, the vehicle's position within its environment, and relative to other cars and objects, must be known. This position is an estimate and is subject to environmental influences as well as HW and systematic development faults. Integrity is a measure of the trust that can be placed in the correctness of the supplied information. GNSS receivers yield absolute position information and complement the other ADAS/ADS sensors in sensor-fused systems. Integrity also means giving the system timely warnings when the position should not be used. Compliance with the automotive standard ISO 26262 ensures that autonomous safety systems behave predictably in fault situations, and compliance with the complementary ISO 21448 SOTIF standard assures system functionality under adverse conditions. An increasing proportion of new vehicles will be equipped with autonomous safety features, and we believe that the need for safe components such as GNSS will increase drastically in the future, with the deployment of highly automated driving and advanced V2X use cases.

