
Is autonomous driving really safe?

Author: Lithium battery dynamics

Source of this article: Zhiche Technology

On April 26, 2024, a Wenjie M7 collided with a road maintenance vehicle in the express lane of a highway. The front of the M7 caught fire, and the whole car was engulfed in flames.

After the incident, the family raised questions: the vehicle had been purchased on January 14, so the car was barely three months old, and it had been advertised with AEB (automatic emergency braking), GAEB (automatic emergency braking for irregularly shaped obstacles), flame-retardant ternary lithium battery materials, and thermal-runaway protection technology. Where were these functions at the time of the accident? And why did the airbags not deploy?


The family's questions point to the topic we want to discuss today: why did AEB, as a driving assistance product, fail to prevent this tragedy? Could it be that the company's advertised automatic emergency braking (AEB) function is fake? Let's first look at what the AEB regulations require of the function:

[Screenshots: AEB requirements in the national standard]

From the national standard, it can be seen that whether the target is moving or stationary, the vehicle is not required to brake once the relative speed exceeds a certain value. At the time of the accident, the M7 was traveling at 115 km/h, which already exceeded the maximum relative speed specified in the regulations, so the vehicle's AEB function was not required to cover this speed range.

Let's take a look at the description of the AEB function in the instruction manual of the M7:

[Screenshots: AEB description in the M7 owner's manual]

As can be seen, the manual clearly states that AEB works at 4–85 km/h, and the vehicle's speed at the time of the accident exceeded this range. The fact that AEB did not trigger is therefore consistent with the product's functional design, and is not a product quality issue.
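To make the reasoning concrete, the speed gate described above can be sketched in a few lines. This is only an illustration built from the two figures quoted in this article (the 4–85 km/h working range from the manual and the 115 km/h accident speed); the function name and structure are hypothetical, and a real AEB controller also weighs target type, relative speed, time-to-collision, and more.

```python
# Sketch of an AEB activation gate based only on ego speed, using the
# 4-85 km/h working range quoted from the M7 owner's manual (illustrative).

AEB_MIN_SPEED_KMH = 4.0
AEB_MAX_SPEED_KMH = 85.0

def aeb_may_activate(ego_speed_kmh: float) -> bool:
    """Return True if ego speed falls inside the documented AEB working range."""
    return AEB_MIN_SPEED_KMH <= ego_speed_kmh <= AEB_MAX_SPEED_KMH

print(aeb_may_activate(60.0))   # within the working range
print(aeb_may_activate(115.0))  # accident speed: outside the range
```

Run against the accident speed of 115 km/h, the gate returns false, which is exactly the "works as designed" conclusion drawn above.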

From these two materials, we find that neither from a regulatory perspective nor from a product quality perspective is there a so-called quality defect in the M7's driving assistance system, which means the car company does not have to bear the corresponding legal responsibility.

This conclusion may seem cold and hard for ordinary users to accept. Indeed, who reads through the vehicle manual after buying a new car? Most users' understanding of assisted driving functions probably comes from what the salesperson at the 4S store told them when they bought the car, and it is hard to say whether that sales pitch described the functions exactly as the manual does. In addition, the average user's expectations of autonomous or assisted driving systems are often higher than what the product or system can actually deliver.

So, is autonomous driving safe? Or rather, does it really meet users' expectations of safety? This article analyzes and discusses the question in detail.


Why was autonomous driving born?

I still remember that when the author first started working on autonomous driving R&D, domestic autonomous driving was in its initial stage. One important topic at an in-house technology sharing session was: "Why do we need to research autonomous driving technology?" Many reasons were given at the time: improving road traffic safety, improving travel experience and efficiency, promoting rational energy use and environmental protection, and providing more travel options. But the author believes the most important of these is improving road traffic safety.

As we all know, manual driving carries significant safety risks due to limitations in driving skill, the driver's own state, and human vision. According to the Road Traffic Accident Statistical Annual Report of the Traffic Management Bureau of the Ministry of Public Security, from 2017 to 2019 China averaged 231,900 traffic accidents per year, with an average of 63,000 deaths and another 240,000 non-fatal injuries per year, and more than 90% of road traffic accidents were caused by human factors on the driver's side. Accidents cluster at intersections, both unsignalized and signalized, which together account for more than 50% of the total, and typical road sections such as curves also contribute a considerable share.


Everyone knows that rushing, speeding, drunk driving, fatigued driving, and the like are the main causes of accidents. The perception, decision-making, planning, and control systems in autonomous driving technology identify all kinds of dynamic and static targets on the road, make decisions and operate the vehicle autonomously, and go through long-term virtual simulation tests and road tests. Most importantly, the system is "obedient": it does not violate traffic regulations and drives properly as required, which goes a long way toward meeting the need for safety. It is therefore no exaggeration to say that autonomous driving was born for safety, which also reflects the importance of the technology itself.

Analysis of the safety requirements of different autonomous driving products

In theory, autonomous driving systems can indeed remove the human element from driving and thereby improve safety. But can today's autonomous driving systems really be safer than human drivers? This article analyzes the safety of the mainstream autonomous driving products currently on the market and tries to draw some conclusions.

At present, the mainstream autonomous driving systems and products on the market can be divided into four categories by scenario:

Low-speed passenger-carrying, high-speed passenger-carrying, low-speed cargo-carrying, and high-speed cargo-carrying.


Based on the Hazard Analysis and Risk Assessment (HARA) methodology in the ISO 26262 functional safety standard for road vehicles, the functional safety integrity analysis of a vehicle function covers three dimensions:

1) the controllability after the function fails;

2) the severity of the harm caused in a given scenario after the function fails;

3) the exposure (probability of occurrence) of the function's operating scenario.

The functional safety integrity level (ASIL) of a vehicle function is then obtained by combining the ratings on the above three dimensions. We will not carry out a concrete, standardized analysis here, but only give a general analysis of the four types of autonomous driving products based on this theory.
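For readers who want the mechanics: ISO 26262-3 derives the ASIL from a lookup over severity (S1–S3), exposure (E1–E4), and controllability (C1–C3). A well-known shortcut, equivalent to the standard's lookup table, is to sum the class indices, as sketched below. This is a simplified illustration under that equivalence, not a substitute for the standard itself.

```python
# Simplified ASIL determination per ISO 26262-3, using the common
# "sum of class indices" equivalence to the standard's lookup table:
# S + E + C -> 10: ASIL D, 9: ASIL C, 8: ASIL B, 7: ASIL A, <=6: QM.

def asil(s: int, e: int, c: int) -> str:
    """s in 1..3 (severity), e in 1..4 (exposure), c in 1..3 (controllability)."""
    if not (1 <= s <= 3 and 1 <= e <= 4 and 1 <= c <= 3):
        raise ValueError("class index out of range")
    total = s + e + c
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(total, "QM")

print(asil(3, 4, 3))  # worst case (life-threatening, high exposure, hard to control): ASIL D
print(asil(3, 2, 1))  # severe but rare and easily controllable: QM
```

This is why the qualitative argument below works: anything that raises severity (high speed, passengers) or lowers controllability (failure at high speed) pushes the required integrity level upward.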

1) Controllability analysis

From the controllability standpoint, a system function that fails at high speed is less controllable than one that fails at low speed. Imagine that the lateral control (steering) function fails at high speed and the steering moves erratically: the driver taking over and bringing the vehicle to a safe state (pulling over or continuing smoothly) is clearly harder than at low speed. Therefore, controllability after a failure is low at high speed and high at low speed.

2) Severity analysis

Severity differs from controllability and needs to be analyzed along two dimensions. First, looking at speed: the severity of accidents caused by a system failure at high speed is usually higher than at low speed. A simple example: if a vehicle loses control at 100 km/h and crashes into the roadside guardrail, the likely result is a destroyed car and fatalities; if it loses control at 30 km/h and crashes into the guardrail, the result is vehicle damage and injuries. So from the speed perspective, the severity of a system failure at high speed is higher than at low speed.

Second, looking at the people-versus-goods dimension: at the same speed, the casualties caused by a failure of the intelligent driving system in a passenger-carrying vehicle are far greater than in a cargo-carrying vehicle, which is obvious.

3) Exposure analysis

It is hard to conclude whether exposure is higher in high-speed or low-speed scenarios; autonomous driving systems defined for different application scenarios have completely different exposures. Therefore, no conclusion about exposure can be drawn for the four categories at the macro level.

Based on the analysis of the above three dimensions (strictly speaking, two), a preliminary ranking of the safety requirements of the main autonomous driving application scenarios is: high-speed passenger-carrying > low-speed passenger-carrying > high-speed cargo-carrying > low-speed cargo-carrying. But this conclusion is necessarily relative and must not be read in absolute terms.

From this conclusion, it also follows that the development and deployment route of autonomous driving technology should be: cargo first, then people; low speed first, then high speed. Going from easy to hard should be everyone's shared thinking.

Safety analysis of different levels of autonomous driving systems

Earlier we analyzed the safety requirements of autonomous driving in different application scenarios, but left out an important dimension: the automation level of the autonomous driving system itself. SAE classifies automation into levels L0-L5, with the key dividing line at L3. Systems below L3 are usually called advanced driver assistance systems (ADAS), while systems at L3 and above are usually called autonomous driving systems (ADS); the most important difference is whether the driver stays "in the loop" while the vehicle is being controlled.


For a driver assistance system, the driver is "in the loop" in real time, so when a system function fails the driver can take over the vehicle and retains a certain degree of control over the abnormal situation. For an autonomous driving system, the driver is "out of the loop", so when a system function fails there is no driver control over the abnormal situation. Based on this distinction, safety needs to be analyzed separately for assisted driving systems (below L3) and autonomous driving systems (L3 and above).

1. Safety analysis of driver assistance systems

As mentioned above, based on the traditional HARA analysis of ISO 26262 functional safety, a driver assistance system must take controllability into account. However, on the safety of adaptive cruise control (ACC), the book "Automotive Human Factors Ergonomics" makes the point that ACC frees the driver from some tasks (such as braking and accelerating) but adds new ones: while ACC is working, the driver has to keep monitoring the system to make sure it is working properly. Yet assisted driving may instead underload the driver, reducing the attention allocated to the driving task, so that in an emergency (e.g., a system failure) the demand for attention suddenly spikes and the driver may fail to detect the danger in time or to respond properly in controlling the vehicle.


In other words, while driving assistance functions reduce (or relieve) some of the driver's physical workload, they require the driver's mind to stay highly focused so that the vehicle can be taken over in time to avoid danger. In practice, the result may be the opposite: with the system switched on, the driver relaxes both body and mind, and when danger arises, is unable to perform the needed operation (steering, accelerating, braking, etc.) in time, and an accident eventually occurs. Tragedies of exactly this kind appear to have happened in practice.

If this is the end result, doesn't it run contrary to the original purpose of researching autonomous driving technology mentioned at the beginning of this article, namely safety? It may sound ironic, but this line of reasoning is not unfounded. Some early Tesla users went so far as to wedge a bottle of mineral water into the steering wheel while driving to fool the system's hands-on detection, and then stopped monitoring the system. In hindsight, how foolish that was: it is gambling with your own life! Yet it also reflects users' level of trust in the capabilities of driving assistance systems, a trust that is clearly excessive and uninformed.

Therefore, from this dimension, driving assistance systems (more precisely, systems at L2 and below) are unsafe, and this unsafety comes not from the system itself but from two risk factors that arise when users operate it:

1) users' improper operation, i.e., "fooling the system" as described above;

2) demands the system places on users that cannot realistically be met, i.e., letting users relax their hands and feet while requiring their minds to stay highly alert.

The second risk is clearly caused by the design of the system, and its cost should not be borne by users.

Returning to the function of the system itself, we can assume that every company and developer designs the system to be compliant, reasonable, and safe. However, because it is an assisted driving system, the vehicle is still driven manually most of the time; and in many cases where the system drives by the traffic rules but not in the style of an "experienced driver", the driver will immediately take over and drive according to his own intentions. In these scenarios, the driving behavior is often very unsafe. In this way, the assisted driving system does not effectively "shield" the unsafe operations of the human driver that we mentioned at the beginning, and so the original goal of safety is not achieved.

2. Safety analysis of autonomous driving systems

Having analyzed the safety of driver assistance systems, let's look at autonomous driving systems. By the SAE definition, an autonomous driving system is L3 or above, because only at L3 and above can the driver be "out of the loop". Does that "shield" the dangerous human operations and vulnerabilities we described?

The answer is: L3 cannot, but L4 and L5 can.

L3 is defined as conditional automation. "Conditional" means the system can only operate when certain conditions are met, that is, within its operational design domain (ODD), which is typically defined along dimensions such as road type, speed range, weather and lighting, and geographic area.


In fact, these dimensions are very troublesome to define, because the environmental factors that affect driving are so numerous that in a sense they can never be exhausted. Here is an example of a scenario the system was never designed to recognize and handle: "your car is driving along the road when the truck ahead suddenly spills a load of oranges onto the ground". A scenario like this lies outside the system's ODD, but there are far too many equally strange scenarios the system cannot handle, and they cannot all be written into the ODD, so in practice the ODD is simply defined along the common categories above. The author believes this is not rigorous, because the system's boundaries can never be truly described; in that sense, a fully specified ODD is a false proposition.

So, after all that, why is an L3 system unsafe? Because the ODD cannot be written down completely? Yes, that is one reason, but not the only one. L3 requires the driver to take over in time when the system fails, because the system itself cannot reach a minimal risk condition (MRC). As a result, L3 suffers from the same problem described above: it places even higher demands on the driver's concentration and reaction time. And because L3 is more capable, it is more likely to lull the driver into "paralysis"; the driver may even be asleep when the system fails, so that a takeover is needed urgently but cannot happen, with more serious consequences, and the probability of this situation is greatly increased.

Why are L4 and L5 safe? Let's start with L5. By the SAE definition, L5 has no ODD, so there is nothing left unclear; in other words, users do not need to care where the system can or cannot be engaged, or when. With L5, users need not worry about the ODD at all, nor about whether to take over when the system fails: the system can handle all of it. Think of it as buying a bus ticket: you just ride, and the driver takes care of everything.

As for L4, the difference from L3 is that when the system fails or leaves its ODD, it can handle the situation by itself and bring the vehicle into a minimal risk condition. This is much more user-friendly: you can play with your phone or sleep while the car drives, without worrying about a takeover prompt. Users may of course take over if they wish, but it is not mandatory. Therefore L4 is also safe.
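The L3-versus-L4 distinction described above, requesting a driver takeover versus reaching a minimal risk condition autonomously, can be sketched as a fallback decision. The enum and function names below are illustrative, not taken from any real autonomous driving stack.

```python
# Illustrative fallback logic contrasting SAE L3 and L4+ behavior on a
# system failure or ODD exit. At L3 the driver is the fallback; at L4
# and above the system itself must reach a minimal risk condition (MRC).

from enum import Enum

class Fallback(Enum):
    REQUEST_DRIVER_TAKEOVER = "issue takeover request; driver must respond in time"
    EXECUTE_MRC_MANEUVER = "system slows, pulls over, or stops safely on its own"

def on_failure_or_odd_exit(sae_level: int) -> Fallback:
    if sae_level == 3:
        return Fallback.REQUEST_DRIVER_TAKEOVER
    if sae_level >= 4:
        return Fallback.EXECUTE_MRC_MANEUVER
    # At L0-L2 (driver assistance), the driver is always in the loop,
    # so there is no separate fallback mode to select.
    raise ValueError("L0-L2 driver assistance: the driver is always in the loop")

print(on_failure_or_odd_exit(3).name)  # REQUEST_DRIVER_TAKEOVER
print(on_failure_or_odd_exit(4).name)  # EXECUTE_MRC_MANEUVER
```

The whole safety argument of this section hangs on that one branch: L3 delegates the worst case back to a possibly inattentive human, while L4 keeps it inside the system.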

Impetuousness in the autonomous driving industry

After all this analysis, the so-called autonomous driving safety theories and the so-called automation levels are, in the end, frameworks made up by people in the field. Users' requirements for autonomous driving are actually very simple. To borrow the words of a layman friend: "Autonomous driving? So I won't have to take a driver's license test in the future?" Though phrased as a question, it states the need very directly. Can the autonomous driving systems on the market meet such a requirement? Obviously not. Hence some manufacturers, eager for quick success, claimed to offer autonomous driving systems, tragedies followed, and then everyone renamed them assisted driving systems.

To win more users and goodwill, everyone seems to be chasing added value by launching functions that do not fully meet customer needs, or are even useless. In today's industry, everyone is impetuous: anxious to mass-produce, anxious to make money, anxious to prove themselves. Very few have the patience to meet real customer needs, and even the most basic safety problems remain unsolved. I think it is time for everyone to slow down, lest we go so far that we forget where we wanted to go.

