
A Tens-of-Millions-per-Year "Incremental" Market: the Cost-Effective Track of Intelligent Driving

With improvements in camera performance (field of view, resolution), AI-based algorithm optimization, and the gradual penetration of ADAS into models priced below 150,000 yuan (pushed by tougher new-car ratings and related regulations), a new generation of smart camera solutions is carving out a new market segment.

At the same time, perception architectures are diverging: is it really true that more sensor types are always better? Behind the choice lie technology accumulation, platform computing power, and cost considerations.

Previously, Tier 1 suppliers including ZF, Valeo, and Zhixing Technology began factory-installed delivery of L1/L2 assisted-driving solutions based on a single wide-field-of-view camera. One reason: besides matching the capabilities of the traditional narrow-angle monocular camera + millimeter-wave radar combination, the overall solution costs less.

This also opens up a new growth track.

According to monitoring data from the Gaogong Intelligent Automobile Research Institute, insured new cars priced at 150,000 yuan or below in the Chinese market (excluding imports and exports) reached 11.2011 million units in 2021, more than 50% of all new cars, with slight year-on-year growth.

In the ADAS (L0–L2) segment, the factory-installed rate in this price range is only 19.73%, nearly 20 percentage points below the market average.

Given the cost advantage of monocular smart cameras, the gradual adoption of L1/L2 functions in the sub-150,000-yuan segment over the next few years will further drive demand for more camera functionality. How to deliver ever more functions within a fixed cost envelope will be a key test of suppliers' capabilities.

One

In 2016, Nissan was first to launch in the Japanese market a driver-assistance system called "ProPILOT", equipped with a front-facing monocular camera that quickly identifies the three-dimensional depth of the vehicle ahead and lane markings. With a single camera detecting lane lines and obstacles, it enabled the entry-level L2 combination of LKA + ACC.

However, that solution ran on Mobileye's EyeQ3 platform, and the camera's horizontal field of view was only 52 degrees, so operating conditions were restricted and adaptability across scenes was unsatisfactory. By 2019, the first domestically produced Tianlai (Nissan Teana) equipped with ProPILOT in the Chinese market had added a millimeter-wave radar.

However, what OEMs building value-for-money models need is a solution that lands more assisted-driving functions without a sharp cost increase. In 2018, ZF and Mobileye launched the S-Cam4 series of monocular cameras, aiming to meet Europe's 2020 new-car assessment requirements.

Based on the EyeQ4 chip, the camera's field of view was extended to 100 degrees, enhancing automatic emergency braking (AEB), lane keeping assist, and L2-level assisted-driving functions including highway driving assist and traffic jam assist.

In July 2020, ZF launched in the Chinese market its first mass-produced L2 single-lane intelligent driving system based on a single camera (S-Cam4.8), installed in a hot-selling SUV model of the Chinese brand Haval; the system was developed by ZF's China team.

According to monitoring data from the Gaogong Intelligent Automobile Research Institute, insured new cars with factory-installed single-camera ADAS solutions in the Chinese market (excluding imports and exports) reached 486,700 units in 2021, up 42.85% year-on-year, nearly 13 percentage points above the overall market growth rate.

Among L2-level monocular ADAS suppliers in factory-installed mass production, besides the three foreign suppliers ZF, Valeo, and Bosch, Zhixing Technology stands out as a representative Chinese supplier, with its monocular solution in mass production on models such as the Wuling E300, WM E5, and M7.

As one of Zhixing Technology's main products, the company's first-generation smart camera, IFC1.0, entered mass production in 2020, and the upgraded second generation has also been mass-produced. This cost-effective solution realizes an L2 driver-assistance system with 12 functions, including ACC, AEB, LKA, TJA, TSR, LDW, IHBC, ELK, and ESS.

According to public information, the second-generation IFC2.0 doubles the field of view (to 100°), enabling intersection-assist functions; with three times the computing power of the first generation, it can track more targets simultaneously; and it can apply different driving control strategies to the current driving environment (such as anti-cut-in handling, hazard avoidance, and intelligent speed adjustment).

According to the four perception algorithms Mobileye has previously disclosed, a monocular camera can build 3D models from 2D images to help the system perceive the environment. The first algorithm identifies wheels and infers vehicle position; the second identifies doors, similar to a door-open warning function, but used mainly to recognize surrounding vehicles.

The third algorithm infers the distance of each pixel by comparing successive frames from the camera, producing a three-dimensional point cloud (processed with lidar-like algorithms) and identifying objects in the scene. The fourth identifies drivable road areas.

This means that, based on the point cloud, object detection can output a 3D estimate of an object's position, achieved by triangulating across multiple views of the scene. Then, based on a visual road model, classical computer vision is used to estimate the ground plane and extract the depth of vehicles resting on that same plane.
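The triangulation step can be sketched with the classical linear (DLT) two-view method: each pixel observation of the same point constrains a homogeneous system whose null vector is the 3D position. The camera matrices and pixel coordinates below are purely illustrative, not Mobileye's actual pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point seen in two views.

    P1, P2 : 3x4 camera projection matrices (K @ [R | t]).
    x1, x2 : (u, v) pixel coordinates of the same point in each view.
    """
    # Each observation contributes two rows of the homogeneous system A @ X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The null vector of A (last right-singular vector) is the point in
    # homogeneous coordinates; divide out the scale.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative setup: two views one metre apart along x (e.g. the same camera
# after the vehicle has moved), shared pinhole intrinsics.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])        # first view at origin
P2 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])  # second view at x = 1 m
point = triangulate(P1, P2, (320.0, 240.0), (240.0, 240.0))
# point ≈ [0, 0, 10]: the target sits 10 m ahead of the first view
```

In a moving vehicle the "two views" come from successive frames of the single camera, with the relative pose supplied by ego-motion estimation.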

Depth information is also inferred from the appearance of objects in the image via a neural network, trained on data collected daily from multi-sensor fleets. Finally, a road model relying on high-definition map data extracts depth from a vehicle's position on the ground.
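The ground-plane road model reduces to simple pinhole geometry: a point on a flat road at forward distance Z, seen by a level camera at height h, projects at image row v = cy + f·h/Z, so the row alone determines the distance. A minimal sketch, with hypothetical calibration values rather than any real system's:

```python
def ground_plane_depth(v, f=800.0, cy=240.0, cam_height=1.4):
    """Forward distance (metres) to a point on a flat road seen at image row v.

    Assumes a level pinhole camera cam_height metres above the ground, so the
    road point at distance Z projects at row v = cy + f * cam_height / Z.
    f, cy, cam_height are illustrative values; real systems calibrate them
    (and the camera pitch) online.
    """
    dv = v - cy
    if dv <= 0:
        raise ValueError("row at or above the horizon: no ground intersection")
    return f * cam_height / dv

# A vehicle whose tyres touch the road at row 352 is estimated 10 m ahead.
depth = ground_plane_depth(352.0)
```

This is why the ground-plane assumption matters: on crests, dips, or when a vehicle's contact point is misdetected, the row-to-depth mapping degrades, which is one motivation for fusing it with the other three cues.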

According to Zhixing Technology, a single-camera solution places very high accuracy demands on the system's recognition, localization, planning, and control; its successful mass production fully demonstrates the company's capabilities in core algorithms, software and hardware development, and system integration and validation.

In addition, in line with the architectural trend of high-end intelligent driving, this low-cost solution based on monocular perception can also serve as a redundant backup for complex multi-sensor + domain controller systems.

Two

Judging from the current volume segments of the model market, most automakers still choose a relatively conservative strategy, both to guarantee functional safety and reliability and out of cost considerations. "At the current level of functional maturity and road complexity, unpredictable vehicle behavior must be minimized."

Judging by regulatory upgrade plans in various countries, "safety" requirements are also being continuously tightened.

Recently, the U.S. Insurance Institute for Highway Safety (IIHS) officially released a new rating system for driver-assistance systems, explicitly requiring that new cars equipped with such systems include safeguards to help drivers stay focused.

Among the requirements, the new top rating demands that the system monitor whether the driver's hands are on the steering wheel and eyes on the road ahead; in addition, automatic lane changes must be initiated by the driver. This means the mainstream of the market remains positioned at L2 and below.

Considering the resolution bottleneck of traditional millimeter-wave radar, a 1R1V (one radar, one camera) solution can falsely trigger AEB at the perception level over road features such as metal plates, manhole covers, and rails; mud splashed onto the radar surface can also blind the sensor, limiting function availability.

That is one reason Tesla chose to "abandon" millimeter-wave radar. 4D imaging radar, meanwhile, is still in the early stage of mass production; product stability, reliability, and development cost all need time to improve.

As vehicle OTA mechanisms mature, automakers will be able to offer optional upgrade services to some customers. In the view of the Gaogong Intelligent Automobile Research Institute, the market is still polarizing, especially as the software-hardware decoupled development model becomes mainstream.

High-end models will adopt at scale a full-stack standard-hardware + paid-software-upgrade model (for example, the NIO ET7); mid-to-high-end models will try offering two hardware configurations (for example, Xiaopeng's P7); and entry-level models will choose the most cost-effective configuration, allocating cost more rationally.

Extending forward from single-camera perception leads to a pure-vision route similar to Tesla's. For example, Mobileye's latest-generation pure-vision, multi-camera SuperVision system, launched on the Zeekr 001, realizes navigation-assisted driving on urban roads and elevated highways.

To this end, Mobileye unveiled its next-generation flagship chip, EyeQ Ultra, at the beginning of this year, along with two new EyeQ SoCs for ADAS solutions, the EyeQ6L and EyeQ6H, both built on a 7nm process and focused on cost-effective camera-only solutions.

Among them, the EyeQ6L is the successor to the EyeQ4, with a package only 55% of its predecessor's size. This all-in-one windshield-mounted solution delivers higher deep-learning throughput (TOPS) at ultra-low power for energy-efficient entry-level to premium (L2) ADAS. Chip sampling began last year, with mass production expected in mid-2023.

The EyeQ6H enables premium ADAS and partial autonomous driving with a full surround-view camera configuration. In computing power it is equivalent to two EyeQ5 SoCs; more importantly, the EyeQ6H supports visualization and performs better on heavy AI workloads.

This centralized solution will provide all L2+ ADAS capabilities, multi-camera processing including parking cameras, and support for third-party applications such as visual parking and driver monitoring. Sampling of the chip begins this year, with mass production expected by the end of 2024.

Clearly, at close to 10 million new cars per year, the 150,000-yuan-and-below segment of the Chinese market remains hugely attractive.
