Source: Guosen Securities.
I. Tesla:
Full-stack self-development from hardware to software, building a competitive barrier of "computing power + algorithms + data"
1. From Mobileye to NVIDIA, and finally to self-developed FSD chips.
Tesla launched the Autopilot driver-assistance system (Autopilot 1.0) in 2014. Tesla controls the core data, AI algorithms, and main control chips; this hard-to-soft full-stack self-development has become its most important competitive barrier. Tesla was founded in 2003 and listed on NASDAQ in 2010. From 2008 to 2020, Tesla released four mass-production models: Model S, Model X, Model 3, and Model Y. Tesla began R&D on driver assistance in 2013, launched Autopilot 1.0 in 2014, has since gone through four hardware upgrades, and in 2019 launched a self-developed FSD main control chip on the HW 3.0 platform.
Tesla moved from Mobileye to NVIDIA and finally to its self-developed FSD chip. Since HW 1.0 launched in 2014, Tesla's Autopilot hardware has undergone four major version updates. In the HW 1.0 era (2014-2016), Tesla relied entirely on one Mobileye EyeQ3 plus one NVIDIA Tegra 3, with the algorithms supplied wholly by third-party vendor Mobileye. By 2016, Tesla had grown dissatisfied with Mobileye's slow iteration and with related safety accidents, and with HW 2.0 (2016) it switched to the NVIDIA DRIVE PX 2 computing platform, composed of one NVIDIA Parker SoC and one NVIDIA Pascal GPU. In the HW 2.5 upgrade (2017), the DRIVE PX 2 was upgraded to DRIVE PX 2+ with an additional NVIDIA Parker SoC, delivering roughly an 80% improvement in computing performance.
Tesla is about to release the HW 4.0 platform, built on a new self-developed FSD chip using Samsung's 7nm process, with three times the performance of HW 3.0. Because of the NVIDIA solution's high power consumption, Musk decided in 2017 to develop the main control chip in-house; in particular, the neural network algorithms and the AI processing unit in the chip are entirely Tesla's own work. In April 2019, Tesla launched its self-developed FSD main chip on the Autopilot HW 3.0 platform, achieving vertical integration of the autonomous driving chip and neural network algorithms. Tesla plans to introduce the HW 4.0 platform in the near future, with a new self-developed FSD chip on Samsung's 7nm process delivering three times the performance of HW 3.0.
The Tesla FSD chip uses the NPU (an ASIC) as its computing core, following a "CPU + GPU + ASIC" technology route; the chip has three main modules: CPU, GPU, and NPU. Tesla launched the self-developed FSD chip in 2019 and shipped it in volume on the Model S, Model X, and Model 3. The chip is manufactured on Samsung's 14nm FinFET process, measures 260 mm², and packs about 6 billion transistors. (1) CPU: Cortex-A72 architecture, three clusters of 4 cores each for 12 cores in total, with a maximum clock of 2.2 GHz; the CPU handles general-purpose computing tasks. (2) GPU: runs at up to 1 GHz with peak performance of 600 GFLOPS. (3) NPU: two neural processing units performing 8-bit integer arithmetic at 2 GHz; a single NPU delivers 36.86 TOPS, and the two together deliver 73.73 TOPS. By area, the NPU takes the largest share: the NPU mainly runs deep neural networks, the GPU mainly runs post-processing for those networks, and the neural-network portions together occupy about 70% of the chip area.
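To make the NPU figure concrete, here is a back-of-envelope check in Python; the 96×96 MAC array size is an assumption drawn from Tesla's Hot Chips 2019 disclosure, not from this report.

```python
# Back-of-envelope check of the NPU numbers quoted above.
# Assumption (from Tesla's Hot Chips 2019 talk, not this report): each NPU
# carries a 96 x 96 multiply-accumulate (MAC) array.
mac_units = 96 * 96      # 9,216 MACs per NPU
ops_per_mac = 2          # one multiply + one add per cycle
clock_ghz = 2.0          # NPU clock quoted above

tops_per_npu = mac_units * ops_per_mac * clock_ghz / 1e3
print(f"single NPU: {tops_per_npu:.2f} TOPS")      # -> 36.86 TOPS
print(f"dual NPUs:  {2 * tops_per_npu:.2f} TOPS")  # -> 73.73 TOPS
```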
The Tesla HW 3.0 platform features full dual-system redundancy: if any functional area is damaged, the system as a whole can still operate normally, ensuring the vehicle can keep driving safely. HW 3.0 delivers 21 times the performance of the previous-generation HW 2.5 while consuming 25% less power, with an energy efficiency better than 2 TOPS/W.
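As an illustration of the dual-redundancy idea, the sketch below shows the general fail-operational pattern: two independent compute units process the same inputs and the outputs are cross-checked before actuation. This is an illustrative sketch only, not Tesla's actual implementation; all names are hypothetical.

```python
# Illustrative sketch only (not Tesla's implementation) of dual-system
# redundancy: two independent units compute the same task, and the outputs
# are cross-checked before being sent to the actuators.

def plan_unit_a(frame: dict) -> float:
    # Hypothetical: unit A computes a steering command from the sensor frame.
    return 0.1 * frame["lane_offset_m"]

def plan_unit_b(frame: dict) -> float:
    # Hypothetical: unit B runs the same task on independent hardware.
    return 0.1 * frame["lane_offset_m"]

def actuate(frame: dict, tolerance: float = 0.05) -> float:
    a, b = plan_unit_a(frame), plan_unit_b(frame)
    if abs(a - b) <= tolerance:
        return a  # both units agree: the command is trusted
    # Disagreement implies a fault in one unit; a real system would degrade
    # to a fail-operational mode and keep the vehicle driving safely.
    raise RuntimeError("redundant units disagree; entering degraded mode")

print(actuate({"lane_offset_m": 0.4}))  # -> 0.04 steering command
```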
2. As autonomous driving functions continue to be upgraded, the price of the FSD software package keeps rising.
Tesla's FSD price has kept climbing, with the overseas price now up to $12,000. FSD offers two purchase options: a one-time payment or a subscription. In January 2022, the FSD price rose again, to $12,000. In the Chinese market, FSD has been raised in price only once, from 56,000 to 64,000 yuan. On subscriptions, Tesla launched an FSD subscription package in July 2021: $99/month for owners with Enhanced Autopilot (EAP) and $199/month for owners with only basic Autopilot.
Tesla FSD's global take rate is about 11%, with the highest share in North America. According to Troyteslike data, owing to rapid deliveries of the lower-priced Model 3 and Model Y as well as FSD's rising price, the global FSD take rate has continued to decline; as of the end of 2021Q2 it was about 11%. Cumulative FSD activations worldwide are estimated at nearly 360,000 vehicles (more than 260,000 in North America, nearly 90,000 in Europe, and only 5,700 in Asia-Pacific); at an average option price of $6,000, cumulative FSD revenue exceeds $2.1 billion. FSD sales in Asia continue to climb, but the take rate there remains generally low. Taking North America as an example, the FSD take rate is 61% for Model S/X, 20% for Model Y, and 20% for Model 3.
3. Launching the Dojo supercomputing platform to build a closed-loop, self-evolving perception and learning system.
Tesla relies on its large customer base to collect autonomous driving data for training its deep learning system. Unlike typical automakers and tech companies, Tesla does not depend on internal test fleets for autonomous driving data; it collects data through its huge base of sensor-laden customer vehicles and over-the-air upgrades. Even when Autopilot is not activated, the system can still collect data about the environment and potential driving behavior to feed Tesla's neural networks. This collection method is commonly called Shadow Mode: the Autopilot system runs in the background of the vehicle without providing any input to actual driving.
Tesla released the 7nm-process AI training chip D1 to build the Dojo supercomputing training platform. At Tesla AI Day in August 2021, Tesla unveiled its latest AI training chip, the D1. The D1 is manufactured on TSMC's 7nm process, with a die area of 645 mm² and up to 50 billion transistors; it contains up to 354 training nodes built around 64-bit superscalar CPU cores, optimized in particular for 8×8 multiplication, and supports FP32, BF16, CFP8, INT16, INT8, and other AI-training data formats. The D1's FP32 single-precision performance is 22.6 TFLOPS, and its BF16/CFP8 performance reaches 362 TFLOPS. To support scalable AI training, the D1's interconnect bandwidth reaches up to 10 TB/s across up to 576 channels of 112 Gbps each, while its thermal design power is only 400 W. Dojo is a distributed computer architecture connected through this network, characterized by high bandwidth and low latency, giving the AI stronger learning capability and making Autopilot more powerful. The core of the Dojo platform is the D1 chip: 25 D1 chips form a "training tile" with 36 TB/s of bandwidth and 9 PFLOPS (9 quadrillion operations per second) of computing power. In the future, Dojo tiles can be combined into a supercomputer cluster with world-leading computing power.
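A quick arithmetic check of the training-tile figure, using only the numbers quoted above:

```python
# Consistency check of the Dojo figures quoted above.
D1_BF16_TFLOPS = 362   # per-chip BF16/CFP8 performance
CHIPS_PER_TILE = 25    # D1 chips per "training tile"

tile_pflops = D1_BF16_TFLOPS * CHIPS_PER_TILE / 1000
print(f"training tile: {tile_pflops:.2f} PFLOPS")  # -> ~9 PFLOPS, as quoted
```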
Tesla continues to build a closed-loop, data-driven iteration system for its algorithms. Tesla gives absolute priority to developing self-supervised learning techniques (self-supervised learning here in the sense of learning without manual labels). Iterative optimization of the algorithms is inseparable from training on big data: Tesla relies on its large customer base for high-quality driving data and uses the Dojo supercomputing platform to carry out large-scale self-supervised training on video.
II. NVIDIA:
Building a full-stack toolchain, continuing to lead in advanced autonomous driving
1. The DRIVE series of platforms keeps iterating, empowering the autonomous driving ecosystem.
Since 2015, NVIDIA has been launching the NVIDIA DRIVE series of platforms to empower the autonomous driving ecosystem. NVIDIA began in 2015 with the cockpit-oriented DRIVE CX and the driving-oriented DRIVE PX, and has since launched a series of autonomous driving platforms including the DRIVE PX2, DRIVE PX Xavier, DRIVE PX Pegasus, and DRIVE AGX Orin; on the SoC side, its chips have progressed from Parker and Xavier to Orin and the newly announced Atlan.
(1) DRIVE PX: NVIDIA launched its first-generation platforms, based on the NVIDIA Maxwell GPU architecture, at CES 2015: the DRIVE CX with one Tegra X1, aimed at the digital cockpit, and the DRIVE PX with two Tegra X1s, aimed at autonomous driving;
(2) DRIVE PX 2: NVIDIA launched the second-generation platform, DRIVE PX 2, based on its Pascal GPU architecture at CES 2016. It is built from Tegra X2 (Parker) SoCs and Pascal GPUs and comes in multiple versions: the single-chip AutoCruise, the dual-chip AutoChauffeur, and a four-chip Fully Autonomous Driving configuration. Tesla used a customized DRIVE PX 2 AutoCruise starting with HW 2.0 in 2016, and upgraded to two Tegra X2 (Parker) SoCs on HW 2.5 in 2017;
(3) DRIVE PX Xavier: NVIDIA announced the Xavier AI Car Supercomputer at CES 2017 and re-launched it as DRIVE PX Xavier at CES 2018, equipped with one Tegra Xavier chip delivering 30 TOPS. The Xavier platform is a miniaturized, energy-efficient successor to the PX 2: with slightly higher computing power, its area is half that of the PX 2 and its power consumption only about 1/8. The platform is currently installed on the Xpeng P5 and P7 models.
(4) DRIVE PX Pegasus: NVIDIA launched the DRIVE PX Pegasus in October 2017, positioned squarely at higher performance. Pegasus carries four chips: two Tegra Xavier SoCs and two discrete Turing-architecture GPUs (each Xavier integrates an 8-core CPU and an NVIDIA Volta-architecture GPU). With the added CPUs and GPUs, the Pegasus platform achieves 320 TOPS at a power consumption of 500 W.
(5) DRIVE AGX Orin: NVIDIA launched the DRIVE AGX Orin platform at its China GTC 2019 conference; it consists of two Orin SoCs and two Ampere-architecture GPUs, with a maximum computing power of 2,000 TOPS at 800 W of power consumption.
2. Leveraging its GPU resource endowment to keep leading in autonomous driving.
NVIDIA follows a "CPU + GPU + ASIC" technical route. The NVIDIA Xavier chip architecture has four main modules: CPU, GPU, Deep Learning Accelerator (DLA), and Programmable Vision Accelerator (PVA). The GPU, the module of choice for deep learning applications, takes the largest share of die area, followed by the CPU; the smallest parts are the DLA and PVA, two dedicated ASICs, with the DLA used for inference and the PVA for accelerating traditional vision algorithms.
A single Orin SoC delivers 254 TOPS at under 55 W and supports single-chip or multi-chip configurations for computing-power expansion. The Orin SoC integrates Arm Hercules CPU cores, a next-generation Ampere-architecture GPU, a new deep learning accelerator (DLA), and a computer vision accelerator (PVA), achieving 254 TOPS, 7 times the performance of the previous-generation Xavier SoC. Despite the huge jump in performance, Orin consumes less than 55 W. Orin covers computing requirements from 10 TOPS to 254 TOPS and offers end users upgradable solutions, supporting single-chip and multi-chip Orin collaborative configurations to scale computing power further.
Orin's integrated GPU has 2,048 CUDA cores and 64 Tensor Cores. Orin integrates an Ampere-architecture GPU with 2 GPCs (Graphics Processing Clusters); each GPC contains 4 TPCs (Texture Processing Clusters), each TPC contains 2 SMs (Streaming Multiprocessors), and each SM contains 128 CUDA cores, for a total of 2,048 CUDA cores delivering 4,096 GFLOPS. In addition there are 64 Tensor Cores, dedicated execution units for tensor and matrix operations, delivering 131 TOPS in sparse INT8 mode, or 54 TOPS in dense INT8.
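The core count follows directly from that hierarchy. In the sketch below, the 2-FLOPs-per-core-per-cycle counting and the 1 GHz clock in the last step are assumptions used to show how a 4,096 GFLOPS figure can arise; they are not specs quoted in this report.

```python
# Reconstructing the Orin GPU numbers quoted above.
GPC, TPC_PER_GPC, SM_PER_TPC, CUDA_PER_SM = 2, 4, 2, 128

cuda_cores = GPC * TPC_PER_GPC * SM_PER_TPC * CUDA_PER_SM
print(cuda_cores)  # -> 2048

# Assumption (not stated in the article): each CUDA core retires one fused
# multiply-add (2 FLOPs) per cycle; a 1 GHz clock then matches 4096 GFLOPS.
flops_per_core_per_cycle = 2
clock_ghz = 1.0
print(cuda_cores * flops_per_core_per_cycle * clock_ghz)  # -> 4096.0 GFLOPS
```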
The NIO ET7 is the first mass-production car of the Orin series; NIO's Adam supercomputing platform carries four Orin chips for a per-vehicle computing power of 1,016 TOPS. Adam has 48 CPU cores, 256 matrix computing units, 8,096 floating-point units, and 68 billion transistors in total. The platform builds in the redundancy and diversity required for safe autonomous operation: the first two Orin SoCs process up to 8 GB of sensor data per second; the third Orin serves as a backup to keep the system operating safely in any situation; and the fourth Orin is used for local model training, further enhancing the vehicle's own learning ability and enabling a personalized driving experience based on user preferences. The NIO ET7 began deliveries in March 2022 as the first production car on NVIDIA DRIVE Orin, and the NIO ET5, also built on the Adam platform, is scheduled to start deliveries in September 2022.
NVIDIA announced the Atlan SoC platform, integrating a DPU for the first time, with single-chip computing power of more than 1,000 TOPS. At its Spring GTC conference in April 2021, NVIDIA unveiled Atlan, its next-generation autonomous driving SoC platform. Atlan is compatible with the software stacks of the Orin and Xavier platforms; built on a 5nm process, a single chip delivers 1,000 TOPS, about 4 times Orin. The platform uses new Arm CPU cores, a next-generation GPU, the latest DLA deep learning accelerator, a PVA computer vision accelerator, and a BlueField DPU providing advanced networking, storage, and security services with network speeds up to 400 Gbps; this is the first time the DRIVE platform has integrated a DPU. Atlan samples will be available to developers in 2023, with mass production in 2025.
At present, NVIDIA is far ahead in autonomous driving and keeps winning customers, who fall roughly into three categories: emerging EV makers, traditional automakers, and autonomous driving companies. (1) Emerging EV makers: NIO (ET5, ET7), Xpeng (P5, P7, G9), Li Auto (X01), WM Motor (M7), SAIC's IM Motors, R Auto, FF, etc.; (2) Traditional automakers: Mercedes-Benz, Volvo, Hyundai, Audi, Lotus, etc.; (3) Autonomous driving and robotaxi companies: GM Cruise, Amazon Zoox, China's Didi, Volvo commercial vehicles, Kodiak, TuSimple, Zhijia Technology, AutoX, Chima Zhixing, WeRide, etc.
3. Building an end-to-end autonomous driving platform and an open, efficient R&D ecosystem.
NVIDIA offers a full-stack toolchain spanning chips, hardware platforms, system software, functional software, application software, and simulation, testing, and training platforms. Development starts from the DRIVE AGX hardware platform; software algorithms are validated on DRIVE Constellation; after full validation, the software is deployed for on-road testing via the DRIVE Hyperion reference architecture; deep learning models are trained on DGX high-performance servers; and the whole process iterates. Concretely, the full stack runs from the chips (Xavier/Orin/Atlan), the DRIVE AGX hardware platform, DRIVE OS, DriveWorks, and the DRIVE AV autonomous driving software stack to the DRIVE Hyperion data-collection and development-verification kit, the DRIVE Constellation virtual simulation platform, and the DGX high-performance training platform.
NVIDIA's DRIVE autonomous driving platform creates an end-to-end, open, and efficient R&D ecosystem for customers. Its core advantages can be summarized in the following five points:
(1) Software-hardware decoupling: the platform is highly decoupled and supports independent hardware and software upgrade paths;
(2) Hardware advantages: As a GPU leader, NVIDIA has obvious advantages in hardware;
(3) Ecosystem completeness: it has the industry's most complete official development kits, and its developer community is well developed;
(4) Ecosystem openness: the software layers are highly open, with APIs exposed in DriveWorks (the functional software layer) and in DRIVE AV and DRIVE IX (the application software layer);
(5) R&D lock-in: deep learning algorithm acceleration is all based on NVIDIA's own CUDA and TensorRT, so customers' software development systems cannot easily leave the NVIDIA platform.
III. Qualcomm:
Far ahead in the smart cockpit, catching up fast in autonomous driving
1. Create a "digital chassis" and comprehensively lay out the four major areas of intelligent vehicles.
Qualcomm is the dominant player in consumer electronics and continues to build out its intelligent connected-vehicle business. It launched the third-generation cockpit platform, the 8155, in 2019 and the fourth-generation platform, the 8295, in 2021; in autonomous driving, Qualcomm released the Snapdragon Ride platform at CES 2020. Qualcomm now has more than 25 leading automaker customers, its products cover more than 200 million connected vehicles worldwide, and its footprint in intelligent vehicles keeps expanding.
Qualcomm builds its digital chassis from four platforms: car-to-cloud, cockpit, driving, and connectivity. In automotive, Qualcomm aims to build a "digital chassis" composed of four parts — the Snapdragon Car-to-Cloud platform, the Snapdragon Cockpit platform, the Snapdragon Ride platform, and the Snapdragon Auto Connectivity platform — creating an open, customizable, upgradeable, intelligent, and connected digital chassis that helps Tier 1 suppliers and OEMs improve the customer experience.
Qualcomm's automotive business revenue is growing rapidly. In FY2021 it reached $975 million, up 51.4% year-on-year; automotive revenue for FY2019-FY2021 was $640 million, $644 million, and $975 million. Qualcomm expects automotive revenue to reach $3.5 billion in five years and $8 billion in ten.
2. Far ahead in the intelligent cockpit, with a near-monopoly position in high-end digital cockpits.
Qualcomm is the leader in smart cockpit chips. From the first-generation cockpit chip, the 602A, launched in 2014, through the second-generation 820A to the third-generation 8155, market penetration has kept rising; almost every recently launched new model's cockpit is equipped with the Qualcomm 8155. Leading automakers at home and abroad — including Mercedes-Benz, Audi, Porsche, Jaguar Land Rover, Honda, Geely, Great Wall, GAC, BYD, Lynk & Co, Xpeng, Li Auto, and WM Motor — have launched or announced models equipped with the Snapdragon digital cockpit platform.
The Qualcomm Snapdragon SA8155P is the most powerful cockpit SoC available for mass-production cars today. Qualcomm's third-generation cockpit platform, the SA8155P, is built on TSMC's 7nm process and is the first automotive-grade digital cockpit SoC on 7nm. In performance terms, the 8155 is currently the most powerful cockpit SoC selectable for mass-production vehicles, and 20 of the world's top 25 automakers use it. The 8155 is a multi-core heterogeneous system with three times the performance of the earlier 820A platform, offering extremely strong heterogeneous computing: multi-core AI computing units, a Spectra ISP, Kryo 485 CPUs, a Hexagon DSP, and the sixth-generation Adreno 640 GPU. The Hexagon DSP adds vector extensions (HVX) and a tensor accelerator (HTA); these dedicated AI modules greatly increase the chip's AI computing power.
Qualcomm's fourth-generation smart cockpit platform, the SA8295P, brings a major performance uplift. Released in July 2021, the SA8295P uses a 5nm process with the sixth-generation octa-core Kryo 680 CPU and the Adreno 660 GPU. It supports simultaneously driving multi-screen scenarios such as the instrument cluster, cockpit screens, AR-HUD, rear-seat displays, and electronic rearview mirrors; the computing power of major units such as the CPU and GPU is more than 50% higher than the 8155's, and AI computing power more than 100% higher.
Baidu's Jidu Auto is the first to launch with the Qualcomm 8295, with the mass-production model expected to be delivered in 2023. On November 29, 2021, Jidu, Baidu, and Qualcomm held a signing ceremony in Shanghai under which Jidu becomes the launch customer for the Qualcomm 8295; Jidu's first "automotive robot" is expected to reach mass production and delivery in 2023. In addition, the Qualcomm 8295 has been designated by Great Wall, GAC, GM, and other automakers, with related models also expected to be delivered in 2023.
At CES 2022, Zhongke Chuangda (ThunderSoft) released a new intelligent cockpit solution based on the Qualcomm SA8295 hardware platform. The solution exploits the SA8295's strength in computing power, graphics, and image processing to create a one-chip, multi-screen cockpit domain controller covering the digital instrument cluster, central control entertainment, front-passenger entertainment, dual rear-seat entertainment, streaming rearview mirrors, and head-up display. Building on its deep in-vehicle OS expertise, the company innovatively bridges the cockpit and autonomous-driving technical domains, better supporting 360° surround view and intelligent parking, and uses the cockpit domain's redundant computing power to deliver safe, reliable low-speed parking while reducing solution cost.
3. Releasing the Ride platform and acquiring Veoneer's Arriver, continuously strengthening the driving domain.
Qualcomm unveiled the Snapdragon Ride autonomous driving platform at CES 2020 to support the development of autonomous driving systems. The platform is built on a range of Snapdragon automotive SoCs and accelerators, featuring scalable, modular, high-performance heterogeneous multi-core CPUs, energy-efficient AI and computer vision engines, and an industry-leading GPU. It also includes the Snapdragon Ride safety SoC, the Snapdragon Ride safety accelerator, and the Snapdragon Ride autonomous driving stack.
The Snapdragon Ride software platform includes planning stacks, localization stacks, perception-fusion stacks, system frameworks, core software development kits (SDKs), operating systems, and the hardware system. Qualcomm's dedicated autonomous driving software stack is a modular, scalable solution integrated into the Snapdragon Ride platform, designed to help automakers and Tier 1 suppliers accelerate development and innovation. The stack lets automakers bring greater safety and comfort to everyday driving through software optimized for complex use cases — such as human-like highway driving with automatic navigation — and through modular options for perception, localization, sensor fusion, and behavior planning. The platform's software framework supports hosting customer-specific stack components alongside Snapdragon Ride autonomous driving stack components.
Qualcomm acquired Arriver, the software business of Veoneer, to comprehensively strengthen its autonomous driving domain. Headquartered in Stockholm, Sweden, Veoneer was spun off in 2018 from Autoliv, the world's largest manufacturer of airbags and seatbelts, and is dedicated to advanced driver assistance systems (ADAS) and collaborative autonomous driving (AD) systems, with products spanning radar systems, ADAS electronic control units (ECUs), vision systems, lidar systems, and thermal imaging. In 2020, Veoneer consolidated its ADAS, collaborative driving, and automated-driving software development into one division and named it Arriver.
Integrating the Arriver vision software stack, Qualcomm launched the Snapdragon Ride Vision system. Unveiled at CES 2022, Ride Vision features a new open, scalable, modular computer vision software stack built on a 4nm-process SoC, designed to optimize front-view and surround-view camera deployments in support of advanced driver assistance systems (ADAS) and autonomous driving (AD).
4. Cooperation with OEMs keeps growing, and related mass-production models are about to land.
Since launching the Snapdragon Ride autonomous driving platform in early 2020, Qualcomm has reached cooperation agreements with OEMs including GM, Great Wall, and BMW to equip next-generation vehicles with the Ride platform, and the related mass-production models are about to land.
(1) General Motors (GM):
GM will run its next-generation Ultra Cruise driver-assistance system on the Qualcomm Ride platform. Qualcomm announced a partnership with GM at CES 2020 covering the digital cockpit, in-vehicle information processing, and ADAS. GM recently revealed the Ultra Cruise computing platform, which consists of two Snapdragon SA8540P SoCs and one SA9000P AI accelerator, providing critical low-latency control on 16-core CPUs and more than 300 TOPS of high-performance AI compute for camera, radar, and lidar processing. Built on 5nm process technology for superior performance and energy efficiency, the SA8540P will provide the bandwidth needed for Ultra Cruise's sensing, perception, planning, localization, mapping, and driver monitoring. GM plans to launch the all-electric Cadillac CELESTIQ in 2023, equipped with its self-developed Ultra Cruise software stack, which covers 95% of driving scenarios and allows hands-free driving.
(2) Great Wall Motor:
Great Wall Motor and Qualcomm have reached an autonomous driving partnership, with mass-production vehicles planned for 2022. In December 2020, the two companies announced that Great Wall will be the first to use the Snapdragon Ride platform to build its high-computing-power intelligent driving system — the Great Wall Coffee Intelligence driving system — in high-end models entering mass production in 2022, making Great Wall the first Chinese automaker to adopt Snapdragon Ride. In July 2021, Great Wall officially released ICU 3.0, its autonomous driving computing platform built on Snapdragon Ride, with the first mass-production models using the platform scheduled for delivery in the second quarter of 2022.
Great Wall's Haomo Zhixing (Haomo.AI) launched a domain controller based on the Qualcomm Ride platform at CES 2022. Founded in November 2019 out of Great Wall Motor's intelligent driving department, Haomo has in two years built a full-stack self-developed autonomous driving solution and a data intelligence center, with business spanning passenger cars, unmanned logistics vehicles, and intelligent hardware. Qualcomm Ventures participated in Haomo's RMB 1 billion Series A round at the end of 2021, bringing its post-money valuation above $1 billion. At CES 2022, Haomo and Qualcomm unveiled Little Magic Box 3.0, billed as the world's highest-computing-power mass-production autonomous driving computing platform: a single board delivers 360 TOPS, continuously upgradable to 1,440 TOPS. It is also the world's first mass-produced autonomous driving computing platform on Qualcomm's 5nm process. Its SA8540P SoC + SA9000 combination supports access to 6 Gigabit Ethernet channels, 12 8-megapixel cameras, 5 millimeter-wave radars, and 3 lidars; it can handle degraded L1/L2 control and also meet current L3 and subsequent L4/L5 full-scenario autonomous driving functions.
(3) BMW Group:
Qualcomm and BMW Group have reached an autonomous driving partnership, with the related models to be mass-produced in 2025. At Qualcomm's investor conference in November 2021, Qualcomm announced cooperation with BMW Group in autonomous driving: BMW's next-generation models will use the Snapdragon Ride platform, including Qualcomm's central computing SoC and other core components, with mass production in 2025.
IV. Mobileye:
The ADAS pioneer, currently first in market share
1. The ADAS pioneer, with cumulative EyeQ series shipments exceeding 100 million units.
Mobileye has focused on the ADAS track since 1999. Founded in 1999 by Professor Amnon Shashua and Ziv Aviram of the Hebrew University in Israel, Mobileye started from vision algorithms and developed autonomous-driving systems and the EyeQ series of chips. In 2007, its EyeQ1 entered mass production with BMW, GM, and Volvo; the EyeQ2 followed in 2008 and the EyeQ3 in 2014. Mobileye listed on NASDAQ in 2014 at a market value of up to $8 billion, was acquired by Intel in 2017 for $15.3 billion and delisted to become Intel's autonomous driving business unit, and Intel plans to take Mobileye public in the U.S. again in mid-2022.
Mobileye's Q3 2021 revenue rose 39% year-on-year, with a revenue CAGR of about 18% over 2018-2020. In 2021, Mobileye won 41 new orders from more than 30 automakers, covering about 50 million future vehicles. According to Intel's earnings report, Mobileye's Q3 2021 revenue was $326 million, up 39% year-on-year; operating revenue grew from $698 million in 2018 to $967 million in 2020, a compound growth rate of 17.7%.
Since 2007, Mobileye EyeQ series chips have shipped more than 100 million units in total. Since the EyeQ1 entered mass production with BMW, GM, and Volvo in 2007, 100 million EyeQ chips have been shipped. EyeQ shipments continue to grow, though the pace has fluctuated: sales from 2018 to 2021 were 12.4 million, 17.5 million, 19.3 million, and 28.1 million units, with year-on-year growth of 43%/41%/10%/46%.
Mobileye still leads in market share, but it is gradually losing ground. Over the past two decades, based on visual perception technology, Mobileye has offered solutions combining its algorithms with EyeQ chips, helping automakers implement functions from L0 collision warning to L1 AEB emergency braking and ACC adaptive cruise, and on to L2 ICC integrated cruise. Mobileye still ranks first with a 36.29% market share, and automakers at home and abroad including BMW, Volvo, Audi, NIO, Great Wall, and even Tesla have used EyeQ chips. But Mobileye is gradually falling behind: BMW, which formed an autonomous driving alliance with Mobileye in 2016, recently partnered with Qualcomm Ride, and automakers such as NIO and Li Auto chose NVIDIA Orin chips for their new-generation models.
2. A significant power-consumption advantage, but a relatively closed algorithm ecosystem.
Since the first release in 2007, the Mobileye EyeQ series has spanned five generations. The latest in mass production is the EyeQ5: released in 2018 and mass-produced from 2021, fabricated by TSMC on a 7nm FinFET process, the EyeQ5 system uses a dual-CPU design with 8 processor cores and 18 vision-processor cores, delivering 24 TOPS at 10 W of power consumption.
EyeQ5 adopts the "CPU + ASIC" architecture, which has extremely low power consumption but is relatively closed in ecology. EyeQ5 has four main modules: CPU, Computer Vision Processors (CVP), Deep Learning Accelerator (DLA), Multithreaded Accelerator (MA), of which CVP is an ASIC module designed for traditional computer vision algorithms, and is known for running these algorithms with proprietary ASICs to achieve extremely low power consumption. However, its algorithm system is relatively closed, which is a black box for OEMs and Tier 1, and they cannot make secondary modifications to differentiate their own algorithm functions. Mobileye's algorithm solution is still based on traditional computer vision algorithms and supplemented by deep learning algorithms, which directly determines its CVP-based architecture with DLA as the supplement.
3. Releasing high-computing-power, advanced-process chips and targeting high-level autonomous driving.
Mobileye unveiled three new chips — the EyeQ Ultra, EyeQ6 Light, and EyeQ6 High — at CES 2022. In addition, Mobileye and Geely's Zeekr jointly announced an L4-capable pure electric vehicle to launch by 2024, based on Geely's SEA platform and using six EyeQ5 chips, following Mobileye's open collaboration model for driving policy and mapping technology, with deep software integration between the two sides.
EyeQ Ultra: aimed at L4 autonomous driving; built on a 5nm process with 176 TOPS, roughly the performance of 10 EyeQ5 chips. It has a 12-core, 24-thread CPU, two general-purpose compute accelerators, and two CNN accelerators. Samples are expected in 2023 and mass production in 2025;
EyeQ6 High: aimed at L2 autonomous driving; built on a 7nm process with 34 TOPS. It has an 8-core, 32-thread CPU, two general-purpose compute accelerators, and two CNN accelerators. Samples are expected in 2022 and mass production in 2024;
EyeQ6 Light: aimed at L1-L2 driver assistance; built on a 7nm process with 5 TOPS. It has a 2-core, 8-thread CPU, one general-purpose compute accelerator, and one CNN accelerator. An iteration of the previous-generation EyeQ4, its package is 55% the size of the EyeQ4's. Mass production is expected in 2023.
V. Huawei:
Comprehensively empower vehicle intelligence with ICT technology
1. Firmly adhering to the "platform + ecosystem" strategy, laying out five major business segments.
Huawei's smart vehicle solutions cover five business segments: intelligent connectivity, intelligent driving, intelligent cockpit, intelligent electrification, and intelligent vehicle cloud services. Since establishing its IoV (Internet of Vehicles) lab in 2014, Huawei has been building technology reserves in intelligent connected vehicles, and in May 2019 it formally established the Smart Car Solutions BU, fully entering the smart-car track. Huawei proposed the CCA (compute and communication architecture), using distributed networking plus domain controllers to divide the vehicle into three parts — driving, cockpit, and vehicle control — and launched three platforms on top of it: the intelligent driving platform (MDC), the intelligent cockpit platform (CDC), and the vehicle control platform (VDC). Huawei adheres to a "platform + ecosystem" development strategy, focuses on ICT technology, builds ecosystems around the iDVP, MDC, and HarmonyOS smart cockpit platforms, and works with partners to help automakers build good cars.
Building an open, win-win iDVP intelligent vehicle digital base to achieve layered decoupling of software and hardware. In its intelligent vehicle digital architecture, Huawei provides the iDVP (intelligent digital vehicle platform — i for intelligent, D for digital, V for vehicle, P for platform) as the digital base, comprising the compute and communication architecture (CCA), the in-vehicle operating systems, the multi-domain collaborative software framework HAS Core, and a complete vehicle-grade toolchain. On top of it, Huawei builds hardware and software ecosystems, jointly defines hardware and software interfaces with partners, co-develops atomic services, achieves layered decoupling of software and hardware, and helps automakers rapidly develop cross-vendor, cross-device applications that deliver a continuously evolving user experience. Huawei also actively participates in industry alliances, builds consensus, and contributes industry standards based on its own practice.
2. Building an open, win-win intelligent driving ecosystem on the Huawei MDC computing platform.
Huawei MDC (Mobile Data Center) is positioned as the computing platform for intelligent driving. It draws on Huawei's more than 30 years of R&D and manufacturing experience in ICT, provides developers with a full-scenario toolchain and a rich SDK, supports partners' software development and porting, and meets the core automotive-grade and functional-safety requirements of intelligent driving applications. More than 70 partners have joined the MDC ecosystem to jointly advance pilots and commercialization in intelligent driving scenarios such as passenger cars, ports, mining trucks, and campuses.
Huawei's MDC platform follows platformization and standardization principles, covering platform hardware, platform software services, the functional software platform, supporting toolchains, and device-cloud collaboration services. It supports service-oriented components, standardized interfaces, and development tools. Software and hardware are decoupled, so one software architecture runs across different hardware configurations, supporting smooth evolution from L2+ to L5 and protecting customers' and ecosystem partners' historical investment in application software. The MDC system architecture is scalable: by increasing or decreasing the number of CPU cores, AI acceleration cores, and I/O interfaces, it can address scenarios from driver assistance to high-level intelligent driving across high-, mid-, and low-end passenger cars.
Huawei MDC takes a CPU + NPU route. Taking the MDC 300F released by Huawei in 2018 as an example, it integrates Huawei's self-developed host CPU chip, AI chip, ISP chip, and SSD controller chip. CPU: Huawei's self-developed Kunpeng 920 processor, Arm-based, on a 7nm process at 2.0 GHz, with a maximum power consumption of 55 W. NPU: Huawei's self-developed Ascend 310 processor, based on the Da Vinci AI architecture, providing 16 TOPS@INT8 on a 12nm process at a maximum power consumption of 8 W.
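Using power and computing-power figures quoted elsewhere in this report, one can compare rough energy efficiency across the chips discussed; note that peak-TOPS definitions and power conditions differ by vendor, so these are indicative numbers, not a like-for-like benchmark.

```python
# Rough energy-efficiency comparison (TOPS per watt) using figures quoted
# in this report; vendors measure peak TOPS and power differently.
chips = {
    "NVIDIA Orin":       (254, 55),  # 254 TOPS at <55 W
    "Mobileye EyeQ5":    (24, 10),
    "Huawei Ascend 310": (16, 8),    # 16 TOPS @ INT8, 8 W
    "Horizon Journey 2": (4, 2),
}
for name, (tops, watts) in chips.items():
    print(f"{name}: {tops / watts:.1f} TOPS/W")
```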
Flexible combination and application of a common set of algorithm components. Huawei's MDC platform supports access to a variety of intelligent-driving-related sensors, actuators, IVI, or T-Box units, with rich, flexible mainstream standardized hardware interfaces such as GMSL, CAN, CAN-FD, and Automotive Ethernet, providing broad compatibility and flexibility of choice. Meanwhile, MDC's functional software is built on an SOA architecture following the AUTOSAR specification: it defines the basic algorithm components of intelligent driving and standard software interfaces between the calling framework and the components — perception, fusion, localization, decision, planning, and control algorithm components. Upper-level scene applications can flexibly select different combinations of these components to realize specific scenario functions.
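A minimal sketch of this SOA-style composition pattern follows; it is illustrative only — the component names mirror the list above, but the interfaces are hypothetical and are not Huawei's MDC API.

```python
# Illustrative sketch (not Huawei's MDC API): in an SOA-style functional
# software layer, standard algorithm components expose uniform interfaces,
# and a scene application is assembled by selecting a combination of them.
from typing import Callable, List

Component = Callable[[dict], dict]  # each component transforms a shared context

def perception(ctx: dict) -> dict:  ctx["objects"] = ["car", "lane"]; return ctx
def fusion(ctx: dict) -> dict:      ctx["fused"] = ctx["objects"]; return ctx
def planning(ctx: dict) -> dict:    ctx["trajectory"] = "keep_lane"; return ctx
def control(ctx: dict) -> dict:     ctx["command"] = "steer:0.0"; return ctx

def scene_application(components: List[Component], frame: dict) -> dict:
    # The calling framework invokes the selected components in order.
    for comp in components:
        frame = comp(frame)
    return frame

# A highway-pilot scene might select this combination of components:
print(scene_application([perception, fusion, planning, control], {}))
```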
Huawei's MDC product line has gradually filled out: the MDC 300F/210/610/810 have been released, covering full-scenario autonomous driving from L2+ to L5. In 2019, Huawei officially launched the MDC 300F (64 TOPS) for commercial-vehicle scenarios, formally opening its MDC ecosystem building. In September 2020, at the Huawei Intelligent Vehicle Solution Ecosystem Forum, Huawei released the MDC 210 (48 TOPS, suited to L2+ autonomous driving) and the MDC 610 (200+ TOPS, suited to L3/L4). At the 2021 Shanghai Auto Show, Huawei released the MDC 810 with 400+ TOPS, and it plans to release the MDC 100 in 2022 to further round out the MDC product line.
Huawei also provides a series of MDC developer kits, including the MDC toolchain, the MDC Core SDK, and a vehicle-cloud collaborative open platform. On MDC platform hardware run the intelligent driving operating systems AOS and VOS together with MDC Core, along with a complete development toolchain. The operating system, platform software, and functional-software middleware of the MDC platform provide standard open APIs and SDKs which, combined with simple, easy-to-use toolchains, help customers and ecosystem partners improve R&D efficiency and achieve rapid development, commissioning, deployment, and operation of intelligent driving applications.
Huawei's cooperation with partners is divided into two modes:
One is the Huawei Inside model, in which Huawei provides a full-stack smart driving solution including intelligent driving application software, the computing platform, and sensors.
The other is the MDC platform model, in which Huawei provides the MDC intelligent driving computing platform, mainly including the SoC hardware, the intelligent driving operating system, the vehicle control operating system, and AUTOSAR middleware.
At present, five automakers — BAIC, GAC, Changan, Seres (Xiaokang), and Great Wall — have confirmed adoption of Huawei's MDC platform. At the 2021 Guangzhou Auto Show, the GAC Aion LX Plus and Great Wall's Saloon Mecha Dragon both chose MDC as their intelligent driving computing platform. In addition, BAIC BJEV's Arcfox Alpha, the Seres SF5 and AITO M5, and Changan's Avatr 11 are also confirmed to carry Huawei's MDC platform.
3. Building a HarmonyOS smart cockpit ecosystem for the Internet of Everything.
On August 14, 2020, Huawei announced three HarmonyOS-based in-vehicle operating systems: the HarmonyOS cockpit operating system (HOS), the intelligent driving operating system (AOS), and the intelligent vehicle control operating system (VOS). With hardware modularization, interface standardization, and system platformization as its goals, Huawei is building a smart cockpit ecosystem around the HarmonyOS in-vehicle operating system. To date, Huawei has incrementally developed 9 categories of vehicle-enhancement capabilities on HarmonyOS, opened 1,517 in-vehicle business APIs and more than 13,000 HarmonyOS APIs, and provides comprehensive, open tools and technical support, reducing the difficulty of integrating and developing cockpit systems, helping partners rapidly develop and migrate applications, and bringing users a rich human-vehicle life experience.
Based on HarmonyOS, Huawei has established cooperation with more than 150 software and hardware partners, jointly defining hardware interfaces so that hardware is plug-and-play, replaceable, and upgradeable, with diverse hardware interconnected and exposed to applications through APIs. This enables rapid development of cockpit systems with full-scenario coverage and multi-device collaboration, providing consumers personalized, intelligent, and diversified service experiences. On Huawei's latest cockpit demo, partners have already deployed a variety of HarmonyOS peripherals, such as an in-car sky screen, electronic rearview mirrors, holographic projection, the steering system, and intelligent health seats.
Huawei is building a truly intelligent, fully connected HarmonyOS smart cockpit ecosystem. Its "one core, multiple screens" cockpit solution lets the LCD instrument cluster, AR-HUD (head-up display), central control and entertainment screens, and the front-passenger screen all be driven by the same chip. Around the HarmonyOS in-vehicle operating system, Huawei builds a cockpit application ecosystem that is "rich in applications, diverse in experience, and always fresh" in three ways: (1) for high-frequency in-car applications, it works with partners to deeply adapt them to HarmonyOS's in-vehicle characteristics and create high-quality HarmonyOS applications; (2) based on Huawei's "1+8" full-scenario ecosystem, applications from phones, tablets, and smart screens can be seamlessly carried over into the car; (3) for infrequently used long-tail applications, the HarmonyOS in-vehicle OS also provides phone screen-casting to meet users' diverse experience needs.
VI. Horizon:
Achieving the 0-to-1 breakthrough in domestic automotive-grade AI chips
1. The pioneer of domestic automotive-grade AI chips, with cumulative chip shipments exceeding 1 million.
Horizon is currently the only Chinese company whose automotive-grade AI chips have achieved large-scale, front-installed mass production. Founded in 2015 by Dr. Yu Kai, an AI and deep learning scientist, Horizon released China's first automotive-grade AI chip, the Journey 2, in 2019 and achieved front-installed mass production in 2020. Horizon currently has three generations of products: Journey 2 (released 2019), Journey 3 (2020), and Journey 5 (2021). It will also introduce the more powerful Journey 6, on a 7nm process with more than 400 TOPS of computing power. The Journey 5 has already secured vehicle design wins, with mass production in the second half of 2022; Journey 6 engineering samples are expected in 2023 and mass production in 2024.
Horizon's shareholder base is rich in automotive-industry-chain investors, which helps the company win more OEM customers. Horizon completed a $1.5 billion Series C7 round in July 2021 at a post-money valuation of up to $5 billion. Across past rounds, OEMs including SAIC, GAC, BYD, Dongfeng, and Great Wall have invested, alongside automotive-chain companies such as CATL, Will Semiconductor, Sunny Optical, BOE, and Xingyu.
Cumulative shipments of Horizon Journey series chips have exceeded 1 million, and OEM customers keep breaking through. As of January 2021, Journey series shipments exceeded 1 million units, with more than 40 front-installed mass-production projects won. Since the launch in March 2020 of the Changan UNI-T, the first model carrying the Journey 2, Horizon has cooperated with Changan, SAIC, GAC, FAW, Li Auto, Chery, and Great Wall, as well as well-known overseas OEMs and Tier 1s such as Audi, Continental, and Faurecia.
2. The self-developed BPU AI accelerator delivers extreme computing efficiency.
Horizon adopts the "CPU + ASIC" technology route and develops its own AI accelerator BPU (ASIC). Taking the Horizon Journey 2 chip as an example, the BPU (ASIC chip) using the Bernoulli 1.0 architecture developed by Horizon adopts the dual-core ARM Cortex-A53 CPU, the equivalent computing power of the Journey 2 exceeds 4 TOPS, the power consumption is only 2W, reaching the vehicle-level AEC-Q100 standard, and the utilization rate of the typical algorithm model in the Journey 2 chip can be higher than 90%.
Horizon's self-designed, AI-dedicated computing architecture, the BPU, has evolved through multiple generations. Horizon independently designed and developed the Brain Processing Unit (BPU) and has launched the Gaussian architecture, Bernoulli 1.0 (used in the Journey 2 chip), Bernoulli 2.0 (Journey 3), and the Bayesian architecture (Journey 5); the next-generation Journey 6 will integrate the fourth-generation BPU architecture, Nash.
3. Combining software and hardware to build an "algorithm + chip + toolchain" autonomous driving ecosystem.
Based on "algorithm + chip + toolchain", Horizon creates a "Tiangong Kaiwu" AI development platform. Horizon's "start-to-start" AI full-life cycle development platform based on Horizon's self-developed AI chip includes three functional modules: Model Zoo, AI Chip Toolchain, and AI Application Development Middleware (AI Express). Developers cooperate with the Horizon AI toolchain, adapt to the mainstream training frameworks Caffe, MXNet, TensorFlow and PyTorch, support ONNX, and provide model repositories to accelerate customer development and deployment of their own algorithms, and improve the efficiency of customer product application development.
Model Zoo: three types of algorithm resources — production-grade algorithms, basic algorithms, and product reference algorithms — empowering Horizon chip partners to develop their own AI products faster and more economically;
AI chip toolchain: quantization training tools and floating-point-to-fixed-point conversion tools, providing Horizon chip developers with basic tools for model training, model conversion, and application development and deployment;
AI application development middleware (AI Express): two application development frameworks, XStream and XProto, providing rich, highly reusable algorithm modules, business strategy modules, application components, and scenario-application reference programs, designed to accelerate customers' path from business model to application development.
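As a concrete example of the framework-to-ONNX handoff described above, here is a minimal export sketch using standard PyTorch APIs; the toy model is a stand-in, and nothing here uses Horizon's own tools.

```python
# Minimal sketch of exporting a training-framework model to ONNX, the
# interchange format the toolchain supports; plain PyTorch APIs only.
import torch
import torch.nn as nn

model = nn.Sequential(               # toy perception-backbone stand-in
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).eval()

dummy = torch.randn(1, 3, 224, 224)  # one RGB camera frame
torch.onnx.export(model, dummy, "model.onnx", opset_version=11)
# The resulting model.onnx is what a vendor toolchain would then quantize
# and compile for the target accelerator.
```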
Horizon's latest autonomous driving reference platform, the Matrix 5, delivers up to 512 TOPS. Horizon launched the Matrix 2 computing platform (four Journey 2 chips, up to 16 TOPS) in 2020, and in 2021 launched the Matrix 5, based on four Journey 5 chips with up to 512 TOPS, meeting the needs of ADAS, high-level autonomous driving, intelligent cockpit, and other scenarios. It offers rich interfaces, including 48 GMSL2 camera input channels supporting multiple 8MP@30fps streams, plus access for millimeter-wave radar, 4D imaging radar, lidar, ultrasonic sensors, and microphone arrays, enabling all-round, multi-modal sensing inside and outside the vehicle.
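A quick consistency check of the Matrix 5 figure; the 128 TOPS per-chip number for the Journey 5 is an assumption based on Horizon's public launch materials, not a figure stated in this report.

```python
# Consistency check: four Journey 5 chips per Matrix 5 board.
# Assumption (not stated above): one Journey 5 delivers 128 TOPS.
journey5_tops = 128
print(4 * journey5_tops)  # -> 512 TOPS, matching the Matrix 5 figure
```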
VII. Related Companies:
Zhongke Chuangda, Desay SV, Guangting Information, Neusoft Group, NavInfo, Jingwei Hengrun
Zhongke Chuangda (300496.SZ): a world-leading provider of intelligent platform technology services.
Desay SV (002920.SZ): a Tier 1 leader in automotive electronics, with a significant first-mover advantage in ADAS.
Guangting Information (301221.SZ): a leading provider of software solutions for smart cars.
Neusoft Group (600718.SH): the smart-car wave injects new life into an established software leader.
NavInfo (002405.SZ): building on maps and chips to become a leader in automotive intelligence.
Jingwei Hengrun (A21257.SH): a leading comprehensive electronic systems technology service provider.
First published on the WeChat official account: Diamond Research Report.