EV ADAS/AV Compute Platforms


ADAS (Advanced Driver-Assistance Systems) and AV (Autonomous Vehicle) compute is the hardware that runs perception, sensor fusion, planning, and control decision workloads in real time. In most vehicles, this compute is primarily an inference platform: it runs trained models and deterministic algorithms on-vehicle to make driving decisions without relying on cloud latency.

Key terms

  • Inference: running trained models on the vehicle to interpret sensor data and generate decisions
  • SoC (System-on-Chip): integrated compute chip combining CPU cores and accelerators (GPU/NPU)
  • NPU (Neural Processing Unit): AI accelerator optimized for matrix operations
  • Safety MCU: deterministic safety controller supervising compute and outputs
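
To make these terms concrete, the sketch below walks one pass of a simplified decision loop: sensor frames are read, a trained model runs on the NPU, the result is fused and planned on the SoC, and a safety MCU gates the output before it reaches the actuators. This is a minimal illustration in Python; every class and function name is a hypothetical placeholder rather than a real vehicle or vendor API, and the 20 Hz cycle budget is an assumed figure.

    # Minimal, illustrative sketch of a single on-vehicle inference cycle.
    # All names here are hypothetical placeholders, not a real vehicle API.
    import time

    CYCLE_BUDGET_S = 0.050  # assume a 20 Hz decision loop; real budgets vary by OEM

    class SafetyMCU:
        """Stand-in for the independent safety controller."""

        def validate(self, command):
            # Real supervision cross-checks plausibility, range, and freshness.
            return command is not None

        def safe_state_command(self):
            return {"type": "safe_stop"}

    def inference_cycle(read_sensors, run_model_on_npu, fuse, plan, safety_mcu):
        start = time.monotonic()
        frames = read_sensors()                # camera/radar/LiDAR samples
        detections = run_model_on_npu(frames)  # trained model executes on the NPU
        world_model = fuse(detections)         # sensor fusion
        command = plan(world_model)            # trajectory / control decision

        # Supervision: reject late or implausible commands and fall back to a
        # safe state instead of letting them reach the actuators.
        late = (time.monotonic() - start) > CYCLE_BUDGET_S
        if late or not safety_mcu.validate(command):
            return safety_mcu.safe_state_command()
        return command

    if __name__ == "__main__":
        cmd = inference_cycle(
            read_sensors=lambda: ["frame"],
            run_model_on_npu=lambda frames: ["pedestrian"],
            fuse=lambda dets: {"objects": dets},
            plan=lambda world: {"type": "follow_lane", "speed_mps": 20.0},
            safety_mcu=SafetyMCU(),
        )
        print(cmd)  # {'type': 'follow_lane', 'speed_mps': 20.0}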

Inference platform vs central compute

Many OEMs use “inference computer” and “central ADAS compute” interchangeably. The practical distinction is architectural:

  • Inference platform: the compute module(s) responsible for real-time ADAS/AV decision workloads
  • Central compute: a broader architecture trend where fewer computers run multiple domains; ADAS inference may be one of those domains

Compute term | What it usually means | Typical scope | Example contents
Inference computer / inference platform | Vehicle-side compute that runs real-time perception and planning | ADAS/AV domain | SoC(s), safety supervision, DRAM, storage, Ethernet
ADAS computer | OEM label for the ADAS compute module | ADAS/AV domain | Same as inference platform in most contexts
Domain controller (ADAS DCU) | Domain compute that may include inference workloads | ADAS plus gateway/supervision | SoC or high-end MCU, Ethernet, CAN gateway, safety
Central compute | Fewer computers running multiple vehicle domains | Multi-domain vehicle platform | SoC(s) + virtualization + networking fabric + safety supervision
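
One way to picture the central-compute pattern is as a partition map: a single physical computer hosting several isolated domains, with ADAS inference as just one of them. The sketch below expresses that as plain data in Python; the partition names, ASIL ratings, and core counts are assumptions for illustration, not any OEM's actual configuration.

    # Hypothetical partition layout for a central compute node, as plain data.
    # Names, ASIL ratings, and core counts are illustrative assumptions.
    CENTRAL_COMPUTE_PARTITIONS = {
        "adas_inference": {"isolation": "hypervisor VM", "asil": "D", "cpu_cores": 8,
                           "accelerators": ["NPU"], "workloads": ["perception", "planning"]},
        "cockpit": {"isolation": "hypervisor VM", "asil": "B", "cpu_cores": 4,
                    "accelerators": ["GPU"], "workloads": ["cluster", "infotainment"]},
        "body_gateway": {"isolation": "separate MCU", "asil": "B", "cpu_cores": 1,
                         "accelerators": [], "workloads": ["CAN gateway", "diagnostics"]},
    }

    for name, p in CENTRAL_COMPUTE_PARTITIONS.items():
        print(f"{name}: ASIL-{p['asil']}, {p['cpu_cores']} cores, {p['isolation']}")

In this picture, the inference platform in the narrower sense corresponds to the adas_inference partition plus its safety supervision, regardless of whether it shares silicon with other domains.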

Compute architecture

ADAS/AV inference platforms are built from a small set of repeatable hardware blocks.

Block | What it does | Why it matters
SoC (CPU + GPU/NPU) | Runs perception, fusion, planning, and runtime systems | Primary performance driver for ADAS capabilities
AI accelerator (NPU / dedicated accelerator) | High-throughput model inference | Determines real-time model capacity under power constraints
Safety supervision (safety MCU / safety islands) | Monitors health and validates outputs; triggers safe state | Required for functional safety and higher autonomy
DRAM | Runtime memory for perception pipelines and models | Capacity and bandwidth can bottleneck performance
Storage (flash / NVMe) | Logs, maps, firmware, local buffering | Enables data capture and robust OTA update staging
Networking (Ethernet) | Moves sensor data to compute and commands back to controllers | Scales with camera/LiDAR payloads and topology
Power delivery | Provides stable rails and transient handling | Compute stability affects safety and reliability
Thermal solution | Removes heat from SoCs and memory | High-end ADAS compute can become a primary thermal load
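
As a rough illustration of how these blocks compose into one module specification, the Python sketch below collects them in a single record and derives a TOPS-per-watt figure, a common shorthand for comparing platforms. All numbers are assumptions for a hypothetical mid-range domain controller, not any product's datasheet.

    # Illustrative composition of the hardware blocks above into one module spec.
    # The figures are assumptions, not a real product's datasheet.
    from dataclasses import dataclass

    @dataclass
    class InferencePlatformSpec:
        soc_tops: float            # aggregate accelerator throughput (INT8 TOPS)
        dram_gb: int
        dram_bandwidth_gbs: float  # GB/s
        storage_gb: int
        ethernet_ports: int
        sustained_power_w: float
        has_safety_mcu: bool = True

        def tops_per_watt(self) -> float:
            """Rough efficiency figure often used to compare platforms."""
            return self.soc_tops / self.sustained_power_w

    mid_range = InferencePlatformSpec(
        soc_tops=100, dram_gb=16, dram_bandwidth_gbs=100,
        storage_gb=256, ethernet_ports=6, sustained_power_w=45,
    )
    print(f"{mid_range.tops_per_watt():.1f} TOPS/W")  # ~2.2 TOPS/W for this sketch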

Compute scale by autonomy level

Compute requirements scale with sensor count, model complexity, and redundancy requirements.

Autonomy level (informal) | Typical compute pattern | Redundancy expectation | Network implication
L2 (ADAS) | Single ADAS SoC module or integrated ADAS ECU | Fail-safe; limited redundancy | Ethernet begins for cameras; CAN remains dominant for control
L2+ / L3 (conditional) | Higher-end SoC with more sensors and compute headroom | More supervision; increasing redundancy | Ethernet backbone expands; gateways bridge legacy buses
L4 (high autonomy) | Multi-SoC or redundant compute modules; higher storage and data logging | Fail-operational; physical redundancy expected | High-speed Ethernet fabrics; 10GBASE-T1 becomes relevant
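
To show how the redundancy column translates into hardware, the sketch below maps each level to a module count and a sustained power budget, assuming a hypothetical 120 W compute module. The mapping mirrors the table above and is illustrative, not a standard.

    # Rough module count and power budget per autonomy level.
    # The 120 W per-module figure is an assumption for illustration.
    REDUNDANCY = {
        "L2":     {"modules": 1, "pattern": "fail-safe; limited redundancy"},
        "L2+/L3": {"modules": 1, "pattern": "more supervision; increasing redundancy"},
        "L4":     {"modules": 2, "pattern": "fail-operational; physical redundancy"},
    }

    def platform_budget(level: str, module_power_w: float = 120.0) -> dict:
        cfg = REDUNDANCY[level]
        return {"level": level, **cfg,
                "sustained_power_w": cfg["modules"] * module_power_w}

    for lvl in REDUNDANCY:
        print(platform_budget(lvl))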

Safety supervision and command integrity

Safety supervision is the hardware that prevents unsafe commands from propagating when compute faults occur.

  • Safety islands inside SoCs: dedicated safety logic within the main compute chip
  • External safety MCU: independent controller that monitors health, cross-checks outputs, and triggers safe-state transitions
  • Independent power rails and watchdog circuits: keep supervision operating through transients
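
The sketch below shows the external-supervision pattern in miniature: the main SoC must keep proving it is alive through a heartbeat, and the supervisor substitutes a safe-state command when the heartbeat is stale or the command is implausible. The class, the 100 ms timeout, and the 45-degree plausibility bound are hypothetical illustration values, not a real safety MCU interface.

    # Minimal watchdog / cross-check pattern for external safety supervision.
    # Names, timeout, and plausibility bound are illustrative assumptions.
    import time

    class SafetySupervisor:
        def __init__(self, heartbeat_timeout_s: float = 0.1):
            self.heartbeat_timeout_s = heartbeat_timeout_s
            self._last_heartbeat = time.monotonic()

        def heartbeat(self):
            """Called periodically by the main SoC to prove it is alive."""
            self._last_heartbeat = time.monotonic()

        def check(self, command: dict) -> dict:
            """Pass the command through, or substitute a safe-state command."""
            stale = (time.monotonic() - self._last_heartbeat) > self.heartbeat_timeout_s
            implausible = abs(command.get("steering_deg", 0.0)) > 45.0
            if stale or implausible:
                return {"type": "safe_stop"}
            return command

    sup = SafetySupervisor()
    sup.heartbeat()
    print(sup.check({"type": "follow_lane", "steering_deg": 3.0}))   # passes through
    print(sup.check({"type": "follow_lane", "steering_deg": 90.0}))  # replaced by safe_stop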

Memory, storage, and data logging

ADAS compute is increasingly data-centric. Even when autonomy is not fully deployed, vehicles log edge cases for analysis and improvement.

  • DRAM: impacts real-time perception throughput
  • Persistent storage: impacts logging depth and OTA staging reliability
  • Write endurance and thermal stability become practical constraints
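
A back-of-the-envelope calculation makes the storage constraint concrete. The sketch below assumes eight cameras at roughly 3 MB/s each after compression, a 5% edge-case logging duty cycle, two hours of driving per day, and a 256 GB log partition; every figure is an assumption chosen only to show the shape of the math.

    # Back-of-the-envelope sizing for edge-case logging (all figures assumed).
    cameras = 8
    compressed_mb_per_s_per_camera = 3.0   # after on-board compression
    logging_duty_cycle = 0.05              # only edge cases are kept
    drive_hours_per_day = 2.0
    log_partition_gb = 256

    logged_gb_per_day = (cameras * compressed_mb_per_s_per_camera * 3600
                         * drive_hours_per_day * logging_duty_cycle) / 1000
    days_to_fill = log_partition_gb / logged_gb_per_day
    writes_per_day = logged_gb_per_day / log_partition_gb  # rough DWPD on the partition

    print(f"{logged_gb_per_day:.1f} GB/day logged; partition fills in {days_to_fill:.0f} days")
    print(f"~{writes_per_day:.3f} drive writes per day on the log partition")

Even at this modest assumed rate, sustained writes in a hot enclosure are why endurance and thermal stability appear as practical constraints.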

Networking interfaces to the IVN

ADAS compute platforms are typically Ethernet-rich nodes.

  • Multiple Ethernet ports for sensor ingestion and zonal uplinks
  • Gateway interfaces to CAN/CAN-FD and LIN via DCUs or ZCUs (architecture dependent)
  • Time synchronization support becomes more important as sensor fusion scales
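
The sketch below checks the raw output of a single high-resolution camera against common automotive Ethernet link rates, which is why link speed, compression, and topology dominate the networking design. The camera parameters and 16 bits per pixel are assumptions for illustration.

    # Raw camera bandwidth vs. automotive Ethernet link rates (figures assumed).
    def raw_camera_gbps(width, height, fps, bits_per_pixel=16):
        """Uncompressed stream rate in Gbit/s."""
        return width * height * fps * bits_per_pixel / 1e9

    cam = raw_camera_gbps(3840, 2160, 30)   # one ~8 MP camera at 30 fps
    print(f"one 8 MP camera, uncompressed: {cam:.1f} Gbit/s")

    ETHERNET_LINKS_GBPS = {"100BASE-T1": 0.1, "1000BASE-T1": 1.0,
                           "2.5GBASE-T1": 2.5, "10GBASE-T1": 10.0}
    for name, rate in ETHERNET_LINKS_GBPS.items():
        verdict = "fits" if cam <= rate else "needs compression or a faster link"
        print(f"{name}: {verdict}")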

Power and thermal constraints

Compute capability is limited by the vehicle power and thermal envelope.

  • Power: low-voltage rails (12 V or 48 V) must support sustained compute draw plus transient loads
  • Thermal: air cooling may be sufficient at lower compute levels; liquid cooling is used in higher-end designs
  • Reliability: automotive temperature ranges and vibration drive packaging and connector requirements
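
Two quick calculations show why power and thermal design constrain compute. The sketch below assumes a 250 W sustained module, a 105 °C junction limit, and an 85 °C worst-case ambient; all three figures are assumptions used only to illustrate the arithmetic.

    # Rough power-delivery and cooling arithmetic (all figures assumed).
    sustained_w = 250.0
    for rail_v in (12.0, 48.0):
        print(f"{rail_v:.0f} V rail: {sustained_w / rail_v:.1f} A sustained")

    # Single-lumped-node cooling check: required thermal resistance from
    # silicon to coolant/air for the whole module.
    t_junction_max_c = 105.0
    t_ambient_c = 85.0
    required_r_th_c_per_w = (t_junction_max_c - t_ambient_c) / sustained_w
    print(f"required thermal resistance: {required_r_th_c_per_w:.2f} °C/W")

A budget of well under 0.1 °C/W for a whole module is difficult to reach with air alone, which is one reason liquid cooling appears in higher-end designs.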

Supply-chain notes

ADAS/AV compute is a high-value concentration point in the vehicle BOM.

  • Automotive-grade SoCs and accelerators: constrained by advanced semiconductor nodes and packaging
  • DRAM and flash: capacity and bandwidth trends track sensor content
  • Ethernet switches and PHYs: required for high-rate sensor ingestion and zonal backbones
  • Thermal hardware: cold plates, heat sinks, and thermal interface materials scale with compute
  • Functional safety components: safety MCUs and safety islands are key differentiators

Inference compute platforms

OEMs choose from a few common platform categories. Specific model names and branding change frequently, but the underlying supply-chain pattern is stable.

  • Integrated ADAS SoC platforms: CPU plus GPU/NPU on a single chip, packaged as an automotive compute module
  • Multi-chip accelerator platforms: SoC plus one or more accelerators, sometimes with dedicated safety controllers
  • In-house platforms: OEM-designed inference chips and boards integrated tightly with vehicle OS and sensor suite

Tesla FSD, Mobileye EyeQ, NVIDIA DRIVE, and Qualcomm Snapdragon Ride are the most widely deployed inference compute platform families; Huawei ADS and Horizon Robotics Journey are also common, particularly in Chinese-market vehicles.

ADAS for cars

Car brand | ADAS system | Inference compute platform
Aion | ADiGO | NVIDIA DRIVE Orin
Aito | Qiankun ADAS (Huawei) | Huawei ADS SoC
Arcfox | α-Pilot ADS | Huawei Kirin 990A
Audi | Audi Pre Sense | Infineon
AVATR | Qiankun ADAS (Huawei) | Huawei ADS SoC
Baojun | Chengxing | Zhuoyu 7V+32
BMW | BMW Personal Pilot L3 | Qualcomm Snapdragon Ride
BYD | DiPilot | NVIDIA DRIVE Orin/Thor
Chevrolet | Ultra Cruise | Snapdragon, Infineon
Denza | DiPilot | NVIDIA DRIVE Orin
Ford | BlueCruise | Intel Mobileye EyeQ
Geely | G-Pilot | NVIDIA DRIVE Orin / Ecarx
GMC | Super Cruise | Qualcomm Snapdragon Ride
GWM | | Qualcomm Snapdragon Ride
HiPhi | HiPhi Pilot | NVIDIA DRIVE Orin
Huawei | Huawei ADAS | Ascend AI
Hyper | | NVIDIA DRIVE Thor
Hyundai | SmartSense | NVIDIA DRIVE Orin
IM | UNP | NVIDIA DRIVE Orin
Jaguar | InControl | NVIDIA DRIVE Orin
Land Rover | InControl | NVIDIA DRIVE Orin
Leapmotor | Leapmotor Pilot | NVIDIA DRIVE Orin
Li Auto | NOA | NVIDIA DRIVE Thor
Lotus | | NVIDIA DRIVE Orin
Lucid | DreamDrive | NVIDIA DRIVE Orin
Luxeed | Qiankun ADAS (Huawei) | Huawei ADS SoC
Lynk & Co | G-Pilot | NVIDIA DRIVE Orin
Mazda | i-Activsense | Qualcomm SA8155P
Mercedes-Benz | Drive Pilot | NVIDIA DRIVE Orin
Neta | Hozon Pilot | NVIDIA DRIVE Orin
NIO | NOP+ | NVIDIA DRIVE Orin
Nissan | ProPILOT | Renesas R-Car
Polestar | ADAS | NVIDIA DRIVE Orin
Porsche | InnoDrive | Intel Mobileye EyeQ
Renault | Active Driver Assist | Qualcomm Snapdragon Ride
Rivian | Rivian Driver+ | NVIDIA DRIVE Orin
SAIC | Navigation on Pilot | Horizon Robotics Journey 6
Smart | SuperVision | Intel Mobileye EyeQ
Stelato | Qiankun ADAS (Huawei) | Huawei ADS SoC
Stellantis | STLA AutoDrive | Qualcomm Snapdragon Ride
Subaru | EyeSight | Xilinx Zynq UltraScale+
Tata | | Qualcomm Snapdragon Ride
Tesla | Autopilot | Tesla FSD
Toyota | Safety Sense | NVIDIA DRIVE PX
Volvo | | NVIDIA DRIVE Orin
Voyah | Voyah Vpilot | Qualcomm Snapdragon Ride
VW | Travel Assist | Qualcomm Snapdragon Ride
Wuling | | Horizon Robotics Journey
Xiaomi | Xiaomi Pilot (XP) | NVIDIA DRIVE Orin
Xpeng | XNGP | NVIDIA DRIVE Orin/Thor
Yangwang | | NVIDIA DRIVE Orin
Zeekr | Haohan 2.0+ | NVIDIA DRIVE Thor

ADAS for long-haul trucks

Heavy-duty Class 7-8 electric and hybrid trucks require redundancy, sensor fusion, and specialized highway autonomy. Vendors include NVIDIA, Aurora, and Plus.ai.

Truck brand | ADAS system | Chip platform
MAN | Plus SuperDrive | NVIDIA DRIVE Thor
Scania | Plus SuperDrive | NVIDIA DRIVE Thor
Tesla Semi | Autopilot | Tesla FSD

Robotaxi AV platforms

Urban autonomous fleets run dense sensor arrays and high compute density; examples include Waymo, Cruise, Baidu Apollo, and Pony.ai. Autonomous platforms for passenger cars are still largely in the testing phase, with various technology companies offering "bolt-on" retrofits of existing car models. The exceptions are Tesla, DiDi, and Zoox, which integrate the technology during production.

Robotaxi brand | Autonomous driver | Chip platform
Avride | Avride Driver |
Baidu | Apollo Go | Kunlun AI
DiDi | | NVIDIA DRIVE Orin
Motional | Motional | NVIDIA DRIVE Orin
Tesla | Cybercab | Tesla FSD
Waymo | Waymo Driver | NVIDIA
WeRide | WeRide One | NVIDIA DRIVE Thor
Zoox | Zoox ADP | NVIDIA DRIVE Orin

Robotruck AV platforms

Highway freight haulers run autonomy stacks tuned for long corridors and platooning. Autonomous truck platforms are still largely in the testing phase, with various technology companies offering "bolt-on" retrofits for existing truck brands. The exceptions are the Tesla Semi and the Gatik Carrier, where the technology is integrated during production.

Tech brand | Autonomous driver | Chip platform
Aurora | Aurora Driver | NVIDIA DRIVE Orin
Einride | Einride autonomy | NVIDIA DRIVE Orin
Gatik Carrier | Gatik autonomy | NVIDIA DRIVE Orin
Kodiak | Kodiak Driver | NVIDIA DRIVE Orin
Plus | Plus.ai SuperDrive | NVIDIA DRIVE Thor
Pony | PonyPilot | NVIDIA DRIVE Orin
Torc Robotics | Torc autonomy | NVIDIA DRIVE Orin
Waabi | Waabi Driver | NVIDIA DRIVE Thor
Wayve | Wayve AI Driver | NVIDIA DRIVE Orin

Robovan AV platforms

Last-mile delivery vans run mid-scale inference stacks that balance cost efficiency with safety. Examples include Nuro and Udelv.

Robovan | Autonomous driver | Chip platform
Neolix | Neolix autonomy |
Nuro | Nuro Driver | NVIDIA DRIVE Orin
Tesla | Autopilot | Tesla FSD
Udelv | Udelv Transporter |
WeRide | WeRide One | NVIDIA DRIVE Thor

Autonomous tech platforms

Generalized autonomy stacks licensed across multiple form factors.

Brand | Product | Chip platform
Aurora | Aurora Driver | NVIDIA DRIVE Orin
Avride | Avride Driver |
Baidu | Apollo Go | Kunlun AI
Cartken | |
Coco | |
DiDi | | NVIDIA DRIVE Orin
Einride | Einride autonomy | NVIDIA DRIVE Orin
Gatik Carrier | Gatik autonomy | NVIDIA DRIVE Orin
Kodiak | Kodiak Driver | NVIDIA DRIVE Orin
Motional | Motional | NVIDIA DRIVE Orin
Neolix | Neolix autonomy |
Nuro | Nuro Driver | NVIDIA DRIVE Orin
Plus | Plus.ai SuperDrive | NVIDIA DRIVE Thor
Pony | PonyPilot | NVIDIA DRIVE Orin
Serve | |
Tesla | Cybercab | Tesla FSD
Tesla | Autopilot | Tesla FSD
Torc Robotics | Torc autonomy | NVIDIA DRIVE Orin
Udelv | Udelv Transporter |
Waabi | Waabi Driver | NVIDIA DRIVE Thor
Waymo | Waymo Driver | NVIDIA
Wayve | Wayve AI Driver | NVIDIA DRIVE Orin
WeRide | WeRide One | NVIDIA DRIVE Thor
Zoox | Zoox ADP | NVIDIA DRIVE Orin