
How Many Sensors Are Required for Autonomous Driving?

Author: Adrian | September 10, 2025

Overview

Sensors used in vehicles range widely in cost, roughly from $1 to $15 per unit, and include cameras, lidar, radar, ultrasonic sensors, and thermal sensors. No single sensor type suffices, because each has inherent limitations. Sensor fusion, which combines data from multiple sensor types, is therefore critical for safe autonomous driving. All L2 or higher vehicles rely on sensors to perceive the environment and perform functions such as lane centering, adaptive cruise control, emergency braking, and blind-spot warnings. OEMs are adopting very different designs and deployment strategies.

Recent OEM Approaches

In May 2022 Mercedes-Benz introduced the first production vehicle in Germany capable of L3 automated driving. L3 capability is offered as an option on the S-Class and EQS, with a planned US rollout. The Drive Pilot package extended the existing driver-assist suite (radar and cameras) with additional sensors including lidar, an advanced stereo camera at the windshield, a multifunction camera at the rear window, microphones for detecting emergency vehicles, and a humidity sensor in the front cabin. In total, about 30 sensors were installed to capture data required for safe L3 operation.

Tesla has taken a different route. In 2021 it announced a vision-based autonomy strategy for Model 3 and Model Y, later applied to Model S and Model X, and removed radar from the sensor suite.

Sensor Limitations

One of the main challenges in current autonomous vehicle design is that each sensor type has inherent limitations. Achieving safe autonomy typically requires sensor fusion. The key issues are not only the number, type, and placement of sensors, but also how AI and machine learning interact with those sensors to analyze data and produce optimal driving decisions.

Thierry Kouthon, product manager for safety IP technology, notes that autonomous driving heavily uses AI techniques. Vehicles must exhibit environment awareness comparable to or better than human drivers. They must identify other vehicles, pedestrians, and roadside infrastructure and determine their precise locations. That requires deep-learning techniques for robust pattern recognition (particularly vision-based object recognition) and for route planning so the vehicle can calculate optimal trajectories and speeds. Lidar and radar provide distance information that is essential for accurately reconstructing the vehicle environment.

Each sensor type has trade-offs. Cameras excel at object recognition but provide poorer distance information and demand significant compute for image processing. Lidar and radar supply accurate range data but with lower spatial clarity; lidar can also struggle in adverse weather. Sensor fusion, which combines inputs from multiple sensors to improve environmental understanding, remains an active research area.
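As a rough illustration of the fusion idea, the sketch below combines a camera-derived range estimate (good at classification, noisier distance) with a radar range measurement (tighter distance) using inverse-variance weighting. The noise values and object are hypothetical; production systems run far more elaborate filters (Kalman or particle filters) over many tracked targets.

```python
from dataclasses import dataclass

@dataclass
class RangeEstimate:
    distance_m: float   # estimated distance to the object
    variance: float     # assumed measurement noise (hypothetical values)

def fuse_ranges(camera: RangeEstimate, radar: RangeEstimate) -> RangeEstimate:
    """Combine two independent range estimates by inverse-variance weighting.

    This is the single-measurement core of Kalman-style sensor fusion:
    the less noisy sensor gets more weight in the fused estimate.
    """
    w_cam = 1.0 / camera.variance
    w_rad = 1.0 / radar.variance
    fused_distance = (w_cam * camera.distance_m + w_rad * radar.distance_m) / (w_cam + w_rad)
    fused_variance = 1.0 / (w_cam + w_rad)
    return RangeEstimate(fused_distance, fused_variance)

# Camera sees a pedestrian at ~42 m (noisy), radar reports 40.2 m (tighter).
camera_obs = RangeEstimate(distance_m=42.0, variance=4.0)
radar_obs = RangeEstimate(distance_m=40.2, variance=0.25)
print(fuse_ranges(camera_obs, radar_obs))   # fused estimate lands close to the radar value
```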

How Many Sensors Are Needed?

There is no simple answer to how many sensors an autonomous system needs. OEMs are still exploring the question, and requirements vary by use case—for example, long-haul trucks on open roads have very different needs than urban robotaxis. Amit Kumar, director of product management and marketing, explains that each OEM designs an architecture to provide better spatial positioning, longer range, higher visibility, object identification and classification, and differentiation among object types. The required sensor count also depends on the autonomy level the manufacturer aims to enable. In short, a minimal configuration for partial autonomy might involve 4 to 8 diverse sensors, while full autonomy in current practice often uses 12 or more sensors.

Kumar cites Tesla as an example with around 20 sensors (8 cameras plus 12 or so short-range ultrasonic sensors) and no lidar or radar, reflecting confidence in computer vision for L3-like capabilities. Other companies differ: Zoox has implemented four lidar units combined with cameras and radar for a fully driverless vehicle intended to operate on well-mapped routes. Nuro’s delivery vehicle uses a 360-degree camera system with four sensors, a 360-degree lidar, four radar sensors, and ultrasonic sensors.
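To make the differences concrete, the snippet below records those suites as simple configuration objects and counts the sensors that are actually stated. The dataclass itself is only illustrative, and fields left as None reflect counts the text does not give (Zoox's cameras and radar, Nuro's ultrasonic sensors).

```python
from dataclasses import dataclass

@dataclass
class SensorSuite:
    """Illustrative per-vehicle sensor counts; None means no figure is stated."""
    name: str
    cameras: int | None = None
    lidar: int | None = None
    radar: int | None = None
    ultrasonic: int | None = None

    def known_total(self) -> int:
        # Sum only the counts that are actually stated and non-zero.
        return sum(v for v in (self.cameras, self.lidar, self.radar, self.ultrasonic) if v)

suites = [
    SensorSuite("Tesla (vision-based)", cameras=8, lidar=0, radar=0, ultrasonic=12),
    SensorSuite("Zoox robotaxi", lidar=4),                      # camera and radar counts not stated
    SensorSuite("Nuro delivery", cameras=4, lidar=1, radar=4),  # ultrasonic count not stated
]

for suite in suites:
    print(f"{suite.name}: at least {suite.known_total()} sensors")
```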

Chris Clark, senior manager for automotive software and safety, emphasizes that required sensor count is tied to acceptable organizational risk and to the application. Robotaxis need both exterior sensors for road safety and interior sensors to monitor passengers. Urban deployments demand higher sensor density than highway vehicles, which operate with longer ranges and larger reaction spaces. There is no fixed rule that mandates an exact combination of sensor types to cover all autonomous vehicle scenarios.

Use cases drive sensor selection. For robotaxis, lidar and conventional cameras plus ultrasonic or radar are likely necessary because of high urban density. V2X sensors may be added so incoming infrastructure or vehicle data can be correlated with onboard perception. For highway trucking, ultrasonic sensors are less useful, while front- and rear-facing long-range sensors, lidar, and radar become more important for distance and range considerations.
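A hypothetical selection helper along those lines might look like the sketch below. The mapping simply encodes the reasoning in the preceding paragraph (dense urban robotaxis versus highway trucking); it is not any OEM's actual decision logic, and the category names are placeholders.

```python
def recommended_sensors(use_case: str) -> list[str]:
    """Map a deployment scenario to a coarse sensor mix, following the reasoning above.

    The categories and choices are illustrative, not a production requirement set.
    """
    if use_case == "urban_robotaxi":
        # Dense traffic and pedestrians: near-field coverage plus V2X correlation.
        return ["lidar", "camera", "ultrasonic", "radar", "v2x_receiver", "interior_camera"]
    if use_case == "highway_truck":
        # Long ranges and closing speeds matter more than near-field coverage.
        return ["long_range_camera_front", "long_range_camera_rear", "lidar", "radar"]
    raise ValueError(f"unknown use case: {use_case}")

print(recommended_sensors("urban_robotaxi"))
```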

Another factor is the level of local analysis performed at the sensor. Early local processing by a lidar unit, for example, can reduce the amount of data sent for centralized sensor fusion, lowering overall compute requirements and system cost. Without such local processing, the vehicle must provide additional compute resources, either through an integrated computing environment or dedicated ECUs focused on sensor-grid partitioning and analysis.
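As a rough sketch of what such local pre-processing could look like, the function below crops a lidar point cloud to a region of interest and voxel-downsamples it before anything is sent to the central fusion computer. The numpy-based voxel grid, range threshold, and voxel size are assumptions for illustration, not parameters from a real sensor.

```python
import numpy as np

def preprocess_pointcloud(points: np.ndarray, max_range_m: float = 80.0,
                          voxel_size_m: float = 0.2) -> np.ndarray:
    """Reduce a raw lidar scan locally before transmitting it for central fusion.

    points: (N, 3) array of x, y, z coordinates in meters (sensor frame).
    Returns one representative point per occupied voxel within max_range_m.
    """
    # 1. Drop returns beyond the range of interest to cut the data volume.
    ranges = np.linalg.norm(points, axis=1)
    points = points[ranges < max_range_m]

    # 2. Voxel downsample: keep the first point falling into each voxel cell.
    voxel_ids = np.floor(points / voxel_size_m).astype(np.int64)
    _, keep_idx = np.unique(voxel_ids, axis=0, return_index=True)
    return points[np.sort(keep_idx)]

# A synthetic 100k-point scan shrinks to a fraction of its size after preprocessing.
scan = np.random.uniform(-120, 120, size=(100_000, 3))
print(len(preprocess_pointcloud(scan)), "points kept of", len(scan))
```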

Cost Considerations

Sensor fusion can be expensive. Early multi-unit lidar systems cost many tens of thousands of dollars due to mechanical components; modern costs are much lower, with some manufacturers projecting unit prices could reach $200–$300 in the future. Emerging thermal sensor technologies are still in the thousands of dollars range. Overall, OEMs face pressure to reduce sensor deployment costs. Using more cameras instead of lidar systems can help lower manufacturing costs.

David Fritz, vice president of digital industrial software for hybrid and virtual systems, notes that the minimum sensor count depends on the use case. Some anticipate that complex smart-city infrastructure could reduce reliance on vehicle-mounted sensors in urban environments. Vehicle-to-vehicle communication could also affect onboard sensor needs. However, external infrastructure can become unavailable due to power failures or network outages, so vehicles will always require some onboard sensing in both urban and rural scenarios.

Many current designs use eight external cameras to provide a 360-degree view plus several interior cameras. Properly calibrated stereo cameras at the front can provide low-latency, high-resolution depth perception, reducing dependence on radar. Key information from these cameras is passed to a central compute system for control decisions. If infrastructure or other vehicle data are available, that information is fused with onboard sensors to generate a more complete 3D view and improve decision making. Interior cameras are used for driver monitoring and to detect left-behind objects. A low-cost radar can be a useful supplement for severe weather such as heavy fog or rain; lidar has not been universally adopted.
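Depth from a calibrated stereo pair follows the standard triangulation relation Z = f * B / d (focal length times baseline over pixel disparity). The sketch below applies it to a disparity map; the focal length and baseline values are placeholders, not figures from any specific vehicle.

```python
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Convert a stereo disparity map to metric depth using Z = f * B / d.

    disparity_px: per-pixel disparity in pixels (0 where no match was found).
    focal_px:     camera focal length in pixels (from calibration).
    baseline_m:   distance between the two camera centers in meters.
    """
    depth = np.full_like(disparity_px, np.inf, dtype=np.float64)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Placeholder calibration: 1000 px focal length, 12 cm baseline.
disparity = np.array([[20.0, 5.0], [0.0, 2.5]])
print(disparity_to_depth(disparity, focal_px=1000.0, baseline_m=0.12))
# 20 px of disparity -> 6 m, 2.5 px -> 48 m; zero disparity stays "infinitely far".
```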

In some cases, lidar performance can be affected by echoes and reflections. While early prototypes relied heavily on GPU processing of lidar point clouds, recent architectures favor high-resolution, high-FPS cameras in distributed architectures to better optimize data flow across the system. Optimizing sensor fusion is complex: OEMs use functional testing and simulation and modeling tools from vendors such as Ansys and Siemens to evaluate different sensor combinations and achieve the desired performance.

Enhanced Technologies and Future Design

Augmenting technologies—V2X, 5G, high-definition digital maps, and GPS—could enable autonomous systems that rely less on vehicle-mounted sensors. However, improving and deploying these technologies requires industry-wide support and smart-city development.

Frank Schirrmeister, VP of IP solutions and business development, observes that different augmentation technologies serve different purposes and are often combined to create safe and convenient experiences. For example, digital-twin map information for path planning can improve safety under limited visibility. V2V and V2X can supplement local onboard information, increasing redundancy and providing more data points to support safe decisions.

Real-time collaboration between vehicles and roadside infrastructure will require ultra-reliable low-latency communication (URLLC). These requirements have led to AI applications for traffic prediction, 5G resource allocation, and congestion control. AI can also optimize and reduce the network infrastructure load of autonomous driving. OEMs are expected to use software-defined vehicle architectures in which ECUs are virtualized and updated wirelessly. Digital twins are critical for testing software and updates in cloud simulations that closely mirror real vehicles.

Conclusion

When deployed, an L3 autonomous system could require 30+ sensors, or a dozen or more cameras, depending on OEM architecture. There is no consensus yet on which approach is universally safer or whether urban sensor systems can match the safety levels achievable on highways. As sensor costs decline, new sensor types may become feasible additions to improve performance in adverse weather. However, it may take OEMs a long time to standardize on a sensor count considered sufficient for safety across all conditions and extreme scenarios.