With sensors costing anywhere from 15 USD to 1,000 USD, automakers have been unsure, at least for a while, how many sensors a vehicle needs to be fully autonomous.
Autonomous vehicles rely on sensors such as image, lidar, radar, ultrasonic, and thermal sensors to gather data about the surrounding environment. No single sensor is enough, because each has its own limitations. This is the key driving force behind sensor fusion, which combines multiple types of sensors to enable safe autonomous driving.
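To make the idea concrete, here is a minimal sketch of one classical fusion technique: combining distance estimates from several sensors with inverse-variance weighting, so that more precise sensors count for more. The sensor names and noise figures are illustrative assumptions, not measurements from any real system.

```python
# Minimal sensor-fusion sketch: inverse-variance weighting of distance
# estimates. All sensor variances below are made-up illustrative values.

def fuse_measurements(measurements):
    """Combine (value, variance) pairs into a single fused estimate.

    Each sensor reports a distance and the variance of its noise;
    the fused value weights low-noise sensors more heavily.
    """
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * val for w, (val, _) in zip(weights, measurements)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # fused estimate is tighter than any input
    return fused, fused_var

# Hypothetical readings of the distance (in meters) to the same obstacle:
readings = [
    (25.4, 4.0),   # camera: inexpensive, but noisy depth estimate
    (24.9, 0.25),  # radar: good range accuracy
    (25.0, 0.04),  # lidar: best range accuracy
]
distance, variance = fuse_measurements(readings)
```

The fused estimate lands near the lidar reading, since lidar has the smallest variance, while the camera and radar still contribute redundancy. Production systems use far more elaborate filters (e.g., Kalman-family trackers), but the weighting principle is the same.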
Limitations of sensors used in automotive applications
One of the challenges in autonomous driving design today is the limitations of the different sensors, which is why sensor fusion may be required for safe autonomous driving. The key question is not only the number, type, and placement of sensors, but also how AI/ML technology should interact with them to analyze the data and make the best driving decisions.
Thierry Kouthon, product manager of safety IP technology at Rambus, said, “Autonomous driving generally relies on artificial intelligence. Autonomous driving, even entry-level ADAS functionality, requires vehicles to exhibit environmental awareness comparable to or better than that of human drivers.
First, vehicles, pedestrians, and roadside infrastructure must be recognized and their locations correctly determined, which requires pattern-recognition capabilities that deep learning handles well.
Visual pattern recognition is an advanced deep learning field that vehicles use intensively. In addition, the vehicle must be able to calculate its optimal trajectory and speed at all times, which requires route-planning capabilities that artificial intelligence also handles well. Meanwhile, lidar and radar provide distance information, which is essential to correctly reconstruct the vehicle’s environment.”
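As a small illustration of the route-planning capability the quote refers to, here is a grid-based A* search, a classical planning baseline rather than the learned planners modern vehicles may use. The grid, costs, and coordinates are invented for the example.

```python
# Illustrative route planning with A* on a 4-connected occupancy grid
# (1 = blocked cell, 0 = free). A classical baseline, not a production planner.
import heapq

def a_star(grid, start, goal):
    """Return the shortest path from start to goal as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]  # (priority, cost, cell, path)
    visited = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in visited:
            continue
        visited.add(pos)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                heapq.heappush(
                    frontier, (cost + 1 + h((r, c)), cost + 1, (r, c), path + [(r, c)])
                )
    return None  # no route exists

# A toy map: the middle row is mostly blocked, forcing a detour.
grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
path = a_star(grid, (0, 0), (2, 0))
```

The planner routes around the blocked cells, which is the same decision structure, at toy scale, as planning a trajectory around obstacles detected by the sensor suite.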
How many sensors do we really need?
There is no easy answer to the question of how many sensors an autonomous driving system needs, and OEMs are still working it out. Other considerations matter too: for example, trucks driving on open highways and robotaxis operating in cities have very different needs.
“It’s a tough calculation, because every automotive OEM has its own architecture, which must provide accurate spatial positioning, high visibility, and the ability to recognize and classify objects, and then distinguish between them, to keep the vehicle safe.
It also depends on the level of autonomy the automaker decides to pursue. In short, partial autonomous driving requires at least 4 to 8 different types of sensors, while fully autonomous driving today uses more than 12.”
“The number of sensors required is the number that is acceptable to the company at its level of risk, and it also depends on the application,” said Chris Clark, senior manager of automotive software and security at Synopsys’ Automotive Group. “If you’re developing a robotaxi, you need more than the sensors required for road safety; in-car sensors are also needed to monitor passenger behavior and ensure passenger safety.
In that case, the vehicle operates in densely populated, highly urban areas, whereas a highway-driving car is quite different: it has longer sight lines and more room to react, and on the highway there is less chance of something cutting into the lane. I don’t think there’s a hard and fast rule that says every self-driving car has to have three different types of sensors and three different cameras to cover every angle.”
Cost is always the most important factor
Sensor fusion is expensive. Early on, lidar systems built from multiple components could cost as much as 80,000 USD, largely because of their mechanical parts. Costs are much lower now, and some manufacturers expect them to fall to 200 to 300 USD per unit at some point in the future.
Emerging thermal sensor technology still costs in the thousands of dollars. Overall, OEMs will remain under pressure to reduce the total cost of sensor deployment, and replacing lidar systems with more cameras is one way to cut manufacturing costs.
“The basic definition of safety in an urban environment is the elimination of all avoidable collisions,” said David Fritz, vice president of hybrid virtual systems at Siemens Digital Industries Software. The minimum number of sensors required depends on the use case. Some believe that in the future, smart city infrastructure will become more complex and ubiquitous, reducing the need for on-board sensing in urban environments. Vehicle-to-vehicle communication may also have an impact on sensors.
Fritz noted, “In that scenario the number of onboard sensors may be reduced, but we’re not there yet. Also, in some cases it has to be assumed that the self-driving car will not be able to obtain all external information, so a vehicle always needs its own set of sensors, not only in urban areas but also in rural areas.
Many of the designs we’ve been working on require eight cameras on the outside of the vehicle and a couple on the inside. With two properly calibrated cameras on the front, we can achieve low-latency, high-resolution stereo vision, which provides the depth range of objects and reduces the need for radar. Cameras on the front, rear, and sides of the vehicle give a full 360° view.”
Technology upgrades will improve future sensor designs
Enhanced technologies such as V2X, 5G, advanced digital maps, and GPS in smart infrastructure will enable autonomous driving with fewer on-board sensors. But improving these technologies requires support from the entire automotive industry and the development of smart cities.
Frank Schirrmeister, vice president of IP solutions and business development at Arteris, pointed out, “Various augmentation technologies serve different purposes. Developers often combine multiple purposes to create a safe and convenient user experience.
For example, digital-twin map information used for route planning can create a safer experience in conditions of limited visibility by enhancing local in-vehicle decision-making based on sensor information. V2V and V2X information can complement locally available in-vehicle information to help make safety decisions, increase redundancy, and create more data points on which safety decisions are based.”
In addition, connected vehicles promise real-time collaboration between vehicles and roadside infrastructure, which requires technologies such as ultra-reliable low-latency communications (URLLC).
When eventually implemented, Level 3 autonomous driving could require more than three dozen sensors or a dozen cameras, depending on the OEM’s architecture. But the jury is still out on which is safer, or whether self-driving sensor systems can provide the same level of safe driving in urban environments as they do on highways.
Falling sensor costs over the next few years could open the door for new sensors that can be added to the mix to improve safety in severe weather. However, it may be a long time before OEMs can standardize on a certain number of sensors sufficient to ensure safety under all conditions and extremes.