Self-driving car sensors – senses of an autonomous vehicle

The human driver’s mind performs immensely complex computations in the blink of an eye, combining an intricate system of sight, hearing, and motion. To be effective, an autonomous vehicle must be able to do the same, yet using an entirely different set of senses.

Autonomous vehicles, as mentioned in our blog post about AI trends of 2021, are getting attention on both the professional and the consumer markets. According to Ford’s predictions, there will be fully functional autonomous vehicles on British roads by 2022. Initially, the goal was to deliver a driverless car by 2021, but the pandemic hit the automotive market severely and disrupted the business and research processes behind the design.

Autonomous vehicle sensors – the artificial senses

One of the key challenges with autonomous vehicles is providing them with a reliable and accurate perception of reality. The self-driving car system needs to deliver even more accurate information than human senses are capable of. A human driver harnesses all the experience they have gathered over a lifetime, experience that an autonomous vehicle lacks. The vehicle therefore has to make up the difference in other areas.

So how do self-driving cars work when it comes to perceiving reality? The car combines at least three complementary technologies to provide the AI system with proper information. This blog post covers each of them in turn:

Camera

Digital cameras are among the most intuitive and most popular types of sensors used in cars, and not only autonomous ones. The popularity of the technology across multiple other applications has enabled researchers to deliver numerous auxiliary technologies, which have in turn been polished in other segments of the market and through multiple use cases.

Both autonomous cars and their non-autonomous counterparts use multiple cameras, including front, rear, side, and wide-range. As such, the technology is already well established in the automotive industry.

AI Computer Vision

Cameras provide the perfect input for image recognition neural networks. AI-based solutions are used to process various types of images, either to recognize a whole scene (for example the positions of cars) or only a part of it (road sign recognition). 

Due to the versatility and flexibility of AI image recognition technology, the applications are countless, including safety-enhancing solutions such as detecting the driver’s fatigue or intoxication.
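To make this concrete, below is a minimal sketch of running a single camera frame through a pretrained classification network. It uses a stock ResNet-18 from torchvision rather than any production perception stack, and the file name is purely illustrative.

```python
# A minimal sketch: classify one camera frame with a pretrained CNN.
# The model and preprocessing come from torchvision; a real perception
# stack would use detection/segmentation networks, not a classifier.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def classify_frame(path: str) -> int:
    """Return the index of the most probable class for one frame."""
    frame = Image.open(path).convert("RGB")   # e.g. "frame_0001.png"
    batch = preprocess(frame).unsqueeze(0)    # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return int(logits.argmax(dim=1))
```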

Stereoscopic Vision

Another interesting use case is not directly connected with AI image recognition, although it can be supported by it. By comparing two images taken from slightly offset viewpoints (parallax), the system can estimate the distance to objects and their relations in 3D space.
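As a rough illustration, the sketch below derives a depth map from a stereo pair using OpenCV’s block matcher. The focal length, baseline, and file names are assumed values, not real calibration data.

```python
# A sketch of depth-from-parallax with OpenCV block matching.
import cv2
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (assumed calibration)
BASELINE_M = 0.12   # distance between the two cameras in meters (assumed)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Triangulation: depth = focal_length * baseline / disparity.
with np.errstate(divide="ignore", invalid="ignore"):
    depth_m = np.where(disparity > 0,
                       FOCAL_PX * BASELINE_M / disparity,
                       np.inf)
```

The key relation is the last line: the smaller the disparity between the two views, the farther away the object, which is exactly why geometry distortions hurt stereoscopic depth estimation.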

The Camera At A Glance

Strengths

  • Multiple-use cases harnessing different computer vision technologies
  • Passive type of sensor, usable in various environments
  • Easily interpretable to human researchers 
  • Relatively easy access to hardware and software

Weaknesses

  • Can get confused by unexpected input (car on a billboard, odd behavior, etc.)
  • Data-heavy
  • Requires large on-site computing power
  • Technical challenges with real-time analysis
  • Weather and lighting conditions have an impact on performance
  • Affected by geometry distortions – especially challenging when dealing with stereoscopic depth estimation

Lidar

Light Detection and Ranging (Lidar) can be compared to a radar device, yet it uses laser light instead of radio waves. But what is Lidar exactly? Although the technology has been around since the 1970s, it is only now considered a core building block in autonomous vehicle development.

The popularity of the technology is rising significantly. According to the Markets and Markets report, the value of the Lidar market is expected to grow from $1.1 billion in 2020 to $2.8 billion in 2025.

Point Cloud Analysis

Contrary to a camera, Lidar does not deliver an image but 3D point cloud data, with each point marking the exact position where a single ray of the laser was reflected. Depending on the density of points, humans can use their imagination to recognize a shape with greater or lesser difficulty. It can be compared to a “connect the dots” puzzle, but in 3D and without the numbers.

But what comes relatively easily to a human mind is a great challenge for neural networks and computers. Processing a 3D shape and abstracting it into a building wall, a car or a running kid is far more challenging than recognizing the same entity in a camera’s output. 

On the other hand, Lidar delivers real-time and reliable information about the distance and position of every entity the laser beam reaches.
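As a small taste of point cloud processing, here is a sketch of voxel-grid downsampling, a common first step that reduces a raw cloud to one centroid per occupied voxel. The input cloud is random stand-in data, not a real Lidar scan.

```python
# Voxel-grid downsampling: replace all points that fall into the same
# cubic cell (voxel) with their centroid, shrinking the cloud.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """points: (N, 3) array of x, y, z coordinates in meters."""
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    uniq, inverse = np.unique(voxel_ids, axis=0, return_inverse=True)
    inverse = inverse.ravel()          # one voxel index per input point
    sums = np.zeros((len(uniq), 3))
    counts = np.zeros(len(uniq))
    np.add.at(sums, inverse, points)   # accumulate per-voxel sums
    np.add.at(counts, inverse, 1)      # count points per voxel
    return sums / counts[:, None]

cloud = np.random.rand(100_000, 3) * 50.0      # 100k points in a 50 m cube
reduced = voxel_downsample(cloud, voxel_size=0.5)
print(cloud.shape, "->", reduced.shape)
```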

Lidar At A Glance

Strengths 

  • Top-quality distance information
  • Fast and reliable
  • Unaffected by geometry distortions
  • Delivers a point cloud, automatically validating every single point

Weaknesses

  • Affected by weather conditions
  • Hard to process (though there are research works on point cloud analysis as performed by Tooploox)
  • Relatively expensive (yet with prices steadily going down)
  • Energy-consuming
  • Highly reflective materials can confuse the sensor

Radar

Radar is widely used in maritime and air traffic control as well as in military and civilian applications. The device emits radio waves in pulses that hit an object and return to the sensor, allowing it to estimate the object’s relative position and distance. So where other sensors struggle, radar shines.

A radar sensor delivers precise data on position and distance, yet its resolution is not enough to distinguish between multiple types of objects. Radar may spot a car, but it cannot tell whether it is a sports car, a lorry, an ambulance, or a heavily armored troop transport vehicle; to radar, these are basically the same. In autonomous vehicles, then, radar serves as a core sensor, but not a standalone one.
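The underlying math is plain time-of-flight and Doppler arithmetic, as this back-of-the-envelope sketch shows; the delay and frequency figures are illustrative only.

```python
# How a pulse radar turns echo timing and Doppler shift into
# range and relative speed.
C = 299_792_458.0   # speed of light, m/s

def radar_range(echo_delay_s: float) -> float:
    """The pulse travels to the target and back, hence the division by 2."""
    return C * echo_delay_s / 2.0

def radial_speed(tx_freq_hz: float, doppler_shift_hz: float) -> float:
    """Approximate relative speed from the Doppler shift (valid for v << c)."""
    return doppler_shift_hz * C / (2.0 * tx_freq_hz)

print(radar_range(1e-6))            # ~150 m for a 1 microsecond round trip
print(radial_speed(77e9, 14_000))   # ~27 m/s at the 77 GHz automotive band
```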

Radar At A Glance

Strengths

  • Unaffected by weather conditions, including mist, smoke, and heavy rain
  • Long-range 
  • Accurate data
  • Commonly used technology
  • Covers a wide range of terrain

Weaknesses

  • Low resolution
  • Can be affected by some materials or angles
  • Can get confused by certain types of motion, like slow deceleration
  • Delivers no additional information beyond position and motion

Ultrasonic sensors

Ultrasonic sensors work in the same way as radar does, but use ultrasound instead of radio waves. This type of sensor is already common in modern cars, for example in parking sensors.
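The same time-of-flight arithmetic applies here, just at acoustic speeds, with the twist that the speed of sound varies with air temperature. A minimal sketch, using a standard approximation for the speed of sound in air:

```python
# Converting an ultrasonic echo delay into distance.
def sound_speed(temp_c: float) -> float:
    """Approximate speed of sound in air (m/s) at a given temperature."""
    return 331.3 + 0.606 * temp_c

def ultrasonic_range(echo_delay_s: float, temp_c: float = 20.0) -> float:
    """The pulse travels out and back, hence the division by 2."""
    return sound_speed(temp_c) * echo_delay_s / 2.0

print(ultrasonic_range(0.006))   # ~1.03 m for a 6 ms round trip at 20 °C
```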

Ultrasound Sensors At A Glance

Strengths 

  • Delivers accurate short-range data
  • Unaffected by weather conditions
  • Low energy consumption

Weaknesses

  • Short-range
  • Little additional data gathered 

Summary

These sensors deliver the basic, most essential information that driverless cars need to operate in a road traffic environment. With the data gathered by all these tools combined, the autonomous vehicle control system gets reliable and comprehensive information about its surroundings.
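As a deliberately simplified illustration of how such readings can be combined, the sketch below fuses per-sensor distance estimates with inverse-variance weighting. Real systems rely on Kalman or particle filters, and the readings and variances here are made up.

```python
# Naive sensor fusion: weight each sensor's distance estimate by the
# inverse of its variance, so more precise sensors dominate the result.
def fuse(estimates: dict) -> float:
    """estimates maps sensor name -> (distance_m, variance)."""
    weights = {name: 1.0 / var for name, (_, var) in estimates.items()}
    total = sum(weights.values())
    return sum(w * estimates[name][0] for name, w in weights.items()) / total

readings = {
    "lidar": (24.8, 0.01),    # precise distance measurement
    "radar": (25.1, 0.25),
    "camera": (23.9, 1.00),   # stereo depth is the noisiest here
}
print(fuse(readings))   # ~24.8 m, dominated by the Lidar estimate
```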


Obviously, this was not a complete car sensor list. There are multiple other sources of data, as well as auxiliary sensors that further enhance the accuracy of the information the vehicle operates on, with speed and temperature sensors among the most obvious.
