
Fused Sensing vs. Sensor Fusion

There is no perfect sensing technology: 2D Cameras, LiDARs and Radars behave differently across three domains: Space, Time and Frequency (i.e. Wavelength).

Combining the information from each kind of sensor in an efficient and comprehensive way is not only a key challenge for Autonomous Robots like Self-Driving Cars; it is also an enabler for Safer and Smarter human-operated Machines across a broad variety of economic sectors.

The industry's first approach has been to combine the output from different, isolated sensors. This is an emerging discipline called Sensor Fusion.

We introduce a different approach that we call Fused Sensing: a multi-physics sensor that provides a comprehensive perception of the environment in a single device, where the Space and Time data from different spectral bands are blended into a single 3D Image.

This multi-physics perception output is a key enabler of Full Reality Perception, where the sensor itself delivers a rich set of actionable information in real time: not only color and position, but also unprecedented data such as the Full Velocity and the Material composition (Skin, Cotton, Ice, Snow, Plastic, Metal...) of each individual 3D point.

This comprehensive perception capability delivers the key output required to fulfil our mission: bringing Full Situation Awareness to Smart Machines.

 


Fused Sensing vs. Sensor Fusion

 

There is no perfect sensing technology: 2D Cameras, LiDARs and Radars work differently when considering three domains:

  • Space: they have different Fields-of-View and different 2D vs. 3D capabilities.
  • Time: frame-rate, jitter and read-out strategies are often different for each sensor.
  • Frequency (wavelength): each one works on specific bands of the electromagnetic spectrum (visible bands, infra-red wavelengths and radio waves) that interact differently with elements like dust, snow and fog.

[Figure: Sensors across the electromagnetic spectrum]
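As a purely illustrative aid, the three domains can be summarised in a small per-sensor data record. The following Python sketch is our own assumption for illustration only: the field names are hypothetical and the example values are rough orders of magnitude, not a specification of any particular product.

    from dataclasses import dataclass

    @dataclass
    class SensorCharacteristics:
        """Illustrative per-sensor summary along the Space / Time / Frequency axes."""
        name: str
        horizontal_fov_deg: float   # Space: field-of-view
        native_dimensions: int      # Space: 2 for cameras, 3 for LiDAR / Radar
        frame_rate_hz: float        # Time: typical acquisition rate
        wavelength: str             # Frequency: band of the electromagnetic spectrum

    # Example values are rough, typical orders of magnitude only.
    camera = SensorCharacteristics("2D Camera", 120.0, 2, 30.0, "visible (~400-700 nm)")
    lidar  = SensorCharacteristics("LiDAR",     120.0, 3, 10.0, "near infra-red (~900-1550 nm)")
    radar  = SensorCharacteristics("Radar",      90.0, 3, 20.0, "radio (~4 mm at 77 GHz)")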

 

The different capabilities of each sensing technology along these three axes lead to specific strengths and weaknesses across key performance variables and situations:

 

[Figure: Sensor Pros and Cons]


  • Cameras provide high-resolution images, but they are passive: they lack native depth information and depend heavily on lighting conditions, although good progress is being made in this field.

    In a world suited for human beings, the color information provided by a Camera is a critical contribution to the overall Sensor Set capabilities.
  • Radars can measure velocity with great precision but fare poorly when measuring the position of objects due to their low resolution and have trouble detecting static objects. This remains a challenge even after significant innovation in the field of Imaging Radar.

    Snowflakes, raindrops and the water droplets in fog are smaller than the Radar wavelength, so the radio waves "bend" around them, leaving them mostly invisible to the Radar. This makes Radar a good sensor in poor weather conditions.
  • LiDAR can perceive in 3D, but the "performance vs. cost" trade-offs of current technologies are far from suitable for mass production except in simple use cases. Current LiDAR technologies are either high-performing but unscalable (Fiber Laser LiDAR) or affordable but with low performance in terms of Range and Resolution (Diode Laser and VCSEL LiDAR, due to Eye Safety constraints among other reasons).

    However, LiDAR is the only sensor that can provide actual and precise 3D native data, which is required for the highest levels of Situation Awareness.

The industry's first approach has been to combine the output from different, isolated sensors. This is an emerging discipline called Sensor Fusion.

Most Sensor Fusion methods rely on either of the two architectures highlighted below:

  1. Big Brain + Dumb Sensors (aka Low-level fusion): this architecture employs very basic sensors and sends their raw data to a central processing unit, which often relies on Machine Learning to interpret it.

    [Figure: Centralized computing]

  2. Black-Box Smart Sensors (aka High-level fusion): edge computing capabilities embedded in the sensor itself allow it to send only conclusions (i.e. detected objects) over the network.


    [Figure: Edge computing]

    This makes it much easier for the central processing unit to fuse information from different sensors and requires much less network bandwidth (both data flows are sketched below).
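The sketch below is hypothetical: the function and method names (read_raw, detect_objects, merge) are placeholders we introduce for illustration, not a real API. It simply mirrors the two data flows described above.

    # Hypothetical sketch contrasting the two Sensor Fusion architectures.

    def low_level_fusion(sensors, central_model):
        """'Big Brain + Dumb Sensors': ship raw data, interpret centrally."""
        raw_frames = [s.read_raw() for s in sensors]       # high bandwidth on the network
        return central_model.detect_objects(raw_frames)    # heavy central compute

    def high_level_fusion(sensors, central_tracker):
        """'Black-Box Smart Sensors': each sensor detects objects at the edge."""
        object_lists = [s.detect_objects() for s in sensors]  # low bandwidth (conclusions only)
        return central_tracker.merge(object_lists)             # lighter central fusion of objects

The trade-off is visible directly in the sketch: the first variant moves bandwidth and compute to the centre, the second keeps them at the edge and fuses only object-level conclusions.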

Regardless of the advantages of each approach, there are some common drawbacks and specific challenges:

  • Latency: The time needed to perceive the environment and process the captured data is critical. For moving vehicles, the time available to prevent a collision with an obstacle or oncoming object is very limited; for static sensing devices like Surveillance Cameras, latency limits the maximum detectable speed of a tracked object.

    The closer the processing and decision-making software sits to the data acquisition, the lower the latency. As a rule, an Edge computing device should have lower latency than a centralised approach, unless the information available at the edge is limited or insufficient to make a decision (e.g. when redundancy is required for safety).

    A Centralised approach can fuse the data from the different sensors more quickly, but processing all the raw data takes time, and the data from each sensor can arrive at different times and with different frame-rates (see Synchronisation below).
    Latency minimisation in such architectures is still the subject of significant research and development across industry and academia.
     
  • Synchronisation: when several sensors monitor the same scene, the time at which each sensor acquired its data is of paramount importance for correctly extracting and combining the conclusions from their distinct perceptions, because both the vehicle and the surrounding objects can be in motion.


    [Figure: Synchronisation]

    You can only know that two sensors are actually perceiving the same moving object if both are collecting the information at the same time (synchronised) and you also know the relative position between them (Calibration, see below).

    This is still a significant engineering challenge that is even harder to resolve when the sensors are separated and can deliver data with different frame-rates and jitter.
     
  • Calibration: Calibration is to space what Synchronisation is to time, and you need both when combining different perspectives (a short illustrative sketch combining the two follows this list).

    In order to know that two sensors (possibly with different fields-of-view, range and resolution) are actually perceiving the same scenario, you need to know their relative position and orientation.


    [Figure: Calibration and Fields-of-View]

    The calibration of sensors can be a cumbersome process that becomes critical in mass-produced solutions, where after-sales organisations cannot perform highly sophisticated calibration procedures at the system level.

  • Installation/Wiring: the need for different sensors means that each one of them must be installed, wired and powered separately.

    This may not seem like a big issue, but the vehicle harness is the third most expensive component in a car, behind the engine and chassis, so wiring and connector selection is of critical importance to car OEMs. Harnesses consume up to half of the cost of labor for the entire car (Vehicle Electronics, Issue 69, Sept 2019).

    A single sensor that only needs to be wired, powered and installed once can significantly lower production costs.
     
  • Sensor location: when sensors are positioned apart from each other, they will have a different perspective on the vehicle’s environment (parallax) which may lead to a situation in which one sensor can perceive an object, while it is occluded for the other sensor (see Sensor Set for Autonomous Driving by Felix Friedmann).

    This is a challenge for sensor data fusion. For high-level fusion (fusion of objects), there will be contradictory data from the sensors, while for low-level fusion (fusion of raw sensor data), the parallax will result in areas that cannot be matched between the sensors and, consequently, in ‘holes’ in the fused sensor data. These effects are particularly strong for the environment close to the sensors.

    To reduce these issues to a minimum, the different sensor modalities must be collocated, mounted as close as possible to each other, but this is hard to do in mass-produced vehicles, where there is often not enough space to place more than one sensor in the optimal locations.

    A multi-physics single sensor approach changes the situation.
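To illustrate how Synchronisation and Calibration from the list above work together, here is a minimal Python sketch: a detection from sensor B is first shifted to sensor A's timestamp using an estimated velocity (synchronisation), then mapped into sensor A's frame with the extrinsic rotation and translation obtained from calibration. The function name and the numeric values are our own illustrative assumptions, not part of any product.

    import numpy as np

    def align_detection(p_b, v_b, t_b, t_a, R_ab, t_ab):
        """Express a point seen by sensor B in sensor A's frame at sensor A's timestamp.

        p_b        : 3D position measured by sensor B (metres, B's frame)
        v_b        : estimated velocity of the object (m/s, B's frame)
        t_b, t_a   : acquisition timestamps of sensors B and A (seconds)
        R_ab, t_ab : extrinsic calibration (rotation and translation from B's frame to A's)
        """
        p_b_sync = p_b + v_b * (t_a - t_b)   # synchronisation: compensate the time offset
        return R_ab @ p_b_sync + t_ab        # calibration: rigid transform into A's frame

    # A pedestrian walking at 1.5 m/s, observed 50 ms apart by two sensors mounted 0.8 m apart.
    p = align_detection(
        p_b=np.array([10.0, 2.0, 0.0]),
        v_b=np.array([0.0, 1.5, 0.0]),
        t_b=0.000, t_a=0.050,
        R_ab=np.eye(3),
        t_ab=np.array([0.8, 0.0, 0.0]),
    )
    print(p)   # -> approximately [10.8, 2.075, 0.0]

In this toy example, ignoring the calibration offset alone would misplace the point by 0.8 m, and ignoring the time offset by several centimetres: exactly the kind of inconsistency that downstream Sensor Fusion has to resolve.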

 

Fused Sensing

 

Fused Sensing is our contrarian approach to the problem of Sensor Fusion: a multi-physics, self-calibrated and self-synchronised sensor that provides a comprehensive perception of the environment in a single device, where the Space and Time data from different spectral bands are blended into a single 3D Image.

 

[Figure: Fused Sensing]

 

This multi-physics perception output is a key enabler of Full Reality Perception, where the sensor itself delivers a rich set of information in real time: not only color, range and reflectivity, but also unprecedented data such as the Full Velocity and the Material composition (skin, cotton, ice, snow, plastic, metal...) of each 3D point.

Other point-wise classification data includes moving/movable/fixed, vegetation, ground, drivable road, markings and traffic signs:

 

[Figure: Actionable data]
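As a purely illustrative sketch, the per-point output described above can be pictured as a record like the following. The field names and label strings are our own assumptions for illustration, not the actual output format of the device.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class FusedPoint:
        """Hypothetical per-point record from a Fused Sensing device (illustrative only)."""
        position: Tuple[float, float, float]   # 3D position in metres
        color: Tuple[int, int, int]            # RGB color
        reflectivity: float                    # surface reflectivity
        velocity: Tuple[float, float, float]   # Full Velocity vector in m/s
        material: str                          # e.g. "skin", "cotton", "ice", "snow", "plastic", "metal"
        semantics: Tuple[str, ...]             # e.g. ("movable", "vegetation") or ("fixed", "traffic sign")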

 

The Fused Sensing approach changes the equation of current Sensor Fusion challenges:

  • Multi-physics Self-Calibration and Self-Synchronisation are easy to perform on the same device.
  • Latency is minimised, as the fusion and semantics are both done at the point level (and not the object level) in the same device.
  • The single-device approach solves the multi-sensor wiring, installation and sensor-location problems at minimal cost.

To provide only meaningful information while making optimal use of bandwidth and computation, we have created a new Edge Processing paradigm that goes beyond both the limited Black-Box edge computing and the inefficient All-Raw centralised architectures (see the article The 5 Levels of Sensor Smartness).

The comprehensive perception capability obtained by the Fused Sensing approach is a key enabler to fulfil our mission: bringing Full Situation Awareness to Smart Machines.