
The 5 levels of Sensor Smartness

As Machines become Smarter, they also need to become Safer, which requires much better perception capabilities.

But all sensing devices are not created equal.

We introduce here a smartness scale for sensors, ranging from the most basic approaches to the most sophisticated:

 

  • S1 – Only Points (Raw data)

    This is the current situation in most LiDARs, where the devices deliver basic information, typically X,Y,Z and intensity. In some cases, such as FMCW LiDAR, they can also provide the axial component of the Velocity (not to be confused with the full Velocity vector).

    The output of this kind of sensor is typically preferred in Centralised Sensor Fusion approaches (Big-Brain-Dumb-Sensors).

    They provide low-level data: it’s up to the user to go from data to information using a separate piece of software.

    Because of the difficulty of using raw data effectively, only a relatively small number of organisations and academic groups have the knowledge to leverage its full potential, which limits the addressable market of these sensors.
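
    To make the abstraction level concrete, here is a minimal sketch of what consuming an S1 point stream might look like. The record layout, field names and the helper function are illustrative assumptions, not any specific vendor's format:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical S1 point record: exact fields and units vary by vendor.
@dataclass
class RawPoint:
    x: float                                  # metres, sensor frame
    y: float
    z: float
    intensity: float                          # return-signal strength
    radial_velocity: Optional[float] = None   # m/s, FMCW LiDAR only

def points_within(points: List[RawPoint], max_range_m: float) -> List[RawPoint]:
    """Even a trivial range filter is left to the user to build on top
    of the raw X,Y,Z data -- the low-level work that S1 leaves open."""
    return [p for p in points
            if (p.x ** 2 + p.y ** 2 + p.z ** 2) ** 0.5 <= max_range_m]
```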

      
  • S2 – Only Objects 

    In the case of Radar and some LiDARs, the output can be at a higher level of abstraction, where points are clustered to form Objects and the information about their position, speed and size is delivered to a central processing unit. In some cases the objects can also be classified, e.g. Pedestrian, Car, Bicycle.

    The object output is preferred in High-level Sensor Fusion approaches. The abstraction level is high, but all the information contained in the raw data beyond the object itself is lost.

    The simplicity of the output makes this kind of sensor much easier to adopt, but such sensors are often limited to well-defined use cases and conditions.
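
    For comparison, an S2 object message might look like the sketch below; the structure and field names are assumptions, and note that the points behind the object are no longer part of the message:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical S2 object message: the raw points behind it are discarded.
@dataclass
class DetectedObject:
    position: Tuple[float, float, float]    # (x, y, z) in metres
    velocity: Tuple[float, float, float]    # (vx, vy, vz) in m/s
    size: Tuple[float, float, float]        # bounding box (l, w, h) in metres
    classification: Optional[str] = None    # e.g. "Pedestrian", "Car", "Bicycle"
```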

 

  • S3 – Meaningful Points 

    Each individual point is classified by itself, so only meaningful output is delivered, depending on the task at hand.

    This includes, for example, actionable categories for ADAS and Self-driving cars:

    [Figure: Actionable data]

    This low-level yet information-rich output is preferred in the Fused Sensing approach, as it provides a higher level of information than S1 and S2 sensors while minimising the bandwidth and computing resources required by the decision-making processes that rely on this data as input.
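
    As a sketch of why this is efficient, per-point labels let the consumer forward only task-relevant points; the label field and class names below are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import FrozenSet, List

# Hypothetical S3 record: a raw point enriched with a per-point class label.
@dataclass
class LabelledPoint:
    x: float
    y: float
    z: float
    label: str  # e.g. "road", "lane_marking", "pedestrian", "vegetation"

def select_actionable(
    points: List[LabelledPoint],
    wanted: FrozenSet[str] = frozenset({"pedestrian", "lane_marking"}),
) -> List[LabelledPoint]:
    """Keep only the classes the current task cares about -- this is how
    S3 output cuts downstream bandwidth and compute."""
    return [p for p in points if p.label in wanted]
```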

    However, the point-wise information can only be leveraged by a relatively small number of companies and academic groups that have the knowledge to work at the point abstraction level.

    The next generation of S3 sensing devices includes the OEM version of Outsight's 3D Semantic Camera.

  • S4 – Meaningful Objects (Measured behaviour)

    The information from the classified points is clustered to provide classified objects.

    The object clustering variables are configurable, which means that only meaningful objects are created depending on the task at hand, for example (see the configuration sketch after this list):
     
    • Only moving and movable objects on the drivable road and the sidewalk.
    • Only Black Ice, Snow and Oil and only if they are on my lane.
    • Only the markings on the road.
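
    Such task-dependent rules could be expressed as clustering profiles, as in this purely illustrative sketch (the schema and key names are assumptions, not a real configuration format):

```python
# Illustrative clustering profiles for the tasks listed above; the schema
# and key names are assumptions, not a real product API.
CLUSTERING_PROFILES = {
    "urban_driving": {
        "classes": ["pedestrian", "car", "bicycle"],
        "zones": ["drivable_road", "sidewalk"],
        "moving_or_movable_only": True,
    },
    "hazard_detection": {
        "classes": ["black_ice", "snow", "oil"],
        "zones": ["ego_lane"],
    },
    "lane_keeping": {
        "classes": ["lane_marking"],
    },
}
```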
        

    Because the information at the object level, such as Velocity, Position, Size and Trajectory, is kept over time in a consistent 3D coordinate system (3D SLAM on Chip), the result is an object behaviour that can then directly feed a decision-making process.
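
    A minimal sketch of how object state kept in a world-fixed frame accumulates into measured behaviour; this is a hypothetical track structure, not Outsight's actual implementation:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ObjectTrack:
    object_id: int
    classification: str
    # (timestamp in seconds, (x, y, z) in a world-fixed frame from SLAM)
    history: List[Tuple[float, Vec3]] = field(default_factory=list)

    def update(self, t: float, position: Vec3) -> None:
        self.history.append((t, position))

    def velocity(self) -> Vec3:
        """Finite-difference velocity from the two most recent poses
        (assumes the track has received at least two updates)."""
        (t0, p0), (t1, p1) = self.history[-2:]
        dt = t1 - t0
        return tuple((b - a) / dt for a, b in zip(p0, p1))
```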

    The next generation of S4 sensing devices includes the most advanced version of Outsight's 3D Semantic Camera.

 

  • S5 – Predicted behaviour

    The most advanced Smart Machines will include embedded prediction capabilities, for seamless system integration.

    "The best qualification of a prophet is to have a good memory. "

    --Marquis of Halifax,

    Only a good understanding of the past and the present can deliver robust predictions: Outsight's 3D Semantic Camera running SLAM on Chip(R) constantly keeps a consistent record of past events and data, which enables both quicker and better predictions.
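
    As a closing illustration of the quote above, a remembered trajectory is enough for even the simplest forecast. This sketch extrapolates the hypothetical ObjectTrack from the S4 section under a constant-velocity assumption; it is purely illustrative and says nothing about Outsight's actual prediction method:

```python
from typing import Tuple

def predict_position(track: "ObjectTrack", horizon_s: float) -> Tuple[float, ...]:
    """Extrapolate an object's position horizon_s seconds ahead from its
    measured velocity -- a 'good memory' turned into a forecast."""
    _, last_pos = track.history[-1]
    vel = track.velocity()
    return tuple(p + v * horizon_s for p, v in zip(last_pos, vel))
```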