
Integrating Sensor Fusion

April 30, 2020

Ceva

Technology is becoming ever more integrated into our lives, moving from devices that inform and entertain us to devices that help us see, hear, feel, and soon react to the world around us. This shift depends on sensors that extend our eyes, ears, and decision-making ability.

From automotive, home automation, audio, and smart cities to robotics, sensing is the first step toward contextually aware devices that gather information, analyze it, and react appropriately. Inertial measurement units, miniature microphones and speakers, cameras, radar, and LIDAR units each provide the information on which we build our smart environment.

But these sensors alone are not that smart; they just generate raw data. The smarts sit behind the sensor, in intelligent processing that may well be built into the same device. That raw data must be processed and packaged to become actionable.

There’s a pedestrian in front of the car, or another car is approaching fast on your left. Wearing your VR headset (Figure 1), you’re looking straight ahead while pointing up. You tap twice on your left earbud to start a phone call, or speak a command, picked up and recognized by the earbuds, to switch to another track. There are many more examples, such as wearable eHealth devices; who knows what clever innovations will emerge for COVID-19 symptom detection and contact tracing? All of these capabilities depend on that conversion of raw data to intelligence.

Fig. 1: Man and woman interacting with a virtual environment

There’s a tradeoff in how this is done. We could pump all the raw data up to the cloud or a big central processor and let high-horsepower machine learning handle the reduction. That idea falls apart quickly on power, bandwidth, latency, and quality of service. Better to do the intelligent reduction close to the sensor or sensors. It would be better still if the solution also employs wireless communication, because in many applications, wireless is the next step to get to a gateway or the cloud.
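A quick back-of-envelope calculation shows why. The data rates below are illustrative assumptions (a single 1080p camera, a 6-axis IMU, and two microphones), not measurements from any particular device, but the gap between raw streams and reduced events is the point:

```python
# Back-of-envelope comparison of raw sensor bandwidth vs. a reduced
# event stream. All figures are illustrative assumptions.

camera_bps = 1920 * 1080 * 30 * 12   # 1080p @ 30 fps, 12 bits/pixel raw
imu_bps = 1_000 * 6 * 16             # 6-axis IMU, 1 kHz, 16-bit samples
audio_bps = 16_000 * 16 * 2          # two 16 kHz, 16-bit microphones

raw_bps = camera_bps + imu_bps + audio_bps
event_bps = 10 * 64 * 8              # ~10 compact event records/s, 64 bytes each

print(f"raw sensor streams: {raw_bps / 1e6:8.1f} Mbit/s")
print(f"reduced events    : {event_bps / 1e3:8.2f} kbit/s")
print(f"reduction factor  : {raw_bps / event_bps:,.0f}x")
```

Shipping hundreds of megabits per second off-device costs power and bandwidth and adds latency; shipping a few kilobits per second of distilled events does not.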

Sensing Hubs

There is an increasing need for a single intelligent hub, since contextual awareness generally requires input from multiple sensors. Visual sensing alone may combine camera images, time-of-flight data, and structured light, i.e., grids or bars projected on a scene to recover depth and surface information; add to that radar and LIDAR data, motion sensing through gyroscopes, accelerometers, and magnetometers, and audio through one or more microphones. The hub has to take all of these streams in on a common footing, as the sketch below suggests.
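Here is a minimal sketch of one way a hub might receive heterogeneous sensor data on a single timestamped stream. The modality names and payload sizes are illustrative assumptions, not any particular hub's interface:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class SensorSample:
    timestamp_us: int   # common timebase, essential for fusing modalities
    modality: str       # e.g. "imu", "camera", "tof", "radar", "mic"
    payload: bytes      # raw reading, decoded by a per-modality stage

hub_queue: "Queue[SensorSample]" = Queue()

# Each sensor driver pushes onto the same queue; the hub pops and dispatches.
hub_queue.put(SensorSample(1_000_000, "imu", b"\x00" * 12))
hub_queue.put(SensorSample(1_000_333, "mic", b"\x00" * 256))

while not hub_queue.empty():
    sample = hub_queue.get()
    print(sample.timestamp_us, sample.modality, len(sample.payload), "bytes")
```

The shared timebase is the key design point: fusion is only as good as the alignment of the samples being fused.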

Intelligent reduction starts per sensor: beamforming, echo cancellation, and more for the microphones, for example. Next comes fusion, i.e., deriving spatial position and orientation from the motion sensors, as sketched below. Finally comes contextual awareness, so the system can determine proximity to a product you might want to buy, or provide simultaneous localization and mapping (SLAM) guidance to a robot that must avoid obstacles in its path.
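To make that fusion step concrete, here is a minimal sketch of one classic technique: a complementary filter that blends gyroscope and accelerometer readings into a single pitch estimate. The sample rate, drift value, and blend factor ALPHA are illustrative assumptions, not figures from any particular IMU:

```python
import math

ALPHA = 0.98   # trust the gyro short-term, the accelerometer long-term (assumed)
DT = 0.01      # 100 Hz sample period (assumed)

def fuse_pitch(pitch_prev, gyro_rate, accel_x, accel_z):
    """Return a new pitch estimate (radians) from one IMU sample."""
    gyro_pitch = pitch_prev + gyro_rate * DT    # integrate angular rate (drifts)
    accel_pitch = math.atan2(accel_x, accel_z)  # gravity direction (noisy)
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

# Example: device held still and level, gyro reporting a small drift.
pitch = 0.0
for _ in range(100):
    pitch = fuse_pitch(pitch, gyro_rate=0.01, accel_x=0.0, accel_z=9.81)

# The gyro alone would have drifted to ~0.57 deg after 1 s; the
# accelerometer term bounds the error.
print(f"fused pitch after 1 s: {math.degrees(pitch):.2f} deg")
```

Neither sensor is trustworthy on its own; the value comes from combining them, which is the essence of fusion at every scale.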

Business Wire projects almost a 19% CAGR in the sensor fusion market from 2018 to 2023, citing rising integration in smartphones and wearables as a major growth driver. GM Insights projects similar growth, extending the opportunity to engine control and ADAS in automotive, plus medical, military, and industrial applications including robotics. Right now, they see the fastest growth in the Asia-Pacific region, driven particularly by consumer and automotive demand.

Technical Demands on a Hub

Given this wide range of applications, what is needed from a hub processor to support them all? First, a range of processors able to handle multiple sensors in parallel for voice, imaging, inference processing, and SLAM. A simpler hub might manage surveillance (video and audio), sport cameras, smart speakers, and earbuds, all of which need to be very power efficient. In the mid-range, drones, VR/AR, and home robotics may need floating-point support; at the high end, robotics and AI for complex language processing demand more neural-net capacity.

A hub will also need strong software support. First, it must be able to take trained networks in any of the standard formats and map and optimize them for the hardware you have built; a minimal sketch of that step follows below. Second, it needs a robust software library with strong support for voice pickup, acoustic echo cancellation and adaptive noise cancellation, trigger word/phrase and command recognition, computer vision, and SLAM, together with all the usual vision support libraries.
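As an illustration of the "standard format" half of that first requirement, here is how a trained network exported to ONNX might be loaded and exercised with the open onnxruntime package. The file name model.onnx is hypothetical, and a real hub toolchain would additionally quantize and compile the graph for its own DSP or NPU rather than run it on a generic runtime:

```python
import numpy as np
import onnxruntime as ort

# Load a trained network from the ONNX interchange format.
session = ort.InferenceSession("model.onnx")  # hypothetical model file
input_meta = session.get_inputs()[0]

# Build one input tensor matching the model's declared shape,
# substituting 1 for any dynamic (non-integer) dimensions.
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy = np.random.rand(*shape).astype(np.float32)

outputs = session.run(None, {input_meta.name: dummy})
print("output tensor shape:", outputs[0].shape)
```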

Conclusion

There is significant opportunity in integrating sensing control in common processing hubs for close-to-the-sensors processing, for fusion, and for contextual awareness. The idea that all this heavy lifting should be done in the cloud or some heavy-duty central intelligence processor is already outdated. Nor does this mean we should push all sensing intelligence to the extreme edge.

To get the full benefits of fusion and contextual awareness, we need to distribute intelligence intelligently. And if the hub cannot be wired into a network, it needs wireless support. Sensor hubs, with an option for wireless communication, are the logical way to meet that goal.


Published on Sensors Daily
