
SLAM and Multi-Sensing Robots: Soon in Every Home

September 24, 2020

Ceva

Robotics, for many of us, is still an application mostly limited to the factory or warehouse floor. We see media coverage of personal robot assistants and of robots playing a bigger role in hospitals, all of which sound like good ideas but can still seem aspirational rather than near-term. Yet there are practical applications already taking off. Robots in the home are performing basic chores today. Delivery drones are starting to make an appearance. And an imaginative Danish company is attacking Covid-19 with robotic ultraviolet disinfection units, now operating in some Chinese hospitals.


Robot applications and market
A crisis is always a good motivator, but convenience continues to be enough for most of us, as long as the price is right. Robot vacuums are already familiar. Robot lawnmowers are now appearing in home and garden stores. You can buy a robot window cleaner or a robot pool cleaner online. There are even robots to monitor home security. The domestic robotics market is estimated to grow to nearly 40 million units by 2025 and to over 100 million units by 2030. Robotic delivery drones are expected to grow to nearly 800,000 units by 2030. Autonomous mobile robots will also have value in factories, supermarkets, and stores for delivery services, stocking, and other applications; this need, too, is expected to reach the millions of units by 2030.


SLAM and navigation
The magic behind all autonomous robots is simultaneous localization and mapping (SLAM). SLAM lets a robot navigate unfamiliar territory on the fly. Unusually these days, SLAM isn't based on machine learning, since aspects of the territory can and will change dynamically (you moved the furniture, or the dog decided to sleep in the middle of the floor). SLAM instead relies on more traditional techniques, especially computer vision, together with some very heavy-duty linear algebra.

To navigate as a robot vacuum would, SLAM needs to estimate the camera trajectory and build a map. It needs the map to estimate the camera trajectory and the trajectory to build the map, hence the need for these steps to run simultaneously. The map is nowhere near exhaustive. It's a very sparse set of points along the path traveled, built through three steps: tracking, mapping, and loop closure. Tracking does the basics, including finding feature points, fitting them to a motion model, and preparing for mapping; all of this uses fixed-point processing and must run at real-time speeds. Mapping runs on a subset of frames and solves systems of linear equations with matrices on the order of a few hundred by a few hundred, in floating point. That's not quite as fast as tracking but still near real time.
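To make the tracking step concrete, here is a minimal sketch of feature-based frame-to-frame tracking using OpenCV. The article doesn't specify an implementation, so the pipeline, the camera matrix K, and the frame inputs are illustrative assumptions rather than CEVA's actual flow, though ORB features are indeed the basis of the ORB-SLAM family mentioned later.

```python
# Minimal sketch of the SLAM tracking step: match ORB features between
# consecutive frames, then recover the relative camera motion.
# Assumes OpenCV (cv2) and two grayscale frames; K is illustrative.
import cv2
import numpy as np

K = np.array([[525.0,   0.0, 320.0],   # fx, cx (illustrative intrinsics)
              [  0.0, 525.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])

def track(prev_frame, curr_frame):
    orb = cv2.ORB_create(nfeatures=1000)            # fast binary features
    kp1, des1 = orb.detectAndCompute(prev_frame, None)
    kp2, des2 = orb.detectAndCompute(curr_frame, None)

    # Hamming-distance brute force suits ORB's binary descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Fit the motion model: essential matrix with RANSAC, then the
    # relative rotation R and (unit-scale) translation t
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t, pts2[mask.ravel() == 1]            # inliers feed mapping
```

Note how everything here is integer pixel coordinates and cheap binary descriptor matching until the final geometric fit, which is why the tracking stage maps well onto fast fixed-point hardware.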

This works very well, but remember that this is all calculation on the fly. The real path and the estimated path will diverge over time, thanks to limitations in the algorithms and to calibration errors and noise in the sensors. Those errors can be corrected when the robot revisits a point it has been to before, in global loop closure. This calculation obviously doesn't need to happen as often, but it must solve systems of linear equations with matrices on the order of a few thousand by a few thousand, in floating point. That takes long enough that it must run in the background.
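Here's a deliberately tiny illustration of why loop closure reduces to linear algebra: a handful of 1-D poses chained by odometry, plus one loop-closure constraint, solved as a least-squares problem. Real systems solve the same kind of system with thousands of unknowns in sparse form; the pose count and measurements below are made up for illustration.

```python
# Toy illustration of loop closure as linear least squares.
import numpy as np

n = 6                                    # poses x0..x5 along a looped path
odom = [1.02, 0.97, 1.01, 0.99, 1.03]    # drifting step measurements

rows, rhs = [], []

# Anchor the first pose at the origin: x0 = 0
a = np.zeros(n); a[0] = 1.0
rows.append(a); rhs.append(0.0)

# Odometry constraints: x[i+1] - x[i] = odom[i]
for i, d in enumerate(odom):
    a = np.zeros(n); a[i], a[i + 1] = -1.0, 1.0
    rows.append(a); rhs.append(d)

# Loop closure: the robot recognizes it is back where it started,
# so x5 - x0 = 0. This contradicts the drifted odometry, and least
# squares spreads the correction across the whole trajectory.
a = np.zeros(n); a[0], a[-1] = -1.0, 1.0
rows.append(a); rhs.append(0.0)

A, b = np.vstack(rows), np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)    # corrected pose estimates, drift redistributed
```

In a real robot the unknowns are full 3-D poses and the constraint matrix is huge but sparse, which is exactly the floating-point background workload described above.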

Multiple sensors and fusion
Visual sensing must be augmented by other forms of sensing, for example proximity or time-of-flight sensors, to avoid bumping into the dog or the TV. Robots can also get stuck on low obstacles, such as a floor-level brace on a chair or a transition between floor and carpet. To manage cases like these, the robot needs a six-axis inertial sensor to detect tilt that might indicate an area where the robot could get stuck and should try a different path. Robots often also include optical flow sensors (like the one that tracks your mouse movements) and more. These additional inputs can refine the accuracy of the SLAM processing, but they have to be calibrated and fused intelligently to actually improve it. Camera-based tracking must also be fused with inertial and other types of sensing so that the robot can continue on a reasonable path when the camera is blinded, for instance when it goes under a bed or a table.
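As one concrete example of lightweight inertial fusion, a complementary filter blends a gyroscope's fast-but-drifting tilt estimate with an accelerometer's noisy-but-drift-free one. This is a generic sketch of the technique, not CEVA's fusion algorithm; the constants, sample data, and threshold are all illustrative.

```python
# Sketch of tilt estimation from a 6-axis IMU via a complementary filter.
import math

ALPHA = 0.98    # weight on the integrated gyro (smooth but drifts)
DT = 0.01       # sample period in seconds (100 Hz)

def update_pitch(pitch, gyro_rate, ax, ay, az):
    """Blend gyro integration with the accelerometer's gravity-derived
    pitch (noisy but drift-free)."""
    gyro_pitch = pitch + gyro_rate * DT                # rad, integrated
    accel_pitch = math.atan2(-ax, math.hypot(ay, az))  # rad, from gravity
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch

# Feed in IMU samples; flag a possible stuck condition on sustained tilt.
pitch = 0.0
for gyro_rate, ax, ay, az in [(0.01, 2.5, 0.0, 9.5)] * 200:  # fake samples
    pitch = update_pitch(pitch, gyro_rate, ax, ay, az)
if abs(math.degrees(pitch)) > 5.0:
    print("tilt detected: likely obstacle, try a different path")
```

The same blend-fast-sensor-with-slow-sensor pattern, scaled up to full visual-inertial fusion, is what keeps the trajectory estimate alive when the camera loses its view.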

Requirements for a SLAM platform
Taken together, that's a lot of sensor processing required to run high-quality SLAM. You can't do it in the cloud; communication latencies would kill effectiveness. The calculations need to run in real time on a purpose-built, low-power platform in the robot: a DSP-based architecture designed to support fusion from multiple sensor types, optimized with dedicated instructions for fast fixed-point SLAM calculations and fast floating-point linear algebra, backed by hardware support for the visual-inertial fusion mentioned above, and supplied with an SDK supporting the widely used ORB-SLAM2 open-source flow.

If you’re thinking about robotics and want to build a high-efficiency, high-accuracy solution, check out our CEVA SensPro architecture.

Published on EEWeb.
