AI at the edge will reduce overwhelming volumes of data to useful and relevant information on which we can act.
A recent McKinsey report projects that by 2025, the CAGR for silicon containing AI functionality will be 5× that for non-AI silicon. Sure, AI is starting small, but that’s a pretty fast ramp. McKinsey also shows that the bulk of the opportunity is in AI inference and that the fastest growth area is on the edge. Stop and think about that. AI will be growing very fast in a lot of edge designs over the next six-plus years; that can’t be just for novelty value.
Smart assistants in the car cockpit (Source: CEVA)
I would argue that as we push more and more electronics into our homes, cars, offices, cities, hospitals, and infrastructure, we have become the bottleneck in exploiting all of that potential; we can neither manage those volumes of data and bandwidth nor keep up with the conventional interfaces meant to control what these technologies offer. We need data and technology to be condensed, simplified, and delivered in capsule summaries at rates we can digest, and we need control we can exercise through speech and visual recognition rather than through confusing and constantly evolving arrays of buttons and menus. This is where AI is most likely to flourish in the near future: in AI assistants that help us be more effective at what we want to do rather than assuming full control.
Take a perennial AI favorite: intelligence in cars. Someday, we'll see true autonomy, though that may be a little further out than we originally thought. But AI assistants can play important and convenient roles today, and in many cases they already do. You approach the car, and an assistant opens the door for you and adjusts the seat, steering wheel, and mirrors to your personal preferences. Another AI assistant monitors your head position and the direction of your gaze to make sure you're paying attention to the road. You can control the infotainment system through voice commands for navigation, radio, media, and even messaging. These functions may still need some polish to be truly effective, but they don't have to overcome the still-challenging technical, legislative, and social-acceptance barriers to full autonomy; assistants can add value and safety to our drives today or very soon.
Smart AI assistants everywhere
But the real opportunity is in ubiquitous AI assistance, not just in cars but almost everywhere we might automate. Take discretionary purchases such as TVs, appliances, and home security systems. Get rid of the remote, the app, and even the on-fridge touchscreen. Instead, tell the device what you want. Put food in the microwave and have the microwave look at it to figure out power and time settings without any additional input. Have the security cameras around your house notify you only when unusual activity is detected, such as a person near a window but not a dog, cat, or bird. In each case, your interaction should be natural and familiar. An AI assistant should take care of mapping your needs onto the capabilities of the thing you want to control.
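To make the camera example concrete, here is a minimal Python sketch of that kind of on-device filtering. The detector call, class labels, and confidence threshold are hypothetical stand-ins for whatever embedded inference engine a camera would actually run; the point is simply that the decision to notify can be made locally from a handful of labeled detections.

```python
# Illustrative sketch only: detect_objects() stands in for an on-device
# neural-network detector; the labels and threshold are assumptions.

NOTIFY_LABELS = {"person"}     # activity worth an alert; pets and birds are simply ignored
CONFIDENCE_THRESHOLD = 0.6

def detect_objects(frame):
    """Placeholder for the embedded detector; returns (label, confidence) pairs."""
    return [("cat", 0.83)]     # example output for a pet walking by

def should_notify(detections):
    """Alert only when an alert-worthy class is seen with enough confidence."""
    return any(
        label in NOTIFY_LABELS and confidence >= CONFIDENCE_THRESHOLD
        for label, confidence in detections
    )

frame = None                   # stand-in for a captured camera frame
if should_notify(detect_objects(frame)):
    print("Notify: person detected near the house")
else:
    print("No alert: pet, bird, or low-confidence detection ignored")
```

Making this decision on the camera itself keeps routine pet traffic off your phone and off the network.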
These are high-volume applications, devices, and systems of the sort you might expect to find at a Home Depot, Best Buy, or Costco. That is a huge opportunity, but with it comes significant price sensitivity. Maybe you'd pay a 10% premium for this kind of functionality, but you'd be less sure about a 20% premium. A typical microwave runs about $100, a pretty nice TV about $1K to $2K, and a camera-based home security system about $200 to $400. The cost of expensive GPU-based AI assistants would be wildly out of sync with these price levels. Also, fully cloud-based AI that requires a subscription to stream data and maintain huge storage in the cloud further undermines the drive-by shopping appeal of any such device.
Cloud-based AI brings other problems, too: response latency and the connectivity required to deliver AI assistance in many of these devices. That connectivity isn't 100% reliable. Perhaps that's no more than an inconvenience in some cases, but does your home security system stop working when the internet goes down? Does your collision-avoiding drone fly into the nearest tree when a building blocks access to a cell tower? Low-cost though these devices may be, we expect some functions to remain reliable whether or not cloud access is available. A useful minimum of recognition functionality (beyond basic trigger-word detection, for example) must be hosted on the device, independent of any external connectivity.
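As a rough illustration of that last point, and not of any particular vendor's API, the sketch below keeps the core recognition path entirely local and treats the cloud as an optional enhancement; local_recognize() and cloud_enhance() are hypothetical placeholders.

```python
# Sketch of a local-first design (function names are hypothetical): core recognition
# always runs on the device, and cloud processing is a bonus, never a dependency.

def local_recognize(data):
    """On-device recognition; always available, even with no connectivity."""
    return {"command": "arm_security_system", "confidence": 0.9}

def cloud_enhance(result):
    """Optional richer processing in the cloud; may fail during an outage."""
    raise ConnectionError("no internet connection")   # simulate the outage

def handle_input(data):
    result = local_recognize(data)        # never blocked by the network
    try:
        result = cloud_enhance(result)    # nice to have when it works
    except ConnectionError:
        pass                              # degrade gracefully and keep working
    return result

print(handle_input(b"sensor or audio data"))   # still returns a usable result offline
```

With this structure, the security system and the drone keep their essential behavior; the cloud only adds refinement when it happens to be reachable.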
Taking advantage of the next wave of systems innovation requires smart AI assistants at the edge, not just for consumers but also for smart cities, industry, infrastructure, and other domains, each with its own level of price sensitivity and reliability expectations, many not so different from consumer demands. That requires AI at significantly lower cost, embedded in single-chip solutions, able to run at the very low power that many edge applications demand, and able to deliver the product's core functionality independently of a cloud connection. The time has come for smart, cheap, and self-reliant AI assistants.
You can learn more about how CEVA is enabling applications like these on the AI applications page.
Published on EEWeb.