Single-chip 9-axis sensor with embedded sensor fusion that enables rapid development of sensor-enabled robotics, AR, VR, and IoT devices
The BNO085/BNO086 is a 9-axis System in Package...
OVERVIEW
Listen Like You’re There
Technology continually advances to capture life, with sharper pictures and video, and even 360-degree cameras for an encompassing experience. But to truly feel transported requires immersive audio as well.
When you listen to music with 3D audio, sound cues shift direction based on the orientation of your head, as if you were in the room with the artist. If a zombie approaches in a VR game, 3D audio gives you directional cues so you can quickly spin to face the right way.
Sounds from a real-world source reach each ear at a slightly different time, and your brain interprets these timing differences to determine direction. Recreating these audio cues requires accurate, low-latency head tracking.
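As an illustrative sketch (not from the product documentation), the timing difference described above, the interaural time difference (ITD), can be approximated with the classic Woodworth model; the head radius and speed of sound below are assumed typical values:

```python
import math

SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air at ~20 °C
HEAD_RADIUS_M = 0.0875       # assumed average adult head radius

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth approximation of ITD (in seconds) for a distant source.

    azimuth_deg: source angle from straight ahead, 0..90 degrees.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))
```

A source directly to one side (90 degrees) yields an ITD of roughly 0.65 ms, which is why sub-millisecond differences matter and head-tracking latency has to be kept correspondingly low.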
Sensor Fusion Delivers Music That Moves With You
If your application requires absolute heading, Hillcrest Labs offers 9-axis IMUs with fast magnetometer calibration and advanced magnetic interference rejection. If you only need relative heading, our 6-axis IMUs ignore magnetic interference entirely. Either way, both use our patented MotionEngine™ sensor fusion technology to provide high dynamic accuracy. And to make the experience true to life, our products support output rates up to 1 kHz for the lowest-latency head tracking.
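To make the heading discussion concrete, here is a minimal sketch (generic quaternion math, not vendor code) of extracting yaw, i.e. heading, from a fused unit-quaternion output such as a rotation vector:

```python
import math

def heading_deg(w: float, x: float, y: float, z: float) -> float:
    """Yaw (heading) in degrees, 0..360, from a unit quaternion (w, x, y, z).

    With 9-axis fusion this heading is absolute (magnetometer-referenced);
    with 6-axis fusion it is relative to the orientation at start-up.
    """
    yaw = math.atan2(2.0 * (w * z + x * y),
                     1.0 - 2.0 * (y * y + z * z))
    return math.degrees(yaw) % 360.0
```

The identity quaternion maps to 0 degrees, and a 90-degree rotation about the vertical axis (w = cos 45°, z = sin 45°) maps to 90 degrees.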
Adding Context to Motion
Smooth head tracking is the bare minimum for a 3D audio IMU, but our products offer further features that add value for your customers. With predictive head tracking, movements are anticipated and sound cues are calculated ahead of time, producing a smooth, immersive audio experience. Tap and motion gestures allow users to conveniently switch modes or access features with simple movements.
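The prediction idea can be sketched as a simple forward extrapolation of orientation using the latest gyroscope rate. This constant-rate quaternion integration is a generic illustration, not MotionEngine's actual predictor:

```python
import math

def predict_orientation(q, omega, dt):
    """Extrapolate a unit quaternion q = (w, x, y, z) forward by dt seconds,
    assuming a constant body-frame angular rate omega = (wx, wy, wz) in rad/s
    (e.g. the most recent gyroscope sample)."""
    wx, wy, wz = omega
    rate = math.sqrt(wx * wx + wy * wy + wz * wz)
    if rate == 0.0:
        return q
    # Rotation accumulated over dt, expressed as a delta quaternion.
    half = rate * dt / 2.0
    s = math.sin(half) / rate
    dq = (math.cos(half), wx * s, wy * s, wz * s)
    # Quaternion product q ⊗ dq applies the body-frame rotation.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = dq
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)
```

Rendering audio against the predicted orientation a few milliseconds ahead hides the processing pipeline's latency, at the cost of small errors when the head changes direction abruptly.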
Built-in activity classifiers tell whether a user is sitting, standing, or walking, which can drive context-based modes. Imagine the volume lowering gradually as you stand up from a chair, letting you better observe the outside world. As a user moves around, a 3D audio device may drift from its reference heading. By combining activity classification with additional logic, the headset could re-center itself once the user has been seated for a while. MotionEngine can also determine when a device is not in use and switch to a power-saving mode.
Maintaining Accuracy Over Time
Any list of capabilities is useless if the core sensor performance breaks down. That’s why we developed algorithms that dynamically compensate for changes in temperature and the effects of aging.
With over 15 years of studying sensors and sensor fusion, we have built up expertise that would cost your company significant time and resources to replicate. Working with a pre-packaged IMU from Hillcrest Labs delivers precise, robust performance and moves you to market faster, with confidence.
Haven’t You Heard?
3D audio can truly immerse a user in a new space, transporting them to a studio or the middle of an epic fight, but it requires precise, low-latency motion tracking. Capturing that motion is our specialty. Integrating one of our products lets you leverage our expertise and added features to deliver positive experiences to your customers. With high sample rates, tap and gesture recognition, predictive head tracking, activity classification, and dynamic calibration, adding value to your product never sounded so good.
3D/Spatial Audio: How To Overcome Its Unique Challenges To Provide A Complete Solution
Join CEVA and VisiSonics experts to learn about:
- What is Spatial/3D Audio?
- How does Spatial/3D Audio work?
- What are the technical challenges to implementing an effective Spatial/3D Audio system?
- Why is head tracking so important to Spatial/3D Audio?
- How can the technical challenges be addressed to provide a robust, complete solution?