Autonomous vehicles will be a great thing – someday. But governments and regulators don’t want to wait until ‘someday’ to improve traffic safety. The US $1 trillion infrastructure bill recently signed includes a mandate for automakers to help prevent drunk driving. The EU has reached provisional agreement on a revision to the General Safety Regulation to reduce blind spots on trucks and buses and to warn drivers in case of drowsiness or distraction. The Chinese government has recently published new standards with similar intent. Also included among these measures are requirements for automatic emergency braking, lane departure warnings, emergency steering and child occupancy detection in the back seat. The Chinese standards became effective in 2021, the European standards in 2022 and the US standards in 2025, though all set somewhat later dates for some of the more advanced features. This clear trend is pushing sensor, digital signal processing and AI/ML technologies to find new, efficient ways to meet the growing demands of the worldwide automotive safety market.
(Source: CEVA)
Regulation stimulates opportunities for innovation
Since these regulations define requirements rather than optional upgrades, such systems will be required in all new cars. As post-pandemic auto markets settle down, it is reasonable to expect these regulations to create a sizeable opportunity for chip and system builders. Some functions, such as automatic emergency braking and steering, can potentially migrate from high-end production vehicles or autonomous pilot systems across product models, though they will probably require more testing and guard-banding to ensure they adapt robustly to wider deployment. But others, such as driver monitoring systems, bridge the gap between driver-controlled and autonomous systems and demand yet more innovation. The impact will also be seen in greater adoption of more advanced digital signal processors (DSPs) and AI inferencing platforms to boost performance, reduce cost and enable artificial perception by interpreting data from diverse sensor suites.
Driver Monitoring Systems
Driver monitoring systems (DMS) aim to detect whether the driver is awake, paying attention to the road (not checking her phone) or otherwise impaired. Systems of this kind may use infrared cameras to detect driver pose and gaze. If these drift from expectation, a warning is sounded. If the behavior continues, warnings escalate, hazard lights may be turned on and the vehicle may either stop or pull over to the side.
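The escalation behavior described above is essentially a small state machine. The sketch below is a minimal, hypothetical illustration of such a policy; the alert levels and the 2 s / 6 s / 12 s thresholds are assumptions for illustration only, not values taken from any regulation or production DMS.

```python
from enum import Enum, auto

class AlertLevel(Enum):
    NONE = auto()           # driver attentive
    WARNING = auto()        # audible / visual warning
    HAZARD_LIGHTS = auto()  # escalate: turn on hazard lights
    SAFE_STOP = auto()      # stop or pull over to the side

def next_alert_level(inattentive: bool, seconds_inattentive: float,
                     current: AlertLevel) -> AlertLevel:
    """Return the next alert level given the latest pose/gaze estimate.

    Thresholds are illustrative placeholders only.
    """
    if not inattentive:
        return AlertLevel.NONE            # driver re-engaged: reset
    if seconds_inattentive > 12:
        return AlertLevel.SAFE_STOP
    if seconds_inattentive > 6:
        return AlertLevel.HAZARD_LIGHTS
    if seconds_inattentive > 2:
        return AlertLevel.WARNING
    return current                        # below first threshold: hold
```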
These functions require additional in-cabin sensors to detect this type of behavior and alert the driver accordingly. The sensing challenges vary from motion sensors to vital-sign detectors, all the way to depth sensing and low-light image sensors, as well as super-wide-angle cameras with the field of view (FoV) needed to monitor all car occupants at once.
This growing set of data sources poses the challenge of how to aggregate all the data, process each stream independently and analyze it together as fused information, all while meeting strict power consumption constraints by adding new, efficient mechanisms to the platforms we know today.
DMS functionality depends on sensor fusion, for example to merge information from multiple driver-facing cameras. But it can also be part of a broader situational awareness built through further sensor fusion. If the driver’s gaze is not facing forward and, at the same time, lane departure warning detects that the car is crossing lane boundaries, that signals the need for a very active warning. Similarly, detecting cellular or Wi-Fi activity together with a shift in gaze also signals a need to re-capture the driver’s attention.
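As a rough illustration of this kind of rule-based fusion, the sketch below combines a hypothetical gaze estimate with lane-departure and connectivity signals to decide how urgently to warn. The signal names and rules are assumptions for illustration; a real DMS would fuse calibrated confidences with temporal filtering rather than boolean flags.

```python
from dataclasses import dataclass

@dataclass
class FusedState:
    gaze_on_road: bool    # from driver-facing IR camera(s)
    crossing_lane: bool   # from lane departure warning
    phone_activity: bool  # e.g. detected cellular / Wi-Fi activity

def warning_urgency(state: FusedState) -> str:
    """Fuse in-cabin and outward-facing signals into a warning level."""
    if not state.gaze_on_road and state.crossing_lane:
        return "critical"   # eyes off road while drifting out of lane
    if not state.gaze_on_road and state.phone_activity:
        return "high"       # likely distracted by a device
    if not state.gaze_on_road:
        return "low"        # brief glance away, keep monitoring
    return "none"
```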
To accommodate the wide range of functionality needed, processors must be flexible enough to deal with different sensor outputs, different signal processing methods and massive data transfers. AI processors that can compress data, manipulate metadata to reduce bandwidth (BW) and leverage mathematical techniques such as the Winograd transform and matrix decomposition for AI, or 3D registration for computer vision (CV), will soon prevail over alternatives. All of this must be done in a way that preserves precision and avoids further development effort or unnecessary risk for the solution provider.
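To make the reference to the Winograd transform concrete, the sketch below implements the textbook F(2,3) variant for a 1-D, 3-tap filter, producing two outputs with four multiplications instead of the six a direct convolution needs. This is a generic illustration of the technique, not CEVA’s implementation.

```python
import numpy as np

def winograd_f23(d: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Winograd F(2,3): two outputs of a 3-tap FIR filter from a
    4-sample input tile, using 4 multiplications instead of 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Four Winograd products (the filter-side sums can be precomputed
    # once per filter, leaving only the data-side adds at runtime)
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

# Quick check against direct convolution
d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0] * g[0] + d[1] * g[1] + d[2] * g[2],
                   d[1] * g[0] + d[2] * g[1] + d[3] * g[2]])
assert np.allclose(winograd_f23(d, g), direct)
```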
DMS isn’t just about keeping you alert. Some car companies are working on incorporating facial recognition to adjust seating and climate control to suit a recognized driver. There are even suggestions that future vehicles may allow you to control car functions simply through eye movements.
DMS is real and must meet broad market needs
Lexus and Volvo were among the first automakers to add DMS. Cadillac already supports it in Super Cruise, and GM, BMW and Nissan are now installing DMS. Subaru is starting to add facial recognition. This trend is very real and can only accelerate as regulations start to bite, which will require automakers and Tier 1s to add this technology to the palette of options they must incorporate in upcoming models.
As regulatory requirements, these features can’t be limited to high-end models. They will have to hit price points all the way from luxury vehicles down to entry-level. That will require a range of innovative and scalable implementations to meet cost, power and safety requirements.
There’s a lot of opportunity and a lot to consider in building widely deployable DMS platforms. Talk to us at CEVA about how we can deliver low-cost, high-performance DMS systems.
(If you want to check out mandates and regulatory actions, here are links for the US, Europe and China.)
Published on Electronic Specifier.