Mentor in Robo-Car Race with Mobileye, Nvidia

MADISON, Wis. – As more automakers integrate different sensors into ADAS/autonomous cars, they often justify the decision by describing sensor fusion as “critical to the safety” of highly automated driving.

Often left unsaid, though, are details on the data — raw or processed — they are using and the challenges they face in fusing different types of sensory data. As Ian Riches, director of the global automotive practice at Strategy Analytics, confirmed, “Sensor fusion today is not done on the raw sensor data.  Each sensor typically has its own local processing.”

Mentor Graphics Corp. will come to SAE World Congress in Detroit this week to demonstrate how “raw data fusion” in real time from a variety of modalities – radar, lidar, vision, ultrasound, etc. – can provide “dramatic improvements in sensing accuracy and overall system efficiency.”

Mentor is rolling out an automated driving platform called DRS360, designed to “directly transmit unfiltered information from all system sensors to a central processing unit, where raw sensor data is fused in real time at all levels,” the company said.

Comparing “sensor fusion” with “raw data fusion,” Glenn Perry, vice president and general manager of Mentor Graphics’ Embedded Systems Division, told EE Times that there are “subtle but important differences.”

Typically, sensors supplied to automakers come in a module designed to pre-process data. As Riches explained, “Data sent from a camera to a fusion system, for example, will not be the actual image data, but rather a description of the areas of interest within that image – e.g. here is a white line, here is a car, here is a traffic sign. Any fusion is thus done on that much higher-level data.”

How sensor fusion is done today, using processed data from separate sensor modules (Source: Mentor Graphics)
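
For illustration only, here is a minimal Python sketch of the object-level fusion Riches describes, in which each sensor module runs its own local processing and forwards compact detections rather than raw measurements. All class, function, and field names are hypothetical stand-ins, not any vendor’s API.

```python
# Hypothetical sketch of today's object-level ("smart sensor") fusion.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "white line", "car", "traffic sign"
    position_m: tuple   # (x, y) estimate in vehicle coordinates, metres
    confidence: float   # the sensor module's own confidence, 0..1

def camera_module(frame) -> list[Detection]:
    # The camera's local microcontroller runs its own detection pipeline
    # and forwards only these compact descriptions, never the raw pixels.
    ...

def radar_module(sweep) -> list[Detection]:
    # Likewise, the radar module forwards tracked objects, not raw returns.
    ...

def fuse_object_level(*detection_lists: list[Detection]) -> list[Detection]:
    # Fusion sees only the pre-processed, higher-level data: detections that
    # refer to the same object are associated and merged.
    merged: list[Detection] = []
    for detections in detection_lists:
        merged.extend(detections)  # a real system would associate and deduplicate here
    return merged
```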

Complex task
Mentor believes that by removing the pre-processing microcontrollers from each sensor module at the end nodes and working with raw data instead, designers of ADAS/autonomous cars can achieve a big boost in “real-time performance, significant reductions in system cost and complexity, and access to all captured sensor data for the highest resolution model of the vehicle’s environment and driving conditions.”
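
By way of contrast, here is a minimal sketch of the centralized raw-data idea, assuming a simple occupancy-grid environment model. The function name, grid size, and array layouts are illustrative assumptions, not DRS360 code.

```python
# Hypothetical sketch of centralized raw-data fusion: unfiltered sensor data
# arrives at one central processor and is fused into a single environment model.
import numpy as np

def fuse_raw_centrally(camera_frame: np.ndarray,   # raw pixels, e.g. (H, W, 3)
                       radar_returns: np.ndarray,  # raw returns, e.g. (M, 3)
                       lidar_points: np.ndarray) -> np.ndarray:  # raw points, e.g. (N, 3)
    grid = np.zeros((200, 200))  # simple 2D occupancy grid around the vehicle, 0.5 m cells
    # Accumulate raw lidar evidence directly; nothing was discarded upstream.
    for x, y, _z in lidar_points:
        ix, iy = int(x * 2) + 100, int(y * 2) + 100  # metres -> grid cells
        if 0 <= ix < 200 and 0 <= iy < 200:
            grid[ix, iy] += 1.0
    # Raw radar returns and pixel-level camera evidence would be accumulated into
    # the same grid here, then normalized into one high-resolution model.
    return grid
```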

To read the rest of this article, visit EBN sister site EETimes. 
