SAN JOSE, Calif.—Google’s Daydream VR is very much a reality today for semiconductor managers such as Tim Leland. The head of visual processing at Qualcomm is one of many who have been working with the search giant for some time to bring to all next-generation Android phones an upgraded version of the mobile virtual reality Samsung pioneered with its Gear VR.
Leland’s team helped develop an Android framework for optimizing single-buffer rendering. The graphics cores in Qualcomm’s latest Snapdragon 820 SoC were tuned to deliver fine-grained pre-emption to reduce motion-to-photon latency, a key metric for making sure displays change as fast as a user’s head moves.
“It took a lot of effort” from deep in the SoC through Android to the application to hit the 20-millisecond target, Leland said.
Snapdragon chips needed “to change the way they handshake with sensors” to reduce latency. The sensors themselves need to support fast sampling, at rates from hundreds of hertz up to a kilohertz.
Qualcomm developed an algorithm it calls visual inertial odometry to track head motion across six degrees of freedom. It correlates data from a handset’s accelerometer, gyroscope, magnetometer and cameras on Snapdragon’s embedded Hexagon DSPs.
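Qualcomm has not published the internals of its visual inertial odometry algorithm, but the basic idea of fusing a fast-drifting gyroscope with a slow-but-absolute reference can be illustrated with a textbook complementary filter. The sketch below is a hypothetical, single-axis illustration only, not Qualcomm’s method; all names, the bias value and the blend factor are invented for the example.

```python
def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One fusion step: trust the integrated gyro short-term (alpha)
    and the accelerometer's absolute reading long-term (1 - alpha)."""
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

# Simulate a head tilting at 0.5 rad/s, sampled at 1 kHz -- the kind of
# IMU rate head tracking demands.
pitch = 0.0
dt = 0.001
for step in range(1000):            # one second of samples
    true_pitch = 0.5 * (step + 1) * dt
    gyro_rate = 0.5 + 0.01          # gyro reading with a constant bias
    accel_pitch = true_pitch        # accelerometer's absolute reference
    pitch = complementary_filter(pitch, gyro_rate, accel_pitch, dt)

print(round(pitch, 3))  # stays near the true 0.5 rad despite the gyro bias
```

Integrating the gyro alone would accumulate the bias as drift; blending in the absolute reference pins the estimate down, which is the core trade-off any such fusion scheme manages.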
Developers will be able to access the Qualcomm technique in an SDK the company will release soon. Google also plans to handle sensor fusion tasks in Android N, presumably for handset makers using SoCs that don’t sport their own sensor fusion capabilities.
In a white paper, Qualcomm claims its Snapdragon 820 has less than 18 ms motion-to-photon latency. "To put this challenge in perspective," it says, "a display running at 60 Hz is updated every 17 ms, and a display running at 90 Hz is updated every 11 ms."
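The white paper’s refresh figures follow from simple arithmetic, since a display refreshing at f Hz redraws every 1000/f milliseconds; a two-line illustration (not Qualcomm code):

```python
# Frame interval bounds how much of the motion-to-photon budget one
# refresh can consume: 1000 ms divided by the refresh rate in Hz.
for hz in (60, 90):
    print(f"{hz} Hz -> {1000 / hz:.1f} ms per frame")
```

That yields roughly 16.7 ms at 60 Hz and 11.1 ms at 90 Hz, matching the rounded 17 ms and 11 ms figures in the quote.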
Most Daydream headsets will be passive smartphone containers like Samsung's Gear VR. (Images: Google)
Handsets typically will need AMOLED displays. They support faster switching times than conventional LCDs, which can show ghosted images.
Graphics cores will use a host of tricks to render images that give users a fluid sense of motion while minimizing battery drain. For example, devices will reuse macro-blocks as often as possible to avoid re-rendering unchanged parts of a scene.
A simple technique is to render images in the center of a display first, assuming this is where the user is focused. A more advanced approach will use the smartphone’s camera to track eye movement to determine what portions of an image to render first.
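The center-first idea amounts to a rendering priority order: divide the screen into tiles and draw those nearest the assumed gaze point first. The toy sketch below illustrates only that ordering; the function name, tile grid and normalized coordinates are invented for the example, and real implementations do this inside the GPU pipeline.

```python
import math

def render_order(cols, rows, gaze=(0.5, 0.5)):
    """Return (col, row) tile coordinates sorted so tiles nearest the
    gaze point (normalized 0..1 screen coordinates) render first."""
    def dist(tile):
        cx = (tile[0] + 0.5) / cols   # tile center, normalized x
        cy = (tile[1] + 0.5) / rows   # tile center, normalized y
        return math.hypot(cx - gaze[0], cy - gaze[1])
    return sorted(((x, y) for x in range(cols) for y in range(rows)), key=dist)

order = render_order(4, 4)
print(order[0])   # one of the four center tiles comes first
print(order[-1])  # a corner tile comes last
```

Swapping the fixed `gaze` default for coordinates from camera-based eye tracking turns the same ordering into the more advanced approach described above.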
Compression of graphics data has become a focus for reducing power while increasing processing speed. Another trick involves changing the tones in an image to make it appear brighter without needing to crank up the power-hungry backlight.
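The tone trick can be sketched as a simple gamma curve that lifts midtone pixel values while leaving black and white untouched, so the image reads as brighter at the same backlight level. A minimal illustration, assuming a hypothetical 0..1 intensity scale and an invented exponent:

```python
def brighten(value, gamma=0.8):
    """Lift midtones with a gamma curve (exponent < 1) so an image looks
    brighter without raising the backlight; value is a 0..1 intensity."""
    return value ** gamma

print(round(brighten(0.25), 2))  # midtone lifted above 0.25
print(brighten(0.0), brighten(1.0))  # extremes unchanged
```

Because the curve is applied to pixel data rather than the panel, it costs a few GPU operations instead of backlight power.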
It’s a challenge given that today’s phones get hot to the touch just tracking driving directions in a car. This fall, phones will be asked to do more while riding next to the user’s face. The hope is that VR will inject excitement into a premium handset market that has slowed.
To read the rest of this article, visit EBN sister site EE Times.