SAN JOSE, Calif. — Machine learning is sparking a new era in computing, according to Nvidia’s chief executive, who hopes that his latest GPU, Volta, becomes its favorite fuel.
The Volta announcement was the centerpiece of a two-hour keynote at GTC on “Powering the AI Revolution.” The annual Nvidia event attracted a record crowd of more than 7,000 attendees, thanks to rising interest in applying an expanding array of neural networks to fields ranging from agriculture to pharmaceuticals and public safety.
Nvidia’s graphics processors hold a strong position in training neural nets for machine learning. “Every single cloud company in the world has Nvidia GPUs provisioned for a cloud service,” said founder and CEO Jensen Huang.
But it’s a hotly competitive field. More than a half-dozen startups are working on new architectures, two of them acquired last year by Nvidia’s largest rival, Intel. The x86 giant also bought established FPGA maker Altera, whose chips are used as accelerators in the data centers of Baidu and Microsoft.
Rival AMD is also accelerating its rollout of new GPUs, with its Vega chip due soon. However, AMD has only recently added a strong focus on machine learning to its longstanding pursuit of the gaming market.
Nvidia is running as fast as possible to stay ahead. Its 815-mm² Volta packs 5,120 CUDA cores and 16 Mbytes of cache to deliver 7.5 TFlops of 64-bit floating-point performance. It is made in a 12-nm FinFET process at TSMC and is packaged with 16 GBytes of Samsung HBM2 memory delivering 900 GBytes/second of bandwidth.
Volta delivers 50% better general-purpose performance than last year's Pascal, said Huang. (Images: EE Times)
The Volta processor, called the Tesla V100, can link to other V100s or to CPUs at 300 GBytes/s via Nvidia’s proprietary NVLink interconnect. The chip also adds new instructions for the 4x4 matrix operations at the heart of its 640 new Tensor cores.
The net result is a 50 percent performance boost over the Pascal chip that the company launched a year ago and started shipping last fall. Volta delivers 120 Tensor TFlops, 12 times the performance of Pascal on training jobs.
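Conceptually, each Tensor core performs a fused matrix multiply-accumulate on small 4x4 tiles. The sketch below is illustrative only — the function name and the mixed-precision detail (16-bit inputs accumulated at higher precision) are assumptions for the example, not specifics from the article:

```python
import numpy as np

def tensor_core_mma(A, B, C):
    """Sketch of one 4x4 matrix multiply-accumulate: D = A @ B + C.
    Inputs are stored at half precision; products are accumulated
    at single precision (hypothetical illustration)."""
    assert A.shape == B.shape == C.shape == (4, 4)
    A16 = A.astype(np.float16)
    B16 = B.astype(np.float16)
    # Widen for accumulation, then fuse the multiply and the add.
    return A16.astype(np.float32) @ B16.astype(np.float32) + C.astype(np.float32)

A = np.eye(4)               # identity
B = np.full((4, 4), 2.0)    # all twos
C = np.ones((4, 4))         # all ones
D = tensor_core_mma(A, B, C)
print(D[0, 0])  # 1*2 + 1 = 3.0
```

Doing many such small tiled multiply-accumulates in parallel, rather than scalar floating-point operations, is what lets the chip reach its much higher Tensor throughput on neural-network workloads.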
With its Tensor cores, Volta is “no longer a general-purpose GPU architecture, so Nvidia cannot be accused of using its GPU hammer and seeing every problem as a nail,” said Kevin Krewell, principal analyst at Tirias Research. “Although Volta is more efficient [than Pascal] running deep-learning workloads, Nvidia didn’t compare it with Google’s TPU ASIC.”
Nvidia was vague on why it chose the TSMC 12-nm node. Mobile SoCs are racing into production with the 10-nm TSMC node, while the 12-nm node is based on a shrink of TSMC’s 16-nm process, Krewell noted. “It could be that the 12-nm node offered faster time-to-market, but Volta is also a huge die and is pressing the limits of die area.”
“I think [that] Nvidia will wait until late Q3 or early Q4 to bring out a graphics-only version of Volta,” said Jon Peddie, principal of Jon Peddie Research. “That would be an appropriate time to do it as AMD will be bringing out their highly anticipated Vega in Q3, and if it’s as good as many people think it will be, Nvidia can push back with its GTX-based Volta.”
Engineering managers from Amazon, Baidu, Facebook, Google, Microsoft, and Tencent released statements supporting Volta. They were joined by a technology leader from a U.S. national lab.
As with past generations, Nvidia will supply not only chips but also its own systems, packing up to eight Voltas. The systems include two 2.2-GHz Xeon E5 processors and 128 Gbytes of memory, and draw 3,200 W to deliver up to 960 TFlops of 16-bit floating-point performance.
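The system-level figure is consistent with the per-chip numbers quoted earlier — a quick back-of-envelope check, using only values from the article:

```python
# Per-chip Tensor throughput and GPU count quoted in the article.
TENSOR_TFLOPS_PER_V100 = 120
GPUS_PER_SYSTEM = 8

system_tflops = TENSOR_TFLOPS_PER_V100 * GPUS_PER_SYSTEM
print(system_tflops)  # 960 -- matches the quoted system figure
```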
To read the rest of this article, visit EBN sister site EE Times.