The Intel brand is synonymous with computer chips, as attested by the “Intel Inside” tag on hardware. But the company is adapting to today’s computing demands and participating in the advance of artificial intelligence (AI). I spoke with Intel’s chief data scientist, Bob Rogers, about the company’s direction and the new possibilities opened up by machine learning, deep learning, and transfer learning.
In October, Intel announced that by the end of this year, the Intel Nervana Neural Network Processor (NNP), the industry’s first silicon for neural network processing, will be on the market. This is a product of collaboration with Facebook, which shared “technical insights” that Intel was able to implement in building the AI hardware.
Even before this announcement, Intel was on the road to AI with a number of strategic acquisitions. As The Verge reported, it bought Mobileye in March, and last year it bought Movidius shortly after acquiring Nervana Systems.
That fits with what Rogers told me about the company “making a major effort to be the leader in AI.” The reason for going in that direction, he explained, is that as “Intel makes ingredients that go into computing systems,” it pays attention to what enterprises “need to answer their big analytics questions.”
There is a “maturity curve” to analytics for answering different types of questions, Rogers explained. The first form is “descriptive analytics,” which applies to what happened, “like how many widgets did we sell.”
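Descriptive analytics of this kind amounts to summarizing records that already exist. A minimal sketch in Python, using a hypothetical sales log (the regions and figures are invented for illustration):

```python
from collections import Counter

# Hypothetical sales log: descriptive analytics just tallies up
# what already happened, e.g. "how many widgets did we sell?"
sales = [
    {"region": "east", "widgets": 120},
    {"region": "west", "widgets": 95},
    {"region": "east", "widgets": 80},
]

totals = Counter()
for sale in sales:
    totals[sale["region"]] += sale["widgets"]

print(totals)  # Counter({'east': 200, 'west': 95})
```

Every question at this stage is backward-looking; nothing in the computation tries to fill in unobserved values.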
The next step is “predictive analytics,” which can identify future actions or fill “in gaps between what can be observed and what we would like to know.” An example would be figuring out what a customer is likely to purchase based on the customer behavior we have seen.
Predictive analytics can also provide insight on “why.” For example, AI can identify “the main causative factors” behind the need for repairs on machines. That type of insight increases “the confidence around each prediction,” he explained.
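One simple way to surface the “main causative factors” Rogers describes is to rank candidate variables by how strongly they track the outcome. The sketch below uses synthetic machine telemetry (the sensor names and data are invented) and plain correlation as a stand-in for the richer models a production AI system would use:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical machine telemetry: three sensor readings per machine,
# plus whether a repair was later needed.
n = 500
temperature = rng.normal(70, 5, n)
vibration = rng.normal(0.3, 0.1, n)
load = rng.normal(0.5, 0.2, n)
# In this toy data, vibration is the planted driver of repairs.
repair = (vibration + rng.normal(0, 0.05, n) > 0.4).astype(float)

# Rank candidate factors by absolute correlation with the repair
# outcome -- a crude proxy for identifying the main drivers.
factors = {"temperature": temperature, "vibration": vibration, "load": load}
scores = {name: abs(np.corrcoef(vals, repair)[0, 1])
          for name, vals in factors.items()}

top = max(scores, key=scores.get)
print(top)  # the planted driver should rank first
```

Correlation is only a first pass (it cannot separate correlation from causation), but the shape of the analysis is the same: attach a score to each candidate factor, then report the strongest.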
However, sometimes there are obstacles to getting those answers. “One of the very interesting challenges that I’m seeing in AI is that many times end users don’t have enough data to create a full stack deep learning solution,” Rogers said.
He offered the manufacturing example of setting up a system to recognize the “occasional malformed widget.” That solution would entail a “flexible custom vision system” that can apply “deep learning” to recognize what it should be responding to from examples. However, showing all the possible “malformed widgets” could take “tens of millions” of images.
That approach takes too much time to be practical. As he said, it’s possible to “spend five years” going through all kinds of “examples of bad widgets” and still not cover them all. So what can you do to ensure that your system will be able to distinguish the bad ones?
One possibility is to apply “transfer learning.” That means taking a system that has already “been trained in all those millions of images and then repurpose it with your data tweak.” In that way, the “basic deep learning capabilities are transferred to the new use case with pretty small amounts of data,” he explained.
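In code, transfer learning amounts to freezing the pretrained layers and training only a small new head on your own examples. The sketch below is a toy stand-in: a fixed random projection plays the role of the frozen pretrained feature extractor, and the widget images and labels are synthetic — in practice the frozen weights would come from a network trained on millions of real images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained, frozen feature extractor. In a real
# system this would be a convolutional backbone with learned
# weights loaded and left untouched.
W_frozen = rng.normal(size=(64, 16))

def extract_features(images):
    # images: (n, 64) flattened inputs -> (n, 16) ReLU features
    return np.maximum(images @ W_frozen, 0.0)

# A small labeled set of "good" vs "malformed" widgets -- far fewer
# examples than training a network from scratch would require.
X = rng.normal(size=(40, 64))
y = (X[:, 0] > 0).astype(float)  # toy labels for illustration

feats = extract_features(X)
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)  # standardize

# Train only the new lightweight head (logistic regression) on top
# of the frozen features -- this is the "transfer" step.
w = np.zeros(16)
b = 0.0
for _ in range(500):  # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = p - y
    w -= 0.1 * feats.T @ grad / len(y)
    b -= 0.1 * grad.mean()

accuracy = ((p > 0.5) == (y == 1)).mean()
print(f"training accuracy of the new head: {accuracy:.2f}")
```

The point of the design is in what is *not* trained: the 64×16 extractor stays fixed, so only 17 parameters have to be learned from the small labeled set.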
Another solution is to use a 3D model “to simulate data around” it to generate images of “various malformations,” he said. In that way, you “can basically create millions of variations by making different changes.” It’s possible to go even further, to build a deep learning system that “understands how to randomly generate realistic simulations of the object itself.”
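That simulation idea can be sketched as a generator that randomly perturbs an idealized template. Everything here is hypothetical — a small 2D array stands in for the 3D model, and the “malformations” are simple random dents and sensor noise:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical base model: a 2D silhouette of an ideal widget,
# standing in for a full 3D CAD model.
base = np.zeros((32, 32))
base[8:24, 12:20] = 1.0  # a simple rectangular "widget"

def simulate_malformation(template, rng):
    """Produce one synthetic 'malformed widget' image by randomly
    perturbing the ideal template."""
    img = template.copy()
    # Random dent: zero out a small randomly placed patch.
    r, c = rng.integers(6, 24, size=2)
    img[r:r + 4, c:c + 4] = 0.0
    # Random sensor noise on top.
    img += rng.normal(scale=0.05, size=img.shape)
    return np.clip(img, 0.0, 1.0)

# Each call yields a fresh variation; looping yields as many labeled
# "bad widget" examples as training requires.
synthetic_batch = np.stack(
    [simulate_malformation(base, rng) for _ in range(1000)]
)
print(synthetic_batch.shape)  # (1000, 32, 32)
```

Scaling the loop from a thousand images to millions is just compute, which is the appeal of the approach: the labeled defect examples are manufactured rather than collected.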
“As an Intel guy, I’m interested in that because you can do that with no special infrastructures,” Rogers said. That ability of AI to set itself up with the examples it needs to learn how to optimize its functionality makes it far more accessible, and that has great potential impact.