In my last installment on this topic, we began our exploration of Artificial Intelligence by developing a definition of what human intelligence really is, along with its foundation and course of development. Many of the world's leading high-technology companies are attempting to create Mind from a completely non-biological and non-evolutionary direction by massing silicon, server racks, and software. They are intrigued by the myriad possibilities suggested by the creation of sentient robots – machines that can think. We'll explore two of those system-level efforts in this article.
The apple cannot be stuck back on the Tree of Knowledge; once we begin to see, we are doomed and challenged to seek the strength to see more, not less. - Arthur Miller
In 2014, Google, the eponymous search engine company, acquired DeepMind Technologies for something over $400M. DeepMind was developing a variant of artificial neural networks known as Deep Learning, in which a machine uses a library of models in combination with linear and nonlinear computational transformations to capture patterns in data. The approach has become increasingly popular in the development of Machine Vision and Voice Recognition/Activation technology. The Google X R&D group sought DeepMind's capabilities for incorporation into its own AI initiatives, known internally as Google Brain.
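The core idea behind that "linear and nonlinear" phrasing can be made concrete in a few lines. The sketch below is purely illustrative and not any Google or DeepMind code: it shows a toy feedforward network in which each layer applies a linear map (a matrix multiply plus bias) followed by a nonlinear activation. Stacking these alternating steps is what lets a deep network capture patterns that no single linear model could. The layer sizes and random weights here are arbitrary placeholders; in a real system the weights are learned from data.

```python
import numpy as np

def relu(x):
    """Nonlinear activation. Without a nonlinearity between them,
    stacked linear layers would collapse into one linear map."""
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input vector through alternating linear (W @ x + b)
    and nonlinear (ReLU) transformations -- the basic pattern of
    a deep feedforward network."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
# Toy 3-layer network: 8 inputs -> 16 -> 16 -> 4 outputs.
# Weights are random placeholders standing in for learned values.
layers = [
    (rng.standard_normal((16, 8)) * 0.1, np.zeros(16)),
    (rng.standard_normal((16, 16)) * 0.1, np.zeros(16)),
    (rng.standard_normal((4, 16)) * 0.1, np.zeros(4)),
]
features = forward(rng.standard_normal(8), layers)
# features.shape == (4,) -- a compact learned representation
```

Production systems differ mainly in scale (millions of learned weights, many more layers) and in how the weights are trained, not in this basic structure.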
AI is not actually a particularly new endeavor for Google. One can conceptualize its search engine as a kind of machine learning software. As a consequence, Google Brain and its Deep Learning research touch upon just about everything the firm is doing.
Various Google products have already benefited from this research. Google Maps, for instance, no longer requires teams of people manually sorting through street-level photos and gathering building numbers to verify unique addresses; that has become a Machine Vision task. Voice Recognition has been integrated into Android and image search into Google+, with these capabilities soon to be added to Google Translate.
In one program, Google combined image recognition and text translation: the machine is shown new images and must generate an appropriate text label for each one. The system appears to succeed about two-thirds of the time, though the research scientists on the project have not yet worked out exactly how it does so.
These activities (including the various AI-like functions of Google Now found in Android mobile phones) are all directly supported by the basic AI capabilities residing on Google's servers – and herein lies the fundamental flaw in the company's approach to AI. Google Brain researchers consider it a remarkable achievement that a network of 16,000 processor cores, examining 10M images with Machine Vision software, recognized on its own that they were all images of a cat. The defect in this method is that it is not at all how the human mind works. A human infant does not need millions or billions of instances to recognize a cat or a dog and distinguish them from each other; a dozen or so exposures is typically enough.
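The experiment described above used unsupervised learning: no human ever labeled an image "cat"; the system grouped recurring patterns on its own. As a vastly scaled-down stand-in (the actual work used a deep sparse autoencoder, not the simple clustering shown here), the sketch below runs k-means on two synthetic blobs of "image feature" vectors and discovers the two groups without being told any labels. All names and data here are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: discover k groups in unlabeled data by
    alternately assigning points to their nearest center and
    moving each center to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Distance from every point to every center, then pick nearest.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# Two synthetic clusters standing in for "cat" vs. "not cat" features.
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(3.0, 0.3, (50, 2))])
labels, centers = kmeans(X, k=2)
# No labels were ever supplied; the grouping emerged from the data.
```

The contrast with human learning stands regardless of the algorithm: the machine needs large volumes of data to find the pattern, where an infant needs only a handful of examples.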
This suggests that the Google researchers are nowhere close to building a true AI. Google Brain appears to have quite a long road to travel before it can be said to have achieved even basic Awareness, let alone Perception/Cognition or Consciousness.
As the births of living creatures at first are ill-shapen, so are all Innovations, which are the births of time. - Francis Bacon
One of the first things that becomes obvious when scrutinizing Microsoft's work in AI and Robotics is that its mission is to beat Google. The Cortana voice-activated digital assistant, which competes with Google Now and Apple's Siri, also drives much of Microsoft's AI effort. It reads and understands email, powers search in Windows Phone 8.1, and can even be shown an image captured by the phone's camera and asked to identify it.