Robotics offers the promise of huge advancements, yet technological limitations have historically held the sector back. The dream of embedding human-like functionality into instruments, tools and machinery has inspired a huge body of science fiction literature and film and, in fact, has had appeal at least as far back as the ancient Greeks. Yet mechanical servants, whether fact or fiction, have always lacked a set of qualities that would make their utility to their human masters complete: the ability to grow beyond basic programming and make relative judgments, formulate new concepts and exhibit novel behaviors. Together, such manifestations in essence describe the ability to think.
The brain is the citadel of the senses: this guides the principle of thought. – Pliny the Elder
Today there are competing concepts of robotic design: one pursues purely mechanical emulation and imitation; the other takes a holistic approach to robotic functionality. In the first, robots identify images or sounds by pattern matching against a library of models. This is the basis of machine vision and audio recognition today. The ability to perform such functions is equivalent to, at best, insect-level intelligence.
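The pattern-matching approach can be sketched in a few lines: an input is reduced to a feature vector and compared against a stored library of labeled models, with the nearest model winning. The feature values, labels, and library here are illustrative inventions, not anything from a real vision or audio system.

```python
import math

# Hypothetical model "library": each entry is a labeled feature vector,
# standing in for crude descriptors extracted from an image or sound clip.
LIBRARY = {
    "cat":   [0.9, 0.1, 0.3],
    "siren": [0.1, 0.8, 0.7],
    "drill": [0.2, 0.7, 0.9],
}

def classify(features, library):
    """Return the label whose stored model is nearest to the input vector."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(library, key=lambda label: distance(features, library[label]))

print(classify([0.85, 0.15, 0.25], LIBRARY))  # -> cat
```

Note what is missing: the system can only echo back the labels it was given. Nothing in it weighs, judges, or forms a new concept, which is why such matching alone tops out at reflex-level behavior.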
For a synthetic system to reach beyond such limitations, there is a hierarchy of data processing that needs to be satisfied. The interactions of these layers in the hierarchy (through means that are still mostly mysterious) are what seem to flip the switch in the human brain to turn on the light of reason. What can we generally say about the layers of such a hierarchy?
Awareness is the foundational layer of the mind, based mostly in the human brain stem and cerebellum. It is through this part of the brain that we interact with the rest of our body and through which we receive sensations such as temperature and pain. This 'lower brain' area is also involved in various autonomic functions such as motor control, respiration, and heartbeat, as well as sensory activity for vision, speech, and hearing. We apparently inherited this part of our brain from our distant reptilian ancestors.
Through the 'lower brain', we are able to experience and absorb outside inputs and data, though not necessarily understand them. It also plays a major role in our most basic instincts – hunger, fear of predators, the mating drive, fight versus flight and so forth. These base behaviors have predetermined weights and thresholds which trigger their expression. Exactly how this basic set of behavioral characteristics is stored, how they manifest themselves, and the complete extent to which the 'lower brain' governs them is still poorly understood.
To put it more simply: awareness is the most basic part of the mind's hierarchy, the foundational 'feature set', if you will, upon which all of a sentient creature's thoughts and behaviors are based. We can think of this underlying feature set as 'firmware.' Though the ability to understand outside inputs and data, and to influence the pre-set weights and thresholds of this feature set, is extremely restricted, this bedrock level of the hierarchy is required a priori.
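The 'firmware' metaphor can be made concrete: each instinct fires when a weighted sum of sensory inputs crosses a fixed, pre-set threshold. The behaviors, sensor names, weights, and thresholds below are illustrative assumptions, chosen only to show the mechanism the text describes.

```python
# Minimal sketch of hard-wired instincts: behavior -> (sensor weights, threshold).
# All names and numbers here are invented for illustration.
INSTINCTS = {
    "flee": ({"predator_sight": 0.8, "loud_noise": 0.5}, 0.7),
    "feed": ({"hunger": 1.0, "food_scent": 0.6}, 0.9),
}

def triggered(behavior, sensors):
    """Fire the behavior when its weighted sensory activation crosses threshold."""
    weights, threshold = INSTINCTS[behavior]
    activation = sum(w * sensors.get(name, 0.0) for name, w in weights.items())
    return activation >= threshold

sensors = {"predator_sight": 1.0, "loud_noise": 0.0, "hunger": 0.4}
print(triggered("flee", sensors))  # -> True  (0.8 >= 0.7)
print(triggered("feed", sensors))  # -> False (0.4 <  0.9)
```

Because the weights and thresholds are constants, this layer experiences and reacts but never understands; the higher layers are what adjust those constants.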
At the next level, the human brain compiles inputs from the lower-brain control centers and from the separate sensory channels and, along with being aware of this data, assigns a relative importance to it. The brain is now processing data with the intent of adjusting the base weights and thresholds dynamically, as well as preparing decisions based on freshly processed output.
It is at this topmost level of the hierarchy, where awareness combines with cognition and perception and correlates them with one's own existence and surroundings, that Mind flowers. Here both self-awareness and an understanding of unique identity are generated. The subjectivity of weights and thresholds on inputs is strongest at this level, including the suppression of automatically triggered behaviors from the lower brain. Since the Mind functions within a temporal context, experience acts as a modifier to weights and thresholds. In other words, the Mind permits learning to modify future behavior.
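Experience acting as a modifier to weights can be sketched with a classic delta rule: after each episode, a stimulus weight is nudged toward the observed outcome, so the same stimulus produces a different response next time. The scenario, learning rate, and names are assumptions for illustration, not a model from the article.

```python
# Sketch of learning as weight modification (delta rule).
# All parameters here are illustrative.
def update(weight, stimulus, outcome, lr=0.5):
    """Move the prediction (weight * stimulus) a step toward the outcome."""
    prediction = weight * stimulus
    return weight + lr * (outcome - prediction) * stimulus

w = 0.0                       # naive starting weight: stimulus predicts no harm
for _ in range(4):            # four painful encounters with the same stimulus
    w = update(w, stimulus=1.0, outcome=1.0)

print(round(w, 4))            # -> 0.9375: the weight has grown toward 1.0,
                              # so the stimulus now triggers avoidance
```

This is exactly the capability the lower 'firmware' layer lacks: here the weights themselves are data, rewritten by experience over time.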
Mind is the Master power that moulds and makes,
And Man is Mind, and evermore he takes
The tool of Thought, and, shaping what he wills,
Brings forth a thousand joys, a thousand ills: —
He thinks in secret, and it comes to pass:
Environment is but his looking-glass. – James Allen
Here we can see how mathematicians and researchers are drawing parallels between today's conceptions of the formation of the Mind and chaotic/nonlinear mathematics. From a reduced set of basic principles (instincts) with boundary conditions for growth over time, a much higher order of behaviors is generated, each differing from any other because of differences in starting conditions. In the end, the whole is much greater than the sum of its parts.
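The sensitivity to starting conditions invoked here is easy to demonstrate with the logistic map, a standard one-line nonlinear system: two trajectories that begin one part in a million apart end up wildly different. This is offered only as a toy analogue of the claim, not as a model of the brain.

```python
# Logistic map: x -> r * x * (1 - x), in its chaotic regime (r = 3.9).
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

a, b = 0.200000, 0.200001     # initial conditions differ by one part in 10^6
gap = []
for _ in range(50):
    a, b = logistic(a), logistic(b)
    gap.append(abs(a - b))

print(gap[0] < 1e-5)          # -> True: trajectories start out indistinguishable
print(max(gap) > 0.1)         # -> True: later they diverge completely
```

Identical rules plus imperceptibly different starting points yield unrepeatable histories, which is the mathematical core of the analogy to individual minds.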
Yet, ironically, part of the way the human mind functions has hindered progress in robotics research. One of the 'subroutines' that help organize our minds and facilitate learning is that our brains sift data and memories to form connections between them. The purpose of this is to discern patterns in the data.
Pattern recognition proved vital to our survival as a species. From the dawn of Homo sapiens some 250,000 years ago up to the period shortly before the birth of agriculture around 12,000–13,000 B.C., the earth's climate was unstable, with extreme temperature swings. By carefully observing nature and drawing inferences from experience, early humans were able to ascertain when certain flora would produce edible fruits, vegetables or tubers, the migration routes of herd animals, the likely hiding spots of predators and so forth.
Despite the nonlinear character of our minds, we have an instinctive desire to impose order, patterns and predictability on the world around us. We are driven to interpret the world in a linear manner, and we design our machines accordingly. Thus, we are the source of our own obstacles in advancing robotics.
Machine vision and voice recognition are already moving towards nonlinear mathematical foundations in order to capture the entire range of variables and their dynamic interactions, as well as their variations in weights, thresholds, and initial conditions. What has remained incompletely recognized is that neither machine vision nor voice recognition will ever reach its full potential if developed in isolation. Both require the capacity to weigh, judge, determine value or harm, and adjust conceptions accordingly. Judgment thus requires the ability to interpret – an ability that changes with experience, evolving needs, desires and dynamic conditions.
What’s needed now is an artificial version of this hierarchy, one which governs, coordinates and shapes all of it. What we are talking about, of course, is artificial intelligence. It is this capability which must become a reality so that we can provide the 'spark of life' that will turn inert machines into thinking beings.
There is no shortage of research initiatives dedicated to realizing such a breakthrough. Efforts are underway in the labs of Google, IBM, Qualcomm, NVidia and other private and public organizations, and we will begin examining some of them in the next editorial.