Most of us have had some exposure to the “AI Awakening” wave that has emerged over the past few years, with regular headlines such as:
- “Can A.I. Release Us from the Nightmare of Expense Reports?”
- “The World Bank’s latest tool for fighting famine: Artificial intelligence”
Of particular interest to supply chain, though, is how this “new technology” integrates with the ongoing journey toward more intelligent supply chain decision making to improve organizational performance. Our observation is that the real benefit comes from fostering community intelligence across the traditional division of tasks in supply chain management, tasks that are today at best loosely coupled, enabling a more rapid and intelligent response to emerging opportunities. This benefit cannot be realized without the underlying structure of an advanced memory engine and real-time analytics.
Before exploring community intelligence, let’s briefly define artificial intelligence (AI) and do a quick review of current trends in AI and supply chain. Merriam-Webster defines AI as a branch of computer science dealing with the simulation of intelligent behavior in computers. It is a broad area of research and application dating back to the 1950s, according to the American Association for Artificial Intelligence (AAAI), covering areas such as: speech recognition; natural language interfaces; conceptual structures; search algorithms (for problems such as Sudoku puzzles and factory scheduling); inference engines for rule-based systems; decision trees; and predictive analytics. Its first awakening occurred in the 1980s with great fanfare and some success, according to The Rise of the Expert Company by Edward Feigenbaum.
The second awakening occurred in our current decade. Within supply chain, most of the discussion falls into two areas:
- Machine learning for predictive analytics – The term was coined in 1959 by Arthur Samuel and is an outgrowth of the intersection of statistics and computer science/data science. There is no single accepted definition of machine learning. Wikipedia defines it as a field of computer science that uses statistical techniques to give computer systems the ability to “learn” from data without being explicitly programmed; SAS defines it as a specific subset of AI that trains a machine how to learn. Within supply chain management it typically refers to models that predict the future and/or the impact of certain actions: forecasting future demand, demand segmentation, demand shaping, and estimating the impact of actions such as promotions or trends in social media.
- Direct support for the regular operational activities of the planner within his or her “silo of influence” – automatic alerts and tracking of actual orders versus forecast versus inventory; tracking actual factory output and lead times versus projections; etc.
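As a concrete, deliberately simplified illustration of these two kinds of support, the sketch below fits a least-squares trend line to a short demand history (a basic form of machine-learning prediction) and raises an alert when actual orders drift more than a chosen tolerance from the forecast. The data, function names, and 15% threshold are all hypothetical assumptions for illustration, not a reference implementation.

```python
def linear_trend_forecast(history, periods_ahead=1):
    """Fit a least-squares trend line to past demand and extrapolate it."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + periods_ahead)

def deviation_alert(actual, forecast, tolerance=0.15):
    """Flag when actual orders fall outside +/- tolerance of the forecast."""
    if forecast == 0:
        return actual != 0
    return abs(actual - forecast) / forecast > tolerance

# Hypothetical monthly demand history (units).
history = [100, 104, 110, 113, 119, 125]
forecast = linear_trend_forecast(history)          # ~129.1 for the next period
print(deviation_alert(actual=160, forecast=forecast))  # alert: >15% over forecast
```

Even this toy version shows the silo problem discussed below: the forecast and the alert each see only their own slice of data, with no shared model of the wider demand-supply network.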
What is missing that is critical to the ongoing journey of more intelligent supply chains? First, decision silos remain intact, to the detriment of ongoing adoption. Much of the success in supply chain management in the 1990s came from eliminating data silos, which provided a common, agreed-upon set of data to support all decisions. However, the decisions themselves remained at best loosely coupled, relying on asynchronous communication: the supply planning model requires an aggregate statement of capacity provided by the planning tool, a single point estimate of demand provided by demand planning, purchase lead times provided by inventory planners, and business preferences from executives that are in fact more fungible than simple weights. In each case the summary information is stored in one location, but the detailed data and algorithms belong to each silo, even if they happen to reside on the same server.

Second, the current types of AI support typically don’t explain why, or even provide an easy path to trace through the network to uncover the core factors. Third, and more difficult than why, is deciding what response or repair action to take. This is the bulk of what good planners do, and it requires “deep knowledge”: not in the sense of “deep machine learning,” which simply means working through two or more layers of data and modification, but an understanding of the demand-supply network.
These are three of the principal components of community intelligence! Is this possible now, or is this the third wave, 20 years in the future? In fact, the term community intelligence comes directly from Feigenbaum’s book:
“It is a new kind of entity – a community intelligence born from the collective wisdom of various disciplines, experiences, and points of view, which dynamically disseminate the new intelligence around the same community that engendered it, solving problems that are too tough for us humans to figure out.” (pp. 63-64)
In short, the critical ingredient in each success is a memory engine and real-time analytics. What is different in 2018 from 1988? The successes of 1988 were narrower in scope, simply due to limitations in hardware and software, and this type of work could only be done by large firms (DuPont, Chevron, IBM, etc.). As a result, the current loosely coupled silo approach became the standard way to work around hardware and software limitations, and it fueled the growth of grind-it-out vendors of supply chain management software. Additionally, much of the current work in AI is done by “data scientists,” as opposed to decision scientists, which drove a change in emphasis.
In the blink of an eye, the promise of 1988 was forgotten, but not by the entire industry. As we start this conversation about community intelligence, we are bringing it to some customers. We hope that it will be considered a best-in-class standard for leveraging the value of AI in as little as five years.