Big-data. We've heard a lot about it recently. With the cloud, social networks, and the number of connected devices all mushrooming, the challenges of managing, analyzing, and making decisions based on all this data are multiplying just as fast.
The amount of data from all sources being produced on an annual basis is already overwhelming. As this Intel video points out, global data in 2013 is predicted to grow to 2.7 zettabytes (one zettabyte equals 1 billion terabytes) — or in clearer terms, 500 times more data than “all data ever generated prior to 2003… and it's going to grow three times bigger than that by 2015.”
Many industry watchers, including Gartner Inc., predict big-data will fuel huge amounts of IT spending. Gartner notes that $28 billion of worldwide IT spending in 2012 is expected to be earmarked for big-data, and in 2013 that number will jump to $34 billion.
Although often seen as its own market needing its own tools, big-data is not a standalone issue. Rather, it is something that affects all corporate data, practices, and software solutions, and soon there will be no distinction between big-data and regular data, according to Gartner:
- “Despite the hype, big data is not a distinct, stand-alone market, but it represents an industrywide market force which must be addressed in products, practices and solution delivery,” said Mark Beyer, research vice president at Gartner. “In 2011, big data formed a new driver in almost every category of IT spending. However, through 2018, big data requirements will gradually evolve from differentiation to 'table stakes' in information management practices and technology. By 2020, big data features and functionality will be non-differentiating and routinely expected from traditional enterprise vendors and part of their product offerings.”
This is the key phrase: “Big data requirements will gradually evolve from differentiation to 'table stakes' in information management practices and technology.” Translation: Companies that incorporate big-data solutions today will be first-movers, gaining competitive advantages enterprise-wide and, more specifically, within their supply chains.
One of the biggest challenges facing companies — at least from a supply chain perspective — is figuring out how to collect, aggregate, and use unstructured, big-data inputs and convert them into “fast data”: meaningful data that can be used to make quicker decisions, allocate supply chain resources more efficiently, reduce complexity, or increase agility.
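To make the “fast data” idea a bit more concrete, here is a minimal sketch in Python. The event fields (`sku`, `qty`), the window size, and the reorder rule are all illustrative assumptions, not any particular vendor's approach; the point is simply that raw, per-event supply chain signals can be rolled up into a short-window summary a planner or automated rule can act on quickly.

```python
# A minimal sketch of turning a raw order-event stream into "fast data":
# short-window aggregates per SKU. Field names, the window size, and the
# reorder rule are illustrative assumptions, not a reference implementation.
from collections import deque, defaultdict
from statistics import mean

WINDOW = 50  # keep only the most recent events per SKU (assumed window size)

recent_orders = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(event):
    """Accept one loosely structured order event and index it by SKU."""
    sku = event.get("sku", "UNKNOWN")
    qty = float(event.get("qty", 0))
    recent_orders[sku].append(qty)

def fast_signal(sku, on_hand, reorder_point=100):
    """Summarize the recent demand window and flag a possible reorder."""
    window = recent_orders[sku]
    avg_demand = mean(window) if window else 0.0
    return {
        "sku": sku,
        "avg_recent_demand": avg_demand,
        "reorder": on_hand < reorder_point + avg_demand,  # made-up rule
    }

if __name__ == "__main__":
    # Made-up events standing in for feeds from orders, shipments, and sensors.
    for e in [{"sku": "A-42", "qty": 12}, {"sku": "A-42", "qty": 30}, {"sku": "B-7", "qty": 5}]:
        ingest(e)
    print(fast_signal("A-42", on_hand=90))
```

In practice the aggregation would sit on a streaming platform rather than in-memory structures, but the shape of the problem is the same: continuous ingestion, rolling summarization, and a decision signal produced fast enough to matter.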
But as this Forbes article points out, “The incessantly changing positions of forecasts, orders, shipments and inventory… is complicated enough within the virtual enterprise, and becomes downright overwhelming in the context of global trading networks – with multiple tiers of partners trying to manage information changes across unique operating systems.”
It's obvious, as the Forbes article notes, that all participants in an organization and the broader supply chain ecosystem “need to have access to a shared version of the truth plus the ability to act on this information in real time.” Arguably, though, supply chain collaboration is only the starting point.
Many practices — particularly those related to demand planning, inventory management, and order fulfillment — also have to evolve. And it's not just software tools that have to be upgraded to deal better with the flow of big and fast data.
We'd also be fooling ourselves if we ignored the very human aspect involved in all this. Sure, automating supply chain decisions is effective and is probably the longer-term solution. But how the supply chain team thinks about, behaves toward, and reacts to the piles of unanticipated, free-form data shouldn't be underestimated either. Maybe, in fact, the big and fast data dilemma is a blessing in disguise — something that will compel innovative supply chain thinking and create advantages not witnessed before.