
Building a Fully Operational Data Supply Chain

The ancient Greek philosopher Heraclitus noted that nothing is permanent except change. He was a man ahead of his time. We live in a world of constant change and a business climate shaped by rapidly evolving and emerging technologies. The only way to succeed and achieve excellence is to embrace that change.

At times, though, change challenges electronics OEMs, which struggle to manage vast amounts of data quickly enough to extract the value their businesses need. With the help of a modern data architecture, however, it is possible to access the most business-relevant information and gain greater competitive advantage in the market. First, though, it is important to apply best practices for building an effective and efficient data supply chain.

The changing world of data

In the past, OEMs relied upon a single data warehouse. Later, the explosion of big data and its many new data sources gave organizations access to more business-relevant information than they could ever have imagined. As a result, organizations turned their attention to creating a variety of repositories for data storage and data analysis.

According to CITO Research's whitepaper titled Hadoop and the Modern Supply Chain (registration required), the data supply chain connects and feeds all of the different data sources into the modern data architecture. That architecture includes multiple repositories: Hadoop, traditional enterprise data warehouses (EDWs) and other data stores, cloud data sources, public or open data sources, commercial data sources, and data from mobile devices, sensors, and the Internet of Things (IoT).

Organizations are adopting modern data architectures to maximize their investments by ensuring timely data is available for important initiatives such as business intelligence (BI) and analytics. With the ability to make more informed decisions, OEMs can capture competitive advantage and ensure healthy business development.

Mix & match solutions – avoiding challenges

Traditional warehouses alone are not enough anymore. They can't move existing data fast enough to meet today's demands and needs. Multiple data sources need to be coordinated across departments and integrated to inform business decisions. To help with this, and to reduce costs, an increasing number of organizations are adopting Apache Hadoop. This open-source, scalable software for distributed computing handles all sorts of data (structured, semi-structured, and unstructured), addresses both volume and complexity, and plays an important role in the data supply chain. Organizations can now keep more data on hand.
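
To make that idea concrete, here is a minimal sketch, in PySpark, of how structured and semi-structured data might be landed in a Hadoop data lake in a common, queryable format. The file paths, job name, and HDFS locations are illustrative assumptions, not details from the CITO Research paper.

```python
# Minimal sketch: landing mixed source data in Hadoop via PySpark.
# Paths and names below are illustrative assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("data-supply-chain-ingest")  # hypothetical job name
    .getOrCreate()
)

# Structured data: a CSV export from an operational system.
orders = spark.read.csv(
    "hdfs:///landing/orders.csv", header=True, inferSchema=True
)

# Semi-structured data: JSON events from sensors and IoT devices.
sensor_events = spark.read.json("hdfs:///landing/sensor_events/*.json")

# Write both into the Hadoop data lake as Parquet so downstream
# BI and analytics tools can query them alongside other sources.
orders.write.mode("overwrite").parquet("hdfs:///lake/orders")
sensor_events.write.mode("overwrite").parquet("hdfs:///lake/sensor_events")
```

The point of the sketch is simply that one engine can ingest very different shapes of data into a single store, which is what lets organizations "keep more data on hand."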

Yet it comes with some challenges. Organizations need to be prepared to hire specialized expertise, or to invest in training, to build and manage the Hadoop infrastructure. Talent, though, may be scarce: according to CITO Research, for every Hadoop expert there are 50 or more SQL experts. Finding the right people with the Hadoop and data skills required to fully leverage the platform can be challenging, so organizations need to start early enough for a successful deployment.

The best way for organizations to get the most benefit from their data supply chain is to combine Hadoop with data movement software such as Attunity Replicate, according to the paper. These solutions are specifically designed to make data movement to and from Hadoop easier, faster, and more cost effective, even across a broad number of platforms. They allow users to move data from one repository to another in a highly visible manner, unifying and integrating data not only from Hadoop but also from all the other platforms within the enterprise. This ensures enough flexibility in the data supply chain. When Attunity Replicate is combined with Attunity CloudBeam, it can also be used to move data to and from the cloud.
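
The paper does not detail how such tools are configured, so the sketch below is only a generic, hedged illustration of the kind of movement they automate: copying a table from a traditional EDW into the Hadoop lake with a plain JDBC read in PySpark. This is not Attunity Replicate's API, and the connection URL, table name, and credentials are hypothetical placeholders.

```python
# Generic sketch of moving an EDW table into Hadoop.
# NOT Attunity Replicate's API; URL, table, and credentials
# below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("edw-to-hadoop").getOrCreate()

# Read a table from a traditional EDW over JDBC.
shipments = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://edw.example.com:5432/analytics")
    .option("dbtable", "public.shipments")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Land it in the Hadoop data lake, where it can be joined with
# sensor, mobile, cloud, and other data sources for analysis.
shipments.write.mode("append").parquet("hdfs:///lake/shipments")
```

Dedicated data movement products layer change data capture, scheduling, and monitoring on top of this basic pattern, which is what makes the movement visible and repeatable across many platforms.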

Speed data up & avoid risks

The paper also noted a number of risks that organizations may face if they are not able to move big data with the speed that is needed. These risks include:

  • The inability to execute business-critical big data projects
  • A limited view of their business and all the data they have, which can lead to ill-informed decisions
  • Laborious and manual movement of data
  • Stale data; if not utilized quickly, data loses relevance and value
  • Poor integration with legacy and other existing systems, limiting the scope of data
  • The inability to create data lakes supporting high-level analytics
  • Lack of effective data management and control, leading to misuse, or loss of data
  • The inability to track data visually across the data supply chain to better understand use and validity

Ensure flexibility & data agility

The adoption of a modern data architecture allows organizations to create an efficient and flexible data supply chain that supports easy data migration. This flexibility allows an organization to automate and move data quickly, supporting timely decisions that benefit the entire supply chain. Choosing the right solutions and hiring the right staff are critical to creating a fully operational and agile data supply chain that capitalizes on the breadth of available data.

2 comments on “Building a Fully Operational Data Supply Chain”

  1. puga2006
    March 18, 2015

Great article on the challenges of dealing with data in the supply chain. I like the call-out that the data is stuck in several legacy systems and often not visible; that is the most painful to extract, for several reasons.

Thought I would share this perspective on the challenge as well.

Should the approach be to extract data that can be modelled in all possible ways, or to extract meaningful insight that can be translated into actions? If I were a COO, I would want the insight to keep my supply chain agile, to be able to react to any interruption, and to be able to do a CBA (cost-benefit analysis) on various aspects of my supply chain. Is an approach that is more specific to the vertical, based on a set of KPIs (key performance indicators) that gives me insight, the better one, or is a system that has all the data that I can model however I want more useful? Maybe it is an operations approach vs. IT approach question.

In fact, it would be so nice to add a layer on top of the various supply chain systems that produces a set of standard KPIs offering meaningful insight, something similar to a balance sheet or an income statement.

    Would love to hear further thoughts. 

    Puga 

  2. kevin.petrie@attunity.com
    March 18, 2015

    Great to see this article – the Data Supply Chain is a very useful model for understanding how best to manage data as it flows across platforms to enable analytics.  Enterprises also have at their disposal new software that can monitor data usage within the supply chain so as to identify potential bottlenecks or areas of slack, and thereby make the best decisions about what needs to be re-balanced.  For example, organizations can measure the CPU cycles that are consumed by certain data sets, and measure how often those data sets are accessed.  They might discover that a significant portion of tables in an EDW are not being used, but their regular ETL updates are consuming half of all server CPU cycles.  Those tables might better reside on Hadoop, where they can be stored cost effectively and potentially correlated with new types of data that together yield fresh analytics insights.  Infrastructure is better utilized, and data science is more successful.

    – Kevin Petrie, Attunity.  www.attunity.com
