Most global companies with manufacturing footprints in multiple regions find that their similar-looking inbound component supply chains actually behave and perform very differently across sites. Companies are surprised to see variations in component pricing, lead times, safety stock levels, upside buffer inventory levels, package quantities, and several other factors, which ultimately lead to significant variation in supply chain performance.
To understand the causes of these variations and address them, many companies take the qualitative benchmarking route: the maturity of supply chain processes and practices is studied, and best practices are deployed at sites with lower performance. SCORmark, an alternative quantitative benchmarking methodology, analyzes and addresses the performance variations using metrics and scorecards, in conjunction with processes and best practices.
In my experience, most companies achieve better results with internal benchmarking using a quantitative methodology. The quantitative approach has several benefits that global companies can leverage to address variations in supply chain performance among manufacturing sites. It allows companies to compare key performance indicators and metrics across multiple sites and to identify the metrics and underlying processes that must be improved at specific sites. Here are four benefits of the quantitative approach:
- The standardized approach saves time and effort. With a focused approach, most companies are able to complete the benchmarking exercise in a few weeks, if not a few days. Managers can then devote their time to interpreting the results and taking action.
- Collecting data to compute metrics and create scorecards accounts for most of the time-and-effort investment. Yet once the data sources have been identified, companies can rapidly repeat the exercise in the next benchmarking cycle. As a result, companies are able to benchmark the supply chain performance across multiple sites every few months, identify areas of improvement, and develop a roadmap for better performance.
- This methodology drives different sites toward a common definition of metrics. It is hardly surprising that some metrics are defined and calculated slightly differently by individual sites. A frequent example is the on-time delivery (OTD) metric: multiple definitions exist within companies, such as measuring delivery against the date parts are required by the customer versus the date they are promised for delivery. Comparison of metrics among sites is meaningful only if the metrics are defined consistently.
- The rapid repeatability of this approach provides another benefit. Managers quickly understand the link between performance, key metrics, and underlying processes and are able to interpret results from the benchmarking exercise. As knowledge associated with benchmarking accumulates within the company, the role and cost of external consultants diminish.
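To make the OTD definition pitfall concrete, here is a minimal sketch in Python (with hypothetical field names and sample data) showing how the same delivery records yield different OTD figures depending on whether the baseline is the customer-required date or the promised date:

```python
from datetime import date

# Hypothetical delivery records: each order carries the customer's
# required date, the promised date, and the actual delivery date.
deliveries = [
    {"required": date(2024, 3, 1),  "promised": date(2024, 3, 5),  "delivered": date(2024, 3, 4)},
    {"required": date(2024, 3, 1),  "promised": date(2024, 3, 1),  "delivered": date(2024, 3, 2)},
    {"required": date(2024, 3, 10), "promised": date(2024, 3, 12), "delivered": date(2024, 3, 12)},
]

def otd(records, baseline):
    """Fraction of orders delivered on or before the chosen baseline date."""
    on_time = sum(1 for r in records if r["delivered"] <= r[baseline])
    return on_time / len(records)

otd_required = otd(deliveries, "required")  # measured against customer-required date
otd_promised = otd(deliveries, "promised")  # measured against promised date
```

On this sample data, OTD against the required date is 0% while OTD against the promised date is 67% — the same deliveries, two very different scores. This is why two sites reporting "95% OTD" may not be comparable until the definition is harmonized.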
If you've faced similar issues related to significant performance variations across different sites globally, or you have used a benchmarking technique to address variations, I invite you to share your insights.