Data is everywhere: in the lab, the factory, and the field. It exists in numerous formats, resides in multiple geographical locations, and, in the case of external supply chains, can come from many different companies. The ability to collect clean, usable data throughout the multi-faceted manufacturing process enables unprecedented visibility and valuable insights. The question is how companies can be more effective in big data acquisition to drive enterprise-wide improvements in manufacturing operations toward the goal of zero defective parts per million (DPPM). Successful data acquisition starts with collecting and mapping data, and then finding a common language to discover data trends that give insight into overall product manufacturing.
The process: collecting data
Manufacturing data can be highly dispersed, with semiconductor and electronics companies pulling data from various factory floors across multiple subcontractors (subcons). Collecting this enterprise data requires a comprehensive "getting everything" approach: taking data from every possible source on the manufacturing floor and throughout the supply chain, including parametric data, environmental data, manufacturing execution system (MES) data, process data and product genealogy data. Most importantly, all of this acquired data must be cleanly collected so that advanced analytics can be performed on it.
Mapping & harnessing data
Once the scope of the data is defined, it's necessary to create an operation mapping process, which lists all the data sources across all machines in all factories. Mapping is an essential step for any organization that wants to create a complete enterprise data picture: it enables the organization to manage data across all processes, facilities and systems, and ensures that the data can be easily fed into a big data analysis system.
The mapping process begins by identifying the diverse nomenclature used throughout the various data sources and finding a common language to help make the correlations. One company may refer to data from final test as FT_param1, while another company may label the data as FinalTest_p1. Correlating and finding a common nomenclature for parallel data that can come from multiple sources in the supply chain is key to effectively sorting and deciphering the data for quality results. Once the mapping flow is established, the data is ready to be harnessed.
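One simple way to picture this common-language step is as a translation table from each source's naming convention to a single canonical name. The sketch below is hypothetical: the parameter names extend the article's FT_param1 / FinalTest_p1 example, and the canonical names are illustrative, not a standard.

```python
# Hypothetical sketch: unify divergent parameter names from two supply-chain
# sources into one canonical nomenclature. All names are illustrative.

CANONICAL_NAMES = {
    "FT_param1": "final_test.param1",     # naming used by one company
    "FinalTest_p1": "final_test.param1",  # same measurement, another company
    "FT_param2": "final_test.param2",
    "FinalTest_p2": "final_test.param2",
}

def normalize_record(record: dict) -> dict:
    """Rename each measurement key to its canonical form.

    Keys with no mapping are kept as-is so nothing is silently dropped.
    """
    return {CANONICAL_NAMES.get(key, key): value for key, value in record.items()}

# Two records for the same device from different companies:
site_a = {"FT_param1": 1.02, "FT_param2": 0.97}
site_b = {"FinalTest_p1": 1.01, "FinalTest_p2": 0.99}

print(normalize_record(site_a))  # both records now share one nomenclature
print(normalize_record(site_b))
```

In practice this table would be generated from the operation map rather than hand-written, but the principle is the same: once parallel data from different sources shares one set of names, it can be correlated directly.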
Harnessing the data is part of the individual data acquisition cycle, which takes all the mapped data that has been collected and converts it into a common language. Each data source potentially has a different index and format, and each must be parsed accordingly. For example, ICs, multichip boards and systems generate multiple data points on multiple parts throughout the manufacturing test process. Each one of these data points represents parametric, process and in-use data that needs to be unified to truly understand the overall board or system performance. It's important to correlate how each chip or device fared in each test, and to collect the data from a full complement of sources including SMP, AOI, functional testing, assembly, cleaning, burn-in and wafer testing.
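The conversion step can be sketched as a set of per-source parsers that all emit one common record shape. The feeds, field names and record layout below are illustrative assumptions, not any specific vendor's format.

```python
# Hypothetical sketch: parse two differently formatted data feeds into one
# common record shape (device_id, stage, parameter, value).
from typing import Iterator, NamedTuple

class Measurement(NamedTuple):
    device_id: str
    stage: str
    parameter: str
    value: float

def parse_csv_feed(lines) -> Iterator[Measurement]:
    """Feed A: comma-separated 'device,stage,param,value' rows."""
    for line in lines:
        device, stage, param, value = line.strip().split(",")
        yield Measurement(device, stage, param, float(value))

def parse_kv_feed(lines) -> Iterator[Measurement]:
    """Feed B: space-separated 'key=value' pairs per row."""
    for line in lines:
        fields = dict(pair.split("=") for pair in line.split())
        yield Measurement(fields["id"], fields["stage"],
                          fields["param"], float(fields["val"]))

feed_a = ["D001,final_test,param1,1.02"]
feed_b = ["id=D001 stage=burn_in param=param1 val=1.05"]

# Once unified, records from both feeds can be indexed and correlated
# by device, stage, or parameter.
unified = list(parse_csv_feed(feed_a)) + list(parse_kv_feed(feed_b))
for m in unified:
    print(m)
```

A real pipeline would add a parser per source identified in the operation map; the key design point is that downstream analytics only ever see the common record shape.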
Now that the data has been collected, mapped and harnessed, it’s ready for analysis. For example, manufacturers can analyze past return material authorizations (RMAs) to predict the likelihood of other RMAs and establish rules to prevent these bad parts from shipping in the future. An intelligent big data solution can help identify the root cause of a failure, or understand why certain parts fail when paired with other parts. And most importantly, the data will provide visibility into the genealogy of manufactured products by tracing the source of every part in the final product.
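A minimal version of the rule-building idea is a shipping disposition derived from past RMA history. The lot names, counts and threshold below are illustrative assumptions, and real rules would of course draw on far richer data than a per-lot rate.

```python
# Hypothetical sketch: derive a shipping-hold rule from past RMA history.
# If a lot's historical RMA rate exceeds a limit, hold its parts for review
# instead of shipping them. All data and the limit are illustrative.
from collections import Counter

rma_counts = Counter({"LOT_A": 4, "LOT_B": 0})   # past RMAs per lot
shipped_counts = {"LOT_A": 1000, "LOT_B": 1200}  # units shipped per lot

RMA_RATE_LIMIT = 0.002  # example limit: hold lots above this RMA rate

def shipping_disposition(lot: str) -> str:
    rate = rma_counts[lot] / shipped_counts[lot]
    return "hold-for-review" if rate > RMA_RATE_LIMIT else "ship"

print(shipping_disposition("LOT_A"))  # 4/1000 = 0.004 > 0.002, so held
print(shipping_disposition("LOT_B"))
```

Genealogy data extends the same idea: because each shipped product can be traced back to its constituent parts and lots, a rule like this can be applied at any level of the product hierarchy.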
For manufacturers to reach the aggressive quality goal of zero DPPM, they need powerful big data analytics solutions that can parse through terabytes of enterprise data quickly and easily. It is only at this "everything" level of analysis that minute but significant patterns can be identified, such as separating good devices from bad ones, and identifying suspect devices within a "good" population. The challenge in doing the latter is that those devices pass all the requisite tests to be labeled as "good," yet have characteristics that can be correlated to known RMAs and will most likely fail before the end of the warranty period. Only through advanced analytics, like multivariate analysis, can these subtle characteristics be detected, enabling a semiconductor or electronics manufacturer to screen these devices out of their good populations, preventing downstream RMAs and protecting brand integrity.
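The "suspect device inside a good population" idea can be illustrated with a toy multivariate screen: each device passes its individual test limits, but a joint score over two parameters exposes one unit that deviates on both at once. The data, parameters and threshold are illustrative assumptions, and production screens would use more sophisticated methods over many more parameters.

```python
# Hypothetical sketch: flag "passing" devices that are multivariate outliers.
# Every device below is within its single-parameter test limits; only the
# combined deviation across parameters reveals the suspect unit.
from statistics import mean, stdev

# (device_id, param1, param2) measurements, all individually in spec
good_population = [
    ("D1", 1.00, 2.00), ("D2", 1.02, 2.01), ("D3", 0.98, 1.99),
    ("D4", 1.01, 2.02), ("D5", 1.10, 1.85),  # in spec per test, jointly odd
]

def zscores(values):
    """Standardize a list of measurements to zero mean, unit deviation."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

z1 = zscores([p1 for _, p1, _ in good_population])
z2 = zscores([p2 for _, _, p2 in good_population])

# Joint deviation score: sum of squared z-scores across both parameters.
SUSPECT_LIMIT = 4.0  # example threshold for screening
suspects = [dev for (dev, _, _), a, b in zip(good_population, z1, z2)
            if a * a + b * b > SUSPECT_LIMIT]
print(suspects)  # only D5 exceeds the joint limit
```

Here D5 would ship under per-parameter limits alone; the joint score is what singles it out, which is the essence of screening suspect devices out of a nominally good population.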
In summary, collecting, mapping, and harnessing data are the essential steps toward processing data in a big data analytics architecture, ultimately yielding valuable insight that improves the manufacturing process. When the data is clean, shares a common language and is sourced throughout the supply chain, the result is actionable intelligence that enhances overall product quality.