In the mid-1980s, GM took analog measurements and input parameters from engines, then used a lookup table in a memory chip to yield pre-computed output values. After those simple first designs, electronics became cheaper and more sophisticated, making way for further innovation. Now, even a simple engine designed for industrial use has hundreds of sensors recording thousands of measurements, along with enough memory to store months or even years of data. Until recently, the primary use of these embedded measurements and the associated low-level diagnostics was to identify the root cause of a failure and facilitate the repair process. Essentially, we were asking, “What happened?”
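The lookup-table approach described above can be sketched in a few lines: input parameters are quantized to indices into a precomputed table, and the stored value is returned with no runtime computation. The breakpoints and table values below are hypothetical, purely for illustration.

```python
RPM_STEPS = [1000, 2000, 3000, 4000]   # engine-speed breakpoints (hypothetical)
LOAD_STEPS = [0.2, 0.4, 0.6, 0.8]      # engine-load breakpoints (hypothetical)

# Pre-computed fuel pulse widths in milliseconds, indexed [rpm][load].
# In a real ECU these values would be burned into a memory chip.
FUEL_TABLE = [
    [2.0, 2.4, 2.9, 3.5],
    [2.2, 2.7, 3.3, 4.0],
    [2.5, 3.1, 3.8, 4.6],
    [2.9, 3.6, 4.4, 5.3],
]

def nearest_index(value, steps):
    """Return the index of the breakpoint closest to the measured value."""
    return min(range(len(steps)), key=lambda i: abs(steps[i] - value))

def fuel_pulse_ms(rpm, load):
    """Quantize the inputs and look up the pre-computed output."""
    return FUEL_TABLE[nearest_index(rpm, RPM_STEPS)][nearest_index(load, LOAD_STEPS)]
```

For example, a reading of 2,100 RPM at 45% load snaps to the (2000 RPM, 0.4 load) cell and returns its stored value directly, which is what made the approach feasible on the limited electronics of the era.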
Manufacturers of equipment for medical, high-tech, automotive, and heavy industrial applications began collecting and storing historical data in traditional DBMSs years ago. Analytic models were developed to understand basic early-warning signals and ultimately to predict part failures in advance. These models enable prescriptive actions to be taken before a failure occurs, avoiding costly equipment outages.
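An early-warning signal of the kind described above can be as simple as a rolling average of a sensor stream compared against a threshold learned from historical failure data. This is a minimal, hypothetical sketch; the window size and threshold are invented for illustration, not taken from any real model.

```python
from collections import deque

def early_warnings(readings, window=5, threshold=80.0):
    """Yield the indices at which the rolling mean of the sensor
    readings exceeds the warning threshold (hypothetical values)."""
    buf = deque(maxlen=window)  # keeps only the last `window` readings
    for i, value in enumerate(readings):
        buf.append(value)
        if len(buf) == window and sum(buf) / window > threshold:
            yield i
```

Feeding in a vibration trace that drifts from a healthy baseline toward failure, the generator flags the drift while there is still time to schedule maintenance, which is the essence of turning diagnostic data into prescriptive action.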
Massive amounts of historical equipment-performance data can now be captured and uploaded by the manufacturer, but the cost of loading and storing all of this information in traditional DBMS environments has historically been prohibitive. Often, information generated by field equipment becomes nothing more than data exhaust: never recorded, and lost forever before any timely decision-making can take place.
Enterprises have realized this potential and adopted Apache Hadoop as a go-to technology for capturing, storing, and processing these data sources within a modern data architecture. Data exhaust can now be used to make both routine and critical operational decisions, and it is driving bottom-line value to the business.