
Realizing the Promise of Industrial Analytics

Ravi Mulugu, Venture Capital Investor, Underwriters Laboratories

The volume, velocity, and variety of data generated in asset-intensive industries are mind-boggling. Unfortunately, according to many market research analysts, 80 percent of that data goes unused. McKinsey & Company estimates that 99 percent of data generated in the oil and gas industry goes unused. A lack of systems that present a unified view of asset operations, and a lack of skilled employees who can leverage data tools, are the primary reasons data gets buried in siloed storage systems as soon as it is generated. Asking an under-resourced data science team to tackle the data deluge on its own is not a long-term solution. In my view, both technological tools and a skilled workforce are needed to address this challenge. The tools available to analyze data should be intuitive enough that employees in departments ranging from engineering to finance embrace them and build applications and dashboards to solve operational issues.

A few years ago, when the concepts of machine learning and AI were first entering the industrial domain, many believed that the proprietary deep learning algorithms developed by researchers held answers to all their operational questions. However, as time progressed, people came to realize that although the algorithms are capable of finding patterns in data, they are not yet smart enough to deliver reliable answers on their own.

There are many reasons for this. First, data integrity can be weak due to bad sensors, missing data, or misconfiguration, which results in many false positives. Second, there is a shortage of labeled data for algorithm training, which is ironic in industries that generate vast amounts of data every hour: before algorithms can be trained, someone has to painstakingly identify and label the failure cases or patterns. Third, many AI models are still black boxes and are not explainable. Users cannot trace or interpret the logic behind an algorithm's recommendation.
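To make the first point concrete, here is a minimal sketch, assuming pandas and hypothetical tag names and thresholds, of the kind of integrity checks that catch missing readings, out-of-range values, and frozen sensors before they reach a model and trigger false positives. Real historian data, units, and limits will of course differ.

import pandas as pd

def flag_suspect_readings(df: pd.DataFrame, tag: str,
                          valid_range=(0.0, 150.0),
                          max_flatline: int = 30) -> pd.DataFrame:
    """Mark readings that are missing, out of range, or 'stuck' (flatlined),
    so they can be excluded before an anomaly model ever sees them."""
    out = df.copy()
    out["missing"] = out[tag].isna()
    out["out_of_range"] = ~out[tag].between(*valid_range)
    # A sensor repeating the exact same value for many samples is often frozen.
    run_length = out[tag].groupby((out[tag] != out[tag].shift()).cumsum()).transform("size")
    out["flatlined"] = run_length >= max_flatline
    out["suspect"] = out[["missing", "out_of_range", "flatlined"]].any(axis=1)
    return out

# Hypothetical usage:
# readings = flag_suspect_readings(history, tag="turbine_exhaust_temp_C")
# clean = readings.loc[~readings["suspect"]]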

So, leaving decision making to an unreliable black-box AI model in mission-critical environments is dangerous. The right approach, in my opinion, is to empower a domain or subject matter expert with these analytical tools. That enables them to identify the right-quality data and features, train the algorithms on them, develop a reasonably accurate prediction model, verify its behavior as new data arrives, and only then move it into production. When it comes to predicting the failure of a gas turbine component, who is better suited to answer: the engineer who designed the component, or an algorithm previously used to classify cats and dogs?
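As a rough illustration of that expert-in-the-loop gate, the sketch below (assuming scikit-learn and hypothetical feature and label arrays) trains a failure-prediction model on expert-labeled history, checks its behavior on a more recent window of operating data, and only flags it as ready for production when precision and recall clear thresholds the domain expert has set.

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

def train_and_gate(X_train, y_train, X_recent, y_recent,
                   min_precision=0.9, min_recall=0.7):
    """Fit a failure-prediction model on expert-labeled data and refuse to
    promote it unless it still performs on recent, unseen operating data."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    preds = model.predict(X_recent)
    precision = precision_score(y_recent, preds, zero_division=0)
    recall = recall_score(y_recent, preds, zero_division=0)

    ready = precision >= min_precision and recall >= min_recall
    return model, {"precision": precision, "recall": recall, "deploy": ready}

The thresholds, features, and model family here are placeholders; the point is that the go/no-go decision stays with the expert, not the algorithm.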

We need analytical tools that people from all backgrounds across the enterprise can use without prior knowledge of data science. Users should be able to easily connect to the data historian and other databases, access an asset, system, or process model (i.e., a digital twin) created by others, test a hypothesis, identify improvement opportunities or early signs of failure, and make decisions. Time to tangible financial impact should be a crucial factor when selecting tool vendors and prioritizing analytics projects. Organizations should not embark on fancy data science projects that are technologically stimulating but under-deliver on meaningful outcomes. Only by doing so can organizations move from pilot fatigue to realizing the benefits of a scalable, enterprise-wide analytics solution.