For industrial companies engaged in digital transformation, analytics are key to turning large volumes of data into business value to enhance operations and improve the customer experience. Facing intense financial pressure and competition in rapidly changing global markets, companies need to think very carefully about where that data is and how best to leverage it. In some instances, data and analytics need to be processed centrally—such as in a cloud—to drive strategic decisions. In other situations, operational decisions need to be made instantaneously, and centralized solutions cannot deliver the analysis quickly enough.
Decentralized analytics—otherwise known as edge analytics or edge computing—occur at or near the edge of the operational network. This is quite common in some consumer-facing industries. However, until recently, analytics at the industrial edge wasn’t possible due to a mix of cost, complexity, security and technology barriers.
That is changing. Digitization is occurring in all industrial environments. In brownfield infrastructure, intelligence is being added via devices such as sensors and gateways. In new infrastructure, we’re seeing digitization through embedded software and preconfigured intelligent equipment.
As this change has taken place, ARC has observed the market focus swinging away from centralized Big Data and analytics toward edge data management and analytics. This makes sense to some degree, as the growth of edge devices for the Internet of Things (IoT) and their related data has skyrocketed, and will continue to do so.
However, edge analytics that rely too heavily on data generated only by equipment and devices overlook some of the most valuable data and insights available to industrial companies: operational data, a portion of which is also generated at or near the operational edge, plus process knowledge.
Cloud and edge redefine analytics
In industrial settings, hierarchical structures have traditionally been employed to capture, access and communicate data across an organization. Operations personnel, whether in a field environment or on the plant floor, can certainly attest to the processes and technology designed to capture, share and use that data. But limits on its use have been considerable, constrained at times by business silos and technology.
This data structure precedes the Internet. As the Internet becomes a ubiquitous part of business and operating environments, this traditional data structure is being replaced.
Organizations are now beginning to see the value of a more comprehensive view of data and analysis. This improved view includes centralized processing, such as in the cloud (or even on premises on a server), and extends seamlessly to and from the operational edge.
As business leaders wrestle with the data explosion, they see cloud computing as the solution for associated volume, speed and complexity issues.
The cloud brings massive computational power to bear, providing a viable way to combine large, complex data sets, both structured and unstructured, with advanced analytics techniques.
Examples include applying machine learning to acoustic data to predict asset failure, integrating text analysis for process optimization, or using image analysis for product assurance.
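As a rough illustration of the first example, the sketch below (with made-up data, feature choices and model) extracts a few coarse spectral features from acoustic clips and trains an off-the-shelf classifier to score how closely a new recording resembles past pre-failure sound signatures. It is a minimal sketch of the technique, not a production recipe.

```python
# Hypothetical sketch: scoring acoustic clips for failure risk.
# Data, features and model choices are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SAMPLE_RATE = 10_000  # assumed samples per second for each clip

def spectral_features(clip):
    """Summarize an acoustic clip with a few coarse spectral statistics."""
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), d=1.0 / SAMPLE_RATE)
    total = spectrum.sum() + 1e-12
    centroid = (freqs * spectrum).sum() / total        # where the energy is concentrated
    high_band = spectrum[freqs > 2_000].sum() / total  # share of high-frequency energy
    return [clip.std(), centroid, high_band]

# Simulated history: label 1 marks clips recorded shortly before a failure.
rng = np.random.default_rng(0)
labels = np.array([0] * 60 + [1] * 60)
clips = [rng.normal(0, 1.0 + lab, SAMPLE_RATE) for lab in labels]  # failing assets are "noisier" here

X = np.array([spectral_features(c) for c in clips])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

new_clip = rng.normal(0, 2.0, SAMPLE_RATE)  # an incoming recording from the plant floor
risk = model.predict_proba([spectral_features(new_clip)])[0][1]
print(f"estimated failure risk: {risk:.2f}")
```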
In reaction to the growth of the cloud, the edge of an organization has come to be defined as the farthest extension of a business's operating environment, whether physical infrastructure, distributed operational points or customer engagements.
Edge analytics extend data processing and computing to or near the data sources, which include equipment and devices. In industrial operations, analytics executed at the edge typically support tactical use cases: improving efficiency, reliability, safety and the customer experience, and reducing unplanned downtime.
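The sketch below suggests what such a tactical edge analytic can look like: a hypothetical gateway maintains a rolling baseline for a single sensor and raises a local alert when a reading drifts well outside it, so the response does not wait on a round trip to the cloud. The window size, threshold and readings are illustrative assumptions.

```python
# Hypothetical sketch of a tactical edge analytic: a gateway keeps a rolling
# baseline for one sensor and flags readings that drift far from it, so the
# local controller can react without a round trip to a central system.
from collections import deque
import statistics

class EdgeMonitor:
    def __init__(self, window=100, threshold=3.0):
        self.readings = deque(maxlen=window)  # recent raw values only; nothing stored long-term
        self.threshold = threshold            # alert when a value is this many std devs from the mean

    def ingest(self, value):
        """Return True if the reading looks anomalous against the rolling baseline."""
        anomalous = False
        if len(self.readings) >= 10:  # wait for a minimal baseline
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings) or 1e-9
            anomalous = abs(value - mean) > self.threshold * stdev
        self.readings.append(value)
        return anomalous

monitor = EdgeMonitor()
for reading in [20.1, 20.3, 19.8, 20.0, 20.2, 19.9, 20.1, 20.0, 20.3, 19.7, 35.4]:
    if monitor.ingest(reading):
        print("local alert: abnormal reading", reading)  # act locally, forward only a summary upstream
```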
Often overlooked IIoT elements
When thinking about the data for edge analytics, a common misperception is that it consists only of streaming data, time-stamped at the input source, often referred to as Industrial Internet of Things (IIoT) data. The thinking here is that a combination of connectivity, automation, edge analysis and workflow automation is key to getting value from that data.
Though true, this paints only a portion of the picture within the context of IIoT strategies. What’s missing is an understanding of the value of operational processes and their related data, some of which may be generated at the edge. Because these data are often generated and captured by subject matter experts (SMEs), they typically contain high-value information.
Operational data, particularly those generated at the edge, are often underutilized, if used at all. Unless a formal process exists, these data are rarely systematized into a source that makes them available as part of the overall pool of operational data.
In addition to operational data, SMEs understand (and often design) operational processes and best practices. These high-value workers have specific knowledge of how to operate equipment, execute maintenance and follow safety procedures. For example, crude oil engineers have expertise in how different crude types contribute to equipment failure during the refining process. This intellectual property is invaluable, of course, and organizations fear it will leave the business as workers retire or move on.
Technologies are now available that can mathematically model and capture that expertise as part of the analytics. In doing so, this process knowledge can be augmented with operational and IIoT data. This blending of knowledge and data can be used to drive the optimized decision flows and equipment performance necessary for maximizing IIoT strategies.
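A highly simplified way to picture that blending is sketched below: a hand-authored rule standing in for SME knowledge (here, an assumed inlet-temperature limit by crude type) is evaluated alongside a data-driven anomaly score, and the two are combined into a single risk figure. The rule, limits and weights are invented for illustration; real implementations would be considerably richer.

```python
# Hypothetical sketch of blending SME process knowledge with IIoT data.
# The rule, limits, weights and scores below are illustrative assumptions.
def sme_rule_risk(crude_type, inlet_temp_c):
    """Encodes a made-up expert heuristic: heavy sour crude tolerates less heat."""
    limits = {"light_sweet": 370, "heavy_sour": 340}     # assumed SME-provided limits, deg C
    margin = limits.get(crude_type, 355) - inlet_temp_c
    return max(0.0, min(1.0, (25 - margin) / 25))        # 0 = comfortable margin, 1 = at or over limit

def data_driven_risk(vibration_score):
    """Placeholder for a model output already scaled to 0..1 (e.g., from an anomaly detector)."""
    return max(0.0, min(1.0, vibration_score))

def blended_risk(crude_type, inlet_temp_c, vibration_score, rule_weight=0.6):
    """Weighted blend of expert rule and data-driven score; the weights could themselves be learned."""
    return (rule_weight * sme_rule_risk(crude_type, inlet_temp_c)
            + (1 - rule_weight) * data_driven_risk(vibration_score))

print(f"blended risk: {blended_risk('heavy_sour', inlet_temp_c=338, vibration_score=0.4):.2f}")
```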
>>Michael Guilfoyle, [email protected], is director of research at ARC Advisory Group. His expertise is in analysis, positioning and strategy development for companies facing transformational market drivers. At ARC, he applies his expertise to developments related to IIoT and advanced analytics, including machine learning.