Use Sensors to Unlock Value in Connected Systems

Jan. 7, 2025
Sensors’ role in the monitoring, control and optimization of automated equipment takes on new significance in industry’s digital age. However, over-specification and overuse of sensors can create unnecessary costs and generate vast quantities of data that make analysis more complex.

Avoiding the deluge of data created by the deployment of too many sensors requires a more considered approach to sensor selection and integration. Ideally, sensors should deliver just enough functionality to ensure IIoT (Industrial Internet of Things)-enabled manufacturing equipment can perform its intended role efficiently, reliably and cost-effectively within the wider system. 

The following three steps can help engineers make better decisions about the selection and integration of sensors for manufacturing equipment. 

Define the sensor’s objective 

The first step is to establish what decisions or actions a sensor will enable. Typical focal points of sensors for automated manufacturing equipment include efficiency, product loss, product quality, general usage insights and indicators of equipment degradation or malfunction. For instance, a sensor might indicate when maintenance is required or monitor efficiency levels and recommend adjustments to the system. It’s important to determine exactly which factors are of interest and what the sensor needs to convey so that the strategy is laser-focused on gathering useful insights which translate into business value. 

When defining objectives, consider the impact of false positives and false negatives. Sometimes false positives (e.g., early or unnecessary alerts) are acceptable, whereas false negatives (no alert when one is needed) cannot be tolerated. In other applications, false positives may be highly disruptive and must be avoided. Considering this at an early stage enables more informed decisions about the type and combination of sensors to employ.
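To make the trade-off concrete, the short sketch below compares two hypothetical alert thresholds against simulated readings and counts the false positives and false negatives each produces. The data, thresholds and healthy/faulty distributions are illustrative assumptions, not values from any real system.

```python
# Illustrative sketch only: comparing two candidate alert thresholds on
# simulated readings to see how false positives trade against false negatives.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(1.0, 0.3, size=1000)   # readings from healthy operation (assumed)
fault = rng.normal(1.8, 0.4, size=50)      # readings during a developing fault (assumed)

for threshold in (1.3, 1.7):               # hypothetical "sensitive" vs. "conservative" settings
    false_positives = int(np.sum(normal > threshold))   # unnecessary alerts
    false_negatives = int(np.sum(fault <= threshold))   # missed faults
    print(f"threshold {threshold}: {false_positives} false positives, "
          f"{false_negatives} false negatives")
```

A more sensitive threshold raises the number of unnecessary alerts but catches more genuine faults; which side of that trade-off is acceptable depends on the objective defined above.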

Once an objective has been established, the next task is to ascertain the parameters that support it. These will vary according to the science and engineering principles that underpin the wider manufacturing system. A detailed understanding of the equipment and its operational context is essential to right-size the sensor system and ensure only necessary data is collected. Not every piece of information is worth the cost of obtaining it. It’s also important to think about potential interference or confusion during operation. For instance, in a noisy manufacturing environment, it could prove difficult to monitor sound. This might demand a more sophisticated solution that measures specific frequencies, or it may be more effective to focus on a different measurement altogether.

Decide on sensor type

Some metrics have many established ways of sensing, some have only one and some have none. Compare sound (where a microphone is the standard sensor) with position sensing (where there are multiple methods). 

On the other hand, there may be a vital metric that cannot be sensed using off-the-shelf technology, such as certain chemical concentrations. In such cases, it may be beneficial to invest in the development of novel sensor technologies or to adopt lab methods. Mapping the additional R&D costs against potential market gains is a simple but important way to ensure the investment is focused on delivering commercial advantage.

The accompanying chart shows a down-selection process to help determine the ideal sensor type for a given objective. This approach requires broad knowledge of different sensing methods, as well as sensor physics, so that relevant calculations can be performed. The model facilitates methodical assessment of candidate sensors against factors such as ease of implementation, cost, existing IP and robustness. It can quickly narrow the field to one or two sensing concepts for a more detailed lab-based proof-of-concept exploration. If the technology readiness level of an identified approach demands attention, this can become an R&D priority.

In this example, the goal is to detect the position of a metal object. The inductive concept scores highest, followed by capacitive and then mechanical. Capacitive outperforms mechanical here, but if power consumption and cost are the key drivers, those criteria could be given a stronger weighting and the ranking may shift.
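A weighted scoring matrix of this kind is simple to encode. The sketch below mirrors the down-selection described above for the metal-object example; the criteria, weights and 1-5 scores are illustrative assumptions rather than figures taken from the chart.

```python
# Illustrative down-selection matrix: weights and scores are assumed, not measured.
CRITERIA_WEIGHTS = {
    "ease_of_implementation": 0.3,
    "cost": 0.2,
    "existing_ip": 0.2,
    "robustness": 0.2,
    "power_consumption": 0.1,
}

# Candidate concepts scored 1 (poor) to 5 (excellent) against each criterion.
CANDIDATES = {
    "inductive":  {"ease_of_implementation": 4, "cost": 4, "existing_ip": 5, "robustness": 5, "power_consumption": 3},
    "capacitive": {"ease_of_implementation": 4, "cost": 4, "existing_ip": 3, "robustness": 3, "power_consumption": 4},
    "mechanical": {"ease_of_implementation": 3, "cost": 5, "existing_ip": 3, "robustness": 2, "power_consumption": 5},
}

def weighted_score(scores: dict) -> float:
    """Sum of each criterion score multiplied by its weight."""
    return sum(CRITERIA_WEIGHTS[criterion] * score for criterion, score in scores.items())

for name in sorted(CANDIDATES, key=lambda n: weighted_score(CANDIDATES[n]), reverse=True):
    print(f"{name:>10}: {weighted_score(CANDIDATES[name]):.2f}")
```

Increasing the weights on cost and power consumption is a one-line change that can reorder the ranking, which is exactly the sensitivity worth testing before committing to a concept.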

Finalize sensor settings and thresholds  

Whether there’s a single standard approach for sensing (e.g., a microphone) or a couple of options (e.g., inductive and capacitive sensing), the next step is to establish how the measurement is made in practice and how it enables the desired outcome. 

Consider the example of using a microphone to listen to sounds from a piece of mechanical equipment. Sound can be generated at many different frequencies and amplitudes, so experimenting with a lab system that uses high-spec instrumentation can be highly beneficial: reading across a wide frequency range with high fidelity reveals the frequencies of interest.
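As a rough sketch of what that exploration might involve, the code below applies an FFT to a synthetic “recording” and reports the strongest spectral peaks. The sample rate, tone frequencies and noise level are all assumptions standing in for real instrumentation data.

```python
# Illustrative sketch: inspecting a recording's spectrum to find frequencies of interest.
import numpy as np

SAMPLE_RATE = 48_000                      # Hz, assumed instrumentation sampling rate
t = np.arange(0, 2.0, 1 / SAMPLE_RATE)    # two seconds of signal

# Synthetic recording: a strong 1.2 kHz tone, a weak 7.5 kHz component and
# broadband background noise (all assumed for illustration).
signal = (1.0 * np.sin(2 * np.pi * 1200 * t)
          + 0.2 * np.sin(2 * np.pi * 7500 * t)
          + 0.5 * np.random.default_rng(1).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / SAMPLE_RATE)

# Report the strongest peaks as candidate frequencies of interest.
for i in np.argsort(spectrum)[-5:][::-1]:
    print(f"{freqs[i]:8.1f} Hz  amplitude {spectrum[i]:.0f}")
```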

Exposing the system to possible confounding factors is also critical during this stage. For example, when listening to sound, it is important to understand what other sounds may be generated in the manufacturing environment and how they affect the system. It might be that audible frequencies are saturated by extraneous noise, but ultrasonic frequencies are unaffected. Similarly, when investigating competing approaches for sensing the position of a metal object, capacitive solutions may give false positives when touched by an operator’s hand while inductive solutions do not.

Lab-based concept development generates a significant amount of data which needs to be distilled before a solution can be delivered. To continue with the sound and position examples, this might involve setting the correct frequencies or inductive thresholds respectively. It’s an important step which ensures sensors are not over-specified, meaning costs and data handling/processing requirements are proportionate. 

Simple data analytics can be employed to investigate the data gathered by lab instrumentation. When listening to noise with a microphone, dimensionality reduction helps to pinpoint the smallest number of frequencies that provide the largest amount of relevant information. This can be achieved using principal component analysis or linear discriminant analysis. Alternatively, classification algorithms such as decision trees might be used to identify the parameters that provide the most useful information. 
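A minimal sketch of that step, assuming scikit-learn is available, is shown below: PCA is applied to synthetic spectra in which only two frequency bands actually vary with machine condition, and the loadings of the first component point back to those bands. The data shape and band indices are illustrative assumptions.

```python
# Illustrative sketch: using PCA loadings to find the most informative frequency bands.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# 200 lab captures x 64 frequency bands (assumed). Bands 10 and 42 are made to
# vary with a hidden machine condition; the rest are noise.
n_captures, n_bands = 200, 64
condition = rng.uniform(0, 1, size=n_captures)
spectra = rng.normal(0, 0.1, size=(n_captures, n_bands))
spectra[:, 10] += 2.0 * condition
spectra[:, 42] += 1.5 * condition

pca = PCA(n_components=2).fit(spectra)
loadings = np.abs(pca.components_[0])

print("Variance explained by first component:", round(pca.explained_variance_ratio_[0], 2))
print("Most informative frequency bands:", np.argsort(loadings)[-2:][::-1])
```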

Simple classification algorithms can also be used to set thresholds. Equipment monitoring systems generally classify a sensor reading as normal or abnormal, with abnormal readings creating an alarm or triggering a recommended action. The key is to choose a system that provides the necessary insight but also balances false positives and false negatives appropriately. 
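One way to do this, sketched below under assumed data and class weights, is to let a depth-one decision tree learn the cut-off: a single split is effectively a threshold, and weighting the abnormal class more heavily pushes that threshold toward fewer missed faults.

```python
# Illustrative sketch: a one-split decision tree as a learned alarm threshold,
# with class weights biasing it against missed faults. Data and weights assumed.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Feature: a single distilled reading (e.g., amplitude in a chosen frequency band).
normal = rng.normal(1.0, 0.3, size=(1000, 1))
fault = rng.normal(1.8, 0.4, size=(50, 1))
X = np.vstack([normal, fault])
y = np.array([0] * len(normal) + [1] * len(fault))   # 0 = normal, 1 = abnormal

tree = DecisionTreeClassifier(max_depth=1, class_weight={0: 1, 1: 20}).fit(X, y)
print(f"Learned alarm threshold: {tree.tree_.threshold[0]:.2f}")
```

Lowering or raising the weight on the abnormal class shifts the learned threshold, giving a direct handle on the balance between false positives and false negatives.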

Automating sensor data analysis 

It is worth considering the role that AI and/or machine learning can play in the interpretation of sensor data. Following are five AI applications that have proven to add value to sensor data:

  • Data processing and analysis: AI algorithms can quickly process large volumes of data, identifying patterns and correlations that might be missed by human analysis (see the sketch after this list). 
  • Predictive analytics: Transformer models surpass previous methods for predicting behavior, and generative systems add to this functionality. These models can go beyond detecting normal vs. abnormal conditions to provide early warning of specific events. 
  • Large language models (LLMs): LLMs (like many of the AI copilot technologies that have recently been introduced) can enable straightforward investigation of data, using questions like: “We had a power cut last night, was this reflected in any of our data?” 
  • Visualization tools: AI-powered tools can create intuitive graphs and charts for complex data. This can be especially powerful when paired with LLMs. 
  • Error reduction: In the lab, automation of repetitive tasks and data entry can reduce the risk of human error, ensuring more accurate and reliable results, which go on to form the foundation of in-operation algorithms. 
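As a concrete illustration of the first item in the list, the sketch below runs an off-the-shelf anomaly detector over a synthetic block of multi-channel sensor data and flags the unusual rows. The data, the injected disturbance and the choice of IsolationForest are all assumptions made for illustration.

```python
# Illustrative sketch: unsupervised pattern detection across a large block of
# multi-channel sensor data. Data and model choice are assumed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(4)

# 10,000 time steps x 4 sensor channels of routine operation, with a short
# disturbance injected around step 6,000.
readings = rng.normal(0, 1, size=(10_000, 4))
readings[6000:6020] += np.array([4.0, 0.0, 3.0, 0.0])

model = IsolationForest(contamination=0.005, random_state=0).fit(readings)
flags = model.predict(readings)                     # -1 marks anomalous rows
print("Flagged time steps:", np.where(flags == -1)[0][:10], "...")
```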

To be clear, many data analysis algorithms do not require AI/ML, but there are situations when the development and upkeep costs can be justified. There are also ways that AI/ML can speed up laboratory analysis even if they are not part of the final system. Before implementing these technologies, the benefits should be clearly laid out and the costs justified.

Dan Spencer is R&D Consultant at Sagentia Innovation.
