The chart at left shows a down-selection process to help determine the ideal sensor type for a given objective. This approach requires broad knowledge of different sensing methods as well as sensor physics so that the relevant calculations can be performed. The model facilitates methodical assessment of candidate sensors, encompassing factors such as ease of implementation, cost, existing IP, and robustness. It can quickly narrow the field to one or two sensing concepts for more detailed lab-based proof-of-concept exploration. If the technology readiness level of an identified approach demands attention, maturing it can become an R&D priority.
In this example, the goal is to detect the position of a metal object. The inductive concept scores highest, followed by capacitive and then mechanical. Capacitive outperforms mechanical here, but if power consumption and cost were the key drivers, those criteria could be given a stronger weighting and the ranking might change.
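To make the weighting concrete, the sketch below shows one way such a scoring matrix might be implemented in Python. The criteria, weights, and 1-to-5 scores are illustrative placeholders rather than values from the example above; the calculation is simply a weighted sum per candidate concept.

```python
import numpy as np

# Hypothetical criteria and weights for down-selecting a position sensor.
criteria = ["ease of implementation", "cost", "existing IP", "robustness", "power consumption"]
weights = np.array([0.25, 0.20, 0.15, 0.25, 0.15])  # weights sum to 1

# Illustrative scores (1 = poor, 5 = excellent) for each candidate concept.
scores = {
    "inductive":  np.array([4, 3, 4, 5, 4]),
    "capacitive": np.array([4, 4, 3, 3, 4]),
    "mechanical": np.array([3, 5, 2, 3, 5]),
}

# A weighted sum gives a single figure of merit per concept.
ranking = sorted(
    ((concept, float(weights @ s)) for concept, s in scores.items()),
    key=lambda item: item[1],
    reverse=True,
)
for concept, total in ranking:
    print(f"{concept:10s} {total:.2f}")
```

Changing the weights (for example, emphasizing cost and power consumption) and re-running the ranking is a quick way to test how sensitive the outcome is to the drivers chosen.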
Finalize sensor settings and thresholds
Whether there’s a single standard approach for sensing (e.g., a microphone) or a couple of options (e.g., inductive and capacitive sensing), the next step is to establish how the measurement is made in practice and how it enables the desired outcome.
Consider the example of using a microphone to listen to sounds from a piece of mechanical equipment. Sound can be generated at many different frequencies and amplitudes, so experimenting with a lab system that uses high-spec instrumentation can be highly beneficial: reading across a wide frequency range with high fidelity reveals the frequencies of interest.
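As a rough illustration of that kind of lab exploration, the sketch below computes a magnitude spectrum from a synthetic microphone capture with NumPy and reports the strongest peaks. The sample rate, tone frequencies, and noise level are all invented for the example; in practice the signal would come from the lab data-acquisition system.

```python
import numpy as np

# Hypothetical lab capture: a microphone signal sampled at 96 kHz.
# Here we synthesize tones at 1.2 kHz and 28 kHz plus broadband noise.
fs = 96_000                       # sample rate in Hz
t = np.arange(0, 2.0, 1 / fs)     # 2 seconds of data
signal = (0.8 * np.sin(2 * np.pi * 1_200 * t)
          + 0.3 * np.sin(2 * np.pi * 28_000 * t)
          + 0.1 * np.random.randn(t.size))

# Magnitude spectrum via FFT; only positive frequencies are of interest.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Report the strongest peaks as candidate frequencies of interest.
top = np.argsort(spectrum)[-5:][::-1]
for idx in top:
    print(f"{freqs[idx]:8.1f} Hz  amplitude {spectrum[idx]:.3f}")
```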
Exposing the system to possible confounding factors is also critical during this stage. For example, when listening to sound, it is important to understand what other sounds may be generated in the manufacturing environment and how they affect the system. It might be that audible frequencies are saturated by extraneous noise, but ultrasonic frequencies are unaffected. Similarly, when investigating competing approaches for sensing the position of a metal object, capacitive solutions may give false positives when touched by an operator’s hand, whereas inductive solutions do not.
Lab-based concept development generates a significant amount of data, which needs to be distilled before a solution can be delivered. To continue with the sound and position examples, this might involve selecting the relevant frequencies or setting the inductive thresholds, respectively. It is an important step that ensures sensors are not over-specified, keeping costs and data handling/processing requirements proportionate.
Simple data analytics can be employed to investigate the data gathered by lab instrumentation. When listening to noise with a microphone, dimensionality reduction helps to pinpoint the smallest number of frequencies that provide the largest amount of relevant information. This can be achieved using principal component analysis or linear discriminant analysis. Alternatively, classification algorithms such as decision trees might be used to identify the parameters that provide the most useful information.
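The sketch below illustrates both ideas on a hypothetical dataset of band-power features using scikit-learn: PCA to gauge how few components carry most of the variance, and a shallow decision tree to rank individual frequency bands by importance. The data, labels, and "informative band" are fabricated purely to make the example run.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

# Hypothetical lab dataset: each row is one recording, each column the
# band power at one candidate frequency; labels mark whether the
# equipment was running normally (0) or in a fault condition (1).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))       # 200 recordings, 40 frequency bands
y = rng.integers(0, 2, size=200)     # illustrative normal/abnormal labels
X[y == 1, 7] += 2.0                  # make band 7 informative for faults

# PCA shows how many components are needed to explain most of the variance.
pca = PCA(n_components=5).fit(X)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))

# A decision tree ranks individual frequency bands by how useful they are
# for separating normal from abnormal readings.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
top_bands = np.argsort(tree.feature_importances_)[::-1][:3]
print("most informative frequency bands:", top_bands)
```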
Simple classification algorithms can also be used to set thresholds. Equipment monitoring systems generally classify a sensor reading as normal or abnormal, with abnormal readings creating an alarm or triggering a recommended action. The key is to choose a system that provides the necessary insight but also balances false positives and false negatives appropriately.
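One simple way to explore that balance is to sweep candidate alarm thresholds over labeled validation readings and count the resulting false positives and false negatives, as in the sketch below. The readings and labels here are synthetic stand-ins for real lab data.

```python
import numpy as np

# Hypothetical validation data: one scalar sensor reading per event, with
# labels marking which events were genuinely abnormal (1) or normal (0).
rng = np.random.default_rng(1)
normal = rng.normal(loc=1.0, scale=0.3, size=500)
abnormal = rng.normal(loc=2.0, scale=0.4, size=50)
readings = np.concatenate([normal, abnormal])
labels = np.concatenate([np.zeros(500), np.ones(50)])

# Sweep candidate alarm thresholds and count false positives / negatives.
for threshold in np.linspace(1.0, 2.0, 11):
    alarms = readings > threshold
    false_pos = np.sum(alarms & (labels == 0))
    false_neg = np.sum(~alarms & (labels == 1))
    print(f"threshold {threshold:.1f}: "
          f"{false_pos:3d} false positives, {false_neg:2d} false negatives")
```

Which point on that trade-off is acceptable depends on the application: a missed fault may be far more costly than a nuisance alarm, or vice versa.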
Automating sensor data analysis
It is worth considering the role that AI and/or machine learning can play in the interpretation of sensor data. Following are five AI applications that have proven to add value to sensor data:
- Data processing and analysis: AI algorithms can quickly process large volumes of data, identifying patterns and correlations that might be missed by human analysis (a simple sketch of this kind of pattern detection follows this list).
- Predictive analytics: Transformer models have surpassed previous methods for predicting behavior, and generative systems extend this functionality further. These models can go beyond detecting normal vs. abnormal readings to provide early warning of specific events.
- Large language models (LLMs): LLMs (like many of the AI copilot technologies that have recently been introduced) can enable straightforward investigation of data, using questions like: “We had a power cut last night, was this reflected in any of our data?”
- Visualization tools: AI-powered tools can create intuitive graphs and charts for complex data. This can be especially powerful when paired with LLMs.
- Error reduction: In the lab, automation of repetitive tasks and data entry can reduce the risk of human error, ensuring more accurate and reliable results, which go on to form the foundation of in-operation algorithms.
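As a minimal sketch of the unsupervised pattern detection mentioned in the first bullet, the example below fits an isolation forest (scikit-learn) to hypothetical multivariate equipment readings and flags the ones that do not fit the learned pattern. The sensor channels and fault-like values are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical stream of multivariate sensor readings (e.g., vibration,
# temperature, current) gathered from a piece of equipment.
rng = np.random.default_rng(2)
normal_ops = rng.normal(loc=[0.5, 40.0, 3.0], scale=[0.05, 1.0, 0.2], size=(1000, 3))
fault_like = rng.normal(loc=[0.9, 55.0, 4.5], scale=[0.05, 1.0, 0.2], size=(10, 3))
readings = np.vstack([normal_ops, fault_like])

# An isolation forest flags readings that do not fit the learned pattern,
# without needing the anomalies to be labeled in advance.
model = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = model.predict(readings)          # -1 = anomalous, +1 = normal
print("readings flagged as anomalous:", np.where(flags == -1)[0])
```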
To be clear, many data analysis algorithms do not require AI/ML, but there are situations when the development and upkeep costs can be justified. There are also ways that AI/ML can speed up laboratory analysis even if they are not part of the final system. Before implementing these technologies, the benefits should be clearly laid out and the costs justified.
Dan Spencer is R&D Consultant at Sagentia Innovation.