Generative AI Slow Rolls into Industry

Nov. 14, 2024
Immature data strategies and concerns about security currently limit LLM use in manufacturing, but that’s poised to change as the technology advances rapidly and more tools from automation suppliers become available.

ChatGPT and other AI chatbots have become a regular sidekick for office workers seeking a leg up crafting business correspondence or for developers as an accelerant when writing code. Yet in the industrial space, chatbots and their large language model (LLM) technology are still far from ubiquitous, surfacing primarily in pilot projects aimed at boosting operator productivity and problem solving.

As deep learning models, LLMs employ natural language processing to generate human-like responses to queries and tasks. They are trained on extremely large data sets and are built on neural network technology known as transformer models. In the business sector, LLMs are being rolled out to generate text and visual content, uplevel chatbots for more personalized customer service and interactions, and automate tasks.

OpenAI CEO Sam Altman has claimed that 92% of Fortune 500 companies are using ChatGPT, but that platform and other LLMs are charting a slower path into the industrial landscape. While there is mounting enthusiasm for LLMs as an anchor for tailored systems in areas like predictive maintenance, computer vision quality inspection, production planning assistants and operator guidance, large-scale implementations are still rare and an array of issues still stands in the way of active deployment.

The data obstacle

Perhaps the biggest barrier to widespread LLM use in industry is the current state of industrial data. Most manufacturers are sitting on a mountain of siloed data, much of it time series, that is not properly contextualized, standardized, scalable or even available for LLM use cases and model training. Immature data management processes, the ongoing OT/IT divide, and the lack of real understanding of how and where LLMs can make a mark are among the many inhibitors limiting industrial activity to one-off pilot projects and experimentation.

“A lot of initiatives are in the experimentation phase because they haven’t really laid the full foundation from the OT level up,” said Travis Cox, chief technology evangelist for Inductive Automation. “They haven’t made the cultural change [to] where data is fundamental to the business so they can go to the next level. It’s also not a technology problem, it’s more of a people, process problem now.”

The open nature of public LLMs also creates additional data security and customer privacy concerns and complications. To get the desired result, public LLMs need data—lots and lots of data typically fed to the model by the user looking for answers and action. Yet most manufacturers are extremely wary about putting their own proprietary manufacturing, product and consumer data up for consumption by public LLMs for model training in the cloud. Even those manufacturers fully on board with generative AI’s promise seem partial to building use cases around private LLMs, but they typically lack the resources and AI expertise to navigate a project of this scale and complexity.

“Manufacturers are trying to implement LLMs in certain places today, but it’s not large-scale and certainly not everywhere,” said Erik Lindhjem, vice president and general manager of the reliability solutions business at Emerson. “There’s more focus now on how to train private LLMs with priority data to use in different scenarios.”


Industrial virtual assistants dominate

As in the business sector, most of the early industrial use case examples of LLMs are focused on virtual assistants to provide guidance to plant floor operators or to give control engineers a head start writing PLC code. The ease with which GenAI can help synthesize real-time information or provide coding assistance is crucial for today’s manufacturers given current challenges attracting and retaining plant floor talent.

“LLMs can help operators handle complexity as plants try to produce more in a safe environment with less people,” said Claudio Fayad, vice president of technology for Emerson’s process systems and solutions business.

Fayad pointed to Aspen Tech’s Aspen Virtual Advisor (AVA) for DMC3 as an example of putting LLMs to work in this kind of use case. The solution augments operator knowledge with real-time insights into advanced process control hardware, presenting as a chat window where technicians can ask questions that probe constraints on particular assets or get instant guidance on recommendations for increasing plant floor throughput, Fayad explained. Emerson now owns a 55% stake in Aspen Tech, and AVA will be offered to support Emerson process control hardware.

“We’ve been using help files in neural nets to help operators make decisions, and now LLMs are making that learning capability better,” Fayad said.

Emerson also sees potential for LLMs in plant modernization efforts, serving as a code assist tool as part of its DeltaV Revamp, a cloud-based technology that manages the transition of legacy control applications to the DeltaV distributed control system. LLMs facilitate the process of understanding the old code base and converting it to the language of the modern control platform.

“Just like LLMs can understand Spanish and convert it to English, they can be leveraged to accelerate the conversion of a Honeywell code base to a DeltaV code base for rapid systems deployment,” Lindhjem explained.

Other automation providers are testing the power of LLMs for virtual assistants. Aveva’s Industrial AI Assistant, for instance, enables operators to ask questions like “What was my maximum output last month?” or “Why is my compressor less efficient this week?” and get answers and context that go far beyond what would be possible sifting through spreadsheets and documents, and far faster than a complex data analysis.

Aveva’s patent-pending knowledge linking technology (considered an AI-driven knowledge graph) pulls in time-series data and documents from various relevant data sources, automatically creating relationships across structured and unstructured data without the need for a data hierarchy or model, according to Jim Chappell, vice president and global head of AI and advanced analytics at Aveva. Data security and AI hallucinations are tackled using an AI orchestrator, which includes guardrails like intent analysis, prompt optimization and response formatting, among other capabilities, while also breaking questions down into individual sub-queries to keep responses more grounded, Chappell explained.
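The orchestrator pattern Chappell describes can be illustrated with a minimal sketch. This is a hypothetical toy, not Aveva's actual implementation: the intent keywords, routing labels and decomposition rule are all invented to show the shape of the idea, i.e., a compound question is split into sub-queries, each of which passes through a guardrail before being routed to a data source.

```python
# Hypothetical sketch of an AI-orchestrator flow: decompose a compound
# operator question into sub-queries and apply an intent guardrail to each.
# Keywords and routing labels are invented for illustration.

def classify_intent(question: str) -> str:
    """Toy guardrail: only let plant-data questions through."""
    allowed = ("output", "efficien", "compressor", "alarm", "throughput")
    return "plant_query" if any(k in question.lower() for k in allowed) else "rejected"

def decompose(question: str) -> list[str]:
    """Split a compound question at sentence boundaries so each
    sub-query can be grounded against its own data source."""
    parts = question.replace("?", "?|").split("|")
    return [p.strip() for p in parts if p.strip()]

def orchestrate(question: str) -> list[tuple[str, str]]:
    """Route each guarded sub-query; reject anything out of scope."""
    results = []
    for sub in decompose(question):
        if classify_intent(sub) == "plant_query":
            results.append((sub, "route to time-series / document retriever"))
        else:
            results.append((sub, "out of scope"))  # guardrail response
    return results

answers = orchestrate(
    "What was my maximum output last month? "
    "Why is my compressor less efficient this week?"
)
```

A real orchestrator would of course use an LLM for intent analysis and decomposition, but the control flow (decompose, guard, route, then assemble) is the grounding mechanism the article describes.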

Unlike a traditional approach, which would require familiarity with cloud platforms, sensors, IoT, AI and different types of time-series and engineering data, an LLM-based offering lets engineers facilitate data queries, visualizations and workflows without the complexity. “You can ask questions as a subject matter expert contemporary and don’t need to know the software,” Chappell said. “You just ask questions and it gives you the information you need.”

Siemens and Beckhoff Automation are harnessing the power of LLMs to simplify and accelerate the programming of automated systems and controls. Beckhoff’s TwinCAT Chat Client leverages ChatGPT to automate code creation, code optimization, code restructuring and code documentation, enabling engineers to write higher quality code faster. It connects to the LLM’s host cloud (in this case, Microsoft Azure for ChatGPT), letting TwinCAT developers ask questions to generate HMI controls and establish links to the PLC. “The LLM is contextualized with our documentation so the code it spits out has the knowledge of best practices, our documentation, APIs and equipment,” said Brandon Stiffler, software product manager at Beckhoff.
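Contextualizing an LLM with vendor documentation, as Stiffler describes, is commonly done by retrieving relevant doc snippets and prepending them to the prompt. The sketch below shows that pattern in miniature; the documentation store, retrieval rule and prompt shape are invented for illustration and are not Beckhoff's actual mechanism.

```python
# Illustrative sketch of grounding a code-generation prompt in vendor
# documentation (retrieval-augmented prompting). The doc snippets and
# prompt format here are hypothetical, not TwinCAT's real internals.

DOCS = {
    "timer": "Use TON from Tc2_Standard; declare IN and PT, read Q and ET.",
    "hmi": "Bind HMI controls to PLC symbols via ADS symbol paths.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval over the documentation store; a real
    system would use embeddings and semantic search."""
    return [text for key, text in DOCS.items() if key in query.lower()]

def build_prompt(request: str) -> str:
    """Prepend retrieved best-practice snippets so generated code
    reflects vendor documentation rather than generic patterns."""
    context = "\n".join(retrieve(request)) or "No matching documentation."
    return (
        f"Context:\n{context}\n\n"
        f"Task: {request}\n"
        "Generate IEC 61131-3 structured text (ST) code."
    )

prompt = build_prompt("Create a timer that delays a motor start by 5 seconds")
```

The assembled prompt would then be sent to the hosted model; because the context travels with every request, the model's output stays anchored to the documented APIs instead of whatever it saw in pretraining.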

For its part, Siemens’ Industrial Copilot, developed using the Azure OpenAI Service in Microsoft’s Azure Cloud, is connected to Siemens Totally Integrated Automation (TIA) Portal, enabling engineering teams to quickly get help as well as generate basic visualization and structured control language (SCL) code for PLCs. Siemens Industrial Copilot explains SCL code blocks and creates machine and plant visualization in the WinCC Unified visualization platform, reducing the time and effort for PLC programming while minimizing errors.

“Given the lack of skilled labor, you need to be able to train quickly on how to program automation hardware, and GenAI really speeds up that process,” said Kristen Quasey, product marketing manager for industrial PCs and Industrial Copilot at Siemens.

Automation vendors are not the only avenue for LLM-enabled tools. Control system integrators such as Northwind Technical Services are taking the opportunity to offer generative AI-based virtual assistant functionality to their customers to address what Matt Lueger, executive vice president, called their biggest problem: the shortage of skilled labor.

“The 30-year veterans who own their knowledge base are leaving the workforce just as they are putting in more sophisticated systems and more automation,” Lueger said. “They are also having a hard time hiring young technical people to maintain their ongoing systems. GenAI could be the tool to help close that gap.”

Northwind developed PlantIQ, a digital expert built on an LLM and trained specifically to understand manufacturing processes through connections to relevant documentation and real-time process data. The accompanying AlarmIQ module has a specific focus on PLC and SCADA system alerts and alarms, ensuring quicker resolution of system faults through delivery of detailed process information, analysis of historical faults and documented control system service tickets. With this technology, new operators are afforded the benefits of domain expertise typically held by a seasoned process engineer, allowing them to come up to speed more quickly than with traditional training.
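The core move behind a tool like AlarmIQ, i.e., matching a live alarm against documented historical fixes, can be sketched in a few lines. This is a loose illustration of the idea, not Northwind's implementation; the ticket records and word-overlap scoring are stand-ins for what would realistically be an LLM-backed retrieval step over real service tickets.

```python
# Hypothetical sketch of surfacing historical fixes for a new alarm by
# matching it against documented service tickets. Records and scoring
# are invented; real systems would use semantic retrieval, not word overlap.

HISTORICAL_TICKETS = [
    {"alarm": "PUMP-101 high vibration",
     "fix": "Replaced worn bearing; rebalanced impeller."},
    {"alarm": "TANK-7 level transmitter fault",
     "fix": "Recalibrated transmitter; cleaned probe."},
]

def overlap_score(a: str, b: str) -> int:
    """Count shared words between two alarm texts."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def suggest_fix(alarm: str) -> str:
    """Return the documented fix from the closest historical ticket."""
    best = max(HISTORICAL_TICKETS, key=lambda t: overlap_score(alarm, t["alarm"]))
    return best["fix"]

suggestion = suggest_fix("PUMP-101 high vibration detected on startup")
```

Pairing a retrieval step like this with live process data and an LLM to explain the match is what lets a new operator act on institutional knowledge that would otherwise live only in a veteran's head.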

LLM’s future role

As LLM-enabled virtual assistants evolve, capabilities will go beyond real-time guidance and insights to having the assistant automatically execute a specific task. Take control system design and configuration, for instance. This is typically a heavy lift for process controls engineers, but LLMs could be used to accelerate the design of HMI graphics as well as the configuration of control systems.

Aveva sees a progression where a user won’t just ask questions of the LLM-enabled virtual assistant, but rather instruct it to perform a specific task like “build me a dashboard,” whether initiated through written commands or conversational interaction.

“We are looking at delivering new types of user experiences for our software,” Chappell explained. “Instead of learning how to use different types of software, you ask the assistant to do it and it will. It provides a great starting point for the average user across different functionalities.”

Taking the time to understand where and how LLMs and GenAI can add business value is crucial to success. It’s also important to frame virtual assistant output as a starting point, not an end point, at least with the current generation of products. “Keeping a human in the loop is critical to build trust in the answers,” said Fayad. “It’s also important to bring in data in a contextualized way so AI can make sense of it. If it’s just numerical data and not relationships, it’s not useful.”

About the Author

Beth Stackpole | Contributing Editor, Automation World

Beth Stackpole is a veteran journalist covering the intersection of business and technology, from the early days of personal computing to the modern era of digital transformation. As a contributing editor to Automation World, Beth's coverage traverses a range of industries and technologies, including AI/machine learning, analytics, automation hardware and software, cloud, security, edge computing, and supply chain. In addition to her high-tech and business journalism work, Beth writes an array of custom editorial content and thought leadership pieces.
