Manufacturers throughout industry are getting good results with remote diagnostics, monitoring the health of their machinery on the go and freeing their technicians from traveling back and forth to assess problems. But if you’re considering implementing a remote diagnostics system of your own, make sure to do your homework and develop a sound plan first.
“Start by defining the business problem that you’re trying to solve, and work your way down to the information that you want to gather,” advises Doug Weber, business manager for remote application monitoring services at Rockwell Automation. “Where is the data, and at what rate and volume do you need to collect it in order to accomplish the business objective?”
The initial assessment should also identify the people involved, employees and contractors alike. “Who should receive this information, and how should it be acted upon?” asks Nick Sandoval, field application engineer at Moxa. These questions are important because success is a function not only of the technical aspects of the project, but also of the ease with which the people receiving the information can interpret it.
The implementation plan should also consider the flow of information and how the system will handle it. “Is data collection going to be an automatic process, a manual one, or a combination of the two?” Sandoval asks. Will the network have a way to relieve any congestion that could occur in the flow of data?
Another key question: Will the remote connection always be on? “Always-on for continuous monitoring is a prerequisite for real-time monitoring and diagnostics,” notes Dan Schaffer, business development manager for networking and security at Phoenix Contact. The alternative is to store data locally and to transmit it periodically to the diagnostics software or its historian.
Even if the plan calls for continuous monitoring, local storage is always an important consideration whenever the remote devices are not wired within the four walls of a facility. “Because you can’t always rely on a constant network connection, you have to ask, what happens when you lose that connection?” says Angela Rapko, commercial product manager for FactoryTalk Vantage Point at Rockwell Automation. “Your overall implementation plan must then include what data you want to store, how long you want to store it, and how you are going to store it.”
It must also outline recovery procedures for when the network comes back online. “You need to throttle in the stored data in a way that allows it to move along with the new data that the device is uploading to the server or Cloud,” Rapko says. You want to avoid bottlenecks.
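To make that pattern concrete, here is a minimal sketch, in Python, of the store-and-forward approach Rapko describes. The names (`StoreAndForwardBuffer`, `uploader`) are hypothetical, not part of any vendor’s product: readings are queued locally while the link is down, then replayed at a throttled rate alongside new data once it returns.

```python
from collections import deque

class StoreAndForwardBuffer:
    """Illustrative sketch only: queue readings locally while the link is
    down, then replay them at a throttled rate after reconnecting."""

    def __init__(self, uploader, max_samples=100_000, replay_per_cycle=50):
        self.uploader = uploader                   # callable that sends one sample upstream
        self.backlog = deque(maxlen=max_samples)   # oldest samples drop first when full
        self.replay_per_cycle = replay_per_cycle   # throttle: backlog samples per cycle

    def publish(self, sample):
        """Try to send a new sample; buffer it if the connection is down."""
        try:
            self.uploader(sample)
        except ConnectionError:
            self.backlog.append(sample)

    def drain(self):
        """Replay a limited slice of the backlog each cycle so recovery
        traffic does not crowd out live readings."""
        for _ in range(min(self.replay_per_cycle, len(self.backlog))):
            sample = self.backlog.popleft()
            try:
                self.uploader(sample)
            except ConnectionError:
                self.backlog.appendleft(sample)    # still down; stop and retry later
                break
```

The bounded queue also answers Rapko’s “how long” question in the crudest possible way: when local storage fills, the oldest data is discarded first.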
“If you want to be able to do more than simply look at the data and want to have the ability to tweak the process, then you also need some bidirectional communication,” Schaffer adds. Such communications require careful thought on who should be able to talk on the diagnostics network. “Should vendors and integrators have the same permissions as your technicians?”
The software question
Another good starting place for implementing remote diagnostics is to identify the software support tools offered by your controls manufacturer, says Ben Orchard, applications engineer at Opto 22. “The steps for implementation can change radically with the capabilities of the process controller and the toolset for that device,” he explains. “Once you know what that toolset is, you can explore how to configure and implement it.”
The choice of remote-diagnostics solution will depend on the type of diagnostics being performed. For example, Belden offers software for network diagnostics. Other vendors offer software that analyzes sensor data to detect the early stages of material fatigue, bearing failure or other developing problems.
In all cases, however, users might want to consider a multi-platform system that can not only be installed on a PC, but also integrated into SCADA software. “Why not be able to show some of that diagnostic data on your HMI?” suggests Sven Burkard, commercial engineering manager at Belden.
He also recommends that the information being sent from plant-floor devices follow an industry-standard protocol, rather than a proprietary one. “Our network management software, for example, uses SNMP for data,” he says. “It can read information from a vast number of devices that communicate via the protocol.” Then, the resident OPC server can convert the SNMP data and deliver it to the SCADA software and HMIs.
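Because SNMP is an open standard, even a short script can read the same counters that such tools monitor. The snippet below is a generic illustration using the open-source pysnmp library, not Belden’s software; the device address, community string and interface index are placeholders.

```python
# Poll one statistic (inbound errors on port 1) from a managed switch over SNMPv2c.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),               # SNMPv2c read community (placeholder)
        UdpTransportTarget(('192.168.1.10', 161)),        # managed switch address (placeholder)
        ContextData(),
        ObjectType(ObjectIdentity('IF-MIB', 'ifInErrors', 1)),
    )
)

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f'{name} = {value}')   # from here, hand off to the OPC server / SCADA layer
```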
Kerry Sparks, senior field marketing specialist for operator interface, logic and communications at Eaton, also thinks that a PC-based system running a good SCADA software package is the way to get cost-effective remote diagnostics. “Modern SCADA software supports all the functionality needed, including the historians and data logging with web-based thin client connections to allow multiple remote connections,” he says.
Sparks suggests selecting visualization software and control systems that have been designed for remote access without requiring a separate Microsoft server-based operating system. “Also look for a system with robust built-in security,” he adds. “That security scheme is the last line of defense against unwanted access. The best packages allow you to tie the SCADA security into the plant security system through an LDAP interface to Microsoft’s Active Directory server.”
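Commercial SCADA packages handle that Active Directory tie-in internally, but as a rough standalone illustration, the open-source ldap3 library can validate a remote user’s credentials and group membership over LDAP. The domain controller, domain and group names below are placeholders.

```python
# Validate a user and check membership in a SCADA remote-access group via LDAP.
from ldap3 import Server, Connection, NTLM

def is_authorized(username: str, password: str) -> bool:
    server = Server('ldaps://dc.example.local')            # domain controller (placeholder)
    conn = Connection(server, user=f'EXAMPLE\\{username}',
                      password=password, authentication=NTLM)
    if not conn.bind():                                     # wrong credentials
        return False
    conn.search(
        'dc=example,dc=local',
        f'(&(sAMAccountName={username})'
        f'(memberOf=cn=scada-remote,ou=groups,dc=example,dc=local))',
        attributes=['cn'],
    )
    authorized = len(conn.entries) > 0                      # must be in the SCADA remote group
    conn.unbind()
    return authorized
```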
Is the infrastructure adequate?
Besides considering the data flow and the software, an implementation plan should also cover the remote-diagnostics infrastructure. Perhaps the first requirement here is to ensure that the remote assets have the sensors, logic and other components necessary for connecting to the system.
All too often, remote assets go about their work in isolation as disconnected devices. “In a lot of organizations, devices weren’t connected to the DCS, for example, unless they were used for control,” explains Marc Leroux, technology evangelist for CPM technologies and R&D at ABB. “This was done primarily because of cost, but it was also a mindset.”
Falling costs and easier connections are changing that, however. More devices are being connected, typically through the process historian. In fact, as Leroux points out, the proliferation of networks and Internet-ready devices these days has been encouraging many users to increase the amount of data that they collect.
Consequently, users must ensure that the bandwidth of their network is adequate for the data traffic. Part of the calculus is deciding whether the system will be running entirely on a desktop computer connected to a hardwired network, or whether it will use a mobile or cellular connection. “Cellular networks may suffer from signal-strength problems, and wireless networks tend to get overloaded quickly,” Leroux says.
Some experts are advising future-proofing the backbone of your network by specifying a high-quality Ethernet cable with gigabit transmission rates. “Ten years can bring a lot of change,” Burkard notes. “Today, 100 Mb may be sufficient, but what’s tomorrow going to bring?” Not only do the higher data transmission rates ameliorate concerns over capacity in the future, but Ethernet’s ability to run multiple protocols also offers flexibility.
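A quick back-of-envelope estimate shows why that headroom matters. Every figure in the sketch below is an illustrative assumption; substitute your own device counts, tag counts, sample sizes and rates.

```python
# Back-of-envelope bandwidth estimate; all figures are illustrative assumptions.
devices = 200            # connected assets
tags_per_device = 50     # monitored values per asset
bytes_per_sample = 16    # value + timestamp + quality, roughly
sample_rate_hz = 1       # one reading per tag per second
overhead = 2.0           # protocol framing, headers, retransmits (rough factor)

bits_per_second = (devices * tags_per_device * bytes_per_sample
                   * 8 * sample_rate_hz * overhead)
print(f"{bits_per_second / 1e6:.1f} Mb/s sustained")   # ~2.6 Mb/s with these numbers
```

At these assumptions a 100 Mb link looks comfortable, but doubling the tag count and moving to 10 Hz sampling already pushes the estimate past 50 Mb/s.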
Even so, it’s important to strike a balance with user needs. “Bandwidth is a key component, obviously,” Leroux explains, “but the infrastructure should be designed to match the use of the information. A well-designed system should automatically scale the data to what can be used and then provide the drill-down capability if more is needed.”
More data is not always the answer, however. “The downside of bringing more data into systems is that often there is no good reason for doing so, other than it is easy,” Leroux says. Such data lacks context, he adds. “And a lot of data without context is noise.”
For this reason, he and others advocate developing user interfaces that present the necessary information in a format that provides this context. “This should include alarm and event histories, data trends with history, I/O screens, and graphics that effectively give both local and remote users a clear view of the state of the process or machine,” Sparks says.
Secure those connections
Besides providing a robust interface for extracting actionable diagnostics from the network, the architecture must also control who and what can upload data to it. For this, Sparks recommends a VPN so that server or router firewalls can limit access to the control system. Users, however, must not only create the white and black lists used by these firewalls, but also be sure to maintain them by adding and removing remote users as people and equipment come and go. The firewalls, moreover, should expose only those ports that are necessary for remote access.
Keep in mind that a variety of technologies can be classified as VPNs. “Not all VPN technologies are capable of the multiple-network hop that is required in many industrial facilities,” notes Perry Nordh, product manager for advanced control, optimization and monitoring at Honeywell Process Solutions. This is especially true in facilities that have implemented comprehensive security with both corporate firewalls and firewalls around each manufacturing network.
Nordh stresses the importance of protecting the various levels of the manufacturing network with appropriate security. “Implementation at any manufacturing site,” he says, “should include a protected corporate network firewalled from the Internet, a DMZ network firewalled from the corporate network, and a process control network firewalled from the DMZ network.”
Segmentation need not end there. As Moxa’s Sandoval explains, users can subdivide their manufacturing networks into smaller segments based on the application at hand. These segments interact with one another by passing information through VPNs and firewalls, creating more layers of defense within the manufacturing networks themselves.
One of these layers can be the remote diagnostics apparatus. “For a site to be secure, the different network segments need to be isolated,” Sandoval says. “Anything mission-critical, like a remote I/O that is part of a life-safety system, needs to be protected with a firewall at least. Best practice is to secure the remote I/O.”
Another means of building security into networks is to use controllers that have two Ethernet ports. Controllers from Opto 22 feature this configuration. Because each port is wired separately and has separate logic, users assign a separate IP address to each port. “This construction mitigates the need for adding components like DMZs, firewalls or bridges,” Orchard explains. “You can plug one port into the corporate network and the other into the control network to segment them and automatically build in some security.”
Though some manufacturing facilities might need VPNs to allow technicians to communicate directly with controllers through firewalls, others might prefer to use data logs instead as a kind of buffer in their diagnostic work. “This data can be sent to a central location, such as a Cloud-based server, with the ability to review and analyze it from anywhere,” says Daymon Thompson, TwinCAT product specialist at Beckhoff Automation.
This tactic enhances security by limiting access to production PLCs and by relying on a standard protocol, such as OPC UA, designed with security in mind.
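As a rough sketch of what such read-only access can look like, the snippet below pulls one logged value over OPC UA using the open-source python-opcua client; the endpoint URL and node id are placeholders, not Beckhoff-specific identifiers.

```python
# Read one value from an OPC UA server without writing anything back to the PLC.
from opcua import Client

client = Client('opc.tcp://edge-gateway.example.local:4840')   # placeholder endpoint
client.connect()
try:
    node = client.get_node('ns=2;s=Line1.Press.BearingTemp')   # placeholder node id
    print(node.get_value())                                    # read-only access
finally:
    client.disconnect()
```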
Diagnostics on the go
Such considerations are growing in importance these days because employees are increasingly using smartphones, tablets and other mobile devices at work. Because these devices give employees a real-time glimpse into operations while on the go, they can boost efficiency and productivity. “So employers are either providing them, subsidizing them, or at a minimum supporting them through some kind of Bring Your Own Device (BYOD) policy,” says Saadi Kermani, SmartGlance product manager at Schneider Electric.
Consider the steel manufacturer currently working with Belden to equip one of its workers with a mobile device to take readings in the plant. Not only are readings automatically entered into the mobile device, but the technician can also tweak process variables on the fly while standing in front of the equipment.
In the past, the technician would have to relay these readings and changes by radio. “There was a chance of something being misinterpreted,” Burkard notes. “In other cases, there was also the possibility of a time lag generated while the worker went back to an HMI or PC terminal to enter the collected information.” Besides expediting data collection, mobile devices can also receive alarms and give technicians access to the critical information they need in order to respond effectively.
“Mobile devices can be extremely useful for orientation to an issue and can enable new modes of collaboration for decision-making, but expectations for controlling or making changes should be weighed carefully,” Nordh cautions. In general, he advocates a read-only policy to avoid security problems.
To strike a balance between maintaining security and giving employees freedom to choose their preferred devices, Kermani advises users to look at some kind of mobile-device management system. He also suggests developing a mobile-device strategy that includes a plan for the increasing adoption of cloud computing and software-as-a-service (SaaS) delivery models for mobile-driven apps.
Looking at the Cloud
Cloud-hosted managed services are another option for plugging remote assets into the network. They are especially helpful when the IT staff is too small to take on the extra burden of installing, administering and maintaining remote diagnostics.
Honda Manufacturing of the Americas is using a cloud service to extract information from factory automation at several locations throughout North America. The automaker is doing this through a new industrial Internet of Things service from ILS Technology. “It’s a gateway technology that lives at the edge and has all of the different industrial connectors (drivers and protocols) for PLCs, robot controllers and building power,” says Fred Yentz, ILS Technology’s president and CEO.
The platform connects the various PLCs and sensors on the factory floor, extracts and normalizes the data, and sends it to the appropriate enterprise system. It glues the elements together by acting as a kind of translator between the factory floor and the enterprise system.
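The extract-normalize-forward pattern itself is simple to picture. The sketch below is hypothetical and does not describe ILS Technology’s product; `read_plc_tag()`, the tag names and the cloud endpoint are invented for illustration.

```python
# Hypothetical extract-normalize-forward gateway loop.
import json
import time
import urllib.request

def read_plc_tag(tag: str) -> float:
    """Stand-in for a vendor-specific driver call (simulated value here)."""
    return 42.0

def normalize(site: str, asset: str, tag: str, value: float) -> dict:
    """Map a raw reading into one common schema, whatever the source protocol."""
    return {
        'site': site,
        'asset': asset,
        'tag': tag,
        'value': value,
        'timestamp': time.time(),
    }

def forward(record: dict, url: str = 'https://cloud.example.com/ingest') -> None:
    """POST the normalized record to the enterprise or cloud endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(record).encode(),
        headers={'Content-Type': 'application/json'},
    )
    urllib.request.urlopen(req)

# Example cycle: read, normalize, forward one tag.
forward(normalize('plant-01', 'press-3', 'motor_current', read_plc_tag('motor_current')))
```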
Once the platform brings the data into the cloud, software can perform analytics and generate alarms. “We have the ability to generate dashboards and alerts,” Yentz says. “Or we can connect it to back office applications, such as predictive maintenance tools.”