Some years ago, we implemented a downtime reporting and analysis system as a “give away” on a packaging line. This was before the term overall equipment effectiveness (OEE) had been coined. We all got together to set the rules that would define availability and reliability (two older power-generation terms we borrowed). Availability was defined as the number of times the line came online without a hitch compared with the number of times it was requested to run. Reliability was defined as the amount of time the line remained online and producing after a start compared with the total available running time.
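For concreteness, here is a minimal sketch of those two ratios in Python. The function names and the example week’s numbers are illustrative assumptions, not figures from the actual line.

```python
# A minimal sketch of the two metrics as we defined them. Names and
# example numbers are hypothetical, not from the actual system.

def availability(successful_starts: int, requested_starts: int) -> float:
    """Fraction of run requests where the line came online without a hitch."""
    return successful_starts / requested_starts if requested_starts else 0.0

def reliability(time_producing_s: float, available_run_time_s: float) -> float:
    """Fraction of available running time the line stayed online and producing."""
    return time_producing_s / available_run_time_s if available_run_time_s else 0.0

if __name__ == "__main__":
    # Hypothetical week: 40 start requests, 36 clean starts;
    # 80 available hours, 68 hours actually producing.
    print(f"Availability: {availability(36, 40):.1%}")                # 90.0%
    print(f"Reliability:  {reliability(68 * 3600, 80 * 3600):.1%}")   # 85.0%
```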
This particular packaging line had five machines in series to make the container, fill it, close it and put it in a case. We set up our rules and began collecting data for the line. After a week or so, we looked at the report and were alarmed at how inefficient one of the machines seemed to be. In fact, it was so inefficient that it seemed unlikely that our production numbers could be true.
Well, if you’re going to have a reporting system, you certainly need one you believe. We all scratched our heads and looked at the alarm log to see what had been tripping the machine so frequently, and noticed that the e-stop was the single biggest culprit. To better understand what was happening, we decided to unobtrusively observe the line and learn what was behind all these e-stops.
As it turned out, the machine in question was responsible for closing the top of the filled container before sending it to the case packer. The case packer also happened to be within line of sight of the “top closer” machine. The operator would notice when he was far enough ahead of the case packer (there was an accumulating conveyor between them), at which time he would press the e-stop, open the enclosure doors to the machine, and clean the glue rolls. Then he would bring his machine back online before the case packer missed a cycle.
We had determined that one of the “downtime” alarms would be an e-stop accompanied by the machine’s safety doors being opened. So we filtered the data further to get a true picture: we monitored the case packer, and if it was not waiting for product while the “top closer” was getting its glue rolls cleaned, there was no penalty. That improved the downtime reporting considerably.
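Here is a hedged sketch of that refined rule in Python. The signal names, event fields and overlap test are assumptions made for illustration; the real system’s tags and logic were different.

```python
# Sketch: an e-stop with the safety doors open on the top closer counts as
# chargeable downtime only if the case packer was actually starved during
# that window. All field and machine names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class StopEvent:
    machine: str
    e_stop: bool
    doors_open: bool
    start_s: float   # stop window start, seconds into the shift
    end_s: float     # stop window end

def case_packer_starved(starved_windows: list[tuple[float, float]],
                        start_s: float, end_s: float) -> bool:
    """True if any case-packer starvation window overlaps the stop window."""
    return any(s < end_s and e > start_s for s, e in starved_windows)

def chargeable_downtime(ev: StopEvent,
                        starved_windows: list[tuple[float, float]]) -> bool:
    # Glue-roll cleaning looks like e-stop + doors open on the top closer;
    # penalize it only when it starved the downstream case packer.
    if ev.machine == "top_closer" and ev.e_stop and ev.doors_open:
        return case_packer_starved(starved_windows, ev.start_s, ev.end_s)
    return ev.e_stop  # other e-stops count as before

# Example: cleaning from t=100..160 s while the case packer never starved.
ev = StopEvent("top_closer", e_stop=True, doors_open=True, start_s=100, end_s=160)
print(chargeable_downtime(ev, starved_windows=[]))  # False -> no penalty
```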
It’s pretty easy to determine whether a machine is up and running. The real challenge is determining why it has stopped. I’m a big fan of working hard to capture the data through rule sets rather than having an operator pick from a list on a monitor or some other operator-dependent category selection. I’ve seen operations and maintenance bonus packages tied to line performance where operations would select the reason for a stoppage, and after the shift the maintenance department would run the report and “adjust” it to “more accurately” reflect what had happened. If you can’t believe the data, why bother with the report?
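To make the rule-set idea concrete, here is a small hypothetical sketch of automated reason capture; the signal names and reason codes are invented for illustration, not taken from any real controller.

```python
# Sketch: ordered rules evaluate a snapshot of machine signals at the moment
# of the stop and assign the reason code automatically, so no operator (or
# after-shift "adjustment") picks the category. All names are assumptions.

RULES = [
    # (reason code, predicate over the signal snapshot)
    ("guard_open_e_stop",  lambda s: s["e_stop"] and s["doors_open"]),
    ("upstream_starved",   lambda s: not s["infeed_present"]),
    ("downstream_blocked", lambda s: s["discharge_backed_up"]),
    ("fault",              lambda s: s["fault_code"] != 0),
]

def classify_stop(signals: dict) -> str:
    for reason, predicate in RULES:
        if predicate(signals):
            return reason
    return "unclassified"  # a flag to review the rule set, not a manual override

snapshot = {"e_stop": True, "doors_open": True, "infeed_present": True,
            "discharge_backed_up": False, "fault_code": 0}
print(classify_stop(snapshot))  # guard_open_e_stop
```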
The better approach is to work hard at applying the proper rules to capture the data through automation, and then implement change control to add or modify those rules as required to improve the quality of the data being collected. Formalize that process. Formalizing it does slow things down, but it also stabilizes the rule set and makes the results more trustworthy.
Even with today’s tools for OEE and other metric reporting, you still have to validate the data you’re collecting so you can have faith in it as you explore your information for improvement opportunities.
We also had to do some adjustments to properly measure “give away,” but that’s for another time.
Ray Bachelor is chairman of the board at Bachelor Controls Inc., a certified member of the Control System Integrators Association (CSIA). For more information about Bachelor Controls, visit its profile on the Industrial Automation Exchange.