The Vision for Robotic Vision

May 10, 2017
Advances in technology and declining costs are making vision systems more accessible than ever for manufacturers to use in new robotics applications.

Even as robots have gotten smaller, smarter and more collaborative, robotic vision capabilities have been restricted mainly to bin picking and part alignment. But technological improvements and lower costs have made vision systems more accessible than ever, for both robot suppliers and the manufacturers using robots in production, opening the door to new applications.

Not only are vision-assisted robots taking on new manufacturing jobs such as quality control, precision manufacturing and product sorting, they’re also playing a role in human-robot collaboration. In the not-too-distant future, vision could aid in machine learning for applications that call for robotic intelligence.

As is the case with most other technologies, the cost of vision systems has declined even as their capabilities have greatly improved, thanks largely to the increased processing power made possible by Moore’s Law, says Brandon Treece, senior product marketing manager at National Instruments (NI).

“The computation power required to analyze images is processing-intensive,” he says, explaining that computer chips have become much more capable without becoming much more costly. Along with increased processing power, quality improvements have been made to field-programmable gate arrays (FPGAs), graphics processing units (GPUs) and the cameras themselves—all integral parts of robotic vision systems.

The FPGA is a reprogrammable integrated circuit (IC) configured for a particular vision use—essentially hardware programmed to act like software. With the help of tools from NI and others, FPGAs can now be set up and programmed onsite by technicians who aren’t vision experts. That wasn’t the case in the past, and the cost of on-staff vision experts kept many companies from adopting the technology, Treece notes.

Straight to software
Another recent vision-system boost: Images can be processed faster than ever because of a change in the way they are loaded into software, according to Jerry Leitz, director of field engineering at software maker IntervalZero.

Traditionally, cameras captured and sent images via a frame grabber to a computer equipped with software that interpreted the visual data. The software was then able to determine if a part was in the correct spot based on part size, placement and other factors, Leitz explains. Now, GigE Vision—an interface standard for industrial cameras—can be used to transmit video and control data over an Ethernet network directly to the computer software. “GigE is faster than a frame grabber; images go straight to the PC,” Leitz says. “So the trend now is to eliminate the frame grabber.”
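
To make that pipeline concrete, here is a minimal sketch in Python with OpenCV of the frame-grabber-free flow Leitz describes: frames land directly in software, which then judges part size and placement. The acquisition call is a stub for whatever GigE Vision/GenICam SDK the camera vendor supplies, and all tolerances are invented for illustration.

```python
# Sketch of a GigE-style pipeline: frames arrive in software directly,
# with no frame grabber in between. grab_frame() is a placeholder for
# the camera vendor's GigE Vision / GenICam acquisition call.
import cv2
import numpy as np

EXPECTED_AREA_PX = 12_000        # assumed nominal part size, in pixels
AREA_TOLERANCE = 0.10            # +/-10 percent -- illustrative value
TARGET_CENTER = (320, 240)       # assumed correct part position
MAX_OFFSET_PX = 15               # assumed allowed placement error

def grab_frame() -> np.ndarray:
    """Stand-in for a GigE Vision acquisition call (vendor SDK)."""
    raise NotImplementedError("replace with your camera SDK's grab call")

def part_in_correct_spot(gray: np.ndarray) -> bool:
    # Segment the part from the background and take the largest blob.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    part = max(contours, key=cv2.contourArea)

    # Check size against the expected area.
    area = cv2.contourArea(part)
    if abs(area - EXPECTED_AREA_PX) > AREA_TOLERANCE * EXPECTED_AREA_PX:
        return False

    # Check placement against the target position.
    m = cv2.moments(part)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    offset = np.hypot(cx - TARGET_CENTER[0], cy - TARGET_CENTER[1])
    return offset <= MAX_OFFSET_PX

# Typical loop: pull frames straight off the network and check each part.
# while True:
#     ok = part_in_correct_spot(cv2.cvtColor(grab_frame(), cv2.COLOR_BGR2GRAY))
```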

Though manufacturers have had access to GigE for about five years, adoption has been slow. “It’s not fully there yet,” Leitz says.

Some compute-intensive vision applications still call for a frame grabber, but GigE Vision has opened up hundreds of vision applications that turn Windows into a real-time operating system.

Vision dictates robotic action
Traditional use of robotic vision has mainly involved finding a target to build on, such as a printed circuit board (PCB), or picking parts from bins and reorienting them, says Keith Vozel, software product manager at Yaskawa America’s Motoman Robotics Division. But use has been expanding as vision becomes cheaper and more accessible.

That includes letting a robot decide what it will do. “Rather than presenting information to a robot controller and telling the robot what to do, vision systems can make those decisions,” Vozel says.

As an example, one of IntervalZero’s customers relies on robotic vision in a recycling application to automatically sort unwanted items. “In a recycling system, you have material of all different shapes and sizes coming down a conveyor very quickly, and you’re continuously acquiring images of it,” Leitz says.

In this case, the vision system is programmed to pick the items from the line based on their particular shape, size and color. “So if there’s an object that’s 2 inches by 2 inches by 5 inches and if it’s the color the system is looking for, the system knows where it’s located on the conveyor belt,” Leitz explains. “They have a bank of air nozzles mounted a foot away, and the air nozzle automatically turns on to blow a blast of air at that piece of material and blow it off the belt into a receptacle.”
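
A toy version of that sort-by-shape, size and color logic might look like the sketch below, in Python with OpenCV. The color band, size window and nozzle geometry are assumptions made for illustration, and fire_nozzle stands in for the digital output that actually drives an air valve.

```python
# Toy recycling-sorter logic: find objects of a target color and size in a
# conveyor image and decide which air nozzle should fire. Color band, size
# window and nozzle layout are illustrative assumptions.
import cv2
import numpy as np

TARGET_HSV_LO = np.array([100, 80, 80])   # assumed target color band (blue-ish)
TARGET_HSV_HI = np.array([130, 255, 255])
MIN_AREA, MAX_AREA = 2_000, 20_000        # assumed acceptable blob size, px
NOZZLE_PITCH_PX = 64                      # assumed nozzle spacing across the belt

def fire_nozzle(index: int) -> None:
    """Stand-in for the digital output that opens one air valve."""
    print(f"nozzle {index}: blast")

def sort_frame(bgr: np.ndarray) -> None:
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, TARGET_HSV_LO, TARGET_HSV_HI)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if not MIN_AREA <= cv2.contourArea(c) <= MAX_AREA:
            continue                      # wrong size: leave it on the belt
        x, y, w, h = cv2.boundingRect(c)
        # Map the blob's lateral position to the nearest nozzle in the bank.
        fire_nozzle((x + w // 2) // NOZZLE_PITCH_PX)
```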

The recycling function builds on work done in 2009 at Carnegie Mellon University (CMU). Researchers there, along with collaborators from Intel Research Pittsburgh, developed a system that merges information from several images in order to create a 3D model. By focusing on features like corners or textured areas, the object-recognition algorithm spots a particular object within a pile of clutter.

When it finds enough matches between features, the algorithm identifies the object, demonstrating that vision-assisted robots can be used to sort through a number of objects that don’t resemble one another and pick out a target, says Alvaro Collet Romea, a Ph.D. student at CMU’s Robotics Institute, who led the research.

By looking for features of an object rather than the whole object, the vision system recognizes objects faster than those that rely on more traditional algorithms, Romea notes. The system can even identify and pick up objects that are partially obscured.
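
The CMU code itself isn't published here, but the feature-matching idea can be sketched with off-the-shelf OpenCV building blocks: detect local features on a reference image of the object and on the cluttered scene, match them, and declare a detection when enough matches survive a ratio test. ORB and the thresholds below are stand-ins, not the researchers' actual algorithm.

```python
# Feature-based recognition in the spirit of the CMU work described above:
# match local features (corners, textured patches) between a reference image
# of the object and a cluttered scene, and report a detection when enough
# matches agree. ORB and both thresholds are illustrative stand-ins.
import cv2

MIN_GOOD_MATCHES = 25     # assumed detection threshold
RATIO = 0.75              # Lowe-style ratio test

def object_present(reference_bgr, scene_bgr) -> bool:
    orb = cv2.ORB_create(nfeatures=1000)
    gray_ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    gray_scene = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2GRAY)
    kp1, des1 = orb.detectAndCompute(gray_ref, None)
    kp2, des2 = orb.detectAndCompute(gray_scene, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < RATIO * p[1].distance]
    # Enough consistent feature matches => the object is in the pile,
    # even if parts of it are occluded.
    return len(good) >= MIN_GOOD_MATCHES
```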

Another IntervalZero customer uses vision to count pills. The pills fall “like a waterfall” past cameras that continuously acquire and analyze the images in real time. The system automatically indexes the conveyor forward once the count of fallen capsules reaches a set target, ensuring that each bottle contains the same number of pills.
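
A simplified version of that counting loop might look like this, assuming a thin counting band near the bottom of the frame and that each capsule appears in the band in exactly one frame; a production system would track blobs between frames instead.

```python
# Simplified pill-counting loop: threshold each grayscale frame, count blobs
# inside a thin counting band, and index the conveyor when the running total
# hits the bottle's target. Assumes each pill shows up in the band in exactly
# one frame -- a real system would track blobs across frames.
import cv2

PILLS_PER_BOTTLE = 100             # assumed fill count
BAND_TOP, BAND_BOTTOM = 440, 460   # assumed counting band, in pixel rows
MIN_PILL_AREA = 50                 # assumed minimum blob size, px

def advance_conveyor() -> None:
    """Stand-in for the motion command that indexes the next bottle in."""
    print("bottle full -- indexing conveyor")

def count_pills(frames) -> None:
    total = 0
    for frame in frames:           # frames: iterable of grayscale images
        band = frame[BAND_TOP:BAND_BOTTOM, :]
        _, mask = cv2.threshold(band, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        # Label 0 is the background; count sufficiently large blobs only.
        total += sum(1 for i in range(1, n)
                     if stats[i, cv2.CC_STAT_AREA] >= MIN_PILL_AREA)
        if total >= PILLS_PER_BOTTLE:
            advance_conveyor()
            total = 0
```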

Vision is also stepping up robotics in precision manufacturing—which requires higher accuracy and tighter tolerances. Smartphone manufacturing, for example, can demand that displays be placed within 5-10 μm of their targeted location, says John Petry, director of global solutions marketing at Cognex, which makes machine vision systems. The vision software enables that kind of accuracy and can also ensure that any number of manufacturing lines work in parallel—performing the same job with the same level of accuracy, he adds.

Calibration software, for its part, corrects for camera lens or perspective distortion within the vision system and also connects camera and robot “so you always know where the robot is with respect to the part,” Petry says.
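
In OpenCV terms, Petry's two calibration jobs map onto standard calls: undoing lens distortion with intrinsics from a prior calibration, and mapping pixels into robot coordinates, approximated here with a planar homography built from a few known correspondences. Every number below is invented for illustration.

```python
# Sketch of the two calibration jobs described above, using OpenCV:
# (1) undo lens distortion with intrinsics from a prior cv2.calibrateCamera
# run, and (2) map undistorted pixel coordinates into robot coordinates via
# a planar homography. All numbers here are placeholders for illustration.
import cv2
import numpy as np

# Intrinsics and distortion coefficients as produced by cv2.calibrateCamera
# on checkerboard images (placeholder values).
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])

# Four known pixel <-> robot-plane correspondences (e.g. touched-off points).
pix = np.float32([[100, 100], [1180, 100], [1180, 860], [100, 860]])
robot_mm = np.float32([[0, 0], [300, 0], [300, 200], [0, 200]])
H, _ = cv2.findHomography(pix, robot_mm)

def pixel_to_robot(u: float, v: float):
    # Remove lens distortion first, then apply the plane-to-plane mapping,
    # so the robot "always knows where it is with respect to the part."
    pt = np.float32([[[u, v]]])
    undist = cv2.undistortPoints(pt, K, dist, P=K)   # back to pixel units
    xy = cv2.perspectiveTransform(undist, H)
    return float(xy[0, 0, 0]), float(xy[0, 0, 1])
```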

Working together
Meanwhile, cameras continue to make their way into robots themselves, particularly into collaborative robots—a new breed of robots that can work directly alongside humans, with built-in safety systems that automatically stop the robot arm’s operation if it encounters objects or people while moving. Collaborative robots will have a significant impact on the manufacturing and construction industries in the years to come, says David Thomasson, principal research engineer at Autodesk.

In contrast to traditional industrial robots, which stay hardwired inside safety enclosures, collaborative robots from Universal Robots, for example, can be moved from site to site within a factory. They can also be reprogrammed—often by the person who had been doing the job the robot is set to take over—within minutes, says Scott Mabie, general manager of Universal Robots’ Americas Division.

Similarly, collaborative robots from Rethink Robotics are trained by on-site manufacturing staff to complete a job. The trainer moves the robot into various positions and demonstrates the tasks to be carried out. On-board software, teamed with the vision system, allows the robots to learn those tasks, says Jim Lawton, the robot maker’s chief product and marketing officer.

Rethink Robotics’ Baxter and Sawyer robots include cameras embedded in their heads and arms. Sawyer also includes integrated lighting. “We learned how important lighting was, because a task that would work with a robot trained in the morning wouldn’t work when the light changed or the sun went down,” Lawton says.

The embedded camera allows the robot to read barcodes, locate and pick parts off a tray or conveyor, and inspect parts. The robots can recognize a part and then automatically call up the proper inspection sequence to carry out.
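
Conceptually, that recognize-then-inspect step is a dispatch table keyed on whatever the camera reads. The sketch below stubs out the barcode decode (a library such as pyzbar could fill that in); the part numbers and inspection routines are hypothetical.

```python
# Barcode-driven inspection dispatch, in the spirit described above: decode a
# code from the camera image, then look up and run the matching inspection
# sequence. read_barcode() is a stub (a library such as pyzbar could decode
# real codes); the part numbers and routines are invented for illustration.

def inspect_bracket(frame):
    return True   # placeholder: a real check might verify hole positions

def inspect_housing(frame):
    return True   # placeholder: a real check might compare to a template

INSPECTIONS = {
    "PN-1001": inspect_bracket,   # hypothetical part numbers
    "PN-2002": inspect_housing,
}

def read_barcode(frame):
    """Stand-in for a real decoder (e.g. pyzbar's decode())."""
    raise NotImplementedError("replace with a barcode-decoding library")

def handle_part(frame):
    code = read_barcode(frame)
    routine = INSPECTIONS.get(code)
    if routine is None:
        print(f"unknown part {code!r}: route to manual inspection")
    else:
        print(f"{code}: {'pass' if routine(frame) else 'fail'}")
```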

Those who train the robot and program their vision systems don’t need previous vision expertise, Lawton adds. “That takes the hard part out of vision for companies that can’t afford to invest in cameras and hire people to program them for them.”

What’s next?
IntervalZero’s Leitz foresees robotic vision systems being used for safety as robots and humans begin working closer together on the manufacturing floor. “If an operator’s hand got in the way of something on the conveyor, the robot would see that right away and immediately stop the machine,” he says.
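
A minimal version of that check, assuming a fixed camera, a predefined keep-out zone over the conveyor and background subtraction to flag intrusions, might look like the following; emergency_stop stands in for the actual interlock, which in practice would be certified safety hardware rather than a vision script alone.

```python
# Minimal intrusion check for a fixed camera over a conveyor: background
# subtraction inside a keep-out zone, with an immediate stop when enough
# foreground shows up. Zone and threshold are illustrative; a real safety
# function would run on certified, safety-rated hardware.
import cv2
import numpy as np

ZONE = (slice(200, 400), slice(300, 700))   # assumed keep-out region (rows, cols)
TRIGGER_PIXELS = 1500                       # assumed sensitivity

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

def emergency_stop() -> None:
    """Stand-in for the safety interlock that halts the robot."""
    print("intrusion detected -- stopping robot")

def check_frame(bgr: np.ndarray) -> None:
    fg = subtractor.apply(bgr)              # 255 = moving foreground
    intruding = int(np.count_nonzero(fg[ZONE] == 255))
    if intruding > TRIGGER_PIXELS:
        emergency_stop()
```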

Collaborative robots are learning to do their jobs better thanks to artificial intelligence built into the software that drives them, Lawton says. “Historically, robots take on average about 300 hours to program and they’re not really learning. There’s complex if-this-then-that trees that are built into them,” he says. “A robot is a big bucket of sensors, like visual sensors. If you could pull all that sensor information into an analytical engine, then it could gain insight into the task and could improve its own performance based on insights shared in the cloud with other robots doing similar tasks.”

Lawton also cites work going on at vendors and academic institutions that would allow robots to pull visual data stored in the cloud to work with something they’ve never seen before. “Between AI, the cloud and vision, the robot could find out how to use the tool and perform a task better than it could have,” Lawton says.

Leitz says manufacturers will also be able to call upon visual data stored in the cloud to find each piece a robot worked on—or saw—to trace a problem that might have happened on a particular date, for example, or to prove the part functioned correctly when the robot tested it.

One thing is certain: Vendors and manufacturers continue to push the vision envelope, knowing the outlook for vision is only getting brighter.
