Manufacturers that demand very short assembly times and high quality need the utmost from both manufacturing equipment and the vision systems that inspect the output. A number of advances in vision systems have made it possible for visual inspections to keep pace with high-speed assembly systems, letting manufacturers significantly improve operations.
“The ability to integrate vision and motion has enabled us to solve applications that we have not been able to solve in the past,” says Matt Wicks, vice president of manufacturing systems product development at Intelligrated. “For example, applications such as vision-guided depalletization leverage advanced vision to identify products as well as their orientations to allow for robotic picking.”
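To make the idea concrete, here is a minimal, hypothetical sketch of the locate-and-orient step Wicks describes: find a carton in a top-down image and report a position and rotation a robot could use to plan a pick. This is not Intelligrated's implementation; it uses OpenCV, and the fixed threshold and single-blob assumption are placeholders.

```python
import cv2
import numpy as np

def find_pick_pose(gray_image):
    """Return (x, y, angle_deg) of the largest bright blob, or None if nothing is found."""
    _, mask = cv2.threshold(gray_image, 128, 255, cv2.THRESH_BINARY)   # placeholder threshold
    # OpenCV 4.x: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    carton = max(contours, key=cv2.contourArea)        # assume the largest blob is the carton
    (cx, cy), _, angle = cv2.minAreaRect(carton)       # center and rotation of the fitted box
    return cx, cy, angle

if __name__ == "__main__":
    # Synthetic top-down "image": a bright rotated rectangle on a dark background
    frame = np.zeros((480, 640), dtype=np.uint8)
    box = cv2.boxPoints(((320, 240), (200, 120), 25.0)).astype(np.int32)
    cv2.fillConvexPoly(frame, box, 255)
    print(find_pick_pose(frame))
```

In practice the pose would be converted from pixel coordinates to robot coordinates through a hand-eye calibration, which is omitted here.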
Gaining these benefits is not a simple task. Integrating high-resolution cameras with complex assembly equipment requires attention to detail. Manufacturers must consider the trade-offs between many technologies.
“Successfully implementing machine vision for high-speed assembly is not just a matter of installing vision components, and should be considered in terms of the whole system integration,” says David Dechow, staff engineer at Fanuc America's Material Handling Segment. “The top three considerations are component advances, such as sensors, illumination, optics, microprocessors; smart cameras and integrated machine vision systems; and high-speed camera interfaces.”
Processor and sensor advances
Advances in processors and image sensors have brought costs and sizes of vision systems down, substantially improved their performance and made systems easier to set up and use.
“This has not only allowed vision systems to keep pace with increasing speeds of assembly systems, but has also enabled proliferation of vision systems on production lines and opened doors to new applications in operations that previously would not have considered installation of vision systems,” says John Agapakis, Americas sales director at Microscan.
Camera resolution continues to improve, helping manufacturers examine more complex products. That capability is being exploited by display manufacturers that make large, high-resolution displays, for example.
“The pixel density of the displays keeps growing, and sensors need to capture all that detail to allow the system to accurately identify any defects within the display pixels, and where those defects are located in the display pixel,” says Antonio Ciccarelli, product marketing manager at On Semiconductor's Image Sensor Group.
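As a rough illustration of that sampling requirement, the sketch below estimates how many camera pixels, and how many camera fields of view, it takes to localize defects within individual display pixels. The panel size, sensor resolution and pixels-per-subpixel figure are assumptions chosen for illustration, not numbers from On Semiconductor.

```python
import math

DISPLAY_H, DISPLAY_V = 3840, 2160        # UHD panel, display pixels
SUBPIXELS_PER_PIXEL = 3                  # R, G, B stripes across each display pixel
CAMERA_PIXELS_PER_SUBPIXEL = 3           # assumed minimum sampling per subpixel

# Camera pixels needed to cover the whole panel at that sampling density
needed_h = DISPLAY_H * SUBPIXELS_PER_PIXEL * CAMERA_PIXELS_PER_SUBPIXEL
needed_v = DISPLAY_V * CAMERA_PIXELS_PER_SUBPIXEL
total_mp = needed_h * needed_v / 1e6

SENSOR_H, SENSOR_V = 6576, 4384          # example 29 MP area sensor

# Number of camera fields of view (tiles) required to image one panel
tiles = math.ceil(needed_h / SENSOR_H) * math.ceil(needed_v / SENSOR_V)

print(f"Pixels required: ~{total_mp:.0f} MP -> {tiles} camera tiles per panel")
```

Under these assumptions a single panel demands roughly 224 MP of image data, which is why sensor resolution and data throughput keep climbing together.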
Higher camera resolutions and faster production speeds are driving many changes in communication links. This is an important factor in complex tasks like checking ultra-high-definition (UHD) televisions with 4K resolution.
“If I now have to inspect twice as many pixels as I had to on the previous-generation TV, in the same amount of time, I will need a faster camera,” says Rusty Ponce de Leon, president of Phase 1 Technology. “If I now have all the speed I need, but can't get all the information down the pipe (communication interface), I defeated the purpose of the faster camera. So I need a larger pipe.”
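A quick back-of-the-envelope calculation makes the point: at a fixed inspection time, the required link bandwidth scales directly with pixel count. The frame rate and bit depth below are assumed values for illustration.

```python
def link_mb_per_s(width, height, fps, bytes_per_pixel=1):
    """Sustained throughput the camera interface must carry, in MB/s."""
    return width * height * fps * bytes_per_pixel / 1e6

hd_rate  = link_mb_per_s(1920, 1080, fps=60)   # previous-generation panel
uhd_rate = link_mb_per_s(3840, 2160, fps=60)   # 4K panel, same inspection cadence

print(f"HD inspection: {hd_rate:.0f} MB/s")
print(f"4K inspection: {uhd_rate:.0f} MB/s ({uhd_rate/hd_rate:.0f}x the pipe)")
```

At 8 bits per pixel and 60 fps, the 4K stream needs roughly 500 MB/s of sustained bandwidth, already beyond what a single Gigabit Ethernet link can carry.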
In addition to the vision links, many cameras now offer LAN connectivity, providing variants of Ethernet and other popular fieldbus network options to help integrators speed setup.
“We're building in standard protocols for major vendors, such as Profibus and Profinet, EtherNet/IP, and several others,” says Rick Roszkowski, senior director of product marketing at Cognex. “That dramatically reduces integration time: under one day instead of weeks.”
Lots of data links
The need for high-performance data channels spurred the creation of many different links for moving all that image data. These links offer a range of performance levels, helping manufacturers match the interface to their production needs.
“Camera Link was the original frame grabber-based system, yielding image throughput rates up to 850 MBps,” says Bob McCurrach, director of standards development at the Automated Imaging Association (AIA). “Two next-generation frame grabber-based systems have been extremely popular in applications such as large flat panel display inspection. Camera Link HS operates at up to 2,100 MBps per cable with CX4 cables and up to 1,200 MBps per cable with fiber cables (SFP+ connector). CoaXPress, operating over coaxial cable, offers speeds up to 600 MBps.”
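Those per-cable figures can be turned into a simple selection aid, sketched below; a real interface choice would also weigh cable length, cost and frame grabber availability. The GigE Vision figure is a commonly cited approximation and is not from McCurrach's quote.

```python
INTERFACE_MBPS = {                         # approximate per-cable throughput, MB/s
    "GigE Vision":                  100,   # ~1 Gb/s Ethernet (assumed figure)
    "CoaXPress (CXP-6)":            600,
    "Camera Link (Full)":           850,
    "Camera Link HS (SFP+ fiber)": 1200,
    "Camera Link HS (CX4)":        2100,
}

def candidate_interfaces(required_mbps):
    """Return interfaces whose single-cable throughput covers the camera's output."""
    return [name for name, rate in sorted(INTERFACE_MBPS.items(), key=lambda kv: kv[1])
            if rate >= required_mbps]

# e.g. the ~500 MB/s 4K stream estimated earlier
print(candidate_interfaces(500))
```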
Those standards focus on high-resolution area cameras, while others target line scan digital cameras. GigE Vision connectivity, based on Gigabit Ethernet technology, makes it easier to deploy line scan cameras, which can help lower costs and simplify systems that capture very high-resolution images.
“For example, the Microscan PanelScan system uses two GigE line scan cameras to create a 150 MP image of printed circuit boards up to 400 x 500 mm in size while also allowing reading of very small codes marked on each board or the inspection of features down to just a few microns,” Agapakis says. “When tied with a multicore PC, where each core can process a section of the image in parallel with the other cores, it leads to much faster overall inspection times even when dealing with such very large images.”
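The multicore approach Agapakis describes can be sketched in a few lines: split one large image into strips and inspect the strips on separate cores. The inspection function here is a stand-in brightness check rather than Microscan's algorithm, and the frame is simulated.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def inspect_strip(strip):
    """Placeholder inspection: count pixels darker than a fixed threshold."""
    return int(np.count_nonzero(strip < 30))

def inspect_image(image, workers=8):
    strips = np.array_split(image, workers, axis=0)    # one horizontal strip per core
    with ProcessPoolExecutor(max_workers=workers) as pool:
        defect_counts = list(pool.map(inspect_strip, strips))
    return sum(defect_counts)

if __name__ == "__main__":
    # Stand-in for a stitched line scan image; a real 150 MP frame would be far larger
    frame = np.random.randint(0, 255, size=(4000, 4000), dtype=np.uint8)
    print("suspect pixels:", inspect_image(frame))
```

Because each strip is independent, the wall-clock inspection time drops roughly with the number of cores, minus the cost of handing the strips to the worker processes.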
Vision interface specifications also bring the benefits common to all standards. Standards help trim costs by reducing the number of options, and they provide more flexibility for changing equipment in the future.
“Standards help make it possible for customers to know how to compare performance of devices while giving comfort in the fact that devices will be compatible if they should ever need to make a change,” says Jim Anderson, national product manager for vision at Sick.
While standards bring many benefits, large automation suppliers aren't always ready to adopt them quickly. Many prefer to offer proprietary specifications or standards that have been tweaked to bring them additional benefits. That also helps them maintain their customer base.
“Every major automation company wants a proprietary standard created for them,” Roszkowski says. “If a plant has one vendor's backbone, it's hard to switch to another platform.”
Decisions, decisions
Picking an interface is only one of many decisions facing design teams. Advances in processor technologies and software development capabilities are also making vision more suitable for high-speed production. The all-important cost issue is yet another parameter that must be considered.
System developers also have to choose between imaging technologies. CCD imagers used to dominate in advanced manufacturing environments, but that's changed. CMOS imagers, used in everything from smartphones and tablets to automobiles, have advanced far enough that they're quite competitive in ruggedized industrial camera applications.
“We're rapidly migrating to CMOS imagers. They're generally less expensive and usually have higher speeds,” Roszkowski says. “Thermally, they have a lower heat profile, which lets us run faster semiconductors. Originally, CMOS was an inferior imaging technology, but the difference is very small now.”
System strategists also have to trade off resolution, cost and data processing requirements. Higher-resolution cameras create more data that must be analyzed, driving up computing requirements. In some applications, it could be more effective to use lower-resolution cameras.
“Reasonably priced cameras can now achieve speeds of 300-500 frames per second,” Ponce de Leon says. “These speeds are accomplished by either using a lower-resolution camera or windowing down a higher-resolution camera.”
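The windowing trade-off follows from how most sensors read out: frame time scales roughly with the number of rows read, so a smaller region of interest runs faster. The scaling model and base numbers below are simplifying assumptions rather than any specific camera's specification.

```python
def windowed_fps(full_fps, full_rows, roi_rows):
    """Approximate frame rate when only roi_rows of full_rows are read out."""
    return full_fps * full_rows / roi_rows

full_fps, full_rows = 80, 2048            # assumed full-frame performance
for roi in (2048, 1024, 512, 256):
    print(f"ROI {roi:4d} rows -> ~{windowed_fps(full_fps, full_rows, roi):.0f} fps")
```

Under this simple model, reading a quarter of the rows pushes an 80 fps camera into the hundreds of frames per second, in line with the figures Ponce de Leon cites.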
The trade-off between camera resolution and the size of the area being scanned affects both the level of detail captured and the processing power needed to manipulate the pixels. Developers must determine which approach provides the best overall performance for their specific application.
“Performance as a function of resolution is only a matter of the size of the field of view,” Dechow says. “Given a specific viewing area, a higher-resolution camera will indeed provide more pixels in that image, but a lower-resolution camera would have equal performance if the field of view is reduced.”
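Dechow's point reduces to simple arithmetic: the size of a pixel projected onto the part is the field of view divided by the sensor's pixel count, so a lower-resolution camera over a proportionally smaller field of view resolves the same feature size. The figures below are illustrative.

```python
def object_pixel_size_um(fov_mm, pixels):
    """Size of one camera pixel projected onto the part, in microns."""
    return fov_mm * 1000 / pixels

# 5 MP camera (2448 px wide) over a 100 mm field of view
print(f"{object_pixel_size_um(100, 2448):.1f} um/pixel")   # ~40.8 um
# 1.3 MP camera (1280 px wide) over a 52 mm field of view
print(f"{object_pixel_size_um(52, 1280):.1f} um/pixel")    # ~40.6 um
```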
While hardware costs get a lot of attention, software is a critical factor in performance and reliability. Here again, design teams have many options. Programmers can use different levels of coding to achieve their goals. Often, focusing on software brings significant speed enhancements.
“We're exploiting the instruction set of the Texas Instruments dual-core digital signal processor,” Roszkowski says. “That lets us get five to 10 times more performance on the same hardware platform. Most people use high-level languages, others use assembly code. We use machine code.”
The use of DSPs highlights significant changes in computing strategies. Microcontrollers are evolving from dual cores to multicores, providing more performance with minimal changes in size. Field programmable gate arrays (FPGAs) have enough computing power to meet today's challenges. And graphics processing units (GPUs) are migrating from gaming to more productive industrial tasks.
“The amount of computing power built into the GPUs and FPGAs of leading camera technology gives more flexible ways to solve some complex applications,” Anderson says. “The use of multicore processors has allowed integrators and customers alike to have more realistic expectations of system cycle times. The use of GPUs has also reduced the overall cost of a solution.”
All of these decisions will be influenced by pricing, and cost considerations must include more than the price of a specific component. For example, higher-resolution cameras often require faster processors and more software development.
“Higher-resolution components allow for better accuracy and yield more detailed information that can/may be utilized in vision algorithms,” Wicks says. “These come at both a computational and monetary cost.”
Going forward, managers tasked with monitoring these assembly systems will have another tool in their arsenal, as more vision suppliers make smartphones and tablets part of their connectivity options. These handhelds can be used to check performance from any location, in the plant or from a remote site. Some managers may even be able to tweak vision systems using their phones.
“The link between vision solutions and the assembly systems is getting tighter in many ways, but the steady increase in the performance of the smartphone has allowed for a nearly constant contact between system and user,” Anderson says. “In many cases, the phone can be used as a visualization tool and in some cases it can be the actual interface for programming the camera.”