The rapid advance of robot vision capabilities may make it appear that these technologies have resolved all of the work cell issues that can impact reliable and accurate robot movements. But these technologies, as advanced as they appear, still have many obstacles to overcome before they become more ubiquitous throughout industry.
Micropsi Industries, a supplier of AI-vision software for industrial and collaborative robots, is focused on addressing these issues and can now resolve a number of them with its release of MIRAI 2, the latest generation of the company’s AI-vision software for robotic automation. According to Micropsi, MIRAI 2 comes with several new features (details below) to “reliably solve automation tasks with variance in position, shape, color, lighting or background.”
“We've let our most demanding automotive OEM customers drive the requirements for this version without sacrificing the simplicity of the product,” said Ronnie Vuine, founder of Micropsi Industries. “It still wraps immensely powerful machine learning in a package that delivers quick and predictable success and is at home in the engineering environment it's being deployed in.”
How it differs
A key aspect of the MIRAI 2 technology that sets it apart from traditional robot vision systems is its ability to operate without relying on CAD data. Instead, it uses data captured by the camera from the actual operating environment.
Matt Jones, vice president of sales and operations at Micropsi, explained that, with robot picking/placing applications, for example, there can be a lot of variation the robot has to deal with, such as changing positions or changing backgrounds behind the object to be picked.
In one of the MIRAI 2 demonstrations shown at Automate 2024, multiple brake pads were placed randomly in a box filled with crumpled paper. After the robot drops a brake pad onto the paper in the box, it doesn’t know the exact position of the brake pad when it goes to pick it again because of the change caused by the drop. This constantly changing background of crumpled paper adds further complexity to the picking task. Plus, there’s the issue of changing light reflections off of the highly reflective brake pads after they are dropped onto the paper.
“Typically, vision systems are going to struggle with this kind of variation, but MIRAI is able to adjust and adapt,” said Jones. He noted that this demo for Automate was developed in Micropsi’s San Francisco offices, which, of course, have very different lighting from that at the Automate event in McCormick Place in Chicago. “But we didn't have to touch anything up from the training in the office to have it work as expected here at Automate even though it’s now in a very different lighting scenario,” he said. “This shows the power of AI to adapt.”
Jones explained that the MIRAI software can currently be used with robots from Universal Robots, FANUC and KUKA. “We are purposefully slowly growing our ecosystem of third-party robot manufacturers we work with so that we're fully integrated with the robot system,” he said.
Training the AI
The machine learning in MIRAI 2 is taught by showing the robot “where we want it to end up either via hand-guiding of a cobot or jogging an industrial robot to teach it the different angles, poses, backgrounds or any other variance the robot and vision system might encounter,” said Jones. This is all that needs to be done with MIRAI 2 to train it.
Jones added that it took just a couple of days to do the multiple trainings used for Micropsi’s demos at Automate.
The AI training is quick because most of the work is handled through the standard robot programming provided with the robot. Jones noted that about 90% of the MIRAI demonstrations shown at Automate (see video below) are done using the robot’s own programming that a user will already be familiar with.
In addition to the brake pad demo at Automate, Micropsi also showed a more complex and intricate series of movements that MIRAI 2 is capable of. In this example, the MIRAI system can grab a USB connector—in random and/or moving positions—then plug it into and unplug it from a device.
“One thing that's really cool about MIRAI is how it can adapt in real time,” said Jones. “It's not snapping a picture and then finding the object, it's actually scanning at about 20 times a second to adjust its movement to be able to grab the cable out of the air—in whatever position it may be—and then plug it in.”
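The distinction Jones draws—re-observing the target roughly 20 times a second rather than planning from a single snapshot—is the basic idea behind closed-loop visual servoing. The sketch below is a generic illustration of that idea only, not Micropsi’s actual implementation; the function names and parameters (`detect_target`, `gain`, `rate_hz`) are hypothetical stand-ins for a real camera pipeline and robot controller.

```python
# Generic closed-loop approach sketch: the robot re-observes the target
# every control cycle and nudges its position toward it, so a target that
# moves mid-approach can still be reached. Illustrative only.

def detect_target(true_pos):
    """Stand-in for per-cycle vision inference; a real system would
    run a model on the latest camera frame."""
    return true_pos

def closed_loop_approach(start, target, rate_hz=20, gain=0.5, tol=0.001):
    """Step toward the target using a fresh observation each cycle,
    instead of a one-time 'snap a picture, then move' plan."""
    pos = list(start)
    for _ in range(200):  # safety cap on control cycles
        seen = detect_target(target)  # new observation every cycle
        error = [s - p for s, p in zip(seen, pos)]
        if max(abs(e) for e in error) < tol:
            return pos  # close enough to attempt the grasp
        # Proportional correction toward the observed target
        pos = [p + gain * e for p, e in zip(pos, error)]
        # A real loop would pace itself at 1/rate_hz seconds per cycle.
    return pos
```

Because the target is re-detected inside the loop, the same code converges even if `detect_target` returned a moving position each cycle—which is the property that lets such a system grab a swinging cable “out of the air.”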
New features
In addition to the advanced capabilities noted above, the release of MIRAI 2 also eliminates the need for a force-torque sensor for most applications, according to Micropsi.
Four other new features introduced with MIRAI 2 are:
- Robot skill-sharing: Users can share skills between multiple robots. If conditions are identical (lighting, background, etc.), very little or no additional training is required in additional installations.
- Semi-automatic data recording: This allows users to record episodes (of data) for skills without having to hand-guide the robot, reducing the workload on users and increasing the quality of the recorded data. Users only need to prepare the training situations and corresponding robot target poses.
- Abnormal condition detection: MIRAI can now be configured to stop when unexpected conditions are encountered, allowing users to handle these exceptions in their robot program or alert a human operator.
- Industrial PCs: The MIRAI software can now be run on industrial-grade hardware (rail-mountable with 24V DC power) for higher dependability in more difficult factory conditions.
In the video below, Micropsi’s Matt Jones explains MIRAI 2’s capabilities across two complex robot applications.