Can Artificial Intelligence Explain Itself?

Jan. 21, 2021
As an end user, you may never have to work with artificial intelligence from a developer’s perspective, but knowing how it works is key to ensuring its successful use. Fortunately, the technology can explain itself.

Though the use of artificial intelligence (AI) is increasing across the industrial automation spectrum, many in industry are still unclear about its application and benefits. This is not surprising considering that AI applications are not something most end users will ever knowingly interact with directly. AI tends to work behind the scenes, processing inputs and actions to streamline the functions of the systems that employ it.

Even though you may not need to learn how to interact with AI, having a basic understanding of how it works will likely be as important as understanding how to set up a Wi-Fi network in your home. We’ve all learned so much about Wi-Fi networks because of how much we depend on them. The same will likely be said of AI in the near future.

A few months ago, Automation World connected with Anatoli Gorchet, co-founder and chief technology officer at Neurala (a supplier of AI vision software), to better understand how AI works in industrial inspection processes that use machine vision. Following the publication of Gorchet’s insights, we learned about another AI term we were not familiar with: explainability.

This term refers to explainable AI, a set of techniques (including software code and a user interface) that creates a human-readable path from a given piece of data to a specific decision. “In essence, these techniques bottle that intuitive understanding that an AI PhD student develops into an intelligible, replicable process that can be delivered to the end user,” said Max Versace, Neurala’s CEO and co-founder.
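
To make the idea of a readable data-to-decision path concrete, consider a minimal, hypothetical sketch (the feature names, weights, and scorer below are illustrative assumptions, not Neurala’s actual implementation): a simple linear “good/bad part” scorer whose per-feature contributions can be printed for the operator.

    # A hypothetical sketch of a human-readable path from data to decision.
    # For a linear scorer, each feature's contribution to the final score
    # can be listed explicitly, which is one simple form of explainability.
    import numpy as np

    FEATURES = ["edge_sharpness", "surface_gloss", "background_hue"]  # assumed names
    weights = np.array([1.8, 0.4, -2.1])   # assumed "trained" weights
    bias = -0.2

    def explain(sample: np.ndarray) -> None:
        contributions = weights * sample          # each feature's share of the score
        score = contributions.sum() + bias
        verdict = "good" if score > 0 else "defective"
        print(f"decision: {verdict} (score {score:+.2f})")
        for name, c in sorted(zip(FEATURES, contributions),
                              key=lambda p: -abs(p[1])):
            print(f"  {name:16s} contributed {c:+.2f}")

    explain(np.array([0.9, 0.5, 0.8]))  # a sample whose score is dominated by background hue

The printed breakdown shows not just the verdict but which feature drove it, which is the “intelligible, replicable process” Versace describes, reduced to its simplest possible form.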

According to Versace, explainability is “critical for debugging. No matter how explainable AI turns out to be, nobody will ever deploy a solution that makes tons of mistakes. You need to be able to see when AI fails and why it fails. And explainable AI techniques can help you determine whether the AI is focusing on the wrong things.”

As an example, Versace said to consider a deep learning network deployed on industrial cameras to provide quality assurance in a manufacturing setting. “This AI could be fooled into classifying some products as normal when, in fact, they are defective. Without knowing which part of the image the AI system is relying on to decide ‘good product’ vs. ‘bad product,’ a machine operator might unintentionally bias the system,” he said. “If they consistently show the ‘good product’ on a red background and the ‘bad product’ on a yellow one, the AI will classify anything on a yellow background as a ‘bad product.’ However, an explainable AI system would immediately communicate to the operator that it is using the yellow background as the feature most indicative of a defect. The operator could use this intel to adjust the settings so both objects appear on a similar background. This results in better AI and prevention of a possibly disastrous AI deployment.”
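
One common family of explainability techniques, occlusion sensitivity, makes exactly this kind of failure visible: mask a region of the image, re-score it, and measure how much the decision moves. The toy model and synthetic image below are assumptions for illustration only; here the sensitivity map lights up the background rather than the part, flagging the yellow-background bias Versace warns about.

    # An occlusion-sensitivity sketch of the scenario described above
    # (illustrative only; the classifier and image are synthetic assumptions).
    # Masking patches of the image and re-scoring reveals WHERE the model
    # is looking: here, the background hue drives the "defective" call.
    import numpy as np

    def defect_score(img: np.ndarray) -> float:
        """Toy 'trained' model that secretly keys on yellowness (high R+G, low B)."""
        r, g, b = img[..., 0].mean(), img[..., 1].mean(), img[..., 2].mean()
        return float(r + g - 2 * b)  # scores high on yellow backgrounds

    # Synthetic 8x8 RGB image: a gray part centered on a yellow background.
    img = np.zeros((8, 8, 3))
    img[...] = [1.0, 1.0, 0.0]          # yellow background
    img[2:6, 2:6] = [0.5, 0.5, 0.5]     # the actual part

    base = defect_score(img)
    sensitivity = np.zeros((4, 4))
    for i in range(4):                   # occlude each 2x2 patch with neutral gray
        for j in range(4):
            patch = img.copy()
            patch[2*i:2*i+2, 2*j:2*j+2] = 0.5
            sensitivity[i, j] = abs(base - defect_score(patch))

    print(np.round(sensitivity, 2))     # large values ring the BACKGROUND,
                                        # showing that color, not the part,
                                        # is driving the decision

Running the sketch prints a map whose nonzero values surround the part rather than cover it, which is the signal an operator would need to fix the backgrounds before trusting the deployment.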

Beyond this kind of application, explainability also enables accountability and auditability, said Versace. It can help answer who designed the system as well as how it was built and trained.
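
What that audit trail might look like in practice is sketched below; the schema and field names are illustrative assumptions, not an industry standard. The idea is simply that a provenance record travels with each deployed model.

    # A hypothetical sketch of the audit-trail idea: recording who built and
    # trained a model, and on what data, so deployments stay accountable.
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class ModelProvenance:
        model_name: str
        version: str
        trained_by: str          # who designed/trained the system
        training_data: str       # where the training data came from
        trained_at: str

    record = ModelProvenance(
        model_name="inspection-classifier",
        version="1.4.2",
        trained_by="qa-team@example.com",        # assumed contact
        training_data="line3-images-2020-12",    # assumed dataset label
        trained_at=datetime.now(timezone.utc).isoformat(),
    )
    print(json.dumps(asdict(record), indent=2))  # attach to each deployment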

“At the end of the day, humans are offloading key decisions to AI,” said Versace. “And when it comes to assessing trust, they take an approach similar to that of assessing whether or not to trust a human co-worker. Humans develop trust in their co-workers when at least two conditions are satisfied: their performance is fantastic and they can articulate in an intelligible way how they obtained that outcome. For AI, the same combination of precision and intelligibility will pave the way for wider adoption.”

About the Author

David Greenfield, Editor in Chief

David Greenfield joined Automation World in June 2011, bringing a wealth of industry knowledge and media experience to his position. His contributions can be found in AW’s print and online editions and custom projects. Earlier in his career, David was Editorial Director of Design News at UBM Electronics, and prior to joining UBM, he was Editorial Director of Control Engineering at Reed Business Information, where he also worked on Manufacturing Business Technology as Publisher.
