Machine Learning Roadblocks in Oil and Gas

Jan. 6, 2018
Artificial intelligence can help shale oil drillers reduce expenses, increase productivity and improve work methods, but trusting the logic inside the black box is a major hurdle.

Industrial revolutions—from mechanization to electrification and mass production to increased automation—have long been about replacing human muscle with machines. For many factory workers who might face the threat of redundancy, that is scary enough. But the fourth revolution, which is more about replacing human brain power with artificial intelligence (AI), presents a change that many more workers are finding difficult to accept.

AI can provide immediate impact for oil and gas companies—reduced expenses, increased productivity, improved work methods—but energy companies have been slow to adopt the technologies available. This might have to do with security concerns, cost, or even just a lack of understanding about the benefits to be gained.

But it also could have a lot to do with humans’ inability to understand what’s going on inside the black box.

Publicis.Sapient has been trying to take its AI technology to the energy industry, helping oil and gas companies optimize their drilling operations, particularly in the shale industry. Though some companies focus on helping oil companies figure out where to drill, Publicis.Sapient uses data to analyze other information around the drilling—the lifetime of wells and how to guide the drill once drilling has begun, for example.

“A number of companies do analysis where they drill the rock and then analyze how deep they need to go. But the platform serving these models could be much faster with machine learning,” says Rashed Haq, global lead for AI at Publicis.Sapient. “The second part, when you’re drilling horizontally, is how to guide the drill based on the data that you’re getting. A simplistic way to look at it, depending on what the sensors are reading and correlating to historic data: Does it mean you’re going to hit a rock? What kind of rock? Things like that. We know what the sensors said, know what kind of rock there is from past drilling, and whether it damaged the equipment. We’re collecting sensor data in new drilling and comparing that with historical data. Now we know there’s a high probability that we’re going to hit this kind of rock.”
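The comparison Haq describes—matching live sensor readings against labeled historical drilling data—can be sketched as a nearest-neighbor lookup. The sensor features, sample values, and the k-NN approach below are illustrative assumptions, not Publicis.Sapient's actual method:

```python
# Sketch: predict upcoming rock type by matching live sensor readings
# against labeled historical drilling records (k-nearest neighbors).
# Feature names (torque, vibration, rate of penetration) and all data
# values are invented for illustration.
from collections import Counter
import math

# Historical records: (torque, vibration, rate_of_penetration) -> rock type
history = [
    ((12.0, 0.8, 30.0), "shale"),
    ((11.5, 0.9, 28.0), "shale"),
    ((20.0, 2.5, 12.0), "granite"),
    ((19.0, 2.2, 14.0), "granite"),
    ((15.0, 1.4, 22.0), "limestone"),
]

def predict_rock(reading, k=3):
    """Return the most common rock type among the k closest historical readings."""
    ranked = sorted(history, key=lambda rec: math.dist(reading, rec[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(predict_rock((19.5, 2.3, 13.0)))  # closest matches are the granite records
```

In practice the historical database would hold many thousands of sensor traces, and the model would emit a probability per rock type rather than a single label, but the core idea is the same correlation against past drilling.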

There are different types of AI. In some cases, the technology is trained to follow a process similar to the one a human would have followed—the machine just does it faster and more accurately.

An example Haq provides relates to alarm management during drilling. “Often when they’re drilling, everything has sensors on it, and they get a lot of alarms for when to stop, when to change direction, things like that,” he explains. “There are control alarms and error alarms. For error alarms, they have to stop drilling and a team of engineers then goes through a set of tests—they’re going through sensor data and geophysical models, trying to figure out does this create a concern.”

This is a manual process that engineers must go through, deciding what’s a false alarm or whether they need to actually stop the process. “The causal reasoning engine can go through the exact same steps that engineers go through to decide if it’s a false alarm,” Haq says. “It definitely saves engineering time. And second is there’s less errors.”
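The causal reasoning engine Haq describes can be pictured as the engineers' checklist encoded as explicit, inspectable rules. The specific checks and thresholds below are invented for illustration:

```python
# Sketch: encode the engineers' alarm-triage steps as explicit rules, the
# way a causal reasoning engine follows the same checks a human would.
# Sensor fields and thresholds are assumptions for illustration only.

def triage_alarm(alarm):
    """Walk the same checks an engineer would and return a verdict."""
    # Step 1: a brief sensor spike while neighboring sensors read normal
    # is most likely a transient glitch, not a real event.
    if alarm["duration_s"] < 2 and alarm["neighbor_sensors_normal"]:
        return "false alarm: transient sensor glitch"
    # Step 2: a reading still inside the geophysical model's expected
    # band does not warrant stopping the drill.
    if alarm["reading"] <= alarm["model_expected_max"]:
        return "false alarm: within modeled range"
    # Step 3: anything else stops drilling and goes to the engineers.
    return "stop drilling: escalate to engineers"

print(triage_alarm({
    "duration_s": 1.2,
    "neighbor_sensors_normal": True,
    "reading": 95.0,
    "model_expected_max": 100.0,
}))
```

Because every rule is written out, an engineer can audit exactly why the engine called something a false alarm—which is precisely the transparency that, as the article goes on to note, machine-learned models often lack.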

Causal reasoning is one type of AI. Machine learning is another—and the one whose logic can look so foreign to the human brain. When Google’s AlphaGo AI beat high-profile Go player Lee Sedol in 2016, and then Go world champion Ke Jie in 2017, it did so with moves that were completely counterintuitive to Go experts. It made moves early on that any human expert of the ancient game of strategy would have considered fatal.

“Causal reasoning systems don’t learn by themselves. You have to manually create the ontology,” Haq explains. “But then it can find correlations that others, like humans, would take a long time to find.”

Machine learning, on the other hand, lets the computer think in ways that a human wouldn’t. “Often the model is not human-determinable,” Haq notes. And that can be disconcerting for the humans left to trust that model.

“What I hear fairly often from customers is that they don’t trust a black box,” Haq says. “If a machine is coming up with a model based on what it’s learning from data, and I can’t see its logic flow, I’m not comfortable with it.”

Publicis.Sapient sees that not only in the oil and gas industry, but across all industries, Haq says. As more companies try using AI and prove out the benefits, there will be a better case for it in industry. But it’s a fundamental challenge rooted in existing practices, Haq adds.

“For a long time, fairly successfully, people have done this using geophysical models,” he says. “They understand all the steps as to why things should happen or not happen. In artificial intelligence, they’re moving to a black box.”

Instead, there needs to be a convergence of what AI can achieve and what humans can understand about it, Haq contends. “That’s not a solved problem. It will emerge maybe over the next five years as people adopt and see how to merge the two together,” he says. “In academia, about 50 percent of the research is Explainable AI (XAI). Can you find interim steps in the model that are human-understandable? These things need to start converging and making progress. Then we’ll see more of an uptake.”
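One way to picture the “human-understandable interim steps” Haq mentions is to decompose a model’s prediction into per-feature contributions, so each piece of the output can be read and questioned. The linear model, weights, and feature names below are invented for illustration; real XAI attribution methods generalize this idea to complex models:

```python
# Sketch: break a model's prediction into per-feature contributions so
# each interim step is human-readable. The weights and feature names
# here are assumptions for illustration, not a real drilling model.

weights = {"torque": 0.6, "vibration": 2.0, "rate_of_penetration": -0.3}
baseline = 5.0  # model output when all features are zero

def explain(features):
    """Return the prediction plus each feature's contribution to it."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    prediction = baseline + sum(contributions.values())
    return prediction, contributions

pred, parts = explain({"torque": 10.0, "vibration": 1.5, "rate_of_penetration": 20.0})
print(pred)  # 5.0 + 6.0 + 3.0 - 6.0 = 8.0
for name, contribution in parts.items():
    print(f"{name}: {contribution:+.1f}")
```

A geologist reviewing this output can see that, say, high torque pushed the prediction up while a fast penetration rate pulled it down—exactly the kind of visible logic flow the black box denies them.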

Change management, therefore, is a major part of introducing AI technologies to oil drilling operations. “The believability part of it is working with mathematicians and getting on the same page about language and why you’re doing what you’re doing,” says Haq, who was somewhat surprised to discover geologists’ general lack of faith in data. “It took us some time to understand their view of what a model is. Geologists are like physicists and data scientists are more like economists.”

But to truly take advantage of the brain power that can come out of machine learning, humans will need to be able to trust the data that they receive, making moves that might not be intuitive. As Ke Jie said in an interview after he lost to AlphaGo, “What he sees is the whole universe, while what we see is just a pond in front of us.”

About the Author

Aaron Hand | Editor-in-Chief, ProFood World

Aaron Hand has three decades of experience in B-to-B publishing with a particular focus on technology. He has been with PMMI Media Group since 2013, much of that time as Executive Editor for Automation World, where he focused on continuous process industries. Prior to joining ProFood World full time in late 2020, Aaron worked as Editor at Large for PMMI Media Group, reporting for all publications on a wide variety of industry developments, including advancements in packaging for consumer products and pharmaceuticals, food and beverage processing, and industrial automation. He took over as Editor-in-Chief of ProFood World in 2021. Aaron holds a B.A. in Journalism from Indiana University and an M.S. in Journalism from the University of Illinois.
