Industrial revolutions, from mechanization through electrification and mass production to automation, have long been about replacing human muscle with machines. For many factory workers facing the threat of redundancy, that is scary enough. But the fourth revolution, which is more about replacing human brain power with artificial intelligence (AI), presents a change that many more workers are finding difficult to accept.
AI can provide immediate impact for oil and gas companies—reduced expenses, increased productivity, improved work methods—but energy companies have been slow to adopt the technologies available. This might have to do with security concerns, cost, or even just a lack of understanding about the benefits to be gained.
But it also could have a lot to do with humans’ inability to understand what’s going on inside the black box.
Publicis.Sapient has been bringing its AI technology to the energy industry, helping oil and gas companies optimize their drilling operations, particularly in the shale sector. While some companies focus on helping oil producers figure out where to drill, Publicis.Sapient analyzes the data surrounding the drilling itself: the lifetime of wells and how to guide the drill once drilling has begun, for example.
“A number of companies do analysis where they drill the rock and then analyze how deep they need to go. But the platform serving these models could be much faster with machine learning,” says Rashed Haq, global lead for AI at Publicis.Sapient. “The second part, when you’re drilling horizontally, is how to guide the drill based on the data that you’re getting. A simplistic way to look at it, depending on what the sensors are reading and correlating to historic data: Are you going to hit a rock? What kind of rock? Things like that. We know what the sensors said, we know what kind of rock there was from past drilling, and whether it damaged the equipment. We’re collecting sensor data in new drilling and comparing that with historical data. Now we know there’s a high probability that we’re going to hit this kind of rock.”
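To make that comparison step concrete, here is a minimal sketch of how live sensor readings could be matched against labeled historical runs with a supervised classifier. The feature names and values (weight-on-bit, torque, rate of penetration, vibration) are invented for illustration; they are not Publicis.Sapient's actual data or platform.

```python
# Illustrative sketch only: predict the likely rock type ahead of the bit
# from live sensor readings, using a model trained on historical runs.
# Feature names and values are assumptions for this example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Historical drilling data: one row per logged interval.
# Columns: [weight_on_bit, torque, rate_of_penetration, vibration]
X_hist = np.array([
    [12.0, 8.5, 30.2, 0.4],   # logged as shale
    [18.5, 14.1, 12.7, 1.9],  # logged as hard limestone (damaged equipment)
    [11.3, 7.9, 28.8, 0.5],   # logged as shale
    [19.2, 15.0, 10.1, 2.2],  # logged as hard limestone
])
y_hist = ["shale", "hard_limestone", "shale", "hard_limestone"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_hist, y_hist)

# A live reading from the current well: does it look like historical
# intervals that preceded hard rock?
live = np.array([[18.9, 14.6, 11.5, 2.0]])
probs = dict(zip(model.classes_, model.predict_proba(live)[0]))
print(probs)  # high probability of hard_limestone -> steer or slow down
```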
There are different types of AI, and in some cases the technology is being trained to follow a process similar to the one a human would have followed. The machine can then do it faster and more accurately.
An example Haq provides relates to alarm management during drilling. “Often when they’re drilling, everything has sensors on it, and they get a lot of alarms for when to stop, when to change direction, things like that,” he explains. “There are control alarms and error alarms. For error alarms, they have to stop drilling and a team of engineers then goes through a set of tests—they’re going through sensor data and geophysical models, trying to figure out does this create a concern.”
This is a manual process that engineers must go through, deciding whether something is a false alarm or whether they actually need to stop the process. “The causal reasoning engine can go through the exact same steps that engineers go through to decide if it’s a false alarm,” Haq says. “It definitely saves engineering time. And second, there are fewer errors.”
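A rough sense of how such an engine might encode those steps: the sketch below hard-codes two checks as explicit rules, which is what makes its reasoning auditable in a way a learned model's often is not. The specific checks and thresholds are hypothetical, not the tests Haq's engineers actually run.

```python
# Illustrative sketch only: a rule-based triage step in the spirit of the
# causal reasoning engine described above. Checks and thresholds are
# hypothetical examples, not a real drilling workflow.

def cross_sensor_agreement(reading):
    """Likely a false alarm if a redundant sensor agrees with the primary one."""
    return abs(reading["pressure_a"] - reading["pressure_b"]) < 5.0

def within_geo_model(reading):
    """Is the reading plausible for the formation's expected pressure range?"""
    lo, hi = reading["expected_pressure_range"]
    return lo <= reading["pressure_a"] <= hi

CHECKS = [cross_sensor_agreement, within_geo_model]

def triage(alarm, reading):
    """Run the same checks an engineer would; stop drilling only if one fails."""
    failed = [check.__name__ for check in CHECKS if not check(reading)]
    if failed:
        return f"STOP: alarm '{alarm}' confirmed by {failed}"
    return f"CONTINUE: alarm '{alarm}' judged a false alarm"

reading = {"pressure_a": 102.0, "pressure_b": 100.5,
           "expected_pressure_range": (90.0, 110.0)}
print(triage("error: pressure spike", reading))
```

Because every step is an explicit rule, an engineer can read exactly why the engine cleared or confirmed an alarm, mirroring the manual checklist it replaces.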
Causal reasoning is one type of AI. Machine learning is another—and the one whose logic can look so foreign to the human brain. When Google’s AlphaGo AI beat high-profile Go player Lee Sedol in 2016, and then Go world champion Ke Jie in 2017, it did so with moves that were completely counterintuitive to Go experts. It made moves early on that any human expert in the ancient game of strategy would have considered fatal.
“Causal reasoning systems don’t learn by themselves. You have to manually create the ontology,” Haq explains. “But then it can find correlations that others, like humans, would take a long time to find.”
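In code, that hand-authored ontology might be as simple as a causal graph whose edges engineers write down themselves, which the system then searches faster than a person could. The events and links below are hypothetical examples, not a real drilling ontology.

```python
# Illustrative sketch only: a tiny hand-built causal ontology mapping
# root causes to the effects they produce. All entries are made up.
CAUSAL_GRAPH = {
    "bit_wear":         ["torque_spike", "rate_of_penetration_drop"],
    "formation_change": ["torque_spike", "vibration_rise"],
    "sensor_fault":     ["pressure_mismatch"],
}

def candidate_causes(observed_effects):
    """Walk the hand-built graph to find causes consistent with what we see."""
    observed = set(observed_effects)
    return [cause for cause, effects in CAUSAL_GRAPH.items()
            if set(effects) <= observed]

print(candidate_causes(["torque_spike", "vibration_rise"]))
# -> ['formation_change']
```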
Machine learning, on the other hand, lets the computer think in ways that a human wouldn’t. “Often the model is not human-determinable,” Haq notes. And that can be disconcerting for the humans left to trust that model.
“What I hear fairly often from customers is that they don’t trust a black box,” Haq says. “If a machine is coming up with a model based on what it’s learning from data, and I can’t see its logic flow, I’m not comfortable with it.”
Publicis.Sapient sees that hesitation not only in the oil and gas industry but across all industries, Haq says. As more companies try AI and prove out the benefits, the case for it across industry will strengthen. But the resistance is a fundamental challenge rooted in existing practices, Haq adds.
“For a long time, fairly successfully, people have done this using geophysical models,” he says. “They understand all the steps as to why things should happen or not happen. In artificial intelligence, they’re moving to a black box.”
Instead, there needs to be a convergence of what AI can achieve and what humans can understand about it, Haq contends. “That’s not a solved problem. It will emerge maybe over the next five years as people adopt and see how to merge the two together,” he says. “In academia, about 50 percent of the research is Explainable AI (XAI). Can you find interim steps in the model that are human-understandable? These things need to start converging and making progress. Then we’ll see more of an uptake.”
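One common XAI tactic is to fit an interpretable surrogate model whose interim steps read as plain if/else rules, which is one way to recover the "human-understandable" steps Haq describes. The sketch below uses a shallow decision tree on made-up data; the feature names and labels are assumptions for illustration.

```python
# Illustrative sketch only: a shallow decision tree whose learned logic
# can be printed as readable rules rather than hidden in a black box.
# Data, features, and labels are invented for this example.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[12.0, 0.4], [18.5, 1.9], [11.3, 0.5], [19.2, 2.2]]
y = ["continue", "stop", "continue", "stop"]

surrogate = DecisionTreeClassifier(max_depth=2).fit(X, y)

# A human-readable trace of the model's decision logic.
print(export_text(surrogate, feature_names=["weight_on_bit", "vibration"]))
```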
Change management, therefore, is a major part of introducing AI technologies to oil drilling operations. “The believability part of it is working with mathematicians and getting on the same page about language and why you’re doing what you’re doing,” says Haq, who was somewhat surprised to discover geologists’ general lack of faith in data. “It took us some time to understand their view of what a model is. Geologists are like physicists and data scientists are more like economists.”
But to truly take advantage of the brain power that can come out of machine learning, humans will need to be able to trust the data that they receive, making moves that might not be intuitive. As Ke Jie said in an interview after he lost to AlphaGo, “What he sees is the whole universe, while what we see is just a pond in front of us.”