Machine Learning with Deborah Sacrey – AAPG Energy Insights Podcast

One of our very own esteemed geoscientists, Deborah Sacrey, sat down with Vern Stefanic to talk about Machine Learning in the energy industry.

To watch the video, please click here.

Full Transcript

VERN STEFANIC: Hi, I’m Vern Stefanic. And welcome to another edition of AAPG’s Podcast, Energy Insights, where we talk to the leaders and the people of the energy industry who are making things happen and bringing the world more energy, the energy that it needs to keep going.

Today, we’re very happy to have as our guest Deborah Sacrey, who is Auburn Energy, consultant working outside of Houston, Texas. But somebody who’s got experience working in the energy industry for a long time, who’s been through many changes in the industry, and who keeps evolving to find new ways to make herself valuable to the profession and to the industry that’s going forward. Deborah, welcome, and thank you for being here with us today.

DEBORAH SACREY: I’m delighted. Thank you so much for inviting me.

VERN STEFANIC: Well, one reason– we’re doing this from the AAPG Annual Convention in San Antonio, where you have been one of the featured speakers. And you were talking about what the future of the petroleum geologist is going to be. Which was perfect, because you’ve found yourself somebody who’s had to sort of evolve and change your focus, the focus of your career, several times. Could you tell us a little bit about your journey?

DEBORAH SACREY: Well, what I found is that every time there’s a major technology change, a paradigm shift in the way we look at data, there are consequences to that. There are benefits and consequences. If you’re not prepared to accept that technology change, you get left behind. And it makes it hard for you to find a job.

But if you accept that technology change, and embrace it, and learn about it, then you can morph yourself into a very successful career, until the next time the technology changes again. So you’re constantly– you have ups and downs, and you’re constantly morphing yourself and evolving yourself to embrace new technology changes as they come along.

VERN STEFANIC: Which is important, because we live in– in the industry right now, there has been rapid change, which we’re going to talk about, some of the places where we’re going on that. But because of that, we always hear stories of a lot of petroleum geologists or professional geoscientists who find themselves awkwardly lost in the shuffle somewhere, not knowing what to do. What I love about your message is that it’s the understanding of how technology is driving all of this and being aware of that. You’ve experienced this several times in your career, is that right?

DEBORAH SACREY: Oh, absolutely. I’ve been– I got out of school in 1976. So this makes my 43rd year in the career. What I’ve gone through is, we had a digital transformation. When I got out of school, we were always looking at paper seismic records. And I went to work for Gulf, and they’d be rolled up every night, and they’d be put in a tube, and they’d be locked behind the door. And during the day, you’d go check them out and take them to your office and work on paper.

So the digital transformation is when we moved from paper records into workstations, where we could actually scale the seismic, and can see the seismic, and blow it up, and do different things with it. That was a huge transformation. When we went from paper logs, which a lot of people still use today, to something that you can see on the screen, and blow it up and see all the nuances, of the information in the well.

Then the next major transformation came in the middle ’90s, when software was available for the smaller clients and independents to start looking at 3D. So we transition from the 2D world into the 3D world. And that was huge. I mean, it’s amazing to me that there’s any space left on the Gulf Coast that doesn’t have a 3D covering yet at this point. And now, we’re getting ready to go into another major transformation. And it’s all about data.

VERN STEFANIC: People have been told, I think, maybe a couple of times, that oh, yeah, I understand that I have to change, and I have to be aware of it. But they really don’t have the skills or the insight on how to make some of those changes happen. I’m just curious, in your career do you recall some of the realizations that you had? Not just that you had to change, but some of the steps that you took to make it happen.

DEBORAH SACREY: Well, I think a lot of it, and what was important to me, is when I could see the changes coming. I had to educate myself. I didn’t have a resource to go to. I wasn’t working for a big company that had– that would send you off to classes. So it’s a matter of doing the research and understanding the technology that you’re facing.

And what I told people yesterday in my talk is that the AAPG Convention or any convention is an excellent resource for free education. Go out and look what the vendors are trying to do. And that’s your insight into how the technology is changing. And people can walk around the convention center. And they can listen to presentations for free and try to get an inkling of what’s getting ready to happen.

VERN STEFANIC: That’s great advice. That’s great insight. By the way, I’ve noticed that too in myself, in walking around the convention floor. That’s where I heard many things for the first time. Thought, oh, when I was at the Explorer, thought, oh, I ought to do a story about this.

DEBORAH SACREY: Right, exactly. And I think it’s especially important for the young people, the early career people, or the kids coming out of school, to understand that their lives will not always be with one company. When I went to work for Gulf in 1976, the gentleman who interviewed me on campus, looked me in the eye and said, Gulf will be your place of employment for life.

And I referenced yesterday a really good book. It’s called Who Moved my Cheese. So our cheese in our careers is constantly getting moved. And we have to be able to accept that and adapt to it. And you can only do it through education.

VERN STEFANIC: When did you realize, or was there a moment, when you saw that, oh, big data is important? Because it seems very obvious that we would see that. But I’m not sure everybody clicks on to– not just big data is going to be the name of the game, but this is what I’m going to do about it. What was your experience with that?

DEBORAH SACREY: Well, in 2011, a gentleman whom I’d been working with for a long time, Tom Smith– Dr. Smith was the guy who started SMT, or Kingdom. When he sold Kingdom, he started doing research into ways that we could extend our understanding of seismic data and do applications using seismic attributes.

So he brought me in to help work with the developers to make this software geoscience friendly. Because our brains are wired a little bit differently from other people, other industries. And the technology he was using is machine learning, but it’s cluster analysis and it’s pattern recognition. Now what’s happened in the big data world is, all these companies, all the majors, all the large independents, have been drilling wells for years. And a lot of times, they’ve just been shoving the logs, and the drilling reports, and everything in a file.

So that’s all this paper that’s out there, that they’re just now starting to digitize, but you have to get it in a way that’s easily retrievable. So the big data– every time you drill a well now, you’re generating 10 gigabytes of information. And think about the wells that are being drilled, and how that information is being organized, and how it’s being put in– so if you use a keyword, like 24% porosity, you can go in and retrieve information on wells where they’ve determined that there’s 24% porosity in reservoirs. And that’s some of the data transformation we’re getting ready to go through, to make it accessible, because there’s so much out there.

VERN STEFANIC: OK, so understanding that having data is the key to having more knowledge, is the key to actually being a success, not just with your company, but also with actually bringing energy to the world.

DEBORAH SACREY: Right. I mean, it’s not getting any easier to find. So we’re having to use advanced technology and a better understanding of the data to be able to find the more subtle traps.

VERN STEFANIC: So– and I don’t know if this is too much of a jump– in fact, we can fill in the blanks if it is– but today we’re talking about machine learning and its applications and implications for the energy industry. And I know you are somebody who has been a little bit ahead of the curve on this one, in recognizing the need to understand what this is all about. So for some of us who don’t understand like you do, could you talk a little bit about that?

DEBORAH SACREY: Well, I can be specific about the technology that I’ve been using for the last five years.

VERN STEFANIC: OK, yeah.

DEBORAH SACREY: And like I said earlier, Tom brought me in to help guide the developers. But the basics behind the software I’ve been using is that instead of looking at the wavelet in the seismic data, I’m parsing the data down to a sample level. I’m looking at sample statistics.

So if your wavelet, if you’re in low frequency data, and your wavelet’s 30 milliseconds between the trough and the peak, I may be looking at 2 millisecond sample intervals. So I’m parsing the data 15 times as densely as you would if you were looking at the wavelet. What this allows me to do is, it allows me to see very thin beds at depth. Because I’m not looking at conventional seismic tuning anymore. I’m looking at statistics and cluster analysis that comes back to the workstation. Because every sample has an X, Y, and Z. So it has its place in the earth. And then I’m looking at true lithology patterns, like we’ve never been able to see before.
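A quick back-of-the-envelope sketch of that sample-density point (illustrative only, not the software Sacrey describes): with a 30 millisecond trough-to-peak wavelet and a 2 millisecond sample interval, a sample-based method works with 15 points over the span that wavelet-based interpretation treats as a single event.

```python
# Hedged sketch of the sample-density arithmetic described above.
# The numbers (30 ms wavelet span, 2 ms sampling) come from the conversation;
# nothing here represents any actual interpretation software.
wavelet_span_ms = 30.0    # trough-to-peak span of a low-frequency wavelet
sample_interval_ms = 2.0  # seismic sample interval

samples_per_span = wavelet_span_ms / sample_interval_ms
print(f"Samples analyzed per wavelet span: {samples_per_span:.0f}")  # -> 15
```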

VERN STEFANIC: OK, well, never been able to see before is a remarkable statement. Are we talking about a game change for the profession at this point?

DEBORAH SACREY: Most definitely. I give a lot of talks on case histories. I’ve probably worked on a hundred 3Ds in the last five years, all around the world. And I have one example in the Eagle Ford set. The Eagle Ford is only a 30 millisecond thick formation in most of Texas. And so you’re looking at a peak and a trough, and you’re looking at two zero crossings. That’s four sample points.

But when you’re looking at that kind of discrete information that I can get out of it, I can see all six facies strats from the clay base, up through the brittle zone and the ash top, right underneath the Austin Chalk. Well, it’s the brittle zone, in the middle, where the higher TOC is, and what people are trying to stay in when they’re drilling the Eagle.

If you can define that and you can isolate it, then you can geosteer better. You can get better results from your well. But you’re talking about something that’s only about 150 feet thick. And you’re trying to discern a very special part of that, where the hydrocarbons are really located. And so that’s going to be a game-changer.

VERN STEFANIC: So what would you say to people– but this is still you– you’re bringing your skills, your talents, everything that has brought you to this point in your career, and applying them with this new technology. What about the criticism, which may be completely invalid, but what about the criticism that people say that because of machine learning, we’re headed to a place where the very nature of the jobs of the professional geologists are going to be threatened? Is that a possibility? Is that something that we should even think about?

DEBORAH SACREY: Well, I think it’s a possibility. And why I say that is because a lot of the machine learning applications that are being developed out there are really improving efficiency, especially when it comes to the field and monitoring pressure gauges and things like that. They’re doing it remotely. And they’re getting into the artificial intelligence aspect of it. But the efficiency that you can bring to the field and operations will get rid of some of the people who go down and check the wells every day. Because they’ll be able to monitor it– they’ll know when something is getting out of range, or when bad weather has come through and they’ve had problems. They’ll be able to know immediately without having to send someone out to the field.

Now, when you relate it to the geosciences, especially on the seismic side, you’re going to still need the experience. Because it’s a matter of maybe having a different way to view the data, but someone’s still going to have to interpret it. Someone’s still going to have to have knowledge about the attributes to use in the first place. That takes a person with some experience. And it’s not something that’s usually learned overnight. So I think some aspects of it will improve efficiency in the industry and get rid of some jobs, and other aspects will not.

VERN STEFANIC: Well, let me go down a difficult path then in our conversation. We’re all aware of the demographics within the industry, within the profession. So let’s start first with the baby boomers. Right, so there is an example– we have a case history of how we can approach that. From your perspective, though, it’s actually just being aware that change is necessary.

DEBORAH SACREY: Yeah, and you know, there are a lot of people out there who are in denial. And they think they can keep on working that same square of earth all the rest of their career. And those will be the guys who get left behind. One of the things I tried to emphasize yesterday is that old dogs can learn new tricks. And this is not that hard.

This kind of technology has been on Wall Street, it’s been in the medical industry for years. We’re just now getting to the point where we’re applying it to the oil and gas industry, to the energy industry.

VERN STEFANIC: Is there any advice that you could give to maybe younger, mid-career, the Gen X, or even kind of YPs, who are just now getting into the industry, special things they should be looking for or trying to do to enhance their careers?

DEBORAH SACREY: Well, certainly, if they’re working for a large company, American companies have already started making the shift to machine learning or artificial intelligence data mining. I’ve done a lot of work with Anadarko. They put a whole business unit together some years ago specifically to look into methodologies to improve the efficiencies on how they can get more out of this data. All the big companies have research departments. They’re getting into it.

I have a friend who got a PhD several years ago in data mining. And she said her company is screening all the new resumes coming in for any kind of statistics, any kind of data mining technology, or any kind of advanced machine learning. They need a reference showing the applicant has had exposure to it, because that’s becoming a discriminator for finding a job in some instances– they’re all making the digital transformation to efficiency and machine learning.

VERN STEFANIC: I don’t want to– I don’t want to overlook what might be obvious to some people, but I’d like to put it on the record. Auburn Energy, you, in recognizing and embracing the need to evolve along with the industry, as technology changes, you’ve had a little bit of success at this.

DEBORAH SACREY: I’ve been very lucky. I’ve been blessed in life with the successes I’ve had. I was getting bored with mapping and 3D. And so several years ago, about the time I got involved in this, I started looking into different attributes, and what kind of responses to the rock properties you get from these different attributes, which is why this machine learning technology came along at the right time for me. It was a gradual progression. I’m not looking at one attribute at a time, I’m looking at 10 at the same time.

And in doing so, and in looking at the earth in a different way, I’ve been able to pick up some nuances that people have missed and had discoveries. I had a two million barrel field I found a couple years ago. I had an 80 bcf field that I found a couple years ago. I just had a discovery in Mississippi and in Oklahoma, in southern Oklahoma. And we’re expanding our lease activities to pick up on what I’m seeing in my technology there. So not only has it revitalized my love of digging in and looking at seismics, but it proves to be profitable as well.

VERN STEFANIC: So let me go ahead and maybe put you on the spot. Don’t mean to be– but because you are a person who’s gone through many stages of the industry, what can you see happening next? Do you have any kind of crystal ball look out to– or even just to say this is what needs to happen next to help you do your job better.

DEBORAH SACREY: Well, certainly the message I’m trying to get out to people of all ages is that this paradigm shift that we’re getting ready to go through– and you hear it over and over at all the conventions– is going to substantially change their lives. And they need to get on the train before it leaves the station, or they will be left behind. And each time we’ve had a major paradigm shift, there have been some people who’ve been reluctant or didn’t want to get outside their box. And they wake up five or six years later and don’t recognize the world. Their world has completely changed and people have moved on.

And each time that happens, you can lose a certain part of the brain power and people who have knowledge of one county and one piece of Texas, because they just didn’t want to make– they didn’t want to bother themselves. And so I’m trying to get the word out to people that this change is coming. And it’s something that can be easily embraced and you should not be afraid of it, and just get on the bandwagon. I mean, it’s not that hard.

The technology and the software that I’m seeing being developed out there, it’s a piece of cake to use. You just have to have some knowledge of the seismic or logs. There’s a technology called convolutional neural network and it’s being used to map faults through 3D. So you may go in and map the faults in 10 lines out of 80 blocks of actual data. And the machine goes in and learns what certain kinds of faults look like from the 10 lines that you’ve mapped. And it will finish mapping all the faults in the whole bunch of blocks in the offshore data.

VERN STEFANIC: Wow.

DEBORAH SACREY: It’s scary. But fault picking is like one of the most boring things we can do in seismic data. So if you can find– if you can find an animal out there that will crawl through that data, and pick out the faults for you, that’s wonderful. That saves tons of human hours. And it’s good for stratigraphy. You give it some learning lines where you’ve mapped out blocks of clastics or carbonates, or turbidites, something like that, and it learns from that. And then it goes and maps that stratigraphy anywhere it can find it in the 3D. It’s very unique. You need to start educating yourself about what’s out there.

VERN STEFANIC: Well, you’re absolutely right. I try to in the world that I work with, but I’m always impressed that in the world that you’re part of, there’s so much change that keeps coming. And it’s just fast. And it’s again, and again, and again. And the ability that people such as yourself have had to embrace that and to use technology in a new way– in fact, I’m going to guess– have you offered suggestions to anyone who’s developing technology? Have you gotten to the point where you say, you know what we need, we need now for it to do this?

DEBORAH SACREY: I’m still on a development team for the software I’m using.

VERN STEFANIC: You’re on the development– OK.

DEBORAH SACREY: Yeah, so we’re forward thinking two years down the road what kind of– what can we anticipate the technology needs to be doing two to three years down the road.

VERN STEFANIC: Can you talk about any of that?

DEBORAH SACREY: Well, I mean, I can. And certainly, this CNN technology is part of it. We’ve been approached by several larger companies to put this into our software. And they’re willing to help pay for the effort to do that, because it would take their departments too long. We’re too far advanced where we are. And it would take them too long to recreate the wheel.

So they’d rather support us to get the technology that they need, that they need for their data. And the beauty of all this is that you don’t have to shoot anything new. You don’t even necessarily have to reprocess it. You’re just getting more out of it than you’ve ever been able to get before.

VERN STEFANIC: That is beauty.

DEBORAH SACREY: It is cool. Because a lot of people don’t have the money to go shoot more data or reprocess it. They just want to take advantage of the stuff they already have in their archives.

VERN STEFANIC: When people talk about the industry being a sunset industry, I think they’re not giving it proper credit for what’s going on.

DEBORAH SACREY: Oh, I see this totally revitalizing– one of the examples I showed yesterday was the two million barrel oil field that I found in Brazoria County, Texas. And it’s from a six-foot thick offshore bar at 10,800 feet. Well, that reflector is so weak– I mean, it’s not a bright spot. It doesn’t show up. People have ignored it for a long time for drilling.

But I can prove that there’s two million barrels of oil in that six-foot thick sand that covers about 1,900 acres. So how many of those little things that we’ve ignored for years and years are still out there to be found? That’s what I’m saying. This technology is going to give us another little push. It’ll make us more efficient in the unconventional world. It will definitely help us find the subtle traps in the conventional world.

VERN STEFANIC: So there you have it. If you’re part of this profession now, you’re part of this industry now, don’t be discouraged. There’s actually great work to be done.

DEBORAH SACREY: Oh, there’s a lot of stuff left to find. We haven’t begun to quit finding yet. I mean, it’s just like Oklahoma– I grew up in Oklahoma. And for years and years, all the structural traps had been drilled, and all the plays had been gone through. And everyone said, well, Oklahoma’s had it. And we turn around and there’s a new play. And you turn around, there’s the unconventional. There’s the SCOOP and STACK. There’s all the Woodford. There’s all these things that reenergized Oklahoma. And it’s been poked and punched for over 100 years. And people are still finding stuff. So we just– we just have to put better glasses on.

VERN STEFANIC: Yeah.

DEBORAH SACREY: We have to sharpen our goggles. And get in there and see what’s left.

VERN STEFANIC: Great words. Deb, thanks for this conversation today.

DEBORAH SACREY: You’re welcome.

VERN STEFANIC: Thank you. I hope it’s a conversation that we’ll continue. We’ll continue having this talk, because it sounds like there’s going to be new chapters added to the story.

DEBORAH SACREY: Oh, yeah, and I’m really– you know, I’m 66 years old, but I’m not ready to give it up yet. I’m having way too much fun.

VERN STEFANIC: That’s great. Thank you.

DEBORAH SACREY: You’re welcome.

VERN STEFANIC: And thank you for being part of this edition of Energy Insights, the AAPG Podcast, coming to you on the AAPG website, but now coming to you on platforms wherever you want to look. Look up AAPG Energy Insights, we’ll be there. And we’re glad you’re part of it. But for now, thanks for listening.

The Oil Industry’s Cyber–Transformation Is Closer Than You Think

By David Brown, Explorer Correspondent
Published with permission: AAPG Explorer
June 2019

The concept of digital transformation in the oil and gas industry gets talked about a lot these days, even though the phrase seems to have little specific meaning.

So, will there really be some kind of extensive cyber-transformation of the industry over the next decade?

“No,” said Tom Smith, president and CEO of Geophysical Insights in Houston.

Instead, it will happen “over the next three years,” he predicted.

Machine Learning

Much of the industry’s transformation will come from advances in machine learning, as well as continuing developments in computing and data analysis going on outside of oil and gas, Smith said.

Through machine learning, computers can develop, modify and apply algorithms and statistical models to perform tasks without explicit instructions.

“There’s basically been two types of machine learning. There’s ‘machine learning’ where you are training the machine to learn and adapt. After that’s done, you can take that little nugget (of adapted programming) and use it on other data. That’s supervised machine learning,” Smith explained.

“What makes machine learning so profoundly different is this concept that the program itself will be modified by the data. That’s profound,” he said.

Smith earned his master’s degree in geology from Iowa State University, then joined Chevron Geophysical as a processing geophysicist. He later left to complete his doctoral studies in 3-D modeling and migration at the University of Houston.

In 1984, he founded the company Seismic Micro-Technology, which led to development of the KINGDOM software suite for integrated geoscience interpretation. Smith launched Geophysical Insights in 2009 and introduced the Paradise analysis software, which uses machine learning and pattern recognition to extract information from seismic data.

He’s been named a distinguished alumnus of both Iowa State and the University of Houston College of Natural Sciences and Mathematics, and received the Society of Exploration Geophysicists Enterprise Award in 2000.

Smith sees two primary objectives for machine learning: replacing repetitive tasks with machines – essentially, doing things faster – and discovery, or identifying something new.

“Doing things faster, that’s the low-hanging fruit. We see that happening now,” Smith said.

Machine learning is “very susceptible to nuances of the data that may not be apparent to you and I. That’s part of the ‘discovery’ aspect of it,” he noted. “It isn’t replacing anybody, but it’s the whole process of the data changing the program.”

Most machine learning now uses supervised learning, which employs an algorithm and a training dataset to “teach” improvement. Through repeated processing, prediction and correction, the machine learns to achieve correct outcomes.
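A generic illustration of that supervised-learning loop, using scikit-learn on synthetic data (a sketch of the concept only, not code from any product or workflow discussed in the article):

```python
# Minimal supervised-learning sketch: a model "learns" from labeled training
# data by repeated prediction and correction, then is applied to data it has
# never seen. Synthetic points stand in for real seismic-derived features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic "attributes" for two classes (e.g., two facies), 500 samples each.
class_a = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
class_b = rng.normal(loc=2.0, scale=1.0, size=(500, 4))
X = np.vstack([class_a, class_b])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Iterative training: each pass adjusts the model to reduce its prediction error.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# The trained "nugget" can now be applied to held-out data.
print("held-out accuracy:", model.score(X_test, y_test))
```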

“Another aspect is that the first, fundamental application of supervised machine learning is in classification,” Smith said.

But, “in the geosciences, we’re not looking for more of the same thing. We’re looking for anomalies,” he observed.

Multidimensional Analysis

The next step in machine learning is unsupervised learning. Its primary goal is to learn more about datasets by modeling the structure or distribution of the data – “to self-discover the characteristics of the data,” Smith said.

“If there are concentrations of information in the data, the unsupervised machine learning will gravitate toward those concentrations,” he explained.

As a result of changes in geology and stratigraphy, patterns are created in the amplitude and attributes generated from the seismic response. Those patterns correspond to subsurface conditions and can be understood using machine-learning and deep-learning techniques, Smith said.

Human seismic interpreters can see only in three dimensions, he noted, but the patterns resulting from multiple seismic attributes are multidimensional. He used the term “attribute space” to distinguish from three-dimensional seismic volumes.

In geophysics, unsupervised machine learning was first used to analyze multiple seismic attributes to classify these patterns, a result of concentrations of neurons.

“We see the effectiveness of (using multiple) attributes to resolve thin beds in unconventional plays and to expose direct hydrocarbon indicators in conventional settings. Existing computing hardware and software now routinely handle multiple-attribute analysis, with 5 to 10 being typical numbers,” he said.

Machine-learning and deep-learning technology, such as the use of convolutional neural networks (CNN), has important practical applications in oil and gas, Smith noted. For instance, the “subtleties of shale-sand fan sequences are highly suited” to analysis by machine learning-enhanced neural networks, he said.

“Seismic facies classification and fault detection are just two of the important applications of CNN technology that we are putting into our Paradise machine-learning workbench this year,” he said.

A New Commodity

Just as a seismic shoot or a seismic imaging program have monetary value, algorithms enhanced by machine-learning systems also are valuable for the industry, explained Smith.

In the future, “people will be able to buy, sell and exchange machine-learning changes in algorithms. There will be industry standards for exchanging these ‘machine-learning engines,’ if you will,” he said.

As information technology continues to advance, those developments will affect computing and data analysis in oil and gas. Smith said he’s been pleased to see the industry “embracing the cloud” as a shared computing-and-data-storage space.

“An important aspect of this is, the way our industry does business and the way the world does business are very different,” Smith noted.

“When you look at any analysis of Web data, you are looking at many, many terabytes of information that’s constantly changing,” he said.

In a way, the oil and gas industry went to school on very large sets of seismic data when huge datasets were not all that common. Now the industry has some catching up to do with today’s dynamic data-and-processing approach.

For an industry accustomed to thinking in terms of static, captured datasets and proprietary algorithms, that kind of mind-shift could be a challenge.

“There are two things we’re going to have to give up. The first thing is giving up the concept of being able to ‘freeze’ all the input data,” Smith noted.

“The second thing we have to give up is, there’s been quite a shift to using public algorithms. They’re cheap, but they are constantly changing,” he said.

Moving the Industry Forward

Smith will serve as moderator of the opening plenary session, “Business Breakthroughs with Digital Transformation Crossing Disciplines,” at the upcoming Energy in Data conference in Austin, Texas.

Presentations at the Energy in Data conference will provide information and insights for geologists, geophysicists and petroleum engineers, but its real importance will be in moving the industry forward toward an integrated digital transformation, Smith said.

“We have to focus on the aspects of machine-learning impact not just on these three, major disciplines, but on the broader perspective,” Smith explained. “The real value of this event, in my mind, has to be the integration, the symbiosis of these disciplines.”

While the conference should appeal to everyone from a company’s chief information officer on down, recent graduates will probably find the concepts most accessible, Smith said.

“Early-career professionals will get it. Mid-managers will find it valuable if they dig a little deeper into things,” he said.

And whether it’s a transformation or simply part of a larger transition, the coming change in computing and data in oil and gas will be one of many steps forward, Smith said.

“Three years from now we’re going to say, ‘Gosh, we were in the Dark Ages three years ago,’” he said. “And it’s not going to be over.”

Applications of Machine Learning for Geoscientists – Permian Basin

By Carrie Laudon
Published with permission: Permian Basin Geophysical Society 60th Annual Exploration Meeting
May 2019

Abstract

Over the last few years, because of the increase in low-cost computer power, individuals and companies have stepped up investigations into the use of machine learning in many areas of E&P. For the geosciences, the emphasis has been in reservoir characterization, seismic data processing, and to a lesser extent interpretation. The benefits of using machine learning (whether supervised or unsupervised) have been demonstrated throughout the literature, and yet the technology is still not a standard workflow for most seismic interpreters. This lack of uptake can be attributed to several factors, including a lack of software tools, clear and well-defined case histories and training. Fortunately, all these factors are being mitigated as the technology matures. Rather than looking at machine learning as an adjunct to the traditional interpretation methodology, machine learning techniques should be considered the first step in the interpretation workflow.

By using statistical tools such as Principal Component Analysis (PCA) and Self Organizing Maps (SOM) a multi-attribute 3D seismic volume can be “classified”. The PCA reduces a large set of seismic attributes both instantaneous and geometric, to those that are the most meaningful. The output of the PCA serves as the input to the SOM, a form of unsupervised neural network, which, when combined with a 2D color map facilitates the identification of clustering within the data volume. When the correct “recipe” is selected, the clustered or classified volume allows the interpreter to view and separate geological and geophysical features that are not observable in traditional seismic amplitude volumes. Seismic facies, detailed stratigraphy, direct hydrocarbon indicators, faulting trends, and thin beds are all features that can be enhanced by using a classified volume.

The tuning-bed thickness or vertical resolution of seismic data traditionally is based on the frequency content of the data and the associated wavelet. Seismic interpretation of thin beds routinely involves estimation of tuning thickness and the subsequent scaling of amplitude or inversion information below tuning. These traditional below-tuning-thickness estimation approaches have limitations and require assumptions that limit accuracy. The below tuning effects are a result of the interference of wavelets, which are a function of the geology as it changes vertically and laterally. However, numerous instantaneous attributes exhibit effects at and below tuning, but these are seldom incorporated in thin-bed analyses. A seismic multi-attribute approach employs self-organizing maps to identify natural clusters from combinations of attributes that exhibit below-tuning effects. These results may exhibit changes as thin as a single sample interval in thickness. Self-organizing maps employed in this fashion analyze associated seismic attributes on a sample-by-sample basis and identify the natural patterns or clusters produced by thin beds. Examples of this approach to improve stratigraphic resolution in both the Eagle Ford play, and the Niobrara reservoir of the Denver-Julesburg Basin will be used to illustrate the workflow.

Introduction

Seismic multi-attribute analysis has always held the promise of improving interpretations via the integration of attributes which respond to subsurface conditions such as stratigraphy, lithology, faulting, fracturing, fluids, pressure, etc. The benefits of using machine learning (whether supervised or unsupervised) have been demonstrated throughout the literature, and yet the technology is still not a standard workflow for most seismic interpreters. This lack of uptake can be attributed to several factors, including a lack of software tools, clear and well-defined case histories, and training. This paper focuses on an unsupervised machine learning workflow utilizing Self-Organizing Maps (Kohonen, 2001) in combination with Principal Component Analysis to produce classified seismic volumes from multiple instantaneous attribute volumes. The workflow addresses several significant issues in seismic interpretation: it analyzes large amounts of data simultaneously; it determines relationships between different types of data; it is sample-based and produces high-resolution results; and it reveals geologic features that are difficult to see in conventional approaches.

Principal Component Analysis (PCA)

Multi-dimensional analysis and multi-attribute analysis go hand in hand. Because individuals are grounded in three-dimensional space, it is difficult to visualize what data in a higher-dimensional space looks like. Fortunately, mathematics doesn’t have this limitation, and the results can be easily understood with conventional 2D and 3D viewers.

Working with multiple instantaneous or geometric seismic attributes generates tremendous volumes of data. These volumes contain huge numbers of data points which may be highly continuous, greatly redundant, and/or noisy (Coleou et al., 2003). Principal Component Analysis (PCA) is a linear technique for data reduction which maintains the variation associated with the larger data sets (Guo and others, 2009; Haykin, 2009; Roden and others, 2015). PCA can separate attribute types by frequency, distribution, and even character. PCA technology is used to determine which attributes may be ignored due to their very low impact on neural network solutions and which attributes are most prominent in the data. Figure 1 illustrates the analysis of a data cluster in two directions, offset by 90 degrees. The first principal component (eigenvector 1) analyzes the data cluster along its longest axis. The second principal component (eigenvector 2) analyzes the data cluster variations perpendicular to the first principal component. As stated in the diagram, each eigenvector is associated with an eigenvalue which shows how much variance there is in the data along that eigenvector.


Figure 1. Two attribute data set illustrating the concept of PCA
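A minimal NumPy sketch of the two-attribute case in Figure 1 (illustrative only, not the implementation used in this study): the eigenvectors of the attribute covariance matrix give the principal directions, and each eigenvalue measures the variance along its eigenvector.

```python
# Hedged sketch of PCA on a two-attribute data set (cf. Figure 1).
# Synthetic, correlated "attributes" stand in for real seismic attributes.
import numpy as np

rng = np.random.default_rng(42)
n = 2000
attr1 = rng.normal(size=n)
attr2 = 0.8 * attr1 + 0.3 * rng.normal(size=n)      # correlated second attribute
data = np.column_stack([attr1, attr2])

# Center the data, then eigendecompose the covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)     # returned in ascending order

# Sort descending: eigenvector 1 points along the longest axis of the cluster,
# eigenvector 2 is perpendicular to it; the eigenvalues are the variances.
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

explained = eigenvalues / eigenvalues.sum()
print("variance explained by PC1, PC2:", np.round(explained, 3))
```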

The next step in PCA analysis is to review the eigenspectrum to select the most prominent attributes in a data set. The following example is taken from a suite of instantaneous attributes over the Niobrara formation within the Denver-Julesburg Basin. Results for eigenvector 1 are shown in Figure 2, with three attributes (sweetness, envelope, and relative acoustic impedance) being the most prominent.


Figure 2. Results from PCA for first eigenvector in a seismic attribute data set

Utilizing a cutoff of 60% in this example, attributes were selected from PCA for input to the neural network classification. For the Niobrara, eight instantaneous attributes from four of the first six eigenvectors were chosen and are shown in Table 1. The PCA allowed identification of the most significant attributes from an initial group of 19 attributes.


Table 1: Results from PCA for Niobrara Interval shows which instantaneous attributes will be used in a Self-Organizing Map (SOM).
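The selection step can be sketched as follows, under the assumption that the percentage cutoff is applied to eigenvector loadings (the actual workflow may apply it differently); the attribute names and loading values below are placeholders for illustration, not the study’s values.

```python
# Hedged sketch of attribute selection from PCA loadings with a 60% cutoff.
# Attribute names and loadings are illustrative placeholders only.
import numpy as np

attributes = ["sweetness", "envelope", "rel_acoustic_impedance",
              "inst_frequency", "thin_bed", "hilbert", "cosine_phase", "sobel"]

# Rows = eigenvectors (from PCA of the attribute covariance matrix),
# columns = attribute loadings. Values below are made up for illustration.
loadings = np.array([
    [0.62, 0.55, 0.48, 0.10, 0.05, 0.12, 0.08, 0.11],
    [0.10, 0.15, 0.08, 0.58, 0.52, 0.40, 0.12, 0.09],
])

cutoff = 0.60
selected = set()
for vec in loadings:
    threshold = cutoff * np.abs(vec).max()      # 60% of the largest loading
    for name, value in zip(attributes, vec):
        if abs(value) >= threshold:
            selected.add(name)

print(sorted(selected))
```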

Self-Organizing Maps

Teuvo Kohonen, a Finnish mathematician, invented the concept of Self-Organizing Maps (SOM) in 1982 (Kohonen, 2001). Self-Organizing Maps employ the use of unsupervised neural networks to reduce very high dimensions of data to a classification volume that can be easily visualized (Roden and others, 2015). Another important aspect of SOMs is that every seismic sample is used as input to classification, as opposed to wavelet-based classification.

Figure 3 diagrams the SOM concept for 10 attributes derived from a 3D seismic amplitude volume. Within the 3D seismic survey, samples are first organized into attribute points with similar properties, called natural clusters, in attribute space. Within each cluster, new, empty, multi-attribute samples, named neurons, are introduced. The SOM neurons seek out natural clusters of like characteristics in the seismic data and produce a 2D mesh that can be illustrated with a two-dimensional color map. In other words, the neurons “learn” the characteristics of a data cluster through an iterative process (epochs) of cooperative, then competitive, training. When the learning is completed, each unique cluster is assigned to a neuron number and each seismic sample is then classified (Smith, 2016).


Figure 3. Illustration of the concept of a Self-Organizing Map
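The mechanics described above can be sketched in a few lines of NumPy (a toy implementation for illustration, not the algorithm used in this study): neurons compete for each multi-attribute sample, the winner and its grid neighbors move toward that sample, and over successive epochs each neuron settles onto a natural cluster.

```python
# Toy Self-Organizing Map: competitive winner selection plus cooperative
# neighborhood updates over several epochs. Illustrative only.
import numpy as np

def train_som(samples, grid_rows=8, grid_cols=8, epochs=20,
              lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_neurons = grid_rows * grid_cols
    weights = rng.normal(size=(n_neurons, samples.shape[1]))
    # Grid coordinates of each neuron, used for the neighborhood function.
    grid = np.array([(r, c) for r in range(grid_rows) for c in range(grid_cols)], float)

    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)              # learning rate decays
        sigma = sigma0 * (1.0 - epoch / epochs) + 0.5  # neighborhood shrinks
        for x in rng.permutation(samples):
            # Competitive step: find the best-matching neuron.
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Cooperative step: neighbors of the winner also move toward x.
            grid_dist = np.linalg.norm(grid - grid[winner], axis=1)
            influence = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))
            weights += lr * influence[:, None] * (x - weights)
    return weights

def classify(samples, weights):
    """Assign every multi-attribute sample to its winning neuron number."""
    dists = np.linalg.norm(samples[:, None, :] - weights[None, :, :], axis=2)
    return np.argmin(dists, axis=1)
```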

Figures 4 and 5 show a simple synthetic example using two attributes, amplitude and the Hilbert transform. Synthetic reflection coefficients are convolved with a simple wavelet, 100 traces are created, and noise is added. When the attributes are cross plotted, clusters of points can be seen in the cross plot. The colored cross plot shows the attributes after SOM classification into 64 neurons with random colors assigned. In Figure 5, the individual clusters are identified and mapped back to the events on the synthetic. The SOM has correctly distinguished each event in the synthetic.


Figure 4. Two attribute synthetic example of a Self-Organizing Map. The amplitude and Hilbert transform are cross plotted. The colored cross plot shows the attributes after classification into 64 neurons by SOM.


Figure 5. Synthetic SOM example with neurons identified by number and mapped back to the original synthetic data
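One way such a synthetic test might be set up is sketched below, assuming SciPy for the Hilbert transform and the open-source minisom package as a stand-in SOM (not the software used to produce Figures 4 and 5); the wavelet, sampling, and reflector positions are illustrative choices.

```python
# Hedged sketch of the two-attribute synthetic test (amplitude + Hilbert transform).
# Uses scipy and the third-party minisom package as a generic SOM; illustrative only.
import numpy as np
from scipy.signal import hilbert
from minisom import MiniSom

rng = np.random.default_rng(1)

# Build a simple zero-phase Ricker-style wavelet by hand (2 ms sampling, 30 Hz).
dt, f = 0.002, 30.0
t = np.arange(-0.05, 0.05 + dt, dt)
wavelet = (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

# 100 traces of sparse synthetic reflection coefficients, convolved with the
# wavelet, plus a little random noise.
n_traces, n_samples = 100, 200
traces = np.zeros((n_traces, n_samples))
for i in range(n_traces):
    rc = np.zeros(n_samples)
    rc[[40, 90, 140]] = [0.8, -0.6, 0.5]          # three reflectors
    traces[i] = np.convolve(rc, wavelet, mode="same")
traces += 0.02 * rng.normal(size=traces.shape)

# Two attributes per sample: amplitude and its Hilbert transform (quadrature).
amp = traces.ravel()
quad = np.imag(hilbert(traces, axis=1)).ravel()
samples = np.column_stack([amp, quad])

# Classify every sample into one of 64 neurons (8x8 SOM), as in Figures 4 and 5.
som = MiniSom(8, 8, 2, sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(samples, 5000)
neuron_ids = np.array([8 * i + j for i, j in (som.winner(s) for s in samples)])
print("samples per neuron (first few):", np.bincount(neuron_ids, minlength=64)[:8])
```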

Results for Niobrara and Eagle Ford

In 2018, Geophysical Insights conducted a proof of concept on 100 square miles of multi-client 3D data jointly owned by Geophysical Pursuit, Inc. (GPI) and Fairfield Geotechnologies (FFG) in the Denver-Julesburg Basin (DJ). The purpose of the study was to evaluate the effectiveness of a machine learning workflow to improve resolution within the reservoir intervals of the Niobrara and Codell formations, the primary targets for development in this portion of the basin. An amplitude volume was resampled from 2 ms to 1 ms and, along with horizons, loaded into the Paradise® machine learning application, where attributes were generated. PCA was used to identify which attributes were most significant in the data, and these were used in a SOM to evaluate the interval from Top Niobrara to Greenhorn (Laudon and others, 2019).
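The resampling step can be sketched with SciPy (a generic illustration, not the Paradise loader): doubling the number of time samples takes a trace from a 2 ms to a 1 ms interval ahead of attribute generation.

```python
# Hedged sketch: resample a seismic trace from a 2 ms to a 1 ms sample interval
# using band-limited (Fourier) resampling. Illustrative only.
import numpy as np
from scipy.signal import resample

dt_in = 0.002                                          # 2 ms input sampling
trace = np.random.default_rng(0).normal(size=1500)     # stand-in for a real trace

trace_1ms = resample(trace, 2 * trace.size)            # 1 ms output sampling
print(len(trace), "->", len(trace_1ms), "samples")
```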

Figure 6 shows results of an 8X8 SOM classification of 8 instantaneous attributes over the Niobrara interval, along with the original amplitude data. Figure 7 shows the same result with a well composite focused on the B chalk, the best section of the reservoir, which is difficult to resolve with individual seismic attributes. The SOM classification has resolved the chalk bench as well as other stratigraphic features within the interval.


Figure 6. North-South Inline showing the original amplitude data (upper) and the 8X8 SOM result (lower) from Top Niobrara through Greenhorn horizons. Seismic data is shown courtesy of GPI and FFG.


Figure 7. 8X8 Instantaneous SOM through Rotharmel 11-33 with well log composite. The B bench, highlighted in green on the wellbore, ties the yellow-red-yellow sequence of neurons. Seismic data is shown courtesy of GPI and FFG

 


Figure 8. 8X8 SOM results through the Eagle Ford. The primary target, the Lower Eagle Ford shale, had 16 neuron classes over 14-29 milliseconds of data. Seismic data shown courtesy of Seitel.

The results shown in Figure 9 reveal non-layer-cake facies bands that include details in the Eagle Ford’s basal clay-rich shale, the high-resistivity and low-resistivity Eagle Ford shale objectives, the Eagle Ford ash, and the upper Eagle Ford marl, which are overlain disconformably by the Austin Chalk.


Figure 9. Eagle Ford SOM classification shown with well results. The SOM resolves a high resistivity interval, overlain by a thin ash layer and finally a low resistivity layer. The SOM also resolves complex 3-dimensional relationships between these facies

Convolutional Neural Networks (CNN)

A promising development in machine learning is supervised classification via the application of convolutional neural networks (CNNs). Supervised methods have, in the past, not been efficient due to the laborious task of training the neural network. CNN is a deep learning approach to seismic classification. Here we apply CNN to fault detection on seismic data. The examples that follow show CNN fault detection results that did not require any interpreter-picked faults for training; rather, the network was trained using synthetic data. Two results are shown, one from the North Sea (Figure 10) and one from the Great South Basin, New Zealand (Figure 11).


Figure 10. Side by side comparison of coherence attribute to CNN fault probability attribute, North Sea


Figure 11. Comparison of Coherence to CNN fault probability attribute, New Zealand
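As a rough sketch of the kind of network involved (a generic fully convolutional model in PyTorch, not the architecture behind Figures 10 and 11), a few convolutional layers can map an amplitude patch to a per-pixel fault probability; in practice such a model would be trained on synthetic seismic images with known fault masks.

```python
# Hedged sketch of a fully convolutional fault-probability network in PyTorch.
# Architecture and sizes are illustrative; the published results use their own
# model trained on synthetic data.
import torch
import torch.nn as nn

class FaultProbabilityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),     # one logit per pixel
        )

    def forward(self, x):
        # x: (batch, 1, height, width) amplitude patch -> fault probability per pixel
        return torch.sigmoid(self.net(x))

if __name__ == "__main__":
    model = FaultProbabilityNet()
    patch = torch.randn(1, 1, 128, 128)          # stand-in amplitude patch
    prob = model(patch)
    print(prob.shape, float(prob.min()), float(prob.max()))
```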

Conclusions

Advances in compute power and algorithms are making the use of machine learning available on the desktop to seismic interpreters to augment their interpretation workflow. Taking advantage of today’s computing technology, visualization techniques, and an understanding of machine learning as applied to seismic data, PCA combined with SOMs efficiently distill multiple seismic attributes into classification volumes. When applied on a multi-attribute seismic sample basis, SOM is a powerful nonlinear cluster analysis and pattern recognition machine learning approach that helps interpreters identify geologic patterns in the data and has been able to reveal stratigraphy well below conventional tuning thickness.

In the fault interpretation domain, recent development of a Convolutional Neural Network that works directly on amplitude data shows promise to efficiently create fault probability volumes without the requirement of a labor-intensive training effort.

References

Coleou, T., M. Poupon, and A. Kostia, 2003, Unsupervised seismic facies classification: A review and comparison of techniques and implementation: The Leading Edge, 22, 942–953, doi: 10.1190/1.1623635.

Guo, H., K. J. Marfurt, and J. Liu, 2009, Principal component spectral analysis: Geophysics, 74, no. 4, 35–43.

Haykin, S., 2009, Neural networks and learning machines, 3rd ed.: Pearson.

Kohonen, T., 2001, Self-organizing maps: Third extended edition, Springer, Series in Information Sciences, Vol. 30.

Laudon, C., Stanley, S., and Santogrossi, P., 2019, Machine Learning Applied to 3D Seismic Data from the Denver-Julesburg Basin Improves Stratigraphic Resolution in the Niobrara, URTeC 337, in press.

Roden, R., and Santogrossi, P., 2017, Significant Advancements in Seismic Reservoir Characterization with Machine Learning, The First, v. 3, p. 14-19

Roden, R., Smith, T., and Sacrey, D., 2015, Geologic pattern recognition from seismic attributes: Principal component analysis and self-organizing maps, Interpretation, Vol. 3, No. 4, p. SAE59-SAE83.

Santogrossi, P., 2017, Classification/Corroboration of Facies Architecture in the Eagle Ford Group: A Case Study in Thin Bed Resolution, URTeC 2696775, doi: 10.15530/urtec-2017-2696775.

Video (in Chinese): Leveraging Deep Learning in Extracting Features of Interest from Seismic Data

 

Abstract:

 

Mapping and extracting features of interest is one of the most important objectives in seismic data interpretation. Due to the complexity of seismic data, geologic features identified by interpreters on seismic data using visualization techniques are often challenging to extract. With the rapid development in GPU computing power and the success obtained in computer vision, deep learning techniques, represented by convolutional neural networks (CNN), start to entice seismic interpreters in various applications. The main advantages of CNN over other supervised machine learning methods are its spatial awareness and automatic attribute extraction. The high flexibility in CNN architecture enables researchers to design different CNN models to identify different features of interest. In this webinar, using several seismic surveys acquired from different regions, I will discuss three CNN applications in seismic interpretation: seismic facies classification, fault detection, and channel extraction. Seismic facies classification aims at classifying seismic data into several user-defined, distinct facies of interest. Conventional machine learning methods often produce a highly fragmented facies classification result, which requires a considerable amount of post-editing before it can be used as geobodies. In the first application, I will demonstrate that a properly built CNN model can generate seismic facies with higher purity and continuity. In the second application, compared with traditional seismic attributes, I deploy a CNN model built for fault detection which provides smooth fault images and robust noise degradation. The third application demonstrates the effectiveness of extracting large scale channels using CNN. These examples demonstrate that CNN models are capable of capturing the complex reflection patterns in seismic data, providing clean images of geologic features of interest, while also carrying a low computational cost.

Tao Zhao

Research Geophysicist | Geophysical Insights

TAO ZHAO joined Geophysical Insights in 2017. As a Research Geophysicist, Dr. Zhao develops and applies shallow and deep machine learning techniques on seismic and well log data, and advances multiattribute seismic interpretation workflows. He received a B.S. in Exploration Geophysics from the China University of Petroleum in 2011, an M.S. in Geophysics from the University of Tulsa in 2013, and a Ph.D. in geophysics from the University of Oklahoma in 2017. During his Ph.D. work at the University of Oklahoma, Dr. Zhao was an active member of the Attribute-Assisted Seismic Processing and Interpretation (AASPI) Consortium developing pattern recognition and seismic attribute algorithms.

Video: Leveraging Deep Learning in Extracting Features of Interest from Seismic Data

 


To view this webinar in Chinese, please click here.

Transcript of the Webinar

Hal Green: Good morning and Buenos Dias to our friends in Latin America that are joining us. This webinar is serving folks in the North America and Latin America regions, and there may be others worldwide who are joining as well. This is Hal Green and I manage marketing at Geophysical Insights. We are delighted to welcome you to our continuing webinar series that highlights applications of machine learning to interpretation.

We also welcome Dr. Tao Zhao, our featured speaker, who will present on leveraging deep learning in extracting features of interest from seismic data. Dr. Zhao and I are in the offices of Geophysical Insights in Houston, Texas, along with Laura Cuttill, who is helping at the controls.

Now just a few comments about our featured speaker today. Dr. Tao Zhao joined Geophysical Insights in 2017 as a research geophysicist, where he develops and applies shallow and deep learning techniques on seismic and well log data and advances multi-attribute seismic interpretation workflows. He received a Bachelor of Science in Exploration Geophysics from the China University of Petroleum in 2011, a Master of Science in Geophysics from the University of Tulsa in 2013, and a Ph.D. in Geophysics from the University of Oklahoma in 2017. During his Ph.D. work at the University of Oklahoma, Dr. Zhao worked in the Attribute-Assisted Seismic Processing and Interpretation, or AASPI, Consortium, developing pattern recognition and seismic attribute algorithms, and he continues to participate actively in the AASPI Consortium, but now as a representative of Geophysical Insights. We’re delighted to have Tao join us today. At this point, we’re going to turn the presentation over to Tao and get started.

Tao Zhao: Hello to everyone, and thank you Hal for the introduction. As you can see by the title, today we will be talking about how we can use deep learning in the application of seismic data interpretation. We’re focusing on extracting different features of interest from seismic data. Here is a quick outline of today’s presentation. First, I will pose a question to you about how we see a feature on the seismic data versus how a machine or computer sees a feature on the seismic data to link to the reason why we want to use deep learning. Then there will be a very interesting story, or argument, behind the shallow learning versus the deep learning method. Today we’re only focusing on deep learning, so shallow learning is another main topic that people work on to be applied to seismic interpretation. Then there are three main applications I will talk about today. The first one is seismic facies classification, the second is fault detection, and the last one is channel extraction. Finally, there will be some conclusions to go with this presentation.

The first thing I want to talk about is actually a question: what does a human interpreter see versus what does a computer see on a seismic image? Here we have an example from the offshore New Zealand Taranaki Basin. On the left, you have a vertical seismic line of seismic amplitude, and on the right you have a time slice of the coherence attribute, just to show you how complex the geology is in this region. As human interpreters, we have no problem seeing features of different scales in this seismic image. For example, we have some geophysical phenomena here. Those are multiples, and those are features that we see on seismic data that may not relate to a specific geological feature. But here we have something that has a specific geologic meaning.

Here we have some stacked channels, and here we have a volcano, here we have tilted fault blocks, and here we have very well defined continuous layered sediments. So, as human beings, we have no problem to identify those features because we have very good cameras and very good processors. The cameras are, of course, our eyes which help us to identify those features in a very local scale or a tiny scale, as well as to capture the features in a very large scale, such as the whole package of the stacked channel. On the other hand we have a good processor, which is our brain. Our brain can understand that and can put the information together from both the local patterns and the very large scale patterns. But for a computer, there’s a problem because, by default, the computer only sees pixels from this image which means the computer only understands the intensity at each pixel.

For the computer to understand this image, we typically need to provide a suite of attributes for it to work on. For example, in the stacked channels we have several attributes that can quantify the stacked-channel facies, but those attributes do not align perfectly over the same region. In this particular region, the reflectors converge into each other, so an attribute that quantifies reflector convergence may best quantify this kind of feature at this local scale. Here we have discontinuity within the reflectors, so some coherence-type attribute may best quantify this feature. Here we have a continuous, gently dipping layer, so maybe it's as simple as the dip of the reflectors quantifying this local-scale feature. Finally, we have some very weak amplitude here, so we can just use the amplitude to quantify it.
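To make this idea of handing the computer attributes concrete, here is a minimal Python sketch, assuming nothing more than a placeholder 2D amplitude array; local variance and the trace envelope are used as crude stand-ins for a discontinuity measure and a reflection-strength attribute, not the AASPI attributes discussed in this talk.

import numpy as np
from scipy.ndimage import generic_filter
from scipy.signal import hilbert

section = np.random.randn(200, 300)                         # placeholder section: time samples x traces
local_variance = generic_filter(section, np.var, size=5)    # crude discontinuity proxy in a 5x5 window
envelope = np.abs(hilbert(section, axis=0))                  # reflection strength (trace envelope)

# Stacking such maps gives the per-sample attribute vectors a shallow classifier would work on.
attributes = np.stack([local_variance, envelope], axis=-1)   # shape: (200, 300, 2)

Each attribute map quantifies one local property well, which is exactly the limitation discussed above: no single map captures a complex facies everywhere.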

So, in short, each of the attributes may quantify a particular region within the big geologic facies, but not everywhere. The problem we then have is: how can we quantify the complex patterns in this relatively big window as one uniform facies? The end goal is something like this: we want to color-code the same seismic facies into the same color – one uniform color represented by one uniform value. For example, here we have all those facies color-coded, and the transparent regions mean we don't have a facies assigned. Or, actually, we do have a facies assigned there: we can call it facies zero, representing everything else. Although this result is an interpreter's manual pick, I will show you in the first application that after we train a deep learning model, the computer can do a pretty good job of mimicking those picks on other seismic slices.

Before I go into the actual applications, let me share with you a very interesting story. Here's a bet between shallow learning and deep learning. The bet happened on March 14, 1995, almost 24 years ago, between two very famous figures. On the left is Vladimir Vapnik, the inventor of support vector machines. On the right is Larry Jackel, his boss at Bell Labs at the time. One day Larry Jackel bet that within a few years people would have a good understanding of big neural networks – or deep neural networks – and would be using them with great success. As a counter-bet, Vapnik thought that even after 10 years people would still be having trouble with those big neural networks and might not use them anymore; they would turn instead to kernel methods such as support vector machines, which are shallow learning methods. They bet a very fancy dinner.

After five years it turned out that both of them were wrong. By the year 2000 people were still having trouble using big neural networks, but they were not dumping neural networks at all – they were still using them – so Vapnik lost as well. In fact, more than 10 years later, into the early 2010s, people started using very large, very deep neural networks such as convolutional neural networks. So both of them lost the bet and had to split the dinner. And guess who had a free dinner? Yann LeCun, who happened to be the witness to the bet. As we know, Yann LeCun is one of the inventors of the convolutional neural networks we use today.

So then there's the question of what shallow learning is and what deep learning is. Different people may have different answers, but for me the main distinction is whether the algorithm learns from features provided by the user or learns the features by itself. If we're using a shallow learning method such as a typical neural network – a multi-layer perceptron – the first step in applying it to seismic data is to extract seismic attributes and let the neural network classify on those attributes. That means the algorithm learns from the features, and here "features" means seismic attributes. On the other hand, if we're using a deep learning method to classify seismic data, we typically provide the raw input, which is the seismic amplitude. Some people may even use pre-stack seismic amplitude; here we're just using post-stack. It's still relatively raw data compared to the seismic attributes we calculate from the seismic amplitude. During training, a deep learning method automatically derives a great number of – I will call them attributes – from your input data and finds the ones that best represent the data so that your target classes can be well separated.

For example, here we have two seismic facies: one is a stacked channel, the other is a tilted fault block. If we want to separate those two features using a shallow learning method, this is what we have to do: choose a bunch of seismic attributes that best distinguish the two facies. Maybe we can use discontinuity, or dip magnitude, or amplitude variance, or even reflector convergence. But the problem is that even if we use all of those attributes, we probably won't get a perfect separation between the two facies, because the patterns are so complex; in every region they have a different response to a particular attribute. Don't get me wrong – I'm not saying those attributes are useless or that we don't need AASPI attributes at all. Seismic attributes are very useful for quantifying local properties, and they're very useful for visualization, because once we have the attributes it's very easy for us as human beings to identify the features. It just becomes difficult for a computer to use that information to separate these very complex patterns from the others.
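To make the distinction concrete, here is a minimal Python sketch of the two routes just described, using placeholder arrays (attr_vectors, labels, amp_patches) rather than real seismic data: a multi-layer perceptron classifies user-supplied attributes, while a small convolutional network takes raw amplitude patches and learns its own features. This is an illustration of the idea, not the workflow used in the talk.

import numpy as np
import torch
import torch.nn as nn
from sklearn.neural_network import MLPClassifier

# Shallow learning: the interpreter supplies the attributes, the classifier works on them.
attr_vectors = np.random.rand(1000, 4)     # e.g. coherence, dip magnitude, amplitude variance, convergence
labels = np.random.randint(0, 2, 1000)     # 0 = tilted fault block, 1 = stacked channel (placeholder labels)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
mlp.fit(attr_vectors, labels)

# Deep learning: the network receives raw amplitude and derives its own features during training.
class PatchCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                       # x: (batch, 1, height, width) raw amplitude patch
        return self.classifier(self.features(x).flatten(1))

amp_patches = torch.randn(8, 1, 64, 64)         # placeholder post-stack amplitude patches
logits = PatchCNN()(amp_patches)                # the learned filters play the role of attributes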

So what are we going to do? How do we quantitatively describe the difference between those two very complex facies? The answer is: let the machine tell us. If we're using a deep learning method – here, for example, I'm showing a very simple model that people call an encoder-decoder – then we just feed in the seismic amplitude data and, after training, we get classified seismic facies just like I showed at the very beginning, with each facies color-coded to a single value according to the training data we provide. For example, here it classifies the stacked channel into one uniform color, or a single value, and the tilted fault blocks into another. So deep learning automatically learns the most suitable attributes to use. Those attributes most likely won't make much sense to a human being looking at them directly, but the algorithm can use them to separate your target facies.
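For readers curious what such an encoder-decoder looks like in code, here is a minimal PyTorch sketch that maps a post-stack amplitude section to a per-sample facies map. The layer sizes and the nine-class output are illustrative assumptions, not the architecture used in the talk.

import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, n_facies=9):
        super().__init__()
        # Encoder: downsample the section while learning its own internal "attributes"
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Decoder: upsample back to the input resolution, one score map per facies
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, n_facies, 2, stride=2),
        )

    def forward(self, x):                          # x: (batch, 1, time, trace) seismic amplitude
        return self.decoder(self.encoder(x))       # (batch, n_facies, time, trace) per-sample scores

scores = EncoderDecoder()(torch.randn(1, 1, 256, 256))
facies = scores.argmax(dim=1)                      # one facies value per sample, as in the figures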

Let's look at the first application: seismic facies classification on the data set I used in the introduction. Being a test data set, it is relatively small, about one gigabyte in size. I did the study almost a year ago, and at that time it took about 90 minutes to run the training on a not-that-powerful single GPU. Right now, with growing computing power and better scripting and software libraries, we can do it much faster. There are 31 lines manually annotated from the seismic volume, and those are used for training and validation. For example, we can uniformly annotate some of the lines in this volume – those are the annotated lines – then train the model on, say, 29 lines and test the results on two lines that were not used in training. We can also do cross-validation, which means the first time we choose these 29 lines and test on the other two, and the next time we train on a different set of 29 lines and test on the two remaining lines. After several rounds, if we have a relatively stable result, we know that our model parameters are good to use and the result is reliable.
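A minimal sketch of that rotating train/test split might look like the following, where train_and_score is a hypothetical helper standing in for training the network and scoring the two held-out lines.

import numpy as np

annotated_lines = np.arange(31)                    # indices of the 31 manually annotated lines
for start in range(0, 30, 2):                      # rotate which two lines are held out
    test_lines = annotated_lines[start:start + 2]
    train_lines = np.setdiff1d(annotated_lines, test_lines)   # the remaining 29 lines
    # score = train_and_score(train_lines, test_lines)        # hypothetical training/evaluation helper
    # If the scores stay stable across rounds, the model parameters can be trusted.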

So here is the result on the training data after we run the training. This is a line that was used in training, and this is the manually interpreted result. As you can see, there are several seismic facies – whether geologically meaningful or representing a particular geophysical phenomenon – that were picked out by hand. After we run the training, we first want to test the neural network on the training data to see whether the network has converged and behaves well on it. This is the result on the same training line; if I flip back and forth, you can see the two images match very, very well. That's good, but not that interesting, because this line was used in training. What we really want to see is how the neural network performs on a testing line that was not used for training. So this is a line that was not used in training, this, again, is the hand-picked result, which we consider the ground truth, and this is the predicted result on the same line. That gives you an idea of how the network performs on data it hasn't seen before.

To measure the performance, or the quality of the prediction, we have several options for the performance metric. The most commonly used one may be sample accuracy, which is the fraction of samples that are correctly predicted; here 93% of samples are correctly predicted. In this case the metric is fine to use, because we don't have a huge imbalance among the classes. But in some cases, particularly if we're looking for a feature that makes up only a tiny fraction of the samples in the data set – for example, if you want to pick out only faults – this metric is very misleading. Say only 0.1% of your data is fault: even if you predict nothing for the faults and label every sample as non-fault, you still have 99.9% accuracy, yet you have predicted nothing for the faults.

A more robust metric is the intersection over union (IoU), which is defined like this: for a particular class – say the stacked channel complex – we take the intersection between the ground truth of that class, outlined by this region, and the predicted result, and divide it by the union, the overall extent of the ground truth and the predicted result together. So it basically measures how much overlap there is between your prediction and your ground truth. That's for one class; if you have more than one class – in this example we have nine facies – we average this measurement over all the classes. That essentially removes the imbalanced-data problem, because each class, no matter how many samples it contains, contributes equally to the final metric.
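In code, the class-averaged intersection over union can be written as a short function like this, assuming arrays that hold one integer class label per sample.

import numpy as np

def mean_iou(ground_truth, prediction, n_classes):
    ious = []
    for c in range(n_classes):
        truth_c = ground_truth == c
        pred_c = prediction == c
        intersection = np.logical_and(truth_c, pred_c).sum()
        union = np.logical_or(truth_c, pred_c).sum()
        if union > 0:                              # skip classes absent from both images
            ious.append(intersection / union)
    return np.mean(ious)                           # every class counts equally, regardless of size

# Sample accuracy, by contrast, is just the fraction of correctly labeled samples
# and can look deceptively high when one class dominates:
# accuracy = np.mean(ground_truth == prediction)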

For example, if you have one class with only 10 samples and another class with 1,000 samples, each of them contributes 0.5 to the final measurement. If only 2 samples of the 10-sample class are correctly predicted, the IoU for that class is 0.2, and if every sample of the 1,000-sample class is predicted correctly, its IoU is 1; averaging the two gives only 0.6. But if you were using sample accuracy, you would get an accuracy close to 1, which is not a good estimate of the real performance of your model.

So that was the first testing line, and here's another testing line. Again, this is the ground truth from an interpreter's manual picks, and this is the predicted result. As you can see, the two images again match pretty well. The main thing to notice is that although the boundaries don't match perfectly, we have a very clean body within each of the facies, which makes subsequent steps such as generating geobodies much, much easier. Also, for this particular case, the predicted result follows the reflectors pretty well. So I think we're happy with this result, and we can visualize it in 3D. Here we have an inline and a crossline of seismic amplitude, and we can overlay our prediction on those two lines – we actually have a volumetric prediction everywhere, shown here on two lines. This display is very useful for interpreters because it highlights all the regions the interpreter may be interested in; even though those regions are not 100% accurate, it's much easier to find what you're looking for with this color-coded map than by scanning through line by line without any highlights. And again, we can visualize those features in 3D as geobodies: here we have a very nice, well-defined gas chimney, and all the other facies as well, and you can crop the display however you prefer.

The second application I want to discuss today is fault detection. Here is the data set on which we want to test our fault detection. Again it is from offshore New Zealand, but from a different basin – here we're in the Great South Basin. A typical fault detection workflow starts with some sort of edge-detection attribute, for example coherence. Here we have coherence co-blended with the seismic amplitude, and as you can see, coherence does a pretty good job of highlighting the faults. But there are some problems with the coherence attribute. Being an edge-detection algorithm, coherence detects all the discontinuities in the data set. For example, here we have a bunch of strong coherence anomalies – very low coherence values – that are not faults. People call these features syneresis: cracks formed in a shaly formation as it loses water. Another problem with this image is that if we look very closely at the coherence we see stair-step artifacts, meaning the coherence anomaly along the fault surface is not smooth but highly segmented. That is related to limitations of this kind of algorithm, because it uses a vertical window.

So, to get a better fault detection result – a better initial attribute on which to run our fault surface generation algorithms – we did some studies using a neural network. This is the result of convolutional neural network fault detection. As you can see, we have very smooth faults, with almost no noise from other types of discontinuities. Let me flip back to the coherence and then to our CNN: you can see very well defined faults with almost no noise at all. This is a very promising result. So how did we get it? There are several ways to do fault detection using a CNN; some are very easy to implement, and some require a well-designed algorithm.

Let's start with something basic. The most basic, or most naive, way to implement CNN-based fault detection is to frame it as a classification problem: we pick some training data to represent faults and another class of training data to represent non-faults. Here all the green lines are training samples picked to represent faults, and the red triangles are what we picked to represent non-faults; I picked training samples on five lines. Here is the coherence image, just to show you what the faults look like in the seismic data. Being a naive implementation, the algorithm works like this: for every sample, we extract a small 3D patch around the sample, classify the patch as fault or non-fault, and assign that value to the center point. In this way, after we train the model, we can classify all the samples in the seismic volume one by one, using a sliding window the size of the 3D patch.
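A minimal sketch of that naive, patch-based scheme is shown below. Here patch_classifier is a hypothetical trained model that returns a fault probability for one small 3D patch; the triple loop over every sample is exactly why this approach is slow.

import numpy as np

def classify_volume(volume, patch_classifier, half=8):
    fault_prob = np.zeros_like(volume, dtype=float)
    nz, ny, nx = volume.shape
    for i in range(half, nz - half):
        for j in range(half, ny - half):
            for k in range(half, nx - half):
                patch = volume[i - half:i + half + 1,
                               j - half:j + half + 1,
                               k - half:k + half + 1]
                # classify the patch and assign the result to its center sample
                fault_prob[i, j, k] = patch_classifier(patch)
    return fault_prob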

This is the result of the naive implementation. I will say this result is kind of ugly, because the faults are relatively thick and not that continuous, and we have a lot of noise as well. At that time we thought about cleaning it up with some image processing techniques. So we took this result and ran it through an image processing workflow I call regularization, which is nothing but smoothing along the fault and sharpening across the fault. After this regularization, we have a result like this to compare against our raw CNN output. As you can see, we have much sharper fault images, and the noise has been cleaned up quite well.

At this point you may ask: who actually does the heavy lifting? Is it the CNN, or is it this image processing regularization? To answer that question, we brought in our coherence image and ran it through exactly the same regularization workflow, and this is what we got. Comparing this with the result that used the CNN fault detection as the raw output, it's pretty clear that once we use the CNN as our initial fault detection attribute, we get rid of that type of noise. Moreover, we get more continuous faults as well, compared to using coherence.

For a further comparison, we also did fault detection using a swarm intelligence workflow from a third-party vendor whose name I cannot disclose. This is the result of the swarm intelligence fault detection. As you can see, it brings out most of the faults pretty well, but it may be too sensitive to discontinuities: you get responses almost everywhere in the data set, and some of those are actually acquisition footprint or some other kind of noise. So if we used this result for fault surface generation, we might get a bunch of fault surfaces that are not real. If we zoom into this boxed region, on the left you have CNN-based fault detection and on the right you have coherence-based fault detection, and it's pretty clear that we don't have that kind of noise here, and the faults are very continuous and clean. Again, here is the swarm intelligence result, with a bunch of noise that may not be real faults. We can also view it on a vertical slice: here is the coherence-based result and here is the CNN-based result. Even though the coherence-based result went through the regularization step, the fault surfaces are not that continuous and we still have a bunch of other discontinuity responses. The CNN got rid of most of those, and the faults are very continuous and sharp.

But then you may notice a problem: this result is not as good as the one I showed at the very beginning. What's wrong? Lots of faults are missing, and in general it's just not that good. As I said, this is a very naive implementation. When we developed it, we thought maybe we could use an approach similar to the one we used for seismic facies classification, with that type of CNN, and this is what we got. This result is the same as the one I showed at the very beginning; the only difference is that here I used the image processing regularization to make the faults a little thinner. Again, we can use different types of neural networks, and this is just one possible way to do it, similar to the one we can use for seismic facies classification. We take a whole seismic line, if the data is relatively small, or a large 2D patch of data – say 200 samples by 200 samples – feed it into the network, and get a classification at every sample in the patch simultaneously. Once we move to this kind of algorithm, we get a much better defined fault image. We can then make the faults even thinner with morphological thinning, which is just skeletonization, to make everything one sample thick.

Looking at a time slice, this is the naive implementation, which I call 3D patch-based classification, and this is the segmentation. The segmentation network runs only in 2D, which means it takes 2D images for training; you then run the prediction on both inlines and crosslines and sum them together. So it's pseudo-3D, but it's actually 2D. Even in 2D, it still gives a better result than the 3D patch-based method: as you can see, it finds many more faults, and more continuous faults.
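A rough sketch of the pseudo-3D prediction and the skeletonization step might look like this, where seg_model is a hypothetical 2D segmentation network returning a fault-probability image for one line, and the (inline, crossline, time) axis order is an assumption.

import numpy as np
from skimage.morphology import skeletonize

def pseudo_3d_faults(volume, seg_model, threshold=0.5):
    # run the 2D network on every inline and every crossline, then combine the two passes
    inline_prob = np.stack([seg_model(volume[i]) for i in range(volume.shape[0])], axis=0)
    xline_prob = np.stack([seg_model(volume[:, j]) for j in range(volume.shape[1])], axis=1)
    combined = 0.5 * (inline_prob + xline_prob)
    # thin each time slice so the faults become one sample thick
    return np.stack([skeletonize(combined[:, :, t] > threshold)
                     for t in range(combined.shape[2])], axis=2)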

After that, we thought this result was okay, but we were only using 2D information – we trained on 2D lines. Can we use 3D information and train a 3D CNN? The answer is yes. This is the result we got from training a real 3D neural network on the same data set. Compared with the line-based CNN, I would say it gives about the same faults; in general the faults are cleaner and more continuous. But the biggest advantage of this 3D CNN fault detection network is that to train it I don't need any prior knowledge of the particular data set I want to predict on. The network was trained on data that does not include this data set. In other words, the network is general enough that we can apply it to many seismic data sets, as long as the data quality is relatively similar. It tolerates a certain amount of variation, but if you train on marine data and predict on land data, it's probably not going to work. So we still have some limitations on data quality, but in general it works very well, and once you take the training time away from the users, prediction becomes very fast. For example, this data set is maybe one gigabyte – 900 megabytes or so – and it takes only about a minute, or even less, to run on a single GPU.
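Applying such a pre-trained 3D network to a new volume can be sketched roughly as below, assuming a hypothetical fully convolutional model (pretrained_net) whose output has the same shape as its input block; the point is simply that, with no training step left for the user, prediction reduces to a fast pass over the volume.

import numpy as np
import torch

def predict_3d(volume, pretrained_net, block=64):
    out = np.zeros_like(volume, dtype=float)
    nz, ny, nx = volume.shape
    with torch.no_grad():                               # inference only, no gradients needed
        for i in range(0, nz, block):
            for j in range(0, ny, block):
                for k in range(0, nx, block):
                    cube = volume[i:i+block, j:j+block, k:k+block]
                    x = torch.from_numpy(cube).float()[None, None]     # (1, 1, depth, height, width)
                    out[i:i+block, j:j+block, k:k+block] = pretrained_net(x)[0, 0].numpy()
    return out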

Another thing we tried was to run fault detection using input data other than seismic amplitude. In this particular example, I ran fault detection on a self-organizing map (SOM) classification result. The reasoning is that some attributes, for example instantaneous phase or cosine of instantaneous phase, are very good at making your reflectors look very continuous – so continuous that they highlight the discontinuities between those continuous reflectors. If we run a SOM on those types of attributes and get a classification result like this, then we have pretty clear, well-defined faults as well. I trained my neural network on this image, hoping it could pick the faults as well as on seismic amplitude data, or maybe even better. This is the result of training a fault-detection neural network on the SOM classification result. As you can see, it picks out the faults really well. We may be missing a few faults here, but in general it gives a very good fault detection result, even though we're not using the seismic amplitude data.

At the very beginning, I thought this result might not be as good as using the seismic amplitude, because we're limiting the information we provide to the neural network, but in fact it did a pretty good job. Maybe the reason is that we carefully chose an attribute that is very good at highlighting – well, not actually highlighting the faults, but highlighting the continuous reflectors, so that the faults stand out on those attributes.

Again, we can show this same result on the seismic amplitude. This is how it looks on the seismic amplitude. Let me flip back to the SOM and then to the seismic amplitude. As you can see, those faults are very well defined.

Okay, the last application is channel extraction. This is actually similar to what we did for seismic facies, but in this case we're interested in one particular feature. The data set is again from the Taranaki Basin, offshore New Zealand, but it's a different survey from the first one I showed. In this survey we have a lot of channels; most of the channels in this part are relatively small scale, and here we have a very big channel in the shallower part. In this example I'm interested in extracting the big channel. Extracting the smaller-scale channels is actually easier than the big one, because for the smaller channels you can use the coherence response from the channel flanks or edges, or maybe the curvature response from the bottom of the channel; those responses largely overlap with where the channel is, so you can extract the channel body using those attributes. The problem with the big channel is that those attributes are only sensitive to its edges or its bottom – only to part of the channel, not the whole channel – which makes extracting the whole channel somewhat challenging.

In this example, I manually highlighted the channel on several lines of the data, and after training the network I extracted the channel over the whole volume; this is what I got. The channel matches the boundary on the seismic pretty well. While there is some disagreement here, in general I think the neural network does a pretty good job of giving a quick interpretation of this channel. We can, of course, see it in 3D: in the lower right corner, we have the channel displayed at each time level, growing from the bottom to the top, and at every time slice the channel matches the boundary on the seismic data pretty well.

Then I looked at another channel in the deeper part of the same survey – here we're at about 2 seconds; the previous one was at about 0.7 seconds, I think. Here we have a more sinuous channel, and again I picked the channel where it appears on a few lines. After training the network, we were able to match the channel boundary pretty well with the CNN classification. Again, we have some noise that leaked outside the channel, but I'm not too worried about that, because this is the raw output from the neural network without any post-editing. We can visualize it in 3D: the channel develops from this side, and as we go up, the sinuous channel starts to show up on this side and matches the boundary pretty well. And again, the remaining noise can be cleaned up with some post-editing.

So, conclusions. After these three applications, I think it's safe to say that deep learning methods, represented by convolutional neural networks, are powerful at classifying complex seismic reflection patterns into uniform facies, whether we're interested in multiple facies at the same time or in one particular feature of interest such as faults or channels. We demonstrated the application to these three problems with clear success. Finally, given the great flexibility in model architecture offered by different types of CNNs, and all the clever researchers out there, we can develop something tailored to a particular problem, so we believe CNNs are promising for other interpretation tasks as well. I would like to thank Geophysical Insights for permission to show this work, and New Zealand Petroleum and Minerals for providing to the general public the beautiful data sets used in this study.

Tao Zhao

Research Geophysicist | Geophysical Insights

TAO ZHAO joined Geophysical Insights in 2017. As a Research Geophysicist, Dr. Zhao develops and applies shallow and deep machine learning techniques on seismic and well log data, and advances multiattribute seismic interpretation workflows. He received a B.S. in Exploration Geophysics from the China University of Petroleum in 2011, an M.S. in Geophysics from the University of Tulsa in 2013, and a Ph.D. in geophysics from the University of Oklahoma in 2017. During his Ph.D. work at the University of Oklahoma, Dr. Zhao was an active member of the Attribute-Assisted Seismic Processing and Interpretation (AASPI) Consortium developing pattern recognition and seismic attribute algorithms.