
Applications of Convolutional Neural Networks (CNN) to Seismic Interpretation

As part of our quarterly series on machine learning, we were delighted to have Dr. Tao Zhao present applications of Convolutional Neural Networks (CNN) in a worldwide webinar on 20 March 2019 that was attended by participants on every continent. Dr. Zhao highlighted applications in seismic facies classification, fault detection, and the extraction of large-scale channels using CNN technology. If you missed the webinar, no problem! A video of the webinar can be streamed via the video player below. Please provide your name and business email address so that we may invite you to future webinars and other events. The abstract for Dr. Zhao’s talk follows:

We welcome your comments and questions and look forward to discussions on this timely topic.

Abstract:  Leveraging Deep Learning in Extracting Features of Interest from Seismic Data

Mapping and extracting features of interest is one of the most important objectives in seismic data interpretation. Due to the complexity of seismic data, geologic features identified by interpreters on seismic data using visualization techniques are often challenging to extract. With the rapid development of GPU computing power and the success achieved in computer vision, deep learning techniques, represented by convolutional neural networks (CNNs), have begun to attract seismic interpreters in various applications. The main advantages of CNNs over other supervised machine learning methods are their spatial awareness and automatic attribute extraction. The high flexibility of CNN architectures enables researchers to design different CNN models to identify different features of interest. In this webinar, using several seismic surveys acquired from different regions, I will discuss three CNN applications in seismic interpretation: seismic facies classification, fault detection, and channel extraction. Seismic facies classification aims to classify seismic data into several user-defined, distinct facies of interest. Conventional machine learning methods often produce a highly fragmented facies classification result, which requires a considerable amount of post-editing before it can be used as geobodies. In the first application, I will demonstrate that a properly built CNN model can generate seismic facies with higher purity and continuity. In the second application, I deploy a CNN model built for fault detection that, compared with traditional seismic attributes, provides smooth fault images and robust suppression of noise. The third application demonstrates the effectiveness of extracting large-scale channels using CNNs. These examples demonstrate that CNN models are capable of capturing the complex reflection patterns in seismic data, providing clean images of geologic features of interest, while also carrying a low computational cost.

 

Video: Leveraging Deep Learning in Extracting Features of Interest from Seismic Data

To view this webinar in Chinese, please click here.

Transcript of the Webinar

Hal Green: Good morning, and buenos días to our friends in Latin America who are joining us. This webinar serves folks in the North America and Latin America regions, and there may be others worldwide joining as well. This is Hal Green, and I manage marketing at Geophysical Insights. We are delighted to welcome you to our continuing webinar series that highlights applications of machine learning to interpretation.

We also welcome Dr. Tao Zhao, our featured speaker, who will present on leveraging deep learning in extracting features of interest from seismic data. Dr. Zhao and I are in the offices of Geophysical Insights in Houston, Texas, along with Laura Cuttill, who is helping at the controls.

Now just a few comments about our featured speaker today. Dr. Tao Zhao joined Geophysical Insights in 2017 as a research geophysicist, where he develops and applies shallow and deep learning techniques on seismic and well log data and advances multi-attribute seismic interpretation workflows. He received a Bachelor of Science in Exploration Geophysics from the China University of Petroleum in 2011, a Master of Science in Geophysics from the University of Tulsa in 2013, and a Ph.D. in Geophysics from the University of Oklahoma in 2017. During his Ph.D. work at the University of Oklahoma, Dr. Zhao worked in the Attribute-Assisted Seismic Processing and Interpretation, or AASPI, Consortium developing pattern recognition and seismic attribute algorithms, and he continues to participate actively in the AASPI Consortium, now as a representative of Geophysical Insights. We’re delighted to have Tao join us today. At this point, we’re going to turn the presentation over to Tao and get started.

Tao Zhao: Hello to everyone, and thank you, Hal, for the introduction. As you can see from the title, today we will be talking about how we can use deep learning in seismic data interpretation, focusing on extracting different features of interest from seismic data. Here is a quick outline of today’s presentation. First, I will pose a question about how we see a feature on seismic data versus how a machine or computer sees a feature on seismic data, to link to the reason why we want to use deep learning. Then there is a very interesting story, or argument, behind shallow learning versus deep learning. Today we’re only focusing on deep learning; shallow learning is another major topic that people are applying to seismic interpretation. Then there are three main applications I will talk about today. The first one is seismic facies classification, the second is fault detection, and the last one is channel extraction. Finally, there will be some conclusions.

The first thing I want to talk about is actually a question: what does a human interpreter see versus what does a computer see on a seismic image? Here we have an example from the Taranaki Basin, offshore New Zealand. On the left, you have a vertical seismic line of seismic amplitude, and on the right you have a time slice of the coherence attribute, just to show you how complex the geology is in this region. As human interpreters, we have no problem seeing features of different scales in this seismic image. For example, we have some geophysical phenomena here. Those are multiples, features that we see in seismic data that may not relate to a specific geological feature. But here we have something that has a specific geologic meaning.

Here we have some stacked channels, here we have a volcano, here we have tilted fault blocks, and here we have very well-defined, continuous layered sediments. As human beings, we have no problem identifying those features because we have very good cameras and very good processors. The cameras are, of course, our eyes, which help us identify features at a very local, tiny scale as well as capture features at a very large scale, such as the whole package of the stacked channels. On the other hand, we have a good processor, which is our brain. Our brain can understand the image and put the information together from both the local patterns and the very large-scale patterns. But for a computer, there’s a problem, because by default the computer only sees pixels in this image, which means the computer only understands the intensity at each pixel.

For the computer to understand this image, we typically need to provide a suite of attributes for it to work on. For example, in the stacked channel we have several attributes that can quantify the stacked channel facies, but those attributes do not align perfectly over the same region. In this particular region, we have reflectors that converge into each other, so an attribute that quantifies the convergence of the reflectors may best quantify this kind of feature at this local scale. Here we have discontinuity within these reflectors, so some coherence-type attribute may best quantify this feature. Here we have a continuous, gently dipping layer, so maybe it’s as simple as the dip of the reflectors quantifying this local-scale feature. Finally, we have some very weak amplitude here, so we can just use the amplitude to quantify this region.

So, in short, each of the attributes may quantify a particular region within the big geologic facies, but not everywhere. The problem we have, then, is: how can we quantify the complex patterns in this relatively big window as one uniform facies? The end goal is something like this. We want to color-code the same seismic facies into the same color, into one uniform color represented by one uniform value. For example, here we have all those facies color-coded, and all those transparent regions mean we don’t have a facies assigned. Or, actually, we do have a facies assigned; we can call it facies zero, representing everything else. Although this result is from an interpreter’s manual picks, I will show you in the first application that after we train a deep learning model, a computer can do a pretty good job of mimicking those picks on other seismic slices.

Before I go into the actual application, let me share with you a very interesting story. Here’s a bet between shallow learning and deep learning. The bet happened on March 14, 1995, almost 24 years ago, between two very famous figures: on the left is Vladimir Vapnik, the inventor of support vector machines, and on the right is Larry Jackel, his boss at Bell Labs at the time. One day, Larry Jackel bet that within a few years people would have a good understanding of big neural networks, or deep neural networks, and would start using them with great success. As a counter-bet, Vapnik thought that even after 10 years people would still be having trouble with those big neural networks and might not use them anymore; they would turn to kernel methods such as support vector machines, which are shallow learning methods. They bet a very fancy dinner.

After 5 years, it turned out that both of them were wrong, because by the year 2000 people were still having trouble using big neural networks. However, people were not abandoning neural networks at all; they were still using them, so Vapnik lost as well. In fact, 10 years later, in the early 2010s, people started to use very large, very deep neural networks such as convolutional neural networks. As a result, both of them lost the bet and had to split the dinner, and guess who had a free dinner? Yann LeCun, who happened to be the witness to their bet. As we know, Yann LeCun is one of the inventors of the convolutional neural network that we use today.

So then there’s the question of what is shallow learning and what is deep learning. Different people may have different answers, but for me, the main distinction between shallow learning and deep learning is whether an algorithm learns from features provided by the user or learns the features by itself. If we’re using a shallow learning method such as a typical neural network, a multi-layer perceptron, the first step in applying the neural network to seismic data is to extract seismic attributes and let the neural network classify on those attributes. That means the algorithm learns from the features, and here features means seismic attributes. On the other hand, if we’re using a deep learning method to classify seismic data, we typically provide the raw input, which is the seismic amplitude. Some people may even use pre-stack seismic amplitude; here we’re just using post-stack. It’s still relatively raw data compared to the seismic attributes calculated from the seismic amplitude. During the training process, a deep learning method automatically derives a great number of, I will call them attributes, from your input data and finds the best ones that represent the data so that your target classes can be well separated.

For example, here we have two seismic facies: one is a stacked channel, the other is a tilted fault block. If we want to separate those two features using a shallow learning method, this is what we’re going to do. We have to choose a bunch of seismic attributes that best distinguish those two facies. Maybe we can use discontinuity, dip magnitude, amplitude variance, or even reflector convergence. But the problem is, even if we use all those attributes, we probably won’t have a perfect separation between those two facies, because the patterns are so complex: in every region they have different responses to a particular attribute. But don’t get me wrong, I’m not saying that those attributes are useless, or that we don’t need to use AASPI attributes at all. Seismic attributes are very useful for quantifying local properties and for visualization purposes, because once we have the seismic attributes it’s very easy for us as human beings to identify the features. It just becomes difficult for a computer to use that information and separate those very complex patterns from the others.

So what are we going to do? How do you quantitatively describe the difference between those two very complex facies? The answer is: let the machine tell us. If we are using a deep learning method, and here for example I’m showing a very simple deep learning model that people call an encoder-decoder model, then we just feed in the seismic amplitude data and, after training, we have classified seismic facies just like I showed at the very beginning, with different facies color-coded to a single color according to the training data we provide. For example, here it classifies the stacked channel into one uniform color, a single value, and the tilted fault blocks into another value. So deep learning automatically learns the most suitable attributes to use. Those attributes most likely won’t make any sense to human beings if we look at them directly, but the computer and the algorithm can figure out the difference and use those attributes to separate your target facies.
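To make the idea concrete, here is a minimal sketch of an encoder-decoder segmentation network in TensorFlow/Keras, the kind of model referenced above. The filter counts, patch size, and nine-class output are illustrative assumptions, not the architecture actually used in the webinar.

```python
# Minimal encoder-decoder sketch for seismic facies segmentation.
# All hyperparameters below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_encoder_decoder(n_classes=9, patch=(128, 128)):
    inp = layers.Input(shape=(*patch, 1))  # single-channel seismic amplitude patch
    # Encoder: convolution + downsampling learns multi-scale "attributes"
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    # Decoder: transposed convolutions restore the original resolution
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    # One facies probability vector per sample in the patch
    out = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inp, out)

model = build_encoder_decoder()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Unlike a patch classifier that labels one center sample at a time, this style of network labels every sample of the input image in a single pass.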

Let’s look at the first application: seismic facies classification on the data set I used in the introduction. Being a testing data set, it is relatively small, about one gigabyte in size. Because I did the study almost a year ago, at that time it took me about 90 minutes to run the training on a single, not especially powerful GPU. Right now, with growing computing power, better scripting, and better software libraries, we can do it much faster. There are 31 lines manually annotated from the seismic volume, and those are used for training and validation. For example, we can uniformly interpret, or annotate, some of the lines in this volume; these are the annotated lines. We can train the model using, say, 29 lines and test the results on two lines that were not used in training. We can also do cross-validation, which means the first time we choose these 29 lines and test on the other two lines, and the next time we train on a different set of 29 lines and test on the two remaining lines. After several rounds, if we have a relatively stable result, then we know that our model parameters are pretty good and the result is reliable.
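A minimal sketch of that leave-two-lines-out rotation might look like the following; the line indices and the training/evaluation calls are placeholders, since the webinar does not show the actual code.

```python
# Leave-two-lines-out cross-validation over 31 annotated lines (sketch).
import numpy as np

annotated_lines = np.arange(31)  # indices of the 31 manually annotated lines

scores = []
for i in range(0, len(annotated_lines) - 1, 2):
    test_lines = annotated_lines[i:i + 2]                    # 2 held-out lines
    train_lines = np.setdiff1d(annotated_lines, test_lines)  # remaining ~29 lines
    # model = train_cnn(train_lines)              # placeholder training call
    # scores.append(evaluate(model, test_lines))  # placeholder evaluation call

# A stable score across rounds suggests the model parameters generalize well.
```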

So here is the result on the training data, after we run the training. This is a line that was used in training, and this is the manually interpreted result. As you can see, there are several seismic facies, whether geologically meaningful or representing a particular geophysical phenomenon, and those facies were picked by hand. After we run the training, we first want to test the neural network on the training data to see if the network has converged and behaves well on the training data. This is the result on the same training line. If I flip back and forth, you can see those two images match very, very well, which is good but not that interesting, because this line was used in training. What we really want to see is how the neural network performs on a testing line that was not used for training. So this is a line not used in training, and this is, again, the hand-picked result. We consider this the ground truth, and this is the predicted result on the same line. This gives you an idea of how the network performs on data it hasn’t seen before.

To measure the performance, or the quality of the prediction, we have several options for the performance metric. The most commonly used one may be sample accuracy, which means how many samples are correctly predicted; here, 93% of samples are correctly predicted. In this case, this metric is okay to use because we don’t have a huge imbalance among the classes. In some cases, particularly if we’re looking for a feature that makes up only a small fraction of the samples in the data set, such as picking out only faults, this metric is very misleading. Let’s say only 0.1% of your data are faults; even if you predict nothing for the faults and label every sample as non-fault, you still have 99.9% accuracy, but in fact you have predicted nothing for the faults. A more robust metric is the intersection over union (IoU), which is defined like this: for a particular class, say the stacked channel complex, we take the intersection between the ground truth of the stacked channel complex, outlined by this region, and the predicted result, then divide by the union of the ground truth and the predicted result. So basically it calculates how much overlap you have between your prediction and your ground truth. And this is only for one class. If you have more than one class, and in this example we have 9 different facies, we average this measurement over all the classes. That essentially removes the imbalanced-data problem, because each class, no matter how many samples it contains, contributes equally to the final metric.

For example, if you have one class with only ten samples and another class with 1,000 samples, each of them contributes 0.5 of the weight in your final measurement. If in the 10-sample class you have only 2 samples correctly predicted, you have an IoU for that class of 0.2; if you predict all the samples of the 1,000-sample class correctly, you have an IoU of 1 for that class, so the average is only 0.6. But if you use sample accuracy, you get an accuracy close to 1, which is not a good estimate of the real performance of your model.

So this is the first testing line, and here’s another testing line. Again, this is the ground truth from an interpreter’s manual interpretation, and this is the predicted result. As you can see, those two images again match pretty well, and the main thing to notice is that although the boundaries don’t match perfectly, we have a very clean body within each of the facies, which makes subsequent steps such as generating geobodies much, much easier. Also, for this particular case, the predicted result matches the reflectors pretty well. So I think we’re happy with this result, and we can visualize it in 3D. Here we have an inline and a crossline of seismic amplitude, and we can overlay our prediction on those two lines; we actually have a volumetric prediction everywhere, though only two lines are shown. This display is very useful for interpreters because it highlights all the regions the interpreter may be interested in. Although those regions are not 100% accurate, it’s much easier to find your target with this color-coded map than by scanning through line by line without any highlights. And again, we can visualize those features in 3D as a sort of geobody; as you can see, here we have a very nice, well-defined gas chimney and all the other facies as well. And you can crop it to whatever display you prefer.
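The class-averaged IoU described above is straightforward to compute. Here is a minimal sketch, with hypothetical label arrays reproducing the 10-sample versus 1,000-sample example, showing how mean IoU stays honest where sample accuracy does not.

```python
# Mean intersection-over-union versus sample accuracy (sketch).
import numpy as np

def mean_iou(truth, pred, n_classes):
    """Average IoU over classes: each class contributes equally,
    no matter how many samples it contains."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(truth == c, pred == c).sum()
        union = np.logical_or(truth == c, pred == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Hypothetical labels: class 0 has 1,000 samples, class 1 has only 10.
truth = np.array([0] * 1000 + [1] * 10)
pred = np.array([0] * 1000 + [1] * 2 + [0] * 8)  # only 2 of 10 rare samples found

acc = (truth == pred).mean()     # ~0.99, looks deceptively good
miou = mean_iou(truth, pred, 2)  # ~0.6, exposes the poorly predicted class
print(f"accuracy={acc:.3f}, mean IoU={miou:.3f}")
```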

The second application I want to discuss today is fault detection. Here is a data set on which we want to test our fault detection. Again, this data set is from offshore New Zealand, but from a different basin: here we’re in the Great South Basin. A typical fault detection workflow starts with some sort of edge detection attribute, for example coherence. Here we have coherence co-blended with the seismic amplitude, and as you can see, the coherence does a pretty good job of highlighting those faults. But there are some problems with this coherence attribute. Being an edge-detection algorithm, coherence detects all the discontinuities in the data set. For example, here we have a bunch of strong, very low-value coherence anomalies that are not faults. People call those things syneresis: cracks formed in a shaly formation when the formation loses water. Another problem with this image is that if we take a very close look at the coherence, we see some stair-step artifacts, which means the coherence anomaly along the fault surface is not smooth; instead, it’s highly segmented. That is related to limitations of this kind of algorithm, because it uses a vertical window. So, to get a better fault detection result, or a better initial attribute on which to run our fault surface generation algorithms, we turn to a neural network. This is the result of convolutional neural network fault detection. As you can see, here we have very, very smooth faults with almost no noise at all from other types of discontinuities. Let me flip back to the coherence and then to our CNN. You can see that we have very well-defined faults with almost no noise, so this is a very promising result. How do we get this result? There are several ways to do fault detection using a CNN; some are very easy to implement, and some may require a well-designed algorithm.

Let’s start with something basic. The most basic, or most naive, way to implement CNN-based fault detection is something like this. We define the problem as a classification problem, and we pick some training data to represent faults and another class of training data to represent non-faults. Here, all the green lines are training samples picked to represent faults, and the red triangles are what we picked to represent non-faults. I picked training data on 5 lines. Here is the coherence image, just to show you what the faults look like in the seismic data. Being a naive implementation, the algorithm works like this: for every sample, we extract a small 3D patch around the sample, classify the patch as either fault or non-fault, and assign that value to the center point. In this way, after we train the model, we can classify all the samples in the seismic volume one by one using a sliding window the size of this 3D patch.
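As a rough sketch, the sliding-window inference described above might look like the following; the patch size and the trained `model` (a binary fault/non-fault patch classifier) are assumptions for illustration, not the webinar’s actual code.

```python
# Naive sliding-window fault classification over a 3D volume (sketch).
# `model` is assumed to be a trained Keras classifier over small 3D patches,
# outputting [P(non-fault), P(fault)] per patch.
import numpy as np

def classify_volume(seismic, model, half=16):
    nz, ny, nx = seismic.shape
    fault_prob = np.zeros_like(seismic, dtype=np.float32)
    for z in range(half, nz - half):
        for y in range(half, ny - half):
            # Batch one row of patches at a time to keep memory bounded.
            patches = np.stack([
                seismic[z - half:z + half, y - half:y + half, x - half:x + half]
                for x in range(half, nx - half)])
            probs = model.predict(patches[..., np.newaxis], verbose=0)
            fault_prob[z, y, half:nx - half] = probs[:, 1]  # P(fault) at center
    return fault_prob
```

Classifying every sample independently like this is expensive, one patch evaluation per voxel, which is part of why the talk calls it naive and later moves to segmentation-style networks.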

This is the result of the naive implementation. I will say this result is kind of ugly: the faults are relatively thick and not that continuous, and we have a lot of noise here as well. At that time, we were thinking of cleaning it up using some image processing techniques. So we took this result and ran it through an image processing workflow, which I call regularization; it is nothing but smoothing along the fault and sharpening across the fault. After this regularization, we have a result like this, compared against our raw CNN output. As you can see, we have much sharper fault images, and it cleaned up the noise quite well.

At this point you may ask: who actually does the heavy lifting? Is it the CNN, or is it actually the image processing regularization? To answer this question, we brought in our coherence image and ran it through exactly the same regularization, or image processing, workflow, and this is what we got. Comparing this with the result that used the CNN fault detection as the raw output, it’s pretty clear that once we use the CNN as our initial fault detection attribute, we get rid of that type of noise. Moreover, we have more continuous faults as well, compared to using coherence.

For a further comparison, we also did fault detection using a swarm intelligence workflow from a third-party vendor whose name I cannot disclose. This is the result of the swarm intelligence fault detection, and as you can see, it brings out most of the faults pretty well. The problem is that it may be too sensitive to discontinuities: you have responses almost everywhere in the data set, and some of those are actually acquisition footprint or some other sort of noise. If we used this result to do fault surface generation, we might get a bunch of fault surfaces that are not real. If we zoom into this box region, on the left you have the CNN-based fault detection and on the right the coherence-based fault detection, and it’s pretty clear that we don’t have that kind of noise here, and the faults are very continuous and clean. Again, here is swarm intelligence, and we have a bunch of noise that may not be real faults. Then we can view it on a vertical slice. Here is the coherence-based result and here is the CNN-based result. It’s pretty clear that in the coherence-based result, even though we went through the regularization step, the fault surfaces are not that continuous, and we have a bunch of other types of discontinuity responses as well. The CNN got rid of most of those, and the faults are very continuous and sharp.

But then you may identify a problem: this result is not as good as the one I showed at the very beginning. What’s the problem? Lots of faults are missing, and in general it’s just not that good. As I said, this is a very naive implementation. At the time we developed it, we thought maybe we could use a similar approach to the one we used for seismic facies classification, with that type of CNN network, and this is what we got. This result is the same as the one I showed at the very beginning; the only difference is that here I used the image processing regularization to make the faults a little thinner. Again, we can use different types of neural networks, and this is just one possible way to do it. It is similar to the one we used for seismic facies classification. We take a whole seismic line, if your data is relatively small, or a large 2D patch of data, say 200 by 200 samples in size, feed it into the network, and get a classification at every sample in the patch simultaneously. Once we move to this kind of algorithm, we have a much better defined fault image. We can then make the faults even thinner by using morphological thinning, which is just skeletonization, to make everything one sample thick; a sketch of this step follows. Looking at a time slice, this is the naive implementation, which I call 3D patch-based classification, and this is the segmentation. The segmentation network only runs in 2D, which means it takes a 2D image for training; you then run your prediction on both inlines and crosslines and sum them together. So it’s pseudo-3D, but it’s actually 2D. Even in 2D, it still gives a better result than the 3D patch-based method. As you can see, it finds many more faults, and more continuous faults.
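The morphological thinning mentioned above is a standard image processing operation. Here is a minimal sketch using scikit-image; the threshold value is an assumption, and the actual implementation used in the webinar is not specified.

```python
# Thin a CNN fault-probability slice to one-sample-thick faults (sketch).
import numpy as np
from skimage.morphology import skeletonize

def thin_faults(fault_prob_slice, threshold=0.5):
    binary = fault_prob_slice > threshold  # binarize the fault probability
    return skeletonize(binary)             # collapse fault zones to 1-px curves
```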

After that, we thought: this result is okay, but we’re only using 2D information, since we train on 2D lines. Can we use 3D information and train a 3D CNN? The answer is yes. This is the result we got from training a true 3D neural network on the same data set. Comparing this result with the line-based CNN, I would say it gives about the same faults; in general, the faults are cleaner and more continuous. But the biggest advantage of this 3D CNN fault detection network is that to train it, I don’t need any prior knowledge of the particular data set I want to predict on. This network was trained on data that does not include this data set. In other words, this network is very general, so we can apply the trained network to many seismic data sets as long as the data quality is relatively similar. It allows a certain variation in the data, but if you train on marine data and predict on land data, it’s probably not going to work. We still have some limitations on data quality, but in general it works very, very well. And if you take the training time away from the users, when users apply this technique to predict faults, it becomes very, very fast. For example, this particular data set is maybe one gigabyte, maybe 900 megabytes or so, and it takes only about a minute, or even less, to run on a single GPU.

Another thing we tried is to run fault detection using input data other than seismic amplitude. In this particular example, I tried to run fault detection on a self-organizing map (SOM) classification result. The reason behind it is that some attributes, for example instantaneous phase or cosine of instantaneous phase, are very, very good at making your reflectors look continuous, so continuous that they highlight the discontinuities between the continuous reflectors. If we use those types of attributes, run a SOM, and get a classification result on them, for example like this, then we have pretty clear, well-defined faults as well. I trained my neural network on this image, hoping to see that it could pick the faults as well as on seismic amplitude data, or maybe even better. This is the result from training a fault-detection neural network on the SOM classification result. As you can see, it picks out the faults really, really well. We may have a few missing faults here, but in general it gives a very good fault detection result, even though we’re not using the seismic amplitude data.

At the very beginning, I thought maybe this result wouldn’t be as good as using the seismic amplitude, because we’re limiting the information we provide to the neural network, but in fact it did a pretty good job. Maybe the reason is that we carefully chose attributes that are very good at highlighting, not the faults themselves, but the continuous reflectors, so that the faults stand out.

Again, we can show this same result on the seismic amplitude. This is how the faults look on the seismic amplitude. Let me flip back to the SOM and then to the seismic amplitude. As you can see, those faults are very well defined.

Okay, the last application is channel extraction. This channel extraction is actually similar to what we did for the seismic facies, but in this case we’re interested in one particular feature. This data set is again from the Taranaki Basin, offshore New Zealand, but it’s a different survey from the very first one I showed. In this survey we have a lot of channels; most of the channels in this part are relatively small scale, and here we have a very big channel in the shallower part. In this particular example, I’m interested in extracting this big channel from the data set. As you may know, extracting those smaller-scale channels is actually easier than the big ones, because for smaller channels you can use the coherence response from the channel flanks or edges, or maybe the curvature response from the bottom of the channel. Those responses more or less overlap with where the channel is, so you can just extract the channel body using those attributes. The problem with the big channel is that those attributes are only sensitive to the edges or the bottom of the channel, so they are only sensitive to part of the channel, not the whole channel. That makes extracting the whole channel somewhat challenging.

In this example, I manually highlighted this channel on several lines of the data, and after training the network I extracted the channel over the whole volume, and this is what I got. This channel matches the boundary on the seismic pretty well. While we may have some disagreement here, in general, in terms of getting a quick interpretation of this channel, I think the neural network does a pretty good job. We can, of course, see it in 3D; in the lower right corner, we have the channel displayed at each time level, growing from the bottom to the top. At every time slice, the channel matches the boundary on the seismic data pretty well. Then I looked into another channel in the deeper part of the same survey; here we’re at about 2 seconds, while the previous one was at about 0.7 seconds. Here we have a more sinuous channel in the deeper part of the survey, and again I picked some of the appearances of the channel. After training the network, we were able to match the channel boundary pretty well using this CNN classification. Of course, we again have some noise that leaked outside of the channel, but I’m not too worried about that, because this is the raw output from the neural network without any post-editing. Again, we can visualize it in 3D: the channel develops from this side, and as we go up, we can see this sinuous channel start to show up on this side, matching the boundary pretty well. And of course, again, we have some noise, and those things can be cleaned up with some post-editing techniques.

So, conclusions. After showing the three applications, I think it’s safe to say that deep learning methods, represented by convolutional neural networks, are powerful in classifying complex seismic reflection patterns into uniform facies, whether we’re interested in multiple facies at the same time or in a particular feature of interest such as faults or channels. We demonstrated the application to these three problems with clear success. Finally, with great flexibility in model architecture, with different types of CNNs, and with all the clever researchers out there, we can develop something tailored to a particular problem, so we believe CNNs are promising for other interpretation tasks as well. I would like to thank Geophysical Insights for permission to show this work, and I also want to thank New Zealand Petroleum and Minerals for providing to the general public those beautiful data sets used in this study.

Tao Zhao

Research Geophysicist | Geophysical Insights

TAO ZHAO joined Geophysical Insights in 2017. As a Research Geophysicist, Dr. Zhao develops and applies shallow and deep machine learning techniques on seismic and well log data, and advances multiattribute seismic interpretation workflows. He received a B.S. in Exploration Geophysics from the China University of Petroleum in 2011, an M.S. in Geophysics from the University of Tulsa in 2013, and a Ph.D. in Geophysics from the University of Oklahoma in 2017. During his Ph.D. work at the University of Oklahoma, Dr. Zhao was an active member of the Attribute-Assisted Seismic Processing and Interpretation (AASPI) Consortium developing pattern recognition and seismic attribute algorithms.

A Fault Detection Workflow Using Deep Learning and Image Processing

By Tao Zhao
Published with permission: SEG International Exposition and 88th Annual Meeting
October 2018

Summary

Within the last couple of years, deep learning techniques, represented by convolutional neural networks (CNNs), have been applied to fault detection problems on seismic data with impressive results. As is true for all supervised learning techniques, the performance of a CNN fault detector depends heavily on the training data, and post-classification regularization may greatly improve the result. Sometimes, a pure CNN-based fault detector that works perfectly on synthetic data may not perform well on field data. In this study, we investigate a fault detection workflow using both CNN and directional smoothing/sharpening. Applying both to a realistic synthetic fault model based on the SEAM (SEG Advanced Modeling) model and to field data from the Great South Basin, offshore New Zealand, we demonstrate that the proposed fault detection workflow can perform well on challenging synthetic and field data.

Introduction

Benefiting from their high flexibility in network architecture, convolutional neural networks (CNNs) are a supervised learning technique that can be designed to solve many challenging problems in exploration geophysics. Among these problems, detection of particular seismic facies of interest might be the most straightforward application of CNNs. The first published study applying CNNs to seismic data might be Waldeland and Solberg (2017), in which the authors used a CNN model to classify salt versus non-salt features in a seismic volume. At about the same time, Araya-Polo et al. (2017) and Huang et al. (2017) reported success in fault detection using CNN models.

From a computer vision perspective, faults in seismic data are a special group of edges. CNNs have been applied to more general edge detection problems with great success (El-Sayed et al., 2013; Xie and Tu, 2015). However, faults in seismic data are fundamentally different from edges in images used in the computer vision domain. The regions separated by edges in a traditional computer vision image are relatively homogeneous, whereas in seismic data such regions are defined by patterns of reflectors. Moreover, not all edges in seismic data are faults. In practice, although they provide excellent fault images, traditional edge detection attributes such as coherence (Marfurt et al., 1999) are also sensitive to stratigraphic edges such as unconformities, channel banks, and karst collapses. Wu and Hale (2016) proposed a brilliant workflow for automatically extracting fault surfaces, in which a crucial step is computing the fault likelihood. CNN-based fault detection methods can be used as an alternative approach to generate such fault likelihood volumes, and the fault strike and dip can then be computed from the fault likelihood.

One drawback of supervised machine learning-based fault detection is its brute-force nature, meaning that instead of detecting faults following geological/geophysical principles, the detection depends purely on the training data. In reality, we will never have training data that covers all possible appearances of faults in seismic data, nor are our data noise-free. Therefore, although the raw output from the CNN classifier may adequately represent faults in synthetic data of simple structure and low noise, some post-processing steps are needed for the result to be useful on field data. Based on the traditional coherence attribute, Qi et al. (2017) introduced an image processing-based workflow to skeletonize faults. In this study, we regularize the raw output from a CNN fault detector with an image processing workflow built on Qi et al. (2017) to improve the fault images.

We use both realistic synthetic data and field data to investigate the effectiveness of the proposed workflow. The synthetic data should ideally be a good approximation of field data while providing full control over the parameter set. We build our synthetic data based on the SEAM model (Fehler and Larner, 2008) by taking sub-volumes from the impedance model and inserting faults. After verifying the performance on the synthetic data, we then move on to field data acquired from the Great South Basin, offshore New Zealand, where extensive faulting occurs. Results on both synthetic and field data show the great potential of the proposed fault detection workflow, which provides very clean fault images.

Proposed Workflow

The proposed workflow starts with a CNN classifier that is used to produce a raw image of faults. In this study, we adopt a 3D patch-based CNN model that classifies each seismic sample using samples within a 3D window. An example of the CNN architecture used in this study is provided in Figure 1. A basic patch-based CNN model consists of several convolutional layers, pooling (downsampling) layers, and fully-connected layers. Given a 3D patch of seismic amplitudes, a CNN model first automatically extracts several high-level abstractions of the image (similar to seismic attributes) using the convolutional and pooling layers, then classifies the extracted attributes using the fully-connected layers, which behave similarly to a traditional multilayer perceptron network. The output from the network is then a single value representing the facies label of the seismic sample centered in the 3D patch. In this study, the label is binary, representing “fault” or “non-fault”.

Figure 1. Sketches of a 2D patch-based CNN architecture. In this demo case, each input data instance is a small 2D patch of seismic amplitude centered at the sample to be classified. The corresponding output is a class label representing the patch (in this case, fault), which is usually assigned to the center sample. Different types of layers are denoted in different colors, with layer types marked at their first appearance in the network. The size of the cuboids approximately represents the output size of each layer.
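As an illustration of the layer types sketched in Figure 1, here is a minimal 2D patch-based classifier in TensorFlow/Keras (the study’s detector is implemented in TensorFlow, per the acknowledgements, but the filter counts, patch size, and layer depths below are assumptions, not the paper’s actual configuration).

```python
# Minimal 2D patch-based fault/non-fault classifier (sketch).
# Hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_patch_cnn(patch=32):
    inp = layers.Input(shape=(patch, patch, 1))       # amplitude patch around a sample
    x = layers.Conv2D(16, 3, activation="relu")(inp)  # convolutional layer
    x = layers.MaxPooling2D(2)(x)                     # pooling (downsampling) layer
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)        # fully-connected layer
    out = layers.Dense(2, activation="softmax")(x)    # "fault" vs "non-fault"
    return tf.keras.Model(inp, out)

model = build_patch_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```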

We then use a suite of image processing techniques to improve the quality of the fault images. First, we use a directional Laplacian of Gaussian (LoG) filter (Machado et al., 2016) to enhance lineaments that are at a high angle to the layered reflectors and suppress anomalies close to the reflector dip, while calculating the dip, azimuth, and dip magnitude of the faults. Taking these data, we then use a skeletonization step, redistributing the fault anomalies within a fault damage zone to the most likely fault plane. We then apply a threshold to generate a binary fault image. Optionally, if the result is still noisy, we can continue with a median filter to reduce the random noise and iteratively perform the directional LoG and skeletonization to achieve a desirable result. Figure 2 summarizes the regularization workflow.
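A simplified sketch of that regularization loop follows. The directional LoG of Machado et al. (2016) steers the filter along the estimated fault dip and azimuth; the isotropic LoG from SciPy used below is only a stand-in for illustration, and all parameter values are assumptions.

```python
# Simplified regularization of a raw CNN fault image (sketch).
# An isotropic LoG stands in for the directional LoG of Machado et al. (2016).
import numpy as np
from scipy.ndimage import gaussian_laplace, median_filter
from skimage.morphology import skeletonize

def regularize(raw_fault, sigma=2.0, threshold=0.3, n_iter=1):
    img = raw_fault.astype(np.float32)
    for _ in range(n_iter):
        ridge = -gaussian_laplace(img, sigma)  # enhance ridge-like lineaments
        ridge = np.clip(ridge / ridge.max(), 0.0, 1.0)
        img = median_filter(ridge, size=3)     # optional random-noise suppression
    # Skeletonize to redistribute the damage zone onto a thin, most-likely
    # plane, then threshold to a binary fault image.
    return skeletonize(img > threshold)
```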

Synthetic Test

We first test the proposed workflow on synthetic data built on the SEAM model. To make the model a good approximation of real field data, we select a portion of the SEAM model where stacked channels and turbidites exist. We then randomly insert faults in the impedance model and convolve with a 40 Hz Ricker wavelet to generate seismic volumes. The parameters used in the random generation of five reverse faults in the 3D volume are provided in Table 1. Figure 3a shows one line from the generated synthetic data with faults highlighted in red. In this model, we observe strong layer deformation with amplitude change along reflectors due to the turbidites in the model. Therefore, such synthetic data are in fact quite challenging for a fault detection algorithm, because of the existence of other types of discontinuities.
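For reference, the forward-modeling step (reflectivity from the impedance model, convolved trace-by-trace with a 40 Hz Ricker wavelet) can be sketched as follows; the sample interval and wavelet length are assumptions, and the fault-insertion step is omitted.

```python
# Impedance model -> synthetic seismic via 40 Hz Ricker convolution (sketch).
import numpy as np

def ricker(f=40.0, dt=0.002, length=0.128):
    """Zero-phase Ricker wavelet of peak frequency f (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def impedance_to_seismic(impedance, f=40.0, dt=0.002):
    # Normal-incidence reflectivity at each interface: (Z2 - Z1) / (Z2 + Z1).
    rc = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])
    w = ricker(f, dt)
    # Convolve each trace (axis 0 = time/depth) with the wavelet.
    return np.apply_along_axis(np.convolve, 0, rc, w, mode="same")
```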

We randomly use 20% of the samples on the fault planes and approximately the same number of non-fault samples to train the CNN model. The total number of training samples is about 350,000, which represents less than 1% of the total samples in the seismic volume. Figure 3b shows the raw output from the CNN fault detector on the same line shown in Figure 3a. We observe that instead of sticks, faults appear as small zones. Also, as expected, there are some misclassifications where the data are quite challenging. We then perform the regularization steps, excluding the optional steps in Figure 2. Figure 3c shows the result after the directional LoG filter and skeletonization. Notice that these two steps have cleaned up much of the noise, and the faults are now thinner and more continuous. Finally, we apply a threshold to generate a fault map where faults are labeled “1” and everything else “0” (Figure 3d). Figure 4 shows the fault detection result on a less challenging line, where the result is nearly perfect.

Figure 2. The regularization workflow used to improve the fault images after CNN fault detection.

Fault attribute          Value range
Dip angle (degrees)      -15 to 15
Strike angle (degrees)   -25 to 25
Displacement (m)         25 to 75

Table 1. Parameter ranges used in generating faults in the synthetic model.

Field Data Test

We further verify the proposed workflow on field data from the Great South Basin, offshore New Zealand. The seismic data contain extensive faulting with complex geometry, as well as other types of coherence anomalies, as shown in Figure 5. In this case, we manually picked training data on five seismic lines for regions representing fault and non-fault. An example line is given in Figure 6. As one may notice, although the training data cover only a very limited portion of the whole volume, we try to include the most representative samples for the two classes. On the field data, we use the whole regularization workflow, including the optional steps. Figure 7 gives the final output from the proposed workflow, alongside the result from using coherence in lieu of the raw CNN output in the workflow. We observe that the result from CNN plus regularization gives clean fault planes with very limited noise from other types of discontinuities.

Conclusion

In this study, we introduce a fault detection workflow using both CNN-based classification and image processing regularization. We are able to train a CNN classifier to be sensitive only to faults, which greatly reduces the mixing between faults and other discontinuities in the produced fault images. To improve the resolution and further suppress non-fault features in the raw fault images, we then use an image processing-based regularization workflow to enhance the fault planes. The proposed workflow shows great potential on both challenging synthetic data and field data.

Acknowledgements

The authors thank Geophysical Insights for the permission to publish this work. We thank New Zealand Petroleum and Minerals for providing the Great South Basin seismic data to the public. The CNN fault detector used in this study is implemented in TensorFlow, an open source library from Google. The authors also thank Gary Jones at Geophysical Insights for valuable discussions on the SEAM model.

Figure 3. Line A from the synthetic data showing seismic amplitude with a) artificially created faults highlighted in red; b) raw output from the CNN fault detector; c) CNN-detected faults after directional LoG and skeletonization; and d) final fault map after thresholding.

Figure 4. Line B from the synthetic data showing seismic amplitude co-rendered with a) randomly created faults highlighted in red and b) final result from the fault detection workflow, in which predicted faults are marked in red.

Figure 5. Coherence attribute along t = 1.492 s. Coherence shows discontinuities not limited to faults, posing a challenge to obtaining images of only faults.

Figure 6. A vertical slice from the field seismic amplitude data with manually picked regions for training the CNN fault detector. Green regions represent fault and red regions represent non-fault.

References

Araya-Polo, M., T. Dahlke, C. Frogner, C. Zhang, T. Poggio, and D. Hohl, 2017, Automated fault detection without seismic processing: The Leading Edge, 36, 208–214.

El-Sayed, M. A., Y. A. Estaitia, and M. A. Khafagy, 2013, Automated edge detection using convolutional neural network: International Journal of Advanced Computer Science and Applications, 4, 11–17.

Fehler, M., and K. Larner, 2008, SEG Advanced Modeling (SEAM). Phase I first year update: The Leading Edge, 27, 1006–1007.

Huang, L., X. Dong, and T. E. Clee, 2017, A scalable deep learning platform for identifying geologic features from seismic attributes: The Leading Edge, 36, 249–256.

Machado, G., A. Alali, B. Hutchinson, O. Olorunsola, and K. J. Marfurt, 2016, Display and enhancement of volumetric fault images: Interpretation, 4, no. 1, SB51–SB61.

Marfurt, K. J., V. Sudhaker, A. Gersztenkorn, K. D. Crawford, and S. E. Nissen, 1999, Coherency calculations in the presence of structural dip: Geophysics, 64, 104–111.

Qi, J., G. Machado, and K. Marfurt, 2017, A workflow to skeletonize faults and stratigraphic features: Geophysics, 82, no. 4, O57–O70.

Waldeland, A. U., and A. H. S. Solberg, 2017, Salt classification using deep learning: 79th Annual International Conference and Exhibition, EAGE, Extended Abstracts, Tu-B4-12.

Wu, X., and D. Hale, 2016, 3D seismic image processing for faults: Geophysics, 81, no. 2, IM1–IM11.

Xie, S., and Z. Tu, 2015, Holistically-nested edge detection: Proceedings of the IEEE International Conference on Computer Vision, 1395–1403.