NEW e-Course by Dr. Tom Smith: Machine Learning Essentials for Seismic Interpretation

Machine learning is foundational to the digital transformation of the oil & gas industry and will have a dramatic impact on the exploration and production of hydrocarbons.  Dr. Tom Smith, the founder and CEO of Geophysical Insights, conducts a comprehensive survey of machine learning technology and its applications in this 24-part series.  The course will benefit geoscientists, engineers, and data analysts at all experience levels, from data analysts who want to better understand applications of machine learning to geoscience, to senior geophysicists with deep experience in the field.

Aspects of supervised learning, unsupervised learning, classification, and reclassification are introduced to illustrate how they work on seismic data. Machine learning is presented, not as an end-all-be-all, but as a new set of tools that enables interpretation of seismic data at a new, higher level of abstraction – one that promises to reduce risk and identify features that might otherwise be missed.

The following major topics are covered:

  • Operation – supervised and unsupervised learning; buzzwords; examples
  • Foundation – seismic processing for ML; attribute selection list objectives; principal component analysis
  • Practice – geobodies; below-tuning; fluid contacts; making predictions
  • Prediction – the best well; the best seismic processing; over-fitting; cross-validation; who makes the best predictions?

This course can be taken for certification, or for informational purposes only (without certification). 

Enroll today for this valuable e-course from Geophysical Insights!

Future of Seismic Interpretation with Machine Learning and Deep Learning

By: Iván Marroquín, Ph.D. – Senior Research Geophysicist

I am very excited to participate as a speaker in the workshop on Big Data and Machine Learning organized by the European Association of Geoscientists & Engineers. My presentation is about using machine learning and deep learning to advance the seismic interpretation process for the benefit of hydrocarbon exploration and production.

Companies in the oil and gas industry invest millions of dollars in an effort to improve their understanding of reservoir characteristics and predict future reservoir behavior. An integral part of this effort consists of using traditional workflows for interpreting large volumes of seismic data. Geoscientists are required to manually define relationships between geological features and seismic patterns. As a result, the task of finding significant seismic responses to recognize reservoir characteristics can be overwhelming.

In this era of big data revolution, we are at the beginning of the next fundamental shift in seismic interpretation. Knowledge discovery, based on machine learning and deep learning, supports geoscientists in two ways. First, it interrogates volumes of seismic data without preconceptions. The objective is to automatically find key insights, hidden patterns, and correlations, so that geoscientists gain visibility into complex relationships between geologic features and seismic data. To illustrate this point, Figure 1a shows a thin bed reservoir scenario from Texas (USA). In the seismic data, it is difficult to discern the presence of the seismic event associated with the producing zone at the well location. The use of machine learning to derive a seismic classification output (Figure 1b) brought forward much richer stratigraphic information. Closer examination using time slice views (Figure 1c) indicates that the reservoir is an offshore bar. Note how well oil production matches the extent of the reservoir body.

Figure 1. Seismic classification result using machine learning (result provided by Deborah Sacrey, senior geologist with Geophysical Insights).  

Another way knowledge discovery can help geoscientists is to automate elements of the seismic interpretation process. At the rate machine learning and deep learning can consume large amounts of seismic data, it becomes possible to constantly review, modify, and take appropriate actions at the right time. With these possibilities, geoscientists are free to focus on other, more valuable tasks. The following example demonstrates that a deep learning model can be trained on seismic data or derived attributes (e.g., seismic classification, instantaneous, geometric, etc.) to identify desired outcomes, such as fault locations. In this case, a seismic classification volume (Figure 2a) was generated from seismic amplitude data (Taranaki Basin, west coast of New Zealand). Figure 2b shows the predicted faults displayed against the classification volume. To corroborate the quality of the prediction, the faults are also displayed against the seismic amplitude data (Figure 2c). It is important to note that the seismic classification volume provides an additional benefit to the process of seismic interpretation: it has the potential to expose stratigraphic information not readily apparent in seismic amplitude data.

Figure 2. Fault location predictions using deep learning (result provided by Dr. Tao Zhao, research geophysicist with Geophysical Insights).

The Holy Grail of Machine Learning in Seismic Interpretation

A few years ago, we had geophysics and geology – two distinct disciplines that were well defined. Then geoscience came along, an amalgam of geology and geophysics. Many people started calling themselves geoscientists as opposed to “geologists” or “geophysicists”. But the changes weren’t quite finished. Along came a qualifying adjective, and that has to do with unconventional resource development, or unconventional exploration. We understand how to do exploration; unconventional has to do with understanding shale and finding sweet spots, but it is still a type of exploration. By joining unconventional and resource development, we broaden what we do as professionals. However, the mindset of unconventional geophysics is really closer to mining geophysics than it is to conventional exploration.

So, today’s topic has to do with the “holy grail” of machine learning in seismic interpretation.  We’re trying to tie this to seismic interpretation only.  Even if that’s a pretty big topic, we’re going to focus on a few highlights.  I can’t even summarize machine learning for seismic interpretation.  It’s already too big!  Nearly every company is investigating or applying machine learning these days.  So, for this talk I’m just going to have to focus on this narrow topic of machine learning in seismic interpretation and hit a few highlights.

Let’s start at 50,000 feet – way up at the top. If you’ve been intimidated by this machine learning stuff, let’s define terms. Machine learning is an engine. It’s an algorithm that learns without explicit programming. That’s really fundamental. What does that mean? It means an algorithm that’s going to learn from the data. So, given one set of data, it’s going to come up with an answer, but with a different set of data, it will come up with a different answer. The whole field of artificial intelligence is broken up into strong AI and narrow AI. Strong AI is coming up with a robot that looks and behaves like a person. Narrow AI attempts to duplicate the brain’s neurological processes that have been perfected over millions of years of biological development. A self-organizing map, or SOM, is a type of neural network that adjusts to training data. However, it makes no assumptions about the characteristics of the data. So, if you look at the whole field of artificial intelligence, and then look at machine learning as a subset of that, there are two parts: unsupervised neural networks and supervised neural networks. Unsupervised is where you feed it the data and say “you go figure it out.” In supervised neural networks, you give it both the data and the right answer. Some examples of supervised neural networks would be convolutional neural networks and deep learning algorithms. Convolutional is a more classical type of supervised neural network, where for every data sample we know the answer. So, a data sample might be “we have x, y, and z properties, and by the way, we know what the classification is a priori.”

A classical example of a supervised neural network would be this: your uncle just passed away and left you the canning operations in Cordova, Alaska. You go to the plant to see what you’ve inherited. Let’s say you’ve got all these people standing at a beltline manually sorting fish, and they’ve got buckets for eels, buckets for flounder, etc. Being a great geoscientist, you recognize this as an opportunity to apply machine learning and possibly re-assign those people to more productive tasks. As the fish come along, you weigh them, you take a picture of them, you see what the scales are like, the general texture, and you get some idea about their general shape. What I’ve described are three properties, or attributes. Perhaps you add more attributes and are up to four or five. Now we have five attributes that define each type of fish, so in mathematical terms we’re dealing with a five-dimensional problem. We call this “attribute space.” Pretty soon, you run through all the eels and you get measurements for each eel. So, you get the neural network trained on eels. And then you run through all the flounder. And guess what – there are going to be variations, of course, but in attribute space, those four or five measurements made for each type of fish are going to wind up in different clusters. And that’s how we tell the difference between eels and flounder – or whatever else you’ve got. Everything that can’t be classified very well goes into a bucket labeled ‘unclassified’. (More on this later in the presentation.) And you put that into your algorithm. So that’s basically the difference between supervised neural networks and unsupervised neural networks. Deep learning is a category of neural networks that can operate in both supervised and unsupervised modes.
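
To make the fish-sorting idea concrete, here is a minimal Python sketch of supervised classification in a small attribute space. The attribute names, the numbers, and the choice of a nearest-neighbor classifier are illustrative assumptions made for this article, not the system described in the talk.

```python
# A minimal sketch of the fish-sorting example: supervised classification in a
# small "attribute space." All attribute values are made up for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Five attributes per fish (hypothetical): weight, length, scale texture,
# brightness, aspect ratio. Each row is a training sample whose class is known a priori.
X_train = np.array([
    [0.4, 60.0, 0.20, 0.30, 12.0],   # eel
    [0.5, 70.0, 0.25, 0.35, 14.0],   # eel
    [1.8, 35.0, 0.70, 0.80, 1.6],    # flounder
    [2.1, 40.0, 0.75, 0.85, 1.5],    # flounder
])
y_train = np.array(["eel", "eel", "flounder", "flounder"])

# Supervised learning: the algorithm is given both the attributes and the answer.
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# A new fish comes down the belt; classify it from its attribute vector alone.
new_fish = np.array([[0.45, 65.0, 0.22, 0.32, 13.0]])
print(clf.predict(new_fish))   # -> ['eel']
```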

Now, before we get deeper into our subject today, I’d like to draw your attention to some of the terms – starting with the concept of Big Data. If you remember, a few years ago, if you wanted to survive in the oil and gas business, finding large fields was the objective. Well, we have another big thing today – Big Data. Our industry is looking at ways to apply the concepts of Big Data analytics. We hear senior management of E&P companies talking about Big Data and launching Data Analytics teams. So, what is Big Data or Data Analytics? It’s access to large volumes of disparate kinds of oil and gas data that are analyzed by machine learning algorithms to discover unknown relationships – those that were not identified previously. The other key point about Big Data is that it involves disparate kinds of data. So if you say “I’m doing Big Data analytics with my seismic data” – that’s not really an appropriate choice of terms. If you say “I’m going to throw in all my seismic data, along with associated wells, and my production data” – now you’re starting to talk about real Big Data operations. And the opportunities are huge. Finally, there’s IoT – the Internet of Things – which you’ve probably heard or read about. I predict that IoT will have a larger impact on our industry than machine learning; however, the two are related. And why is that? Almost EVERYTHING we use can be wired to the internet. In seismic acquisition, for instance, we’re looking at smart geophones being hooked up that sense the direction of the boat and can send and receive data. In fact, when the geophones get planted, each one has a GPS, so that when it’s pulled up and thrown in the back of a pickup truck, the geophones can report their location in real time. There are countless other examples of how IoT will change our industry.

Let’s consider wirelines as a starting point of interpretation – figuring out the depositional environment using wireline classifications. If we pick a horizon, then based on that auto-picked horizon, we have a wavelet at every bin. We pull that wavelet out. In this auto-picked horizon, we may have a million samples, and we have a million wavelets because we have a wavelet for each sample. (Some early neural learning tools were based on this concept of classifying wavelets.) Using these different classes, machine learning analyzes and trains on those million wavelets, finding, say, the seven most significantly different. And then we go back and classify all of them. And so we have this cut shown here, across the channel, and the wavelet closest to the center is discovered to be tied to that channel. So there’s the channel wavelet, and now we have overbank wavelets, some splay wavelets – several different wavelets. And from this, a nice colormap can be produced indicating the type of wavelet.

Horizon attributes look at the properties of the wavelet in the vicinity of the horizon – at, say, frequencies of 25 to 80 hertz – with attributes like instantaneous phase. So we now have a collection of information about that pick using horizon attributes. Using volume attributes, we’ll look at a pair of horizons and integrate the seismic attributes between the horizons. This results in a number, such as the average amplitude or average envelope value, that represents a sum of seismic samples in a time or depth interval. However, when considering machine learning, the method of analysis is fundamentally different. We have one seismic sample, and associated with that sample we have multiple seismic attributes. This produces a multi-attribute sample vector that is the subject of the machine learning process.
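
As a rough illustration of what a multi-attribute sample vector is, the following Python sketch stacks a few attribute volumes so that every seismic sample becomes one row of attributes while keeping its position in the survey. The survey dimensions and attribute volumes are random placeholders, not real data.

```python
# Sketch: turning attribute volumes into multi-attribute sample vectors.
import numpy as np

n_il, n_xl, n_t = 100, 100, 200                       # survey dimensions (made up)
amplitude  = np.random.randn(n_il, n_xl, n_t)          # stand-ins for real attribute volumes
envelope   = np.abs(amplitude)
inst_phase = np.random.uniform(-180, 180, amplitude.shape)

# Stack the volumes so every sample becomes one row: a vector in attribute space.
samples = np.stack([amplitude, envelope, inst_phase], axis=-1).reshape(-1, 3)
print(samples.shape)   # (2000000, 3): two million samples, each a 3-D attribute vector
```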

Ok, so let’s take a look at some of the results. This is a self-organizing map analysis of a wedge using only two attributes. We’ve got three cases – low, medium, and high levels of noise – and in the box over here you can see tuning thickness right here, and everything to the right of that arrow is below tuning. Now, the SOM works on multi-attribute samples, and in this case we are keeping things very simple since we only have two attributes. If you have only two attributes, you can plot them on a piece of paper – x axis, y axis. However, the classification process works just fine for two dimensions or twenty dimensions. It’s a machine learning algorithm. In two dimensions, we can look at it and decide “did it do a good job or did it not?” For this example, we’ve used the amplitude and the Hilbert transform because we know they’re orthogonal to each other. We can plot those as individual points on paper; every sample is a point on that scatter plot. If we put it through a SOM analysis, the first stage is SOM training, which tries to locate natural clusters in attribute space, and the second phase, once those neurons have gone through the training process, is to take the results and classify ALL the samples. So, we have here the results – every single sample is classified: low noise, medium noise, high noise, and here are the results. If you go to tuning thickness, we are tracking with SOM analysis events way below tuning thickness. Here’s the top of the wedge, and this one right here is where things get below tuning thickness, and eventually you can tie it to the corresponding trace right over there. Now, there’s a certain bias. We are using a two-dimensional topology for this analysis – it’s two dimensions, but the connectivity between these neurons is hexagonal, which is made use of during the training process. And there’s a certain bias here because this is a smooth colormap. By the way, these are colormaps as opposed to colorbars. Right? Colormaps, not colorbars. In terms of colormaps, you can have four points of connectivity, and then it’s just like a grid, or six points of connectivity, and then it’s hexagonal. That helps us understand the training that was used. Well, there’s a certain bias about having smooth colors, and we have attempted to account for that in this process here – there are 8 rows and 8 columns, and every single one of those neurons has gone looking for a natural cluster in attribute space. Although it’s only two dimensions, there is still a hunting process: each of these 64 neurons, after the training process, is trying to zero in on a natural cluster. And there’s a certain bias in using smooth colors, because neighboring neurons end up with similar colors – yellows and greens here, blues and reds there. Here’s a random colormap – and you can see the results. But even if we use random colors, we are still tracking events way below tuning thickness using the SOM classification.

We are demonstrating the resolution well below tuning.  There’s no magic.  We use only two attributes – the real part and the imaginary part, which is the Hilbert Transform, and we are demonstrating the SOM characteristics of training using only two attributes.
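
For readers who want to reproduce the idea, here is a hedged sketch of how those two attributes can be generated from a single trace with scipy: the real trace and the imaginary (quadrature) part of its analytic signal, which is the Hilbert transform. The synthetic trace below is made up purely for illustration.

```python
# Sketch: the two attributes of the wedge example – amplitude and its Hilbert transform.
import numpy as np
from scipy.signal import hilbert

dt = 0.002                                   # 2 ms sample rate
t = np.arange(0, 1.0, dt)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)   # toy seismic trace

analytic = hilbert(trace)                    # analytic signal of the trace
real_part = np.real(analytic)                # attribute 1: the amplitude itself
quadrature = np.imag(analytic)               # attribute 2: the Hilbert transform (quadrature)

# Each time sample becomes a point in a 2-D attribute space, ready for SOM training.
two_attr_samples = np.column_stack([real_part, quadrature])
print(two_attr_samples.shape)                # (500, 2)
```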

The self-organizing map (SOM) training algorithm is modeled on the discovery of natural clusters in attribute space, using training rules based upon the human visual cortex. Conceptually, this is a simple but powerful idea. We can see examples in nature of simple rules that lead to profound results.

So, the whole idea behind self-organizing assemblages is the following. Snow geese and fish are both examples of self-organizing assemblages, in which individuals follow a simple rule. The individual goose is just basically following a very simple rule: follow the goose in front of me, just a few feet behind and either left or right. It’s as simple as that. That’s an example of a self-organizing assemblage, and yet some of its properties are pretty profound, because once they get up to altitude, they can go for a long time and long distances using the slipstream properties of that “v” formation. The basic rule for a schooling fish is ‘swim close to your buddies – not so close that you’ll bump into them, and not so far away that it no longer looks like a school of fish.’ When the shark swims by, the school needs to look like one big fish. If those individual fish were too far apart, the shark would see the smaller isolated fish as easy prey. So, there’s even a simple rule here of an optimum distance from one to the other. These are just two examples of where simple rules produce complex results when applied at scale.

Unsupervised neural networks, which classify the data, also work on simple rules, but they operate on large volumes of seismic samples in attribute space.
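
The sketch below shows, in toy form, the kind of simple rule being described: each randomly drawn multi-attribute sample attracts its nearest (winning) neuron and, to a lesser degree, that neuron's neighbors on the 2-D map. This is a minimal illustration written for this article under simplified assumptions, not the algorithm used in any particular commercial product.

```python
# Toy self-organizing map: simple rules applied to many multi-attribute samples.
import numpy as np

def train_som(samples, rows=8, cols=8, n_iter=5000, lr=0.1, radius=2.0):
    n_attr = samples.shape[1]
    neurons = np.random.randn(rows * cols, n_attr)        # neuron positions in attribute space
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)])  # positions on the 2-D map
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        x = samples[rng.integers(len(samples))]            # draw a random sample
        winner = np.argmin(np.linalg.norm(neurons - x, axis=1))    # nearest neuron wins
        map_dist = np.linalg.norm(grid - grid[winner], axis=1)     # distance on the map, not in attribute space
        pull = lr * np.exp(-(map_dist ** 2) / (2 * radius ** 2))   # neighbors move less than the winner
        neurons += pull[:, None] * (x - neurons)
    return neurons

def classify(samples, neurons):
    # Assign every sample to its winning neuron: the SOM classification.
    d = np.linalg.norm(samples[:, None, :] - neurons[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```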

The first example is the Eagle Ford case study. Patricia Santagrossi published these results last year. This is a 3D survey of a little over 200 square miles. The SOM analysis was run between the Buda and the Austin Chalk, and the Eagle Ford is right above the Buda in this little region right there. The Eagle Ford shale layer was 108′ thick, which is only 14 ms. Now, both the Buda and the Austin Chalk are known, strong peak events. So, count how many cycles we go through here: peak, trough, kind of a doublet, trough, peak. The good stuff here is basically all the beds from one peak to one trough – conventional seismic data. Here’s the Eagle Ford shale as measured right at the Buda break well there. We have both a horizontal and a vertical well right here, and that trough is associated with the Eagle Ford Shale – that trough and that peak. So, this is the SOM result with an 8x8 set of neurons used for the training. Look at the amount of visible detail here – not just between the Buda and the Austin Chalk, but you can actually see how things are changing even along the formation, within the run of the horizontal well, because every change in color here corresponds to a change in neuron.

These results were computed by machine learning using seismic attributes alone. We did not tie the results to any of the wells. The SOM analysis was run on seismic samples with multiple attribute values. The key idea here is simultaneous multi-attribute analysis using machine learning. Now, let’s look further at this Eagle Ford case study.

These are results computed by machine learning using seismic attributes. We did not skew the results and tie them to any of the wells. They were not forced to fit the wells or anything else. The SOM analysis was run strictly on the seismic data and the multi-attribute seismic samples. Again, the right term is simultaneous multi-attribute analysis. Multi-attribute means it’s a vector: in our analysis every single sample is being used, with all of its attributes simultaneously, to classify the data into a solution. So although this area is 200 square miles from an aerial view, between the Buda and the Austin Chalk we’re looking at every single sample – not just wavelets. By simple inspection, we can see that the machine learning results corroborate the well logs, but there has been no force-fitting of the data. These arrows are referring to the SOM winning neurons. If we look in detail, here is Well #8, a vertical well in the Eagle Ford shale. The high resistivity zone is right in here; that can be tied to the red classification. So, here again we’re dealing with seismic data on a sample-by-sample basis.

The SOM winning neurons identified 24 geobodies, autopicked in 150 feet of vertical section at well #8 in the Eagle Ford borehole. Some of the geobodies – not all of them – tracked under the wells and extended over the entire 200 sq. mile 3D survey.

This is to zero in a little bit more, so I can give you some associations here. The high resistivity zone is correlating with winning neurons 54, 60, and 53 in this zone right in here. There’s the Eagle Ford Ash, which is identified with neurons 63 and 64. And Patricia even found a tie with this marker right here – this is neuron 55.

And this well, by the way, well #8, was 372 Mboe. SOM classification neurons are associated with specific wireline lithofacies units. That’s really hard to argue against.  We have evidence, in this case up here for example, of an unconformity where we lost a neuron right through here and then we picked it up again over there.  And, there is evidence in the Marl of slumping of some kind.  So, we’re starting to understand what’s happening geologically using machine learning. We’re seeing finer detail – more than we would have using conventional seismic data and single attributes.

Tricia found a generalized cross-section of Cretaceous in Texas, northwest / southeast towards the gulf. Eagle Ford shale fits in here below the Marl and there’s an unconformity between those two – she was able to see some evidence of that.

The well that we just looked at was well #8, and it ties in with the winning neuron.  Let’s take a look at another well, say for example, well #3, a vertical well with some x-ray diffraction associated with it. We can truly nail this stuff with the real lithology, so not only do we have a wireline result, but we also have X-ray diffraction results to corroborate the classification results.

So, from the 64 neurons, over 41,000 samples were classified as “the good stuff.” That’s on a sample basis, so you can integrate it – you can tally all of that up and start to come up with estimates.

So, specific geobodies relate to the winning neurons that we’re tracking – #12 – that’s the bottom line. And from that we were able to develop a whole Wheeler diagram for the Eagle Ford group for the survey. And the good stuff is the winning neurons 58 and 57. They end up on the neuron topology map here, so those two were associated with the wireline lithofacies – the high resistivity part of the Eagle Ford shale. But she was able to work out additional things, such as more clastics and carbonates in the west and clastics in the southeast. And she was able to work out not only the Debris Apron, but the ashy beds and how they tie in. Altogether, these were the neurons associated with the Eagle Ford shale. These were the neurons – 1, 9, and 10 – that’s the basal clay shale. And the Marls were associated with these neurons.

So, the autopicked geobodies across the survey are the basis on which we’re developing the depositional environment of the Eagle Ford, and they compare favorably with the well logs. On the subject of using seismic data alone, one of our associates received feedback to the effect that “seismic is only good in conventionals, just for the big structural picture.” Man, what a sad conclusion that is. There’s a heck of a lot more to be had – this high resistivity zone pay was associated with two specific neurons, demonstrating that this machine learning technology is equally applicable to unconventionals.

The second case study here is from the Gulf of Mexico, by my distinguished associate, Mr. Rocky Roden. This is not deepwater – only approximately 300 feet. Here’s a north fault amplitude buildup. These are time contours, and the amplitude conformance to structure is pretty good. In this crossline – 3183 – going from west to east is the distribution of the histogram of the values. You can see here in the dotted portion, this is just the amplitude display, and the box right here is a blowup of the edge of that reservoir. What you can see here is the SOM classification using colors. Red is associated with the gas-over-oil contact and the oil-over-water contact – a single sample. So here we have the use of machine learning to help us find fluid contacts, which are very difficult to see. This is all without special bandwidth, frequency range, point sources, or point receivers – it isn’t a case of everything dialed in just the right way. The rest of the story is just the use of machine learning. However, it’s machine learning not just on samples of single numbers, but on each sample as a combination of attributes – as a vector. Using that choice of attributes, we’re able to identify fluid contacts. For easier viewing, we make all the other neurons transparent and only show the ones you can see here, which estimate the fluid contacts and also the hills. In addition, look at the edges. The ability to define the edge of the reservoir and come up with volumetrics is clearly superior. Over here on the left, Rocky has taken the “goodness of fit,” which is an estimate of the probability of how well each of these samples fits its winning neuron, and by lowering the probability limit and saying “I just want to look at the anomalies,” the edge of the amplitude conformance to structure is, I think, clearly better than what you would have using amplitude alone.

So, this new machine learning technology using simultaneous multi-attribute analysis is resolving much finer reservoir detail than we’ve had in the past, and the geobodies that fit the reservoirs are revealed in detail that, frankly, was previously not available.

In general, this is what our “Earth to Earth” model looks like. We start here with the 3D survey, and from the 3D survey we decide on a set of attributes. We take all our samples, which are vectors because of our choice of attributes, and literally plot them in attribute space. If you have 5 attributes, it’s 5-dimensional space; if you have 8 attributes, it’s 8-dimensional space. And your choice of attributes is going to illuminate different properties of the reservoir. So, the choice of attributes Rocky used to help zero in on those fluid contacts would not be the ones he would use to illuminate the volume properties or the absorption properties, for example. Once the attribute volumes are in attribute space, we use a machine learning classifier to analyze and look for natural clusters of information. Once those are classified in attribute space, the results are presented back in a virtual model, if you will, of the earth itself. So, our job here is picking geobodies, some of which have geologic significance and some of which don’t. The real power is in the natural clusters of information in attribute space. If you have a channel and you’ve got the attributes selected to illuminate channel properties, then every single point that is associated with the channel, no matter where it is, is going to concentrate in the same place in attribute space. Natural clusters of information in attribute space are all stacking. The neurons are hunting, looking for natural clusters, or higher density, in attribute space. They do this using very simple rules. The mathematics behind this process were published by us in the November 2015 edition of the Interpretation journal, so if you would like to dig into the details, I invite you to read that paper, which is available on our website.

Two keys are: 1. The attribute selection list. Think about your choice of attributes as an illumination function. What you are trying to do with your choice of attributes is illuminate the real geobodies in the earth so that they end up as natural clusters in attribute space. And that’s the key. 2. Neurons search for clusters of information in attribute space. Remember the movie, The Matrix? The humans had to be still and hide from the machines that went crazy and hunted them. That’s not too unlike what’s going on in attribute space. It’s like The Matrix because the data samples themselves don’t move – they’re just waiting there. It’s the neurons that are running around in attribute space, looking for clusters of information. The natural cluster is an image of one or more geobodies in the earth, but it’s been illuminated in attribute space, totally depending on the illumination list. It stacks in a common place in attribute space – that’s the key.

Seismic stratigraphy is broken up into two levels here. First is seismic sequence analysis, where you look at your seismic data, organize it, and break it up into packets of concordant reflections and chaotic depositional patterns – pretty straightforward stuff. Then, after you have developed a sequence analysis, you can categorize the different sequences. You have a facies analysis trying to infer the depositional setting. Is the sea level rising? Is it falling? Is it stationary? All this naturally falls in because the seismic reflections are revealing geology on a very broad basis.

Well, the attribute-space approach – it’s hunting geobodies as well. Multi-attribute geobodies are also components of seismic stratigraphy. We define it this way: a simple geobody has been auto-picked by machine learning in attribute space. That’s all it is – we’re defining a simple geobody. We all know how to run an auto-picker; in 15 minutes, you can be taught how to run an auto-picker in attribute space. Complex geobodies are interpreted by you and me. We look at the simple geobodies and we composite them, just the way we saw in that Wheeler diagram, to make complex geobodies. We give each one a name, some kind of texture, some kind of surface – all those things are interpreted geobodies, and the construction of these complex geobodies can be sped up by some geologic rule-making.

Now, the mathematical foundation we published in 2015 ties this all together pretty nicely. You see, machine learning isn’t magic. It depends on the noise level of the seismic data. Random noise broadens natural clusters in attribute space. What that means, then, is that attenuating noise through optimum acquisition and data processing delivers natural clusters with the greatest separation. In other words, nice, tight clusters in attribute space will be much easier for the machine learning algorithm to identify when you have nice, clean identification and separation. So, acquisition and data processing matter.

However, this isn’t talking about coherent noise. Coherent noise is something else. With coherent noise, you may have an acquisition footprint, but that forms a cluster in attribute space, and one of those neurons is going to go after it just as well, because it’s an increase in information density in attribute space – and voila, you have a handful of neurons that are associated with an acquisition footprint. Coherent noise can be detected by the classification process, for example where the processor has merged two surveys.

Second thing: better wavelet processing leads to narrower, more compact natural clusters, and more compact natural clusters lead to better geobody resolution, because geobodies are derived from natural clusters.

Last but not least, larger neural networks produce greater geobody detail. If you run a 6x6, an 8x8, and a 10x10 2D colormap, you eventually get to the point where you’re just swamped with detail and you can’t figure the thing out. We see that again and again. So, it’s better to look at the situation from 40K feet, then 20, and then 10. Usually, we just go ahead and run all three SOM runs at once to get them all done, and then examine them in increasing levels of detail.

I’d like to now switch gears to something entirely different. Put the SOM box aside for a minute, and let’s revisit the work Rocky Roden did in the Gulf of Mexico. Rocky came up with an important way of thinking about the application of this new tool.

In terms of using multi-attribute seismic interpretation – think of it as a process and what’s really important is starting with the geologic question of what you want to answer. For example: we’re trying to illuminate channels. Ok, so there are a certain set of attributes that would be good.  So, what we have then here is, ask the question first. Firmly have that in your mind for this multi-attribute seismic interpretation process.

There’s a certain set of attributes for the geologic question, and the terminology for that set is the “attribute selection list”. When you do an interpretation like this, you really need to be aware of the current attributes being used when looking at the data. Depending on the question, we then take the discipline and we say “well, if this is the question you’re asking”, this attribute selection list is appropriate. Remember, the attribute selection list is an illumination function.

Once you have the geologic question, the next step is the attribute selection list, and then you classify simple geobodies, which is auto-picking your data in attribute space and looking at the results.

Now, this just doesn’t happen in the background, and it doesn’t happen all at once – it’s an iterative process. So, interpreting complex geobodies is basically more than one SOM run and more than one geologic question. And interpreting these results at different levels – how many neurons, that sort of thing – this is a whole seismic interpretation process. Interpreting these complex geobodies is the next step.

We’re looking at results and constructing geologic models. Decide which is the final geologic model, and then our last step is making property predictions.

So, in the world of multiple geologic models, or multiple statistical models, it really doesn’t make any difference. We select the model, we test the model; we select a bunch of models, we test those models, and we choose one! Why? Because we want to make some predictions. There’s got to be one final model that we decide on, as professionals, as the most reliable – and we’re going to use it. Whether it’s exploration, exploitation, or even appraisal, it’s the same methodology – it’s all the same for geologic models and statistical models.

The point here boils down to something pretty fundamental.  As exploration geophysicists, we’re in the business of prediction. That’s our business. The boss wants to know “where do you want to drill, and how deep? And what should we expect on the way down? Do we have any surprises here?” They want answers! And we’re in the business of prediction.

So how good you are as a geoscientist depends, fundamentally, on how good the predictions of your final model are. That’s what we do. Whether you want to think about it like that or not, that’s really the bottom line.

So this is really about model building for multi-attribute interpretation – that’s the first step. Then we’re going to test the model and choose the model. Ok, so, should that model-building be shipped out as a data processing project? Or through our geo-processing people?  Or is that really something that should be part of interpretation? Do you really trust that the right models have been built from geoprocessing? Maybe. Maybe not.  If it takes 3 months, you sure hope you have the right model from a data processing company. And foolish, foolish, foolish if you think there’s only one run.  That’s really dangerous.  That’s a kiss and a prayer, and oh, after three months, this is what you’re going to build your model on. 

So, as an aside, if you decide that building models is a data processing job, where’s the spontaneity? And I ask you – where’s the sense of adventure? Where’s the sense of the hunt? That’s what interpretation is all about – the hunt. Do you trust that the right questions have been asked before the models are built? And my final point here is that there are hundreds of reasons just to follow procedure. Stay on the path and follow procedure. Unfortunately, nobody wants to argue. The truth is what we’re looking for, and invariably the path to truth has twists and turns. That’s exploration. That’s what we’re doing here. That’s fun stuff. That’s what keeps our juices going – finding those little twists and turns and zeroing in on the truth.

Now model testing and final selection have begun when models are built and you decide which is the right one. For example, you generate 3 SOMs – an 8x8, 12x12, 4x4, and you look at results and the boss says “ok, you’ve been monkeying around long enough, what’s the answer? Give me the answer”… “Well…hmm…” you respond. “I like this one. I think 8x8 is the right one.”  Now, you could do that, but you might not want to admit it to the boss! One quantitative way of comparing models would be to look at your residual errors.  The only trouble with that is it’s not very robust. However, a quantitative assessment – comparing models – is a good way to go. 

So, there is a better methodology – better than just comparing residual errors – and that is the whole field of cross-validation. I’m not going to go into all of that right here, but some cross-validation tools – bootstrapping, bagging, and even Bayesian statistics – are helpful in comparing models and figuring out which model is robust, the one that, in the face of new data, is going to give us a good strong answer – NOT the one that simply fits the data the best.

Think about the old problem of fitting a least-squares line through some data. You write your algorithm in Python or whatever tool, and it kind of fits through the data, and the boss goes, “I don’t know why you’re monkeying around with lines. I think this is an exponential curve because this is production data.” So, you make an exponential curve. Now, this business of cross-validation – think about fitting a polynomial to the data: two terms, a line; three terms, a parabola; four terms… until n. We could make n equal 15, and by golly there’s no possibility of error – we crank that thing down. The trouble is, we have over-fit the data. It fits this data perfectly, but some new data comes in and it’s a terrible model, because the errors are going to be really high. It’s not robust. So, this whole field of cross-validation methodology is really very important. The question for the future is, “who’s going to be making the prediction – you, or the machine?” I maintain that to make good decisions, it’s going to be us! We’re the ones that will be making the right calls – because we’ll leverage machine learning.
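
Here is a small Python sketch of that over-fitting argument, using synthetic decline-curve-like data and a simple hold-out split. The data, the split, and the choice of polynomial degrees are assumptions made purely for illustration.

```python
# Sketch: higher-degree polynomials fit the training data better but generalize worse.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 3, 30)
y = np.exp(-x) + 0.05 * rng.standard_normal(x.size)   # decline-curve-like data with noise

train = rng.random(x.size) < 0.7                       # simple hold-out split
for degree in (1, 3, 15):
    coeffs = np.polyfit(x[train], y[train], degree)    # fit on the training subset only
    pred = np.polyval(coeffs, x[~train])                # predict the held-out points
    test_err = np.sqrt(np.mean((pred - y[~train]) ** 2))
    print(f"degree {degree:2d}: hold-out RMSE = {test_err:.3f}")
# The degree-15 fit hugs the training points but typically has the worst
# hold-out error: it is not robust to new data.
```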

Let’s take a look at Machine Learning. Our company vision is the following: 

“There’s no reason why we cannot expect to query our seismic data for information with learning machines, just as effortlessly and with as much reliability as we query the web for the nearest gas station.” 

Now, this statement of where our company is going is not a statement of “get rid of the interpreters“. In my way of thinking, and in the thinking of all of us in our operations, it’s a statement of a way forward. Because truly, this use of machine learning is a whole new way of doing seismic interpretation. It’s using machine learning as a tool – it’s not replacing anybody. Deep learning, which is important for seismic evaluation, might be a holy grail, but its roots are in image processing, not in the physics of wave motion. Be very careful with that.

Image processing is very good at telling the difference between Glen and me from pictures of us. Or if you have kitties and little doggies, image processing can classify those, even right down to the ones where you’re not real certain whether it’s a dog or a cat. So, deep learning is focused on image processing and on the subtle distinctions between what is the essence of a dog and what is the essence of a cat, irrespective of whether the cat is lying there, standing there, or climbing up a tree. That’s the real power of this sort of thing.

Here’s a comparison of SOM and deep learning in terms of their properties, and there are good and bad things about each one. There’s no magic about any of them, and it’s not to say one is better than the other.

I would like to point out that unsupervised machine learning trains by discovering natural clusters in attribute space. Once those natural clusters have been identified, attribute space is carved up: any sample falling in this region of attribute space corresponds to this winning neuron, and over here is that winning neuron. Your data is auto-picked and put back in 3-dimensional space as a virtual 3D survey. That’s the essence of what’s available today.

Supervised machine learning trains on patterns that it discovers in the amplitude data alone. Now, there are two deep learning approaches that are popular today. One is the convolutional neural network, which learns from visual patterns – faces, sometimes called eigenfaces, using PCA. And then there are fully convolutional networks, which use sample-sized patches and full connections between the network layers.

Here’s a little cartoon showing you this business about layers. This is the picture, and trying to identify its little features, you can’t say that this is a robot, as opposed to a cat or a dog, until it goes through this analysis. Using patches and feature maps – different features for different things – it goes from one patch to the next to the next, until finally your output here says: well, it must be a robot, a dog, or a kitty. It’s a classifier using the properties it has discovered in a single image. The algorithm has discovered its own attributes. You might say “that’s pretty cool,” and indeed it is, but it’s only using the information seen in that picture. So, it’s association – the texture features of that image.

Here’s an example from one of our associates – Tao Zhao – who has been working in the area of fully convolutional networks. In this example he’s done some training – training lines with clinoforms here, chaotic deposition here, maybe some salt down there, and then some concordant reflections up top. Here’s an example of the results of the FCN, and here is the classification of salt down here. So, the displays here are examples of fully convolutional networks.

One final point and then I’ll sit down: Data is more important than the algorithms. The training rules are very simple. Remember the snow geese? Remember the fish? If you were a fish or if you were a snow goose, the rules are pretty simple. There’s a fanny – I’m gonna be about 3 feet behind it, and I’m not gonna be right behind the snow goose ahead of me – I want to be either to the left or the right. Simple rule. You’re a fish, you want to have another fish around you of a certain distance. Simple rules. What’s important here is data is more important than the algorithms.

Here is an example taken from E&P Magazine this month (January). For several years this company called Solution Seekers has been training on production data using a variety of different data and looking for patterns to develop best practice drilling recommendations. Kind of a cool big-picture kind of a concept.

So machine learning training rules are simple – the real value is in the classification results; it’s the data that builds the complexity. My question to you is: does this really address the right questions? If it does, it’s extremely valuable stuff. If it misses the direction of where we’re going – the geologic question – it’s not that useful.

Solving Interpretation Problems using Machine Learning on Multi-Attribute, Sample-Based Seismic Data
Presented by Deborah Sacrey, Owner of Auburn Energy
Challenges addressed in this webinar include:

  • Reducing risk in drilling marginal or dry holes
  • Interpretation of thin bedded reservoirs far below conventional seismic tuning
  • How to better understand reservoir characteristics
  • Interpretation of reservoirs in deep, pressured environments
  • Using the classification process to help with correlations in difficult stratigraphic or structural environments

The webinar is open to those interested in learning more about how the application of machine learning is key to seismic interpretation.

 
Deborah Sacrey

Owner

Auburn Energy

Deborah Sacrey is a geologist/geophysicist with 41 years of oil and gas exploration experience in the Texas, Louisiana Gulf Coast, and Mid-Continent areas of the US. Deborah specializes in 2D and 3D interpretation for clients in the US and internationally.

She received her degree in Geology from the University of Oklahoma in 1976 and began her career with Gulf Oil in Oklahoma City. She started Auburn Energy in 1990 and built her first geophysical workstation using the Kingdom software in 1996. Deborah then worked closely with SMT (now part of IHS) for 18 years developing and testing Kingdom. For the past eight years, she has been part of a team to study and bring the power of multi-attribute neural analysis of seismic data to the geoscience community, guided by Dr. Tom Smith, founder of SMT. Deborah has become an expert in the use of the Paradise® software and has over five discoveries for clients using the technology.

Deborah is very active in the geological community. She is past national President of SIPES (Society of Independent Professional Earth Scientists), past President of the Division of Professional Affairs of AAPG (American Association of Petroleum Geologists), Past Treasurer of AAPG and Past President of the Houston Geological Society. She is currently the incoming President of the Gulf Coast Association of Geological Societies (GCAGS) and is a member of the GCAGS representation on the AAPG Advisory Council. Deborah is also a DPA Certified Petroleum Geologist #4014 and DPA Certified Petroleum Geophysicist #2. She is active in the Houston Geological Society, South Texas Geological Society and the Oklahoma City Geological Society (OCGS).

Interpretation of DHI Characteristics with Machine Learning

By: Rocky Roden and ChingWen Chen, Ph.D.
Published with permission: First Break
Volume 35, May 2017

Introduction

In conventional geological settings, oil companies routinely evaluate prospects for their drilling portfolio where the process of interpreting seismic amplitude anomalies as Direct Hydrocarbon Indicators (DHIs) plays an important role. DHIs are an acoustic response owing to the presence of hydrocarbons and can have
a significant impact on prospect risking and determining well locations (Roden et al., 2005; Fahmy 2006; Forrest et al., 2010; Roden et al., 2012; Rudolph and Goulding, 2017). DHI anomalies are caused by changes in rock physics properties (P and S wave velocities, and density) typically of the hydrocarbon-filled
reservoir in relation to the encasing rock or the brine portion of the reservoir. Examples of DHIs include bright spots, flat spots, character/phase change at a projected oil or gas/water contact, amplitude conformance to structure, and an appropriate amplitude variation with offset on gathers. Many uncertainties should be considered and analyzed in the process of assigning a probability of success and resource estimate range before including a seismic amplitude anomaly prospect in an oil company’s prospect portfolio (Roden et al., 2012).

Seismic amplitude anomalies that are DHIs have played a major role in oil and gas exploration since the early 1970s (Hilterman, 2001). The technology and methods to identify and risk seismic amplitude anomalies have advanced considerably over the years, especially with the use of AVO (Amplitude vs. Offset) and improved acquisition and processing seismic technology (Roden et al., 2014). The proper evaluation of seismic direct hydrocarbon indicators for appropriate geologic settings has proven to have a significant impact on risking prospects. Rudolph and Goulding (2017) indicate from an ExxonMobil database of prospects that DHI-based prospects had over twice the success rate of non-DHI prospects on both a geologic and economic basis. In an industry-wide database of DHI prospects from around the world, Roden et al. (2012) indicate that when a prospect has a >20% DHI Index, a measure of the risk associated with DHI characteristics, almost all the wells were successful. Even with the use of advanced seismic technology and well-equipped interpretation workstations, the interpretation of DHI characteristics is not always easy or straightforward.

A key technology employed in evaluating potential DHI features is seismic attributes. Seismic attributes are any measurable property of seismic data including stacked or prestack data. Seismic attributes can be computed on a trace, multiple traces, on an entire volume, over isolated windows, on a horizon, and in either
time or depth. There are hundreds of seismic attributes generated in our industry (Brown, 2004; Chen and Sidney, 1997; Chopra and Marfurt, 2007; Taner, 2003), and they can be generally categorized as instantaneous, geometric, AVO, seismic inversion, and spectral decomposition attributes. The instantaneous, AVO, and inversion attributes are typically utilized to highlight and identify DHI features. For example, amplitude envelope, average energy, and sweetness are good instantaneous attributes for displaying how amplitudes stand out above the background, potentially identifying a bright spot and a potential hydrocarbon accumulation. AVO attributes such as intercept times gradient, fluid factor, Lambda/Mu/Rho, and (far offset minus near offset) times the far offset can help to identify hydrocarbon-bearing reservoirs (Roden et al., 2014). However, not all amplitude anomalies are DHIs, and interpreting numerous seismic attributes can be complicated and at times confusing. In addition, it is almost impossible for geoscientists to understand how numerous seismic attributes (>3) interrelate.
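
As a hedged sketch of the kinds of attributes just mentioned, the snippet below computes an amplitude envelope from a stand-in full-stack volume and the (far minus near) times far AVO attribute from stand-in offset volumes; all arrays are random placeholders rather than real seismic data.

```python
# Sketch: two attribute types mentioned above, computed from placeholder volumes.
import numpy as np
from scipy.signal import hilbert

full_stack = np.random.randn(50, 50, 200)        # stand-in for a full-stack volume
near, far = np.random.randn(2, 50, 50, 200)      # stand-ins for near/far offset volumes

# Instantaneous attribute: amplitude envelope (magnitude of the analytic signal along time).
envelope = np.abs(hilbert(full_stack, axis=-1))

# AVO attribute: (far - near) * far, which tends to emphasize Class 2/3 gas-sand responses.
far_minus_near_times_far = (far - near) * far
```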

Over the last few years, machine learning has evolved to help interpreters handle numerous and large volumes of data (e.g. seismic attributes) and help to understand how these different types of data relate to each other. Machine learning uses computer algorithms that iteratively learn from the data and independently adapt to produce reliable, repeatable results. We incorporate a machine learning workflow where principal component analysis (PCA) and self-organizing maps (SOM) are employed to analyze combinations of seismic attributes for meaningful patterns that correspond to direct hydrocarbon indicators. A machine learning multi-attribute approach with the proper input parameters can help interpreters to more efficiently and accurately evaluate DHIs and help reduce risk in prospects and projects.

Interpreting DHIs

Important DHI Characteristics

Table 1 Most important DHI characteristics for AVO classes 2 and 3 as determined by Forrest et al. (2010) and Roden et al. (2012)

DHI characteristics are usually associated with anomalous seismic responses in a trapping configuration: structural traps, stratigraphic traps, or a combination of both. These include bright spots, flat spots, amplitude conformance to structure, etc. DHI anomalies are also compared to other features such as models, similar events, background trends, proven productive anomalies, and geologic features. DHI indicators can also be located below presumed trapped hydrocarbons where shadow zones or velocity pull-down effects may be present. DHI effects can even be present dispersed in the sediment column in the form of gas chimneys or clouds. Forrest et al. (2010) and Roden et al. (2012) have documented the most important DHI characteristics based on well success rates in an industry-wide database of DHI prospects. Other than the amplitude strength above background (bright spots), Table 1 lists these DHI characteristics by AVO classes 2 and 3. These two AVO classes (Rutherford and Williams, 1989) relate to the amplitude with offset response from the top of gas sands which represent the specific geologic settings where most DHI characteristics are found. Therefore, the application of machine learning employing seismic multi-attribute analysis may help to clearly define DHI characteristics and assist the interpreter in making a more accurate assessment of prospect risk.

Class 3 DHI Characteristics

Table 2 Most important Class 3 DHI characteristics as denoted by Forrest et al. (2010) and Roden et al. (2012) and a designation of typical instantaneous attributes that identify these characteristics. Not all instantaneous attributes by themselves are conducive to identifying the top DHI characteristics.

Multi-attribute machine learning workflow

With the goal of identifying DHI characteristics, an interpreter must determine the specific attributes to employ in a machine learning workflow. A geoscientist can select appropriate attributes based on their previous knowledge and experience to define DHIs in a specific play or trend. Table 2 lists several common instantaneous attributes and the associated stacked seismic data DHI characteristics they tend to identify. These relationships are of course subjective and depend on the geologic setting and data quality. Class 3 DHIs are usually interpreted on full stack volumes and/or offset/angle volumes and their associated derivative products. Class 2 DHIs are typically interpreted on offset/angle volumes (especially far offset/angle volumes), gathers, and their associated derivative products including various types of crossplots. The relationships between attributes and DHI characteristics can be variable depending on the geologic setting and the seismic data quality. If it is unclear which attributes to employ, principal component analysis (PCA) can assist interpreters. PCA is a linear mathematical technique to reduce a large set of variables (seismic attributes) to a smaller set that still contains most of the variation of independent information in the larger set. In other words, PCA helps to determine the most meaningful seismic attributes.
The first principal component accounts for as much of the variability in the data as possible, and each succeeding component (orthogonal to every preceding one) accounts for as much of the remaining variability as possible. Given a set of seismic attributes generated from the same original volume, PCA identifies the attribute combinations that produce the largest variability in the data, and these combinations are the ones most likely to identify specific geologic features of interest, in this case specific DHI characteristics. Even though the first principal component is the linear attribute combination that best represents the variability of the bulk of the data, it may not identify the features of most interest to the interpreter. The interpreter should also evaluate succeeding principal components because they may be associated with DHI characteristics not identified by the first principal component. In fact, the top contributing seismic attributes from the first few principal components, when combined, often produce the best results for DHI delineation. In other words, PCA is a tool that, employed in an interpretation workflow with a geoscientist’s knowledge of DHI-related attributes, can point to meaningful seismic attributes and improve interpretation results. It is logical, therefore, that a PCA evaluation may provide important information on appropriate seismic attributes to take into a self-organizing map analysis.
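
As a rough illustration of this attribute-screening step, the sketch below applies PCA to a standardized matrix of multi-attribute samples using scikit-learn and lists the attributes that load most heavily on the first principal component. The attribute names, array shapes, and random placeholder data are assumptions for illustration only and are not taken from the case studies in this paper.

```python
# Minimal PCA sketch for ranking seismic attributes (illustrative data only).
# Each row is one sample from the analysis window; each column is one attribute.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

attribute_names = ["sweetness", "envelope", "inst_frequency", "thin_bed",
                   "rel_acoustic_impedance", "hilbert", "cos_inst_phase", "amplitude"]

samples = np.random.rand(10000, len(attribute_names))   # placeholder attribute samples

# Standardize so attributes with large numeric ranges do not dominate the analysis
X = StandardScaler().fit_transform(samples)

pca = PCA()
pca.fit(X)

# Variance explained by each principal component
print("Explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))

# Attributes contributing most to the first principal component
loadings = np.abs(pca.components_[0])
for name, weight in sorted(zip(attribute_names, loadings), key=lambda p: -p[1]):
    print(f"{name:24s} {weight:.3f}")
```

In practice the same ranking would be repeated for the second and third principal components since, as noted above, attributes that load on succeeding components may be the ones that respond to specific DHI characteristics.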

After appropriate seismic attributes have been selected, the next level of interpretation requires pattern recognition and classification of the often subtle information embedded in the seismic attributes. Taking advantage of today’s computing technology, visualization techniques, and understanding of appropriate parameters, self-organizing maps (SOMs) efficiently distill multiple seismic attributes into classification and probability volumes (Smith and Taner, 2010; Roden et al., 2015). Developed by Kohonen in 1982 (Kohonen, 2001), the SOM is a powerful non-linear cluster analysis and pattern recognition approach that helps interpreters identify patterns in their data that can relate to geologic features and DHI characteristics. The samples of each selected seismic attribute from the desired window in a 3D survey are placed in attribute space, where they are normalized or standardized to the same scale. Also in attribute space are neurons: points that start at random locations, train on the attribute data, and mathematically hunt for natural clusters of information in the seismic data. After the SOM analysis, each neuron will have identified a natural cluster as a pattern in the data. These clusters reveal significant information about the classification structure of natural groups that is difficult to view any other way. In addition to the resultant classification volume, a probability volume is also generated, based on the Euclidean distance from each data point to its associated winning neuron (Roden et al., 2015); the winning neuron is the one nearest to the data point in attribute space. In practice, a low classification probability corresponds to areas that are quite anomalous, as opposed to high probability zones that relate to regional and common events in the data.
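
To make the mechanics of this step concrete, the following is a toy Kohonen SOM written in plain NumPy: neurons start at random positions in attribute space, the neuron nearest each randomly drawn sample wins, and the winner and its neighbours on the 2D map are pulled toward that sample as the learning rate and neighbourhood radius decay. This is a generic sketch under assumed parameters, not the implementation used to produce the results in this paper.

```python
# Toy self-organizing map: neurons hunt for natural clusters in attribute space.
import numpy as np

def train_som(X, rows=5, cols=5, n_iter=10000, lr0=0.5, sigma0=2.0, seed=0):
    """X: (n_samples, n_attributes) standardized attribute samples."""
    rng = np.random.default_rng(seed)
    weights = rng.random((rows * cols, X.shape[1]))   # neuron positions in attribute space
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    for t in range(n_iter):
        lr = lr0 * np.exp(-t / n_iter)                # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)          # decaying neighbourhood radius
        x = X[rng.integers(len(X))]                   # random training sample
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Gaussian neighbourhood on the 2D map, centred on the winning neuron
        d = np.linalg.norm(grid - grid[winner], axis=1)
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)    # pull neurons toward the sample
    return weights

def classify(X, weights):
    """Winning-neuron index for every sample, i.e. its natural-cluster label."""
    return np.argmin(np.linalg.norm(X[:, None, :] - weights[None, :, :], axis=2), axis=1)
```

A 5X5 map trained this way yields 25 winning-neuron labels, which is why the class 3 case study below is displayed with a 25-entry 2D color map.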

To interpret the SOM classification results, each neuron is displayed in a 2D color map. Highlighting a neuron or combination of neurons in a 2D color map identifies their associated natural clusters or patterns in the survey because each seismic attribute data point retains its physical location in the 3D survey. The identification of these patterns in the data enables interpreters to define geology not easily interpreted from conventional seismic amplitude displays alone. These visual cues are facilitated by an interactive workstation environment.

Low probability anomalies

After the SOM process and the natural clusters have been identified, Roden et al. (2015) describe the calculation of a classification probability. This probability estimates how well each data point fits its winning neuron classification. The classification probability ranges from zero to 100% and is based on the goodness of fit of the Euclidean distances between the multi-attribute data points and their associated winning neurons. Areas in the survey where the classification probability is low correspond to areas where no winning neuron fits the data very well; in other words, anomalous regions in the survey are marked by low probability. DHI characteristics are often associated with low classification probabilities because they are anomalous features that are usually not widespread throughout the survey.
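
The exact probability formulation is given in Roden et al. (2015); as a rough stand-in that builds on the toy SOM above, the sketch below scores each sample by how far it lies from its winning neuron relative to that neuron's own distance distribution, so poorly fit samples receive probabilities near zero. The Gaussian tail used here is an assumption for illustration, not the published formula.

```python
# Hedged stand-in for SOM classification probability: low values flag samples
# that sit far from their winning neuron compared with that neuron's members.
import numpy as np
from scipy import stats

def classification_probability(X, weights, labels):
    dist = np.linalg.norm(X - weights[labels], axis=1)   # distance to winning neuron
    prob = np.empty_like(dist)
    for n in np.unique(labels):
        mask = labels == n
        mu, sd = dist[mask].mean(), dist[mask].std() + 1e-12
        prob[mask] = stats.norm.sf(dist[mask], loc=mu, scale=sd)   # one-sided tail probability
    return prob

# Example: highlight the anomalous samples, as in the low probability displays below
# low_prob_mask = classification_probability(X, weights, labels) < 0.01
```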

SOM analysis for Class 3 DHI characteristics

A class 3 geologic setting is associated with low acoustic impedance reservoirs that are relatively unconsolidated. These reservoirs typically have porosities greater than 25% and velocities less than 2700 m/sec. The following DHI characteristics are identified by multi-attribute SOM analyses in an offshore Gulf of Mexico class 3 setting. This location is a shallow oil and gas field (approximately 1200 m deep) in a water depth of 140 m that displayed a high seismic amplitude response. Two producing wells with approximately 30 m of pay each were drilled in this field on the upthrown side of an east-west trending normal fault. Before these wells were drilled, operators had drilled seven unsuccessful wells in the area based on prominent seismic amplitudes that proved to be either wet or low-saturation gas. Therefore, the goal was to identify as many DHI characteristics as possible to reduce risk and accurately define the field, and to develop SOM analysis approaches that can help identify other prospective targets in the area.

Initially, 20 instantaneous seismic attributes were run through PCA in a zone 20 ms above and 150 ms below the top of the mapped producing reservoir. Based on the PCA results, various combinations of attributes were employed in different SOM analyses, with neuron counts of 3X3, 5X5, 8X8, 10X10, and 12X12 employed for each set of seismic attributes. The results described below, which define the different DHI characteristics, were interpreted from this machine learning multi-attribute workflow. All of the figures associated with this example are from a SOM analysis with a 5X5 neuron count that employed the instantaneous attributes listed below (a sketch of how several of these attributes are commonly derived follows the list).

  • Sweetness
  • Envelope
  • Instantaneous Frequency
  • Thin Bed
  • Relative Acoustic Impedance
  • Hilbert
  • Cosine of Instantaneous Phase
  • Final Raw Migration
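
As background for these inputs, the sketch below shows how several of the listed attributes (envelope, the Hilbert or quadrature trace, cosine of instantaneous phase, instantaneous frequency, and a common sweetness formula) are typically derived from a single trace via the analytic signal in SciPy. This is a generic textbook formulation under an assumed sample rate, not the attribute engine used in the study; the thin bed attribute and the final raw migration input are omitted.

```python
# Common single-trace instantaneous attributes from the analytic signal.
import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(trace, dt):
    """trace: 1D seismic trace; dt: sample interval in seconds."""
    analytic = hilbert(trace)                          # complex analytic signal
    envelope = np.abs(analytic)                        # reflection strength
    quadrature = np.imag(analytic)                     # "Hilbert" (90-degree phase) trace
    phase = np.unwrap(np.angle(analytic))              # instantaneous phase (radians)
    cos_phase = np.cos(phase)                          # cosine of instantaneous phase
    inst_freq = np.gradient(phase, dt) / (2 * np.pi)   # instantaneous frequency (Hz)
    # Sweetness is often taken as envelope / sqrt(instantaneous frequency);
    # the clip guards against zero or negative frequency samples.
    sweetness = envelope / np.sqrt(np.clip(inst_freq, 1e-6, None))
    return envelope, quadrature, cos_phase, inst_freq, sweetness

# Example with a synthetic trace sampled at 4 ms
# attrs = instantaneous_attributes(np.random.randn(751), dt=0.004)
```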

SOM classification of a reservoir

Figure 1 From the top of the producing reservoir: a) time structure map in contours with an amplitude overlay in colour and b) SOM classification with low probability (less than 1%) denoted by white areas. The yellow line in b) represents the downdip edge of the high amplitude zone designated in a).

Figure 1a displays a time structure map, denoted by the contours, with an amplitude overlay (color) from the mapped top of the reservoir in this field. The horizon at the top of the reservoir was picked on a trough (low impedance) on zero-phase seismic data (SEG normal polarity). Figure 1a indicates a relatively good amplitude conformance to structure. Figure 1b displays the classification probability from the SOM analysis at the top of the reservoir at the same scale as Figure 1a; any data points with a probability of less than 1% are displayed in the white areas, indicating that the top of this reservoir exhibits an anomalous response. Comparing Figures 1a and 1b, it is apparent that the low probability area corresponds closely to the amplitude conformance to structure, as denoted by the yellow outline in Figure 1b. This confirms the identification of the productive area with low probability and demonstrates the efficacy of this SOM approach. The consistency of the low probability SOM response in the field is another positive DHI indicator. In fact, probabilities as low as 0.01% still produce a consistent response over the field, indicating how critical the evaluation of low probability anomalies is in the interpretation of DHI characteristics.

This field contains oil with a gas cap, and before drilling there were hints of possible flat spots on the seismic data suggesting hydrocarbon contacts, but the evidence was inconsistent and not definitive. Figure 2 displays a north-south vertical inline profile through the middle of the field; its location is denoted in Figure 1. Figure 2a exhibits the initial stacked amplitude data with the location of the field annotated. Figure 2b denotes the SOM analysis results of this same vertical inline 9411, which incorporated the eight instantaneous attributes listed above in a 5X5 neuron matrix. The associated 2D color map in Figure 2b denotes the 25 natural patterns or clusters identified from the SOM process. It is apparent in this figure that the reservoir and portions of the gas/oil contact and the oil/water contact are easily identified. This is more easily seen in Figure 2c, where the 2D color map indicates that the neurons highlighted in grey (20 and 25) define the hydrocarbon-bearing portions of the reservoir above the hydrocarbon contacts, and the flat spots interpreted as hydrocarbon contacts are designated by the rust-colored neuron (15). The locations of the reservoir and hydrocarbon contacts are corroborated by well control. The southern edge of the reservoir is revealed in the expanded insets on the right. Downdip of the field is another undrilled anomaly defined by the SOM analysis that exhibits similar DHI characteristics identified by the same neurons.

Figure 2 North-south vertical profile 9411 through the middle of the field: a) stacked seismic amplitude display with the field location designated, b) SOM classification with 25 neurons indicated by the 2D colour map over a 170 ms window, and c) three neurons highlighting the reservoir above the oil/water and gas/oil contacts and the hydrocarbon contacts (flat spots). The expanded insets denote the details from the SOM results at the downdip edge of the field.

Figure 3 West-east vertical profile 3183 through the field: a) stacked seismic amplitude display denoting tie with line 9411, b) SOM classification with 25 neurons indicated by the 2D colour map, and c) three neurons highlighting the gas/oil and oil/water contacts and the hydrocarbon contacts (flat spots). The expanded insets clearly display the edge of the field in the SOM classifications.

West-to-east crossline 3179 over the field is displayed in Figure 3, with its location designated in Figure 1. The stacked seismic amplitude display of Figure 3a indicates that its tie with inline 9411 is located in the updip portion of the reservoir, where there is an apparent gas/oil contact. Figure 3b exhibits the SOM results of this west-east line utilizing 25 neurons, as designated by the 2D color map. Similar to Figure 2b, Figure 3b indicates that the SOM analysis has clearly defined the reservoir by the grey neurons (20 and 25) and the hydrocarbon contacts by the rust-colored neuron (15). Towards the west, the rust-colored neuron (15) denotes the oil/water contact as defined by the flat spot on this crossline. Figure 3c displays only neurons 15, 20, and 25 to clearly define the reservoir, its relationship to the hydrocarbon contacts, and the contacts themselves. The three enlargements on the left are added for detail.

What is very evident from the SOM results in both Figures 2 and 3 is a clear character change and definition of the downdip edges of the reservoir. The downdip edge definition of an interpreted trap is an important DHI characteristic that is clearly defined by the SOM analysis in this field. The expanded insets in Figures 2 and 3 indicate that the SOM results provide higher resolution than the amplitude data alone, and the edge terminations of the field are easily interpreted. These results substantiate that the SOM process with an appropriate set of seismic attributes can resolve thin beds better than conventional amplitude data.

SOM analysis for Class 2 DHI characteristics

A class 2 geologic setting contains reservoirs more consolidated than class 3, and the acoustic impedance of the reservoirs is about equal to that of the encasing sediments. Typical porosities range from 15 to 25% and velocities from 2700 to 3600 m/sec for these reservoirs. In class 2 settings, AVO attributes play a larger role in the evaluation of DHI characteristics than in class 3 (Roden et al., 2014). This example is located onshore Texas and targets Eocene sands at approximately 1830 m depth. The initial well (Well B) was drilled just downthrown on a small southwest-northeast regional fault, with a subsequent well (Well A) drilled on the upthrown side. The reservoirs in the wells are approximately 5 m thick and composed of thinly laminated sands. The tops of these sands produce a class 2 AVO response, with near-zero amplitude on the near offsets and an increase in negative amplitude with offset (SEG normal polarity).

Figure 4 Time structure map at the top of the producing Eocene reservoir.

The goal of the multi-attribute analysis was to determine the full extent of the reservoirs revealed by any DHIs, because both wells were performing much better than the size of the amplitude anomaly from the stack and far offset seismic data indicated. Figure 4 is a time structure map from the top of the Eocene reservoir. This map indicates that both wells are located in stratigraphic traps, with Well A situated on southeast dip and Well B located on northwest dip that terminates into the regional fault. Anomaly conformance to downdip closure cannot be assessed in the Well A reservoir because the areal extent of the reservoir is confined to a north-south channel and the downdip conformance location is very narrow. In the Well B reservoir, the downdip edge of the reservoir terminates into the fault, so downdip conformance cannot be determined either. The updip portion of the reservoir at Well B thins out towards the southeast, forming an updip seal for the stratigraphic trap. The Well B reservoir was interpreted to have a stacked-data amplitude anomaly of approximately 70 acres, and the Well A reservoir was determined to have an amplitude anomaly of only about 34 acres (Figure 5a).

Figure 5 At the top of the Eocene reservoir: a) stacked seismic amplitude, b) SOM classification with 64 neurons, and c) same classification as the middle display with low probability of less than 30% designated by the white areas.

Figure 6 North-south arbitrary line through Wells A and B with the location designated in Figure 4: a) stacked seismic amplitude display, b) SOM classification with 64 neurons indicated by the 2D colour map, and c) SOM classification with only four neurons in grey highlighting both the reservoirs associated with the wells.

The gathers associated with the 3D PSTM survey over this area were conditioned and employed in the generation of specific AVO attributes conducive to the identification of class 2 AVO anomalies in this geologic setting. The ten AVO attributes used for the SOM analysis were selected from a PCA of 18 AVO attributes. The AVO attributes selected for the SOM analysis are listed below (a least-squares sketch of the Shuey two-term intercept and gradient follows the list):

  • Far – Near
  • Shuey 2 term approximation – Intercept
  • Shuey 2 term approximation – Gradient
  • Shuey 2 term approximation – 1/2 (Intercept + Gradient)
  • Shuey 2 term approximation – 1/2 (Intercept – Gradient)
  • Shuey 3 term approximation – Intercept
  • Shuey 3 term approximation – Gradient
  • Shuey 3 term approximation – 1/2 (Intercept + Gradient)
  • Verm-Hilterman approximation – Normal Incidence
  • Verm-Hilterman approximation – Poisson’s Reflectivity

Several different neuron counts were generated with these ten AVO attributes, and the results in the associated figures are from the 8X8 (64 neuron) count. Figure 5b displays the SOM results from the top of the Eocene reservoirs. The associated 2D color map indicates that neurons 47, 58, 62, and 63 define the reservoirs drilled by the two wells. Comparing the areal distribution of the amplitude-defined reservoirs in Figure 5a to the SOM-defined reservoirs in Figure 5b indicates that the latter is larger. In fact, the Well A amplitude-defined area of 34 acres compares to approximately 95 acres denoted by the four neurons in Figure 5b. The Well B amplitude-defined reservoir area was determined to be 70 acres, whereas the SOM-defined area is approximately 200 acres. The SOM-defined areal distributions were determined to be consistent with engineering and pressure data in the two wells. The anomaly consistency in the mapped target area is evident in Figure 5b, which defines the extent of the producing reservoirs better than the amplitudes do.

Figure 5c displays the same SOM results as Figure 5b, but with classification probabilities of less than 30% displayed in white. It shows that the core of the reservoir at each of the well locations exhibits low probability. This low probability defines anomalous zones based on the ten AVO attributes run in the SOM classification process.

Figure 7 Northeast-southwest inline 2109 through Well B with location designated in Figure 4: a) stacked seismic amplitude display, b) SOM classification with 64 neurons as denoted by the 2D colour map, and c) SOM classification with only four grey neurons highlighting the reservoir at Well B. The expanded insets display the updip edges of the reservoir with the SOM results clearly defining the updip seal edge of the field.

Figure 6 is a north-south arbitrary line running through both Wells A and B, with its location denoted in Figure 4. Figure 6a is the conventional stacked seismic amplitude display of this line. Figure 6b displays the SOM results, with the reservoirs at both wells defined by neurons 47, 58, 62, and 63. In Figure 6c only these four neurons are turned on, defining the location of the reservoirs on this line. The four neurons clearly define the field, the southern downdip limits of the reservoir associated with Well A, and the updip limits of the reservoir at Well B, where the sands are thinning out to the south. Figure 7 is northwest-southeast inline 2109, with its location denoted in Figure 4. Figure 7a is the stacked amplitude display, and Figure 7b displays the SOM results defining the limits of the Well B reservoir as it terminates at the fault to the northwest. Figure 7c, with only the four reservoir-defining neurons displayed, indicates the updip thinning of the reservoir much more clearly than the amplitudes alone. The insets of Figures 7b and 7c illustrate the details in the updip portion of the reservoir defined by the SOM process. The SOM analysis incorporates ten AVO attributes and is not limited by the conventional amplitude/frequency limitations of thickness and areal distribution. The AVO attributes selected for this SOM analysis are specifically designed to bring out the appropriate AVO observations for a class 2 setting. It is clear from these results that the AVO attributes in this SOM analysis distinguish the anomalous areas associated with the producing reservoirs from the equivalent events and zones outside these stratigraphic traps.

Conclusions

For more than 40 years, seismic amplitudes have been employed to interpret DHIs in an attempt to better define prospects and fields. There are dozens of DHI characteristics, associated primarily with class 2 and 3 geologic settings. Hundreds of seismic attributes have been developed in an effort to derive more information from the original seismic amplitude data and further improve DHI interpretations. A machine learning workflow incorporating seismic attributes, PCA, and SOM has been shown to produce excellent results in the interpretation of DHIs. This machine learning workflow was applied to data over class 2 and 3 reservoirs in an effort to interpret the most important DHI characteristics as defined by a worldwide industry database. The SOM analysis employing instantaneous attributes in a class 3 setting successfully identified the top DHI characteristics, especially those defining edge effects and hydrocarbon contacts (flat spots). AVO attributes conducive to providing information in class 2 settings, incorporated in a SOM analysis, allowed the interpretation of DHI characteristics that defined the areal extent of the producing reservoirs better than amplitudes by clearly denoting the stratigraphic trap edges.

Low SOM classification probabilities have also proven helpful in identifying DHI characteristics. These low probabilities correspond to data regions where the multi-attribute samples differ markedly from their associated winning neurons, each of which defines a natural cluster or pattern in the data. Anomalous regions in the data, such as DHI characteristics, are therefore marked by low probability. This approach of examining low probabilities proved helpful in identifying DHI characteristics in both class 2 and 3 settings.

An important observation in these two case studies is that the use of appropriate seismic attributes in a SOM analysis can not only identify DHI characteristics not initially interpreted but can also increase or decrease confidence in already identified characteristics. This multi-attribute machine learning workflow provides a methodology to produce more accurate identification of DHI characteristics and a better risk assessment of a geoscientist’s interpretation.

Acknowledgments

The authors would like to thank the staff of Geophysical Insights for the research and development of the machine learning applications used in this paper. We would also like to thank the Rose & Associates DHI consortium, which has provided extremely valuable information on DHI characteristics. The seismic data in the offshore case study is courtesy of Petroleum Geo-Services. Thanks also go to Deborah Sacrey and Mike Dunn for reviewing the paper. Finally, we would like to thank Tom Smith for reviewing this paper and for the inspiration to push the boundaries of interpretation technology.

References

Brown, A.R. [2004]. Interpretation of three-dimensional seismic data. AAPG Memoir 42/SEG Investigations in Geophysics No. 9, sixth edition.

Chen, Q. and Sidney, S. [1997]. Seismic attribute technology for reservoir forecasting and monitoring. The Leading Edge, 16, 445-448.

Chopra, S. and Marfurt, K. [2007]. Seismic attributes for prospect identification and reservoir characterization. SEG Geophysical Development Series No. 11.

Fahmy, W.A. [2006]. DHI/AVO best practices methodology and applications: a historical perspective. SEG/EAGE Distinguished Lecture presentation.

Forrest, M., Roden, R. and Holeywell, R. [2010]. Risking seismic amplitude anomaly prospects based on database trends. The Leading Edge, 5, 936-940.

Hilterman, F.J. [2001]. Seismic amplitude interpretation. Distinguished Instructor Short Course, SEG/EAGE.

Kohonen, T. [2001]. Self-Organizing Maps. Third extended edition, Springer Series in Information Sciences, Vol. 30.

Roden, R., Forrest, M. and Holeywell, R. [2005]. The impact of seismic amplitudes on prospect risk analysis. The Leading Edge, 7, 706-711.

Roden, R., Forrest, M. and Holeywell, R. [2012]. Relating seismic interpretation to reserve/resource calculations: Insights from a DHI consortium. The Leading Edge, 9, 1066-1074.

Roden, R., Forrest, M., Holeywell, R., Carr, M. and Alexander, P.A. [2014]. The role of AVO in prospect risk assessment. Interpretation, 2, SC61-SC76.

Roden, R., Smith, T. and Sacrey, D. [2015]. Geologic pattern recognition from seismic attributes: Principal component analysis and self-organizing maps. Interpretation, 3, SAE59-SAE83.

Rudolph, K.W. and Goulding, F.J. [2017]. Benchmarking exploration predictions and performance using 20+ yr of drilling results: One company’s experience. AAPG Bulletin, 101, 161-176.

Rutherford, S.R. and Williams, R.H. [1989]. Amplitude-versus-offset variations in gas sands. Geophysics, 54, 680-688.

Smith, T. and Taner, M.T. [2010]. Natural clusters in multi-attribute seismics found with self-organizing maps. Extended Abstracts, Robinson-Treitel Spring Symposium by GSH/SEG, March 10-11, 2010, Houston, TX.

Taner, M.T. [2003]. Attributes revisited. http://www.rocksolidimages.com/pdf/attrib_revisited.htm, accessed 13 August 2013.

ROCKY RODEN owns his own consulting company, Rocky Ridge Resources Inc., and works with several oil companies on technical and prospect evaluation issues. He is also a principal in the Rose & Associates DHI Risk Analysis Consortium and was Chief Consulting Geophysicist with Seismic Micro-Technology. He is a proven oil finder with 36 years in the industry and extensive knowledge of modern geoscience technical approaches (past Chairman, The Leading Edge Editorial Board). As Chief Geophysicist and Director of Applied Technology for Repsol-YPF, his role comprised advising corporate officers, geoscientists, and managers on interpretation, strategy, and technical analysis for exploration and development in offices in the U.S., Argentina, Spain, Egypt, Bolivia, Ecuador, Peru, Brazil, Venezuela, Malaysia, and Indonesia. He has been involved in the technical and economic evaluation of Gulf of Mexico lease sales, farmouts worldwide, and bid rounds in South America, Europe, and the Far East. Previous work experience includes exploration and development at Maxus Energy, Pogo Producing, Decca Survey, and Texaco. He holds a BS in Oceanographic Technology-Geology from Lamar University and an MS in Geological and Geophysical Oceanography from Texas A&M University. Rocky is a member of SEG, AAPG, HGS, GSH, EAGE, and SIPES.
CHINGWEN CHEN, PH.D. received an M.S. (2007) and a Ph.D. (2011) in Geophysics from the University of Houston, studying global seismology. After graduation, she joined the industry as a geophysicist with Noble Energy, where she supported both exploration and development projects. Dr. Chen has a great passion for quantitative seismic interpretation, and more specifically rock physics, seismic imaging, and multi-attribute seismic analysis. She later joined Geophysical Insights as a Senior Geophysicist, where the application of machine learning techniques became a focus of her work. Since 2015, her primary interest has been in increasing the efficiency of seismic interpretation.