Abstract:
The tools of machine learning, petrophysics, well logs, and bi-variate statistics are applied in an integrated methodology to identify and discriminate reservoirs with hydrocarbon storage capacity. While the use of any one of these methods is familiar, their application together is unique. The webinar presents the process and results from two different geologic settings:
- Conventional: Channel slope and fan facies environments offshore Mexico
- Unconventional: Niobrara chalk and shale formation in the U.S.
The webinar is based on work published initially by Leal et al. (2019), and the methodology continues to yield excellent results in conventional and unconventional geologic settings alike.
Petrophysics is used to define sedimentary facies and their Effective Porosity using well logs. Petrophysical ranges are grouped in classes and labeled as categorical variables, specifically “Net Reservoir” and “Not Reservoir.” First, a lithology cutoff such as Vshale is applied, and a specific Effective Porosity range defines a “Net Reservoir” condition. Neurons from machine learning are compared to the Net Reservoir condition using bi-variate statistics, determining if there is a statistical relationship between neurons and sedimentary facies. The result is a histogram that reveals which neurons are most responsive to the Net Reservoir condition, enabling a prediction of similar sedimentary facies utilizing 3D seismic volumes across a region of interest.
Carrie Laudon
Senior Geophysical Consultant – Geophysical Research, LLC (d/b/a Geophysical Insights)
Carolan (Carrie) Laudon holds a PhD in geophysics from the University of Minnesota and a BS in geology from the University of Wisconsin-Eau Claire. She has been a Senior Geophysical Consultant with Geophysical Insights since 2017, working with Paradise®, the AI seismic workbench. Her prior roles include Vice President of Consulting Services and Microseismic Technology for Global Geophysical Services and 17 years with Schlumberger in technical management and sales. Dr. Laudon’s career has taken her to Alaska; Aberdeen, Scotland; Houston, Texas; Denver, Colorado; and Reading, England. She spent five years early in her career with ARCO Alaska as a seismic interpreter for the Central North Slope exploration team.
Transcript
Carrie Laudon, Presenter:
Good morning, afternoon, and evening. Thank you for being with us today. My name is Carrie Laudon. I’m a senior geophysical consultant with Geophysical Insights, and today I’ll be presenting “Identify Reservoirs by Combining Machine Learning, Petrophysics and Bi-variate Statistics.” I’d also like to acknowledge a second author, since the talk is based on work by Fabian Rada and his colleagues at Petroleum Oil & Gas Services, Inc. The workflow I’m presenting today was developed by Fabian and his colleagues and published by Leal and others in 2019 under the title Net reservoir discrimination through multi-attribute analysis at single sample scale, in the journal First Break. Fabian has also provided me with additional images and slides from presentations he has given previously on the workflow. Here’s the agenda we’ll go through today. I’ll start with an overview of the end-to-end workflow I’m presenting, then introduce the Niobrara unconventional reservoir from the Denver-Julesburg Basin. We’ll have a short overview of self-organizing maps for multi-attribute, thin-bed detection, followed by calibrating SOMs with wells in the case of the Niobrara results. We do visual analysis via cross sections, our 3D viewer, and 2D color maps, and then look at some techniques for applying statistical measures utilizing well logs. I’ll then do the same for a second case from a deep water conventional reservoir setting.
The workflow we’re going through today takes you through five steps. Obviously we start with step one, and that’s the geological problem definition and the seismic volume QC. In step one, we’re considering the setting for the problem that we’re trying to solve, whether it’s faults or fractures, lithology, facies, porosity, DHIs, et cetera. Things that we should take into account as well: your sampling interval, your tuning bed thickness, your bin size, and whether you have any issues with your seismic data such as acquisition footprint or multiples. Steps two and three are essentially the generation of the attributes to use in your machine learning workflow. Step two draws on your own knowledge of attributes and the problem you’re trying to solve, and step three then uses principal component analysis to help you narrow down the list of attributes going into your self-organizing map. Step four is the SOM process itself, and step five is the calibration to the other information that you have about your reservoir; in this case, we’ll be focusing mostly on calibrating to well log data.
Okay. Let’s jump right into it with the Niobrara case. This slide contains an overview of the Niobrara study that we’re going to take you through. The location of the seismic survey is shown here in blue on this cartoon of the Wattenberg field and the Denver Basin. The Denver Basin is an asymmetric foreland basin of about 70,000 square miles east of the Rocky Mountains, covering parts of Wyoming, Colorado, Kansas, and Nebraska. The Wattenberg field is the main production within the basin; approximately 47,000 wells have been drilled there, with a production history going back to 1881. The initial production was primarily from vertical completions in the older J sandstones. The field is a basin-centered gas field, and starting around 2009 the operators turned their attention to drilling horizontal wells in the unconventional reservoir called the Niobrara formation.
The Niobrara formation is Late Cretaceous in age, deposited in the Western Interior Seaway, and consists of organic-rich shales interbedded with fractured chalk benches, which are informally termed A, B, and C. When the operators turned to drilling horizontal wells about 10-plus years ago, they quickly started encountering faults. At the time they didn’t have seismic data, so in the intervening time several 3D surveys were acquired, and in 2018 GPI and Geokinetics (at the time) provided us with a 3D survey to see if we could improve the seismic resolution within the reservoir interval of the Niobrara. On the right, we’ve got a blown-up well type section, and you can see we have a V-shale curve here on the left. Here’s a typical seismic trace from the survey at the well location. This was one millisecond data.
We have an effective porosity curve and a resistivity curve to the right. This is the first bench here, this small interval, the A bench, and that’s about 15 feet thick. The primary reservoir is the B, which you can see here near the small peak a little bit further below the top of the Niobrara. The C bench is a little thicker than the A and is deeper in the section, and the entire sequence that we analyzed runs from the top Niobrara to the Greenhorn, which you can see here on the well log. Over to the right is a typical seismic section, an inline from the survey, and we evaluated about a 60 millisecond interval in two-way time, sampled at one millisecond, between the top Niobrara and the top of the Greenhorn. Within this interval, we have the three chalk benches as well as the Codell sandstone, which is an older unit, highly heterogeneous, but also productive.
GPI and Fairfield acquired a five-phase multi-client survey over the northern part of the Wattenberg field. Our study area was a hundred square miles from Niobrara phase five, in the western portion of the multi-client survey. We also got 28 wells from the public COGCC database. Those were available to help with our time-to-depth ties as well as giving us the log control that we wanted to calibrate our SOM results. The wells highlighted in red have a full petrophysical evaluation, so those are the ones that we’ll use in the bi-variate statistics. The next step in our workflow is to look at the structural setting of the study area and consider what we need to take into account when designing our attributes and our self-organizing map study. Here in our study area of the Niobrara we have a two-way time structure map; west is to the bottom of the screen. We’re on a monoclinal structure coming up into the Front Range of the Rocky Mountains. To the left is an inset from Friedman and others on the Austin Chalk, which shows the kind of fractures you would predict for various structural features and, again, with the benches, the fracture porosity and permeability are an important component of production.
This is a geometric attribute extracted at the top of the Niobrara, the most positive curvature. In this slide, you can see that there are two predominant structural fabrics, northwest-southeast and northeast-southwest. These appear to set up fault compartments within the reservoir. The Niobrara formation itself is internally faulted and has a lot of bed-bounded faults, which seem to suggest perhaps more brittleness in the chalk units than in the overlying, uppermost shale section. Also recall that below the B we have the C bench as well as the Codell sandstone. The Fort Hays Limestone is a high acoustic impedance unit, but fairly tight and low porosity, so it is not typically targeted for drilling these days. The base event that we mapped was this peak that represents the top of the Greenhorn. For the next few minutes, I’m going to give you just a brief introduction to SOM. SOM is an unsupervised machine learning technology that we at Geophysical Insights have used extensively for multi-attribute seismic analysis.
I recognize that there are various levels of experience in our audience today with using SOM along with seismic data, but since we’re focusing more on the results today, I’m just going to give you a few slides on SOM. SOM is a method that uses attribute space to classify seismic samples. In this example, I’m just doing a plot of two attributes. Attributes one and two are X and Y, and these, you can see, have been classified into four clusters of multi-attribute samples. During SOM, within each cluster we introduce new multi-attribute samples called neurons, and the SOM neurons move and seek out natural clusters of characteristics in the seismic data. The neurons learn the characteristics of a data cluster through an iterative process of cooperative and competitive training, and when the learning is completed, each unique cluster is assigned to a neuron and the samples are classified.
That was a hypothetical cross plot of two attributes, and this is a similar but synthetic example that shows a classification by SOM of data points that have just two attributes. This synthetic example was made by our CEO, Tom Smith, and what we have is a model. You can see our time line here with reflection coefficients. We have a positive, low-amplitude peak, a peak-trough doublet, and then a weak trough down at the bottom. We have a wavelet to convolve with our reflection coefficient series. The traces were duplicated a hundred times and some amount of noise was added to the data. Going over to the cross plot, when you just plot the data points in X and Y space, you see that we do have quite a few clusters. Moving across, this has been put through a self-organizing map and assigned 64 neurons.
On the right here, you see the SOM classification of the same data. We have assigned the colors randomly on this 2D color map so you can see the natural clusters that the SOM has detected in the data set. Now coming back to our original model, where we’ve put the neuron numbers alongside the trace here, you can see each of the neurons that has been used to identify the clusters, which correspond to the actual events in the seismic data. So you can see here along the zero axis, you can see event nine, which is our trough event and event 57, which is our positive peak and, again, we have the Hilbert transform on the Y axis and the regular amplitude on the x-axis, so here’s our negative amplitude for our trough and our positive amplitude for our peak. Likewise, you can see our two doublet events, the 8, 55 pair and 20, 33 pair on the cross plot. This is an example, again, using a synthetic seismogram to show how the SOM classification process works.
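For readers who want a concrete picture of both halves of this synthetic example, here is a minimal sketch in Python: it builds a noisy synthetic gather from a simple reflection-coefficient series, takes the real amplitude and its Hilbert transform as the two attributes, and then classifies the samples with a small, hand-rolled SOM (competitive winner selection plus a cooperative Gaussian-neighborhood update). The wavelet frequency, event positions, noise level, grid size, and training schedule are all assumptions for illustration; this is not the Paradise implementation.

```python
import numpy as np
from scipy.signal import hilbert

def ricker(freq_hz, dt_s, length_s=0.128):
    """Ricker wavelet (standard analytic form); parameters here are assumed."""
    t = np.arange(-length_s / 2.0, length_s / 2.0, dt_s)
    pft2 = (np.pi * freq_hz * t) ** 2
    return (1.0 - 2.0 * pft2) * np.exp(-pft2)

# --- Synthetic model: RC series -> convolve -> duplicate 100x -> add noise ---
rng = np.random.default_rng(0)
rc = np.zeros(128)
rc[30] = 0.15                      # low-amplitude peak
rc[60], rc[66] = 0.40, -0.40       # peak-trough doublet
rc[100] = -0.10                    # weak trough
trace = np.convolve(rc, ricker(30.0, 0.001), mode="same")
traces = np.tile(trace, (100, 1)) + 0.02 * rng.normal(size=(100, 128))

# --- Two attributes per sample: amplitude (x) and Hilbert transform (y) ---
samples = np.column_stack([traces.ravel(),
                           np.imag(hilbert(traces, axis=1)).ravel()])

# --- Minimal SOM: competitive + cooperative training on an 8x8 neuron grid ---
def train_som(x_all, rows=8, cols=8, n_iter=5000, lr0=0.5, sigma0=3.0):
    neurons = rng.normal(scale=x_all.std(), size=(rows, cols, x_all.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for it in range(n_iter):
        x = x_all[rng.integers(len(x_all))]
        lr = lr0 * (1.0 - it / n_iter)                 # decaying learning rate
        sigma = 0.5 + sigma0 * (1.0 - it / n_iter)     # shrinking neighborhood
        dist = np.linalg.norm(neurons - x, axis=-1)    # competitive step: find winner
        winner = np.unravel_index(np.argmin(dist), dist.shape)
        d2 = ((grid - np.array(winner)) ** 2).sum(axis=-1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))           # cooperative neighborhood weight
        neurons += lr * h[..., None] * (x - neurons)   # pull winner and neighbors toward sample
    return neurons

def classify(x_all, neurons):
    """Assign each multi-attribute sample to its nearest neuron (0..rows*cols-1)."""
    d = np.linalg.norm(neurons[None] - x_all[:, None, None, :], axis=-1)
    return d.reshape(len(x_all), -1).argmin(axis=1)

labels = classify(samples, train_som(samples))         # one neuron number per sample
```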
Step three in our process helps to address a task that many geophysicists face, which is how to know which attributes to use in your multi-attribute analysis, so this step is about choosing the most sensitive attributes based on knowledge of the problem that you’re trying to solve. In our case we’re looking to separate chalks from shales, so something like acoustic impedance might be of interest. We also wanted to detect thin beds, so sweetness or the thin bed indicator might be of interest. Principal component analysis is the tool that we offer in Paradise to help with attribute selection.
This is a PCA menu from our Paradise software, for the Niobrara example. In this case, we put the typical suite of poststack instantaneous attributes into PCA; I think there were 15 or 16 instantaneous attributes. The graph is shown for a single inline, 1683, which tied the one well that had a sonic log. The eigenvectors can be queried by inline range and eigenvalue, and the contributions of individual attributes to an eigenvector can be examined in this chart. I’ve blown this up a little bit to make it easier to look at, but this is the chart for eigenvector one. What we see here is that the attributes are listed sequentially by their prominence in the data. In this case the highest contribution to eigenvector one is sweetness, followed by envelope and relative acoustic impedance, and their contributions to the eigenvector are listed here as about 20, 19, and 15 percent, and then there’s a big drop to the next attribute, which is instantaneous frequency. We use this to choose attributes from individual eigenvectors, and there’s often a very clear cutoff as to which attributes have made a contribution to the eigenvector.
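As a rough sketch of how attribute contributions to an eigenvector can be ranked, the Python below standardizes a matrix of attribute traces, computes the eigenvectors of their covariance, and expresses each attribute’s loading as a percentage. The attribute names, the standardization, and the percentage normalization are assumptions for illustration; Paradise may define its contribution measure differently.

```python
import numpy as np

# attrs: one column per attribute (e.g. sweetness, envelope, relative acoustic
# impedance, ...), one row per seismic sample along the analysis inline.
# attr_names is a parallel list of attribute names (both are assumed inputs).
def eigenvector_contributions(attrs, attr_names, n_vectors=2):
    z = (attrs - attrs.mean(axis=0)) / attrs.std(axis=0)   # standardize each attribute
    eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
    order = np.argsort(eigvals)[::-1]                      # largest eigenvalue first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    for k in range(n_vectors):
        loadings = np.abs(eigvecs[:, k])
        pct = 100.0 * loadings / loadings.sum()            # % contribution per attribute
        ranked = sorted(zip(attr_names, pct), key=lambda t: -t[1])
        print(f"Eigenvector {k + 1} ({eigvals[k] / eigvals.sum():.0%} of total variance):")
        for name, p in ranked:
            print(f"  {name:28s} {p:5.1f}%")
```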
Moving down to eigenvector two, we can see in this case that we have thin bed indicator and instantaneous frequency up at around 20 percent, and then it drops off to 10 percent for attenuation. We work through the data this way, looking at multiple inlines and often multiple time intervals, and ultimately end up with a list of attributes that we’re going to try out in our SOM. Of course, throughout the process of seismic interpretation you do need to QC your attributes and make sure that what’s going into the SOM looks like good data. In this case, I’m giving you a look at the sweetness extracted near the B Chalk, just sliced down from the top of the Niobrara, and likewise the thin bed indicator near the B Chalk. You can see we do see some variation throughout the area, but mostly you just want to make sure that you don’t have some obvious issue with your data, like footprint or bad traces, et cetera. These cutouts are just no-permit zones, so those are null data.
At the end of the PCA analysis, we ended up with this list of instantaneous attributes going into the SOM. Over here on the right, you can see the eigenvectors that those were selected from. After selecting the attributes via PCA, we run them through the SOM process. You have access to multiple topologies, and very often the topology selection is an important factor; we’ll often run multiple topologies in any given study. In this particular study we ended up preferring the eight by eight topology, so the results that you’re going to see in the next few slides are from a 64-neuron classification of the Niobrara to Greenhorn interval. We computed SOMs at multiple topologies and decided after a qualitative evaluation that an eight by eight topology of 64 neurons had the best resolution without pushing the data too far.
Here we see an inline with just our zone of interest going through one of the wells that we have our petrophysical analysis on. Here are the eight by eight SOM results through the same well. Right away you can see that we’ve got this zone in the middle, the red sequence of neurons surrounded by yellow, and those actually do tie to our B bench. I think it’s interesting to note too how much more structural detail there is along the chalk versus the overlying shaley section. This image zooms into the same well and the SOM and highlights the correlation of lithology from our petrophysical analysis to the SOM neurons. The B Chalk bench is noted by the stacked pattern of yellow, red, yellow neurons. The red neurons actually correspond to the maximum carbonate content within the B bench itself, so if you look at the lithology track here on the right, you can see that what’s highlighted in green on the wellbore is tied to that high-volume-of-calcite sequence in the well; in total that’s about a five millisecond two-way time interval in the seismic.
This is a cross section showing three of the wells with the full petrophysical panel up above. We have about a 60 meter interval here, which is the Niobrara, top to base; on the SOM that corresponds to roughly 40 of our 50 to 60 milliseconds, as shown here. Again, this is the well we showed in the previous slide, the Rotharmel 11-33. Another interesting thing to note in the SOM is something that our petrophysicist called shale pay. We have a net pay flag here using a conventional analysis, and then the next track to the right is what they identified as shale pay, which pretty much corresponds to the high TOC zones in the well. We’ve marked that high TOC zone with this white marker, and you can see that we can track it throughout the SOM as well; it’s not as tightly defined as the B chalk bench, but it’s quite visible and corresponds to these lower neuron numbers, which tend to be in purples and pinks. Another interesting thing to note is how the SOM can actually help us in some of these structurally complex zones where you see we have some faulting. Down here, near the base, we might actually go back and reinterpret our Greenhorn, but we can still follow the B bench across this arbitrary line through our wells.
What I’ve been illustrating here are visual QCs to tie the SOM results to well data. The previous slides were done before we had the ability to use a well log extraction in the Paradise software, but we’ve had that capability now for about a year. This is a cross section through all seven of the wells that we have the petrophysical results on. What we’ve done here is create a template. This leftmost track has a V shale curve as well as a gamma ray; you can see those track quite nicely with each other. The middle track is just depth, so these are depth templates, not time templates. The third track is our eight by eight SOM, and over that I’ve overlaid the V calcite curve because, after working with the data, that was the one I found matched the SOM most closely. For example, here in our B bench, you can see we have a high volume of calcite. The fourth track is the TOC curve and, again, here’s that high TOC zone with the low SOM numbers that I pointed out in the previous slide. This track is the effective porosity and the final track is the deep resistivity curve.
Now zooming in a little bit further, we can again see our B bench marked by these two markers, top and base, and you can see us coming into the higher calcite zone as we go through the SOM. We actually have a SOM boundary between the blue and the green neurons as we start to increase the calcite content. You can also see down here, in the Fort Hays Limestone, we’re getting a repeat of some of the same neurons that are marking the B bench. This shows that we’re really tracking mostly on lithology here, but we can also see a little bit of a higher porosity zone in the B bench compared to the shales surrounding it, above and below. This is again more of a visual QC of the tie between the SOMs and the well logs, but it’s a good QC step.
To do this step, we’re going to use our well logs and identify cutoffs within them, both in the Niobrara example and in the upcoming example, the offshore sandstone. What we did was use petrophysical cutoffs to identify reservoir versus non-reservoir rocks, and using those results along with our SOM extraction, we build a contingency table to compare the SOM neurons to the petrophysical parameters, in this case our reservoir flags, and then use a chi-squared test to see if we can establish a relationship between the SOM neurons and the petrophysical flags in our wells. This slide is taken from a project that our summer intern Yin Kai Wang did last summer, where we asked him to help us build a program to do these contingency tables from our well log files. This is just an example of what the contingency tables that we build look like. Essentially, in the first column we see the neuron numbers that are encountered in the wells, and then in each row we have the number of samples that are flagged by our petrophysical cutoffs as either not reservoir or net reservoir, and then in Excel we can total these in various ways.
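As a small sketch of how such per-sample flags and a contingency table could be assembled, the Python below applies hypothetical Vshale and effective porosity cutoffs to well samples that already carry a SOM neuron number, then cross-tabulates neuron against flag. The column names and cutoff values are placeholders, not the study’s actual parameters.

```python
import pandas as pd

# samples: one row per well-log sample inside the SOM interval, with
# (hypothetical) columns 'neuron', 'vshale', and 'phie'.
def contingency_table(samples, vshale_max=0.4, phie_min=0.06):
    # Petrophysical cutoffs define the categorical Net Reservoir flag
    # (the cutoff values here are placeholders, not the study's values).
    is_net = (samples["vshale"] <= vshale_max) & (samples["phie"] >= phie_min)
    flag = is_net.map({True: "Net Reservoir", False: "Not Reservoir"})
    # Rows = neuron numbers encountered in the wells, columns = counts per flag.
    return pd.crosstab(samples["neuron"], flag)
```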
This list here is just some of the different statistical measures that could be applied to the data once we get it into this format. The chi-squared independence test is the one we have used the most. Essentially, it tests whether a relationship exists between the reservoir flags derived from the well logs and the neuron classes. We’ll look at some of these specifics from our case studies. Now here’s a contingency table and a chi-squared measure in all of its glory from our Geophysical Insights program that Yin Kai created for us last summer. This is from just the Niobrara A interval. You can see on the left the neuron numbers that were sampled by the wells; I should say, there were seven wells. These columns are the flags as to whether they are reservoir samples or non-reservoir samples, colored brown for non-reservoir and yellow for reservoir. In addition, we’ve created a histogram that gives you a visual representation of the amount of reservoir versus non-reservoir encountered in the wells for any one of these neurons. The last part of this Excel sheet shows you a chi-squared calculation based on the contingency table.
The null hypothesis, which is listed as H0, is the condition where the lithologic-contrast neurons are independent of the well-log-flagged net reservoir. If the actual chi-squared measurement is larger than the theoretical value, then the null hypothesis is rejected and the alternative hypothesis is accepted, which is that there is a relationship between the net reservoir and the SOM. Let’s look at this example to point out again what we have, looking at one of our neurons, N24. I’ve also highlighted it over here in the Paradise displays, where we can use our 2D color map and turn on just the neurons that we see in the histogram. Here’s our SOM in the 3D view; I’ve only highlighted the neurons that appear in the histogram for the A bench, so that gives us a nice visual representation of those neurons because, again, we’re only counting their occurrences in the seven wells, but we can also see how they appear throughout the 3D survey. And again, we get a count of the number of samples that the wells encountered for neuron 24 and which of those are reservoir and non-reservoir. You can also see it as a percentage, and then finally we see the chi-squared calculation, which shows that the null hypothesis is rejected.
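A minimal sketch of that test in Python, assuming the contingency table from the previous sketch: scipy’s chi2_contingency gives the computed statistic and degrees of freedom, and the theoretical (critical) value comes from the chi-squared distribution at a chosen significance level (the 0.05 level here is an assumption, not a value stated in the talk).

```python
from scipy.stats import chi2, chi2_contingency

def chi_squared_decision(table, alpha=0.05):
    """Test H0: neuron class and net-reservoir flag are independent."""
    stat, p_value, dof, expected = chi2_contingency(table)
    critical = chi2.ppf(1.0 - alpha, dof)     # theoretical (critical) value
    reject_h0 = stat > critical               # reject H0 -> a relationship exists
    return stat, critical, dof, reject_h0

# Degrees of freedom are (rows - 1) * (cols - 1); for example, 46 sampled
# neurons against 2 flag categories gives (46 - 1) * (2 - 1) = 45.
```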
Here is the contingency table and the histogram for the B bench, and it’s interesting here. You can see all of the neurons that were encountered by the wells for our B interval, and here it is in the 3D viewer. I think this one is interesting to look at because, if you look at the histogram visually, you can see that those neurons have clearly sampled reservoir for the most part. The theoretical value of the chi-squared calculation, though, is larger than the calculated chi-squared value. In this case we actually accept the null hypothesis, which says there is no relationship between the lithologic-contrast neurons and the reservoir. What’s going on in this case, and what you have to be careful of, is that we don’t have enough non-reservoir samples to actually get a valid chi-squared statistic.
Likewise, we have the same condition in the C, if we just isolate the C bench: we’re sampling mostly reservoir and we have very few non-reservoir samples. And again, you can see we have a different set of neurons for the C bench, and with this one we’re starting to pick up some interesting patterns; this looks to be following, a little bit, the area that we saw with the sweetness and the thin bed indicator. To avoid the issue of sampling too thin an interval and not actually getting your shale samples into the calculation, I also looked at a histogram of the entire Niobrara interval, and in this case we had 45 degrees of freedom in our calculation because we’ve actually sampled almost every one of the 64 neurons in our SOM. The chi-squared calculation shows that the alternative hypothesis is accepted and there is a relationship between net reservoir and our SOM neurons.
Again, here, we can also just look at the visual picture and quickly see that we have some definitely shale-dominated neurons, as well as some reservoir-dominated neurons like neuron 12. One of the challenges of using the contingency tables is getting a good interval; if you think back to the neurons that were in the Fort Hays Limestone, had I gone deeper than the Niobrara and taken it all the way down to the Greenhorn, we might not have been able to get the same result. You have to be somewhat mindful of your interpretation, because I did notice visually that some of the same neurons were repeated deeper in the section over the Fort Hays Limestone, which we knew for certain wasn’t reservoir because of its low porosity.
One final piece of work that our intern Yin Kai did for us on the Niobrara data was to compare the statistical measures across several topologies. A question that we get quite often is how do you know which topology is best, and very often it’s the interpreter’s judgment through those visual QCs, but we can also use these contingency tables and statistical measures to look at which topology gives us the best result. So again, Yin Kai took all of these SOM topologies at the seven wells and computed various statistical measures on them to see which gave us the highest value. You can see, going from four by four here out to 12 by 12, you actually get a nice increase up to eight by eight and then it drops off again, and that happens across four different statistical measures. We see the 12 by 12 starting to show an increase, but I think that’s just the granularity of the measurements at that point causing the increase.
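The talk doesn’t spell out which four measures were used, but one common chi-squared-based choice for comparing association strength across topologies is Cramér’s V; here is a brief sketch of how it could be computed per topology (the table variable names in the commented example are hypothetical).

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V: chi-squared-based association strength, 0 (none) to 1 (perfect)."""
    table = np.asarray(table, dtype=float)
    stat, _, _, _ = chi2_contingency(table)
    n = table.sum()
    k = min(table.shape) - 1
    return np.sqrt(stat / (n * k))

# Compare topologies (contingency tables per topology are assumed to exist):
# for name, tbl in {"4x4": t4x4, "8x8": t8x8, "12x12": t12x12}.items():
#     print(name, round(cramers_v(tbl), 3))
```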
I want to leave you with one last consideration on this Niobrara case before we move on to the second case study, and that’s to always keep in mind that when you’re using well logs to calibrate a seismic classification, we’ve got a very small sample set compared to the number of samples in our entire volume. This is just an arbitrary line showing, again, only the neurons that were sampled by our seven wells through the B bench. We can start to see, in areas like this one here, that some neurons are not represented, so it’s always good to include those visual QCs. In the next case study, we’ll introduce another way to look at the neuron distributions throughout your full volume, and that’s a vertical proportion curve.
All right, moving on to the second case study. This is taken directly from the First Break article by Leal and others, so I’m really just using their materials, and I hope I do it justice and that you find it interesting. We’re going through the same multi-attribute analysis using Paradise software: again, a five-step process that takes you through geologic problem definition, attribute generation, attribute selection utilizing your knowledge of the reservoir and the geological problem combined with principal component analysis, self-organizing maps, and finally calibration to your petrophysics. In this picture, we see an overview of the geologic problem. We’re looking at a slope fan complex as well as reservoir sands within the slope. Up here is the shelf part of the play. All of the wells are drilled down here off of the shelf, so either on the slope or down in the basin.
The reservoir comprises channel sand and slope fan facies, and the rock is a quartz sandstone of fine to medium grain with a calcareous clay matrix. In addition, there are intercalations of sandy and slightly calcareous shale. This is just a little snapshot from the PSTM seismic volume, with a sampling of four milliseconds and a 30 by 30 meter bin spacing. Here you can see one of the wells as it comes through the seismic and a lithology log showing some of the sands as well as, I believe, the porosity log. On this little cartoon, we see the dominant frequency is 14 Hertz, and that would correspond to a tuning thickness of about 58 meters. Step two in the workflow is to generate instantaneous attributes that you think will be potentially significant in solving the geologic problem here, which is to identify the reservoir sands. They initially generated eight instantaneous attributes, shown here, that they thought had the potential to work well in the SOM. We have the imaginary part (the Hilbert transform), instantaneous frequency, thin bed indicator, amplitude from the original volume, envelope, relative acoustic impedance, sweetness, and the real part of the amplitude trace.
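As a quick aside on the tuning-thickness arithmetic quoted above (tuning thickness is roughly a quarter wavelength, v / (4 * f_dominant)): the 14 Hz dominant frequency is from the talk, while the interval velocity below is an assumed value, back-calculated to be consistent with the quoted ~58 m.

```python
# Tuning thickness ~ quarter wavelength = v / (4 * f_dominant).
v = 3250.0        # m/s interval velocity (assumed, not stated in the talk)
f_dom = 14.0      # Hz dominant frequency (from the talk)
tuning = v / (4.0 * f_dom)
print(f"Tuning thickness ~ {tuning:.0f} m")   # ~58 m
```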
These attributes were brought into the Paradise PCA tool. You can see the results here; this is an eigenvalue chart for several of the key inlines. From eigenvector one, four attributes were chosen: sweetness, envelope, relative acoustic impedance, and Hilbert. From eigenvector two, we get the amplitudes, and from eigenvector three, none of the attributes passed the selection criteria, so ultimately they used six of the eight attributes and took just the top two eigenvectors into the SOM. Multiple SOM topologies were tested in step four, and the visual QC led them to select a five by five as giving the best result. In the next slide, we’ll see what that looks like tied to the well data.
Here’s an arbitrary line of the five by five SOM results through all of the wells in the study. You can see that neurons up in the upper left part of the color map are highlighted as corresponding to reservoir in our wells. These have been circled in between the wells. Some things to note: neuron 17 is the only one sampled by well three, which did not have good production. Neurons 21 and 22 are quite prevalent through the other two wells, and then you can see we transition back to neuron 17. This would suggest that neurons 21 and 22 actually are the best reservoir in the field. Here on the right, we see a mapped distribution of these four neurons; in this case, neuron 23, which is at the top of the sequence, has been turned off in this map view. So we can see that our neuron 22, which is considered one of the better neurons, has a limited distribution. Neuron 17 would appear to have a somewhat wider distribution, but potentially not as good reservoir rock.
This slide contains a short movie clip, which is going to show you all of the neurons in our five by five SOM after they have been sampled into a geocellular model. Notice when we get to the ones identified as reservoir, namely 16 and 17, followed later by 21 and 22, we can see the extent of the good sands, their distribution around the field, and how the wells have penetrated those sands. Also, the distribution conforms to the depositional setting that we described earlier, which is a toe-of-slope fan or lobe. The clip then finishes with a 3D view of the SOM neurons, as well as the original amplitude data shown with the SOM neurons. Sampling the SOM neurons into a geomodel also gives us access to something called a vertical proportion curve. This allows us to look at the proportion of each neuron represented in the volume, layer by layer, and you can filter that by neuron number.
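A minimal sketch of how such a vertical proportion curve could be computed from a geocellular model of neuron labels is shown below; the array layout (layer index first) and the use of -1 for null cells are assumptions for illustration, not the geomodeling package’s actual interface.

```python
import numpy as np

def vertical_proportion_curve(neuron_model, n_neurons, keep=None):
    """Per-layer proportion of each neuron class in a geocellular model.

    neuron_model : int array shaped (n_layers, ni, nj); -1 marks null cells.
    keep         : optional list of neuron numbers to filter to (e.g. reservoir ones).
    """
    n_layers = neuron_model.shape[0]
    vpc = np.zeros((n_layers, n_neurons))
    for k in range(n_layers):
        valid = neuron_model[k][neuron_model[k] >= 0]       # ignore null cells
        if valid.size:
            counts = np.bincount(valid, minlength=n_neurons)
            vpc[k] = counts / valid.size                    # proportions in this layer
    if keep is not None:
        mask = np.zeros(n_neurons, dtype=bool)
        mask[list(keep)] = True
        vpc = vpc * mask                                    # zero out neurons not kept
    return vpc                                              # shape (n_layers, n_neurons)
```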
You can see the top of our reservoir here, and you can see it over here on the vertical proportion curve. You can see how our neurons of interest are concentrated in the lower portion of the volume. If we filter that back to just the neurons of interest, you can then see their distribution in the vertical sequence that we’re studying. … showing templates from each of our wells, and alongside you can see the neuron numbers. These are the good reservoir sands, identified here as channel features. Along this column, we see a net pay flag, and in this column here we see our net reservoir flag, which was determined using a porosity cutoff.
Those were cast into a contingency table, and you can see, for example, well four and the depth of the samples, as well as the V clay, the porosity, and the water saturation. In this contingency table we have a column for both net reservoir and net pay, but the example we’re going to show is again going to focus just on the net reservoir. Here we see our SOM neuron number. Here’s a look at the contingency table, much like the one we showed for the Niobrara section. You can see, again, we’re looking at neuron 21, which in well four, for example, had three samples of net reservoir and three samples of non-net reservoir. Here we have it summed up for all of the wells in the field.
Here we have it shown as a histogram. We can see our five neurons of interest that we identified through our visual QC and how those plot out in the histogram. Once again, neurons 21 and 22 appear to have the best reservoir, and neuron 17 was only detected in well three. You can also see the geologic interpretation that has been attached to these wells and their neurons. You can analyze how often each variable is repeated in each of the categories; what we want to know then, with our chi-squared measurement, is whether we can establish, via statistical tools, the relationship between the SOM neurons and the net reservoir. Here’s another detailed look at the step-by-step calculation for the chi-squared. Here’s the table from this offshore case. We have our computed chi-squared value here, and down here we have the theoretical chi-squared value. If our computed value is greater than the theoretical value, then we reject the null hypothesis and accept the alternative hypothesis.
Okay. To wrap up. Seismic multi-attribute analysis is delivering on the promise of improving interpretations via the integration of attributes which represent and respond to subsurface conditions like stratigraphy, lithology, faulting, fracturing, fluids, pressure, et cetera. Machine learning is one of the technologies that we can apply to multi-attribute analysis. Statistical methods and SOMs enhance the interpretation process, can be easily used to augment your traditional interpretation, and use attribute space to simultaneously classify suites of attributes into sample-based, high-dimensional clusters. Visualization tools, including 2D color maps and well log extractions, enable us to get closer ties to our wells. Bi-variate statistics can also be applied to validate and quantify the relationships between rock properties, as measured by well logs in the cases presented today, and the multi-attribute SOM classifications.
In closing, I’d like to acknowledge Pemex and Petroleum Oil & Gas Services, Inc. for permission to use their case study. I’d like to acknowledge GPI and Fairfield for use of their data and permission to publish the Niobrara study, Yin Kai Wang for his statistical program that I used to analyze the extracted SOMs from the Niobrara, and of course my colleagues from Geophysical Insights who provided many of the materials that I presented today. Thank you, and I’ll open it for questions. Thanks again.