Identify Reservoirs by Combining Machine Learning, Petrophysics, and Bi-variate Statistics

Abstract: 

The tools of machine learning, petrophysics, well logs, and bi-variate statistics are applied in an integrated methodology to identify and discriminate reservoirs with hydrocarbon storage capacity. While the use of any one of these methods is familiar, their application together is unique. The webinar presents the process and results from two different geologic settings:

  • Conventional: Channel slope and fan facies environments offshore Mexico
  • Unconventional: Niobrara chalk and shale formation in the U.S.

The webinar is based on work published initially by Leal et al. (2019), and the methodology continues to yield excellent results in conventional and unconventional geologic settings alike.

Petrophysics is used to define sedimentary facies and their Effective Porosity using well logs. Petrophysical ranges are grouped in classes and labeled as categorical variables, specifically “Net Reservoir” and “Not Reservoir.” First, a lithology cutoff such as Vshale is applied, and a specific Effective Porosity range defines a “Net Reservoir” condition. Neurons from machine learning are compared to the Net Reservoir condition using bi-variate statistics, determining if there is a statistical relationship between neurons and sedimentary facies. The result is a histogram that reveals which neurons are most responsive to the Net Reservoir condition, enabling a prediction of similar sedimentary facies utilizing 3D seismic volumes across a region of interest.
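
As a minimal sketch of this labeling step, assuming hypothetical curve names and cutoff values rather than those used in the study, the two categorical classes can be assigned from a well-log table as follows:

```python
import pandas as pd

# Hypothetical well-log table with Vshale and effective porosity (PHIE) curves.
logs = pd.DataFrame({
    "DEPTH": [2100.0, 2100.5, 2101.0, 2101.5],
    "VSH":   [0.22,   0.55,   0.18,   0.30],
    "PHIE":  [0.09,   0.03,   0.12,   0.06],
})

# Placeholder cutoffs (not the published values): a sample is "Net Reservoir"
# when it is clean enough (low Vshale) and porous enough (high PHIE).
VSH_MAX, PHIE_MIN = 0.35, 0.05
logs["CLASS"] = "Not Reservoir"
logs.loc[(logs.VSH <= VSH_MAX) & (logs.PHIE >= PHIE_MIN), "CLASS"] = "Net Reservoir"
print(logs)
```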

Dr. Carrie Laudon | Senior Geoscientist | Geophysical Insights
Carrie Laudon
Senior Geophysical Consultant – Geophysical Research, LLC (d/b/a Geophysical Insights)

Carolan (Carrie) Laudon holds a PhD in geophysics from the University of Minnesota and a BS in geology from the University of Wisconsin-Eau Claire. She has been a Senior Geophysical Consultant with Geophysical Insights since 2017, working with Paradise®, the AI seismic workbench. Her prior roles include Vice President of Consulting Services and Microseismic Technology for Global Geophysical Services and 17 years with Schlumberger in technical management and sales. Dr. Laudon’s career has taken her to Alaska; Aberdeen, Scotland; Houston, Texas; Denver, Colorado; and Reading, England. She spent five years early in her career with ARCO Alaska as a seismic interpreter for the Central North Slope exploration team.

Transcript

 

Carrie Laudon, Presenter:

Good morning, afternoon, and evening. Thank you for being with us today. My name is Carrie Laudon. I’m a senior geophysical consultant with Geophysical Insights, and today I’ll be presenting “Identify reservoirs by combining machine learning, petrophysics and bi-variate statistics.” I’d also like to acknowledge a second author, since the workflow I’m presenting is based on work by Fabian Rada and his colleagues at Petroleum Oil & Gas Services, Inc. It was published by Leal and others in 2019 under the title “Net reservoir discrimination through multi-attribute analysis at single sample scale” in the journal First Break, and Fabian has also provided me with additional images and slides from presentations he has given previously on the workflow. Here’s the agenda we’ll go through today. I’ll start with an overview of the end-to-end workflow, then introduce the Niobrara unconventional reservoir from the Denver-Julesburg Basin. We’ll have a short overview of multi-attribute self-organizing maps for thin bed detection, followed by calibrating SOMs with wells in the case of the Niobrara results. We do visual analysis via cross sections, our 3D viewer, and 2D color maps, and then look at some techniques to apply statistical measures utilizing well logs. I’ll then do the same for a second case from a deep water conventional reservoir setting.

The workflow we’re going through today takes you through five steps. Obviously we start with step one and that’s the geological problem definition and the seismic volume QC. In step one, we’re considering the setting for our problem that we’re trying to solve, whether it’s faults or fractures, lithology, phases, porosity, DHS, et cetera. Things that we should take into account as well, your sampling interval, for example, and your tuning bed thickness, your bin size, whether you have any issues with your seismic data such as acquisition footprint or multiples. Step two and three are essentially generation of your attributes to use in your machine learning workflow. Step two involves just your own knowledge of attributes and what problem you’re trying to solve, but step three then takes it using principle component analysis to help you narrow down the list of attributes that are going into your self-organizing map. Step four is the SOM process itself and then step five is the calibration to the other information that you have about your reservoir and in this case, we’ll be focusing mostly on calibrating to well log data.

Okay. Let’s jump right into it with the Niobrara case. This slide contains an overview of the Niobrara study that we’re going to take you through. The location of the seismic survey is shown here in blue on this cartoon of the Wattenberg field and the Denver Basin. The Denver Basin is an asymmetric forelimb basin, it’s about 70,000 square miles, and is east of the Rocky Mountains, covering parts of Wyoming, Colorado, Kansas, and Nebraska. The Wattenberg field is the main production within the basin and that has approximately 47,000 wells that have been drilled with a production history going back to 1881. Primarily the initial production was in vertical completions in these older Jurassic sandstones. The field is a basin centered gas field and starting around 2009, the operators turn their attention to drilling horizontal wells in this unconventional reservoir called the Niobrara formation.

The Niobrara formation is Late Cretaceous in age, deposited in the Western Interior Seaway, and consists of organic-rich shales interbedded with fractured chalk benches, which we’ve informally termed A, B and C. When the operators turned to drilling horizontal wells about 10 plus years ago, they quickly started encountering faults. At the time they didn’t have seismic data, so in the intervening time several 3D surveys were acquired, and in 2018 GPI and Geokinetics (at the time) provided us with a 3D survey to see if we could improve the seismic resolution within the reservoir interval of the Niobrara. So you can see on the right, we’ve got an enlarged type-well section, and you can see we have a V-shale curve here on the left. Here’s a typical seismic trace from the survey at the well location. This was one millisecond data.

We have an effective porosity curve and a resistivity curve to the right. This is the first bench here, this small interval, the A bench, and that’s about 15 feet thick. The primary reservoir is the B, which you can see here near the small peak a little bit further below the top of the Niobrara. The C bench is a little thicker than the A and is deeper in the section, and the entire sequence that we analyzed is from the top Niobrara to the Greenhorn, which you can see here on the well log. Over to the right is a typical seismic section, an inline from the survey, and we evaluated about a 60 millisecond interval in two-way time, sampled at one millisecond, between the top Niobrara and the top of the Greenhorn. So within this interval, we have the three chalk benches as well as the Codell sandstone, which is an older unit, highly heterogeneous, but also productive.

GPI and Fairfield acquired a five-phase multi-client survey over the northern part of the Wattenberg field. Our study area was a hundred square miles from Niobrara phase five, in the western portion of the multi-client survey. We also got 28 wells from the public COGCC database. Those were available to help with our time-to-depth ties as well as giving us the log control that we wanted to calibrate our SOM results. The wells we’ve highlighted in red are the ones we have a full petrophysical evaluation on, so those are the ones that we’ll use in the bi-variate statistics. The next step in our workflow is to look at the structural setting of the study area and at what we need to take into consideration when designing our attributes and our self-organizing maps study. Here in our study area of the Niobrara we have a two-way time structure map, and west is toward the bottom of the screen. We’re essentially on a monoclinal structure coming up into the front range of the Rocky Mountains. To the left is an inset from Friedman and others on the Austin Chalk, but it shows the kind of fractures that you would predict for various structural features and, again, with the benches, the fracture porosity and permeability are an important component of production.

This is a geometric attribute extracted at the top of the Niobrara, the most positive curvature. In this slide, you can see that there are two predominant structural fabrics, northwest-southeast and northeast-southwest. It appears that these actually set up some fault compartments within the reservoir. The Niobrara formation itself is internally faulted and has a lot of bed-bounded faults, which seems to suggest perhaps more brittleness in the chalk units than in the overlying, uppermost shale section. Also, recall that below the B we have the C bench as well as the Codell sandstone, and then the Fort Hays Limestone, which is a high acoustic impedance unit but fairly tight, with low porosity, so not typically targeted for drilling these days. The base event that we mapped was this peak that represents the top of the Greenhorn. For the next few minutes, I’m going to give you just a brief introduction to SOM. SOM is an unsupervised machine learning technology that we at Geophysical Insights have used extensively for multi-attribute seismic analysis.

I recognize that there are various levels of experience in our audience today with using SOM along with seismic data, but since we’re focusing more on the results, I’m just going to give you a few slides on SOM. SOM is a method that uses attribute space to classify seismic samples. In this example, I’m plotting just two attributes; attributes one and two are X and Y, and these, you can see, have been classified into four clusters of multi-attribute samples. During SOM, we introduce new multi-attribute samples called neurons, and the SOM neurons will move and seek out natural clusters of characteristics in the seismic data. The neurons learn the characteristics of a data cluster through an iterative process of cooperative and competitive training, and when the learning is completed, each unique cluster is assigned to a neuron and the samples are classified.
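
To make the competitive and cooperative training steps concrete, here is a small, self-contained SOM sketch in NumPy. It illustrates the general algorithm only, not the Paradise implementation; the grid size, learning rate, and neighborhood parameters are arbitrary choices.

```python
import numpy as np

def train_som(samples, rows=8, cols=8, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Tiny SOM trainer: samples is an (n_samples, n_attributes) array."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(rows * cols, samples.shape[1]))
    # Grid coordinates of each neuron, used by the neighborhood function.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        x = samples[rng.integers(len(samples))]
        # Competitive step: find the best-matching (winning) neuron.
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        # Cooperative step: the winner and its neighbors move toward the sample,
        # with learning rate and neighborhood radius decaying over iterations.
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 1e-3
        dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-dist2 / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights

def classify(samples, weights):
    """Assign each multi-attribute sample to its winning neuron index."""
    d = ((samples[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```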

That was a hypothetical cross plot of two attributes, and this is a similar but synthetic example that shows a classification by SOM of data points that have just two attributes. This synthetic example was made by our CEO, Tom Smith, and what we have is a model. You can see our timeline here with reflection coefficients. We have a positive, low amplitude peak, a peak-trough doublet, and then a weak trough down at the bottom. We have a wavelet to convolve with our reflection coefficient series. The traces were duplicated a hundred times and some amount of noise was added to the data. Going over to the cross plot, when you just plot the data points in X and Y space, you see that we do have quite a few clusters. Moving across, this has been put through a self-organizing map and assigned 64 neurons.

On the right here, you see the SOM classification of the same data. We have assigned the colors randomly on this 2D color map so you can see the natural clusters that the SOM has detected in the data set. Now, coming back to our original model, where we’ve put the neuron numbers alongside the trace, you can see each of the neurons that has been used to identify the clusters, which correspond to the actual events in the seismic data. Along the zero axis you can see event nine, which is our trough event, and event 57, which is our positive peak; again, we have the Hilbert transform on the Y axis and the regular amplitude on the X axis, so here’s our negative amplitude for our trough and our positive amplitude for our peak. Likewise, you can see our two doublet events, the 8, 55 pair and the 20, 33 pair, on the cross plot. This is an example, again, using a synthetic seismogram to show how the SOM classification process works.
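
A rough sketch of how a two-attribute synthetic like this can be built: the reflection coefficients, wavelet frequency, and noise level below are placeholders rather than the values in Tom Smith's model, and the two attributes are the real amplitude and its Hilbert transform.

```python
import numpy as np
from scipy.signal import hilbert

# Sparse reflection-coefficient series: a weak peak, a peak-trough doublet, a weak trough.
rc = np.zeros(128)
rc[[20, 50, 58, 95]] = [0.15, 0.4, -0.4, -0.1]

# Simple zero-phase Ricker wavelet (placeholder frequency and sample rate).
t = np.arange(-32, 33) * 0.001
f = 30.0
w = (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

trace = np.convolve(rc, w, mode="same")

# Duplicate the trace many times and add noise, as in the webinar example.
rng = np.random.default_rng(1)
traces = trace + 0.02 * rng.normal(size=(100, trace.size))

# Two attributes per sample: real amplitude (x) and its Hilbert transform (y).
x = traces.ravel()
y = np.imag(hilbert(traces, axis=1)).ravel()
points = np.column_stack([x, y])   # ready to cross-plot or feed to a SOM
```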

Step three in our process helps address a task that many geophysicists face, which is knowing which attributes to use in a multi-attribute analysis, so this step is about choosing the most sensitive attributes based on knowledge of the problem that you’re trying to solve. In our case we’re trying to separate chalks from shales, so something like acoustic impedance might be of interest. We also wanted to detect thin beds, so sweetness or the thin bed indicator might be of interest. Principal component analysis is the tool that we offer in Paradise to help with the attribute selection.

This is a PCA menu from our Paradise software, in this case for the Niobrara example. We used the typical suite of poststack instantaneous attributes as input to PCA; I think there were 15 or 16 instantaneous attributes. The graph is shown for a single inline, 1683, which tied the one well that had a sonic log. The eigenvectors can be queried by inline range, and the eigenvalue and the contributions of individual attributes to an eigenvector can be examined in this chart. Now, I’ve blown this up a little to make it easier to read, but this is the chart for eigenvector one. What we see here is that the attributes are listed sequentially by their prominence in the data. In this case the highest contribution to eigenvector one is sweetness, followed by envelope and relative acoustic impedance, and their contributions to the eigenvector are listed here as around 20%, 19%, and 15%; then there’s a big drop to the next attribute, which is instantaneous frequency. We use this to choose attributes from individual eigenvectors, and there’s often a very clear cutoff as to which attributes have made a contribution to the eigenvector.
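
As a sketch of how attribute contributions to an eigenvector can be expressed, here is one common convention, normalized absolute eigenvector loadings; this is not necessarily the exact definition used in Paradise, and the attribute names and data below are placeholders.

```python
import numpy as np

# attrs: (n_samples, n_attributes) matrix of attribute values along an inline;
# names: the corresponding attribute names (placeholders, not the study's list).
names = ["sweetness", "envelope", "rel_acoustic_imp", "inst_freq", "thin_bed"]
rng = np.random.default_rng(0)
attrs = rng.normal(size=(5000, len(names)))

# Standardize the attributes, then diagonalize the correlation matrix.
z = (attrs - attrs.mean(0)) / attrs.std(0)
corr = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]          # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Percent contribution of each attribute to eigenvector one.
contrib = 100 * np.abs(eigvecs[:, 0]) / np.abs(eigvecs[:, 0]).sum()
for name, c in sorted(zip(names, contrib), key=lambda p: -p[1]):
    print(f"{name:18s} {c:5.1f}%")
```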

Moving down to eigenvector two, we can see that in this case we have the thin bed indicator and instantaneous frequency up at around 20%, and then it drops off to 10% for attenuation. We work through the data this way, looking at multiple inlines and often multiple time intervals, and ultimately end up with a list of attributes that we’re going to try out in our SOM. Of course, throughout the process of seismic interpretation you do need to QC your attributes and make sure that what’s going into the SOM looks like good data. In this case, I’m giving you a look at the sweetness extracted near the B Chalk, just sliced down from the top of the Niobrara, and likewise the thin bed indicator near the B Chalk. You can see we do see some variation throughout the area, but mostly you just want to make sure that you don’t have some obvious issue with your data, like footprint or bad traces, et cetera. These cutouts are just no-permit zones, so those are null data.

At the end of the PCA analysis, we ended up with this list of instantaneous attributes going into the SOM. Over here on the right, you can see the eigenvectors that those were selected from. After selecting the attributes via PCA, we run them through the SOM process. You have access to multiple topologies, and very often the topology selection is an important factor; we’ll often run multiple topologies in any given study. In this particular study we ended up preferring the eight by eight topology, so the results that you’re going to see in the next few slides are from a 64 neuron classification of that Niobrara to Greenhorn interval. We computed SOMs at multiple topologies and decided, after a qualitative evaluation, that an eight by eight topology of 64 neurons had the best resolution without pushing the data too far.

Here we see an inline with just our zone of interest, going through one of the wells that we have our petrophysical analysis on. Here are the eight by eight SOM results through the same well. Right away you can see that we’ve got this zone in the middle, the red sequence of neurons surrounded by yellow, and those actually do tie to our B bench. I think it’s interesting to note too how much more structural detail there is along the chalk versus the overlying shaley section. This image zooms into the same well and the SOM and highlights the correlation of lithology from our petrophysical analysis to the SOM neurons. The B Chalk bench is noted by the stacking pattern of yellow, red, yellow neurons. The red neurons actually correspond to the maximum carbonate content within the B bench itself, so if you look at the lithology track here on the right, you can see that what’s highlighted in green on the wellbore is tied to that high volume of calcite sequence in the well; in total that’s about a five millisecond two-way time interval in the seismic.

This is a cross section showing three of the wells with the full petrophysical panel up above. We have about a 60 meter interval here, which is the Niobrara, top to base, and on the SOM that corresponds to about 40 of our 50 to 60 milliseconds, as shown here. Again, this is the well we showed in the previous slide, the Rotharmel 11-33. Another interesting thing to note in the SOM is something that our petrophysicist called shale pay. We have a net pay flag here using a conventional analysis, and then this next track to the right is what they identified as shale pay, which pretty much corresponds to the high TOC zones in the well. Now, we’ve marked that high TOC zone with this white marker, and you can see that we can track this one throughout the SOM as well; it’s not as tightly defined as the B chalk bench, but it’s also quite visible and corresponds to these lower neuron numbers, which tend to be in purples and pinks. Another interesting thing to note is how the SOM can actually help us in some of these structurally complex zones where you see we have some faulting. Down here, near the base, we might actually go back and reinterpret our Greenhorn, but we can still follow the B bench across this arbitrary line through our wells.

What I’ve been illustrating here are visual QCs to tie the SOM results to well data. The previous slides were done before we had the ability to do a well log extraction in the Paradise software, but we’ve had that capability now for about a year. This is a cross section through all seven of the wells that we have the petrophysical results on. What we’ve done here is create a template. This leftmost track has a V shale curve as well as a gamma ray; you can see those track quite nicely with each other. The middle track is just depth, so these are depth templates, not time templates. The third track is our eight by eight SOM, and over that I’ve overlain the V calcite curve because, after working with the data, that was the one I found matched the SOM most closely. For example, here in our B bench, you can see we have a high volume of calcite. The fourth track is the TOC curve and, again, here’s that high TOC zone with the low SOM numbers that I pointed out in the previous slide. This track is the effective porosity and the final track is the deep resistivity curve.

Now, zooming in a little further, we can again see our B bench marked by these two markers, top and base, and you can see us coming into the higher calcite zone as we go through the SOM. We actually have a SOM boundary between the blue and the green neuron as the calcite content starts to increase. You can also see down here, in the Fort Hays Limestone, we’re getting a repeat of some of the same neurons that are marking the B bench. This shows that we’re really tracking mostly on lithology here, but we can also see a slightly higher porosity zone in the B bench compared to the shales above and below. This is again more of a visual QC of the tie between the SOMs and the well logs, but it’s a good QC step.

To do this step, we’re going to use our well logs and identify cutoffs within those well logs, both for the Niobrara example and for the upcoming example, the offshore sandstone. What we did was use petrophysical cutoffs to identify reservoir versus non-reservoir rocks, and using those results along with our SOM extraction, we’re going to build a contingency table to compare the SOM neurons to the petrophysical parameters, in this case our reservoir and non-reservoir flags, and then use a chi-squared test to see if we can demonstrate a relationship between the SOM neurons and the petrophysical flags in our wells. This slide is taken from a project that our summer intern Yin Kai Wang did last summer, where we asked him to help us build a program to do these contingency tables from our well log files. This is just an example of the contingency tables that we build. Essentially, in the first column we see the neuron numbers that are encountered in the wells, and then in each row we have the number of samples that are flagged by our petrophysical cutoffs as either non-reservoir or net reservoir, and then in Excel we can total these in various ways.
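
A minimal sketch of what such a contingency table amounts to, using pandas rather than the Excel-based program described here; the neuron numbers and flags are invented for illustration:

```python
import pandas as pd

# One row per well-log sample: the SOM neuron extracted along the wellbore
# and the petrophysical flag for that sample (placeholder values).
samples = pd.DataFrame({
    "neuron": [24, 24, 24, 31, 31, 40, 40, 40, 40],
    "flag":   ["Net Reservoir", "Net Reservoir", "Non-Reservoir",
               "Non-Reservoir", "Non-Reservoir",
               "Net Reservoir", "Net Reservoir", "Net Reservoir", "Non-Reservoir"],
})

# Rows = neurons encountered in the wells, columns = reservoir / non-reservoir counts.
table = pd.crosstab(samples["neuron"], samples["flag"])
print(table)
```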

This list here is just some of the different statistical measures that could be applied to the data once we get it into this format. The chi-squared independence test is the one we have used the most. Essentially, it tests whether a relationship exists between the reservoir flags derived from the well logs and the SOM neurons. We’ll look at some of these specifics from our case studies. Now, here’s a contingency table and a chi-squared measure in all their glory from the Geophysical Insights program that Yin Kai created for us last summer. This is from just the Niobrara A interval. On the left are the neuron numbers that were sampled by the wells; there were seven wells. These columns are the flags as to whether the samples are reservoir or non-reservoir, colored brown for non-reservoir and yellow for reservoir. In addition, we’ve created a histogram that also gives you a visual representation of the amount of reservoir versus non-reservoir encountered in the wells for any one of these neurons. The last part of this Excel sheet shows you a chi-squared calculation based on the contingency table.

The null hypothesis, listed as H0, is the condition where the neurons (lithologic contrasts) are independent of the well-log-flagged net reservoir. If the computed chi-squared statistic is larger than the theoretical value, then the null hypothesis is rejected and the alternative hypothesis is accepted, which is that there is a relationship between the net reservoir and the SOM. Let’s look at this example just to point out again what we have, looking at one of our neurons, N24. I’ve also highlighted it over here in the Paradise displays, where we can use our 2D color map and turn on only the neurons that we see in the histogram. Here’s our SOM in the 3D view; I’ve only highlighted the neurons that appear in the histogram for the A bench, which gives us a nice visual representation of those neurons because, again, we’re only counting their occurrence in the seven wells, but we can also see how they appear throughout the 3D survey. And again, we get a count of the number of samples that the wells encountered for neuron 24 and which of them are reservoir and non-reservoir. You can also see it as a percentage, and finally we see the chi-squared calculation, which shows that the null hypothesis is rejected.
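
For readers who want to reproduce the test, here is a small sketch using scipy; the observed counts are placeholders, not the Niobrara values. chi2_contingency returns the computed statistic and degrees of freedom, and chi2.ppf gives the theoretical (critical) value it is compared against.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Rows = neurons sampled by the wells, columns = [net reservoir, non-reservoir] counts.
observed = np.array([
    [12,  3],   # e.g. neuron 24 (placeholder counts)
    [ 2, 14],
    [ 9,  8],
])

stat, p_value, dof, expected = chi2_contingency(observed)
critical = chi2.ppf(0.95, dof)   # theoretical value at the 5% significance level

if stat > critical:
    print(f"chi2={stat:.2f} > {critical:.2f}: reject H0 -> neurons and net reservoir are related")
else:
    print(f"chi2={stat:.2f} <= {critical:.2f}: accept H0 -> no demonstrated relationship")
```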

Here is the contingency table and the histogram for the B bench, and it’s interesting here. You can see all of the neurons that were encountered by the wells for our B interval, and here it is in the 3D viewer. I think this one is interesting to look at because, if you look at the histogram visually, you can see that those neurons have clearly sampled reservoir for the most part. The theoretical value of the chi-squared test, though, is larger than the calculated chi-squared value. In this case we actually accept the null hypothesis, which says there’s no relationship between the lithologic contrasts in the neurons and the reservoir. What you have to be careful of in this case is that we don’t have enough non-reservoir samples to get a valid chi-squared statistic.

Likewise, we have the same condition in the C, if we just isolate the C bench. We’re sampling mostly reservoir and we have very few non-reservoir samples. And again, you can see we have a different set of neurons for the C bench, and with this one we’re starting to pick up some interesting patterns; this looks to be following, a little bit, the area that we saw with the sweetness and the thin bed indicator. To avoid the issue of sampling too thin an interval and not actually getting the shale samples into the calculation, I also looked at a histogram of the entire Niobrara interval, and in this case we had 45 degrees of freedom in the calculation because we’ve actually sampled almost every neuron of the 64 in our SOM. The chi-squared calculation shows that the alternative hypothesis is accepted and there is a relationship between net reservoir and our SOM neurons.

Again, here we can also just look at the visual picture and quickly see that we have some clearly shale-dominated neurons, as well as some reservoir-dominated neurons like neuron 12. That is one of the challenges of using the contingency tables: getting a good interval. Again, if you think back to the neurons that were in the Fort Hays Limestone, if I had gone deeper than the Niobrara and taken it all the way down to the Greenhorn, we might not have been able to get the same result. You have to be somewhat mindful of your interpretation, because I did notice visually that some of the same neurons were repeated deeper in the section over the Fort Hays Limestone, which we knew for certain was not reservoir because of its low porosity.

One final piece of work that our intern Yin Kai did for us on the Niobrara data was to compare the statistical measures across several topologies, because a question we get quite often is how you know which topology is best. Very often it’s the interpreter’s judgment through those visual QCs, but we can also use these contingency tables and statistical measures to look at which topology gives us the best result. So again, Yin Kai took all of these SOM topologies in the seven wells and computed various statistical measures on them to see which gave us the highest value. You can see, going from four by four here out to 12 by 12, you actually get a nice increase up to eight by eight and then it drops off again, and that happens across four different statistical measures. We see the 12 by 12 starts to show an increase, but I think that’s just the granularity of the measurements at that point causing the increase.
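
One association measure that allows comparison across contingency tables of different sizes is Cramér's V, which rescales the chi-squared statistic by the sample count and table dimensions. The sketch below is illustrative; whether this was one of the four measures used in the study is not stated in the talk.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(observed):
    """Cramer's V: strength of association between neuron and reservoir flag (0..1)."""
    observed = np.asarray(observed, dtype=float)
    stat = chi2_contingency(observed)[0]
    n = observed.sum()
    k = min(observed.shape) - 1
    return np.sqrt(stat / (n * k))

# tables_by_topology: one contingency table per SOM topology (hypothetical dict).
# for name, table in tables_by_topology.items():
#     print(name, round(cramers_v(table), 3))
```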

I want to leave you with one last consideration on this Niobrara case before we move on to the second case study, and that’s to always keep in mind that when you’re using well logs to calibrate a seismic classification, you’ve got a very small sample set compared to the number of samples in the entire volume. This is just an arbitrary line showing, again, only the neurons that were sampled by our seven wells through the B bench. We can start to see, like this area here, that some neurons are not represented, so it’s always good to include those visual QCs. In the next case study, we’ll introduce another way to look at the neuron distributions throughout your full volume, and that’s a vertical proportion curve.

All right, moving on to the second case study. This is taken directly from the First Break article by Leal and others, so I’m really just using their materials, and I hope I do it justice and that you find it interesting. We’re going through the same multi-attribute analysis using Paradise software, again a five step process that takes you through geologic problem definition, attribute generation, attribute selection utilizing your knowledge of the reservoir and the geological problem combined with principal component analysis, self-organizing maps, and finally calibration to your petrophysics. In this picture, we see an overview of the geologic problem. We’re looking at a slope fan complex as well as reservoir sands within the slope. Up here is the shelf fold part of the play. All of the wells are drilled down here off of the shelf, so either on the slope or down in the basin.

The reservoir is a channel sand and slope fan facies, and the rock is a quartz sandstone, fine to medium grained, with a calcareous clay matrix. In addition, there are intercalations of sandy and slightly calcareous shale. This is just a little snapshot from the PSTM seismic volume, with a sampling of four milliseconds and a 30 by 30 meter bin spacing. Here you can see one of the wells as it comes through the seismic and a lithology log showing some of the sands as well as, I believe, the porosity log. On this little cartoon, we see the dominant frequency is 14 Hertz, which would correspond to a tuning thickness of about 58 meters. Step two in the workflow is to generate instantaneous attributes that you think will be potentially significant in solving the geologic problem here, which is to identify the reservoir sands. They initially generated the eight instantaneous attributes shown here that they thought had the potential to work well in the SOM: the imaginary part (the Hilbert transform), instantaneous frequency, thin bed indicator, the amplitude from the original volume, envelope, relative acoustic impedance, sweetness, and the real part of the amplitude trace.
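
As a quick check on that number, the quarter-wavelength tuning thickness follows from the dominant frequency and an interval velocity. The velocity below is my assumption, not a value from the talk, chosen to show that roughly 58 m is consistent with 14 Hz:

```python
f_dom = 14.0        # dominant frequency in Hz, quoted on the slide
v_int = 3250.0      # assumed interval velocity in m/s (placeholder, not from the paper)
tuning_thickness = v_int / (4.0 * f_dom)   # quarter-wavelength tuning
print(f"tuning thickness ~ {tuning_thickness:.0f} m")   # ~58 m
```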

These attributes were brought into the Paradise PCA tool. You can see the results here; this is an eigenvalue chart for several of the key inlines. From eigenvector one, four attributes were chosen: sweetness, envelope, relative acoustic impedance, and Hilbert. From eigenvector two, we get the amplitudes, and from eigenvector three none of the attributes passed the selection criteria, so ultimately they used six of the eight attributes and took just the top two eigenvectors into the SOM. Multiple SOM topologies were tested in step four, and the visual QC led them to select a five by five as giving the best result. In the next slide, we’ll see what that looks like tied to the well data.

Here’s an arbitrary line of the five by five SOM results through all of the wells in the study. You can see that neurons in the upper left part of the color map are highlighted as corresponding to reservoir in our wells; these have been circled in between the wells. Some things to note: neuron 17 is the only one sampled by well three, which did not have good production. Neurons 21 and 22 are quite prevalent through the rest of these two wells, and then you can see we transition back to neuron 17. This would suggest that neurons 21 and 22 are actually the best reservoir in the field. Here on the right, we see a mapped distribution of these four neurons; in this case, neuron 23, which is at the top of the sequence, has also been turned off in this map view. So we can see that our neuron 22, which is considered one of the better neurons, has a limited distribution. Neuron 17 would appear to have a somewhat wider distribution, but potentially not as good reservoir rock.

This slide contains a short movie clip, which is going to show you all of the neurons in our five by five SOM after they have been sampled into a geocellular model. Notice that when we get to the ones identified as reservoir, namely 16 and 17, followed later by 21 and 22, we can see the extent of the good sands and their distribution around the field and how the wells have penetrated those sands. Also, the distribution conforms to the depositional setting that we described earlier, which is a toe-of-slope fan or lobe. The clip then finishes with a 3D view of the SOM neurons, as well as the original amplitude data displayed through the SOM neurons. Sampling the SOM neurons into a geomodel also gives us access to something called a vertical proportion curve. This allows us to look at the total number of neurons represented in the volume by individual layers, and you can filter that by neuron number.

You can see the top of our reservoir here, and you can see it over here on the vertical proportion curve. You can see how our neurons of interest are concentrated in the lower portion of the volume. If we filter back to just the neurons of interest, you can then see their distribution in the vertical sequence that we’re studying. … showing templates from each of our wells, and alongside you can see the neuron numbers. These are the good reservoir sands, identified here as channel features. Along this column we see a net pay flag, and in this column here we see our net reservoir flag, which was determined using a porosity cutoff.
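
A sketch of what a vertical proportion curve computes, assuming a hypothetical geocellular-model table with one row per cell: for each model layer, the fraction of cells assigned to each neuron.

```python
import numpy as np
import pandas as pd

# layer: stratigraphic layer index of each geocellular-model cell,
# neuron: SOM neuron sampled into that cell (placeholder arrays).
rng = np.random.default_rng(2)
cells = pd.DataFrame({
    "layer":  rng.integers(0, 40, size=20000),
    "neuron": rng.integers(1, 26, size=20000),   # 5x5 SOM -> neurons 1..25
})

# Vertical proportion curve: per-layer fraction of cells in each neuron.
counts = cells.groupby(["layer", "neuron"]).size()
vpc = (counts / counts.groupby(level="layer").transform("sum")).unstack(fill_value=0.0)

# Filter to the neurons of interest (e.g. 16, 17, 21, 22 in this example).
print(vpc[[16, 17, 21, 22]].head())
```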

Those were cast into a contingency table, and you can see, for example, well four and the depth of the samples, as well as the V clay, the porosity, and the water saturation. In this contingency table, we have a column for both net reservoir and net pay, but the example we’re going to show is again going to focus on the net reservoir. Here we see our SOM neuron number. Here’s a look at the contingency table, much like the one we showed for the Niobrara section. You can see, again, we’re looking at neuron 21, which, for example, in well four had three samples of net reservoir and three samples of non-net reservoir. Here we have it summed up for all of the wells in the field.

Here we have it shown as a histogram. We can see our five neurons of interest that we identified through our visual QC, and how they plot out in the histogram. Once again, neurons 21 and 22 appear to have the best reservoir, and neuron 17 was only detected in well three. You can also see the geologic interpretation that has been attached to these wells and their neurons. You can analyze how often each variable is repeated in each of the categories; what we want to know then, with our chi-squared measurement, is whether we can establish via statistical tools a relationship between the SOM neurons and the net reservoir. Here’s another detailed look at the step by step calculation for the chi-squared. Here’s the table for this offshore case. We have our computed chi-squared value here, and down here we have the theoretical chi-squared value. If our computed value is greater than the theoretical value, then we reject the null hypothesis and accept the alternative hypothesis.

Okay. To wrap up. Seismic multi-attribute analysis is delivering on the promise of improving interpretations via the integration of attributes which represent and respond to subsurface conditions such as stratigraphy, lithology, faulting, fracturing, fluids, pressure, et cetera. Machine learning is one of the technologies that we can apply to multi-attribute analysis. Statistical methods and SOMs enhance the interpretation process and can easily be used to augment your traditional interpretation; they use attribute space to simultaneously classify suites of attributes into sample-based, high-dimensionality clusters. Visualization tools, including 2D color maps and well log extractions, enable us to get closer ties to our wells. Bi-variate statistics can also be applied to validate and quantify the relationships between rock properties as measured by well logs and the multi-attribute SOM classifications, as in the cases presented today.

In closing, I’d like to acknowledge Pemex and Petroleum Oil & Gas Services, Inc. for permission to use their case study. I’d like to acknowledge GPI and Fairfield for use of their data and permission to publish the Niobrara study, Yin Kai Wang for the statistical program that I used to analyze the extracted SOMs from the Niobrara, and of course colleagues from Geophysical Insights who provided many of the materials that I presented today. Thank you, and I’ll open it up for questions. Thanks again.


    Deborah Sacrey | Owner - Auburn Energy

    How to Use Paradise to Interpret Carbonate Reservoirs

    The key to understanding Carbonate reservoirs in Paradise starts with good synthetic ties to the wavelet data. If one is not tied correctly, then it will be very easy to mis-interpret the neurons as reservoir when they are not. Secondly, the workflow should utilize Principal Component Analysis to better understand the zone of interest and the attributes to use in the SOM analysis. An important part of interpretation is understanding “Halo” and “Trailing” neurons as part of the stack around a reservoir or potential reservoir. Usually, one sees this phenomenon around deep, pressured gas reservoirs, but it can happen in shallow reservoirs as well. Two case studies are presented to emphasize the importance of looking for halo or trailing patterns around good reservoirs. One is a deep Edwards example in south central Texas, and the other a shallow oil reservoir in the Austin Chalk in the San Antonio area. Another way to help enhance carbonate reservoirs is through Spectral Decomposition. A case history is shown in the Smackover in Alabama to highlight and focus on an oolitic shoal reservoir which tunes at a specific frequency in the best wells. Not all carbonate porosity is at the top of the deposition. A case history will be discussed looking for porosity in the center portion of a reef in west Texas. And finally, one of the most difficult interpretation challenges in the carbonate spectrum is correctly mapping the interface between two carbonate layers. A simple technique is shown to help with that dilemma, by using few attributes and a low-topology count to understand regional depositional sequences. This example is from the Delaware Basin in southeastern New Mexico.

    Dr. Carrie Laudon | Senior Geophysical Consultant

    Applying Unsupervised Multi-Attribute Machine Learning for 3D Stratigraphic Facies Classification in a Carbonate Field, Offshore Brazil

    We present results of a multi-attribute, machine learning study over a pre-salt carbonate field in the Santos Basin, offshore Brazil. These results test the accuracy and potential of Self-organizing maps (SOM) for stratigraphic facies delineation. The study area has an existing detailed geological facies model containing predominantly reef facies in an elongated structure.

    Carrie Laudon | Senior Geophysical Consultant - Geophysical Insights

    Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in Machine Learning (ML) are transforming seismic analysis. Using these new tools, geoscientists can accomplish the following quickly and effectively:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Generate seismic volumes that capture structural and stratigraphic details

    Join us for a ‘Lunch & Learn’ session daily at 11:00, where Dr. Carolan (“Carrie”) Laudon will review the theory and results of applying a combination of machine learning tools to obtain the above results. A detailed agenda follows.

    Agenda

    Automated Fault Detection using 3D CNN Deep Learning

    • Deep learning fault detection
    • Synthetic models
    • Fault image enhancement
    • Semi-supervised learning for visualization
    • Application results
      • Normal faults
      • Fault/fracture trends in complex reservoirs

    Demo of Paradise Fault Detection Thoughtflow®

    Stratigraphic analysis using machine learning with fault detection

    • Attribute Selection using Principal Component Analysis (PCA)
    • Multi-Attribute Classification using Self-Organizing Maps (SOM)
    • Case studies – stratigraphic analysis and fault detection
      • Fault-karst and fracture examples, China
      • Niobrara – Stratigraphic analysis and thin beds, faults
    Thomas Chaparro | Senior Geophysicist - Geophysical Insights

    Paradise: A Day in The Life of the Geoscientist

    Over the last several years, the industry has invested heavily in Machine Learning (ML) for better predictions and automation. Dramatic results have been realized in exploration, field development, and production optimization. However, many of these applications have been single-use ‘point’ solutions. There is a growing body of evidence that seismic analysis is best served using a combination of ML tools for a specific objective, referred to as ML Orchestration. This talk demonstrates how the Paradise AI workbench applications are used in an integrated workflow to achieve superior results compared with traditional interpretation methods or single-purpose ML products. Using examples that combine ML-based Fault Detection and Stratigraphic Analysis, the talk will show how the interpreter can leverage ML orchestration to produce value for exploration and field development.

    Aldrin Rondon | Senior Geophysical Engineer - Dragon Oil

    Machine Learning Fault Detection: A Case Study

    An innovative Fault Pattern Detection Methodology has been carried out using a combination of Machine Learning Techniques to produce a seismic volume suitable for fault interpretation in a structurally and stratigraphically complex field. Through theory and results, the main objective was to demonstrate that a combination of ML tools can generate superior results in comparison with traditional attribute extraction and data manipulation through conventional algorithms. The ML technologies applied are a supervised, deep learning, fault classification followed by an unsupervised, multi-attribute classification combining fault probability and instantaneous attributes.

    Thomas Chaparro | Senior Geophysicist - Geophysical Insights

    Thomas Chaparro is a Senior Geophysicist who specializes in training and preparing AI-based workflows. Thomas also has experience as a processing geophysicist in 2D and 3D seismic data processing. He has participated in projects in the Gulf of Mexico, offshore Africa, the North Sea, Australia, Alaska, and Brazil.

    Thomas holds a bachelor’s degree in Geology from Northern Arizona University and a Master’s in Geophysics from the University of California, San Diego. His research focus was computational geophysics and seismic anisotropy.

    Aldrin Rondon | Senior Geophysical Engineer - Dragon Oil

    Bachelor’s Degree in Geophysical Engineering from Central University in Venezuela with a specialization in Reservoir Characterization from Simon Bolivar University.

    Over 20 years of exploration and development geophysical experience, with extensive 2D and 3D seismic interpretation including acquisition and processing.

    Aldrin spent his formative years working on exploration activity at PDVSA Venezuela, followed by a period working for a major international consulting company in the Gulf of Mexico (Landmark, Halliburton) as a G&G consultant. Latterly he worked at Helix in Scotland, UK on producing assets in the Central and South North Sea. From 2007 to 2021, he worked as a Senior Seismic Interpreter in Dubai, involved in various dedicated development projects in the Caspian Sea.

    Deborah Sacrey | Owner - Auburn Energy

    How to Use Paradise to Interpret Clastic Reservoirs

    The key to understanding Clastic reservoirs in Paradise starts with good synthetic ties to the wavelet data. If one is not tied correctly, then it will be easy to mis-interpret the neurons as reservoir when they are not. Secondly, the workflow should utilize Principal Component Analysis to better understand the zone of interest and the attributes to use in the SOM analysis. An important part of interpretation is understanding “Halo” and “Trailing” neurons as part of the stack around a reservoir or potential reservoir. Deep, high-pressured reservoirs often “leak” or have vertical percolation into the seal. This changes the rock properties enough in the seal to create a “halo” effect in SOM. Likewise, the frequency changes of the seismic can cause a subtle “dim-out”, not necessarily observable in the wavelet data, but enough to create a different pattern in the Earth in terms of these rock property changes. Case histories for Halo and trailing neural information include the deep, pressured Chris R reservoir in Southern Louisiana, Frio pay in Southeast Texas, and AVO properties in the Yegua of Wharton County. Additional case histories to highlight interpretation include thin-bed pays in Brazoria County, including updated information using CNN fault skeletonization. Continuing the process of interpretation, a case history in Wharton County shows the use of Low Probability to help explore Wilcox reservoirs. Lastly, a look at using Paradise to help find sweet spots in unconventional reservoirs like the Eagle Ford, a case study provided by Patricia Santigrossi.

    Mike Dunn | Sr. Vice President of Business Development

    Machine Learning in the Cloud

    Machine Learning in the Cloud will address the capabilities of the Paradise AI Workbench, featuring on-demand access enabled by the flexible hardware and storage facilities available on Amazon Web Services (AWS) and other commercial cloud services. Like the on-premise instance, Paradise On-Demand provides guided workflows to address many geologic challenges and investigations. The presentation will show how geoscientists can accomplish the following workflows quickly and effectively using guided ThoughtFlows® in Paradise:

    • Identify and calibrate detailed stratigraphy using seismic and well logs
    • Classify seismic facies
    • Detect faults automatically
    • Distinguish thin beds below conventional tuning
    • Interpret Direct Hydrocarbon Indicators
    • Estimate reserves/resources

    Attend the talk to see how ML applications are combined through a process called "Machine Learning Orchestration," proven to extract more from seismic and well data than traditional means.

    Sarah Stanley
    Senior Geoscientist

    Stratton Field Case Study – New Solutions to Old Problems

    The Oligocene Frio gas-producing Stratton Field in south Texas is a well-known field. Like many onshore fields, the productive sand channels are difficult to identify using conventional seismic data. However, the productive channels can be easily defined by employing several Paradise modules, including unsupervised machine learning, Principal Component Analysis, Self-Organizing Maps, 3D visualization, and the new Well Log Cross Section and Well Log Crossplot tools. The Well Log Cross Section tool generates extracted seismic data, including SOMs, along the Cross Section boreholes and logs. This extraction process enables the interpreter to accurately identify the SOM neurons associated with pay versus neurons associated with non-pay intervals. The reservoir neurons can be visualized throughout the field in the Paradise 3D Viewer, with Geobodies generated from the neurons. With this ThoughtFlow®, pay intervals previously difficult to see in conventional seismic can finally be visualized and tied back to the well data.

    Laura Cuttill
    Practice Lead, Advertas

    Young Professionals – Managing Your Personal Brand to Level-up Your Career

    No matter where you are in your career, your online “personal brand” has a huge impact on providing opportunity for prospective jobs and garnering the respect and visibility needed for advancement. While geoscientists tackle ambitious projects, publish in technical papers, and work hard to advance their careers, often, the value of these isn’t realized beyond their immediate professional circle. Learn how to…

    • Communicate who you are to high-level executives in exploration and development
    • Avoid common social media pitfalls
    • Optimize your online presence to best garner attention from recruiters
    • Stay relevant
    • Create content of interest
    • Establish yourself as a thought leader in your given area of specialization
    Laura Cuttill
    Practice Lead, Advertas

    As a 20-year marketing veteran in oil and gas and a serial entrepreneur, Laura has deep experience in bringing technology products to market and growing the sales pipeline. Armed with a marketing degree from Texas A&M, she began her career doing technical writing for Schlumberger and ExxonMobil in 2001. She co-founded Advertas in 2004 and began to leverage her upstream experience in marketing. In 2006, she co-founded the cyber-security software company 2FA Technology. After growing 2FA from a startup to 75% market share in target industries, and the subsequent sale of the company, she returned to Advertas to continue working toward the success of her clients, such as Geophysical Insights. Today, she guides strategy for large-scale marketing programs, manages project execution, cultivates relationships with industry media, and advocates for data-driven, account-based marketing practices.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Statistical Calibration of SOM results with Well Log Data (Case Study)

    The first stage of the proposed statistical method has proven to be very useful in testing whether or not there is a relationship between two qualitative variables (nominal or ordinal) or categorical quantitative variables, in the fields of health and social sciences. Its application in the oil industry allows geoscientists not only to test dependence between discrete variables, but to measure their degree of correlation (weak, moderate, or strong). This article shows its application to reveal the relationship between a SOM classification volume of a set of nine seismic attributes (whose vertical sampling interval is three meters) and different well data (sedimentary facies, Net Reservoir, and effective porosity grouped by ranges). The data were prepared to construct the contingency tables: the dependent (response) variable and independent (explanatory) variable were defined, the observed frequencies were obtained, the frequencies that would be expected if the variables were independent were calculated, and the difference between the two magnitudes was then studied using the contrast statistic called Chi-Square. The second stage involves the calibration of the SOM volume extracted along the wellbore path through statistical analysis of the petrophysical properties VCL, PHIE, and SW for each neuron, which allowed the identification of the neurons with the best petrophysical values in a carbonate reservoir.

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Heather Bedle received a B.S. (1999) in physics from Wake Forest University, and then worked as a systems engineer in the defense industry. She later received an M.S. (2005) and a Ph.D. (2008) from Northwestern University. After graduate school, she joined Chevron and worked as both a development geologist and geophysicist in the Gulf of Mexico before joining Chevron’s Energy Technology Company Unit in Houston, TX. In this position, she worked with the Rock Physics from Seismic team analyzing global assets in Chevron’s portfolio. Dr. Bedle is currently an assistant professor of applied geophysics at the University of Oklahoma’s School of Geosciences. She joined OU in 2018, after instructing at the University of Houston for two years. Dr. Bedle and her student research team at OU primarily work with seismic reflection data, using advanced techniques such as machine learning, attribute analysis, and rock physics to reveal additional structural, stratigraphic, and tectonic insights into the subsurface.

    Jie Qi
    Research Geophysicist

    An Integrated Fault Detection Workflow

    Seismic fault detection is one of the top critical procedures in seismic interpretation. Identifying faults is significant for characterizing and finding potential oil and gas reservoirs. Seismic amplitude data exhibiting good resolution and a high signal-to-noise ratio are key to identifying structural discontinuities using seismic attributes or machine learning techniques, which in turn serve as input for automatic fault extraction. Deep learning Convolutional Neural Networks (CNNs) perform well on fault detection without any human-computer interactive work. This study shows an integrated CNN-based fault detection workflow to construct fault images that are sufficiently smooth for subsequent automatic fault extraction. The objectives were to suppress noise or stratigraphic anomalies subparallel to reflector dip, and to sharpen faults and other discontinuities that cut reflectors, preconditioning the fault images for subsequent automatic extraction. A 2D continuous wavelet transform-based acquisition footprint suppression method was applied time slice by time slice to suppress wavenumber components, to avoid the CNN fault detection method interpreting the acquisition footprint as artifacts. To further suppress cross-cutting noise and sharpen fault edges, a principal component edge-preserving structure-oriented filter was also applied. The conditioned amplitude volume was then fed to a pre-trained CNN model to compute fault probability. Finally, a Laplacian of Gaussian filter was applied to the original CNN fault probability to enhance fault images. The resulting fault probability volume compares favorably with traditional interpreter-generated picks on vertical slices through the seismic amplitude volume.

    Dr. Jie Qi
    Research Geophysicist

    An integrated machine learning-based fault classification workflow

    We introduce an integrated machine learning-based fault classification workflow that creates fault component classification volumes and greatly reduces the burden on the human interpreter. We first compute a 3D fault probability volume from pre-conditioned seismic amplitude data using a 3D convolutional neural network (CNN). However, the resulting “fault probability” volume delineates other non-fault edges such as angular unconformities, the base of mass transport complexes, and noise such as acquisition footprint. We find that image processing-based fault discontinuity enhancement and skeletonization methods can enhance the fault discontinuities and suppress many of the non-fault discontinuities. Although each fault is characterized by its dip and azimuth, these two properties are discontinuous at azimuths of φ=±180° and, for near-vertical faults, at azimuths φ and φ+180°, requiring them to be parameterized as four continuous geodetic fault components. These four fault components, as well as the fault probability, can then be fed into a self-organizing map (SOM) to generate a fault component classification. We find that the final classification result can segment fault sets trending in interpreter-defined orientations and minimize the impact of stratigraphy and noise by selecting different neurons from the SOM 2D neuron color map.

    Ivan Marroquin
    Senior Research Geophysicist

    Connecting Multi-attribute Classification to Reservoir Properties

    Interpreters rely on seismic pattern changes to identify and map geologic features of importance. The ability to recognize such features depends on the seismic resolution and characteristics of seismic waveforms. With the advancement of machine learning algorithms, new methods for interpreting seismic data are being developed. Among these algorithms, self-organizing maps (SOM) provides a different approach to extract geological information from a set of seismic attributes.

    SOM approximates the input patterns by a finite set of processing neurons arranged in a regular 2D grid of map nodes. In this way, it classifies multi-attribute seismic samples into natural clusters following an unsupervised approach. Since machine learning is unbiased, the classifications can contain both geological information and coherent noise. Thus, seismic interpretation evolves into broader geologic perspectives. Additionally, SOM partitions multi-attribute samples without a priori information to guide the process (e.g., well data).

    The SOM output is a new seismic attribute volume in which geologic information is captured from the classification into winning neurons. Implicit and useful geological information is uncovered through an interactive visual inspection of the winning neuron classifications. By doing so, interpreters build a classification model that helps them gain insight into the complex relationships between attribute patterns and geological features.

    Despite all these benefits, there are interpretation challenges regarding whether there is an association between winning neurons and geological features. To address these issues, a bivariate statistical approach is proposed. To evaluate this analysis, three case scenarios are presented. In each case, the association between winning neurons and net reservoir (determined from petrophysical or well log properties) at well locations is analyzed. The results show that the statistical analysis not only aids in the identification of classification patterns but, more importantly, that reservoir/not reservoir classification by classical petrophysical analysis strongly correlates with selected SOM winning neurons. Confidence in interpreted classification features is gained at the borehole, and the interpretation is readily extended away from the well as geobodies.
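
    A hedged sketch of that bivariate test is shown below: a contingency table of winning neurons against a Net Reservoir flag at well locations, evaluated with a chi-square test. The data frame, neuron count, and flag names are illustrative placeholders, not results from the case studies.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical per-sample data along the wellbore: SOM winning neuron number and
# a Net Reservoir / Not Reservoir flag from petrophysical cutoffs.
df = pd.DataFrame({
    "neuron": np.random.randint(1, 65, size=500),
    "net_reservoir": np.random.choice(["Net Reservoir", "Not Reservoir"], size=500),
})

table = pd.crosstab(df["neuron"], df["net_reservoir"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p_value:.4f}")

# Neurons whose Net Reservoir count most exceeds the expected count are the
# candidates to carry away from the well as geobodies.
col = table.columns.get_loc("Net Reservoir")
excess = table["Net Reservoir"] - expected[:, col]
print(excess.sort_values(ascending=False).head())
```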

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Gas Hydrates, Reefs, Channel Architecture, and Fizz Gas: SOM Applications in a Variety of Geologic Settings

    Students at the University of Oklahoma have been exploring the uses of SOM techniques for the last year. This presentation will review learnings and results from a few of these research projects. Two projects have investigated the ability of SOMs to aid in the identification of pore-space materials, attempting to qualitatively identify gas hydrates and under-saturated gas reservoirs. A third study investigated individual attributes and SOMs for recognizing various carbonate facies in a pinnacle reef in the Michigan Basin. The fourth study took a deep dive into various machine learning algorithms, of which SOMs will be discussed, to understand how much machine learning can aid in the identification of deepwater channel architectures.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Fabian Rada joined Petroleum Oil and Gas Services, Inc (POGS) in January 2015 as Business Development Manager and Consultant to PEMEX. In Mexico, he has participated in several integrated oil and gas reservoir studies. He has consulted with PEMEX Activos and the G&G Technology group to apply the Paradise AI workbench and other tools. Since January 2015, he has been working with Geophysical Insights staff to provide and implement the multi-attribute analysis software Paradise in Petróleos Mexicanos (PEMEX), running a successful pilot test in Litoral Tabasco Tsimin Xux Asset. Mr. Rada began his career in the Venezuelan National Foundation for Seismological Research, where he participated in several geophysical projects, including seismic and gravity data for micro zonation surveys. He then joined China National Petroleum Corporation (CNPC) as QC Geophysicist until he became the Chief Geophysicist in the QA/QC Department. Then, he transitioned to a subsidiary of Petróleos de Venezuela (PDVSA), as a member of the QA/QC and Chief of Potential Field Methods section. Mr. Rada has also participated in processing land seismic data and marine seismic/gravity acquisition surveys. Mr. Rada earned a B.S. in Geophysics from the Central University of Venezuela.

    Hal Green
    Director, Marketing & Business Development - Geophysical Insights

    Introduction to Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in Machine Learning (ML) are transforming seismic analysis. Using a combination of machine learning (ML) and deep learning applications, geoscientists apply Paradise to extract greater insights from seismic and well data quickly and effectively for these and other objectives:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Overlay fault images on stratigraphic analysis

    This brief introduction will orient you to the technology, with examples of how machine learning is being applied to automate interpretation while generating new insights from the data.

    Sarah Stanley
    Senior Geoscientist and Lead Trainer

    Sarah Stanley joined Geophysical Insights in October, 2017 as a geoscience consultant, and became a full-time employee July 2018. Prior to Geophysical Insights, Sarah was employed by IHS Markit in various leadership positions from 2011 to her retirement in August 2017, including Director of US Operations Training and Certification, a member of the Operational Governance Team, and, prior to February 2013, Director of IHS Kingdom Training. Sarah joined SMT in May, 2002, and was the Director of Training for SMT until IHS Markit’s acquisition in 2011.

    Prior to joining SMT Sarah was employed by GeoQuest, a subdivision of Schlumberger, from 1998 to 2002. Sarah was also Director of the Geoscience Technology Training Center, North Harris College from 1995 to 1998, and served as a voluntary advisor on geoscience training centers to various geological societies. Sarah has over 37 years of industry experience and has worked as a petroleum geoscientist in various domestic and international plays since August of 1981. Her interpretation experience includes tight gas sands, coalbed methane, international exploration, and unconventional resources.

    Sarah holds a Bachelor’s of Science degree with majors in Biology and General Science and minor in Earth Science, a Master’s of Arts in Education and Master’s of Science in Geology from Ball State University, Muncie, Indiana. Sarah is both a Certified Petroleum Geologist, and a Registered Geologist with the State of Texas. Sarah holds teaching credentials in both Indiana and Texas.

    Sarah is a member of the Houston Geological Society and the American Association of Petroleum Geologists, where she currently serves in the AAPG House of Delegates. Sarah is a recipient of the AAPG Special Award, the AAPG House of Delegates Long Service Award, and the HGS President’s Award for her work in advancing training for petroleum geoscientists. She has served on the AAPG Continuing Education Committee and was Chairman of the AAPG Technical Training Center Committee. Sarah has also served as Secretary of the HGS, and served two years as Editor for the AAPG Division of Professional Affairs Correlator.

    Dr. Tom Smith
    President & CEO

    Dr. Tom Smith received a BS and MS degree in Geology from Iowa State University. His graduate research focused on a shallow refraction investigation of the Manson astrobleme. In 1971, he joined Chevron Geophysical as a processing geophysicist but resigned in 1980 to complete his doctoral studies in 3D modeling and migration at the Seismic Acoustics Lab at the University of Houston. Upon graduation with the Ph.D. in Geophysics in 1981, he started a geophysical consulting practice and taught seminars in seismic interpretation, seismic acquisition and seismic processing. Dr. Smith founded Seismic Micro-Technology in 1984 to develop PC software to support training workshops which subsequently led to development of the KINGDOM Software Suite for integrated geoscience interpretation with world-wide success.

    The Society of Exploration Geophysicists (SEG) recognized Dr. Smith’s work with the SEG Enterprise Award in 2000, and in 2010, the Geophysical Society of Houston (GSH) awarded him an Honorary Membership. Iowa State University (ISU) has recognized Dr. Smith throughout his career with the Distinguished Alumnus Lecturer Award in 1996, the Citation of Merit for National and International Recognition in 2002, and the highest alumni honor in 2015, the Distinguished Alumni Award. The University of Houston College of Natural Sciences and Mathematics recognized Dr. Smith with the 2017 Distinguished Alumni Award.

    In 2009, Dr. Smith founded Geophysical Insights, where he leads a team of geophysicists, geologists and computer scientists in developing advanced technologies for fundamental geophysical problems. The company launched the Paradise® multi-attribute analysis software in 2013, which uses Machine Learning and pattern recognition to extract greater information from seismic data.

    Dr. Smith has been a member of the SEG since 1967 and is a professional member of SEG, GSH, HGS, EAGE, SIPES, AAPG, Sigma XI, SSA and AGU. Dr. Smith served as Chairman of the SEG Foundation from 2010 to 2013. On January 25, 2016, he was recognized by the Houston Geological Society (HGS) as a geophysicist who has made significant contributions to the field of geology. He currently serves on the SEG President-Elect’s Strategy and Planning Committee and the ISU Foundation Campaign Committee for Forever True, For Iowa State.

    Carrie Laudon
    Senior Geophysical Consultant - Geophysical Insights

    Applying Machine Learning Technologies in the Niobrara Formation, DJ Basin, to Quickly Produce an Integrated Structural and Stratigraphic Seismic Classification Volume Calibrated to Wells

    This study will demonstrate an automated machine learning approach for fault detection in a 3D seismic volume. The result combines Deep Learning Convolutional Neural Networks (CNN) with a conventional data pre-processing step and an image processing-based post-processing approach to produce high-quality fault attribute volumes of fault probability, fault dip magnitude, and fault dip azimuth. These volumes are then combined with instantaneous attributes in an unsupervised machine learning classification, allowing the isolation of both structural and stratigraphic features into a single 3D volume. The workflow is illustrated on a 3D seismic volume from the Denver Julesburg Basin, and a statistical analysis is used to calibrate the results to well data.
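
    The sketch below shows, in assumed and simplified form, the combination step: flattening fault attribute volumes and instantaneous attribute volumes into one multi-attribute sample matrix suitable for unsupervised classification. Volume names and shapes are placeholders, not the actual project data.

```python
import numpy as np

# Placeholder attribute volumes on a common (inline, xline, time) grid; in a
# real project these would be the CNN fault attributes and instantaneous
# attributes named in the abstract above.
shape = (64, 64, 128)
vols = {
    "fault_probability": np.random.rand(*shape),
    "fault_dip_magnitude": np.random.rand(*shape),
    "fault_dip_azimuth": np.random.rand(*shape),
    "envelope": np.random.rand(*shape),
    "inst_frequency": np.random.rand(*shape),
}

# Flatten each volume into one column of a sample-by-attribute matrix and
# standardize so no single attribute dominates the classification.
samples = np.column_stack([v.ravel() for v in vols.values()])
samples = (samples - samples.mean(axis=0)) / samples.std(axis=0)

# 'samples' can now be passed to an unsupervised classifier such as the SOM
# sketch shown earlier; reshaping the winning-neuron labels back to 'shape'
# yields a single structural-plus-stratigraphic classification volume.
```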

    Ivan Marroquin
    Senior Research Geophysicist

    Iván Dimitri Marroquín is a 20-year veteran of data science research, consistently publishing in peer-reviewed journals and speaking at international conference meetings. Dr. Marroquín received a Ph.D. in geophysics from McGill University, where he conducted and participated in 3D seismic research projects. These projects focused on the development of interpretation techniques based on seismic attributes and seismic trace shape information to identify significant geological features or reservoir physical properties. Examples of his research work are attribute-based modeling to predict coalbed thickness and permeability zones, combining spectral analysis with coherency imagery technique to enhance interpretation of subtle geologic features, and implementing a visual-based data mining technique on clustering to match seismic trace shape variability to changes in reservoir properties.

    Dr. Marroquín has also conducted ground-breaking research on seismic facies classification and volume visualization. This led to his development of a visual-based framework that determines the optimal number of seismic facies to best reveal meaningful geologic trends in the seismic data. He proposed seismic facies classification as an alternative to data integration analysis to capture geologic information in the form of seismic facies groups. He has investigated the usefulness of mobile devices to locate, isolate, and understand the spatial relationships of important geologic features in a context-rich 3D environment. In this work, he demonstrated that mobile devices are capable of performing seismic volume visualization, facilitating the interpretation of imaged geologic features, and that they will eventually allow the visual examination of seismic data anywhere and at any time.

    In 2016, Dr. Marroquín joined Geophysical Insights as a senior researcher, where his efforts have been focused on developing machine learning solutions for the oil and gas industry. For his first project, he developed a novel procedure for lithofacies classification that combines a neural network with automated machine learning methods. In parallel, he implemented a machine learning pipeline to derive cluster centers from a trained neural network. The next step in the project is to correlate lithofacies classification to the outcome of seismic facies analysis. Other research interests include the application of diverse machine learning technologies for analyzing and discerning trends and patterns in data related to the oil and gas industry.

    Dr. Jie Qi
    Research Geophysicist

    Dr. Jie Qi is a Research Geophysicist at Geophysical Insights, where he works closely with product development and geoscience consultants. His research interests include machine learning-based fault detection, seismic interpretation, pattern recognition, image processing, seismic attribute development and interpretation, and seismic facies analysis. Dr. Qi received a BS (2011) in Geoscience from the China University of Petroleum in Beijing and an MS (2013) in Geophysics from the University of Houston. He earned a Ph.D. (2017) in Geophysics from the University of Oklahoma, Norman. His experience includes work as a Research Assistant at the University of Houston (2011-2013) and the University of Oklahoma (2013-2017). In 2014, he was a summer intern with Petroleum Geo-Services (PGS), Inc., where he worked on semi-supervised seismic facies analysis. From 2017 to 2020, he served as a postdoctoral Research Associate in the Attribute-Assisted Seismic Processing and Interpretation (AASPI) consortium at the University of Oklahoma.

    Rocky R. Roden
    Senior Consulting Geophysicist

    The Relationship of Self-Organization, Geology, and Machine Learning

    Self-organization is the nonlinear formation of spatial and temporal structures, patterns or functions in complex systems (Aschwanden et al., 2018). Simple examples of self-organization include flocks of birds, schools of fish, crystal development, formation of snowflakes, and fractals. What these examples have in common is the appearance of structure or patterns without centralized control. Self-organizing systems are typically governed by power laws, such as the Gutenberg-Richter law of earthquake frequency and magnitude. In addition, the time frames of such systems display a characteristic self-similar (fractal) response, where earthquakes or avalanches, for example, occur over all possible time scales (Baas, 2002).
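
    For reference, the Gutenberg-Richter power law mentioned above takes the form log10 N(≥M) = a − bM; the short sketch below evaluates it numerically. The a and b values are arbitrary placeholders, not fitted to any catalog.

```python
import numpy as np

a, b = 5.0, 1.0                       # assumed regional constants
magnitudes = np.arange(2.0, 7.5, 0.5)

# Expected number of events with magnitude >= M under the power law.
expected_counts = 10 ** (a - b * magnitudes)

for m, n in zip(magnitudes, expected_counts):
    print(f"M >= {m:.1f}: ~{n:,.0f} events")
```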

    Nonlinear dynamic systems and ordered structures in the earth are well known and have been studied for centuries; they appear as sedimentary features, layered and folded structures, stratigraphic formations, diapirs, eolian dune systems, channelized fluvial and deltaic systems, and many more (Budd et al., 2014; Dietrich and Jacob, 2018). Each of these geologic processes and features exhibits patterns through the action of undirected local dynamics, generally termed “self-organization” (Paola, 2014).

    Artificial intelligence, and specifically neural networks, exhibit and reveal self-organization characteristics. The interest in applying neural networks stems from the fact that they are universal approximators for various kinds of nonlinear dynamical systems of arbitrary complexity (Pessa, 2008). A special class of artificial neural network is aptly named the self-organizing map (SOM) (Kohonen, 1982). It has been found that SOM can identify significant organizational structure, in the form of clusters, from seismic attributes that relate to geologic features (Strecker and Uden, 2002; Coleou et al., 2003; de Matos, 2006; Roy et al., 2013; Roden et al., 2015; Zhao et al., 2016; Roden et al., 2017; Zhao et al., 2017; Roden and Chen, 2017; Sacrey and Roden, 2018; Leal et al., 2019; Hussein et al., 2020; Hardage et al., 2020; Manauchehri et al., 2020). As a consequence, SOM is an excellent machine learning neural network approach that utilizes seismic attributes to help identify self-organization features and define natural geologic patterns not easily seen, or not seen at all, in the data.

    Rocky R. Roden
    Senior Consulting Geophysicist

    Rocky R. Roden started his own consulting company, Rocky Ridge Resources Inc. in 2003 and works with several oil companies on technical and prospect evaluation issues. He is also a principal in the Rose and Associates DHI Risk Analysis Consortium and was Chief Consulting Geophysicist with Seismic Micro-technology. Rocky is a proven oil finder with 37 years in the industry, gaining extensive knowledge of modern geoscience technical approaches.

    Rocky holds a BS in Oceanographic Technology-Geology from Lamar University and an MS in Geological and Geophysical Oceanography from Texas A&M University. As Chief Geophysicist and Director of Applied Technology for Repsol-YPF, his role comprised advising corporate officers, geoscientists, and managers on interpretation, strategy, and technical analysis for exploration and development in offices in the U.S., Argentina, Spain, Egypt, Bolivia, Ecuador, Peru, Brazil, Venezuela, Malaysia, and Indonesia. He has been involved in the technical and economic evaluation of Gulf of Mexico lease sales, farmouts worldwide, and bid rounds in South America, Europe, and the Far East. Previous work experience includes exploration and development at Maxus Energy, Pogo Producing, Decca Survey, and Texaco. Rocky is a member of SEG, AAPG, HGS, GSH, EAGE, and SIPES; he is also a past Chairman of The Leading Edge Editorial Board.

    Bob A. Hardage

    Bob A. Hardage received a PhD in physics from Oklahoma State University. His thesis work focused on high-velocity micro-meteoroid impact on space vehicles, which required trips to Goddard Space Flight Center to do finite-difference modeling on dedicated computers. Upon completing his university studies, he worked at Phillips Petroleum Company for 23 years and was Exploration Manager for Asia and Latin America when he left Phillips. He moved to Western Atlas and worked 3 years as Vice President of Geophysical Development and Marketing. He then established a multicomponent seismic research laboratory at the Bureau of Economic Geology and served The University of Texas at Austin as a Senior Research Scientist for 28 years. He has published books on VSP, cross-well profiling, seismic stratigraphy, and multicomponent seismic technology. He was the first person to serve 6 years on the Board of Directors of the Society of Exploration Geophysicists (SEG). His Board service was as SEG Editor (2 years), followed by 1-year terms as First VP, President Elect, President, and Past President. SEG has awarded him a Special Commendation, Life Membership, and Honorary Membership. He wrote the AAPG Explorer column on geophysics for 6 years. AAPG honored him with a Distinguished Service award for promoting geophysics among the geological community.

    Bob A. Hardage

    Investigating the Internal Fabric of VSP data with Attribute Analysis and Unsupervised Machine Learning

    Examination of vertical seismic profile (VSP) data with unsupervised machine learning technology is a rigorous way to compare the fabric of down-going, illuminating, P and S wavefields with the fabric of up-going reflections and interbed multiples created by these wavefields. This concept is introduced in this paper by applying unsupervised learning to VSP data to better understand the physics of P and S reflection seismology. The zero-offset VSP data used in this investigation were acquired in a hard-rock, fast-velocity environment that caused the shallowest 2 or 3 geophones to be inside the near-field radiation zone of a vertical-vibrator baseplate. This study shows how to use instantaneous attributes to backtrack down-going direct-P and direct-S illuminating wavelets to the vibrator baseplate inside the near-field zone. This backtracking confirms that the points-of-origin of direct-P and direct-S are identical. The investigation then applies principal component analysis (PCA) to VSP data and shows that direct-S and direct-P wavefields that are created simultaneously at a vertical-vibrator baseplate have the same dominant principal components. A self-organizing map (SOM) approach is then taken to illustrate how unsupervised machine learning describes the fabric of down-going and up-going events embedded in vertical-geophone VSP data. These SOM results show that a small number of specific neurons build the down-going direct-P illuminating wavefield, and another small group of neurons builds up-going P primary reflections and early-arriving down-going P multiples. The internal attribute fabric of these key down-going and up-going neurons is then compared to expose their similarities and differences. This initial study indicates that unsupervised machine learning, when applied to VSP data, is a powerful tool for understanding the physics of seismic reflectivity at a prospect. This research strategy of analyzing VSP data with unsupervised machine learning will now expand to horizontal-geophone VSP data.
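
    As a conceptual sketch of the PCA comparison described above (not the author's code), the example below fits PCA to attribute samples windowed around the direct-P and direct-S events and compares their dominant components. The synthetic inputs and attribute count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder multi-attribute samples (rows) for each wavefield window; the
# columns might be instantaneous amplitude, phase, and frequency. Real inputs
# would come from the VSP attribute volumes.
rng = np.random.default_rng(0)
direct_p = rng.normal(size=(500, 3))
direct_s = rng.normal(size=(500, 3))

pca_p = PCA(n_components=3).fit(direct_p)
pca_s = PCA(n_components=3).fit(direct_s)

# Similar explained-variance ratios and aligned first components would support
# the observation that both wavefields share the same dominant structure.
print("P explained variance:", pca_p.explained_variance_ratio_)
print("S explained variance:", pca_s.explained_variance_ratio_)
print("First-component alignment (|cosine|):",
      abs(np.dot(pca_p.components_[0], pca_s.components_[0])))
```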

    Tom Smith
    President and CEO, Geophysical Insights

    Machine Learning for Incomplete Geoscientists

    This presentation covers big-picture machine learning buzz words with humor and unassailable frankness. The goal of the material is for every geoscientist to gain confidence in these important concepts and how they add to our well-established practices, particularly seismic interpretation. Presentation topics include a machine learning historical perspective, what makes it different, a fish factory, Shazam, comparison of supervised and unsupervised machine learning methods with examples, tuning thickness, deep learning, hard/soft attribute spaces, multi-attribute samples, and several interpretation examples. After the presentation, you may not know how to run machine learning algorithms, but you should be able to appreciate their value and avoid some of their limitations.

    Deborah Sacrey
    Owner, Auburn Energy

    Deborah is a geologist/geophysicist with 44 years of oil and gas exploration experience in Texas, Louisiana Gulf Coast and Mid-Continent areas of the US. She received her degree in Geology from the University of Oklahoma in 1976 and immediately started working for Gulf Oil in their Oklahoma City offices.

    She started her own company, Auburn Energy, in 1990 and built her first geophysical workstation using Kingdom software in 1996. She worked with SMT/IHS for 18 years, helping to develop and test the Kingdom software. She specializes in 2D and 3D interpretation for clients in the US and internationally. For the past nine years she has been part of a team working to study and bring the power of multi-attribute neural analysis of seismic data to the geoscience public, guided by Dr. Tom Smith, founder of SMT. She has become an expert in the use of Paradise software and has seven discoveries for clients using multi-attribute neural analysis.

    Deborah has been very active in the geological community. She is past national President of SIPES (Society of Independent Professional Earth Scientists), past President of the Division of Professional Affairs of AAPG (American Association of Petroleum Geologists), Past Treasurer of AAPG and Past President of the Houston Geological Society. She is also Past President of the Gulf Coast Association of Geological Societies and just ended a term as one of the GCAGS representatives on the AAPG Advisory Council. Deborah is also a DPA Certified Petroleum Geologist #4014 and DPA Certified Petroleum Geophysicist #2. She belongs to AAPG, SIPES, Houston Geological Society, South Texas Geological Society and the Oklahoma City Geological Society (OCGS).

    Mike Dunn
    Senior Vice President Business Development

    Michael A. Dunn is an exploration executive with extensive global experience including the Gulf of Mexico, Central America, Australia, China, and North Africa. Mr. Dunn has a proven track record of successfully executing exploration strategies built on a foundation of new and innovative technologies. Currently, Michael serves as Senior Vice President of Business Development for Geophysical Insights.

    He joined Shell in 1979 as an exploration geophysicist and party chief and held positions of increasing responsibility, including Manager of Interpretation Research. In 1997, he participated in the launch of Geokinetics, which completed an IPO on the AMEX in 2007. His extensive experience with oil companies (Shell and Woodside) and the service sector (Geokinetics and Halliburton) has provided him with a unique perspective on technology and applications in oil and gas. Michael received a B.S. in Geology from Rutgers University and an M.S. in Geophysics from the University of Chicago.

    Hal Green
    Director, Marketing & Business Development - Geophysical Insights

    Hal H. Green is a marketing executive and entrepreneur in the energy industry with more than 25 years of experience in starting and managing technology companies. He holds a B.S. in Electrical Engineering from Texas A&M University and an MBA from the University of Houston. He has invested his career at the intersection of marketing and technology, with a focus on business strategy, marketing, and effective selling practices. Mr. Green has a diverse portfolio of experience in marketing technology to the hydrocarbon supply chain – from upstream exploration through downstream refining & petrochemical. Throughout his career, Mr. Green has been a proven thought-leader and entrepreneur, while supporting several tech start-ups.

    He started his career as a process engineer in the semiconductor manufacturing industry in Dallas, Texas and later launched an engineering consulting and systems integration business. Following the sale of that business in the late 80’s, he joined Setpoint in Houston, Texas where he eventually led that company’s Manufacturing Systems business. Aspen Technology acquired Setpoint in January 1996 and Mr. Green continued as Director of Business Development for the Information Management and Polymer Business Units.

    In 2004, Mr. Green founded Advertas, a full-service marketing and public relations firm serving clients in energy and technology. In 2010, Geophysical Insights retained Advertas as their marketing firm. Dr. Tom Smith, President/CEO of Geophysical Insights, soon appointed Mr. Green as Director of Marketing and Business Development for Geophysical Insights, in which capacity he still serves today.

    Hana Kabazi
    Product Manager

    Hana Kabazi joined Geophysical Insights in October of 201 and is now one of our Product Managers for Paradise. Mrs. Kabazi has over 7 years of oil and gas experience, including 5 years at Halliburton – Landmark. During her time at Landmark she held positions as a consultant to many E&P companies, technical advisor to the QA organization, and product manager of Subsurface Mapping in DecisionSpace. Mrs. Kabazi has a B.S. in Geology from the University of Texas at Austin and an M.S. in Geology from the University of Houston.

    Dr. Carrie Laudon
    Senior Geophysical Consultant - Geophysical Insights

    Carolan (Carrie) Laudon holds a PhD in geophysics from the University of Minnesota and a BS in geology from the University of Wisconsin Eau Claire. She has been Senior Geophysical Consultant with Geophysical Insights since 2017, working with Paradise®, their machine learning platform. Prior roles include Vice President of Consulting Services and Microseismic Technology for Global Geophysical Services and 17 years with Schlumberger in technical, management, and sales roles, starting in Alaska and including Aberdeen, Scotland, Houston, TX, Denver, CO, and Reading, England. She spent five years early in her career with ARCO Alaska as a seismic interpreter for the Central North Slope exploration team.