Calibrating SOM Results to Wells – Improving Stratigraphic Resolution in the Niobrara


Over the last few years, driven by the increase in low-cost computing power, individuals and companies have stepped up investigations into the use of machine learning in many areas of E&P. For the geosciences, the emphasis has been on reservoir characterization, seismic data processing and, most recently, interpretation.
By using statistical tools such as Attribute Selection, which uses Principal Component Analysis (PCA), and Multi-Attribute Classification using Self-Organizing Maps (SOM), a multi-attribute 3D seismic volume can be “classified.” PCA reduces a large set of seismic attributes to those that are the most meaningful. The output of the PCA serves as the input to the SOM, a form of unsupervised neural network, which, when combined with a 2D color map, facilitates the identification of clustering within the data volume.
The application of SOM and PCA in Paradise is highlighted through a case study of the Niobrara unconventional reservoir. One hundred square miles from Phase 5 of Geophysical Pursuit, Inc. and Fairfield Geotechnologies’ multiclient library were analyzed for stratigraphic resolution of the Niobrara chalk reservoirs within a 60-millisecond two-way-time window. Thirty wells from the COGCC public database were available to corroborate the SOM results against log data. Several SOM topologies were generated and extracted within Paradise at well locations. These were exported and run through a statistical analysis program to visualize the neuron-to-reservoir correlations via histograms. Chi-squared independence tests also validated a relationship between SOM neuron numbers and the presence of reservoir for all chalk benches within the Niobrara.

Carrie Laudon
Carolan (Carrie) Laudon holds a PhD in geophysics from the University of Minnesota and a BS in geology from the University of Wisconsin Eau Claire. She has been Senior Geophysical Consultant with Geophysical Insights since 2017 working with Paradise®, their machine learning platform. Prior roles include Vice President of Consulting Services and Microseismic Technology for Global Geophysical Services and 17 years with Schlumberger in technical, management and sales, starting in Alaska and including Aberdeen, Scotland, Houston, TX, Denver, CO and Reading, England. She spent five years early in her career with ARCO Alaska as a seismic interpreter for the Central North Slope exploration team.



Carrie Laudon: 

Good afternoon. Those of you… I’m not all that used to talking into my screen, but thank you for attending my talk today.

I am presenting a case study for an unconventional reservoir from the US which illustrates how self-organizing maps can push the limit of seismic resolution to a single sample, and how we can use our wells to validate the machine learning results.

So, the outline of my talk today. We’re going to start with a little bit of discussion around why use machine learning: what does it bring to us that we didn’t have previously? I’ll take you through some examples of how attributes, sampling and self-organizing maps can resolve geology below tuning. Then, an introduction to the Niobrara petroleum system in the Denver-Julesburg Basin, followed by our multi-attribute machine learning interpretation workflow, which is comprised of two pieces: principal component analysis followed by self-organizing maps. Then I’ll present the results of SOM on the Niobrara seismic survey. We’ll look at those results visually, then look at how I took those results at the well locations and calibrated our SOM, and finish up with some well-to-SOM statistics.

There we go. Why do we want to consider seismic multi-attribute analysis? Seismic attributes have been around for decades and they’ve always been very challenging to interpret, because they respond to a complex subsurface that’s a combination of lithology, fluids, pressure, stress, faults, and fractures. And there’s never really a single attribute that tells us, “this is it, we can interpret this one attribute.” So, we’ve always been trying to interpret multiple attributes.

And what does machine learning bring to our task of interpreting seismic attributes? It helps us to address the overwhelming task of interpreting dozens or hundreds of attributes. Seismic data has always been a big data problem in our industry, and machine learning addresses the human inability to visualize attribute relationships beyond four dimensions. As humans, we can easily read 2D or 3D cross plots, and we can add color. But beyond that, we cannot see the relationships. A computer does not have that limitation.

And then, over the last few years, computer power has continued to grow exponentially, and new visualization techniques have emerged; in Paradise, for example, we have a 2D color map. But ultimately, our motivation in considering multi-attribute analysis is to produce better subsurface interpretations, reducing our risk and, hopefully for our clients, allowing them to make money, whether that’s drilling better wells or leasing better acreage.

So, today we’re going to look at unsupervised learning. But if you were to Google machine learning, you’d see these four main categories of machine learning. Let’s take a brief stroll through them. Supervised learning is really the most popular method in our industry. It takes a set of known input data and trains a model that is applied to new data to predict a result. An advantage of supervised learning is that you get directly to a specific answer; for example, you might get a porosity model from pre-stack inversion that’s been trained to well data. Unsupervised learning, however, is quite different, and it has its own advantages.

Unsupervised learning has no a priori knowledge of the data it’s organizing. The training adapts itself directly from the data. An advantage is that it finds patterns in the data that might not exist in the training data used for supervised learning. We see this quite often in our unsupervised machine learning projects, and toward the end of the talk I’ll point out some of the reasons for this.

Supervised learning can be thought of as a task driven approach. And it’s predicting the next value from a set of inputs. Whereas unsupervised learning is data-driven and is a means of finding patterns within the data. Then our other two types are semi-supervised learning, which is a simple combination of the two. And lastly, reinforcement learning, which is when an algorithm learns over time to maximize returns based on rewards it receives for performing certain actions.

So, the case study today is going to be on unsupervised learning.

The next section is going to go through a little bit of how and why self-organizing maps can help us discriminate, or resolve, thin beds. SOM classification is simply a method that uses attribute space to classify seismic samples. In this example, we just have a cross plot of two attributes, and the samples have already been marked with symbols indicating which cluster they fall into. The SOM process seeks out natural clusters in attribute space by introducing multi-attribute points called neurons. The SOM neurons will seek out the natural clusters of like characteristics in the seismic data, and the neurons learn the characteristics of the data clusters through an iterative process of cooperative followed by competitive training. When the learning is completed, each unique cluster is assigned to a neuron number, and the samples are now classified.
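For readers who want to experiment, the cooperative-then-competitive training described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the Paradise implementation; the grid size, decay schedules and random data below are all assumptions for the example.

```python
import numpy as np

def train_som(samples, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM: each neuron is a point in attribute space that migrates
    toward the natural clusters in the data."""
    rng = np.random.default_rng(seed)
    n, dims = samples.shape
    # neuron weights start as randomly chosen data samples
    weights = samples[rng.choice(n, rows * cols, replace=False)].astype(float)
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                   # learning rate decays
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)   # neighborhood shrinks
        for x in samples[rng.permutation(n)]:
            # competitive step: find the best-matching unit (winner)
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            # cooperative step: grid neighbors of the winner also move toward x
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def classify(samples, weights):
    """Assign each sample the number of its nearest neuron."""
    return np.argmin(((samples[:, None, :] - weights[None, :, :]) ** 2).sum(-1), axis=1)
```

Running `classify` over all samples yields the neuron number per sample, which is what gets painted with the 2D color map.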

Another important concept in Paradise is that we use sample-based attributes; every sample from every attribute is put into the SOM. We don’t use a wavelet-based approach. This example is just showing you how instantaneous attributes at the sample scale look for, I think, about a hundred milliseconds of data in time. In attribute space, this would consist of seven dimensions.

This is going to take you through a synthetic example that is really, I think, a powerful illustration of thin-bed discrimination by self-organizing maps. In this model, we have a reflection coefficient (RC) series on the left, at, I believe, a two-millisecond sample rate. We have a fairly thick positive acoustic-impedance layer, followed by a trough-peak doublet of one sample each, a peak-trough doublet of one sample each, and then a slightly thicker trough.

This RC series was convolved with a wavelet to create traces, and then these were duplicated to a hundred traces with noise added. Here on the left, we have simply cross plotted the Hilbert transform versus the regular amplitude, and you can see that clusters are very apparent in this simple cross plot of the data. Now, if we run this through an 8×8 SOM process, you can see each of these clusters is assigned to one of the 64 neurons in the SOM 2D color map.
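A toy version of this model is easy to reproduce. The wavelet frequency, noise level and layer positions below are assumptions for illustration, since the talk does not give the exact parameters.

```python
import numpy as np
from scipy.signal import hilbert

def ricker(f=30.0, dt=0.002, length=0.128):
    """Zero-phase Ricker wavelet; 30 Hz is an assumed frequency."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.002                      # 2 ms sample rate, as in the model
rc = np.zeros(100)              # reflection coefficient series
rc[30] = 0.8                    # thick positive acoustic-impedance layer
rc[50], rc[51] = -0.5, 0.5      # single-sample trough-peak doublet
rc[60], rc[61] = 0.5, -0.5      # single-sample peak-trough doublet
rc[80] = -0.6                   # slightly thicker trough

# Convolve with the wavelet, then duplicate to 100 traces with random noise
trace = np.convolve(rc, ricker(dt=dt), mode="same")
rng = np.random.default_rng(0)
traces = trace + rng.normal(0.0, 0.05 * np.abs(trace).max(), (100, trace.size))

# The two attributes for the cross plot: amplitude and its Hilbert transform
amplitude = traces.ravel()
hilb = np.imag(hilbert(traces, axis=1)).ravel()
```

Cross plotting `hilb` against `amplitude` separates the events into distinct clusters, which is what the 8×8 SOM then labels with neuron numbers.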

Now, taking this back to our original time series, you can see in the example that each of the events in our original RC series has formed a unique cluster within the SOM. Neuron 57 is our thick, high acoustic-impedance layer. The SOM has actually separated the negative amplitude from the positive amplitude in neurons 8 and 55; that’s one doublet pair. Then 28 and 33 are the second doublet pair, and our other higher acoustic-impedance layer is neuron 9. I hope this convinces you that the SOM process is able to distinguish very thin layers within a seismic time sequence.

Practically, how does this work in a 3D cube? I’ll just take you through an imaginary 10-attribute example. We have a 3D amplitude volume in survey space, and we compute or select 10 attributes to go into the SOM. We might have started with 20 or 30 attributes, but we selected 10 for the SOM. Each sample in the 3D cube then has 10 attribute values associated with its X, Y and time position. We take those samples from survey space into attribute space; now we’re in 10-dimensional space, and we run that through the SOM classification process. The SOM will perform cooperative and competitive training and, in this case, classify the data into 64 patterns. That number is not set in stone; it’s up to the interpreter. We offer a very large number of potential SOM topologies, and 64 is one of our favorites.
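The move from survey space to attribute space is essentially a reshape. Here is a minimal sketch with made-up survey dimensions and random stand-in attribute volumes:

```python
import numpy as np

# Hypothetical mini-survey: 50 inlines x 60 crosslines x 40 time samples,
# with 10 attribute volumes computed on the same grid (random stand-ins here).
rng = np.random.default_rng(0)
attribute_volumes = [rng.normal(size=(50, 60, 40)) for _ in range(10)]

# Survey space -> attribute space: every (x, y, t) sample becomes one
# 10-dimensional vector. Standardizing puts the attributes on a common scale.
stack = np.stack(attribute_volumes, axis=-1)    # shape (50, 60, 40, 10)
samples = stack.reshape(-1, 10)                 # shape (120000, 10)
samples = (samples - samples.mean(axis=0)) / samples.std(axis=0)
```

After SOM classification, each row of `samples` gets a neuron number, and that vector reshapes straight back to a (50, 60, 40) classified volume in survey space.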

Once the data are classified in attribute space, a nonlinear process maps those SOM neurons back to a 2D color map. Each of these neurons is also placed back into the 3D volume, so you end up with a 3D volume back in survey space where each sample is classified as one of these neurons. Once you have that in a 3D volume, you can start to interrogate it and interpret it.

That’s the SOM process in a nutshell. I guess, I said that. I forgot to put the words up.

Okay. Now we’re going to look at an introduction to the case study. Our case study is out of the Denver-Julesburg Basin. The Wattenberg field… here’s Denver, for reference. The Wattenberg field was produced as a basin-centered gas reservoir down in these Dakota sands. Our study area is up here in the northern part, almost near the Wyoming border.

This is an asymmetric foreland basin covering approximately 70,000 square miles over parts of Colorado, Wyoming, Kansas, and Nebraska. The basin has over 47,000 oil and gas wells, with a production history dating back to 1881. Until 2009, however, it really wasn’t being drilled with highly deviated wells; most of the wells in the basin at that point were vertical, and they were drilled without seismic. Starting around 2009, 11 years ago, the operators in Wattenberg began to drill and complete horizontal wells within the Niobrara. We have a cartoon here showing the stratigraphic column. All of these are pay zones; the shales that are source rocks are shown here, and the Niobrara is blown up. Typical depth… I should mention this is from Sonnenberg, a professor at the Colorado School of Mines, taken from one of his publications.

There are chalk benches within the Niobrara, informally termed the A, B and C benches. Those are the primary reservoirs for the horizontal drilling. They’re interbedded with very rich source rocks. However, when the horizontal drilling started, operators found they were hitting a lot of structures and faults that they weren’t seeing in the vertical wells, so they quickly recognized the need for 3D seismic data.

Beginning around 2010, GPI and, at the time, Geokinetics undertook a 1,500-square-mile multi-client program in Northern Colorado, the outline of which you can see here. They provided Geophysical Insights with a hundred square miles of this multi-client data to do a proof of concept on whether our self-organizing maps approach could improve the resolution of the Niobrara reservoirs. In addition to the seismic data, we had about 30 wells within the hundred square miles that were available through the public COGCC database. Out of these, we selected seven, shown in red, to have a full petrophysical analysis run on them.

The time structure map for the hundred square miles is shown here on the left. Production is very closely tied to fracturing in the chalks; as you can imagine, it’s a very low-porosity reservoir. The insets here are structural elements, from a study of the Austin Chalk, showing the kind of fracture patterns to expect for various structural elements, most of which are present in our survey. Here to the west, you’re coming up on a fairly steep fold as you approach the Front Range of the Rocky Mountains. The predominant faulting direction, as you can see from the time structure map, is northeast to southwest, but that’s overprinted with a northwest-to-southeast fault pattern.

This is the most positive curvature on the top of the Niobrara. You can see a lot more detail on the complexity of the fault geometries, the two main fabrics overprinting each other, and how these would generate nice fracture sets within the chalks. These attributes are computed with the SB consortium library that is provided with Paradise.

Here’s our original data; it’s very nice, very clean seismic data. We selected approximately 60 milliseconds. The vertical scale on the left is two-way time in seconds, and the trace spacing is about 110 feet. We resampled the seismic data once, from two milliseconds to one millisecond. The Niobrara is a fairly easy event to follow; the top Niobrara is the strong peak shown here. The base of our study section was the top of the Greenhorn.

Within the 60 milliseconds we have our four reservoirs: the benches of the Niobrara, as well as the Codell sandstone, which I forgot to mention on the last slide. The Codell is a sandstone that’s also being produced from these horizontal wells; it’s fairly thin, highly heterogeneous, and overlain unconformably by the Niobrara formation. We have a strong trailing trough that sits between the A bench and the B bench. Another thing to notice is how much more internal faulting there is within the Niobrara than you would necessarily pick up if you were just looking at the very shaley part near the top of the Niobrara.

We’re looking to see whether the SOM process can give us more detail on these four reservoirs within the 60-millisecond interval. Selecting the attributes that go into the SOM is another very important step in the analysis because, as I mentioned earlier, we all have our favorite attributes, but it’s hard to put a direct physical interpretation onto a single seismic attribute. So how do we decide which attributes to run through our SOM process? For that, we use principal component analysis. Principal component analysis is another unsupervised machine learning technique that helps us with dimensionality reduction. It will take a large set of variables and reduce it to a smaller set that still contains most of the variation, or independent information, of the large set. So, PCA helps us find the most prominent and important seismic attributes in our data.
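The idea of reading attribute prominence off the eigenvectors can be sketched with plain NumPy. The attribute names and their correlations below are invented to mimic the two groupings seen in the PCA charts; this is not the Paradise PCA itself.

```python
import numpy as np

# Hypothetical standardized attribute samples: rows are seismic samples,
# columns are candidate attributes. Three columns share one underlying
# signal, two share another, mimicking two attribute groupings.
names = ["sweetness", "envelope", "rel_acoustic_imp", "thin_bed", "inst_freq"]
rng = np.random.default_rng(0)
base = rng.normal(size=(5000, 2))
X = np.column_stack([
    base[:, 0] + 0.1 * rng.normal(size=5000),  # sweetness
    base[:, 0] + 0.1 * rng.normal(size=5000),  # envelope
    base[:, 0] + 0.2 * rng.normal(size=5000),  # relative acoustic impedance
    base[:, 1] + 0.1 * rng.normal(size=5000),  # thin bed
    base[:, 1] + 0.2 * rng.normal(size=5000),  # instantaneous frequency
])
X = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA via eigendecomposition of the covariance matrix: eigenvectors are the
# principal components, eigenvalues their share of the total variance.
evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

for i in range(2):
    ranked = sorted(zip(names, np.abs(evecs[:, i])), key=lambda p: -p[1])
    print(f"eigenvector {i + 1}:", [(n, round(v, 2)) for n, v in ranked])
```

The first eigenvector loads on the three correlated attributes, the second on the other two, which is how the prominent attributes per eigenvector are identified.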

Paradise allows us to do a pretty exhaustive PCA. Rather than go through how we do it, we’ll go through some of the results. This is a PCA result from Paradise for the first two eigenvectors of our interval, Niobrara to Greenhorn. In this case, we selected an inline that goes through one of our key wells. But you can look at all of the inlines in your survey, and at suites of inlines, to see how your PCA changes throughout the survey and through various time intervals.

In this case, we ran it over a fairly narrow window, just our Niobrara to Greenhorn. From the first eigenvector, which I’ve blown up because the chart is hard to read, we had three prominent attributes: sweetness, envelope and relative acoustic impedance. You can see their relative contributions to the eigenvector; then it drops off quickly for the other attributes that we put into the PCA. Likewise, in eigenvector two we had only two prominent attributes, thin bed and instantaneous frequency.

These five attributes, along with another four from other eigenvectors, are going to go into our SOM. But you don’t want to do this blindly; it’s always really important to inspect your attributes. You don’t want to just look at the bar graphs and throw them into the SOM. So, we took the additional step of looking at, for example, the sweetness at our primary reservoir, the Niobrara B. Likewise, we had the thin bed indicator extracted near the Niobrara B, near the primary reservoir. You just want to make sure you don’t have any surprises in those attributes. Ultimately, we selected nine attributes from our instantaneous suite, out of six of our eigenvectors, and these are listed here. That went into the SOM.

Looking at the results: here we have our original data through one of our key wells, one of the wells that we have the petrophysical evaluation on. And here we have our 8×8 SOM, which, after doing a visual QC of the different SOM topologies, was the one I liked the best, the one I tried to tie back to my wells. The B bench is near the center up here on our amplitude section; it’s this kind of faint peak. But in the SOM, it just jumps right out at you as this yellow-red-yellow sequence. Again, you can see there’s a lot more structure on the chalk than there is on the seismic horizon that you’re picking. So, as far as trying to place wells, this kind of detailed image of your reservoir will certainly enable you to place your wells and understand how you’re going to stay in zone.

Zooming into the well, you can again see we have about five milliseconds of data covering the B, which I’ve blown up here. That’s a total of about 30 feet in depth. Our bright red and orange neurons actually correspond to the maximum carbonate content within that B bench, as you can see here. So, the neurons in the middle of the B interval are actually showing you the sweet spot of the reservoir. And here’s a reference for how these logs were calculated, if anybody wants to follow up on that; the authors have done all this from triple-combo-type wireline logs and come up with lithologies, TOC, et cetera. Here, you can see our clean pay flag as well.

Now, this is a cross section through three of our wells with petrophysics. There’s a lot more to look at in this cross section, but I’ll show you some of the highlights. We have our B bench here, and you can see the markers. Interestingly, the SOM images the B through some complicated structure. This area right here is very complex, and it’s hard to pick what’s happening structurally with the Niobrara as you take this down to the Greenhorn. Likewise, you can see some areas where perhaps your reservoir has been faulted out. You get a lot more detail out of the SOM that you can use to avoid areas, in addition to finding your sweet spots.

The base of the A bench is well resolved in the navy blue. We also have our best source rock, the high-TOC zone that sits above the B and below the A bench, shown in these pink and blue neurons. The base of the high-TOC zone is not resolved by the SOM, but the top of it is. So, all in all, we get a much better picture of the stratigraphy from the SOM than we do just from using the amplitude data.

Everything I’ve shown up to now was work done in 2018. I revisited this project after Paradise 3.4 was released, because it came with the ability to extract logs from our SOM results at well locations. Here’s a cross section I built in Paradise through our 8×8 SOM and the seven wells that we have the petrophysical results on. On the left, track one has a Vshale curve and a gamma ray. In track two, I’ve taken the SOM and overlain it with the volume-of-calcite curve from the petrophysics. The third track is our TOC, the fourth track is effective porosity, and the fifth track is resistivity. Because the reservoir is chalk, I think the volume of calcite is a better indicator of reservoir than Vshale. So, when we move into the SOM statistics, we’re going to use the volume of calcite as a cutoff.

Another thing to note: we have our Fort Hays limestone down here near the base of the interval that we studied, and you can see that the SOM neurons can potentially be repeated. It’s important, then, that you can export these logs as LAS files. We had a summer intern named Yin Kai Wang who built a statistical analysis tool to help us evaluate the extracted logs along with the other well logs in the wells. In this case, I only evaluated down to the base of the Niobrara, so that we weren’t catching the Fort Hays limestone when evaluating the SOM results statistically.

Now, this is from a program that he wrote for us over the summer, based on a paper published by Leal and others. If you want to see more on that, Fabian Rada is presenting in our booth this week. They built contingency tables and produced statistical measures of the relationship between the flags that were applied to the logs. In this case, we used effective porosity and volume of calcite to flag reservoir versus non-reservoir, just within that Niobrara interval. Personally, I like just looking at the histograms, and looking at these histograms you can quickly see that we have some neurons in our SOM that are dominated by non-reservoir, some that are clearly reservoir, and some that are mixed.

When you make this contingency table, you get a measure called Chi-squared, which shows you whether there is a relationship between your SOM neurons and the flag that you’ve set, in this case reservoir versus non-reservoir. Here, the Chi-squared measure on the 8×8 SOM says yes, there is a relationship between SOM neuron and the presence of reservoir or non-reservoir, over the entire Niobrara interval. Going back to Paradise, I looked at these in three dimensions and tried to get a feeling for whether the neurons were present vertically and laterally throughout the various benches, through the various reservoirs.
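The contingency-table test itself is a one-liner with SciPy. The hit counts below are invented for illustration; the real tables come from the extracted SOM logs at the wells.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical hit counts: rows are SOM neurons sampled at the wells,
# columns are (non-reservoir, reservoir) sample counts from the flagged logs.
table = np.array([
    [40,  5],   # neuron dominated by non-reservoir
    [ 6, 52],   # neuron dominated by reservoir
    [22, 25],   # mixed neuron
    [35,  8],
    [ 4, 60],
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2e}, dof={dof}")
# A small p-value rejects independence: neuron number and the
# reservoir flag are related over this interval.
```

With a table this lopsided the p-value is tiny, i.e. the test confirms a relationship between neuron number and the presence of reservoir.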

So, here’s our A bench. Our extracted log tells us these neurons are present in the wells, and it shows us which of these neurons contain reservoir; some of them are almost exclusively reservoir and some are non-reservoir. The Chi-squared measure shows us that there is a relationship. Then, looking in the 3D viewer, you can turn on just those neurons that you found at the wells and look at how they are stacked, both vertically and laterally, in the 3D volume. You can see these neurons are concentrated near the top of the formation.

I actually think neurons 15 and 16 are part of the shale; of course, the time-to-depth conversion can sometimes pick up a stray hit count here or there. Looking at our B bench again, it’s kind of neat to see that your neurons are fairly well clustered in the 2D map. These are all the neurons that are in the wells at the B bench. In this case the Chi-squared test failed, but not because there isn’t a relationship; it’s really because we have very few neurons that are actually sampling non-reservoir. The B bench is almost entirely reservoir. So, in this case, the histogram is probably more helpful than the Chi-squared.

But if you go to my colleague Ivan Merck’s talk tomorrow, you might find some other measures besides these for looking at the statistical relationships. Moving on to the C bench: again, we see a different cluster, or set of neurons, sampling our C at the wells. And again, we can see that most of the C bench is reservoir as well.

So, this is really helpful: being able to map out, in very fine detail over the three-dimensional volume, the presence of these neurons. That’s one way to calibrate your SOM results to your wells. Another question we looked to address this summer with Yin Kai’s work: as I mentioned, I had selected the 8×8 based just on my visual inspection of the SOMs. But he actually found a way to measure it, since from these contingency tables we can compute not only Chi-squared but also the likelihood ratio, Cramér’s V and the Bayes factor.

You can look at all of these different SOM topologies and what’s happening at the wells, and you can see that the highest Chi-squared value is the 8×8. I apologize, this is hard to read, but we have 4×4 on the left and 12×12 on the right. Each one of these measures seems to suggest that the 8×8 topology is the best, at least over that top-to-base Niobrara section. Certainly you could look over the whole SOM, and you could probably apply some different cutoffs, et cetera. One thing I’ve decided about this particular SOM, after looking at the extracted logs and the log curves we had available, is that it’s really discriminating between carbonate and shale. It’s a lithology-discrimination SOM, and we can use it to map out the chalk benches; we could probably also use it to map out the shales. I didn’t really go into detail on what’s below the Niobrara, but what I’d like to take on next is looking at how it does with the Codell sandstone.
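Cramér's V is handy for this comparison because, unlike raw Chi-squared, it is normalized to [0, 1] and so can be compared across contingency tables of different sizes, i.e. across SOM topologies. A minimal sketch (the tables are toy examples, not the study's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramer's V: Chi-squared rescaled to [0, 1] so tables from
    different SOM topologies can be compared on one scale."""
    table = np.asarray(table, dtype=float)
    chi2 = chi2_contingency(table)[0]
    n = table.sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

strong = [[50, 2], [3, 45]]    # neuron number strongly predicts the flag
weak = [[26, 24], [25, 25]]    # nearly independent
print(cramers_v(strong), cramers_v(weak))
```

A topology whose well-extracted table gives a higher V has a stronger neuron-to-reservoir association, which is the sense in which the 8×8 scored best.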

To wrap up, one thing that you do have to be careful of, and this goes for supervised classifications as well: we had seven wells in a hundred square miles, and we extracted seismic data at those wells and produced hit counts of which neurons got sampled at each well location. On the left, from Petrel, is the full histogram for the 3D seismic SOM; this is the 7×7, not the 8×8, showing the statistics of the full 3D volume for the Niobrara B bench. In this case, you see that 45 of the 49 neurons in the SOM have data within the seismic volume. But if I take my seven wells, we’ve actually only sampled 19 of those 49 neurons. So, it’s important to look at the statistics of your full survey in addition to what’s happening at your wells.
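Comparing well coverage against the full volume is simple set arithmetic on the classified samples. The neuron numbers below are randomly generated stand-ins for a 7×7 (49-neuron) SOM, purely to show the bookkeeping:

```python
import numpy as np

# Hypothetical neuron numbers: the classified B-bench samples from the full
# 3D volume versus only the samples extracted at the well locations.
rng = np.random.default_rng(0)
volume_neurons = rng.integers(0, 49, size=200_000)         # full survey, neurons 0..48
well_neurons = rng.choice(np.arange(0, 49, 3), size=400)   # wells hit only a subset

covered = set(np.unique(well_neurons).tolist())
in_volume = set(np.unique(volume_neurons).tolist())
unsampled = sorted(in_volume - covered)
print(f"{len(covered)} of {len(in_volume)} neurons are sampled at the wells")
print("neurons with no well control:", unsampled)
```

The `unsampled` list is exactly the set worth turning on in the 3D viewer to see where the holes in well control fall within the reservoir.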

It might be that some of these neurons are not really in the zone of interest; there might be something wrong with your horizon or the way you’ve isolated the zone. But when you go back into Paradise and turn on only the 19 of the 49 neurons that our histogram says are the B bench, and then look at that in 3D, you do find holes right smack in the middle of the B bench. It’s definitely worth going back and finding out what those neurons are, how extensive they are in your volume, and maybe looking for additional wells that might sample them. They may show you something different, if you can find a well that has sampled that interval beyond the ones you’ve sampled up to that point.

It’s important to consider the volume as well as the well statistics. So, that’s just a watch out.

Thanks for your time. I’ll wrap up, and then we’ll take questions. In summary, seismic multi-attribute analysis is delivering on the promise of improving interpretations via the integration of attributes, which respond to subsurface conditions such as stratigraphy, lithology, faulting, fracturing, fluids, pressure, et cetera. Statistical methods and SOMs enhance the interpretation process, can easily be used to augment traditional interpretation, and use attribute space to simultaneously classify suites of attributes into sample-based, high-dimensionality clusters.

In the DJ Basin, we have used SOM to resolve our primary reservoir targets, the Niobrara chalk benches, which are found within approximately 60 milliseconds of two-way travel time. We’ve resolved the benches to the level of one to five neurons, which in depth corresponds to anywhere from 7 to 35 feet in thickness. The Paradise visualization tools, including 2D color maps and well log extractions, enable closer ties to our well data, which brings a lot more value to the results we’re producing.

Quick acknowledgements. I really want to thank GPI and Fairfield for letting us use this data; it’s such a nice, high-quality dataset. I have to acknowledge Yin Kai for the program he provided that lets us apply the statistics to our extracted logs. Sarah Stanley and Patricia Santogrossi did a lot of the work on this 3D volume as well. And Digital Formation for providing the petrophysical analysis.


    Deborah SacreyOwner - Auburn Energy

    How to Use Paradise to Interpret Carbonate Reservoirs

    The key to understanding Carbonate reservoirs in Paradise start with good synthetic ties to the wavelet data. If one is not tied correctly, then it will be very east to mis-interpret the neurons as reservoir, when they are not. Secondly, the workflow should utilize Principal Component Analysis to better understand the zone of interest and the attributes to use in the SOM analysis. An important part to interpretation is understanding “Halo” and “Trailing” neurons as part of the stack around a reservoir or potential reservoir. Usually, one sees this phenomenon around deep, pressured gas reservoirs, but it can happen in shallow reservoirs as well. Two case studies are presented to emphasize the importance of looking for halo or trailing patterns around good reservoirs. One is a deep Edwards example in south central Texas, and the other a shallow oil reservoir in the Austin Chalk in the San Antonio area. Another way to help enhance carbonate reservoirs is through Spectral Decomposition. A case history is shown in the Smackover in Alabama to highlight and focus on an oolitic shoal reservoir which tunes at a specific frequency in the best wells. Not all carbonate porosity is at the top of the deposition. A case history will be discussed looking for porosity in the center portion of a reef in west Texas. And finally, one of the most difficult interpretation challenges in the carbonate spectrum is correctly mapping the interface between two carbonate layers. A simple technique is shown to help with that dilemma, by using few attributes and a low-topology count to understand regional depositional sequences. This example is from the Delaware Basin in southeastern New Mexico.

    Dr. Carrie Laudon, Senior Geophysical Consultant

    Applying Unsupervised Multi-Attribute Machine Learning for 3D Stratigraphic Facies Classification in a Carbonate Field, Offshore Brazil

    We present results of a multi-attribute, machine learning study over a pre-salt carbonate field in the Santos Basin, offshore Brazil. These results test the accuracy and potential of Self-organizing maps (SOM) for stratigraphic facies delineation. The study area has an existing detailed geological facies model containing predominantly reef facies in an elongated structure.

    Carrie Laudon, Senior Geophysical Consultant - Geophysical Insights

    Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in Machine Learning (ML) are transforming seismic analysis. Using these new tools, geoscientists can accomplish the following quickly and effectively:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Generate seismic volumes that capture structural and stratigraphic details

    Join us for a ‘Lunch & Learn’ session daily at 11:00, where Dr. Carolan (“Carrie”) Laudon will review the theory and results of applying a combination of machine learning tools to obtain the above results. A detailed agenda follows.


    Automated Fault Detection using 3D CNN Deep Learning

    • Deep learning fault detection
    • Synthetic models
    • Fault image enhancement
    • Semi-supervised learning for visualization
    • Application results
      • Normal faults
      • Fault/fracture trends in complex reservoirs

    Demo of Paradise Fault Detection ThoughtFlow®

    Stratigraphic analysis using machine learning with fault detection

    • Attribute Selection using Principal Component Analysis (PCA)
    • Multi-Attribute Classification using Self-Organizing Maps (SOM)
    • Case studies – stratigraphic analysis and fault detection
      • Fault-karst and fracture examples, China
      • Niobrara – Stratigraphic analysis and thin beds, faults
    Thomas Chaparro, Senior Geophysicist - Geophysical Insights

    Paradise: A Day in The Life of the Geoscientist

    Over the last several years, the industry has invested heavily in Machine Learning (ML) for better predictions and automation. Dramatic results have been realized in exploration, field development, and production optimization. However, many of these applications have been single-use ‘point’ solutions. There is a growing body of evidence that seismic analysis is best served by a combination of ML tools applied to a specific objective, referred to as ML Orchestration. This talk demonstrates how the Paradise AI workbench applications are used in an integrated workflow to achieve superior results compared with traditional interpretation methods or single-purpose ML products. Using examples that combine ML-based Fault Detection and Stratigraphic Analysis, the talk will show how ML orchestration produces value for exploration and field development.

    Aldrin Rondon, Senior Geophysical Engineer - Dragon Oil

    Machine Learning Fault Detection: A Case Study

    An innovative fault pattern detection methodology has been carried out using a combination of machine learning techniques to produce a seismic volume suitable for fault interpretation in a structurally and stratigraphically complex field. Through theory and results, the main objective was to demonstrate that a combination of ML tools can generate superior results in comparison with traditional attribute extraction and data manipulation through conventional algorithms. The ML technologies applied are a supervised, deep learning fault classification followed by an unsupervised, multi-attribute classification combining fault probability and instantaneous attributes.

    Thomas Chaparro, Senior Geophysicist - Geophysical Insights

    Thomas Chaparro is a Senior Geophysicist who specializes in training and preparing AI-based workflows. Thomas also has experience as a processing geophysicist in 2D and 3D seismic data processing. He has participated in projects in the Gulf of Mexico, offshore Africa, the North Sea, Australia, Alaska, and Brazil.

    Thomas holds a bachelor’s degree in Geology from Northern Arizona University and a Master’s in Geophysics from the University of California, San Diego. His research focus was computational geophysics and seismic anisotropy.

    Aldrin Rondon, Senior Geophysical Engineer - Dragon Oil

    Bachelor’s Degree in Geophysical Engineering from Central University in Venezuela with a specialization in Reservoir Characterization from Simon Bolivar University.

    Over 20 years of exploration and development geophysical experience, with extensive 2D and 3D seismic interpretation, including acquisition and processing.

    Aldrin spent his formative years working on exploration activity at PDVSA Venezuela, followed by a period working as a G&G consultant for a major international consulting company (Landmark, Halliburton) in the Gulf of Mexico. Later he worked at Helix in Scotland, UK, on producing assets in the Central and South North Sea. From 2007 to 2021, he worked as a Senior Seismic Interpreter in Dubai, involved in several dedicated development projects in the Caspian Sea.

    Deborah Sacrey, Owner - Auburn Energy

    How to Use Paradise to Interpret Clastic Reservoirs

    The key to understanding Clastic reservoirs in Paradise starts with good synthetic ties to the wavelet data. If one is not tied correctly, then it will be easy to misinterpret the neurons as reservoir when they are not. Secondly, the workflow should utilize Principal Component Analysis to better understand the zone of interest and the attributes to use in the SOM analysis. An important part of interpretation is understanding “Halo” and “Trailing” neurons as part of the stack around a reservoir or potential reservoir. Deep, high-pressured reservoirs often “leak” or have vertical percolation into the seal. This changes the rock properties in the seal enough to create a “halo” effect in SOM. Likewise, frequency changes in the seismic can cause a subtle “dim-out”, not necessarily observable in the wavelet data, but enough to create a different pattern in the Earth in terms of these rock property changes. Case histories for halo and trailing neural information include a deep, pressured Chris R reservoir in southern Louisiana, Frio pay in southeast Texas, and AVO properties in the Yegua of Wharton County. Additional case histories highlight thin-bed pays in Brazoria County, including updated information using CNN fault skeletonization. A further Wharton County case history shows how Low Probability can help explore Wilcox reservoirs. Lastly, a look at using Paradise to help find sweet spots in unconventional reservoirs like the Eagle Ford, a case study provided by Patricia Santigrossi.

    Mike Dunn, Sr. Vice President of Business Development

    Machine Learning in the Cloud

    Machine Learning in the Cloud will address the capabilities of the Paradise AI Workbench, featuring on-demand access enabled by the flexible hardware and storage facilities available on Amazon Web Services (AWS) and other commercial cloud services. Like the on-premise instance, Paradise On-Demand provides guided workflows to address many geologic challenges and investigations. The presentation will show how geoscientists can accomplish the following workflows quickly and effectively using guided ThoughtFlows® in Paradise:

    • Identify and calibrate detailed stratigraphy using seismic and well logs
    • Classify seismic facies
    • Detect faults automatically
    • Distinguish thin beds below conventional tuning
    • Interpret Direct Hydrocarbon Indicators
    • Estimate reserves/resources

    Attend the talk to see how ML applications are combined through a process called "Machine Learning Orchestration," proven to extract more from seismic and well data than traditional means.

    Sarah Stanley
    Senior Geoscientist

    Stratton Field Case Study – New Solutions to Old Problems

    The Oligocene Frio gas-producing Stratton Field in south Texas is well known. Like many onshore fields, its productive sand channels are difficult to identify using conventional seismic data. However, the productive channels can be easily defined by employing several Paradise modules, including unsupervised machine learning, Principal Component Analysis, Self-Organizing Maps, 3D visualization, and the new Well Log Cross Section and Well Log Crossplot tools. The Well Log Cross Section tool generates extracted seismic data, including SOMs, along the cross section boreholes and logs. This extraction process enables the interpreter to accurately identify the SOM neurons associated with pay versus those associated with non-pay intervals. The reservoir neurons can be visualized throughout the field in the Paradise 3D Viewer, with geobodies generated from the neurons. With this ThoughtFlow®, pay intervals previously difficult to see in conventional seismic can finally be visualized and tied back to the well data.

    Laura Cuttill
    Practice Lead, Advertas

    Young Professionals – Managing Your Personal Brand to Level-up Your Career

    No matter where you are in your career, your online “personal brand” has a huge impact on providing opportunity for prospective jobs and garnering the respect and visibility needed for advancement. While geoscientists tackle ambitious projects, publish in technical papers, and work hard to advance their careers, often, the value of these isn’t realized beyond their immediate professional circle. Learn how to…

    • Communicate who you are to high-level executives in exploration and development
    • Avoid common social media pitfalls
    • Optimize your online presence to best garner attention from recruiters
    • Stay relevant
    • Create content of interest
    • Establish yourself as a thought leader in your given area of specialization
    Laura Cuttill
    Practice Lead, Advertas

    As a 20-year marketing veteran in oil and gas and serial entrepreneur, Laura has deep experience in bringing technology products to market and growing the sales pipeline. Armed with a marketing degree from Texas A&M, she began her career doing technical writing for Schlumberger and ExxonMobil in 2001. She co-founded Advertas in 2004 and began to leverage her upstream experience in marketing. In 2006, she co-founded the cyber-security software company, 2FA Technology. After growing 2FA from a startup to 75% market share in target industries, and the subsequent sale of the company, she returned to Advertas to continue working toward the success of her clients, such as Geophysical Insights. Today, she guides strategy for large-scale marketing programs, manages project execution, cultivates relationships with industry media, and advocates for data-driven, account-based marketing practices.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Statistical Calibration of SOM results with Well Log Data (Case Study)

    The first stage of the proposed statistical method has proven very useful in the health and social sciences for testing whether there is a relationship between two qualitative variables (nominal or ordinal) or categorical quantitative variables. Its application in the oil industry allows geoscientists not only to test dependence between discrete variables, but also to measure their degree of correlation (weak, moderate, or strong). This article shows its application to reveal the relationship between a SOM classification volume of a set of nine seismic attributes (whose vertical sampling interval is three meters) and different well data (sedimentary facies, Net Reservoir, and effective porosity grouped by ranges). The data were prepared to construct contingency tables, in which the dependent (response) and independent (explanatory) variables were defined, the observed frequencies were obtained, and the frequencies that would be expected if the variables were independent were calculated; the difference between the two magnitudes was then evaluated using the Chi-Square contrast statistic. The second stage involves calibrating the SOM volume extracted along the wellbore path through statistical analysis of the petrophysical properties VCL, PHIE, and SW for each neuron, which made it possible to identify the neurons with the best petrophysical values in a carbonate reservoir.
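
    The contingency-table test described above can be sketched in a few lines. The counts below are hypothetical stand-ins for SOM winning neurons extracted along a wellbore versus a Net Reservoir flag; `scipy.stats.chi2_contingency` supplies the Chi-Square statistic, and Cramér's V grades the degree of correlation (weak, moderate, or strong):

```python
# Hedged sketch of the two-stage statistical idea: chi-square independence
# test on a (hypothetical) contingency table of SOM winning neurons vs. a
# well-log Net Reservoir flag, then Cramér's V to grade the association.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: SOM winning neurons along the wellbore (neurons 1-4, illustrative).
# Columns: counts of samples flagged [Net Reservoir, Non-Reservoir].
observed = np.array([
    [42,  8],   # neuron 1 -- mostly reservoir
    [ 5, 37],   # neuron 2 -- mostly non-reservoir
    [18, 20],   # neuron 3 -- mixed
    [30, 10],   # neuron 4
])

chi2, p_value, dof, expected = chi2_contingency(observed)

# Cramér's V normalizes chi-square by sample size and table dimensions.
n = observed.sum()
k = min(observed.shape) - 1
cramers_v = np.sqrt(chi2 / (n * k))

print(f"chi2={chi2:.1f}, p={p_value:.2e}, dof={dof}")
print(f"Cramér's V = {cramers_v:.2f}")  # ~0.1 weak, ~0.3 moderate, >0.5 strong
if p_value < 0.05:
    print("Reject independence: neuron class is associated with Net Reservoir.")
```

    With counts like these the test rejects independence decisively, which is the kind of result that justifies extending the neuron-to-reservoir calibration away from the wells.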

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Heather Bedle received a B.S. (1999) in physics from Wake Forest University, and then worked as a systems engineer in the defense industry. She later received an M.S. (2005) and a Ph.D. (2008) from Northwestern University. After graduate school, she joined Chevron and worked as both a development geologist and geophysicist in the Gulf of Mexico before joining Chevron’s Energy Technology Company Unit in Houston, TX. In this position, she worked with the Rock Physics from Seismic team analyzing global assets in Chevron’s portfolio. Dr. Bedle is currently an assistant professor of applied geophysics at the University of Oklahoma’s School of Geosciences. She joined OU in 2018, after instructing at the University of Houston for two years. Dr. Bedle and her student research team at OU primarily work with seismic reflection data, using advanced techniques such as machine learning, attribute analysis, and rock physics to reveal additional structural, stratigraphic and tectonic insights of the subsurface.

    Jie Qi
    Research Geophysicist

    An Integrated Fault Detection Workflow

    Seismic fault detection is one of the most critical procedures in seismic interpretation. Identifying faults is essential for characterizing and finding potential oil and gas reservoirs. Seismic amplitude data exhibiting good resolution and a high signal-to-noise ratio are key to identifying structural discontinuities using seismic attributes or machine learning techniques, which in turn serve as input for automatic fault extraction. Deep learning Convolutional Neural Networks (CNNs) perform well on fault detection without any human-computer interactive work. This study shows an integrated CNN-based fault detection workflow that constructs fault images sufficiently smooth for subsequent automatic fault extraction. The objectives were to suppress noise and stratigraphic anomalies subparallel to reflector dip, and to sharpen faults and other discontinuities that cut reflectors, preconditioning the fault images for subsequent automatic extraction. A 2D continuous wavelet transform-based acquisition footprint suppression method was applied time slice by time slice to suppress wavenumber components, preventing the CNN fault detection method from interpreting the acquisition footprint as faults. To further suppress cross-cutting noise and sharpen fault edges, a principal component edge-preserving structure-oriented filter is also applied. The conditioned amplitude volume is then fed to a pre-trained CNN model to compute fault probability. Finally, a Laplacian of Gaussian filter is applied to the CNN fault probability to enhance the fault images. The resulting fault probability volume compares favorably with traditional fault interpretations generated by human interpreters on vertical slices through the seismic amplitude volume.
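
    As an illustration of the final enhancement step, the sketch below applies a Laplacian of Gaussian filter (via `scipy.ndimage.gaussian_laplace`) to a synthetic fault-probability slice; the data and the sigma choice are assumptions for demonstration, not the study's actual volume or parameters:

```python
# Hedged sketch: Laplacian of Gaussian (LoG) enhancement of a CNN
# fault-probability image. The "fault" here is a synthetic blurred ridge.
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic fault-probability slice: a blurred vertical fault near column 32.
prob = np.zeros((64, 64), dtype=float)
prob[:, 30:35] = [0.3, 0.7, 1.0, 0.7, 0.3]

# The negative LoG responds strongly where probability forms a sharp ridge
# (a fault) and stays near zero in smooth background; sigma trades off
# noise suppression against how much the ridge is thinned.
enhanced = -gaussian_laplace(prob, sigma=1.5)
enhanced = np.clip(enhanced, 0, None)   # keep only positive ridge responses
enhanced /= enhanced.max()              # rescale to [0, 1]

# The ridge response peaks at the fault location.
print(int(np.argmax(enhanced[32])))
```

    In a real volume this filter would run on the CNN output rather than a toy array, but the effect is the same: diffuse probability is sharpened into a thin, extractable fault image.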

    Dr. Jie Qi
    Research Geophysicist

    An integrated machine learning-based fault classification workflow

    We introduce an integrated machine learning-based fault classification workflow that creates fault component classification volumes that greatly reduce the burden on the human interpreter. We first compute a 3D fault probability volume from pre-conditioned seismic amplitude data using a 3D convolutional neural network (CNN). However, the resulting “fault probability” volume also delineates non-fault edges such as angular unconformities, the base of mass transport complexes, and noise such as acquisition footprint. We find that image processing-based fault discontinuity enhancement and skeletonization methods can enhance the fault discontinuities and suppress many of the non-fault discontinuities. Although each fault is characterized by its dip and azimuth, these two properties are discontinuous at azimuths of φ=±180°, and for near-vertical faults the azimuths φ and φ+180° describe the same plane, requiring them to be parameterized as four continuous geodetic fault components. These four fault components, together with the fault probability, can then be fed into a self-organizing map (SOM) to generate a fault component classification. We find that the final classification result can segment fault sets trending in interpreter-defined orientations and minimize the impact of stratigraphy and noise by selecting different neurons from the SOM 2D neuron color map.
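
    The four geodetic components are not spelled out in the abstract, so the function below is a hypothetical illustration of the underlying idea only: encode dip and a doubled azimuth trigonometrically so that φ and φ+180° map to identical continuous values and the ±180° seam disappears before the SOM sees the data.

```python
# Hypothetical sketch (not the paper's exact parameterization): replace
# wrap-around angles with smooth trigonometric components an SOM can cluster.
import numpy as np

def fault_components(dip_deg, az_deg):
    """Map (dip, azimuth) to four continuous components.

    Doubling the azimuth makes phi and phi+180 deg identical, removing the
    two-fold ambiguity of a fault plane's strike direction.
    """
    dip = np.radians(dip_deg)
    az2 = np.radians(2.0 * az_deg)
    return np.array([
        np.sin(dip) * np.cos(az2),
        np.sin(dip) * np.sin(az2),
        np.cos(dip) * np.cos(az2),
        np.cos(dip) * np.sin(az2),
    ])

# phi = -179 deg and phi = +179 deg are nearly the same fault orientation;
# the encoded components agree even though the raw azimuths differ by 358.
a = fault_components(88.0, -179.0)
b = fault_components(88.0, 179.0)
print(np.abs(a - b).max())  # small, despite the large raw-azimuth jump
```

    With raw azimuths, the same pair of orientations would differ by a full wrap of the angle axis, which is exactly the discontinuity the abstract describes removing.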

    Ivan Marroquin
    Senior Research Geophysicist

    Connecting Multi-attribute Classification to Reservoir Properties

    Interpreters rely on seismic pattern changes to identify and map geologic features of importance. The ability to recognize such features depends on the seismic resolution and the characteristics of seismic waveforms. With the advancement of machine learning algorithms, new methods for interpreting seismic data are being developed. Among these algorithms, self-organizing maps (SOM) provide a different approach to extracting geological information from a set of seismic attributes.

    SOM approximates the input patterns by a finite set of processing neurons arranged in a regular 2D grid of map nodes, classifying multi-attribute seismic samples into natural clusters in an unsupervised manner. Because the machine learning is unbiased, the classifications can contain both geological information and coherent noise; thus, seismic interpretation evolves into broader geologic perspectives. Additionally, SOM partitions multi-attribute samples without a priori information to guide the process (e.g., well data).

    The SOM output is a new seismic attribute volume, in which geologic information is captured by the classification into winning neurons. Implicit and useful geological information is uncovered through interactive visual inspection of the winning neuron classifications. By doing so, interpreters build a classification model that helps them gain insight into complex relationships between attribute patterns and geological features.
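
    As a minimal sketch of the classification idea (the textbook Kohonen update, not the Paradise implementation), the loop below trains an 8×8 neuron grid on random stand-in attribute vectors and assigns each sample a winning neuron:

```python
# Minimal NumPy SOM sketch: multi-attribute samples are classified to
# "winning neurons" on a regular 2D grid. Data are random stand-ins for
# attribute vectors, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=(500, 9))          # 500 samples x 9 attributes

rows, cols = 8, 8                            # 8x8 topology = 64 neurons
weights = rng.normal(size=(rows * cols, 9))
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

n_iter = 2000
for t in range(n_iter):
    x = samples[rng.integers(len(samples))]
    # Winning neuron (BMU): closest weight vector in attribute space.
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Learning rate and neighborhood radius decay over training.
    lr = 0.5 * (1.0 - t / n_iter)
    sigma = 3.0 * (1.0 - t / n_iter) + 0.5
    # Gaussian neighborhood on the 2D grid pulls nearby neurons toward x.
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)

# Classification: each sample gets the index of its winning neuron (1..64),
# which is what gets colored by the SOM 2D neuron color map.
winning = np.argmin(
    np.linalg.norm(samples[:, None, :] - weights[None, :, :], axis=2), axis=1) + 1
print(winning.shape)
```

    The resulting `winning` array plays the role of the SOM classification volume: one neuron number per multi-attribute sample, ready to be inspected against well data.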

    Despite all these benefits, there are interpretation challenges regarding whether there is an association between winning neurons and geological features. To address these issues, a bivariate statistical approach is proposed. To evaluate this analysis, three case scenarios are presented. In each case, the association between winning neurons and net reservoir (determined from petrophysical or well log properties) at well locations is analyzed. The results show that the statistical analysis not only aids in the identification of classification patterns; more importantly, reservoir/non-reservoir classification by classical petrophysical analysis strongly correlates with selected SOM winning neurons. Confidence in interpreted classification features is gained at the borehole, and the interpretation is readily extended away from the well as geobodies.

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Gas Hydrates, Reefs, Channel Architecture, and Fizz Gas: SOM Applications in a Variety of Geologic Settings

    Students at the University of Oklahoma have been exploring the uses of SOM techniques for the last year. This presentation will review learnings and results from a few of these research projects. Two projects have investigated the ability of SOMs to aid in the identification of pore space materials, both attempting to qualitatively identify gas hydrates and under-saturated gas reservoirs. A third study investigated individual attributes and SOMs for recognizing various carbonate facies in a pinnacle reef in the Michigan Basin. The fourth study took a deep dive into various machine learning algorithms, of which SOMs will be discussed, to understand how much machine learning can aid in the identification of deepwater channel architectures.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Fabian Rada joined Petroleum Oil and Gas Services, Inc (POGS) in January 2015 as Business Development Manager and Consultant to PEMEX. In Mexico, he has participated in several integrated oil and gas reservoir studies. He has consulted with PEMEX Activos and the G&G Technology group to apply the Paradise AI workbench and other tools. Since January 2015, he has been working with Geophysical Insights staff to provide and implement the multi-attribute analysis software Paradise in Petróleos Mexicanos (PEMEX), running a successful pilot test in Litoral Tabasco Tsimin Xux Asset. Mr. Rada began his career in the Venezuelan National Foundation for Seismological Research, where he participated in several geophysical projects, including seismic and gravity data for micro zonation surveys. He then joined China National Petroleum Corporation (CNPC) as QC Geophysicist until he became the Chief Geophysicist in the QA/QC Department. Then, he transitioned to a subsidiary of Petróleos de Venezuela (PDVSA), as a member of the QA/QC and Chief of Potential Field Methods section. Mr. Rada has also participated in processing land seismic data and marine seismic/gravity acquisition surveys. Mr. Rada earned a B.S. in Geophysics from the Central University of Venezuela.

    Hal Green, Director, Marketing & Business Development - Geophysical Insights

    Introduction to Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in Machine Learning (ML) are transforming seismic analysis. Using a combination of machine learning and deep learning applications, geoscientists apply Paradise to extract greater insights from seismic and well data for these and other objectives:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Overlay fault images on stratigraphic analysis

    The brief introduction will orient you with the technology and examples of how machine learning is being applied to automate interpretation while generating new insights in the data.

    Sarah Stanley
    Senior Geoscientist and Lead Trainer

    Sarah Stanley joined Geophysical Insights in October, 2017 as a geoscience consultant, and became a full-time employee July 2018. Prior to Geophysical Insights, Sarah was employed by IHS Markit in various leadership positions from 2011 to her retirement in August 2017, including Director US Operations Training and Certification, the Operational Governance Team, and, prior to February 2013, Director of IHS Kingdom Training. Sarah joined SMT in May, 2002, and was the Director of Training for SMT until IHS Markit’s acquisition in 2011.

    Prior to joining SMT Sarah was employed by GeoQuest, a subdivision of Schlumberger, from 1998 to 2002. Sarah was also Director of the Geoscience Technology Training Center, North Harris College from 1995 to 1998, and served as a voluntary advisor on geoscience training centers to various geological societies. Sarah has over 37 years of industry experience and has worked as a petroleum geoscientist in various domestic and international plays since August of 1981. Her interpretation experience includes tight gas sands, coalbed methane, international exploration, and unconventional resources.

    Sarah holds a Bachelor’s of Science degree with majors in Biology and General Science and minor in Earth Science, a Master’s of Arts in Education and Master’s of Science in Geology from Ball State University, Muncie, Indiana. Sarah is both a Certified Petroleum Geologist, and a Registered Geologist with the State of Texas. Sarah holds teaching credentials in both Indiana and Texas.

    Sarah is a member of the Houston Geological Society and the American Association of Petroleum Geologists, where she currently serves in the AAPG House of Delegates. Sarah is a recipient of the AAPG Special Award, the AAPG House of Delegates Long Service Award, and the HGS President’s Award for her work in advancing training for petroleum geoscientists. She has served on the AAPG Continuing Education Committee and was Chairman of the AAPG Technical Training Center Committee. Sarah has also served as Secretary of the HGS and served two years as Editor for the AAPG Division of Professional Affairs Correlator.

    Dr. Tom Smith
    President & CEO

    Dr. Tom Smith received a BS and MS degree in Geology from Iowa State University. His graduate research focused on a shallow refraction investigation of the Manson astrobleme. In 1971, he joined Chevron Geophysical as a processing geophysicist but resigned in 1980 to complete his doctoral studies in 3D modeling and migration at the Seismic Acoustics Lab at the University of Houston. Upon graduation with the Ph.D. in Geophysics in 1981, he started a geophysical consulting practice and taught seminars in seismic interpretation, seismic acquisition and seismic processing. Dr. Smith founded Seismic Micro-Technology in 1984 to develop PC software to support training workshops which subsequently led to development of the KINGDOM Software Suite for integrated geoscience interpretation with world-wide success.

    The Society of Exploration Geophysicists (SEG) recognized Dr. Smith’s work with the SEG Enterprise Award in 2000, and in 2010, the Geophysical Society of Houston (GSH) awarded him an Honorary Membership. Iowa State University (ISU) has recognized Dr. Smith throughout his career with the Distinguished Alumnus Lecturer Award in 1996, the Citation of Merit for National and International Recognition in 2002, and the highest alumni honor in 2015, the Distinguished Alumni Award. The University of Houston College of Natural Sciences and Mathematics recognized Dr. Smith with the 2017 Distinguished Alumni Award.

    In 2009, Dr. Smith founded Geophysical Insights, where he leads a team of geophysicists, geologists and computer scientists in developing advanced technologies for fundamental geophysical problems. The company launched the Paradise® multi-attribute analysis software in 2013, which uses Machine Learning and pattern recognition to extract greater information from seismic data.

    Dr. Smith has been a member of the SEG since 1967 and is a professional member of SEG, GSH, HGS, EAGE, SIPES, AAPG, Sigma XI, SSA and AGU. Dr. Smith served as Chairman of the SEG Foundation from 2010 to 2013. On January 25, 2016, he was recognized by the Houston Geological Society (HGS) as a geophysicist who has made significant contributions to the field of geology. He currently serves on the SEG President-Elect’s Strategy and Planning Committee and the ISU Foundation Campaign Committee for Forever True, For Iowa State.

    Carrie Laudon, Senior Geophysical Consultant - Geophysical Insights

    Applying Machine Learning Technologies in the Niobrara Formation, DJ Basin, to Quickly Produce an Integrated Structural and Stratigraphic Seismic Classification Volume Calibrated to Wells

    This study will demonstrate an automated machine learning approach for fault detection in a 3D seismic volume. The result combines Deep Learning Convolutional Neural Networks (CNN) with a conventional data pre-processing step and an image processing-based post-processing approach to produce high-quality fault attribute volumes of fault probability, fault dip magnitude, and fault dip azimuth. These volumes are then combined with instantaneous attributes in an unsupervised machine learning classification, allowing the isolation of both structural and stratigraphic features into a single 3D volume. The workflow is illustrated on a 3D seismic volume from the Denver Julesburg Basin, and a statistical analysis is used to calibrate results to well data.

    Ivan Marroquin
    Senior Research Geophysicist

    Iván Dimitri Marroquín is a 20-year veteran of data science research, consistently publishing in peer-reviewed journals and speaking at international conference meetings. Dr. Marroquín received a Ph.D. in geophysics from McGill University, where he conducted and participated in 3D seismic research projects. These projects focused on the development of interpretation techniques based on seismic attributes and seismic trace shape information to identify significant geological features or reservoir physical properties. Examples of his research work are attribute-based modeling to predict coalbed thickness and permeability zones, combining spectral analysis with coherency imagery technique to enhance interpretation of subtle geologic features, and implementing a visual-based data mining technique on clustering to match seismic trace shape variability to changes in reservoir properties.

    Dr. Marroquín has also conducted ground-breaking research on seismic facies classification and volume visualization. This led to his development of a visual-based framework that determines the optimal number of seismic facies to best reveal meaningful geologic trends in the seismic data. He proposed seismic facies classification as an alternative to data integration analysis to capture geologic information in the form of seismic facies groups. He has investigated the usefulness of mobile devices to locate, isolate, and understand the spatial relationships of important geologic features in a context-rich 3D environment. In this work, he demonstrated that mobile devices are capable of performing seismic volume visualization, facilitating the interpretation of imaged geologic features. He has shown that mobile devices will eventually allow the visual examination of seismic data anywhere and at any time.

    In 2016, Dr. Marroquín joined Geophysical Insights as a senior researcher, where his efforts have been focused on developing machine learning solutions for the oil and gas industry. For his first project, he developed a novel procedure for lithofacies classification that combines a neural network with automated machine learning methods. In parallel, he implemented a machine learning pipeline to derive cluster centers from a trained neural network. The next step in the project is to correlate lithofacies classification to the outcome of seismic facies analysis. Other research interests include the application of diverse machine learning technologies for analyzing and discerning trends and patterns in data related to the oil and gas industry.

    Dr. Jie Qi
    Research Geophysicist

    Dr. Jie Qi is a Research Geophysicist at Geophysical Insights, where he works closely with product development and geoscience consultants. His research interests include machine learning-based fault detection, seismic interpretation, pattern recognition, image processing, seismic attribute development and interpretation, and seismic facies analysis. Dr. Qi received a BS (2011) in Geoscience from the China University of Petroleum in Beijing and an MS (2013) in Geophysics from the University of Houston. He earned a Ph.D. (2017) in Geophysics from the University of Oklahoma, Norman. His experience includes work as a Research Assistant at the University of Houston (2011-2013) and the University of Oklahoma (2013-2017). In 2014, he was a summer intern with Petroleum Geo-Services (PGS), Inc., where he worked on semi-supervised seismic facies analysis. From 2017 to 2020, he served as a postdoctoral Research Associate in the Attribute-Assisted Seismic Processing and Interpretation (AASPI) consortium at the University of Oklahoma.

    Rocky R. Roden
    Senior Consulting Geophysicist

    The Relationship of Self-Organization, Geology, and Machine Learning

    Self-organization is the nonlinear formation of spatial and temporal structures, patterns, or functions in complex systems (Aschwanden et al., 2018). Simple examples of self-organization include flocks of birds, schools of fish, crystal development, the formation of snowflakes, and fractals. What these examples have in common is the appearance of structure or patterns without centralized control. Self-organizing systems are typically governed by power laws, such as the Gutenberg-Richter law of earthquake frequency and magnitude. In addition, the time frames of such systems display a characteristic self-similar (fractal) response, where earthquakes or avalanches, for example, occur over all possible time scales (Baas, 2002).
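The Gutenberg-Richter power law cited above can be made concrete with a short sketch. Everything below (the synthetic magnitude catalog, the true b-value of 1.0, and the least-squares fitting method) is an illustrative assumption, not data from this abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
b_true = 1.0

# Gutenberg-Richter: log10 N(>=M) = a - b*M, so magnitudes above a
# completeness threshold Mc = 3.0 follow an exponential distribution
# with rate b * ln(10)
mags = 3.0 + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=5000)

# cumulative counts N(>=M) on a grid of magnitude thresholds
m_grid = np.arange(3.0, 6.0, 0.1)
counts = np.array([(mags >= m).sum() for m in m_grid])

# least-squares fit of log10 N = a - b*M recovers the b-value
mask = counts > 0
slope, intercept = np.polyfit(m_grid[mask], np.log10(counts[mask]), 1)
b_est = -slope  # should be close to b_true
```

For a real earthquake catalog, maximum-likelihood estimation of b is usually preferred over a least-squares fit to cumulative counts, but the straight line on a log-linear plot is the signature of the power law.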

    Nonlinear dynamic systems and ordered structures in the earth are well known, have been studied for centuries, and can appear as sedimentary features, layered and folded structures, stratigraphic formations, diapirs, eolian dune systems, channelized fluvial and deltaic systems, and many more (Budd et al., 2014; Dietrich and Jacob, 2018). Each of these geologic processes and features exhibits patterns through the action of undirected local dynamics, a behavior generally termed “self-organization” (Paola, 2014).

    Artificial intelligence, and specifically neural networks, can exhibit and reveal self-organization characteristics. The interest in applying neural networks stems from the fact that they are universal approximators for various kinds of nonlinear dynamical systems of arbitrary complexity (Pessa, 2008). A special class of artificial neural networks is aptly named the self-organizing map (SOM) (Kohonen, 1982). SOMs have been found to identify significant organizational structure, in the form of clusters, from seismic attributes that relate to geologic features (Strecker and Uden, 2002; Coleou et al., 2003; de Matos, 2006; Roy et al., 2013; Roden et al., 2015; Zhao et al., 2016; Roden et al., 2017; Zhao et al., 2017; Roden and Chen, 2017; Sacrey and Roden, 2018; Leal et al., 2019; Hussein et al., 2020; Hardage et al., 2020; Manauchehri et al., 2020). As a consequence, the SOM is an excellent machine learning approach for using seismic attributes to identify self-organization features and define natural geologic patterns not easily seen, or not seen at all, in the data.
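As a rough sketch of the clustering behavior described above, a minimal one-dimensional SOM can be trained on synthetic two-attribute samples. The data, topology (8 neurons in a line rather than a 2-D map), and hyperparameters below are illustrative assumptions, not the implementation used in Paradise:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "multi-attribute samples": two clusters in a 2-attribute space,
# standing in for, e.g., two distinct seismic facies
cluster_a = rng.normal(loc=[-1.0, -1.0], scale=0.2, size=(200, 2))
cluster_b = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(200, 2))
data = np.vstack([cluster_a, cluster_b])

# a 1-D SOM with 8 neurons (real applications use 2-D topologies, e.g. 8x8)
n_neurons = 8
weights = rng.normal(size=(n_neurons, 2))
positions = np.arange(n_neurons)

n_epochs = 50
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)            # decaying learning rate
    sigma = 3.0 * (1 - epoch / n_epochs) + 0.5   # decaying neighborhood radius
    for x in rng.permutation(data):
        # best-matching unit (BMU): neuron closest to the sample
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Gaussian neighborhood on the map pulls nearby neurons along
        h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)

# after training, the two clusters should win disjoint sets of neurons,
# i.e. the map has "self-organized" around the natural data clusters
wins_a = {int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in cluster_a}
wins_b = {int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in cluster_b}
```

In an interpretation workflow, each winning neuron is assigned a color from a 2D color map, so disjoint winning-neuron sets appear as distinct colored regions in the classified seismic volume.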

    Rocky R. Roden
    Senior Consulting Geophysicist

    Rocky R. Roden started his own consulting company, Rocky Ridge Resources Inc. in 2003 and works with several oil companies on technical and prospect evaluation issues. He is also a principal in the Rose and Associates DHI Risk Analysis Consortium and was Chief Consulting Geophysicist with Seismic Micro-technology. Rocky is a proven oil finder with 37 years in the industry, gaining extensive knowledge of modern geoscience technical approaches.

    Rocky holds a BS in Oceanographic Technology-Geology from Lamar University and an MS in Geological and Geophysical Oceanography from Texas A&M University. As Chief Geophysicist and Director of Applied Technology for Repsol-YPF, he advised corporate officers, geoscientists, and managers on interpretation, strategy, and technical analysis for exploration and development in offices in the U.S., Argentina, Spain, Egypt, Bolivia, Ecuador, Peru, Brazil, Venezuela, Malaysia, and Indonesia. He has been involved in the technical and economic evaluation of Gulf of Mexico lease sales, farmouts worldwide, and bid rounds in South America, Europe, and the Far East. Previous work experience includes exploration and development at Maxus Energy, Pogo Producing, Decca Survey, and Texaco. Rocky is a member of SEG, AAPG, HGS, GSH, EAGE, and SIPES; he is also a past Chairman of The Leading Edge Editorial Board.

    Bob A. Hardage

    Bob A. Hardage received a PhD in physics from Oklahoma State University. His thesis work focused on high-velocity micro-meteoroid impact on space vehicles, which required trips to Goddard Space Flight Center to do finite-difference modeling on dedicated computers. Upon completing his university studies, he worked at Phillips Petroleum Company for 23 years and was Exploration Manager for Asia and Latin America when he left Phillips. He moved to Western Atlas and worked for 3 years as Vice President of Geophysical Development and Marketing. He then established a multicomponent seismic research laboratory at the Bureau of Economic Geology and served The University of Texas at Austin as a Senior Research Scientist for 28 years. He has published books on VSP, cross-well profiling, seismic stratigraphy, and multicomponent seismic technology. He was the first person to serve 6 years on the Board of Directors of the Society of Exploration Geophysicists (SEG). His Board service was as SEG Editor (2 years), followed by 1-year terms as First VP, President Elect, President, and Past President. SEG has awarded him a Special Commendation, Life Membership, and Honorary Membership. He wrote the AAPG Explorer column on geophysics for 6 years. AAPG honored him with a Distinguished Service award for promoting geophysics among the geological community.

    Bob A. Hardage

    Investigating the Internal Fabric of VSP data with Attribute Analysis and Unsupervised Machine Learning

    Examination of vertical seismic profile (VSP) data with unsupervised machine learning technology is a rigorous way to compare the fabric of down-going, illuminating, P and S wavefields with the fabric of up-going reflections and interbed multiples created by these wavefields. This concept is introduced in this paper by applying unsupervised learning to VSP data to better understand the physics of P and S reflection seismology. The zero-offset VSP data used in this investigation were acquired in a hard-rock, fast-velocity environment that caused the shallowest 2 or 3 geophones to be inside the near-field radiation zone of a vertical-vibrator baseplate. This study shows how to use instantaneous attributes to backtrack down-going direct-P and direct-S illuminating wavelets to the vibrator baseplate inside the near-field zone. This backtracking confirms that the points-of-origin of direct-P and direct-S are identical. The investigation then applies principal component analysis (PCA) to VSP data and shows that direct-S and direct-P wavefields that are created simultaneously at a vertical-vibrator baseplate have the same dominant principal components. A self-organizing map (SOM) approach is then taken to illustrate how unsupervised machine learning describes the fabric of down-going and up-going events embedded in vertical-geophone VSP data. These SOM results show that a small number of specific neurons build the down-going direct-P illuminating wavefield, and another small group of neurons build up-going P primary reflections and early-arriving down-going P multiples. The internal attribute fabric of these key down-going and up-going neurons is then compared to expose their similarities and differences. This initial study indicates that unsupervised machine learning, when applied to VSP data, is a powerful tool for understanding the physics of seismic reflectivity at a prospect.
This research strategy of analyzing VSP data with unsupervised machine learning will now expand to horizontal-geophone VSP data.
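The PCA step described in this abstract can be sketched in a few lines. The three synthetic "attributes" below (two strongly correlated, one independent) are illustrative stand-ins for real VSP attribute samples, not the study's data; the point is that the eigenvalues of the attribute correlation matrix reveal how many components dominate:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic stand-in for multi-attribute samples: three attributes,
# two of which carry largely redundant information
n = 1000
base = rng.normal(size=n)
attrs = np.column_stack([
    base + 0.1 * rng.normal(size=n),        # attribute 1
    2.0 * base + 0.1 * rng.normal(size=n),  # attribute 2, redundant with 1
    rng.normal(size=n),                     # attribute 3, independent
])

# PCA via eigendecomposition of the correlation matrix
# (standardize first so attribute scales do not dominate)
z = (attrs - attrs.mean(axis=0)) / attrs.std(axis=0)
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigh: ascending order
order = np.argsort(eigvals)[::-1]            # sort descending
explained = eigvals[order] / eigvals.sum()   # fraction of variance per PC

# the two redundant attributes collapse into one dominant component,
# so the first PC explains roughly two thirds of the total variance
```

On real multi-attribute data, the same eigendecomposition indicates how many attributes carry independent information, which is how PCA guides the selection of inputs to a SOM.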

    Tom Smith
    President and CEO, Geophysical Insights

    Machine Learning for Incomplete Geoscientists

    This presentation covers big-picture machine learning buzz words with humor and unassailable frankness. The goal of the material is for every geoscientist to gain confidence in these important concepts and how they add to our well-established practices, particularly seismic interpretation. Presentation topics include a machine learning historical perspective, what makes it different, a fish factory, Shazam, comparison of supervised and unsupervised machine learning methods with examples, tuning thickness, deep learning, hard/soft attribute spaces, multi-attribute samples, and several interpretation examples. After the presentation, you may not know how to run machine learning algorithms, but you should be able to appreciate their value and avoid some of their limitations.

    Deborah Sacrey
    Owner, Auburn Energy

    Deborah is a geologist/geophysicist with 44 years of oil and gas exploration experience in Texas, Louisiana Gulf Coast and Mid-Continent areas of the US. She received her degree in Geology from the University of Oklahoma in 1976 and immediately started working for Gulf Oil in their Oklahoma City offices.

    She started her own company, Auburn Energy, in 1990 and built her first geophysical workstation using Kingdom software in 1996. For 18 years she helped SMT/IHS develop and test the Kingdom software. She specializes in 2D and 3D interpretation for clients in the US and internationally. For the past nine years she has been part of a team working to bring the power of multi-attribute neural analysis of seismic data to the geoscience public, guided by Dr. Tom Smith, founder of SMT. She has become an expert in the use of the Paradise software and has made seven discoveries for clients using multi-attribute neural analysis.

    Deborah has been very active in the geological community. She is past national President of SIPES (Society of Independent Professional Earth Scientists), past President of the Division of Professional Affairs of AAPG (American Association of Petroleum Geologists), Past Treasurer of AAPG and Past President of the Houston Geological Society. She is also Past President of the Gulf Coast Association of Geological Societies and just ended a term as one of the GCAGS representatives on the AAPG Advisory Council. Deborah is also a DPA Certified Petroleum Geologist #4014 and DPA Certified Petroleum Geophysicist #2. She belongs to AAPG, SIPES, Houston Geological Society, South Texas Geological Society and the Oklahoma City Geological Society (OCGS).

    Mike Dunn
    Senior Vice President Business Development

    Michael A. Dunn is an exploration executive with extensive global experience including the Gulf of Mexico, Central America, Australia, China and North Africa. Mr. Dunn has a proven track record of successfully executing exploration strategies built on a foundation of new and innovative technologies. Currently, Michael serves as Senior Vice President of Business Development for Geophysical Insights.

    He joined Shell in 1979 as an exploration geophysicist and party chief and held increasing levels of responsibility, including Manager of Interpretation Research. In 1997, he participated in the launch of Geokinetics, which completed an IPO on the AMEX in 2007. His extensive experience with oil companies (Shell and Woodside) and the service sector (Geokinetics and Halliburton) has provided him with a unique perspective on technology and applications in oil and gas. Michael received a B.S. in Geology from Rutgers University and an M.S. in Geophysics from the University of Chicago.

    Hal Green
    Director, Marketing & Business Development - Geophysical Insights

    Hal H. Green is a marketing executive and entrepreneur in the energy industry with more than 25 years of experience in starting and managing technology companies. He holds a B.S. in Electrical Engineering from Texas A&M University and an MBA from the University of Houston. He has invested his career at the intersection of marketing and technology, with a focus on business strategy, marketing, and effective selling practices. Mr. Green has a diverse portfolio of experience in marketing technology to the hydrocarbon supply chain – from upstream exploration through downstream refining & petrochemical. Throughout his career, Mr. Green has been a proven thought-leader and entrepreneur, while supporting several tech start-ups.

    He started his career as a process engineer in the semiconductor manufacturing industry in Dallas, Texas and later launched an engineering consulting and systems integration business. Following the sale of that business in the late 80’s, he joined Setpoint in Houston, Texas where he eventually led that company’s Manufacturing Systems business. Aspen Technology acquired Setpoint in January 1996 and Mr. Green continued as Director of Business Development for the Information Management and Polymer Business Units.

    In 2004, Mr. Green founded Advertas, a full-service marketing and public relations firm serving clients in energy and technology. In 2010, Geophysical Insights retained Advertas as their marketing firm. Dr. Tom Smith, President/CEO of Geophysical Insights, soon appointed Mr. Green as Director of Marketing and Business Development for Geophysical Insights, in which capacity he still serves today.

    Hana Kabazi
    Product Manager

    Hana Kabazi joined Geophysical Insights in October of 201, and is now one of the Product Managers for Paradise. Mrs. Kabazi has over 7 years of oil and gas experience, including 5 years at Halliburton – Landmark. During her time at Landmark she held positions as a consultant to many E&P companies, technical advisor to the QA organization, and product manager of Subsurface Mapping in DecisionSpace. Mrs. Kabazi has a B.S. in Geology from the University of Texas at Austin and an M.S. in Geology from the University of Houston.

    Dr. Carrie Laudon
    Senior Geophysical Consultant - Geophysical Insights

    Carolan (Carrie) Laudon holds a PhD in geophysics from the University of Minnesota and a BS in geology from the University of Wisconsin Eau Claire. She has been Senior Geophysical Consultant with Geophysical Insights since 2017 working with Paradise®, their machine learning platform. Prior roles include Vice President of Consulting Services and Microseismic Technology for Global Geophysical Services and 17 years with Schlumberger in technical, management and sales, starting in Alaska and including Aberdeen, Scotland, Houston, TX, Denver, CO and Reading, England. She spent five years early in her career with ARCO Alaska as a seismic interpreter for the Central North Slope exploration team.