What Interpreters Should Know About Machine Learning

By Rocky Roden | May 2020

Introduction to Machine Learning for Interpreters


● Why Machine Learning now?
● Address terminology confusion
● Types of Machine Learning
● Case studies
● Machine Learning and the “Black Box” connotation
● Machine Learning and compute power
● Future trends

 

How does Machine Learning Relate to Finding Oil and Gas?


We hear many terms today related to machine learning, artificial intelligence, and deep learning, which can be quite confusing at times. However, the focus of this webinar is: how does all this technology relate to finding oil and gas?

 

Why Use Machine Learning?


You may ask, “Why use machine learning now?” Well, machine learning is very good at helping to analyze large amounts of data simultaneously; it addresses the Big Data issue. Machine learning can certainly help determine the relationships between several types of data all at once. For example, in the interpretation process we typically look at 2D and 3D crossplots. What happens when I have 5, 10, or even 20 different types of data? How do they relate, or not relate, to each other? Machine learning can help us in this regard. Machine learning can also help discover nonlinearities in a lot of the solutions we use now in the interpretation process. We quite often make linear assumptions about different elements of the Earth when in reality they are very complicated, and machine learning can help us understand these nonlinearities. It can certainly improve efficiency and accuracy over time, and ultimately it can help automate many of these interpretive processes, helping us to be more effective and efficient overall. Of course, the ultimate goal of machine learning is to help interpreters reveal geological features, properties, or trends that are very difficult to see or interpret in our data, or that cannot be seen with conventional approaches. Ultimately, it is about reducing the risk in drilling for oil and gas.

 

Challenges and Opportunities for Machine Learning in the Geosciences


Geoscience is deeply grounded in physical laws and principles, developed by many of the people pictured on the right over multiple centuries of systematic research. Geoscience features have space and time relationships, are highly multivariate, follow many non-linear trends, have non-stationary characteristics, and often involve rare but significant events. The geoscience data itself can also have challenges, including multiple resolutions, varying degrees of noise, incompleteness, sample-size issues, and improperly processed data, just to name a few.

Machine Learning in Geosciences


The questions are: How can geoscience interpreters ensure that the established bounds and rigorous theories, developed over centuries, are not violated by machine learning? And by the same token, how can we advance our geoscience interpretation, especially with machine learning, if we are restricted by the established practices of the past? Well, the answer is pretty easy. In the oil and gas industry, the proof is what is found by the drill bit. That ultimately tells us the answers. It is obvious we don’t have all of the answers yet; if we did, no one would ever drill a dry hole.

 

Data Science vs Big Data vs Data Analytics


Forbes indicates that data is growing so fast that by this year, 2020, data will be created at a rate of 1.7 megabytes per second for every person on Earth. That is a tremendous amount of information, and it is why the data science field has grown dramatically over the last decade. Data science encompasses anything related to preparing, examining, or analyzing large amounts of data. The Big Data issue refers to having such massive amounts of data that traditional database and software techniques no longer work; other processes are required to evaluate it. Data analytics is simply the set of approaches and techniques used to analyze all of this information.

 

What is Machine Learning?


Most people give a definition of Machine Learning credited to Arthur Samuel, who described it as the “field of study that gives computers the ability to learn without being explicitly programmed.” We have heard a lot about modern computer systems beating the world champions of chess and other games. However, many people don’t realize that Arthur Samuel developed a machine learning program at IBM that beat the checkers champion of Connecticut. This is part of the evolution of Machine Learning.

 

Categories of Data Science


This display exhibits many different categories of data science, but what is important is where Machine Learning is located among them. As shown, it is a part of AI (artificial intelligence).

 

Categories of Data Science – Artificial Intelligence


Artificial intelligence is a big and very significant category of data science. AI is any technique that enables computers to mimic human behavior. A subset of AI is Machine Learning. As defined by Arthur Samuel, it is a technique that gives computers the ability to learn without being explicitly programmed. A subset of Machine Learning is Deep Learning. That relates to the computation of multi-layer neural networks.

 

Types of Machine Learning


If you were to Google Machine Learning, you’d get something similar to the following:
Supervised Learning – It is the most widely used type of machine learning. It takes a known set of input data with known responses (descriptions or labels) and develops a model to predict an answer that is already known. Once the model is developed, it can be applied to another dataset in the hope of finding what was originally found when developing the model.

Unsupervised Learning – It is actually quite different. In unsupervised learning, there is no knowledge of previous data. The training adapts to the data, identifying natural patterns, structures, and clusters. A great example is babies. Most researchers think that newborn babies, up until about a year old, fundamentally learn by unsupervised learning. They come into this world knowing nothing other than their surroundings, the items in those surroundings, and their parents, and they learn from the relationships of the things around them.

Semi-Supervised Learning- It is really combining supervised and unsupervised learning in different ways to get better answers.

Reinforcement Learning- This is the type of algorithm that learns over time and maximizes returns based on the rewards it receives for performing certain actions.

We will focus mainly on supervised and unsupervised learning, how it’s being used, and a few examples of interpreting geology. Semi-supervised and reinforcement learning will be discussed later in the presentation.

 

Supervised and Unsupervised Machine Learning


There are many types of supervised and unsupervised learning. Listed here are some of the machine learning approaches seen and used in the geoscience community for interpretation. The left-hand list (supervised learning) contains a few very common approaches in geological applications; the approaches listed in red are non-neural-network statistical approaches, and there are neural networks as well. Under unsupervised learning there are neural networks again, different from the ones in supervised learning, along with something called dimensionality reduction, which will be discussed as it relates to unsupervised learning.

 

Non-Neural Network Machine Learning (Regression and Classification)


Among the non-neural-network supervised learning approaches are linear regression, decision trees, and random forests. These are all regression and classification approaches that help solve interpretation problems. I’ll give some examples.

 

AVO intercept and gradient computed from least-squares linear-fit line (Linear Regression) through amplitude vs Zoeppritz approximation


Being an AVO guy, one of the things that I have been doing for years, and that anyone who interprets AVO will be familiar with, is calculating intercept and gradient. Calculating how amplitudes change with offset is a very simple linear regression, depending on which linear approximation is being used. For example, with the Shuey 2-term approximation, I crossplot amplitude against the angle term and fit a least-squares line. The slope of that line is the gradient and the intercept is the zero-offset reflectivity. Those are two of the fundamental attributes used in our industry. Linear regression: very simple and very straightforward.
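
As a minimal sketch of that idea, the following Python snippet fits the Shuey 2-term relation R(θ) ≈ A + B·sin²θ to synthetic amplitudes with a least-squares line; the angles, noise level, and the true A and B values are made-up placeholders, not values from the presentation.

```python
import numpy as np

# Shuey 2-term approximation: R(theta) ~ A + B * sin^2(theta).
# Synthetic example: choose a known intercept/gradient, add noise, and
# recover them with a least-squares linear fit.
rng = np.random.default_rng(0)
A_true, B_true = 0.10, -0.25                # hypothetical intercept and gradient
theta = np.radians(np.arange(0, 41, 5))     # incidence angles, 0-40 degrees
x = np.sin(theta) ** 2                      # the regression variable
amplitude = A_true + B_true * x + rng.normal(0.0, 0.005, x.size)

B, A = np.polyfit(x, amplitude, 1)          # slope = gradient B, intercept = A
print(f"intercept A ~ {A:.3f}, gradient B ~ {B:.3f}")
```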

 

Comparison of Acoustic Impedance Inversion and Predictive Analytics to determine key reservoir properties for 4D surveys


Another example of non-neural-network supervised learning comes from ConocoPhillips. They analyzed their 4D surveys in the Gulf of Mexico and the North Sea and concluded that the acoustic impedance inversion they were using was not sufficient to see the changes in reservoir properties that they wanted to see. As a consequence, they applied Random Forests and Gradient Boosting to 4D seismic attributes and prestack data. This apparently enabled them to much better predict the changes in pressure, water saturation, and gas saturation. Again, this was done with non-neural-network supervised learning approaches.
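
To make the idea concrete, here is a hedged sketch of that kind of prediction workflow using scikit-learn; the attribute table, the target property, and the model settings are placeholders, not ConocoPhillips’ actual implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical training table: each row is a map location, the columns are
# 4D seismic attributes (amplitude change, time shift, AVO terms, ...), and
# the target is a reservoir property change such as pressure.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                                      # placeholder attributes
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] ** 2 + rng.normal(0, 0.1, 500)   # placeholder target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out locations:", round(model.score(X_test, y_test), 3))
```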

 

Machine Learning Neural Networks

Neural networks are obviously a very popular and common machine learning approach for interpreting geology and geophysics.

 

Biological Neural Network


A lot of neural networks are based on the biological neural network. This relates to the brain, which is a collection of 10 billion interconnected neurons. Each of these cells uses biochemical reactions to receive, process, and transmit information.

 

Artificial Neural Network


An Artificial Neural Network is no more than a computational simulation of a biological neural network. It is composed of a large number of highly interconnected processing elements called neurons. There are both supervised and unsupervised neural networks, and the neurons are used very differently in each; the computations are very different.

 

An Example of Supervised Learning


Here is a test for you. Look at this series of numbers. Based on these numbers, what is the missing number? Hopefully you got 81. Congratulations, you have successfully accomplished supervised learning. You looked at the series of numbers, built a model (square the number on the left to get the number on the right), applied it to the test at the bottom, and got 81. This is a typical example of supervised learning.
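
The same toy problem can be handed to a computer. In this sketch the example pairs are assumed (the slide’s actual numbers are not reproduced here), and a least-squares quadratic fit recovers the squaring rule and predicts 81 for the test input.

```python
import numpy as np

# Assumed example pairs: inputs x with known responses (labels) y = x**2.
x = np.array([2.0, 3.0, 5.0, 7.0, 8.0])
y = x ** 2

coeffs = np.polyfit(x, y, 2)        # fit y = a*x^2 + b*x + c by least squares
print(np.round(coeffs, 6))          # ~ [1. 0. 0.], i.e., the model is y = x^2
print(np.polyval(coeffs, 9.0))      # applying the model to 9 predicts 81.0
```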

 

Supervised Learning – Neural Network


How does supervised learning work? I take a known object, for example my cat, and a description of it, sometimes called labels. I develop a model that determines this is a known cat. I then take this model, apply it to another object, and it determines whether that object is a cat or not. This is fundamentally supervised learning.

 

Supervised Learning – Pros & Cons


What are the pros and cons of supervised learning? Pros: it has a very clear objective, which is why it is the most used machine learning application today; it is easy to measure the accuracy; and it is controlled. Cons: it can be labor-intensive, especially when developing the labels or descriptors; you must have enough information to build an accurate model; and it fundamentally gives you limited insights, because you are telling it what to find. It won’t find anything that you don’t tell it to find.

 

Deep Learning/Deep Neural Network – More than one hidden layer


Another aspect of supervised learning is the neural network architecture. On the left is a very simple neural network, sometimes called a shallow neural network: you have the input (the labels/descriptors), one hidden layer of neurons, and the output. On the right is a deep neural network, sometimes called deep learning. There are several hidden layers, and in this case many neurons in each layer. The most popular deep learning approach today, at the bottom, is the Convolutional Neural Network (CNN). CNNs grew out of image recognition: a CNN takes an image as input and learns weights and biases that differentiate one image from another.
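
For a sense of what such a network looks like in code, here is a minimal, hedged CNN sketch in Keras for classifying small seismic image patches; the 64x64 patch size and the five facies classes are illustrative assumptions, not values from the presentation.

```python
import tensorflow as tf
from tensorflow.keras import layers

# A small CNN: stacked convolution + pooling layers learn the weights and
# biases that differentiate one image patch from another.
model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),           # one 64x64 amplitude patch
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),     # e.g., 5 facies classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```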

 

Supervised Learning Case Study

Deep Learning – Convolutional Neural Network


In this example, there is a seismic line and I am going to identify seismic facies. What I typically do is look at several lines in a 3D survey and identify reflection patterns on those lines that I want interpreted throughout the entire dataset. From these identified patterns, the network starts to build a model that recognizes these different reflection patterns, which in many cases are associated with seismic facies.

 

Deep learning for seismic facies classification


Typically these are run on graphics processing units (GPUs). The network takes the model built from the few input lines, analyzes all of the data, and classifies the whole dataset.

 

3D Visualization of the Facies Classification


Once it’s done you have a 3D visualization of the facies you originally wanted to see in your data. This is an example of a Convolutional Neural Network for seismic facies interpretation.

 

How Many Patterns or Clusters Can You Identify?


Here is another test. How many patterns or clusters can you identify from these six objects? Look at them and think about it: shapes, fruit vs. non-fruit, color, etc. Congratulations, you have successfully accomplished unsupervised learning. This is very different: here you were looking for the natural patterns or clusters. How did you identify them?

 

 

Unsupervised Learning

How does unsupervised learning work? You take a large amount of raw data that you know nothing about and apply an unsupervised learning algorithm. It examines the data, learns from it, and identifies the patterns, clusters, or classes, then separates the data out into those different patterns. What is different about unsupervised learning is that you have to interpret those patterns, unlike supervised learning, where you knew what you were trying to find. You may or may not understand what you ultimately get out of unsupervised learning.

 

Unsupervised Learning – Pros & Cons

Here are the pros and cons of unsupervised learning. Pros: it is very fast to start, since you don’t need a lot of descriptors or labels going into the process, and it can be very disruptive; it may show you things you have never seen before or didn’t understand. Cons: it is somewhat difficult to measure the accuracy, and it quite often requires a bit more experience with the parameters. There is also another issue, something called the curse of dimensionality.

 

Curse of Dimensionality

The curse of dimensionality is a phenomenon quite often associated with unsupervised learning. So what does this mean? It refers to a phenomenon that occurs when analyzing several different types of data, in other words several dimensions, at one time. Issues come up that are not encountered in the normal 3D world that we live in: as the number of dimensions increases, the volume of the space increases so fast that the available data becomes sparse. Take for example the 1D plot seen on the bottom, showing a series of data points on a line. If you take those same points and plot them on a 2D graph, space starts to appear between the data points. On the 3D graph, on the right, there is a lot more space between the points. As the number of data types or dimensions grows, the amount of data required to analyze them accurately grows exponentially.
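
A quick numerical sketch of this sparsity: holding the number of samples fixed while adding dimensions, the average distance from each point to its nearest neighbor keeps growing. The sample count and dimensions below are arbitrary choices for illustration.

```python
import numpy as np

# Fixed number of random points in the unit cube; as dimensions are added,
# the same points spread out and their nearest neighbors get farther away.
rng = np.random.default_rng(42)
n_points = 200
for dim in (1, 2, 3, 10, 20):
    pts = rng.random((n_points, dim))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                   # ignore self-distances
    print(f"{dim:2d}D: mean nearest-neighbor distance = {d.min(axis=1).mean():.3f}")
```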

 

How to defeat the Curse of Dimensionality?

How do we defeat the curse of dimensionality? The first way is by incorporating prior knowledge. The second is to reduce the dimensionality of the data, which improves classification performance and makes interpretation feasible. Dimensionality reduction restricts the analysis to the part of the data space where your data is most dense, ignoring parts where it is sparse. One of the most common and popular approaches used today is Principal Component Analysis (PCA).

 

Principal Component Analysis (PCA)

PCA is a linear mathematical technique that reduces a large set of variables (e.g., seismic attributes) to a small set that still contains most of the variation of independent information in the large set. This means it can help us find the most prominent attributes in a dataset.

 

What is the Largest Variation in the Data?

Here is a graphical way to understand principal component analysis. If I crossplot a couple of attributes and the data looks like the graph below, what is the largest variation in the data? You can see that the largest variation or spread of the data is shown by the red line; this is the first principal component. What is the second-largest variation? With two attributes there are two principal components, and the second one is orthogonal to the first, showing the second-largest variation in the data (the green line). The eigenvector is the direction of the line showing the variance or spread in the data; the eigenvalue is the measure of that variance. How does this relate to interpreters?
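
As a small sketch of those definitions, the snippet below builds two correlated placeholder attributes and uses scikit-learn’s PCA to extract the eigenvectors (the directions of spread) and eigenvalues (the variance along each direction).

```python
import numpy as np
from sklearn.decomposition import PCA

# Two correlated "attributes", standing in for a real attribute crossplot.
rng = np.random.default_rng(1)
attr1 = rng.normal(size=300)
attr2 = 0.8 * attr1 + rng.normal(scale=0.3, size=300)
X = np.column_stack([attr1, attr2])

pca = PCA(n_components=2).fit(X)
print("eigenvectors (directions):\n", pca.components_)
print("eigenvalues (variances):  ", pca.explained_variance_)
print("fraction of total variance:", pca.explained_variance_ratio_)
```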

 

PCA in Twelve Seismic Attributes

Here is an example of principal component analysis. We have taken twelve seismic attributes and run PCA. The graph depicts the highest eigenvalue on each of the inlines in this 3D survey. Suppose I take the red bar, an inline running directly through the well of interest, and look at its PCA results. With twelve attributes there are twelve principal components. For the first principal component, the one with the highest eigenvalue, the twelve attributes are listed with their percentage contributions highlighted. Four attributes stand out, contributing over 90% of the first principal component; there is no question these four are very prominent in the data. The second principal component has three prominent attributes. Between the first and second principal components, then, seven attributes are very prominent in the dataset. Those attributes can be taken into an unsupervised approach such as Self-Organizing Maps (SOMs). The real question is: are these the attributes that will uncover what I am looking for?

 

Self-Organizing Maps (SOMs)


Self-Organizing Maps (SOMs) are an unsupervised learning approach that classifies data into clusters, categories, or patterns based on their properties. A neuron is a point that identifies a natural cluster of attributes in the data. Some of these clusters will have geologic significance; some will not, instead identifying coherent or incoherent noise and other elements in the data that we don’t care about. That is fine, as long as the right attributes are selected and the data is of reasonable quality.

 

How SOM Works


This is not too difficult. For example, suppose there is a dataset with a very specific, identified zone of interest to be analyzed, and 10 prominent attributes within that zone. I take every one of these 10 attributes in the zone of interest, which means every single data point in that zone has 10 values. I take all of those data points and place them into attribute space. Initially I was in survey space; now I have calculated these attributes and put the data points into something called attribute space, sometimes called hyperspace. Because there are 10 attributes, there are 10 dimensions, which cannot be seen visually.

The first thing I do is normalize or standardize all of those data types to the same scale. Once that is done, I place neurons, in this case 64 neurons, randomly in this attribute space. Then I instruct the Self-Organizing Map to find the patterns in the data. The data points never move; the neurons move around, with very simple math, until they identify 64 patterns in the data. Once that is done, the result is non-linearly mapped back to a neuron topology space. In other words, the 2D color map you see contains 64 hexagons, each representing a neuron, and each neuron represents a pattern in the data that contains different percentages of the 10 attributes that went into it in the first place.

This is how we take the information through the SOM classification process and go back to survey space. Each of the neurons, identified by color, now shows up in 3D survey space so interpretations can be made. I can turn any one neuron, or sets of neurons, on or off to see whatever geology those particular neurons bring out in the data.
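
Here is a minimal from-scratch sketch of that process, with random placeholder data standing in for real attribute volumes: 10 attributes, 64 neurons on an 8x8 topology, data points fixed in attribute space while the winning neuron and its grid neighbors migrate toward them.

```python
import numpy as np

# Minimal Self-Organizing Map sketch (placeholder data, not a real survey).
rng = np.random.default_rng(7)
samples = rng.normal(size=(5000, 10))                     # attribute vectors
samples = (samples - samples.mean(0)) / samples.std(0)    # normalize to one scale

grid = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
weights = rng.normal(size=(64, 10))                       # random initial neurons

n_steps = 20000
for step in range(n_steps):
    frac = step / n_steps
    lr = 0.5 * (1.0 - frac)                               # decaying learning rate
    sigma = 3.0 * (1.0 - frac) + 0.5                      # shrinking neighborhood
    x = samples[rng.integers(len(samples))]               # one random data point
    bmu = np.argmin(((weights - x) ** 2).sum(1))          # best-matching unit
    dist2 = ((grid - grid[bmu]) ** 2).sum(1)              # grid distance to winner
    h = np.exp(-dist2 / (2 * sigma ** 2))                 # neighborhood weights
    weights += lr * h[:, None] * (x - weights)            # move winner and neighbors

# Each sample is classified by its nearest neuron (its pattern).
labels = np.argmin(((samples[:, None] - weights[None]) ** 2).sum(-1), axis=1)
print("samples per neuron:", np.bincount(labels, minlength=64))
```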

 

Unsupervised Learning Case Study
Principal Component Analysis and Self-Organizing Maps
Offshore Gulf of Mexico Case Study – Class 3 AVO

This case study is from the offshore Gulf of Mexico, a very simple Class 3 AVO bright-spot setting. We are looking at a relatively shallow reservoir for amplitudes and anything that relates to DHIs, which can reduce the risk of drilling prospects in this area. Before the two discovery wells were drilled, seven wells had been drilled in this area, all on amplitudes, and all were wet or low-saturation gas.

Attributes Input for PCA

For this data’s zone of interest, 20 instantaneous attributes were run through Principal Component Analysis to find out which attributes were prominent.

Principal Component Analysis


After running these 20 seismic attributes through principal component analysis, here are the results. Across the top, the bars represent the highest eigenvalue on each of the inlines of this particular survey. Selecting the area where the red bars are located, the PCA results below are the average over those red bars. We then look at the first through fourth principal components for the highest-contributing attributes in each. The highest-contributing attributes overall came out to be these eight: Sweetness, Envelope, Instantaneous Frequency, Thin Bed, Relative Acoustic Impedance, Hilbert, Cosine of Instantaneous Phase, and Final Raw Migration. These eight attributes were placed into Self-Organizing Maps.

SOM Attributes and Analysis


The top display is a map of the amplitude at the top of the reservoir. It is a very prominent, high-amplitude event with good amplitude conformance to structure, relatively consistent in the mapped target area. The bottom display is a result from the Self-Organizing Map analysis, with low probability shown in white. Probability here is a measure of how close the data points are to the neuron that identified the pattern or cluster they belong to. For example, for a neuron that has identified a cluster, all the points that are very close to that neuron have a high probability, while the points that are far away have a low probability; they are anomalous. And what is anomalous in seismic data? DHIs, direct hydrocarbon indicators, are by definition seismic anomalies. What we have found is that quite often these low-probability anomalies from SOM analysis relate to direct hydrocarbon indicators, depending on the geological setting and the attributes being used. Comparing the displays, you will see that the low probability, in this case less than 1%, clearly defines the lateral extent of the reservoir.
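
A hedged sketch of that low-probability idea: measure each sample’s distance to its winning neuron and flag the far tail of the distance distribution as anomalous. The arrays below are random stand-ins for a trained SOM and real attribute samples.

```python
import numpy as np

# Distance from each sample to its winning neuron measures how well that
# neuron's pattern fits it; the most distant samples are "low probability."
rng = np.random.default_rng(3)
samples = rng.normal(size=(5000, 10))              # placeholder attribute vectors
weights = rng.normal(size=(64, 10))                # stand-in for trained neurons

d = np.linalg.norm(samples[:, None] - weights[None], axis=-1)
winner = d.argmin(axis=1)                          # each sample's neuron
dist = d[np.arange(len(samples)), winner]          # distance to that neuron

cutoff = np.quantile(dist, 0.99)                   # keep the farthest 1%
anomalous = dist > cutoff
print(f"{anomalous.sum()} low-probability (anomalous) samples flagged")
```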

Stacked Amplitude and SOM Analysis of Line 9411


Looking at the neurons in terms of the specific patterns they see, this is the result. There is an inline right through the middle of the field. The top display is a typical conventional amplitude line in time through that field; the bottom two displays are the SOM results. In the middle display, 25 neurons were used. The 20th and 25th neurons are identified in gray, and on the seismic line the gray represents the reservoir above the hydrocarbon contacts. The hydrocarbon contacts are identified by the reddish color of neuron #15, which identified a gas/oil contact, an oil/water contact, and something anomalous down deep that has not been drilled yet. The bottom display shows only those three neurons turned on. The dashed box on each display shows where the reservoir transitions from a hydrocarbon leg into a water leg. In the SOM analysis you can see we are getting down to one or two samples where the reservoir pinches out and goes into the water leg, a much higher resolution than you normally see on seismic data.

Stacked Amplitude and SOM Analysis of Line 3183


This is the same SOM display with a different focus: we are now looking at a strike line through the field. Again, look at the SOM results. You can see the reservoir identified by the two neurons in gray and the gas/oil contact identified by the reddish line. The SOM is very good at identifying DHI characteristics in this particular dataset.

 

Machine Learning and the “Black Box”


Machine Learning, whether you are using supervised or unsupervised methods, is relatively straightforward. In fact, many of the mathematical derivations used in seismic processing are quite involved, much more complicated than the neural network approaches used in machine learning. The difference is that there are thousands or millions of computations involved in these neural networks, and it is the inability to examine each of the iterations that creates the “black box” perception. Data goes in, there is an analysis, and data comes out with an answer; it is very difficult to understand how the network got that answer because of the sheer number of computations. Coherent and incoherent seismic noise, an insufficient amount of data, and other factors can complicate this issue. It is this “black box” phenomenon that has at times prevented people from accepting some of the results, even when the results are quite compelling.

Machine Learning and Human Behavior


In Machine Learning (neural networks) we are typically trying to mimic human behavior by using inputs and receiving outputs. From the perspective of a machine learning system, the human is the black box: how do they come up with those answers?

 

Machine Learning and Compute Power


In Machine Learning, especially with neural networks, computing power has become a factor. The typical considerations are CPU vs. GPU, high-performance computing, and the Cloud.

Machine Learning and Compute Power – CPU & GPU


CPUs (Central Processing Units), which are essential in all of our computers, are good at efficiently executing a few complex operations. While a CPU is excellent at handling one set of very complex instructions, a GPU (Graphics Processing Unit) is very good at handling many sets of very simple instructions. GPUs evolved from the gaming industry and are the preferred approach for most deep learning convolutional neural networks. Which should be used for machine learning? GPUs are typically faster than CPUs for neural networks, but they cost more. Economics, time, the size of the job, and the type of machine learning should all be considered. Often a machine learning application will already be written for a GPU or a CPU.

Machine Learning and Compute Power – HPC


Another way machine learning is computed is on HPC (High-Performance Computing) systems, sometimes called “supercomputers.” These HPC clusters consist of hundreds or thousands of compute servers that are networked together and run in parallel, boosting processing speed to deliver high-performance computing. Of course, this comes at a price.

 

Machine Learning and Compute Power – The Cloud


The Cloud refers to servers that are accessed over the Internet, along with the software and databases that run on those servers. It could be a high-performance computing system or just a single server you are using for computing; whatever it is, you can access it via the Cloud. Cloud servers are located in data centers around the world, which allows companies and users to avoid managing physical servers themselves or running software applications on their own internal machines. All of this comes at a price, obviously.

 

Machine Learning Applications Being Developed Today


The majority of machine learning approaches today have been applied to our established geoscience workflows and practices; in other words, we are trying to improve elements of the traditional workflow. But can machine learning discover something “profound” that we have not seen before?

 

Profound Results from Machine Learning
University of Ontario Study


A study was done at the University of Ontario, where researchers looked at babies in an intensive neonatal care unit. Telemetry devices were attached to the babies, with the purpose of accurately identifying when the babies were developing infections, one of the biggest worries for premature babies. The telemetry was fed into a machine learning algorithm, which could indicate that a baby would develop an infection 48 hours before any symptoms became evident. The process became so good at this that the doctors and clinicians believed what they were seeing because of its high accuracy, even though they could not explain why the answers were being provided before any symptoms appeared. Of course there is a scientific explanation, but they did not know what it was, and the method proved itself many times over. This is profound: it discovered something new and unknown.

 

Types of Machine Learning


Of these different types of machine learning, which might provide a profound step change in how interpretation is done? We have discussed supervised and unsupervised learning. Supervised learning is the most common machine learning application used in our industry. Unsupervised learning is used less, but it may provide profound step changes: it is able to produce disruptive results that have never been seen before, unlike supervised learning, where I am telling it what to find. In reality, though, the biggest advances will probably come from semi-supervised and reinforcement learning approaches.

 

Semi-Supervised Learning


Here is an example of semi-supervised learning, which is really combining supervised and unsupervised learning approaches. Suppose that, through an unsupervised learning methodology, I have identified a set of attributes that are really good for faults (e.g., coherency, curvature, etc.). Using those attributes, I run a Self-Organizing Map, an unsupervised approach, and then use those results in a Convolutional Neural Network, a deep learning approach, that identifies faults. I have used two types of machine learning, with better data going in than conventional amplitude data, to identify faults. This is a semi-supervised learning approach.
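
To illustrate the chaining, here is a hedged sketch in which a placeholder SOM classification volume is stacked with amplitude as a second input channel to a small CNN trained on labeled fault masks. All shapes, data, and labels are random stand-ins, not the workflow from the presentation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Placeholder inputs: amplitude patches, a SOM class for every sample
# (scaled to 0-1), and binary fault masks as the supervised labels.
rng = np.random.default_rng(0)
amplitude = rng.normal(size=(100, 64, 64, 1)).astype("float32")
som_class = (rng.integers(0, 64, size=(100, 64, 64, 1)) / 63.0).astype("float32")
fault_mask = rng.integers(0, 2, size=(100, 64, 64, 1)).astype("float32")

x = np.concatenate([amplitude, som_class], axis=-1)    # 2-channel input

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 2)),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(1, 1, activation="sigmoid"),         # per-pixel fault probability
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, fault_mask, epochs=1, batch_size=16, verbose=0)
```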

 

Will Machine Learning “Profoundly” Change Geoscience Interpretation?


So will machine learning really change how we do interpretation? If machine learning improves our existing interpretation workflows by giving faster and more accurate answers, that in itself is significant and maybe “profound.” It is necessary: we have to be able to do things more efficiently, in less time, and with fewer people. Over the last two years, from many workshops, sessions, and several conventions, these are the most common machine learning applications I have found for interpretation. Most are supervised approaches and a few are unsupervised. There are many more, but these are the ones I have found to be the most common.

 


We are already starting to see machine learning applications, whether supervised or unsupervised, provide higher-resolution seismic results in inversion, fault detection, and stratigraphic facies analysis.

 


We are starting to see the identification of seismic anomalies, like the direct hydrocarbon indicators shown previously. We are also seeing the isolation of various types of noise in the data, to help identify what is real and what is not.

 


As previously stated, semi-supervised and reinforcement learning methods hold great promise, and the field is shifting toward them: combining the best of the different machine learning approaches to get better answers. It is very exciting. We are at a time when we finally have the tools to do this.

 

Future of Machine Learning in Geoscience Interpretation


Given what has been shown about what interpreters should know about machine learning, I’d like to make a few predictions.

  1. Machine learning can be disruptive. It can change the way you do things and, at times, give better and more accurate answers.
  2. Artificial intelligence and machine learning will not replace geoscience interpreters. Over the years I have seen many technologies evolve, and with each evolution we hear, “This will limit the interpreter.” In reality, each one provides more capabilities and more tools for the interpreter to do a better job.
  3. However, in the next five or ten years, geoscience interpreters who do not use machine learning will be replaced by those who do.
Most Popular Papers
Case Study: An Integrated Machine Learning-Based Fault Classification Workflow
Using machine learning to classify a 100-square-mile seismic volume in the Niobrara, geoscientists were able to interpret thin beds below ...
Case Study with Petrobras: Applying Unsupervised Multi-Attribute Machine Learning for 3D Stratigraphic Facies Classification in a Carbonate Field, Offshore Brazil
Using machine learning to classify a 100-square-mile seismic volume in the Niobrara, geoscientists were able to interpret thin beds below ...
Applying Machine Learning Technologies in the Niobrara Formation, DJ Basin, to Quickly Produce an Integrated Structural and Stratigraphic Seismic Classification Volume Calibrated to Wells
Carolan Laudon, Jie Qi, Yin-Kai Wang, Geophysical Research, LLC (d/b/a Geophysical Insights), University of Houston | Published with permission: Unconventional Resources ...
Shopping Cart
  • Registration confirmation will be emailed to you.

  • We're committed to your privacy. Geophysical Insights uses the information you provide to us to contact you about our relevant content, events, and products. You may unsubscribe from these communications at any time. For more information, check out our Privacy Policy

    Deborah SacreyOwner - Auburn Energy

    How to Use Paradise to Interpret Carbonate Reservoirs

    The key to understanding Carbonate reservoirs in Paradise start with good synthetic ties to the wavelet data. If one is not tied correctly, then it will be very east to mis-interpret the neurons as reservoir, when they are not. Secondly, the workflow should utilize Principal Component Analysis to better understand the zone of interest and the attributes to use in the SOM analysis. An important part to interpretation is understanding “Halo” and “Trailing” neurons as part of the stack around a reservoir or potential reservoir. Usually, one sees this phenomenon around deep, pressured gas reservoirs, but it can happen in shallow reservoirs as well. Two case studies are presented to emphasize the importance of looking for halo or trailing patterns around good reservoirs. One is a deep Edwards example in south central Texas, and the other a shallow oil reservoir in the Austin Chalk in the San Antonio area. Another way to help enhance carbonate reservoirs is through Spectral Decomposition. A case history is shown in the Smackover in Alabama to highlight and focus on an oolitic shoal reservoir which tunes at a specific frequency in the best wells. Not all carbonate porosity is at the top of the deposition. A case history will be discussed looking for porosity in the center portion of a reef in west Texas. And finally, one of the most difficult interpretation challenges in the carbonate spectrum is correctly mapping the interface between two carbonate layers. A simple technique is shown to help with that dilemma, by using few attributes and a low-topology count to understand regional depositional sequences. This example is from the Delaware Basin in southeastern New Mexico.

    Dr. Carrie LaudonSenior Geophysical Consultant

    Applying Unsupervised Multi-Attribute Machine Learning for 3D Stratigraphic Facies Classification in a Carbonate Field, Offshore Brazil

    We present results of a multi-attribute, machine learning study over a pre-salt carbonate field in the Santos Basin, offshore Brazil. These results test the accuracy and potential of Self-organizing maps (SOM) for stratigraphic facies delineation. The study area has an existing detailed geological facies model containing predominantly reef facies in an elongated structure.

    Carrie LaudonSenior Geophysical Consultant - Geophysical Insights

    Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in Machine Learning (ML) are transforming seismic analysis. Using these new tools, geoscientists can accomplish the following quickly and effectively:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Generate seismic volumes that capture structural and stratigraphic details

    Join us for a ‘Lunch & Learn’ sessions daily at 11:00 where Dr. Carolan (“Carrie”) Laudon will review the theory and results of applying a combination of machine learning tools to obtain the above results.  A detailed agenda follows.

    Agenda

    Automated Fault Detection using 3D CNN Deep Learning

    • Deep learning fault detection
    • Synthetic models
    • Fault image enhancement
    • Semi-supervised learning for visualization
    • Application results
      • Normal faults
      • Fault/fracture trends in complex reservoirs

    Demo of Paradise Fault Detection Thoughtflow®

    Stratigraphic analysis using machine learning with fault detection

    • Attribute Selection using Principal Component Analysis (PCA)
    • Multi-Attribute Classification using Self-Organizing Maps (SOM)
    • Case studies – stratigraphic analysis and fault detection
      • Fault-karst and fracture examples, China
      • Niobrara – Stratigraphic analysis and thin beds, faults
    Thomas ChaparroSenior Geophysicist - Geophysical Insights

    Paradise: A Day in The Life of the Geoscientist

    Over the last several years, the industry has invested heavily in Machine Learning (ML) for better predictions and automation. Dramatic results have been realized in exploration, field development, and production optimization. However, many of these applications have been single use ‘point’ solutions. There is a growing body of evidence that seismic analysis is best served using a combination of ML tools for a specific objective, referred to as ML Orchestration. This talk demonstrates how the Paradise AI workbench applications are used in an integrated workflow to achieve superior results than traditional interpretation methods or single-purpose ML products. Using examples from combining ML-based Fault Detection and Stratigraphic Analysis, the talk will show how ML orchestration produces value for exploration and field development by the interpreter leveraging ML orchestration.

    Aldrin RondonSenior Geophysical Engineer - Dragon Oil

    Machine Learning Fault Detection: A Case Study

    An innovative Fault Pattern Detection Methodology has been carried out using a combination of Machine Learning Techniques to produce a seismic volume suitable for fault interpretation in a structurally and stratigraphic complex field. Through theory and results, the main objective was to demonstrate that a combination of ML tools can generate superior results in comparison with traditional attribute extraction and data manipulation through conventional algorithms. The ML technologies applied are a supervised, deep learning, fault classification followed by an unsupervised, multi-attribute classification combining fault probability and instantaneous attributes.

    Thomas ChaparroSenior Geophysicist - Geophysical Insights

    Thomas Chaparro is a Senior Geophysicist who specializes in training and preparing AI-based workflows. Thomas also has experience as a processing geophysicist and 2D and 3D seismic data processing. He has participated in projects in the Gulf of Mexico, offshore Africa, the North Sea, Australia, Alaska, and Brazil.

    Thomas holds a bachelor’s degree in Geology from Northern Arizona University and a Master’s in Geophysics from the University of California, San Diego. His research focus was computational geophysics and seismic anisotropy.

    Aldrin RondonSenior Geophysical Engineer - Dragon Oil

    Bachelor’s Degree in Geophysical Engineering from Central University in Venezuela with a specialization in Reservoir Characterization from Simon Bolivar University.

    Over 20 years exploration and development geophysical experience with extensive 2D and 3D seismic interpretation including acquisition and processing.

    Aldrin spent his formative years working on exploration activity in PDVSA Venezuela followed by a period working for a major international consultant company in the Gulf of Mexico (Landmark, Halliburton) as a G&G consultant. Latterly he was working at Helix in Scotland, UK on producing assets in the Central and South North Sea.  From 2007 to 2021, he has been working as a Senior Seismic Interpreter in Dubai involved in different dedicated development projects in the Caspian Sea.

    Deborah SacreyOwner - Auburn Energy

    How to Use Paradise to Interpret Clastic Reservoirs

    The key to understanding Clastic reservoirs in Paradise starts with good synthetic ties to the wavelet data. If one is not tied correctly, then it will be easy to mis-interpret the neurons as reservoir, whin they are not. Secondly, the workflow should utilize Principal Component Analysis to better understand the zone of interest and the attributes to use in the SOM analysis. An important part to interpretation is understanding “Halo” and “Trailing” neurons as part of the stack around a reservoir or potential reservoir. Deep, high-pressured reservoirs often “leak” or have vertical percolation into the seal. This changes the rock properties enough in the seal to create a “halo” effect in SOM. Likewise, the frequency changes of the seismic can cause a subtle “dim-out”, not necessarily observable in the wavelet data, but enough to create a different pattern in the Earth in terms of these rock property changes. Case histories for Halo and trailing neural information include deep, pressured, Chris R reservoir in Southern Louisiana, Frio pay in Southeast Texas and AVO properties in the Yegua of Wharton County. Additional case histories to highlight interpretation include thin-bed pays in Brazoria County, including updated information using CNN fault skeletonization. Continuing the process of interpretation is showing a case history in Wharton County on using Low Probability to help explore Wilcox reservoirs. Lastly, a look at using Paradise to help find sweet spots in unconventional reservoirs like the Eagle Ford, a case study provided by Patricia Santigrossi.

    Mike DunnSr. Vice President of Business Development

    Machine Learning in the Cloud

    Machine Learning in the Cloud will address the capabilities of the Paradise AI Workbench, featuring on-demand access enabled by the flexible hardware and storage facilities available on Amazon Web Services (AWS) and other commercial cloud services. Like the on-premise instance, Paradise On-Demand provides guided workflows to address many geologic challenges and investigations. The presentation will show how geoscientists can accomplish the following workflows quickly and effectively using guided ThoughtFlows® in Paradise:

    • Identify and calibrate detailed stratigraphy using seismic and well logs
    • Classify seismic facies
    • Detect faults automatically
    • Distinguish thin beds below conventional tuning
    • Interpret Direct Hydrocarbon Indicators
    • Estimate reserves/resources

    Attend the talk to see how ML applications are combined through a process called "Machine Learning Orchestration," proven to extract more from seismic and well data than traditional means.

    Sarah Stanley
    Senior Geoscientist

    Stratton Field Case Study – New Solutions to Old Problems

    The Oligocene Frio gas-producing Stratton Field in south Texas is a well-known field. Like many onshore fields, the productive sand channels are difficult to identify using conventional seismic data. However, the productive channels can be easily defined by employing several Paradise modules, including unsupervised machine learning, Principal Component Analysis, Self-Organizing Maps, 3D visualization, and the new Well Log Cross Section and Well Log Crossplot tools. The Well Log Cross Section tool generates extracted seismic data, including SOMs, along the Cross Section boreholes and logs. This extraction process enables the interpreter to accurately identify the SOM neurons associated with pay versus neurons associated with non-pay intervals. The reservoir neurons can be visualized throughout the field in the Paradise 3D Viewer, with Geobodies generated from the neurons. With this ThoughtFlow®, pay intervals previously difficult to see in conventional seismic can finally be visualized and tied back to the well data.

    Laura Cuttill
    Practice Lead, Advertas

    Young Professionals – Managing Your Personal Brand to Level-up Your Career

    No matter where you are in your career, your online “personal brand” has a huge impact on providing opportunity for prospective jobs and garnering the respect and visibility needed for advancement. While geoscientists tackle ambitious projects, publish in technical papers, and work hard to advance their careers, often, the value of these isn’t realized beyond their immediate professional circle. Learn how to…

    • - Communicate who you are to high-level executives in exploration and development
    • - Avoid common social media pitfalls
    • - Optimize your online presence to best garner attention from recruiters
    • - Stay relevant
    • - Create content of interest
    • - Establish yourself as a thought leader in your given area of specialization
    Laura Cuttill
    Practice Lead, Advertas

    As a 20-year marketing veteran marketing in oil and gas and serial entrepreneur, Laura has deep experience in bringing technology products to market and growing sales pipeline. Armed with a marketing degree from Texas A&M, she began her career doing technical writing for Schlumberger and ExxonMobil in 2001. She started Advertas as a co-founder in 2004 and began to leverage her upstream experience in marketing. In 2006, she co-founded the cyber-security software company, 2FA Technology. After growing 2FA from a startup to 75% market share in target industries, and the subsequent sale of the company, she returned to Advertas to continue working toward the success of her clients, such as Geophysical Insights. Today, she guides strategy for large-scale marketing programs, manages project execution, cultivates relationships with industry media, and advocates for data-driven, account-based marketing practices.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Statistical Calibration of SOM results with Well Log Data (Case Study)

    The first stage of the proposed statistical method has proven to be very useful in testing whether or not there is a relationship between two qualitative variables (nominal or ordinal) or categorical quantitative variables, in the fields of health and social sciences. Its application in the oil industry allows geoscientists not only to test dependence between discrete variables, but to measure their degree of correlation (weak, moderate or strong). This article shows its application to reveal the relationship between a SOM classification volume of a set of nine seismic attributes (whose vertical sampling interval is three meters) and different well data (sedimentary facies, Net Reservoir, and effective porosity grouped by ranges). The data were prepared to construct the contingency tables, where the dependent (response) variable and independent (explanatory) variable were defined, the observed frequencies were obtained, and the frequencies that would be expected if the variables were independent were calculated and then the difference between the two magnitudes was studied using the contrast statistic called Chi-Square. The second stage implies the calibration of the SOM volume extracted along the wellbore path through statistical analysis of the petrophysical properties VCL and PHIE, and SW for each neuron, which allowed to identify the neurons with the best petrophysical values in a carbonate reservoir.

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Heather Bedle received a B.S. (1999) in physics from Wake Forest University, and then worked as a systems engineer in the defense industry. She later received a M.S. (2005) and a Ph. D. (2008) degree from Northwestern University. After graduate school, she joined Chevron and worked as both a development geologist and geophysicist in the Gulf of Mexico before joining Chevron’s Energy Technology Company Unit in Houston, TX. In this position, she worked with the Rock Physics from Seismic team analyzing global assets in Chevron’s portfolio. Dr. Bedle is currently an assistant professor of applied geophysics at the University of Oklahoma’s School of Geosciences. She joined OU in 2018, after instructing at the University of Houston for two years. Dr. Bedle and her student research team at OU primarily work with seismic reflection data, using advanced techniques such as machine learning, attribute analysis, and rock physics to reveal additional structural, stratigraphic and tectonic insights of the subsurface.

    Jie Qi
    Research Geophysicist

    An Integrated Fault Detection Workflow

    Seismic fault detection is one of the top critical procedures in seismic interpretation. Identifying faults are significant for characterizing and finding the potential oil and gas reservoirs. Seismic amplitude data exhibiting good resolution and a high signal-to-noise ratio are key to identifying structural discontinuities using seismic attributes or machine learning techniques, which in turn serve as input for automatic fault extraction. Deep learning Convolutional Neural Networks (CNN) performs well on fault detection without any human-computer interactive work. This study shows an integrated CNN-based fault detection workflow to construct fault images that are sufficiently smooth for subsequent fault automatic extraction. The objectives were to suppress noise or stratigraphic anomalies subparallel to reflector dip, and sharpen fault and other discontinuities that cut reflectors, preconditioning the fault images for subsequent automatic extraction. A 2D continuous wavelet transform-based acquisition footprint suppression method was applied time slice by time slice to suppress wavenumber components to avoid interpreting the acquisition footprint as artifacts by the CNN fault detection method. To further suppress cross-cutting noise as well as sharpen fault edges, a principal component edge-preserving structure-oriented filter is also applied. The conditioned amplitude volume is then fed to a pre-trained CNN model to compute fault probability. Finally, a Laplacian of Gaussian filter is applied to the original CNN fault probability to enhance fault images. The resulting fault probability volume is favorable with respect to traditional human-interpreter generated on vertical slices through the seismic amplitude volume.

    Dr. Jie Qi
    Research Geophysicist

    An integrated machine learning-based fault classification workflow

    We introduce an integrated machine learning-based fault classification workflow that creates fault component classification volumes that greatly reduces the burden on the human interpreter. We first compute a 3D fault probability volume from pre-conditioned seismic amplitude data using a 3D convolutional neural network (CNN). However, the resulting “fault probability” volume delineates other non-fault edges such as angular unconformities, the base of mass transport complexes, and noise such as acquisition footprint. We find that image processing-based fault discontinuity enhancement and skeletonization methods can enhance the fault discontinuities and suppress many of the non-fault discontinuities. Although each fault is characterized by its dip and azimuth, these two properties are discontinuous at azimuths of φ=±180° and for near vertical faults for azimuths φ and φ+180° requiring them to be parameterized as four continuous geodetic fault components. These four fault components as well as the fault probability can then be fed into a self-organizing map (SOM) to generate fault component classification. We find that the final classification result can segment fault sets trending in interpreter-defined orientations and minimize the impact of stratigraphy and noise by selecting different neurons from the SOM 2D neuron color map.

    Ivan Marroquin
    Senior Research Geophysicist

    Connecting Multi-attribute Classification to Reservoir Properties

    Interpreters rely on seismic pattern changes to identify and map geologic features of importance. The ability to recognize such features depends on the seismic resolution and characteristics of seismic waveforms. With the advancement of machine learning algorithms, new methods for interpreting seismic data are being developed. Among these algorithms, self-organizing maps (SOM) provides a different approach to extract geological information from a set of seismic attributes.

    SOM approximates the input patterns by a finite set of processing neurons arranged in a regular 2D grid of map nodes, classifying multi-attribute seismic samples into natural clusters in an unsupervised manner. Because the process is unbiased, the classifications can contain both geological information and coherent noise; seismic interpretation thus evolves into broader geologic perspectives. Additionally, SOM partitions multi-attribute samples without a priori information (e.g., well data) to guide the process.

    The SOM output is a new seismic attribute volume in which geologic information is captured from the classification into winning neurons. Implicit and useful geological information is uncovered through interactive visual inspection of the winning neuron classifications. By doing so, interpreters build a classification model that helps them gain insight into complex relationships between attribute patterns and geological features.
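
    A toy implementation helps make the notions of neurons, training, and winning neurons concrete. The sketch below assumes z-scored multi-attribute samples and uses simple linear decay schedules; all names are illustrative, and production SOM implementations differ in many details.

        import numpy as np

        def train_som(samples, rows=8, cols=8, epochs=10, lr0=0.5, sigma0=3.0):
            """Train a toy self-organizing map. `samples` is an
            (n_samples, n_attributes) array of z-scored multi-attribute
            seismic samples. Returns neuron weights on a rows x cols grid."""
            rng = np.random.default_rng(0)
            n, d = samples.shape
            weights = rng.normal(size=(rows, cols, d))
            grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                        indexing="ij"), axis=-1)
            steps = epochs * n
            for t, idx in enumerate(rng.integers(0, n, size=steps)):
                x = samples[idx]
                frac = t / steps
                lr = lr0 * (1.0 - frac)              # decaying learning rate
                sigma = sigma0 * (1.0 - frac) + 0.5  # shrinking neighborhood
                # Winning neuron: the closest weight vector in attribute space.
                dist = np.linalg.norm(weights - x, axis=-1)
                winner = np.unravel_index(np.argmin(dist), dist.shape)
                # Pull the winner and its grid neighbors toward the sample.
                g = np.exp(-np.sum((grid - np.array(winner)) ** 2, axis=-1)
                           / (2.0 * sigma ** 2))
                weights += lr * g[..., None] * (x - weights)
            return weights

        def winning_neurons(samples, weights):
            """Assign each sample to its winning neuron (flattened index)."""
            dist = np.linalg.norm(weights[None] - samples[:, None, None, :],
                                  axis=-1)
            return dist.reshape(len(samples), -1).argmin(axis=1)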

    Despite all these benefits, interpretation challenges remain regarding whether there is an association between winning neurons and geological features. To address these issues, a bivariate statistical approach is proposed. To evaluate this analysis, three case scenarios are presented. In each case, the association between winning neurons and net reservoir (determined from petrophysical or well log properties) at well locations is analyzed. The results show that the statistical analysis not only aids in the identification of classification patterns but, more importantly, demonstrates that reservoir/non-reservoir classification by classical petrophysical analysis strongly correlates with selected SOM winning neurons. Confidence in interpreted classification features is gained at the borehole, and the interpretation is readily extended away from the well as geobodies.
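
    The specific bivariate statistics are detailed in the presentation itself; as a hedged illustration of the general idea, a contingency-table test of association between winning neurons and a net-reservoir flag along boreholes could be set up as follows (function and variable names are hypothetical):

        import numpy as np
        from scipy.stats import chi2_contingency

        def neuron_reservoir_association(winning_neuron, is_reservoir):
            """Chi-squared test of association between SOM winning neurons
            and a net-reservoir flag (1 = reservoir, 0 = not) derived from
            petrophysics. Both inputs are 1D arrays sampled along the
            borehole. Illustrative only; the proposed bivariate approach
            may use different statistics."""
            neurons = np.unique(winning_neuron)
            # Contingency table: one row per neuron, columns = (non-res, res).
            table = np.array(
                [[np.sum((winning_neuron == k) & (is_reservoir == 0)),
                  np.sum((winning_neuron == k) & (is_reservoir == 1))]
                 for k in neurons])
            chi2, p_value, dof, _ = chi2_contingency(table)
            return chi2, p_value, dof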

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Gas Hydrates, Reefs, Channel Architecture, and Fizz Gas: SOM Applications in a Variety of Geologic Settings

    Students at the University of Oklahoma have been exploring the uses of SOM techniques for the last year. This presentation will review learnings and results from a few of these research projects. Two projects investigated the ability of SOMs to aid in the identification of pore-space materials, attempting to qualitatively identify gas hydrates and under-saturated gas reservoirs. A third study investigated individual attributes and SOMs for recognizing various carbonate facies in a pinnacle reef in the Michigan Basin. The fourth study took a deep dive into various machine learning algorithms, of which the SOM results will be discussed, to understand how much machine learning can aid in the identification of deepwater channel architectures.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Fabian Rada joined Petroleum Oil and Gas Services, Inc. (POGS) in January 2015 as Business Development Manager and Consultant to PEMEX. In Mexico, he has participated in several integrated oil and gas reservoir studies. He has consulted with PEMEX Activos and the G&G Technology group to apply the Paradise AI workbench and other tools. Since January 2015, he has been working with Geophysical Insights staff to provide and implement the multi-attribute analysis software Paradise in Petróleos Mexicanos (PEMEX), running a successful pilot test in the Litoral Tabasco Tsimin Xux Asset. Mr. Rada began his career at the Venezuelan National Foundation for Seismological Research, where he participated in several geophysical projects, including seismic and gravity data for microzonation surveys. He then joined China National Petroleum Corporation (CNPC) as a QC Geophysicist until he became Chief Geophysicist in the QA/QC Department. He then transitioned to a subsidiary of Petróleos de Venezuela (PDVSA) as a member of the QA/QC group and Chief of the Potential Field Methods section. Mr. Rada has also participated in processing land seismic data and in marine seismic/gravity acquisition surveys. Mr. Rada earned a B.S. in Geophysics from the Central University of Venezuela.

    Hal Green
    Director, Marketing & Business Development - Geophysical Insights

    Introduction to Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in Machine Learning (ML) are transforming seismic analysis. Using a combination of machine learning and deep learning applications, geoscientists apply Paradise to extract greater insights from seismic and well data quickly and effectively for these and other objectives:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Overlay fault images on stratigraphic analysis

    This brief introduction will orient you to the technology and present examples of how machine learning is being applied to automate interpretation while generating new insights from the data.

    Sarah Stanley
    Senior Geoscientist and Lead Trainer

    Sarah Stanley joined Geophysical Insights in October 2017 as a geoscience consultant and became a full-time employee in July 2018. Prior to Geophysical Insights, Sarah was employed by IHS Markit in various leadership positions from 2011 until her retirement in August 2017, including Director of US Operations Training and Certification, a member of the Operational Governance Team, and, prior to February 2013, Director of IHS Kingdom Training. Sarah joined SMT in May 2002 and was the Director of Training for SMT until IHS Markit’s acquisition in 2011.

    Prior to joining SMT Sarah was employed by GeoQuest, a subdivision of Schlumberger, from 1998 to 2002. Sarah was also Director of the Geoscience Technology Training Center, North Harris College from 1995 to 1998, and served as a voluntary advisor on geoscience training centers to various geological societies. Sarah has over 37 years of industry experience and has worked as a petroleum geoscientist in various domestic and international plays since August of 1981. Her interpretation experience includes tight gas sands, coalbed methane, international exploration, and unconventional resources.

    Sarah holds a Bachelor’s of Science degree with majors in Biology and General Science and minor in Earth Science, a Master’s of Arts in Education and Master’s of Science in Geology from Ball State University, Muncie, Indiana. Sarah is both a Certified Petroleum Geologist, and a Registered Geologist with the State of Texas. Sarah holds teaching credentials in both Indiana and Texas.

    Sarah is a member of the Houston Geological Society and the American Association of Petroleum Geologists, where she currently serves in the AAPG House of Delegates. Sarah is a recipient of the AAPG Special Award, the AAPG House of Delegates Long Service Award, and the HGS President’s Award for her work in advancing training for petroleum geoscientists. She has served on the AAPG Continuing Education Committee and was Chairman of the AAPG Technical Training Center Committee. Sarah has also served as Secretary of the HGS and served two years as Editor for the AAPG Division of Professional Affairs Correlator.

    Dr. Tom Smith
    President & CEO

    Dr. Tom Smith received BS and MS degrees in Geology from Iowa State University. His graduate research focused on a shallow refraction investigation of the Manson astrobleme. In 1971, he joined Chevron Geophysical as a processing geophysicist but resigned in 1980 to complete his doctoral studies in 3D modeling and migration at the Seismic Acoustics Lab at the University of Houston. Upon graduating with a Ph.D. in Geophysics in 1981, he started a geophysical consulting practice and taught seminars in seismic interpretation, seismic acquisition, and seismic processing. Dr. Smith founded Seismic Micro-Technology in 1984 to develop PC software to support training workshops, which subsequently led to the development of the KINGDOM Software Suite for integrated geoscience interpretation, with worldwide success.

    The Society of Exploration Geophysicists (SEG) recognized Dr. Smith’s work with the SEG Enterprise Award in 2000, and in 2010, the Geophysical Society of Houston (GSH) awarded him an Honorary Membership. Iowa State University (ISU) has recognized Dr. Smith throughout his career with the Distinguished Alumnus Lecturer Award in 1996, the Citation of Merit for National and International Recognition in 2002, and the highest alumni honor in 2015, the Distinguished Alumni Award. The University of Houston College of Natural Sciences and Mathematics recognized Dr. Smith with the 2017 Distinguished Alumni Award.

    In 2009, Dr. Smith founded Geophysical Insights, where he leads a team of geophysicists, geologists and computer scientists in developing advanced technologies for fundamental geophysical problems. The company launched the Paradise® multi-attribute analysis software in 2013, which uses Machine Learning and pattern recognition to extract greater information from seismic data.

    Dr. Smith has been a member of the SEG since 1967 and is a professional member of SEG, GSH, HGS, EAGE, SIPES, AAPG, Sigma XI, SSA and AGU. Dr. Smith served as Chairman of the SEG Foundation from 2010 to 2013. On January 25, 2016, he was recognized by the Houston Geological Society (HGS) as a geophysicist who has made significant contributions to the field of geology. He currently serves on the SEG President-Elect’s Strategy and Planning Committee and the ISU Foundation Campaign Committee for Forever True, For Iowa State.

    Carrie Laudon
    Senior Geophysical Consultant - Geophysical Insights

    Applying Machine Learning Technologies in the Niobrara Formation, DJ Basin, to Quickly Produce an Integrated Structural and Stratigraphic Seismic Classification Volume Calibrated to Wells

    This study demonstrates an automated machine learning approach for fault detection in a 3D seismic volume. The result combines deep learning convolutional neural networks (CNN) with a conventional data pre-processing step and an image processing-based post-processing approach to produce high-quality fault attribute volumes of fault probability, fault dip magnitude, and fault dip azimuth. These volumes are then combined with instantaneous attributes in an unsupervised machine learning classification, allowing the isolation of both structural and stratigraphic features into a single 3D volume. The workflow is illustrated on a 3D seismic volume from the Denver-Julesburg Basin, and a statistical analysis is used to calibrate the results to well data.
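
    To make the data flow concrete: the classification stage amounts to stacking the co-rendered attribute volumes into one feature matrix and classifying every voxel. The sketch below substitutes k-means for the unsupervised classifier purely for brevity; the study itself uses the SOM-based classification described elsewhere on this page, and all names are illustrative.

        import numpy as np
        from sklearn.cluster import KMeans

        def classify_voxels(volumes, n_classes=16):
            """Stack attribute volumes (e.g., fault probability, fault dip
            azimuth, instantaneous attributes) into a feature matrix and
            classify every voxel. `volumes` is a list of 3D arrays sharing
            one shape. K-means stands in here for the unsupervised
            classifier; the study itself uses SOM classification."""
            shape = volumes[0].shape
            features = np.stack([v.ravel() for v in volumes], axis=1)
            # Z-score each attribute so no single attribute dominates.
            features = (features - features.mean(0)) / (features.std(0) + 1e-9)
            labels = KMeans(n_clusters=n_classes, n_init=10,
                            random_state=0).fit_predict(features)
            return labels.reshape(shape)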

    Ivan Marroquin
    Senior Research Geophysicist

    Iván Dimitri Marroquín is a 20-year veteran of data science research, consistently publishing in peer-reviewed journals and speaking at international conferences. Dr. Marroquín received a Ph.D. in geophysics from McGill University, where he conducted and participated in 3D seismic research projects. These projects focused on the development of interpretation techniques based on seismic attributes and seismic trace shape information to identify significant geological features or reservoir physical properties. Examples of his research work include attribute-based modeling to predict coalbed thickness and permeability zones, combining spectral analysis with coherency imaging to enhance the interpretation of subtle geologic features, and implementing a visual data mining technique on clustering to match seismic trace shape variability to changes in reservoir properties.

    Dr. Marroquín has also conducted ground-breaking research on seismic facies classification and volume visualization. This led to his development of a visual framework that determines the optimal number of seismic facies to best reveal meaningful geologic trends in the seismic data. He proposed seismic facies classification as an alternative to data integration analysis to capture geologic information in the form of seismic facies groups. He has investigated the usefulness of mobile devices to locate, isolate, and understand the spatial relationships of important geologic features in a context-rich 3D environment. In this work, he demonstrated that mobile devices are capable of performing seismic volume visualization, facilitating the interpretation of imaged geologic features, and showed that they will eventually allow the visual examination of seismic data anywhere and at any time.

    In 2016, Dr. Marroquín joined Geophysical Insights as a senior researcher, where his efforts have been focused on developing machine learning solutions for the oil and gas industry. For his first project, he developed a novel procedure for lithofacies classification that combines a neural network with automated machine learning methods. In parallel, he implemented a machine learning pipeline to derive cluster centers from a trained neural network. The next step in the project is to correlate the lithofacies classification to the outcome of seismic facies analysis. Other research interests include the application of diverse machine learning technologies to analyzing and discerning trends and patterns in data related to the oil and gas industry.

    Dr. Jie Qi
    Research Geophysicist

    Dr. Jie Qi is a Research Geophysicist at Geophysical Insights, where he works closely with product development and geoscience consultants. His research interests include machine learning-based fault detection, seismic interpretation, pattern recognition, image processing, seismic attribute development and interpretation, and seismic facies analysis. Dr. Qi received a BS (2011) in Geoscience from the China University of Petroleum in Beijing and an MS (2013) in Geophysics from the University of Houston. He earned a Ph.D. (2017) in Geophysics from the University of Oklahoma, Norman. His experience includes work as a Research Assistant at the University of Houston (2011-2013) and the University of Oklahoma (2013-2017), as well as a 2014 summer internship with Petroleum Geo-Services (PGS), Inc., where he worked on semi-supervised seismic facies analysis. From 2017 to 2020, he served as a postdoctoral Research Associate in the Attribute-Assisted Seismic Processing and Interpretation (AASPI) consortium at the University of Oklahoma.

    Rocky R. Roden
    Senior Consulting Geophysicist

    The Relationship of Self-Organization, Geology, and Machine Learning

    Self-organization is the nonlinear formation of spatial and temporal structures, patterns or functions in complex systems (Aschwanden et al., 2018). Simple examples of self-organization include flocks of birds, schools of fish, crystal development, formation of snowflakes, and fractals. What these examples have in common is the appearance of structure or patterns without centralized control. Self-organizing systems are typically governed by power laws, such as the Gutenberg-Richter law of earthquake frequency and magnitude. In addition, the time frames of such systems display a characteristic self-similar (fractal) response, where earthquakes or avalanches for example, occur over all possible time scales (Baas, 2002).
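
    For reference, the Gutenberg-Richter relation has the power-law form log10 N(M) = a - bM, where N(M) is the number of earthquakes with magnitude of at least M and the b-value is typically close to 1; each unit increase in magnitude therefore reduces the expected event count by roughly a factor of ten.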

    The existence of nonlinear dynamic systems and ordered structures in the earth is well known; they have been studied for centuries and can appear as sedimentary features, layered and folded structures, stratigraphic formations, diapirs, eolian dune systems, channelized fluvial and deltaic systems, and many more (Budd et al., 2014; Dietrich and Jacob, 2018). Each of these geologic processes and features exhibits patterns through the action of undirected local dynamics, which is generally termed “self-organization” (Paola, 2014).

    Artificial intelligence, and specifically neural networks, exhibit and reveal self-organization characteristics. The interest in applying neural networks stems from the fact that they are universal approximators for various kinds of nonlinear dynamical systems of arbitrary complexity (Pessa, 2008). A special class of artificial neural network is aptly named the self-organizing map (SOM) (Kohonen, 1982). It has been found that SOM can identify significant organizational structure, in the form of clusters, from seismic attributes that relate to geologic features (Strecker and Uden, 2002; Coleou et al., 2003; de Matos, 2006; Roy et al., 2013; Roden et al., 2015; Zhao et al., 2016; Roden et al., 2017; Zhao et al., 2017; Roden and Chen, 2017; Sacrey and Roden, 2018; Leal et al., 2019; Hussein et al., 2020; Hardage et al., 2020; Manauchehri et al., 2020). As a consequence, SOM is an excellent machine learning neural network approach that utilizes seismic attributes to help identify self-organization features and define natural geologic patterns not easily seen, or not seen at all, in the data.

    Rocky R. Roden
    Senior Consulting Geophysicist

    Rocky R. Roden started his own consulting company, Rocky Ridge Resources Inc., in 2003 and works with several oil companies on technical and prospect evaluation issues. He is also a principal in the Rose and Associates DHI Risk Analysis Consortium and was Chief Consulting Geophysicist with Seismic Micro-Technology. Rocky is a proven oil finder with 37 years in the industry, having gained extensive knowledge of modern geoscience technical approaches.

    Rocky holds a BS in Oceanographic Technology-Geology from Lamar University and an MS in Geological and Geophysical Oceanography from Texas A&M University. As Chief Geophysicist and Director of Applied Technology for Repsol-YPF, his role comprised advising corporate officers, geoscientists, and managers on interpretation, strategy, and technical analysis for exploration and development in offices in the U.S., Argentina, Spain, Egypt, Bolivia, Ecuador, Peru, Brazil, Venezuela, Malaysia, and Indonesia. He has been involved in the technical and economic evaluation of Gulf of Mexico lease sales, farmouts worldwide, and bid rounds in South America, Europe, and the Far East. Previous work experience includes exploration and development at Maxus Energy, Pogo Producing, Decca Survey, and Texaco. Rocky is a member of SEG, AAPG, HGS, GSH, EAGE, and SIPES; he is also a past Chairman of The Leading Edge Editorial Board.

    Bob A. Hardage

    Bob A. Hardage received a PhD in physics from Oklahoma State University. His thesis work focused on high-velocity micro-meteoroid impact on space vehicles, which required trips to Goddard Space Flight Center to do finite-difference modeling on dedicated computers. Upon completing his university studies, he worked at Phillips Petroleum Company for 23 years and was Exploration Manager for Asia and Latin America when he left Phillips. He moved to Western Atlas and worked 3 years as Vice President of Geophysical Development and Marketing. He then established a multicomponent seismic research laboratory at the Bureau of Economic Geology and served The University of Texas at Austin as a Senior Research Scientist for 28 years. He has published books on VSP, cross-well profiling, seismic stratigraphy, and multicomponent seismic technology. He was the first person to serve 6 years on the Board of Directors of the Society of Exploration Geophysicists (SEG). His Board service was as SEG Editor (2 years), followed by 1-year terms as First VP, President Elect, President, and Past President. SEG has awarded him a Special Commendation, Life Membership, and Honorary Membership. He wrote the AAPG Explorer column on geophysics for 6 years. AAPG honored him with a Distinguished Service award for promoting geophysics among the geological community.

    Bob A. Hardage

    Investigating the Internal Fabric of VSP data with Attribute Analysis and Unsupervised Machine Learning

    Examination of vertical seismic profile (VSP) data with unsupervised machine learning technology is a rigorous way to compare the fabric of down-going, illuminating P and S wavefields with the fabric of up-going reflections and interbed multiples created by these wavefields. This concept is introduced in this paper by applying unsupervised learning to VSP data to better understand the physics of P and S reflection seismology. The zero-offset VSP data used in this investigation were acquired in a hard-rock, fast-velocity environment that caused the shallowest 2 or 3 geophones to be inside the near-field radiation zone of a vertical-vibrator baseplate. This study shows how to use instantaneous attributes to backtrack down-going direct-P and direct-S illuminating wavelets to the vibrator baseplate inside the near-field zone. This backtracking confirms that the points-of-origin of direct-P and direct-S are identical. The investigation then applies principal component analysis (PCA) to the VSP data and shows that direct-S and direct-P wavefields that are created simultaneously at a vertical-vibrator baseplate have the same dominant principal components. A self-organizing map (SOM) approach is then taken to illustrate how unsupervised machine learning describes the fabric of down-going and up-going events embedded in vertical-geophone VSP data. These SOM results show that a small number of specific neurons build the down-going direct-P illuminating wavefield, and another small group of neurons builds the up-going P primary reflections and early-arriving down-going P multiples. The internal attribute fabric of these key down-going and up-going neurons is then compared to expose their similarities and differences. This initial study indicates that unsupervised machine learning, when applied to VSP data, is a powerful tool for understanding the physics of seismic reflectivity at a prospect. This research strategy of analyzing VSP data with unsupervised machine learning will now expand to horizontal-geophone VSP data.
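
    As a simple illustration of the principal component step, the dominant components of a windowed wavefield can be computed directly from the singular value decomposition of the trace matrix. This sketch assumes the down-going (or up-going) wavefield has already been windowed out of the VSP data; it is not the author's code, and the names are illustrative.

        import numpy as np

        def vsp_principal_components(traces, n_components=3):
            """Dominant principal components of a windowed VSP wavefield.
            `traces` is an (n_traces, n_samples) array; windowing to the
            wavefield of interest is assumed done beforehand."""
            # Center each time sample so components describe variability.
            centered = traces - traces.mean(axis=0)
            # The SVD of the data matrix yields the components directly.
            _, s, vt = np.linalg.svd(centered, full_matrices=False)
            variance_fraction = s ** 2 / np.sum(s ** 2)
            return vt[:n_components], variance_fraction[:n_components]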

    Tom Smith
    President and CEO, Geophysical Insights

    Machine Learning for Incomplete Geoscientists

    This presentation covers big-picture machine learning buzzwords with humor and unassailable frankness. The goal of the material is for every geoscientist to gain confidence in these important concepts and in how they add to our well-established practices, particularly seismic interpretation. Presentation topics include a machine learning historical perspective, what makes it different, a fish factory, Shazam, a comparison of supervised and unsupervised machine learning methods with examples, tuning thickness, deep learning, hard/soft attribute spaces, multi-attribute samples, and several interpretation examples. After the presentation, you may not know how to run machine learning algorithms, but you should be able to appreciate their value and avoid some of their limitations.

    Deborah Sacrey
    Owner, Auburn Energy

    Deborah is a geologist/geophysicist with 44 years of oil and gas exploration experience in the Texas and Louisiana Gulf Coast and Mid-Continent areas of the US. She received her degree in Geology from the University of Oklahoma in 1976 and immediately started working for Gulf Oil in their Oklahoma City offices.

    She started her own company, Auburn Energy, in 1990 and built her first geophysical workstation using Kingdom software in 1996. Over 18 years, she helped SMT/IHS develop and test the Kingdom software. She specializes in 2D and 3D interpretation for clients in the US and internationally. For the past nine years she has been part of a team working to bring the power of multi-attribute neural analysis of seismic data to the geoscience public, guided by Dr. Tom Smith, founder of SMT. She has become an expert in the use of the Paradise software and has seven discoveries for clients using multi-attribute neural analysis.

    Deborah has been very active in the geological community. She is past national President of SIPES (Society of Independent Professional Earth Scientists), past President of the Division of Professional Affairs of AAPG (American Association of Petroleum Geologists), past Treasurer of AAPG, and past President of the Houston Geological Society. She is also past President of the Gulf Coast Association of Geological Societies and just ended a term as one of the GCAGS representatives on the AAPG Advisory Council. Deborah is also a DPA Certified Petroleum Geologist #4014 and DPA Certified Petroleum Geophysicist #2. She belongs to AAPG, SIPES, the Houston Geological Society, the South Texas Geological Society, and the Oklahoma City Geological Society (OCGS).

    Mike Dunn
    Senior Vice President Business Development

    Michael A. Dunn is an exploration executive with extensive global experience, including the Gulf of Mexico, Central America, Australia, China, and North Africa. Mr. Dunn has a proven track record of successfully executing exploration strategies built on a foundation of new and innovative technologies. Currently, Michael serves as Senior Vice President of Business Development for Geophysical Insights.

    He joined Shell in 1979 as an exploration geophysicist and party chief and held positions of increasing responsibility, including Manager of Interpretation Research. In 1997, he participated in the launch of Geokinetics, which completed an IPO on the AMEX in 2007. His extensive experience with oil companies (Shell and Woodside) and the service sector (Geokinetics and Halliburton) has provided him with a unique perspective on technology and applications in oil and gas. Michael received a B.S. in Geology from Rutgers University and an M.S. in Geophysics from the University of Chicago.

    Hal Green
    Director, Marketing & Business Development - Geophysical Insights

    Hal H. Green is a marketing executive and entrepreneur in the energy industry with more than 25 years of experience in starting and managing technology companies. He holds a B.S. in Electrical Engineering from Texas A&M University and an MBA from the University of Houston. He has invested his career at the intersection of marketing and technology, with a focus on business strategy, marketing, and effective selling practices. Mr. Green has a diverse portfolio of experience in marketing technology to the hydrocarbon supply chain – from upstream exploration through downstream refining & petrochemical. Throughout his career, Mr. Green has been a proven thought-leader and entrepreneur, while supporting several tech start-ups.

    He started his career as a process engineer in the semiconductor manufacturing industry in Dallas, Texas, and later launched an engineering consulting and systems integration business. Following the sale of that business in the late 1980s, he joined Setpoint in Houston, Texas, where he eventually led that company’s Manufacturing Systems business. Aspen Technology acquired Setpoint in January 1996, and Mr. Green continued as Director of Business Development for the Information Management and Polymer Business Units.

    In 2004, Mr. Green founded Advertas, a full-service marketing and public relations firm serving clients in energy and technology. In 2010, Geophysical Insights retained Advertas as their marketing firm. Dr. Tom Smith, President/CEO of Geophysical Insights, soon appointed Mr. Green as Director of Marketing and Business Development for Geophysical Insights, in which capacity he still serves today.

    Hana Kabazi
    Product Manager

    Hana Kabazi joined Geophysical Insights in October of 201 and is now one of our Product Managers for Paradise. Mrs. Kabazi has over 7 years of oil and gas experience, including 5 years at Halliburton – Landmark. During her time at Landmark she held positions as a consultant to many E&P companies, technical advisor to the QA organization, and product manager of Subsurface Mapping in DecisionSpace. Mrs. Kabazi has a B.S. in Geology from the University of Texas at Austin and an M.S. in Geology from the University of Houston.

    Dr. Carrie Laudon
    Senior Geophysical Consultant - Geophysical Insights

    Carolan (Carrie) Laudon holds a PhD in geophysics from the University of Minnesota and a BS in geology from the University of Wisconsin Eau Claire. She has been Senior Geophysical Consultant with Geophysical Insights since 2017 working with Paradise®, their machine learning platform. Prior roles include Vice President of Consulting Services and Microseismic Technology for Global Geophysical Services and 17 years with Schlumberger in technical, management and sales, starting in Alaska and including Aberdeen, Scotland, Houston, TX, Denver, CO and Reading, England. She spent five years early in her career with ARCO Alaska as a seismic interpreter for the Central North Slope exploration team.