The Scientific Universe From Square One to SOM – April 27-28 2021

GSH-SEG Spring Symposium-Data Science and Geophysics: How Machine Learning and AI will Change Our Industry – Apr 27-28

The theme of Geophysical Society of Houston’s premier technical event was the application of the latest data science tools to solving geophysical problems.

The virtual symposium was held over two days and consisted of a keynote address, multiple speakers, and two exciting lunchtime events. The first was the traditional SEG Student Challenge Bowl competition. Held online, it featured a battle between student geophysicists to be crowned the winner of the GSH Challenge Bowl and earn a place at the Challenge Bowl finals at the SEG annual symposium in the fall. The second event was the announcement of the winners of the Machine Learning Competition run by the Data Science & Machine Learning SIG, and it included an overview of the competition challenge, the announcement of the winner, and a presentation of the winning entry. The speaker lineup for the event included:

Key Note Speaker: Jake Umbriaco, Digital Platform and Services Manager, Subsurface – Chevron

Confirmed Speakers and Topics include:

  • Aria Abubakar, Schlumberger – Machine Learning for Geoscience Applications
  • Satinder Chopra, SamiGeo – Some Machine Learning Applications for Seismic Facies Classification
  • Hugo Garcia, Geoteric – Automated Fault Detection from 3D Seismic using Artificial Intelligence
  • Elive Menyoli, Emerson – Wavefield Separation via Principal Component Analysis and Deep Learning in the Local Angle Domain
  • Tom Smith, Geophysical Research, LLC (d/b/a Geophysical Insights) – The Scientific Universe from Square One to SOM
  • Yuting Duan, Shell – Estimation of Time-lapse Timeshifts using Machine Learning
  • Jon Downton, CGG – Predicting Reservoir Properties from Seismic and Well Data using Deep Neural Networks
  • Wenyi Hu, AGT & UH – Progressive Transfer Learning for Low-Frequency Prediction in FWI

For more information, please click here.

Dr. Thomas (“Tom”) A. Smith received BS and MS degrees in Geology from Iowa State University and a Ph.D. in Geophysics from the University of Houston. His graduate research focused on a shallow refraction investigation of the Manson astrobleme. In 1971, he joined Chevron Geophysical as a processing geophysicist but resigned in 1980 to complete his doctoral studies in 3D modeling and migration at the Seismic Acoustics Lab at the University of Houston. Upon graduation with the Ph.D. in Geophysics in 1981, he started a geophysical consulting practice and taught seminars in seismic interpretation, seismic acquisition, and seismic processing. Dr. Smith founded Seismic Micro-Technology (SMT) in 1984 to develop PC software to support training workshops he was holding, which subsequently led to the development of the KINGDOM Software Suite for integrated geoscience interpretation.

In 2008, Dr. Smith founded Geophysical Insights, where he leads a team of geophysicists, geologists, and computer scientists in developing advanced technologies for fundamental geophysical problems. The company launched the Paradise® AI workbench in 2013, which uses Machine Learning, Deep Learning, and pattern recognition technologies to extract greater information from seismic and well data.

The Society of Exploration Geophysicists (SEG) recognized Dr. Smith’s work with the SEG Enterprise Award in 2000, and in 2010, the Geophysical Society of Houston (GSH) awarded him an Honorary Membership. Iowa State University (ISU) has recognized Dr. Smith throughout his career with the Distinguished Alumnus Lecturer Award in 1996, the Citation of Merit for National and International Recognition in 2002, and the highest alumni honor in 2015, the Distinguished Alumni Award. The University of Houston College of Natural Sciences and Mathematics recognized Dr. Smith with the 2017 Distinguished Alumni Award. In 2016, he was recognized by the Houston Geological Society (HGS) as a geophysicist who has made significant contributions to the field of geology.

Dr. Smith has been a member of the SEG since 1967 and is a professional member of SEG, GSH, HGS, EAGE, SIPES, AAPG, Sigma XI, SSA, and AGU. Dr. Smith served as Chairman of the SEG Foundation from 2010 to 2013. He currently serves on the SEG President-Elect’s Strategy and Planning Committee and the ISU Foundation Campaign Committee for Forever True, For Iowa State.

Please contact us to receive more information on the Paradise AI workbench.


Dr. Tom Smith

Transcript

 

Dr. Tom Smith: 

Sorry about the technical difficulties, everyone, but in a nutshell, what we’re going to be doing in these next 35 minutes, we are going to start with basics. And I hope that doesn’t switch you off, but we are going to start off with the basics, and the basic, scientifically, is a black box, and we’re going to follow the tracks of where that black box takes us to what I’m going to be calling the queen of statistics. So I’ll just leave it at that for a few minutes and let you figure out, “What in the world is Tom talking about, queen of statistics? What is that?” After we get to that point, we’re going to demonstrate how the queen of statistics works with our seismic data and how it will benefit our seismic interpretations, and we’ll conclude the presentation with three recent interpretations.

Let’s start off simply. We have a single US $1 bill, and we’re going to weigh it on an accurate scale, as you can see on the right. And we ask some simple questions after we get the number. We have a weight, we have a certain number, and we’re going to ask some simple questions, such as, well, do all US dollar bills weigh the same? Is the weight of a $5 bill heavier, and does the weight change with time? These are basic questions. And so to be sure about this, we go back and we weigh the bill a second time, and guess what? I can guarantee you that we will get a different value. Now, the legitimate thing to do would be to estimate the number of significant digits recorded by these two measurements or more, and that’s what we would report. We have a number of significant digits.

But let’s get at the heart of the matter a little bit better. Why are these two measurements different? That’s the question. Well, if we don’t have an answer to that, we legitimately and honestly must create something new, and that’s the black box, as you can see on the right side, down at the bottom there; we put the process in a black box. Why do we do that? Because we don’t have an explanation for the different values, two or more. To be honest about this, we take this little experiment and put a cloak around it and say, “It’s a black box. We just don’t know what to do about this situation.” We ask a simple question of our statistical events. Are they real or not?

Well, situations where we can’t make exact measurements, excuse me a second here. Where we can’t make exact measurements. For example, we have a box now and we have a single event which we then reduce to one value. And we ask the universe, “Hello out there. Is this black box repeatable?” And we go to our next round and try it again. And if we do this again and again and again and again, we repeat the process, same black box, but now I have a series of events. They’re causal events. One happens right after the other. So put a little square bracket around this and indicate that this is a set of values now, but they’re an ordered set, because we recorded the first time, second time, the third time, continuing on like that. So we repeat the process. Now that’s not the whole story, because we’re not real sure if that’s a single process, or maybe there’s more than one process inside a little black box.

We also have the possibility of coincident events. We’ll have a series of values there, but that will be at one instant of time, where we have multiple sensors hooked up and we have coincident events, so we can measure several items of the same event. And we have another set of values now, but they’re at one instant of time, and when we record this again and again and again, we change the brackets to a spiral set of values over there to indicate that it’s a little bit different. And we ask simply, is a universe determined or is it indeterminate? Kind of a basic question. Well, as I’m sure you know, statistics helps us to understand and make sense of an indeterminate universe. That’s the way it works. And it’s of value, of course, in all the sciences, and in particular geoscience, but it helps us establish some predictability, and based upon that predictability, we can make some reasonable predictions. It’s a simple statement that we can’t know everything.

Well, we’ll start off here with this black box, as you can see on the right, single event. And we’re going to ask a series of questions here. Simple one, here’s a black box. And is there one process in the box or is there more than one? In other words, can that process in the black box be divided? Yes or no? Is there more than one process? Two or more? Well, are they independent of each other? We have another question. What is their independence? If one depends upon the other, how do they affect the results? Well, we study that with joint distributions. And does anything ever change across time? Do the values change over time? Well, whether they do or they don’t, that’s the topic of the stochastic aspects of the statistics: how do they change over time?

Well, that’s another field, and that’s their [inaudible] of the statistics, and what triggers an event to happen in the first place, well that’s just the simple discovery. And if the box contains several processes, can we separate these into separate black boxes? Well, that’s our employment. The conclusion is, frankly, indeterminacy, it’s really a big topic. We’ll spend a couple of minutes here, starting at the roots of probability, which you may or may not realize, probability is not indeterminacy. It’s something more than that. Let’s figure that out. Probability involves chance. Chance is also not indeterminacy. Chance is the possibility of an event happening amongst all possible events.

And we have a gentleman over here on the right, Girolamo Cardano, 1501 to 1576. He worked on something new in this area, not just indeterminacy that we can’t do anything more about. He worked on the concept of probability and laid out the probability of each of all possible events. Each possible event has its own probability. And probability itself is bounded. It’s bounded by the sum of all possible probabilities, and that sum of the probabilities adds up to one. That is the probability of certainty. It’s one. And all probabilities, as I’m sure you know, are all less than one. The little tricky part about this that’s sometimes forgotten as we sort of skip right over it, is of all possible events. We need to be a little bit careful about that.
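
A minimal sketch of Cardano's bookkeeping, not from the talk and using a fair six-sided die purely for illustration: every possible event gets its own probability, each probability is bounded, and the full set sums to one, the probability of certainty.

```python
# A small illustration of Cardano's idea: enumerate all possible events,
# assign each a probability, and check the sum is one (certainty).
from fractions import Fraction

faces = [1, 2, 3, 4, 5, 6]                              # all possible events
p = {face: Fraction(1, len(faces)) for face in faces}   # each event's probability

assert sum(p.values()) == 1                             # probabilities of all events sum to one
assert all(0 < prob < 1 for prob in p.values())         # each individual probability is bounded

# Probability of a compound event, e.g. "roll an even number"
p_even = sum(p[f] for f in faces if f % 2 == 0)
print(p_even)  # 1/2
```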

The measurements are several events that sometimes we take those to infinity, which in this case, is a mathematical approximation, but probabilities are not exact. And Girolamo Cardano is the father of probability. Let’s take a look at the whole concept here of the statistics of classification. We’re going to look at it the following way. Let’s read this together. The universe may be indeterminate because we don’t know everything, but through the mathematics of counting, we can exactly subdivide a finite set of objects into an exact number of subdivisions.

Notice the word exactly. We can exactly subdivide, and there’s an adjective there, an exact number of subdivisions. The adjectives and the adverbs have something to do with simple counting. There’s a glimmer of hope in this world of chaos and indeterminacy. There is a little determinacy here, and that is because counting is the easiest, simplest type of mathematics around. There’s no integration here, nothing like that at all. The most basic type of statistics, the most exact type of statistics, is classification, because we have the whole ability to define counting and put objects in sets. We count those the way we want to count them. So without hesitation, we conclude that statistical classification, as a branch of statistics, is truly the queen of statistics. Fortunately, we’re finding in our machine learning that this classification process, statistical classification, is indeed meeting great success. We’re actually very fortunate with that. Now, if classification is good, is that all there is? And we continue to ask the questions, well, what does a classification model really look like?

Let’s take a look at the next step after just classification. And now let’s look at a special type of classification. This is self-organizing classification. Over on the left in sort of a dark red, as you can see there, is the town of Pavia, Italy. It’s in northern Italy. And if you look at the shape of this little province right there, it’s in north Italy up in this corner right here that I’m pointing at. Now, Girolamo Cardano taught at the University of Pavia, which by the way, is one of the oldest universities in the world. And Pavia is in the province of Lombardy, rich agricultural lands. And would you say that the regions of Lombardy are self-organized or not? Let’s see if we can sort of slide in what self-organizing is all about. Well, take a look at this. Poor old professor Cardano, many years ago, our father of probability, taught at the University of Pavia, but do you suppose he ever encountered the concept of self-organization or ever thought about it? Highly unlikely. In other words, the probability of Cardano and self-organization, that probability is pretty close to zero.

Well, let’s sneak up a little closer on this and ask, well, what does self-organization really mean? How do we define it? Well, guess what? Here’s a perfect example. The people of Lombardy have self-organized themselves. So we are sneaking up on it. Self-organization categorizes common properties. So we can see even within the province of Lombardy, there are certain areas probably too rough to plow through. And there are other regions that are highly agricultural, smaller than that, and probably organized by agricultural soil conditions and also the size of towns and things like that. You getting the idea? Okay, let’s move on then.

Self-organized learning. Aristotle, sorry, 384 to 322 BC, had a school of rhetoric and it was not only a school, but also a way of thinking. Deduction was born with the introduction of experimental learning. Aristotle taught that the world could be actively investigated, and think about that. That really is a profound thought. It’s a huge thought. Socrates, in an earlier time, 470 to 399 BC, taught during his time at a school that he had put together, and that school used the dialectic method, which says a problem is broken down into parts. Parts are studied individually and conclusions are drawn, so the problem is simply solved by the sum of parts.

Whereas the later process of thinking of the world is by deduction. We work on one part A, we conclude B, and from B, we study it, conclude C, carry on like that. Each of these steps is a step in deduction. Socrates’ earlier school was a process of induction where you study the individual parts and put them all together. Understanding both of these types of thinking is fundamental for understanding self-organizing patterns and how these patterns are going to be analyzed. By who? By us. It’s part deduction and part induction. It’s both.

So in this new seismic interpretation methodology, the interpreter uses machine learning to find patterns in the data that may have otherwise gone undetected. That’s the beauty and the power of our machine learning processes. Machine learning delivers statistical models. This is a fundamental point, often overlooked. Machine learning delivers a statistical model. We need to think about it statistically and analyze it and interpret it statistically. The interpreter is allowed, by this perspective, to focus more on the processes and less on creative interpretation. The goal of this new cooperation between the interpreter and the machine remains the same. Let’s make better predictions. That’s the bottom line. We have a new set of tools to help us with this.

Now, there was a professor, University of Finland, I believe, Teuvo Kohonen, who statistically modeled different categories of data, and in the data he analyzed, the relationships between the different objects were not known. He modeled his machine learning on a two dimensional model of connected neurons inspired by the visual cortex in the back of our heads. Left side is to the right eye. And the right side, if you recall, is to the left eye. So mapping from the eye through the optic nerve, to the visual cortex, two parts. Now he observed that the different regions of the neural network ended up after their training with different general characteristics. So one part of the neural network had one set of general characteristics, other regions of the neural network had different characteristics. He coined this the self-organizing map. SOM is a type of unsupervised neural network with coincident events, using attributes all recorded at the same time; they are coincident events, and they could be put into a simple spreadsheet of rows and columns. Now we may not use a real spreadsheet, but certainly, it can be organized this way.
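
A minimal sketch of Kohonen's idea, written from scratch in NumPy rather than taken from any particular SOM implementation: a small two-dimensional grid of neurons is trained so that nearby neurons come to respond to similar multi-attribute samples. The grid size, learning rate, and neighborhood decay here are illustrative assumptions.

```python
import numpy as np

def train_som(samples, rows=8, cols=8, iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Tiny self-organizing map; samples is an (n_samples, n_attributes) array."""
    rng = np.random.default_rng(seed)
    n, d = samples.shape
    # Neuron weights live in attribute space; the grid coordinates give the 2D topology.
    weights = rng.normal(size=(rows, cols, d))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    for t in range(iters):
        x = samples[rng.integers(n)]                    # one random training sample
        dist = np.linalg.norm(weights - x, axis=-1)     # distance from sample to every neuron
        winner = np.unravel_index(dist.argmin(), dist.shape)
        # Learning rate and neighborhood radius both shrink over time.
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        grid_dist2 = ((grid - np.array(winner)) ** 2).sum(axis=-1)
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)               # pull the winner's neighborhood toward the sample
    return weights
```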

Bottom line, SOM is a statistical classification model, and there is an evolutionary process here. There is an evolution of machine learning, starting from the machine learning, self-adaptation to self-organization, and even self-awareness. And let’s look at these different parts, so you can see what I’m really talking about here. Self-adaptation in that aspect, we have machine algorithms that adapt to specific characteristics of the data, and they store this information in a layered neural network. Straightforward. Use it all the time. Self-organization, the neural network itself is organized into patterns that can be evaluated and recognized and interpreted. And then finally, the third level is self-awareness. Well, that’s self-organizing neural networks that adapt not only to the data, but also to the environment. And this is the evolutionary process of machine learning neural networks from one to the other.

Let’s take a look at SOM as a statistical modeler. We’ll start off with some coincident sampling. And at one instant of time, we record a bunch of properties, a bunch of attributes, simple as that. So one event is one sample, but we’re recording a series of measurements, if you will. And we do that for a bunch of different events and we group all those together and we call that one big set of a number of events. But each event is a series of the attributes. We can slap all of this into an Excel spreadsheet, as we’ve done right here. And then we can run a SOM classification on this process. As I say, it’s unsupervised machine learning, and it will organize the data into classes, and we take a look at those. So this very simple-minded result here is that we add two more columns to our spreadsheet right here. And we get not only the classification from this machine learning process, but also the probability, which is the goodness of fit.

So for example, that first event, which would be, say, the first line, we might color as class one, which would give that a color. That’s a gray, and the second one might be a blue. And then the third one might be an orange or something like that. Very simple concepts, straightforward to implement, and very interesting results to interpret. Well, let’s focus on the SOM as a statistical model. It results from machine learning training that looks for natural clusters of information in attribute space. The process depends heavily on three basic things. It depends upon our choice of attributes, our choice of neural network, and of course, it always depends upon our choice of training samples. After training the SOM, the neural network is a map of what we call winning neurons. After training, the SOM model, that neural network, can be used to classify all samples of interest.

A single sample is classified to its nearest winning neuron, which is the closest distance in attribute space in the SOM. Classified samples also have this goodness of fit, and it addresses how close the winning neuron is to the sample where it lies in attribute space. Now, we could have an attribute space with two attributes and that’s all on a piece of paper. If we have eight attributes, 10 attributes, it’s eight or 10 dimensional space, not a problem for computer work. A winning neuron lies near the center of a natural cluster in attribute space. Let’s take a look at what that really implies. It really implies the following. A winning neuron lies next to the center of a natural cluster. And so we have natural clustering. So this element in the SOM corresponds to these samples, and then the one over here, these samples, et cetera, et cetera, et cetera. Very straightforward, and it leads us to some very interesting things.
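
Continuing the NumPy sketch above (the helper names are hypothetical, and this is not the Paradise implementation): classification then amounts to finding, for every sample, the nearest neuron in attribute space and recording the distance as a simple goodness-of-fit measure, the two extra spreadsheet columns mentioned earlier.

```python
import numpy as np

def classify(samples, weights):
    """Assign each sample to its nearest winning neuron and report a fit measure."""
    rows, cols, d = weights.shape
    flat = weights.reshape(-1, d)                                   # (rows*cols, d)
    # Distance from every sample to every neuron, measured in attribute space.
    dists = np.linalg.norm(samples[:, None, :] - flat[None, :, :], axis=-1)
    winning = dists.argmin(axis=1)                                  # class = index of nearest neuron
    fit = dists[np.arange(len(samples)), winning]                   # smaller distance = better fit
    return winning, fit

# Usage (continuing the earlier sketch): two extra "columns" for the sample spreadsheet.
# weights = train_som(samples)
# classes, goodness = classify(samples, weights)
```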

And each sample can be analyzed to find out its classification probability. Is it a good fit or a poor fit? You have here 100 seismic traces of our four basic seismic reflection wavelets, and we’ll start off here with a low noise level. So we’re going from, say, low acoustic impedance to high, and with a symmetric wavelet, we then have a symmetric peak event going from low acoustic impedance to high. Single reflection [inaudible] positive. We also have a low acoustic impedance thin bed response, which is a trough-over-peak event and antisymmetric. We have a high acoustic impedance thin bed, which is a peak over trough. And then finally, the last of the four basic. And that would be from a high acoustic impedance back to low, and that would be a symmetric trough event. Down at the bottom, you can see the SOM results of using the 100 traces here as training samples, and classification is laid on top using this color map down here in the lower right-hand corner.

We repeat the same process with the medium level of noise, and a little bit higher level of noise. These are the results. You can see that over on the left, whereas everything comes up very nicely, because we’ve added no noise, here we’ve got medium, and then higher level of noise. Notice that even with the higher level of noise, over here at the top of our four basic reflection events, we see trackable, certainly, easily trackable events of these four basic reflection wavelets.
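
A rough sketch of how training traces like these could be generated, with the wavelet frequency, spike positions, and noise levels all assumed for illustration rather than taken from the talk: convolve a reflectivity series containing the four basic responses with a Ricker wavelet and add Gaussian noise at increasing levels.

```python
import numpy as np

def ricker(f=30.0, dt=0.002, length=0.128):
    """Simple zero-phase Ricker wavelet (assumed 30 Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    return (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

def make_traces(n_traces=100, n_samples=256, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    r = np.zeros(n_samples)
    r[60] = 1.0                   # low-to-high impedance step: symmetric peak
    r[120], r[124] = -1.0, 1.0    # low-impedance thin bed: trough over peak
    r[160], r[164] = 1.0, -1.0    # high-impedance thin bed: peak over trough
    r[200] = -1.0                 # high-to-low impedance step: symmetric trough
    clean = np.convolve(r, ricker(), mode="same")
    return clean + noise * rng.standard_normal((n_traces, n_samples))

# Low, medium, and higher noise versions of the 100 training traces (levels assumed).
low, medium, high = (make_traces(noise=s) for s in (0.02, 0.1, 0.3))
```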

Let’s take a look at the results in attribute space. In other words, for every seismic trace, we’ll have the amplitude and we’ll also have its Hilbert transform. And we plot every sample then in attribute space; because there are only two, the amplitude and the Hilbert transform, each sample, of course, plots as a single point. And we have then a sample here, which would be a plus. And after SOM, we have a winning neuron. And that’s with 100 seismic samples for low levels of noise, medium levels of noise, and higher levels of noise.
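
One way to compute that two-attribute pair, sketched here with SciPy's analytic-signal routine (not necessarily how it is done in Paradise): the real part is the original amplitude and the imaginary part is its Hilbert transform, so every time sample becomes a point in a two-dimensional attribute space.

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_and_quadrature(trace):
    """trace: one seismic trace as a 1-D array (e.g. a row from make_traces above)."""
    analytic = hilbert(trace)            # analytic signal: trace + i * Hilbert transform
    amp = np.real(analytic)              # the original amplitude
    quad = np.imag(analytic)             # the Hilbert transform (90-degree rotated trace)
    return np.column_stack([amp, quad])  # each sample becomes a point in 2-D attribute space
```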

And you can see here, in all cases, even for the higher level of noise, we have a cluster of samples in attribute space, which cluster around the seismic sample. Now let’s take a look at this a little closer. Here’s a synthetic seismogram, and I’m talking about a seismogram trace. It’s really got two parts. It’s got the real trace and the Hilbert transform. Here’s the reflection coefficients, single reflection coefficient, the two couplets, and then back to the single again. And we plot these results after the SOM classification, 100 synthetic seismograms with medium noise, and you can see the results here. And then if we go back to the seismic traces themselves and classify those, these are the results.

So if we go to this first one, the symmetric peak event, we know that the value of the real trace is a maximum. So it’s a large amplitude value, and we know that it’s a zero crossing of the Hilbert transform, which is a 90-degree rotation from the real trace. So the Hilbert transform is zero. And what that means is that the Hilbert transform is zero and we have a large positive amplitude. In other words, that sample would plot over here somewhere. We also have, then, the winning neuron classified samples, these two samples right here. And then from those, we can take these over and plot what they would look like on our original seismic data that was used. The winning neuron itself is close to one of the seismic samples, but two samples ended up being classified with the same classification, and they plot over here as two seismic samples dropping off to one, because this is a statistical process, then back to two, then back to one. Not perfect. Why? It’s a statistical model.

Natural clusters arise from the stacking of samples of similar attribute combinations across multiple traces, all in attribute space. As you can see here, the orthogonal pair is amplitude and the Hilbert transform. Let’s move up a notch now and move away from a single synthetic seismogram; with no change in the reflection coefficients, we’ll look at our classic wedge model at low, medium, and high levels of noise. The arrow over here, the white arrow, indicates a tuning thickness. And we can see that as we get below tuning thickness, even the SOMs using a smooth color map or random color map are taking us into regions significantly below tuning thickness. We track this by amplitude with our conventional analysis, or we track it using SOM classifications. And I think you can see that even way below tuning thickness, we have events that are trackable with the SOM process. SOM simply tracks natural clusters. That’s really the key to understanding how this stuff operates.

Next thing we want to take a look at is the amplitude and the Hilbert transform, two attributes. If we add instantaneous attributes, a couple of them, these are the results, and we continue to add more, up to seven attributes, and we have those results. We’re still tracking something similar, but now we have a mixture of the two orthogonals, plus similar attributes, which are instantaneous attributes. And our SOM solution has not gone away. What we have is an attribute space. Our natural clusters are continuing to arise from the stacking process. The stacking process of multi-attribute seismic samples that have similar combinations of attributes, because each of those natural clusters will be in different portions of attribute space. It’s easy to see on a piece of paper with the orthogonal amplitude and the Hilbert transform, but even in higher dimensional space, three or five or eight, natural clusters are formed, and it is the SOM process which moves towards identifying where those natural clusters are in the attribute space.

Here’s one, even with 27 attributes, take a look at that. The SOM winning neurons track patterns of amplitudes as thin as individual seismic samples. We are not saying, though, that lateral changes in the reflection coefficients don’t have any effect. Of course they do. Lateral changes in the reflection coefficients will disrupt the patterns and move to something else, but if there are no lateral changes in the attribute properties, then they stack together and form a natural cluster in attribute space. Now I’ve demonstrated that on a previous slide.

Well, what is the effect of stacking in attribute space? And for all geophysicists, and a lot of geologists, and a lot of earth scientists, let’s face facts. Stacking is a wonderful process. If it wasn’t for the brilliant stacking effects of concentrating information, we would be in a world of hurt, as they would say. Well, let’s take a look at what is the effect of attribute stacking for two instantaneous attributes? And then even the one where we had 27 instantaneous attributes. So we could look at the sample counts. How many samples were associated with each of these winning neurons, relative density, and a minimum and maximum density? I think you can see from what’s shown here, these are called neuron topology maps. And we’re looking at these two rows here, two instantaneous attributes and 27, and we see the effect of a higher dimensionality in the attribute space. The information is more spread out; more attributes of a similar kind now lead to a distribution of our information over broader areas.

What does that mean? It means [inaudible] simply more detailed. Orthogonal attributes are not required for this process, but certainly a good deal of discretion does need to be used, which we’ll take a look at a little bit later. What is the evidence of self-organization? Well, pretty simple to demonstrate that. This is the topic of super clusters. We’ve taken from the Stratton field 3D survey, which is public domain, there is a well number nine that has a VSP. We prefer to call this the Lena line that passes through well nine. It’s the line 79, and right at trace 89 and the middle [inaudible] is between 1.329, 1.585. We take that Lena line, a single line out of the 3D survey, and run the SOM analysis. In this case, it’s a 12 by 12 neural network topology. And this is the data then that we pass into a second cluster analysis of these 144 SOM winning neurons and we get this graph over here on the right.

For each of these statistical models, we take this as our input data. And we say, “Well, pretend that we had only two clusters out of 144 samples.” And from that, we get a number for the goodness of fit. And then we do it for 3, 4, 5, 6. I use [inaudible], which I suspect that some of you are familiar with. Each of these statistical models has an estimate of fit. And that’s the Akaike information criterion, which is the one that we use. And we see that the error decreases, the AIC decreases. And we go down to this point, which we marked with a little red triangle there, and we statistically estimate that this is our best super cluster statistical model. And for this one, we would say that of these 144 samples, there are really about 12 clusters of information.
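
A hedged sketch of that model-selection step, using a Gaussian mixture as a stand-in for the second-stage clustering since the talk does not name the exact algorithm: fit models with different cluster counts and keep the one with the lowest Akaike information criterion (AIC).

```python
from sklearn.mixture import GaussianMixture

def best_cluster_count(winning_neurons, k_min=2, k_max=20, seed=0):
    """winning_neurons: array of trained SOM neuron weights, e.g. (144, n_attributes)."""
    aics = {}
    for k in range(k_min, k_max + 1):
        gm = GaussianMixture(n_components=k, random_state=seed).fit(winning_neurons)
        aics[k] = gm.aic(winning_neurons)       # lower AIC = better statistical model
    best_k = min(aics, key=aics.get)            # e.g. ~12 super clusters in the talk's example
    return best_k, aics
```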

Let’s take a look at those. If we have 12, [inaudible] means 12 natural clusters of our SOM groups, let’s take a look at the results. We see here some dark brown, there’s an area up here, there’s some purple, another purple over there, kind of a light blue, kind of a darker blue, red, purple, things like that. Each one of these 12 is a color. Now, if we were to look at our statistical estimate, which we’ll call a max min estimate, and we can look at this one as one solution, but we can also look at the situation where it’s a little bit under classified, whenever we say, “Well, let’s look at the solution when there were 11 groups rather than 12. And we can also, of course, look at the one that’s a little bit over classified.” If we think 12 is the optimum, 11 would be a little bit under classified, and 13 would be a little bit over classified. Each of these three, we can statistically estimate to see which one is our statistical robust one.

It’s the one with a minimum AIC. And we choose to use the ones that are monotonically decreasing. Well, here’s our evidence of self-organization. All three of these super cluster models are demonstrating self-organization of the SOM solutions themselves without any guesstimating from you or I, this is our estimate of self-organization. I think it ought to be pretty clear to you, pretty obvious, that indeed, certain portions, certain regions of the SOM solution have natural association of similar properties.

Now I’d like to take the last part of this presentation, and I’d like to spend a couple of minutes showing you some results to show that these unsupervised neural network solutions can be correlated and compared with our seismic data. And in this example right here, we are calibrating by comparing the SOM’s statistical model with some borehole measurements. And indeed, this is a [inaudible] play in Colorado, and the best reservoirs, where there’s a fractured chalk situation, correlate with this dark red neuron here, which correlates well. Now, these statistical calculations of the unsupervised work had nothing to do with the well data, and both the best reservoir and the areas of total organic content in a number of these reservoirs were shown to be fairly well correlated. It’s not perfect, of course, but it is getting us closer to what we’re interested in.

Our second example here is a gas reservoir, starting off with a far-angle stack. And we have these attributes: amplitude, normalized amplitude, instantaneous phase, envelope, Hilbert transform, and sweetness, and we use PCA to help us sort out, as an interpretation step, the selection of attributes that we would use to solve a particular geologic question. As another important part of the interpretation, we choose for the SOM a five-by-five neuron topology, and that is the one which best ties the wells. Yes, that’s part of our interpretation; the statistical models are all built by our machine learning tools. And you can see here, circled in yellow, where the reservoirs are for this arbitrary line. And you can see they correlate fairly well with one of the winning neurons, which we color in yellow as well.

In three dimensions, we have this situation. The arbitrary line starts over here on W2 and ends up over here on W7. Now, starting with the classification, we can calculate statistically derived geobodies. We auto-picked a particular sample of a particular classification, and a closed body is the geobody with that same classification. So for example, this green geobody right here has 12,683 samples; with an interval velocity and [inaudible] ratio with [inaudible] and an estimate of water saturation, we end up with a hydrocarbon pore volume estimate of this geobody. The reservoir compartmentalization, which has been predicted with our seismic interpretation and our selection of parameters, can then be correlated with a reservoir test to see if this is a good match or not.
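
A back-of-the-envelope version of that geobody volumetric, with the cell size, porosity, and water saturation assumed purely for illustration: hydrocarbon pore volume is the sample count times the rock volume each sample represents, times porosity, times one minus water saturation.

```python
# All cell geometry and rock properties below are assumed values for illustration only.
n_samples   = 12683        # samples in the green geobody (from the talk)
cell_volume = 25 * 25 * 3  # m^3 per sample: 25 m x 25 m bin, 3 m vertical interval (assumed)
porosity    = 0.20         # assumed effective porosity
sw          = 0.35         # assumed water saturation

bulk_volume = n_samples * cell_volume
hcpv = bulk_volume * porosity * (1.0 - sw)   # hydrocarbon pore volume, m^3
print(f"HCPV = {hcpv:,.0f} m^3")
```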

Our third and last example here is from the Openake amplitude section on the left. We’re using a combination of deep learning, some fault cleanup steps, and then passing it off to the SOM. In this case, these four winning neurons, 1, 2, 9, and 10, correlated with our fault patterns shown over on the right, just those four neurons, one in the middle. You can see how they correlate well with the faults. And there was absolutely no manual picking done here, so our machine learning calculations were strictly unbiased; there was no bias due to our training. When we combine these tools, our deep learning is based upon three fault models, as well as some other appropriate attributes, which were used. And now we look at the same results here in 3D. And as I already mentioned this morning, this was an example of semi-supervised machine learning.

It’s where the geoscientist and the machine learning are both used to take us to a solution which offers better predictions. Manual 2D fault picking in this particular case would be hard-pressed to handle this situation, because as you can see here, these faults are running at about 45 degrees to either the inline or the crossline. 3D fault modeling has done a pretty good job. And as I said, we can take these and create geobodies. And if you look in the vertical plane here, you can see that some of these geobodies up in the shallow part right here are tying into the fault planes, and then down deeper, there are others as well. And these are the statistics over here in the bottom part, the geobody counts for the fault classifications.

And finally, my last slide: in the Canning Basin, shifting to a slightly different area, machine learning of the fault patterns covers a large area of growth faulting, which you can see over on the right here. Not only are we picking the faults using machine learning analysis, we see not only the details, but we also have an expression of the three orogens of the two plates, the Australian plate and the Pacific plate: the lower one, middle one, and an upper one right there. So we are using machine learning effectively here, as trying to do this by hand-picking would be really difficult and time consuming, frankly.

So the messages of this are that we maintain classification is the queen of statistics because it’s an exact statistical estimate. SOM classifications are statistical models based upon multi-attribute seismic surveys. And we show that the natural clusters in attribute space can be classified and related directly with seismic reflections. Certainly, we have more complicated geology than a wedge model. And what we’re looking at is taking things that correlate at the well and looking to see how those patterns continue away from the wells. [crosstalk].

Speaker 2: 

But we have four minutes remaining, just letting you know.

Dr. Tom Smith: 

We’re just wrapping it up. This stuff has been available for about eight years now. It’s been used in hundreds of seismic interpretations in many places around the world, and it is only one machine learning algorithm that’s a benchmark tool in this evolving seismic interpretation assisted by a geostatistical workbench. With that, I’m done. Thank you, Matt. Sorry I-


    Deborah Sacrey
    Owner - Auburn Energy

    How to Use Paradise to Interpret Carbonate Reservoirs

    The key to understanding Carbonate reservoirs in Paradise starts with good synthetic ties to the wavelet data. If one is not tied correctly, then it will be very easy to mis-interpret the neurons as reservoir, when they are not. Secondly, the workflow should utilize Principal Component Analysis to better understand the zone of interest and the attributes to use in the SOM analysis. An important part to interpretation is understanding “Halo” and “Trailing” neurons as part of the stack around a reservoir or potential reservoir. Usually, one sees this phenomenon around deep, pressured gas reservoirs, but it can happen in shallow reservoirs as well. Two case studies are presented to emphasize the importance of looking for halo or trailing patterns around good reservoirs. One is a deep Edwards example in south central Texas, and the other a shallow oil reservoir in the Austin Chalk in the San Antonio area. Another way to help enhance carbonate reservoirs is through Spectral Decomposition. A case history is shown in the Smackover in Alabama to highlight and focus on an oolitic shoal reservoir which tunes at a specific frequency in the best wells. Not all carbonate porosity is at the top of the deposition. A case history will be discussed looking for porosity in the center portion of a reef in west Texas. And finally, one of the most difficult interpretation challenges in the carbonate spectrum is correctly mapping the interface between two carbonate layers. A simple technique is shown to help with that dilemma, by using few attributes and a low-topology count to understand regional depositional sequences. This example is from the Delaware Basin in southeastern New Mexico.

    Dr. Carrie Laudon
    Senior Geophysical Consultant

    Applying Unsupervised Multi-Attribute Machine Learning for 3D Stratigraphic Facies Classification in a Carbonate Field, Offshore Brazil

    We present results of a multi-attribute, machine learning study over a pre-salt carbonate field in the Santos Basin, offshore Brazil. These results test the accuracy and potential of Self-organizing maps (SOM) for stratigraphic facies delineation. The study area has an existing detailed geological facies model containing predominantly reef facies in an elongated structure.

    Carrie Laudon
    Senior Geophysical Consultant - Geophysical Insights

    Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in Machine Learning (ML) are transforming seismic analysis. Using these new tools, geoscientists can accomplish the following quickly and effectively:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Generate seismic volumes that capture structural and stratigraphic details

    Join us for ‘Lunch & Learn’ sessions daily at 11:00 where Dr. Carolan (“Carrie”) Laudon will review the theory and results of applying a combination of machine learning tools to obtain the above results. A detailed agenda follows.

    Agenda

    Automated Fault Detection using 3D CNN Deep Learning

    • Deep learning fault detection
    • Synthetic models
    • Fault image enhancement
    • Semi-supervised learning for visualization
    • Application results
      • Normal faults
      • Fault/fracture trends in complex reservoirs

    Demo of Paradise Fault Detection Thoughtflow®

    Stratigraphic analysis using machine learning with fault detection

    • Attribute Selection using Principal Component Analysis (PCA)
    • Multi-Attribute Classification using Self-Organizing Maps (SOM)
    • Case studies – stratigraphic analysis and fault detection
      • Fault-karst and fracture examples, China
      • Niobrara – Stratigraphic analysis and thin beds, faults

    Thomas Chaparro
    Senior Geophysicist - Geophysical Insights

    Paradise: A Day in The Life of the Geoscientist

    Over the last several years, the industry has invested heavily in Machine Learning (ML) for better predictions and automation. Dramatic results have been realized in exploration, field development, and production optimization. However, many of these applications have been single use ‘point’ solutions. There is a growing body of evidence that seismic analysis is best served using a combination of ML tools for a specific objective, referred to as ML Orchestration. This talk demonstrates how the Paradise AI workbench applications are used in an integrated workflow to achieve results superior to traditional interpretation methods or single-purpose ML products. Using examples that combine ML-based Fault Detection and Stratigraphic Analysis, the talk will show how the interpreter produces value for exploration and field development by leveraging ML orchestration.

    Aldrin Rondon
    Senior Geophysical Engineer - Dragon Oil

    Machine Learning Fault Detection: A Case Study

    An innovative Fault Pattern Detection Methodology has been carried out using a combination of Machine Learning Techniques to produce a seismic volume suitable for fault interpretation in a structurally and stratigraphically complex field. Through theory and results, the main objective was to demonstrate that a combination of ML tools can generate superior results in comparison with traditional attribute extraction and data manipulation through conventional algorithms. The ML technologies applied are a supervised, deep learning, fault classification followed by an unsupervised, multi-attribute classification combining fault probability and instantaneous attributes.

    Thomas Chaparro
    Senior Geophysicist - Geophysical Insights

    Thomas Chaparro is a Senior Geophysicist who specializes in training and preparing AI-based workflows. Thomas also has experience as a processing geophysicist in 2D and 3D seismic data processing. He has participated in projects in the Gulf of Mexico, offshore Africa, the North Sea, Australia, Alaska, and Brazil.

    Thomas holds a bachelor’s degree in Geology from Northern Arizona University and a Master’s in Geophysics from the University of California, San Diego. His research focus was computational geophysics and seismic anisotropy.

    Aldrin Rondon
    Senior Geophysical Engineer - Dragon Oil

    Bachelor’s Degree in Geophysical Engineering from Central University in Venezuela with a specialization in Reservoir Characterization from Simon Bolivar University.

    Over 20 years exploration and development geophysical experience with extensive 2D and 3D seismic interpretation including acquisition and processing.

    Aldrin spent his formative years working on exploration activity in PDVSA Venezuela, followed by a period working for a major international consultant company in the Gulf of Mexico (Landmark, Halliburton) as a G&G consultant. Latterly he was working at Helix in Scotland, UK on producing assets in the Central and South North Sea. From 2007 to 2021, he worked as a Senior Seismic Interpreter in Dubai, involved in different dedicated development projects in the Caspian Sea.

    Deborah Sacrey
    Owner - Auburn Energy

    How to Use Paradise to Interpret Clastic Reservoirs

    The key to understanding Clastic reservoirs in Paradise starts with good synthetic ties to the wavelet data. If one is not tied correctly, then it will be easy to mis-interpret the neurons as reservoir, when they are not. Secondly, the workflow should utilize Principal Component Analysis to better understand the zone of interest and the attributes to use in the SOM analysis. An important part to interpretation is understanding “Halo” and “Trailing” neurons as part of the stack around a reservoir or potential reservoir. Deep, high-pressured reservoirs often “leak” or have vertical percolation into the seal. This changes the rock properties enough in the seal to create a “halo” effect in SOM. Likewise, the frequency changes of the seismic can cause a subtle “dim-out”, not necessarily observable in the wavelet data, but enough to create a different pattern in the Earth in terms of these rock property changes. Case histories for Halo and trailing neural information include a deep, pressured, Chris R reservoir in Southern Louisiana, Frio pay in Southeast Texas and AVO properties in the Yegua of Wharton County. Additional case histories to highlight interpretation include thin-bed pays in Brazoria County, including updated information using CNN fault skeletonization. Continuing the process of interpretation is a case history in Wharton County on using Low Probability to help explore Wilcox reservoirs. Lastly, a look at using Paradise to help find sweet spots in unconventional reservoirs like the Eagle Ford, a case study provided by Patricia Santigrossi.

    Mike Dunn
    Sr. Vice President of Business Development

    Machine Learning in the Cloud

    Machine Learning in the Cloud will address the capabilities of the Paradise AI Workbench, featuring on-demand access enabled by the flexible hardware and storage facilities available on Amazon Web Services (AWS) and other commercial cloud services. Like the on-premise instance, Paradise On-Demand provides guided workflows to address many geologic challenges and investigations. The presentation will show how geoscientists can accomplish the following workflows quickly and effectively using guided ThoughtFlows® in Paradise:

    • Identify and calibrate detailed stratigraphy using seismic and well logs
    • Classify seismic facies
    • Detect faults automatically
    • Distinguish thin beds below conventional tuning
    • Interpret Direct Hydrocarbon Indicators
    • Estimate reserves/resources

    Attend the talk to see how ML applications are combined through a process called "Machine Learning Orchestration," proven to extract more from seismic and well data than traditional means.

    Sarah Stanley
    Senior Geoscientist

    Stratton Field Case Study – New Solutions to Old Problems

    The Oligocene Frio gas-producing Stratton Field in south Texas is a well-known field. Like many onshore fields, the productive sand channels are difficult to identify using conventional seismic data. However, the productive channels can be easily defined by employing several Paradise modules, including unsupervised machine learning, Principal Component Analysis, Self-Organizing Maps, 3D visualization, and the new Well Log Cross Section and Well Log Crossplot tools. The Well Log Cross Section tool generates extracted seismic data, including SOMs, along the Cross Section boreholes and logs. This extraction process enables the interpreter to accurately identify the SOM neurons associated with pay versus neurons associated with non-pay intervals. The reservoir neurons can be visualized throughout the field in the Paradise 3D Viewer, with Geobodies generated from the neurons. With this ThoughtFlow®, pay intervals previously difficult to see in conventional seismic can finally be visualized and tied back to the well data.

    Laura Cuttill
    Practice Lead, Advertas

    Young Professionals – Managing Your Personal Brand to Level-up Your Career

    No matter where you are in your career, your online “personal brand” has a huge impact on providing opportunity for prospective jobs and garnering the respect and visibility needed for advancement. While geoscientists tackle ambitious projects, publish in technical papers, and work hard to advance their careers, often, the value of these isn’t realized beyond their immediate professional circle. Learn how to…

    • Communicate who you are to high-level executives in exploration and development
    • Avoid common social media pitfalls
    • Optimize your online presence to best garner attention from recruiters
    • Stay relevant
    • Create content of interest
    • Establish yourself as a thought leader in your given area of specialization

    Laura Cuttill
    Practice Lead, Advertas

    As a 20-year marketing veteran in oil and gas and a serial entrepreneur, Laura has deep experience in bringing technology products to market and growing sales pipeline. Armed with a marketing degree from Texas A&M, she began her career doing technical writing for Schlumberger and ExxonMobil in 2001. She started Advertas as a co-founder in 2004 and began to leverage her upstream experience in marketing. In 2006, she co-founded the cyber-security software company, 2FA Technology. After growing 2FA from a startup to 75% market share in target industries, and the subsequent sale of the company, she returned to Advertas to continue working toward the success of her clients, such as Geophysical Insights. Today, she guides strategy for large-scale marketing programs, manages project execution, cultivates relationships with industry media, and advocates for data-driven, account-based marketing practices.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Statistical Calibration of SOM results with Well Log Data (Case Study)

    The first stage of the proposed statistical method has proven to be very useful in testing whether or not there is a relationship between two qualitative variables (nominal or ordinal) or categorical quantitative variables, in the fields of health and social sciences. Its application in the oil industry allows geoscientists not only to test dependence between discrete variables, but to measure their degree of correlation (weak, moderate or strong). This article shows its application to reveal the relationship between a SOM classification volume of a set of nine seismic attributes (whose vertical sampling interval is three meters) and different well data (sedimentary facies, Net Reservoir, and effective porosity grouped by ranges). The data were prepared to construct the contingency tables, where the dependent (response) variable and independent (explanatory) variable were defined, the observed frequencies were obtained, and the frequencies that would be expected if the variables were independent were calculated; then the difference between the two magnitudes was studied using the contrast statistic called Chi-Square. The second stage implies the calibration of the SOM volume extracted along the wellbore path through statistical analysis of the petrophysical properties VCL, PHIE, and SW for each neuron, which made it possible to identify the neurons with the best petrophysical values in a carbonate reservoir.
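
    A minimal sketch of the first, contingency-table stage described above, using SciPy's chi-square test on made-up counts (the neuron classes and well categories shown are hypothetical, not from the case study):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = SOM winning-neuron classes along the wellbore,
# columns = well-derived categories (e.g. Net Reservoir vs. non-reservoir). Counts are illustrative.
observed = np.array([
    [42,  8],   # neuron class A
    [15, 30],   # neuron class B
    [ 5, 40],   # neuron class C
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(chi2, p_value, dof)
# A small p-value suggests the SOM classes and the well categories are not independent.
```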

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Heather Bedle received a B.S. (1999) in physics from Wake Forest University, and then worked as a systems engineer in the defense industry. She later received an M.S. (2005) and a Ph.D. (2008) from Northwestern University. After graduate school, she joined Chevron and worked as both a development geologist and geophysicist in the Gulf of Mexico before joining Chevron’s Energy Technology Company Unit in Houston, TX. In this position, she worked with the Rock Physics from Seismic team analyzing global assets in Chevron’s portfolio. Dr. Bedle is currently an assistant professor of applied geophysics at the University of Oklahoma’s School of Geosciences. She joined OU in 2018, after instructing at the University of Houston for two years. Dr. Bedle and her student research team at OU primarily work with seismic reflection data, using advanced techniques such as machine learning, attribute analysis, and rock physics to reveal additional structural, stratigraphic and tectonic insights of the subsurface.

    Jie Qi
    Research Geophysicist

    An Integrated Fault Detection Workflow

    Seismic fault detection is one of the top critical procedures in seismic interpretation. Identifying faults is significant for characterizing and finding potential oil and gas reservoirs. Seismic amplitude data exhibiting good resolution and a high signal-to-noise ratio are key to identifying structural discontinuities using seismic attributes or machine learning techniques, which in turn serve as input for automatic fault extraction. Deep learning Convolutional Neural Networks (CNN) perform well on fault detection without any human-computer interactive work. This study shows an integrated CNN-based fault detection workflow to construct fault images that are sufficiently smooth for subsequent automatic fault extraction. The objectives were to suppress noise or stratigraphic anomalies subparallel to reflector dip, and sharpen fault and other discontinuities that cut reflectors, preconditioning the fault images for subsequent automatic extraction. A 2D continuous wavelet transform-based acquisition footprint suppression method was applied time slice by time slice to suppress wavenumber components, to avoid the CNN fault detection method interpreting the acquisition footprint as artifacts. To further suppress cross-cutting noise as well as sharpen fault edges, a principal component edge-preserving structure-oriented filter is also applied. The conditioned amplitude volume is then fed to a pre-trained CNN model to compute fault probability. Finally, a Laplacian of Gaussian filter is applied to the original CNN fault probability to enhance fault images. The resulting fault probability volume compares favorably with traditional human-interpreter picks generated on vertical slices through the seismic amplitude volume.

    Dr. Jie Qi
    Research Geophysicist

    An integrated machine learning-based fault classification workflow

    We introduce an integrated machine learning-based fault classification workflow that creates fault component classification volumes that greatly reduce the burden on the human interpreter. We first compute a 3D fault probability volume from pre-conditioned seismic amplitude data using a 3D convolutional neural network (CNN). However, the resulting “fault probability” volume delineates other non-fault edges such as angular unconformities, the base of mass transport complexes, and noise such as acquisition footprint. We find that image processing-based fault discontinuity enhancement and skeletonization methods can enhance the fault discontinuities and suppress many of the non-fault discontinuities. Although each fault is characterized by its dip and azimuth, these two properties are discontinuous at azimuths of φ=±180° and, for near-vertical faults, at azimuths φ and φ+180°, requiring them to be parameterized as four continuous geodetic fault components. These four fault components as well as the fault probability can then be fed into a self-organizing map (SOM) to generate fault component classification. We find that the final classification result can segment fault sets trending in interpreter-defined orientations and minimize the impact of stratigraphy and noise by selecting different neurons from the SOM 2D neuron color map.

    Ivan Marroquin
    Senior Research Geophysicist

    Connecting Multi-attribute Classification to Reservoir Properties

    Interpreters rely on changes in seismic patterns to identify and map geologic features of importance. The ability to recognize such features depends on seismic resolution and the characteristics of the seismic waveforms. With the advancement of machine learning algorithms, new methods for interpreting seismic data are being developed. Among these algorithms, the self-organizing map (SOM) provides a different approach to extracting geological information from a set of seismic attributes.

    SOM approximates the input patterns by a finite set of processing neurons arranged in a regular 2D grid of map nodes, classifying multi-attribute seismic samples into natural clusters in an unsupervised manner. Because the learning is unsupervised and unbiased, the classifications can contain both geological information and coherent noise, and seismic interpretation evolves into broader geologic perspectives. Additionally, SOM partitions the multi-attribute samples without a priori information (e.g., well data) to guide the process.
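
    As a rough, self-contained illustration of the idea (a minimal NumPy sketch, not the Paradise implementation), the code below trains a small SOM on a matrix of multi-attribute seismic samples and assigns each sample to its winning neuron. The grid size, learning rate, and iteration count are assumptions chosen for readability.

    import numpy as np

    def train_som(samples, grid=(8, 8), n_iter=10_000, lr0=0.5, sigma0=3.0, seed=0):
        """Train a SOM on a (n_samples, n_attributes) array and return neuron weights."""
        rng = np.random.default_rng(seed)
        n, d = samples.shape
        rows, cols = grid
        weights = rng.normal(size=(rows * cols, d))                     # neuron weight vectors
        coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
        for t in range(n_iter):
            x = samples[rng.integers(n)]
            # Winning neuron = closest weight vector in attribute space.
            win = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Decay learning rate and neighborhood radius over time.
            frac = t / n_iter
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 0.5
            # Pull the winner and its grid neighbors toward the sample.
            dist2 = np.sum((coords - coords[win]) ** 2, axis=1)
            h = np.exp(-dist2 / (2.0 * sigma ** 2))[:, None]
            weights += lr * h * (x - weights)
        return weights

    def classify(samples, weights):
        """Assign each multi-attribute sample to the index of its winning neuron."""
        d = np.linalg.norm(samples[:, None, :] - weights[None, :, :], axis=2)
        return np.argmin(d, axis=1)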

    The SOM output is a new seismic attribute volume in which geologic information is captured by the classification into winning neurons. Implicit and useful geological information is uncovered through interactive visual inspection of the winning neuron classifications. By doing so, interpreters build a classification model that helps them gain insight into the complex relationships between attribute patterns and geological features.

    Despite these benefits, an interpretation challenge remains: establishing whether there is an association between winning neurons and geological features. To address this issue, a bivariate statistical approach is proposed. To evaluate the analysis, three case studies are presented. In each case, the association between winning neurons and net reservoir (determined from petrophysical or well log properties) at well locations is analyzed. The results show that the statistical analysis not only aids in the identification of classification patterns but, more importantly, that reservoir/non-reservoir classification by classical petrophysical analysis strongly correlates with selected SOM winning neurons. Confidence in the interpreted classification features is gained at the borehole, and the interpretation is readily extended away from the well as geobodies.
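
    The bivariate statistical idea can be sketched as follows: build a contingency table of SOM winning neurons against a reservoir/non-reservoir flag at the well locations and test the association, here with a chi-square test. The neuron labels and flags below are made-up values, and the chi-square test stands in for, rather than reproduces, the exact statistic used in the study.

    import pandas as pd
    from scipy.stats import chi2_contingency

    # winning_neuron: SOM class at each borehole sample; is_reservoir: petrophysical flag.
    df = pd.DataFrame({
        "winning_neuron": [3, 3, 7, 7, 7, 12, 12, 3, 7, 12],
        "is_reservoir":   [1, 1, 1, 1, 0, 0, 0, 1, 1, 0],
    })
    table = pd.crosstab(df["winning_neuron"], df["is_reservoir"])
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(table)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
    # A small p-value suggests the selected winning neurons are associated with net reservoir.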

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Gas Hydrates, Reefs, Channel Architecture, and Fizz Gas: SOM Applications in a Variety of Geologic Settings

    Students at the University of Oklahoma have been exploring uses of SOM techniques for the last year. This presentation reviews learnings and results from several of these research projects. Two projects investigated the ability of SOMs to aid in the identification of pore-space materials, qualitatively identifying gas hydrates and under-saturated gas reservoirs. A third study investigated individual attributes and SOMs for recognizing various carbonate facies in a pinnacle reef in the Michigan Basin. A fourth study took a deep dive into various machine learning algorithms, of which SOMs will be discussed, to understand how much machine learning can aid in the identification of deepwater channel architectures.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Fabian Rada joined Petroleum Oil and Gas Services, Inc. (POGS) in January 2015 as Business Development Manager and Consultant to PEMEX. In Mexico, he has participated in several integrated oil and gas reservoir studies and has consulted with PEMEX Activos and the G&G Technology group on applying the Paradise AI workbench and other tools. Since January 2015, he has been working with Geophysical Insights staff to provide and implement the multi-attribute analysis software Paradise in Petróleos Mexicanos (PEMEX), running a successful pilot test in the Litoral Tabasco Tsimin Xux Asset. Mr. Rada began his career at the Venezuelan National Foundation for Seismological Research, where he participated in several geophysical projects, including seismic and gravity data for microzonation surveys. He then joined China National Petroleum Corporation (CNPC) as a QC Geophysicist and later became Chief Geophysicist in the QA/QC Department. He subsequently moved to a subsidiary of Petróleos de Venezuela (PDVSA) as a member of the QA/QC group and Chief of the Potential Field Methods section. Mr. Rada has also participated in processing land seismic data and in marine seismic and gravity acquisition surveys. He earned a B.S. in Geophysics from the Central University of Venezuela.

    Hal Green, Director, Marketing & Business Development - Geophysical Insights

    Introduction to Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in machine learning (ML) are transforming seismic analysis. Using a combination of ML and deep learning applications in Paradise, geoscientists can quickly and effectively extract greater insights from seismic and well data for these and other objectives:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Overlay fault images on stratigraphic analysis

    This brief introduction will orient you to the technology, with examples of how machine learning is being applied to automate interpretation while generating new insights from the data.

    Sarah Stanley
    Senior Geoscientist and Lead Trainer

    Sarah Stanley joined Geophysical Insights in October 2017 as a geoscience consultant and became a full-time employee in July 2018. Prior to Geophysical Insights, Sarah was employed by IHS Markit in various leadership positions from 2011 until her retirement in August 2017, including Director of US Operations Training and Certification, a role on the Operational Governance Team, and, prior to February 2013, Director of IHS Kingdom Training. Sarah joined SMT in May 2002 and was the Director of Training for SMT until the IHS Markit acquisition in 2011.

    Prior to joining SMT, Sarah was employed by GeoQuest, a subdivision of Schlumberger, from 1998 to 2002. Sarah was also Director of the Geoscience Technology Training Center at North Harris College from 1995 to 1998 and served as a voluntary advisor on geoscience training centers to various geological societies. Sarah has over 37 years of industry experience and has worked as a petroleum geoscientist in various domestic and international plays since August 1981. Her interpretation experience includes tight gas sands, coalbed methane, international exploration, and unconventional resources.

    Sarah holds a Bachelor of Science degree with majors in Biology and General Science and a minor in Earth Science, a Master of Arts in Education, and a Master of Science in Geology from Ball State University, Muncie, Indiana. Sarah is both a Certified Petroleum Geologist and a Registered Geologist in the State of Texas, and she holds teaching credentials in both Indiana and Texas.

    Sarah is a member of the Houston Geological Society and the American Association of Petroleum Geologists, where she currently serves in the AAPG House of Delegates. Sarah is a recipient of the AAPG Special Award, the AAPG House of Delegates Long Service Award, and the HGS President’s Award for her work in advancing training for petroleum geoscientists. She has served on the AAPG Continuing Education Committee and was Chairman of the AAPG Technical Training Center Committee. Sarah has also served as Secretary of the HGS and served two years as Editor of the AAPG Division of Professional Affairs Correlator.

    Dr. Tom Smith
    President & CEO

    Dr. Tom Smith received BS and MS degrees in Geology from Iowa State University. His graduate research focused on a shallow refraction investigation of the Manson astrobleme. In 1971, he joined Chevron Geophysical as a processing geophysicist but resigned in 1980 to complete his doctoral studies in 3D modeling and migration at the Seismic Acoustics Lab at the University of Houston. Upon graduation with a Ph.D. in Geophysics in 1981, he started a geophysical consulting practice and taught seminars in seismic interpretation, seismic acquisition, and seismic processing. Dr. Smith founded Seismic Micro-Technology in 1984 to develop PC software to support training workshops, which subsequently led to the development of the KINGDOM Software Suite for integrated geoscience interpretation, achieving worldwide success.

    The Society of Exploration Geologists (SEG) recognized Dr. Smith’s work with the SEG Enterprise Award in 2000, and in 2010, the Geophysical Society of Houston (GSH) awarded him an Honorary Membership. Iowa State University (ISU) has recognized Dr. Smith throughout his career with the Distinguished Alumnus Lecturer Award in 1996, the Citation of Merit for National and International Recognition in 2002, and the highest alumni honor in 2015, the Distinguished Alumni Award. The University of Houston College of Natural Sciences and Mathematics recognized Dr. Smith with the 2017 Distinguished Alumni Award.

    In 2009, Dr. Smith founded Geophysical Insights, where he leads a team of geophysicists, geologists and computer scientists in developing advanced technologies for fundamental geophysical problems. The company launched the Paradise® multi-attribute analysis software in 2013, which uses Machine Learning and pattern recognition to extract greater information from seismic data.

    Dr. Smith has been a member of the SEG since 1967 and is a professional member of SEG, GSH, HGS, EAGE, SIPES, AAPG, Sigma Xi, SSA, and AGU. Dr. Smith served as Chairman of the SEG Foundation from 2010 to 2013. On January 25, 2016, he was recognized by the Houston Geological Society (HGS) as a geophysicist who has made significant contributions to the field of geology. He currently serves on the SEG President-Elect’s Strategy and Planning Committee and the ISU Foundation Campaign Committee for Forever True, For Iowa State.

    Carrie Laudon, Senior Geophysical Consultant - Geophysical Insights

    Applying Machine Learning Technologies in the Niobrara Formation, DJ Basin, to Quickly Produce an Integrated Structural and Stratigraphic Seismic Classification Volume Calibrated to Wells

    This study will demonstrate an automated machine learning approach for fault detection in a 3D seismic volume. The result combines deep learning convolutional neural networks (CNN) with a conventional data pre-processing step and an image processing-based post-processing approach to produce high-quality fault attribute volumes of fault probability, fault dip magnitude, and fault dip azimuth. These volumes are then combined with instantaneous attributes in an unsupervised machine learning classification, allowing both structural and stratigraphic features to be isolated in a single 3D volume. The workflow is illustrated on a 3D seismic volume from the Denver-Julesburg Basin, and a statistical analysis is used to calibrate the results to well data.
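
    A minimal sketch of the combination step, under stated assumptions: the CNN fault attribute volumes and the instantaneous attributes are flattened into one multi-attribute sample matrix, standardized, and passed to an unsupervised classifier. K-means is used here only as a stand-in for the unsupervised classification named in the abstract, and the attribute names and number of classes are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans

    def classify_volume(attribute_volumes, n_classes=16, seed=0):
        """attribute_volumes: list of same-shape 3D arrays (fault probability, dip, envelope, ...)."""
        shape = attribute_volumes[0].shape
        # One row per voxel, one column per attribute, z-scored so no attribute dominates.
        X = np.stack([v.ravel() for v in attribute_volumes], axis=1).astype(np.float64)
        X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
        labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(X)
        return labels.reshape(shape)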

    Ivan Marroquin
    Senior Research Geophysicist

    Iván Dimitri Marroquín is a 20-year veteran of data science research who has consistently published in peer-reviewed journals and presented at international conferences. Dr. Marroquín received a Ph.D. in geophysics from McGill University, where he conducted and participated in 3D seismic research projects focused on developing interpretation techniques based on seismic attributes and seismic trace shape information to identify significant geological features or reservoir physical properties. Examples of his research include attribute-based modeling to predict coalbed thickness and permeability zones, combining spectral analysis with coherency imaging to enhance interpretation of subtle geologic features, and implementing a visual-based data mining technique on clustering to match seismic trace shape variability to changes in reservoir properties.

    Dr. Marroquín has also conducted ground-breaking research on seismic facies classification and volume visualization. This led to his development of a visual-based framework that determines the optimal number of seismic facies to best reveal meaningful geologic trends in the seismic data. He proposed seismic facies classification as an alternative to data integration analysis to capture geologic information in the form of seismic facies groups. He has also investigated the usefulness of mobile devices for locating, isolating, and understanding the spatial relationships of important geologic features in a context-rich 3D environment. In this work, he demonstrated that mobile devices are capable of performing seismic volume visualization, facilitating the interpretation of imaged geologic features, and that they will eventually allow the visual examination of seismic data anywhere and at any time.

    In 2016, Dr. Marroquín joined Geophysical Insights as a senior researcher, where his efforts have been focused on developing machine learning solutions for the oil and gas industry. For his first project, he developed a novel procedure for lithofacies classification that combines a neural network with automated machine learning methods. In parallel, he implemented a machine learning pipeline to derive cluster centers from a trained neural network. The next step in the project is to correlate the lithofacies classification with the outcome of seismic facies analysis. His other research interests include the application of diverse machine learning technologies for analyzing and discerning trends and patterns in data related to the oil and gas industry.

    Dr. Jie Qi
    Research Geophysicist

    Dr. Jie Qi is a Research Geophysicist at Geophysical Insights, where he works closely with product development and geoscience consultants. His research interests include machine learning-based fault detection, seismic interpretation, pattern recognition, image processing, seismic attribute development and interpretation, and seismic facies analysis. Dr. Qi received a BS (2011) in Geoscience from the China University of Petroleum in Beijing, an MS (2013) in Geophysics from the University of Houston, and a Ph.D. (2017) in Geophysics from the University of Oklahoma, Norman. His experience includes work as a Research Assistant at the University of Houston (2011-2013) and the University of Oklahoma (2013-2017), and a summer internship with Petroleum Geo-Services (PGS) in 2014, where he worked on semi-supervised seismic facies analysis. From 2017 to 2020, he served as a postdoctoral Research Associate in the Attribute-Assisted Seismic Processing and Interpretation (AASPI) consortium at the University of Oklahoma.

    Rocky R. Roden
    Senior Consulting Geophysicist

    The Relationship of Self-Organization, Geology, and Machine Learning

    Self-organization is the nonlinear formation of spatial and temporal structures, patterns, or functions in complex systems (Aschwanden et al., 2018). Simple examples of self-organization include flocks of birds, schools of fish, crystal development, the formation of snowflakes, and fractals. What these examples have in common is the appearance of structure or patterns without centralized control. Self-organizing systems are typically governed by power laws, such as the Gutenberg-Richter law of earthquake frequency and magnitude (log₁₀ N = a − bM, where N is the number of events of magnitude at least M, and a and b are empirical constants). In addition, the time frames of such systems display a characteristic self-similar (fractal) response, in which earthquakes or avalanches, for example, occur over all possible time scales (Baas, 2002).

    Nonlinear dynamic systems and ordered structures in the earth are well known and have been studied for centuries; they appear as sedimentary features, layered and folded structures, stratigraphic formations, diapirs, eolian dune systems, channelized fluvial and deltaic systems, and many more (Budd et al., 2014; Dietrich and Jacob, 2018). Each of these geologic processes and features exhibits patterns formed through the action of undirected local dynamics, a behavior generally termed “self-organization” (Paola, 2014).

    Artificial intelligence, and specifically neural networks, exhibit and reveal self-organization characteristics. The interest in applying neural networks stems from the fact that they are universal approximators for various kinds of nonlinear dynamical systems of arbitrary complexity (Pessa, 2008). A special class of artificial neural network is the aptly named self-organizing map (SOM) (Kohonen, 1982). SOM has been found to identify significant organizational structure, in the form of clusters, from seismic attributes that relate to geologic features (Strecker and Uden, 2002; Coleou et al., 2003; de Matos, 2006; Roy et al., 2013; Roden et al., 2015; Zhao et al., 2016; Roden et al., 2017; Zhao et al., 2017; Roden and Chen, 2017; Sacrey and Roden, 2018; Leal et al., 2019; Hussein et al., 2020; Hardage et al., 2020; Manauchehri et al., 2020). As a consequence, SOM is an excellent machine learning neural network approach that utilizes seismic attributes to help identify self-organization features and define natural geologic patterns that are not easily seen, or not seen at all, in the data.

    Rocky R. Roden
    Senior Consulting Geophysicist

    Rocky R. Roden started his own consulting company, Rocky Ridge Resources Inc., in 2003 and works with several oil companies on technical and prospect evaluation issues. He is also a principal in the Rose and Associates DHI Risk Analysis Consortium and was Chief Consulting Geophysicist with Seismic Micro-Technology. Rocky is a proven oil finder with 37 years in the industry, gaining extensive knowledge of modern geoscience technical approaches.

    Rocky holds a BS in Oceanographic Technology-Geology from Lamar University and an MS in Geological and Geophysical Oceanography from Texas A&M University. As Chief Geophysicist and Director of Applied Technology for Repsol-YPF, his role comprised advising corporate officers, geoscientists, and managers on interpretation, strategy, and technical analysis for exploration and development in offices in the U.S., Argentina, Spain, Egypt, Bolivia, Ecuador, Peru, Brazil, Venezuela, Malaysia, and Indonesia. He has been involved in the technical and economic evaluation of Gulf of Mexico lease sales, farmouts worldwide, and bid rounds in South America, Europe, and the Far East. Previous work experience includes exploration and development at Maxus Energy, Pogo Producing, Decca Survey, and Texaco. Rocky is a member of SEG, AAPG, HGS, GSH, EAGE, and SIPES, and he is a past Chairman of The Leading Edge Editorial Board.

    Bob A. Hardage

    Bob A. Hardage received a PhD in physics from Oklahoma State University. His thesis work focused on high-velocity micro-meteoroid impact on space vehicles, which required trips to Goddard Space Flight Center to do finite-difference modeling on dedicated computers. Upon completing his university studies, he worked at Phillips Petroleum Company for 23 years and was Exploration Manager for Asia and Latin America when he left Phillips. He then moved to Western Atlas and worked for 3 years as Vice President of Geophysical Development and Marketing. He next established a multicomponent seismic research laboratory at the Bureau of Economic Geology and served The University of Texas at Austin as a Senior Research Scientist for 28 years. He has published books on VSP, cross-well profiling, seismic stratigraphy, and multicomponent seismic technology. He was the first person to serve 6 years on the Board of Directors of the Society of Exploration Geophysicists (SEG): his Board service comprised 2 years as SEG Editor, followed by 1-year terms as First Vice President, President-Elect, President, and Past President. SEG has awarded him a Special Commendation, Life Membership, and Honorary Membership. He wrote the AAPG Explorer column on geophysics for 6 years, and AAPG honored him with a Distinguished Service Award for promoting geophysics among the geological community.

    Bob A. Hardage

    Investigating the Internal Fabric of VSP data with Attribute Analysis and Unsupervised Machine Learning

    Examination of vertical seismic profile (VSP) data with unsupervised machine learning technology is a rigorous way to compare the fabric of down-going, illuminating P and S wavefields with the fabric of up-going reflections and interbed multiples created by these wavefields. This concept is introduced here by applying unsupervised learning to VSP data to better understand the physics of P and S reflection seismology. The zero-offset VSP data used in this investigation were acquired in a hard-rock, fast-velocity environment that caused the shallowest 2 or 3 geophones to be inside the near-field radiation zone of a vertical-vibrator baseplate. This study shows how to use instantaneous attributes to backtrack the down-going direct-P and direct-S illuminating wavelets to the vibrator baseplate inside the near-field zone. This backtracking confirms that the points-of-origin of direct-P and direct-S are identical. The investigation then applies principal component analysis (PCA) to the VSP data and shows that direct-S and direct-P wavefields created simultaneously at a vertical-vibrator baseplate have the same dominant principal components. A self-organizing map (SOM) approach is then taken to illustrate how unsupervised machine learning describes the fabric of down-going and up-going events embedded in vertical-geophone VSP data. These SOM results show that a small number of specific neurons build the down-going direct-P illuminating wavefield, while another small group of neurons builds the up-going P primary reflections and early-arriving down-going P multiples. The internal attribute fabric of these key down-going and up-going neurons is then compared to expose their similarities and differences. This initial study indicates that unsupervised machine learning, when applied to VSP data, is a powerful tool for understanding the physics of seismic reflectivity at a prospect. This research strategy of analyzing VSP data with unsupervised machine learning will now expand to horizontal-geophone VSP data.
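
    To make the principal component comparison concrete, here is a minimal sketch assuming instantaneous-attribute samples have already been windowed around the direct-P and direct-S arrivals and arranged as sample-by-attribute matrices. The array names, window contents, and the cosine-similarity comparison are illustrative assumptions, not the authors’ exact procedure.

    import numpy as np

    def principal_components(samples: np.ndarray, n_keep: int = 3):
        """Return the first n_keep principal components (rows) of a sample matrix."""
        X = samples - samples.mean(axis=0)            # center each attribute column
        _, s, vt = np.linalg.svd(X, full_matrices=False)
        explained = (s ** 2) / np.sum(s ** 2)         # variance fraction per component
        return vt[:n_keep], explained[:n_keep]

    # Hypothetical windows extracted around the direct arrivals (samples x attributes):
    direct_p = np.random.rand(500, 6)   # placeholder for direct-P attribute samples
    direct_s = np.random.rand(500, 6)   # placeholder for direct-S attribute samples
    pc_p, var_p = principal_components(direct_p)
    pc_s, var_s = principal_components(direct_s)
    # Cosine similarity between the leading components of the P and S wavefields:
    cos_sim = np.abs(pc_p[0] @ pc_s[0]) / (np.linalg.norm(pc_p[0]) * np.linalg.norm(pc_s[0]))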

    Tom Smith
    President and CEO, Geophysical Insights

    Machine Learning for Incomplete Geoscientists

    This presentation covers big-picture machine learning buzz words with humor and unassailable frankness. The goal of the material is for every geoscientist to gain confidence in these important concepts and how they add to our well-established practices, particularly seismic interpretation. Presentation topics include a machine learning historical perspective, what makes it different, a fish factory, Shazam, comparison of supervised and unsupervised machine learning methods with examples, tuning thickness, deep learning, hard/soft attribute spaces, multi-attribute samples, and several interpretation examples. After the presentation, you may not know how to run machine learning algorithms, but you should be able to appreciate their value and avoid some of their limitations.

    Deborah Sacrey
    Owner, Auburn Energy

    Deborah is a geologist/geophysicist with 44 years of oil and gas exploration experience in the Texas and Louisiana Gulf Coast and Mid-Continent areas of the US. She received her degree in Geology from the University of Oklahoma in 1976 and immediately began working for Gulf Oil in its Oklahoma City offices.

    She started her own company, Auburn Energy, in 1990 and built her first geophysical workstation using Kingdom software in 1996. She worked with SMT/IHS for 18 years in developing and testing the Kingdom software. She specializes in 2D and 3D interpretation for clients in the US and internationally. For the past nine years, she has been part of a team working to bring the power of multi-attribute neural analysis of seismic data to the geoscience public, guided by Dr. Tom Smith, founder of SMT. She has become an expert in the use of the Paradise software and has made seven discoveries for clients using multi-attribute neural analysis.

    Deborah has been very active in the geological community. She is past national President of SIPES (Society of Independent Professional Earth Scientists), past President of the Division of Professional Affairs of AAPG (American Association of Petroleum Geologists), past Treasurer of AAPG, and past President of the Houston Geological Society. She is also past President of the Gulf Coast Association of Geological Societies and just ended a term as one of the GCAGS representatives on the AAPG Advisory Council. Deborah is a DPA Certified Petroleum Geologist #4014 and DPA Certified Petroleum Geophysicist #2. She belongs to AAPG, SIPES, the Houston Geological Society, the South Texas Geological Society, and the Oklahoma City Geological Society (OCGS).

    Mike Dunn
    Senior Vice President Business Development

    Michael A. Dunn is an exploration executive with extensive global experience, including the Gulf of Mexico, Central America, Australia, China, and North Africa. Mr. Dunn has a proven track record of successfully executing exploration strategies built on a foundation of new and innovative technologies. Currently, Michael serves as Senior Vice President of Business Development for Geophysical Insights.

    He joined Shell in 1979 as an exploration geophysicist and party chief and held positions of increasing responsibility, including Manager of Interpretation Research. In 1997, he participated in the launch of Geokinetics, which completed an IPO on the AMEX in 2007. His extensive experience with oil companies (Shell and Woodside) and the service sector (Geokinetics and Halliburton) has given him a unique perspective on technology and its applications in oil and gas. Michael received a B.S. in Geology from Rutgers University and an M.S. in Geophysics from the University of Chicago.

    Hal Green, Director, Marketing & Business Development - Geophysical Insights

    Hal H. Green is a marketing executive and entrepreneur in the energy industry with more than 25 years of experience in starting and managing technology companies. He holds a B.S. in Electrical Engineering from Texas A&M University and an MBA from the University of Houston. He has invested his career at the intersection of marketing and technology, with a focus on business strategy, marketing, and effective selling practices. Mr. Green has a diverse portfolio of experience in marketing technology to the hydrocarbon supply chain – from upstream exploration through downstream refining & petrochemical. Throughout his career, Mr. Green has been a proven thought-leader and entrepreneur, while supporting several tech start-ups.

    He started his career as a process engineer in the semiconductor manufacturing industry in Dallas, Texas, and later launched an engineering consulting and systems integration business. Following the sale of that business in the late 1980s, he joined Setpoint in Houston, Texas, where he eventually led that company’s Manufacturing Systems business. Aspen Technology acquired Setpoint in January 1996, and Mr. Green continued as Director of Business Development for the Information Management and Polymer Business Units.

    In 2004, Mr. Green founded Advertas, a full-service marketing and public relations firm serving clients in energy and technology. In 2010, Geophysical Insights retained Advertas as their marketing firm. Dr. Tom Smith, President/CEO of Geophysical Insights, soon appointed Mr. Green as Director of Marketing and Business Development for Geophysical Insights, in which capacity he still serves today.

    Hana Kabazi
    Product Manager

    Hana Kabazi joined Geophysical Insights in October of 201 and is now one of the Product Managers for Paradise. Mrs. Kabazi has over 7 years of oil and gas experience, including 5 years at Halliburton – Landmark. During her time at Landmark, she held positions as a consultant to many E&P companies, technical advisor to the QA organization, and product manager of Subsurface Mapping in DecisionSpace. Mrs. Kabazi has a B.S. in Geology from the University of Texas at Austin and an M.S. in Geology from the University of Houston.

    Dr. Carrie Laudon, Senior Geophysical Consultant - Geophysical Insights

    Carolan (Carrie) Laudon holds a PhD in geophysics from the University of Minnesota and a BS in geology from the University of Wisconsin-Eau Claire. She has been a Senior Geophysical Consultant with Geophysical Insights since 2017, working with Paradise®, the company’s machine learning platform. Prior roles include Vice President of Consulting Services and Microseismic Technology for Global Geophysical Services and 17 years with Schlumberger in technical, management, and sales roles, starting in Alaska and including Aberdeen, Scotland; Houston, TX; Denver, CO; and Reading, England. She spent five years early in her career with ARCO Alaska as a seismic interpreter for the Central North Slope exploration team.