Unsupervised Machine Learning Applied to Direct-P and Converted-P Data – a free webinar, 17/18 February 2021

This webinar features a 45-minute presentation by Dr. Bob Hardage (CV below), a researcher and proponent of the application of Direct-P and Converted-P data since its inception in 2011. The technique is receiving renewed interest with the use of machine learning. An interactive Q&A with Dr. Hardage will follow his presentation.
Title: Unsupervised Machine Learning Applied to Direct-P and Converted-P Data | Presenter: Dr. Bob Hardage | Date: Wednesday/Thursday, 17/18 February 2021
What you will learn in this webinar:
  • Joint interpretation of direct-P and converted-P images of stacked turbidites.
  • Value provided by a long-ignored seismic mode generated by P sources and recorded by vertical geophones – the SV-P (or converted-P) mode.
  • Comparisons between principal components of P-P, P-SV, and SV-P data.
  • Comparisons of self-organized maps (SOMs) of direct-P and converted-P data.
  • SOM examples of damaging data-acquisition footprints.
  • Why direct-P data and converted-P data often do not detect the same rock boundaries.
  • Examples of turbidite attribute fabric defined by specific direct-P and converted-P winning neurons.
First, there are several types of machine learning, so the qualifier “Unsupervised” is important because it tells people what type of machine learning will be discussed.
Second, during the past 10 years, I have frequently found that any reference to “S” data turns off many people who do not want to jump into the arena of S-wave reflection seismology. In contrast, the term “converted-P” (which is what SV-P data are) usually attracts people. In fact, the most common reaction is that people are interested because they have never heard the term “converted-P” and are curious about the nature of the data. So, I now use the term “converted-P” more frequently than “SV-P”.
As a side note, here is an analogy that may help us all understand how SV-P (or converted-P) data and P-SV (or converted-S) data are related. In my younger years, the most famous dancing team in the world was Fred Astaire and Ginger Rogers. Fred Astaire could do beautiful dance steps and was the more famous of the pair. However, Ginger Rogers did the same steps, but did them backwards and in high heels, so it is arguable as to who was the more talented.
Today, P-SV data are popular and respected. SV-P data are the mirror image of P-SV data, but do not have the esteem that P-SV data do. Thus, P-SV data are Fred Astaire, and SV-P data are Ginger Rogers. Whatever people have done to demonstrate the value of P-SV data can also be done with SV-P data. We just have to do everything backwards when processing SV-P data compared to the methodology that is used to process P-SV data. At any rock boundary, an SV-P reflectivity curve exactly follows the P-SV reflectivity curve for that same boundary, except that the magnitudes of the SV-P reflection coefficients are usually about 10 or 15 percent smaller than the magnitude of the P-SV curve. The result is that whatever P-SV data (Fred Astaire) can do, SV-P data (Ginger Rogers) can do the same thing by reversing the ray paths.

Any well log data to confirm the seismic interpretation?

There were 8 or 10 wells inside the image space, but there was a limited amount of log data at the depths of the Wolfberry turbidites. The seismic data used in this study were acquired 20 years ago before there was interest in fracking tight turbidites. Thus, log data acquisition focused on deeper intervals, below the Wolfberry turbidites, that were the drilling targets two decades ago.

How do you interpret by seeing only single SOM slices?

In order to interpret geology, you absolutely must view SOM results in 3D space and not rely on just a few slices through a SOM volume. I used single SOM slices because I was attempting to teach how winning neurons reveal clusters of attributes that look like the three key, large-scale features of turbidites that I wanted people to focus on: (1) map views of linear shearing in the direction that a turbidite system slides into the basin, (2) map views of features oriented orthogonal to the direction of sediment movement that appear to be the front edges of prograding debris flows, and (3) section views of folded units inside the turbidite system. Recognizing these 3 key features is essential to defining the basic internal architecture of debris flows that stacked to vertical thicknesses of 2000 to 5000 ft in the Midland Basin.

Nice image. Clay K and μ moduli may vary too much. Which actual values did you use?

I assume you are referring to slides 24 and 25. The dots on slide 24 show the mineralogy counts made from SEM scans of approximately 150 thin sections cut from cores taken at various depths in the Wolfberry turbidite interval. The thickness of the Wolfberry stacked turbidites varies from 700 m to 1600 m across the basin. These cores were also acquired at widely spaced wells across the 300-km × 100-km area of the Midland Basin. I use slide 24 only to illustrate that the mineralogy mixtures in Wolfberry turbidites vary widely. The 3 choices of mineral mixtures used for the matrix of a numerical turbidite in slide 25 do not represent any particular data point in slide 24. Our rock physicists made 50 or 60 arbitrary choices of possible mineral mixtures and, for each mixture, created a synthetic rock whose stiffness coefficients imitated a medium with that set of minerals evenly distributed throughout the turbidite unit. All stiffness coefficients, including K and μ, were different for each synthetic rock. The rock physicist calculated P-P and SV-P reflection coefficients for each synthetic rock boundary. I picked 3 pairs of reflection-coefficient results that interested me and made slide 25. I did not document the numerical values of K and μ for these 3 choices. I fear the analysis cannot be retrieved; it was done in 2010, and the rock physicist left the university shortly after the SV-P imaging stage of this study was done in 2010-2011.

How do you depth register P-P and SV-P data?

The best option is to use VSP data acquired inside the image space that is being interpreted. VSP data are the only data that are acquired simultaneously as a function of depth and time. Thus, VSP data are the most reliable information as to: (1) the travel time when P and S reflections are created and start their upward journey to the surface, and (2) the exact depths where those upward journeys start.
The second choice, if dipole sonic log data are available to provide Vs velocities, would be to make P and S synthetic seismograms and see what they predict for the arrival times of P and S reflections from a target depth. This method forces you to estimate the average Vp and Vs velocities from the surface down to the depth where dipole sonic-log data begin, which can be challenging. In contrast, the P and S travel times from the surface to the top receiver in a VSP receiver array are captured precisely in VSP data.
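As a simple illustration of this second method, predicted P and S arrival times can be computed from a layered velocity model. The sketch below uses the vertical-incidence approximation; the layer thicknesses, velocities, and the Vp/Vs value of 2 are hypothetical, not values from the study.

```python
def two_way_time(thicknesses_ft, velocities_ftps):
    """Vertical two-way travel time through a stack of flat layers."""
    return 2.0 * sum(h / v for h, v in zip(thicknesses_ft, velocities_ftps))

# Hypothetical 3-layer overburden above a target; Vp/Vs = 2 in every layer.
h = [2000.0, 1500.0, 1000.0]        # layer thicknesses, ft
vp = [8000.0, 10000.0, 12000.0]     # P velocities, ft/s
vs = [v / 2.0 for v in vp]          # S velocities implied by Vp/Vs = 2

tp = two_way_time(h, vp)            # pure-P (P-P) time to the target
ts = two_way_time(h, vs)            # pure-S time; a converted mode
                                    # (P-SV or SV-P) arrives near (tp + ts) / 2
```

With these numbers the predicted P-P time is about 0.97 s and the pure-S time is exactly twice that, which is why P and S reflections from the same boundary sit at very different record times.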
There were no VSP data or any extensive dipole sonic-log data available inside the image space at the site that I discuss. We thus implemented a practice that was often used during the 18 years that my research group at UT interpreted multicomponent seismic data: 2 or 3 of the best interpreters in our group used their knowledge and experience to agree on which 2 or 3 P-P reflections marked the same rock boundaries as 2 or 3 reflections in the S-mode data we were utilizing. Each interpreter presented their arguments, a collective decision was made, and then the S data were dynamically time-warped so that the 2 or 3 key S reflections were time-matched with their interpreted P-P equivalents. This procedure is simply guesswork by experienced interpreters, but we had a string of successes using this method. If you are a lone interpreter, my advice would be to hold a session with 2 or 3 interpreters whom you respect, present your arguments as to which P and S reflections are depth-equivalent, and then let your colleagues present counter-arguments. I predict you will, as a group, come to an acceptable conclusion. The real answer for all of us is provided by the “rotary lie detector” test, which is the drill bit.
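The time-warping step can be sketched with a classic dynamic time warping (DTW) alignment. This is a generic textbook implementation, not the specific software the UT group used, and the two toy “traces” below are purely illustrative.

```python
import numpy as np

def dtw_path(s, p):
    """Classic dynamic time warping between two 1-D traces.
    Returns the accumulated-cost matrix and the optimal warp path."""
    n, m = len(s), len(p)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(s[i - 1] - p[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch s
                                 cost[i, j - 1],      # stretch p
                                 cost[i - 1, j - 1])  # match
    # Backtrack from (n, m) to recover the sample-to-sample alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[1:, 1:], path[::-1]

# Toy example: an "S reflection" trace that is a slowed-down copy of a
# "P reflection" trace (S events arrive later and are stretched in time).
p_trace = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
s_trace = np.array([0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0])
_, path = dtw_path(s_trace, p_trace)
```

In practice the warp would be pinned at the 2 or 3 interpreter-agreed tie reflections rather than run freely over the whole trace; the sketch only shows the mechanics of stretching S time onto P time.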

Why didn’t you run a single SOM on P-P and SV-P?

That would have been an interesting and valuable SOM option to have implemented. I walked away from this data set after getting to the point that I discussed in the Webinar so that I could apply unsupervised machine learning to other seismic data. The field of applications for unsupervised ML in reflection seismology spans analysis of pre-stack data, comparisons of fast-S and slow-S images, vertical seismic profile data, field data acquired with source A and then with source B, and on and on. When you gain access to ML software like that provided by Geophysical Insights, you become a kid in a candy store and run from opportunity to opportunity. Researchers like me can change from study A to study B at a whim. Interpreters who are charged with drilling a specific geographic area do not have this flexibility and have to try every logical way to implement a variety of SOM analyses that focus on a specific reservoir within a specific area. Your suggested option belongs on that list of SOM procedures.

Isn’t it inconsistent that curve 3 is flat in PP (incidence) = null gradient = decrease of Vp/Vs while it is zero for SV-P?

When I saw this result and made slide 25, I had no concerns. I have seen so many instances where a P-P reflection occurs at a rock boundary but its companion S-mode creates no reflection that I no longer get too concerned when I see contrasting P-mode and S-mode behaviors at a target boundary. As you do joint interpretations of P and S data, you will commonly find reflectivity behaviors that correspond to strong P but weak S, strong S but weak P, positive P but negative S, positive S but negative P, zero P yet some S, and zero S yet some P. People who do joint interpretations of P and S data need a talented rock physicist by their side. Only when I worked that way did I begin to fully understand what I saw in P and S images across the same geology, and the rock-physics reasons why reflection differences occurred between the P-P and S-mode images I was examining.

On slide 29, regarding the color of neuron 9 between neurons 1 and 17: is it arbitrary, or a property of SOM mapping? How do you determine the periodicity of 9?

The color of any winning neuron is arbitrary. SOM is concerned only about the “number” that is assigned to each winning neuron, not the “color” that is assigned to that neuron. SOM migrates all neurons through multi-dimensional attribute space until they each locate all of the natural clusters of attributes where there are unique mixtures of certain amounts of certain attributes. All neurons usually complete this “search and find” activity after 60 or so epochs of searching through the entirety of attribute space. When searching ends, each neuron then is a “winning” neuron. At this point, SOM knows the X-Y-Z coordinates where each numbered neuron is located. SOM does not know or care what the color of a winning neuron is. The color is controlled by the software user. You use a color of your choice, not SOM’s choice, to see where each winning neuron found the natural clusters that it sought and what shape and appearance each natural cluster has.
I am not sure what you mean by “periodicity,” so I will not guess.
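The “search and find” behavior described above can be sketched with a toy competitive-learning loop. This is a bare-bones stand-in for a full SOM (it omits the neighborhood function that couples neighboring neurons), and every number in it is illustrative; the point is that training involves only neuron numbers and coordinates, never colors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "attribute space": 300 samples drawn around 3 natural clusters.
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
data = np.vstack([c + 0.3 * rng.standard_normal((100, 2)) for c in centers])

# 3 searching neurons, numbered 0-2, initialized at random data samples.
neurons = data[rng.choice(len(data), size=3, replace=False)].copy()

for epoch in range(60):                      # "60 or so epochs"
    lr = 0.5 * (1.0 - epoch / 60.0)          # decaying learning rate
    for x in rng.permutation(data):
        w = np.argmin(np.linalg.norm(neurons - x, axis=1))  # winning neuron
        neurons[w] += lr * (x - neurons[w])  # move the winner toward the sample

# Each numbered neuron now sits at the coordinates of a natural cluster.
# Nothing above involves color: display color is the interpreter's choice.
qe = np.mean([np.min(np.linalg.norm(neurons - x, axis=1)) for x in data])
```

After training, the only outputs are the neuron numbers and their coordinates (plus a small quantization error `qe`); assigning a color palette to those numbered neurons is a separate, purely cosmetic step.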

In extracting SV-P, how do you minimize cross-feed/leakage from P-P data?

SV-P reflections are not hidden in vertical-geophone data. In contrast, they are prominent, and every data processor has seen them for years. I had seen SV-P reflections for decades and assumed that they were interbedding multiples, just like data processors still do. Because SV-P stacking velocities are slower than P-P stacking velocities, SV-P reflections have greater curvature than do P-P reflections. Whatever procedure that data processors use to isolate large-curvature reflections from small-curvature reflections can be used to isolate P-P reflections and SV-P reflections into separate data-processing streams. These concepts are covered by four U.S. Patents that are owned by the Board of Regents of The University of Texas. The University has created a company named VertiShear to commercialize the technology. VertiShear can provide no-cost licenses to interested data-processing companies. Yes, I wrote this statement correctly – the cost for a data-processing company to offer these patented SV-P data-processing services to their clients is zero.

Any recommendations/tips on processing converted-P data, especially for someone new to utilizing these data?

Locating a proper seismic data-processing company in the U.S. is an increasing problem today (year 2021) because of the widespread collapse of the reflection-seismology profession. Data-processing companies are closing their doors everywhere for lack of business. Any data-processing company that is still operating can implement proper SV-P data-processing procedures by working with VertiShear, the company that was founded by The University of Texas to provide SV-P technology to the public.
Please contact VertiShear at www.vertishear.com for advice and recommendations. Several SV-P technical reports and SV-P image examples are available at this site in the Whitepapers section. These papers provide valuable information and education. I just went to the site and realized that several recently published SV-P studies had not been inserted into this Whitepapers section. I will correct that lapse. Meanwhile, search the Interpretation journal (jointly published by SEG and AAPG) for any publication that shows “Hardage” as the first author. This search will surface the SV-P papers that have not yet been added to the VertiShear website.

I am surprised that the attributes that dominate the information in the direct-P, converted-P, and converted-S data are identical. Any comments about why that occurs?

I, too, was surprised. I anticipated that any mode that traveled as an S-wave on either its down path or its up path would have a different attribute structure than a mode that was a pure P-wave on both its down path and its up path. This initial assumption was based on the fact that the displacement vector of a P mode probes a rock system in a direction that is perpendicular to the direction in which an S-mode displacement vector probes that same rock. My simple assumption was that 2 orthogonal probes would be affected by different rock stiffness coefficients and thus would “probably” react to different rock attributes.
The PCA results shown in this webinar indicate that my anticipation was unfounded, because they show that the attributes that dominate the information content of P-P, P-SV, and SV-P data are identical. I have been studying this PCA behavior by applying ML analyses to VSP data. In these studies, I create (1) SOMs of the down-going illuminating P and S VSP wavefields that are produced by a single vertical vibrator, (2) SOMs of the P and S reflection wavefields that each of these 2 illuminating VSP wavefields create, and (3) SOMs of the interbed multiples that these up-going P and S reflections then create as they travel upward to surface receivers. These SOMs show that when any illuminating wavefield, be it a P wavefield or an S wavefield, creates a reflection (be that reflection either a P or an S reflection), the attribute structure of the illuminating wavefield transfers intact to each reflection that it generates. Similarly, these up-going reflections (be they either P or S) transfer their attributes (which are the same attributes as in the illuminating wavefield) perfectly intact to interbed multiples.
The process is much like the DNA of a multi-generational family. The first generation (the illuminating wavefields) transfer their DNA to the second generation (the up-going reflections), and these up-going reflections transfer the DNA to the third generation (the interbed multiples). When we accept this DNA-type principle, then the issue of “why do P-P, P-SV, and SV-P modes have the same attributes” boils down to one question, which is “why do the down-going P and S wavefields produced by a vertical vibrator (the first-generation wavelets) have identical attribute structures.”
I offer my opinion, which is – “illuminating P and S wavefields produced by a vertical vibrator have the same attribute structure because both wavefields are created simultaneously, by the same vibrator sweep, experience the same baseplate-to-earth coupling and exactly the same ground-stiffness environment during their creation, and are born at exactly the same clock time and at exactly the same location on the earth surface”.
How can I test this hypothesis? One obvious way is to apply PCA and SOM analyses to P and S VSP data collected the old way, in which you use a vertical vibrator to generate the down-going P wavefield but a horizontal vibrator to generate the down-going S wavefield. In this original VSP practice, the down-going P and S illuminating wavefields are produced by different sources, with different baseplate-to-earth couplings, different frequency sweeps, at slightly different baseplate locations, in slightly different stiffness environments, and at different clock times.
My first assumption is that in such a test, P-P and P-SV data will have the same attribute structure because both modes are produced by the same illuminating P wave. My second assumption is that the attribute structure of SV-P data will be different because the SV-P mode was generated by a different source with a different baseplate coupling, different sweep rate, etc. If I am able to do such a ML investigation, we may have to have another Webinar sometime.
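The PCA observation that opened this answer, identical dominant attributes across modes, can be illustrated with a toy calculation: two hypothetical attribute volumes that share one underlying reflectivity signal yield the same attribute ranking on the first principal component. The attribute names, weights, and noise levels below are invented for illustration and are not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
names = ["envelope", "inst_freq", "sweetness", "relative_impedance"]

def dominant_attributes(volume):
    """Rank attributes by |loading| on the first principal component.
    `volume` is (n_samples, n_attributes), one column per attribute."""
    z = volume - volume.mean(axis=0)            # center each attribute
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    order = np.argsort(np.abs(vt[0]))[::-1]     # first-PC loading magnitudes
    return [names[i] for i in order]

# One shared geologic signal drives both modes; the SV-P amplitudes are
# scaled down, mimicking the smaller SV-P reflection coefficients.
signal = rng.standard_normal(500)
weights = np.array([3.0, 2.0, 1.0, 0.5])        # common attribute structure
pp = np.outer(signal, weights) + 0.1 * rng.standard_normal((500, 4))
svp = np.outer(0.85 * signal, weights) + 0.1 * rng.standard_normal((500, 4))

same = dominant_attributes(pp) == dominant_attributes(svp)
```

Because both volumes inherit the same underlying attribute structure, PCA ranks their attributes identically even though the amplitudes differ, which is the “DNA transfer” idea in miniature.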

Why did an acquisition footprint appear in the P-SV data but not in the SV-P data?

The data were acquired 9 years before we implemented this first-ever effort to make a SV-P image from vertical-geophone data. Our research lab at UT was not involved in the acquisition design. The evidence that an acquisition footprint would be encountered in the P-SV data should have been shown by examining appropriate stacking-fold maps that survey designers always make. I have no explanation why the problem was not revealed during the acquisition design. In 2001, when the data were acquired, no one even thought of acquiring SV-P data, so no one made a stacking-fold map to show the fold behavior of SV-P illumination.
A large 200-mi² vertical-geophone survey was implemented by company A, the data owner. Company A had never worked with 3C data, so they decided to lay out a 2-mi × 2-mi grid of 3C geophones in the interior of the large vertical-geophone survey. They then recorded 3C data using a 3-mi × 3-mi grid of source stations from the large survey that was centered on the 2-mi × 2-mi grid of 3C geophones.
Here now is an important insight into P-SV and SV-P imaging. The P-SV image point between a source and a receiver is closer to the receiver than to the source. This principle means that all P-SV reflection points in this instance were concentrated around the small 2-mi × 2-mi receiver area. In contrast, the SV-P image point between a source and a receiver is closer to the source than to the receiver. This fact means that all SV-P reflection points in this instance expanded to fill the large 3-mi × 3-mi source area. The same number of source-receiver pairs was used to make both the P-SV image and the SV-P image. The concentration of N reflection points into a small image area produced a footprint; the expansion of those N reflection points into a large image area avoided one.
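The geometry described above can be sketched with the standard asymptotic conversion-point approximation (flat layers, target depth large compared with offset). The offset and Vp/Vs values below are illustrative, not values from this survey.

```python
def conversion_point_offset(offset, vp_vs, down_leg="P"):
    """Asymptotic conversion-point distance from the SOURCE for a
    converted mode, in the flat-layer, depth >> offset approximation.
    down_leg="P" gives the P-SV mode; down_leg="S" gives the SV-P mode."""
    if down_leg == "P":                 # P down, SV up: point nearer receiver
        return offset * vp_vs / (1.0 + vp_vs)
    return offset / (1.0 + vp_vs)       # SV down, P up: point nearer source

offset, gamma = 6000.0, 2.0             # ft, Vp/Vs (illustrative values)
psv = conversion_point_offset(offset, gamma, "P")  # 2/3 of offset from source
svp = conversion_point_offset(offset, gamma, "S")  # 1/3 of offset from source

# Inverting Vp/Vs in P-SV-style modeling reproduces the SV-P geometry.
trick = conversion_point_offset(offset, 1.0 / gamma, "P")
```

With Vp/Vs = 2, P-SV points crowd toward the receivers (two-thirds of the way along each offset) while SV-P points sit only one-third of the way from the sources, which is exactly why the same source-receiver pairs concentrate in the small receiver grid for P-SV but spread across the larger source grid for SV-P.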

Must the seismic data have particular characteristics, such as sample rate or processing history?

The seismic data used in PCA and SOM analyses can be any type of standard digital seismic data; i.e., the data can be 2D, 3D, pre-stack trace gathers, VSP, etc. Sample rate can be any standard sampling rate used in the seismic community. The data need to be in SEGY format in order to be read by the Paradise software.

How can you explain the difference between P-SV and SV-P? Can you model the differences you see?

I assume that you refer to the 3 slides that show side-by-side horizontal slices through the P-SV and SV-P SOM volumes. The differences in the spatial distributions of P-SV and SV-P winning neurons in these comparisons are strictly a matter of having an acquisition footprint and avoiding an acquisition footprint.
Yes, every seismic data-acquisition company has software that can model the pattern of the stacking fold that a particular source and receiver geometry will create. This software was originally based on analyzing only common-midpoint imaging like P-P data. In the 1990s, our profession began to focus on acquiring P-SV data, so this modeling software was modified to create stacking-fold charts of P-SV reflection points. This modification required that you provide the software the average Vp/Vs velocity ratio across the image area. Proper use of this modeling software should prevent people from implementing a source-receiver geometry that will create an acquisition footprint in P-SV image space.
Here now is a fortunate outcome in this type of modeling. We have found that you can use the standard P-SV modeling software to also calculate SV-P stacking-fold patterns if you simply invert the Vp/Vs velocity ratio that you have to input into the software in order to control where P-SV reflection points occur. In other words, if a Vp/Vs value of 2 produces accurate P-SV stacking-fold charts, then changing that parameter to 0.5 (the inverse of 2) will generate accurate SV-P stacking-fold charts.

Can you please provide a reference for SV-P processing?

To my knowledge, there is no published paper that spells out the specifics of SV-P data processing. Probably the best general description of the data-processing flow is illustrated in the U.S. patents that cover the principles of practicing full-mode S-wave reflection seismology with P sources. These patents can be provided by VertiShear if you address an inquiry to www.vertishear.com. Although these patents will be helpful, a patent has to be written in the most general language possible so that its claims will span across all specific procedures that data processors may use. People usually want specific information; patents have to be written in general language. I have presented Webinars sponsored by the Geophysical Society of Houston that address this topic. Perhaps it is time to do another of those Webinars.
I need to emphasize an important implication of the issue discussed here. The fact that, until 2011, the geophysical literature contained no examples of SV-P images produced by P sources, nor any descriptions of how such imaging can be performed, illustrates how unique the concept of SV-P imaging with P sources is. Immediately after this string of question answering, I will attach a list of published papers that documents what has been distributed publicly about SV-P technology.

What is your experience on detecting faults when using SV-P compared to using P-P waves?

The most recent investigations that I have done have focused on using SV-P data to determine the azimuth of maximum horizontal stress (SHmax). In this case, the sources were buried explosives in shot holes, and the receivers were vertical geophones. There were no faults inside this image space, but if SV-P data can indicate the azimuth of SHmax, then you also know the direction that faults should be oriented if they were present. I am always forced to work with data that become available, which may or may not be ideal data to study a specific topic.
SHmax azimuth is the same direction in which fast-S shear modes are polarized, so if SV-P data allow you to determine fast-S and slow-S propagation directions, then you know the SHmax azimuth. In this recent study, fast-S/slow-S analyses were done by constructing azimuth-dependent SV-P trace gathers in every stacking bin, at 3 different geological horizons, across a 24-mi² area (98,000 stacking bins per horizon). This SV-P analysis thus totaled almost 300,000 estimates of SHmax azimuth. The data indicated that SHmax was oriented 65 degrees clockwise from north at all 3 horizons. All local and regional SHmax information that we could find for the study area agreed with this answer.
Parallel with this SV-P effort, an amplitude-versus-azimuth (AVAZ) analysis was done using azimuth-dependent P-P trace gathers at each stacking bin. Again, this P-P effort involved almost 300,000 estimates of SHmax. The P-P AVAZ predictions agreed with the SV-P predictions at the shallowest horizon (about 2500 ft deep), showed undesirable scatter but still indicated approximately the same SHmax azimuth as the SV-P data at the second horizon (about 4500 ft deep), and had too much scatter to be definitive at the deepest horizon (about 6500 ft deep).
So, in this first-ever test of SV-P versus P-P for fault detection, SV-P data appear to be the better choice for detecting and characterizing faults. This study will be published in the May issue of the Interpretation journal that is jointly sponsored by SEG and AAPG. I have never had a paper accepted so quickly. The review and acceptance phases were done in 2 months. One well-known reviewer wrote a note to the Editor stating that the paper should receive the Best Paper Award for 2021. I doubt that will occur, but it is an indication of the impact of the investigation. The paper also illustrates how SV-P reflections can be separated from P-P reflections in vertical-geophone data, which addresses a preceding question.

When were the seismic data acquired, and what kind of seismic data-processing was applied, PSTM or PSDM?

The data were acquired in 2001. We did not process the data until 2010, which is an example of extracting SV-P reflections from legacy data. The P-P and SV-P data that I show were imaged using PSTM procedures.

The SV-P seems wormy and smoothed. Is this normal or a processing artifact?

I recommend that you base your judgment on a comparison of P-P and SV-P data in their wiggle-trace form. Specifically, look at slide 5 of the Webinar presentation. Most people who have seen this comparison have concluded that the SV-P data show appropriate intra-turbidite details and that the P-P data are excessively smoothed, which is the opposite of what you conclude. Opinions differ between interpreters on all issues, and you are not alone in being suspicious of SV-P data produced by a P source and recorded by vertical geophones. Suspicion is good; we all learn by being suspicious of a new imaging concept.
Regarding the possibility of a processing artifact, I need to say that the data used in this webinar were processed in 2011 and were the first-ever example of a converted-P (i.e., SV-P) image extracted from the same vertical-geophone data that provide direct-P (i.e., P-P) images. This initial data processing was done by Fairfield Nodal. Since 2011, we have conducted converted-P studies in 10 basins. Every SV-P image, except one, was created by a commercial data-processing shop, not by my research staff; the one exception is a converted-P image that was made by my last Ph.D. student. In these 10-basin results, each commercial data-processing group produced converted-P images that were exact matches to P-SV data when we were using 3C data. When we had no access to P-SV data, all SV-P images have so far agreed with available subsurface control. My conclusion is that SV-P data are not prone to processing artifacts that differ in any appreciable way from the processing artifacts that can be embedded in any seismic image.

Slide 28. What is the basis for neuron picking that represented geology?

The logic that people use to see geological information in SOM data varies from interpreter to interpreter, as it should. Seismic interpretation is often more of an art than it is a science. I arrived at the illustrations shown in slide 28 by looking at the patterns produced by each of the 64 winning neurons, one at a time, to see what each winning neuron contributed to the SOM data volume. Generally, all interpreters do some form of this type of one-by-one examination of winning-neuron patterns to reach a decision about the geological information that is revealed by SOM data.
You must keep in mind that a winning neuron only identifies a natural cluster of a certain mixture of attributes. The interpreter then has to decide if that natural cluster is true geology. A natural cluster of attributes will occur at coordinates in real-data space where there is a group of organized wavelets. Go back to real-data space and look at the same coordinates where a winning neuron exists in the SOM volume. You may then be more comfortable in concluding that the winning neuron is, or is not, real geology.

What is the meaning of the dashed line in attributes of specific neurons? Why is it missing for some neuron lists?

The lists that you refer to are lists of “winning” neurons. I have labeled the lists with the single word “Neuron”; I need to modify the graphic so that the lists are titled “Winning Neuron”. At present, I am satisfied when the attributes embedded in a winning neuron’s search list total approximately 90 percent of the total information that the winning neuron can carry. When the percentages of attribute contributions reach that targeted level of 90%, I have a habit of drawing a dashed line and terminating the attribute list. The absence of a dashed line on a list means only that I was careless and inconsistent in my habit of indicating the cutoff of an attribute list.
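The dashed-line habit amounts to a small procedure: sort the attribute contributions, accumulate them until 90 percent is reached, and cut the list there. The attribute names and percentages below are invented for illustration.

```python
def truncate_at_cutoff(contributions, cutoff=90.0):
    """Keep attributes until their cumulative contribution (in percent)
    first reaches `cutoff`; this is where the dashed line is drawn."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for name, pct in ranked:
        kept.append(name)
        total += pct
        if total >= cutoff:
            break
    return kept

# Hypothetical contribution percentages for one winning neuron.
contrib = {"envelope": 41.0, "inst_freq": 28.0, "sweetness": 17.0,
           "inst_phase": 8.0, "relative_impedance": 6.0}
kept = truncate_at_cutoff(contrib)
```

Here the first four attributes accumulate to 94 percent, so the dashed line falls after the fourth entry and the last attribute is dropped from the list.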

The selection of winning neurons is too arbitrary. Is there a consistent procedure for inspecting winning neurons, or do you need such a procedure?

The reply to the preceding question applies to this question also. An interpreter has no ability to position a winning neuron at specific coordinates in attribute space. The size, shape, and location of each natural cluster that is defined by a SOM is controlled completely by SOM calculations.
An interpreter’s control of SOM results ends once she/he defines the number of searching neurons that SOM can use. This neuron number coincides with the number of neurons that appear on the color palette used to make SOM displays. If you define the number of searching neurons to be 25 and then create a second SOM using 49 searching neurons, the two SOM results will be different. For example, a natural cluster that appeared as a single cluster when 25 winning neurons were positioned in attribute space may appear as two natural clusters, which combine to form that single cluster, when SOM uses 49 searching neurons.
An interpreter can do nothing but wait until SOM calculations end to see where a winning neuron is located in multi-dimensional attribute space. This is why a SOM procedure is called “unsupervised” machine learning. An interpreter has no ability to “supervise” (i.e., control) SOM results; she/he can only wait to see what SOM calculated. Once SOM calculations end, the interpreter then has 100-percent control and supervision of the colors used to display the objects found by winning neurons. It would be unusual for any two interpreters to agree on which color palette is best for viewing where a SOM positioned the winning neurons.

Have you considered any other unsupervised approach, e.g., t-SNE, or applying Topological Data Analysis, e.g., UMAP?

The Webinar lecturer, Bob Hardage, does not know the history of the research that Geophysical Insights has done to determine which machine learning (ML) procedures should be provided by Paradise software. Consequently, I defer to Rocky Roden, who has been engaged in developing most of the Paradise applications, and to Dr. Tom Smith, President of Geophysical Insights, to provide you an accurate picture of the various ML approaches that Geophysical Insights has investigated.

Rocky Roden
Geophysical Insights has not yet tested tSNE (t-distributed stochastic neighbor embedding), which is a relatively new non-linear, dimension-reducing approach. We have tested other topology-related approaches such as the U-Matrix. We have found that the U-Matrix works best in identifying relationships between neurons, or groups of neurons, when a large number of neurons (usually >200) is employed. In applications with seismic attributes, we have found that seldom should more than 100 neurons be employed. When there are more than 100 neurons, some neurons begin to identify themselves as individuals, not as data points associated with a cluster. This behavior implies that too many neurons are being used in the analysis.

Tom Smith
Kohonen’s self-organizing map was a breakthrough in machine learning (ML for short) for the classification of high-dimensional data samples. After training, the neural network itself exhibits features of clustering. Prior to this, and to the best of my knowledge (TAS, [email protected]), ML algorithms (both supervised and unsupervised) worked to classify data samples, but there was nothing to look at in the neural network itself. SOM neural networks exhibit an organization such that nearby trained neurons in the network trace back to samples in nearby regions of the original data space. In other words, SOM neurons in the lower-left corner of the SOM map might trace to samples in one portion of the original data space, while SOM neurons in the lower-right corner might trace to samples in another part of the data space. A nice introduction to high-dimensional classification may be found in Wikipedia (Cluster analysis – Wikipedia).
Indeed, natural clusters of samples in the original data space map to SOM winning neurons (see Haykin, Neural Networks and Learning Machines, 3rd ed., 442-444, for discussion of how SOM neurons gravitate to natural clusters in attribute space). SOM training neurons are attracted to samples that stack (cluster) in the original data space (Smith, Taner, and Treitel, Self-Organizing Maps of Multi-Attribute 3D Seismic Reflection Surveys, SEG 2009 Workshop on “What’s New in Seismic Interpretation”).
We exploit this feature following the observation that certain trained neurons, or groups of nearby trained neurons, are associated with geologic-looking features in the seismic survey. This association has been confirmed by the drill bit many times and published several times in our peer-reviewed journals. If you have an electrical engineering background, you might be familiar with the concept of using a probe at various points on an operating “breadboard” circuit to track how a signal is modified across its various components.
Similarly, one can probe a self-organizing map to relate a trained winning neuron to seismic samples in the survey.
In general, there are three evolutionary stages of ML – self-adaptation, self-organization, and self-awareness (self-adaptation self-organization and self-awareness – Bing). Neural network recipes that exploit both dimensionality reduction and dimensionality expansion (autoencoders) were early examples of supervised neural networks that self-adapted to data. Today, an explosion of deep-learning algorithms has resulted in a variety of self-adaptations in supervised, semi-supervised, and unsupervised training sessions. SOM stands out as one of the earliest, and regrettably, one of only a few ML algorithms with self-organization.
The concept of self-organization and probing neurons for geology is not restricted to SOM alone. While K-means is not self-organizing, tSNE is self-organizing. So, to get to your question, we are actively investigating tSNE as an area of research interest. Perhaps it will be a useful alternative to SOM.
I am not familiar with Topological Data Analysis and will have a look (Topological data analysis – Wikipedia).
UMAP is a display technique to look for separations between natural clusters.
Published applications of UMAP are associated with SOM, but I see no reason why UMAP could not identify empty regions in an original data space for any self-organizing ML. UMAP detects separations between natural-cluster regions by displaying the distance between adjacent winning neurons. Of course, as Rocky points out above, separations are more apparent when you have lots of neurons. However, a large number of trained neurons decomposes geologic geobodies into finer details, which can cause the geobodies themselves to be missed. SOM is a great tool for investigating geologic geobodies at different levels of detail. Our users typically run SOMs with a variety of neuron topologies (say 4×4, 6×6, 8×8, and perhaps 10×10) for their interpretation.
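The distance display described here can be sketched as a U-matrix computation: for each neuron in a trained grid, average its attribute-space distance to its grid neighbors; large values then mark the empty regions between natural clusters. The sketch below uses a small hypothetical 2×4 grid of trained neuron positions, not output from any real SOM run:

```python
import math

def u_matrix(weights):
    """weights[r][c] = attribute-space position of the neuron at grid (r, c).
    Returns, for each neuron, the mean distance to its grid neighbors --
    large values mark gaps between natural clusters."""
    rows, cols = len(weights), len(weights[0])
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            nbrs = [weights[r + dr][c + dc]
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            row.append(sum(dist(weights[r][c], n) for n in nbrs) / len(nbrs))
        out.append(row)
    return out

# Hypothetical 2x4 trained grid: left half near one cluster, right half near another
grid = [[(0.1, 0.1), (0.12, 0.1), (0.9, 0.9), (0.92, 0.9)],
        [(0.1, 0.12), (0.12, 0.12), (0.9, 0.92), (0.92, 0.92)]]
um = u_matrix(grid)
# The ridge of large distances falls between columns 1 and 2,
# i.e. where the two natural clusters separate
```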
Fundamentally, our job is to deliver the very best ML tools to interpreters for them to make better predictions.

Please contact us to receive more information on the Paradise AI workbench featured in the webinar.


Bob A. Hardage, Ph.D.
Bob A. Hardage has 26 years of industry experience, starting at Phillips Petroleum, where he advanced to the position of Exploration Manager for Asia and Latin America, and then at Western Atlas, where he was Vice President of Geophysical Development and Marketing. His industry career was followed by a 28-year position as Senior Research Scientist at the Bureau of Economic Geology, where he established a multicomponent seismic research laboratory. He has served SEG as assistant editor, editor, first vice-president, president-elect, president, past-president, chair of the technical program committee for two annual SEG meetings, honorary lecturer, and short-course instructor. SEG has awarded Bob a special commendation, life membership, and honorary membership. He wrote the monthly Geophysical Corner column for AAPG’s Explorer magazine for 6 years. AAPG has honored Bob with a distinguished service award for promoting geophysics among the geologic community. He has authored 6 books and more than 60 peer-reviewed papers.

    Jan Van De Mortel
    Geophysicist

    Jan is a geophysicist with a 30+ year international track record, including 20 years with Schlumberger, 4 years with Weatherford, and recent years actively involved in Machine Learning for both oilfield and non-oilfield applications. His work includes developing solutions and applications around transformer networks, probabilistic Machine Learning, etc. Jan currently works as a technical consultant at Geophysical Insights for Continental Europe, the Middle East, and Asia.

    Mike Powney
    Geologist | Perceptum Ltd

    Mike began his career at SRC, a consultancy formed from ECL, where he worked extensively on seismic data offshore West Africa and in the North Sea. Mike subsequently joined Geoex MCG, where he provides global G&G technical expertise across their data portfolio. He also heads up Geoex MCG’s technical expertise on CCUS and natural hydrogen. Within his role at Perceptum, Mike leads the Machine Learning project investigating seismic and well data offshore Equatorial Guinea.

    Tim Gibbons
    Sales Representative

    Tim has a BA in Physics from the University of Oxford and an MSc in Exploration Geophysics from Imperial College, London. He started work as a geophysicist for BP in 1988 in London before moving to Aberdeen. There he also worked for Elf Exploration before his love of technology brought a move into the service sector in 1997. Since then, he has worked for Landmark, Paradigm, and TGS in a variety of managerial, sales, and business development roles. Since 2018, he has worked for Geophysical Insights, promoting Paradise throughout the European region.

    Dr. Carrie Laudon
    Senior Geophysical Consultant

    Applying Unsupervised Multi-Attribute Machine Learning for 3D Stratigraphic Facies Classification in a Carbonate Field, Offshore Brazil

    We present results of a multi-attribute, machine learning study over a pre-salt carbonate field in the Santos Basin, offshore Brazil. These results test the accuracy and potential of Self-organizing maps (SOM) for stratigraphic facies delineation. The study area has an existing detailed geological facies model containing predominantly reef facies in an elongated structure.

    Carrie Laudon
    Senior Geophysical Consultant - Geophysical Insights

    Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in Machine Learning (ML) are transforming seismic analysis. Using these new tools, geoscientists can accomplish the following quickly and effectively:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Generate seismic volumes that capture structural and stratigraphic details

    Join us for a ‘Lunch & Learn’ session daily at 11:00, where Dr. Carolan (“Carrie”) Laudon will review the theory and results of applying a combination of machine learning tools to obtain the above results. A detailed agenda follows.


    Automated Fault Detection using 3D CNN Deep Learning

    • Deep learning fault detection
    • Synthetic models
    • Fault image enhancement
    • Semi-supervised learning for visualization
    • Application results
      • Normal faults
      • Fault/fracture trends in complex reservoirs

    Demo of Paradise Fault Detection ThoughtFlow®

    Stratigraphic analysis using machine learning with fault detection

    • Attribute Selection using Principal Component Analysis (PCA)
    • Multi-Attribute Classification using Self-Organizing Maps (SOM)
    • Case studies – stratigraphic analysis and fault detection
      • Fault-karst and fracture examples, China
      • Niobrara – Stratigraphic analysis and thin beds, faults
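As background for the PCA step in the agenda above, the sketch below computes the first principal component of two hypothetical attributes in closed form (a two-attribute illustration only; Paradise’s PCA is not reproduced here):

```python
import math

def pca_2d(samples):
    """First principal component of two attributes (closed form for the
    2x2 covariance matrix): returns (eigenvalue, unit eigenvector)."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    cxx = sum((s[0] - mx) ** 2 for s in samples) / (n - 1)
    cyy = sum((s[1] - my) ** 2 for s in samples) / (n - 1)
    cxy = sum((s[0] - mx) * (s[1] - my) for s in samples) / (n - 1)
    # Larger eigenvalue of [[cxx, cxy], [cxy, cyy]]
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
    # Eigenvector for lam; handle the uncorrelated (diagonal) case
    if abs(cxy) > 1e-12:
        vx, vy = cxy, lam - cxx
    elif cxx >= cyy:
        vx, vy = 1.0, 0.0
    else:
        vx, vy = 0.0, 1.0
    norm = math.hypot(vx, vy)
    return lam, (vx / norm, vy / norm)

# Hypothetical attribute pairs that vary mostly along the line y = x
data = [(i + 0.1 * ((-1) ** i), i) for i in range(10)]
lam, vec = pca_2d(data)
# vec points roughly along (1/sqrt(2), 1/sqrt(2)): both attributes load
# about equally on the first principal component
```

In attribute selection, the loadings of each attribute on the leading components indicate which attributes carry most of the variance and are worth feeding to the SOM.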
    Thomas Chaparro
    Senior Geophysicist - Geophysical Insights

    Paradise: A Day in The Life of the Geoscientist

    Over the last several years, the industry has invested heavily in Machine Learning (ML) for better predictions and automation. Dramatic results have been realized in exploration, field development, and production optimization. However, many of these applications have been single-use ‘point’ solutions. There is a growing body of evidence that seismic analysis is best served by a combination of ML tools applied to a specific objective, referred to as ML Orchestration. This talk demonstrates how the Paradise AI workbench applications are used in an integrated workflow to achieve superior results compared with traditional interpretation methods or single-purpose ML products. Using examples that combine ML-based Fault Detection and Stratigraphic Analysis, the talk will show how ML orchestration produces value for exploration and field development.

    Aldrin Rondon
    Senior Geophysical Engineer - Dragon Oil

    Machine Learning Fault Detection: A Case Study

    An innovative Fault Pattern Detection Methodology has been carried out using a combination of Machine Learning techniques to produce a seismic volume suitable for fault interpretation in a structurally and stratigraphically complex field. Through theory and results, the main objective was to demonstrate that a combination of ML tools can generate superior results in comparison with traditional attribute extraction and data manipulation through conventional algorithms. The ML technologies applied are a supervised, deep-learning fault classification followed by an unsupervised, multi-attribute classification combining fault probability and instantaneous attributes.

    Thomas Chaparro
    Senior Geophysicist - Geophysical Insights

    Thomas Chaparro is a Senior Geophysicist who specializes in training and preparing AI-based workflows. Thomas also has experience as a processing geophysicist in 2D and 3D seismic data processing. He has participated in projects in the Gulf of Mexico, offshore Africa, the North Sea, Australia, Alaska, and Brazil.

    Thomas holds a bachelor’s degree in Geology from Northern Arizona University and a Master’s in Geophysics from the University of California, San Diego. His research focus was computational geophysics and seismic anisotropy.

    Aldrin Rondon
    Senior Geophysical Engineer - Dragon Oil

    Bachelor’s Degree in Geophysical Engineering from Central University in Venezuela with a specialization in Reservoir Characterization from Simon Bolivar University.

    Over 20 years of exploration and development geophysical experience, with extensive 2D and 3D seismic interpretation, including acquisition and processing.

    Aldrin spent his formative years working on exploration activity at PDVSA Venezuela, followed by a period working for a major international consulting company (Landmark, Halliburton) as a G&G consultant in the Gulf of Mexico. He later worked at Helix in Scotland, UK, on producing assets in the Central and South North Sea. From 2007 to 2021, he worked as a Senior Seismic Interpreter in Dubai, involved in dedicated development projects in the Caspian Sea.

    Deborah Sacrey
    Owner - Auburn Energy

    How to Use Paradise to Interpret Clastic Reservoirs

    The key to understanding clastic reservoirs in Paradise starts with good synthetic ties to the wavelet data. If one is not tied correctly, then it is easy to misinterpret the neurons as reservoir when they are not. Secondly, the workflow should utilize Principal Component Analysis to better understand the zone of interest and the attributes to use in the SOM analysis. An important part of interpretation is understanding “Halo” and “Trailing” neurons as part of the stack around a reservoir or potential reservoir. Deep, high-pressured reservoirs often “leak” or have vertical percolation into the seal. This changes the rock properties enough in the seal to create a “halo” effect in the SOM. Likewise, frequency changes in the seismic can cause a subtle “dim-out”, not necessarily observable in the wavelet data, but enough to create a different pattern in the Earth in terms of these rock-property changes. Case histories for halo and trailing neural information include a deep, pressured Chris R reservoir in Southern Louisiana, Frio pay in Southeast Texas, and AVO properties in the Yegua of Wharton County. Additional case histories highlighting interpretation include thin-bed pays in Brazoria County, including updated information using CNN fault skeletonization. The interpretation process continues with a case history in Wharton County on using Low Probability to help explore Wilcox reservoirs. Lastly, a look at using Paradise to help find sweet spots in unconventional reservoirs like the Eagle Ford, a case study provided by Patricia Santigrossi.

    Mike Dunn
    Sr. Vice President of Business Development

    Machine Learning in the Cloud

    Machine Learning in the Cloud will address the capabilities of the Paradise AI Workbench, featuring on-demand access enabled by the flexible hardware and storage facilities available on Amazon Web Services (AWS) and other commercial cloud services. Like the on-premise instance, Paradise On-Demand provides guided workflows to address many geologic challenges and investigations. The presentation will show how geoscientists can accomplish the following workflows quickly and effectively using guided ThoughtFlows® in Paradise:

    • Identify and calibrate detailed stratigraphy using seismic and well logs
    • Classify seismic facies
    • Detect faults automatically
    • Distinguish thin beds below conventional tuning
    • Interpret Direct Hydrocarbon Indicators
    • Estimate reserves/resources

    Attend the talk to see how ML applications are combined through a process called "Machine Learning Orchestration," proven to extract more from seismic and well data than traditional means.

    Sarah Stanley
    Senior Geoscientist

    Stratton Field Case Study – New Solutions to Old Problems

    The Oligocene Frio gas-producing Stratton Field in south Texas is a well-known field. Like many onshore fields, the productive sand channels are difficult to identify using conventional seismic data. However, the productive channels can be easily defined by employing several Paradise modules, including unsupervised machine learning, Principal Component Analysis, Self-Organizing Maps, 3D visualization, and the new Well Log Cross Section and Well Log Crossplot tools. The Well Log Cross Section tool generates extracted seismic data, including SOMs, along the Cross Section boreholes and logs. This extraction process enables the interpreter to accurately identify the SOM neurons associated with pay versus neurons associated with non-pay intervals. The reservoir neurons can be visualized throughout the field in the Paradise 3D Viewer, with Geobodies generated from the neurons. With this ThoughtFlow®, pay intervals previously difficult to see in conventional seismic can finally be visualized and tied back to the well data.

    Laura Cuttill
    Practice Lead, Advertas

    Young Professionals – Managing Your Personal Brand to Level-up Your Career

    No matter where you are in your career, your online “personal brand” has a huge impact on providing opportunity for prospective jobs and garnering the respect and visibility needed for advancement. While geoscientists tackle ambitious projects, publish technical papers, and work hard to advance their careers, the value of these efforts often isn’t realized beyond their immediate professional circle. Learn how to…

    • Communicate who you are to high-level executives in exploration and development
    • Avoid common social media pitfalls
    • Optimize your online presence to best garner attention from recruiters
    • Stay relevant
    • Create content of interest
    • Establish yourself as a thought leader in your given area of specialization
    Laura Cuttill
    Practice Lead, Advertas

    A 20-year marketing veteran in oil and gas and a serial entrepreneur, Laura has deep experience in bringing technology products to market and growing a sales pipeline. Armed with a marketing degree from Texas A&M, she began her career doing technical writing for Schlumberger and ExxonMobil in 2001. She co-founded Advertas in 2004 and began to leverage her upstream experience in marketing. In 2006, she co-founded the cyber-security software company 2FA Technology. After growing 2FA from a startup to 75% market share in target industries, and the subsequent sale of the company, she returned to Advertas to continue working toward the success of her clients, such as Geophysical Insights. Today, she guides strategy for large-scale marketing programs, manages project execution, cultivates relationships with industry media, and advocates for data-driven, account-based marketing practices.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Statistical Calibration of SOM results with Well Log Data (Case Study)

    The first stage of the proposed statistical method has proven very useful in testing whether or not there is a relationship between two qualitative variables (nominal or ordinal) or categorical quantitative variables in the fields of health and social sciences. Its application in the oil industry allows geoscientists not only to test dependence between discrete variables, but to measure their degree of correlation (weak, moderate, or strong). This article shows its application to reveal the relationship between a SOM classification volume of a set of nine seismic attributes (whose vertical sampling interval is three meters) and different well data (sedimentary facies, Net Reservoir, and effective porosity grouped by ranges). The data were prepared to construct contingency tables, where the dependent (response) variable and independent (explanatory) variable were defined, the observed frequencies were obtained, and the frequencies that would be expected if the variables were independent were calculated; the difference between the two magnitudes was then tested using the Chi-Square statistic. The second stage involves calibration of the SOM volume extracted along the wellbore path through statistical analysis of the petrophysical properties VCL, PHIE, and SW for each neuron, which allowed the neurons with the best petrophysical values in a carbonate reservoir to be identified.
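The contingency-table step described above can be sketched as follows (hypothetical counts, not the study’s data; the critical value 3.84 is the standard 5% threshold for one degree of freedom):

```python
def chi_square(table):
    """Chi-square statistic for an observed contingency table
    (rows: SOM neuron classes, cols: classes from well logs)."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    chi2 = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = row_tot[i] * col_tot[j] / total  # expected if independent
            chi2 += (obs - exp) ** 2 / exp
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

# Hypothetical counts: neuron class vs. net-reservoir flag along the wellbore
observed = [[30, 5],    # neuron A: mostly reservoir samples
            [4, 28]]    # neuron B: mostly non-reservoir samples
chi2, dof = chi_square(observed)
# A chi2 far above the critical value for dof = 1 (3.84 at the 5% level)
# indicates the neuron classes and the well classification are dependent
```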

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Heather Bedle received a B.S. (1999) in physics from Wake Forest University, and then worked as a systems engineer in the defense industry. She later received a M.S. (2005) and a Ph. D. (2008) degree from Northwestern University. After graduate school, she joined Chevron and worked as both a development geologist and geophysicist in the Gulf of Mexico before joining Chevron’s Energy Technology Company Unit in Houston, TX. In this position, she worked with the Rock Physics from Seismic team analyzing global assets in Chevron’s portfolio. Dr. Bedle is currently an assistant professor of applied geophysics at the University of Oklahoma’s School of Geosciences. She joined OU in 2018, after instructing at the University of Houston for two years. Dr. Bedle and her student research team at OU primarily work with seismic reflection data, using advanced techniques such as machine learning, attribute analysis, and rock physics to reveal additional structural, stratigraphic and tectonic insights of the subsurface.

    Jie Qi
    Research Geophysicist

    An Integrated Fault Detection Workflow

    Seismic fault detection is one of the most critical procedures in seismic interpretation. Identifying faults is significant for characterizing and finding potential oil and gas reservoirs. Seismic amplitude data exhibiting good resolution and a high signal-to-noise ratio are key to identifying structural discontinuities using seismic attributes or machine learning techniques, which in turn serve as input for automatic fault extraction. Deep-learning Convolutional Neural Networks (CNNs) perform well on fault detection without any human-computer interactive work. This study shows an integrated CNN-based fault detection workflow that constructs fault images sufficiently smooth for subsequent automatic fault extraction. The objectives were to suppress noise or stratigraphic anomalies subparallel to reflector dip, and to sharpen faults and other discontinuities that cut reflectors, preconditioning the fault images for subsequent automatic extraction. A 2D continuous wavelet transform-based acquisition footprint suppression method was applied time slice by time slice to suppress wavenumber components, so that the CNN fault detection method does not interpret the acquisition footprint as artifacts. To further suppress cross-cutting noise and sharpen fault edges, a principal component edge-preserving structure-oriented filter is also applied. The conditioned amplitude volume is then fed to a pre-trained CNN model to compute fault probability. Finally, a Laplacian of Gaussian filter is applied to the original CNN fault probability to enhance the fault images. The resulting fault probability volume compares favorably with traditional human-interpreted fault picks made on vertical slices through the seismic amplitude volume.
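As an illustration of the final enhancement step, the sketch below builds a discrete Laplacian-of-Gaussian kernel (illustrative size and sigma, not the parameters used in the study; the kernel is defined up to a constant factor):

```python
import math

def log_kernel(size, sigma):
    """Discrete Laplacian-of-Gaussian kernel: a negative center surrounded
    by a positive ring, which responds to edges and suppresses smooth areas."""
    half = size // 2
    k = [[0.0] * size for _ in range(size)]
    for y in range(-half, half + 1):
        for x in range(-half, half + 1):
            r2 = x * x + y * y
            k[y + half][x + half] = ((r2 - 2 * sigma ** 2) / sigma ** 4
                                     * math.exp(-r2 / (2 * sigma ** 2)))
    # Shift so the kernel sums to zero (flat regions produce no response)
    mean = sum(map(sum, k)) / size ** 2
    return [[v - mean for v in row] for row in k]

kern = log_kernel(7, 1.0)
# Convolving a fault-probability slice with this kernel sharpens edges
# while suppressing the smooth background
```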

    Dr. Jie Qi
    Research Geophysicist

    An integrated machine learning-based fault classification workflow

    We introduce an integrated machine learning-based fault classification workflow that creates fault component classification volumes that greatly reduce the burden on the human interpreter. We first compute a 3D fault probability volume from pre-conditioned seismic amplitude data using a 3D convolutional neural network (CNN). However, the resulting “fault probability” volume delineates other non-fault edges such as angular unconformities, the base of mass transport complexes, and noise such as acquisition footprint. We find that image processing-based fault discontinuity enhancement and skeletonization methods can enhance the fault discontinuities and suppress many of the non-fault discontinuities. Although each fault is characterized by its dip and azimuth, these two properties are discontinuous at azimuths of φ = ±180°, and, for near-vertical faults, at azimuths φ and φ + 180°; this requires them to be parameterized as four continuous geodetic fault components. These four fault components, as well as the fault probability, can then be fed into a self-organizing map (SOM) to generate a fault component classification. We find that the final classification result can segment fault sets trending in interpreter-defined orientations and minimize the impact of stratigraphy and noise by selecting different neurons from the SOM 2D neuron color map.
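The azimuth wraparound described above can be illustrated with a standard double-angle encoding (an illustrative parameterization only; the study’s exact four geodetic fault components are not reproduced here):

```python
import math

def continuous_fault_components(dip_deg, azim_deg):
    """Encode fault dip and azimuth as continuous components.
    Because a fault plane with azimuth phi is the same plane at phi + 180,
    using the double angle 2*phi removes that wraparound; scaling by sin(dip)
    makes the components vanish smoothly for horizontal features.
    (Illustrative encoding -- not the exact parameterization in the paper.)"""
    dip = math.radians(dip_deg)
    phi2 = math.radians(2.0 * azim_deg)
    return (math.sin(dip) * math.cos(phi2),
            math.sin(dip) * math.sin(phi2),
            math.cos(dip))

# No jump at the +/-180 boundary: azimuths 179 and -181 describe the
# same orientation and map to (numerically) the same components ...
a = continuous_fault_components(60.0, 179.0)
b = continuous_fault_components(60.0, -181.0)
# ... and so do azimuths phi and phi + 180 (here -1 vs. 179)
c = continuous_fault_components(60.0, -1.0)
```

Continuous components such as these are safe to feed to a SOM, because nearby orientations always map to nearby points in attribute space.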

    Ivan Marroquin
    Senior Research Geophysicist

    Connecting Multi-attribute Classification to Reservoir Properties

    Interpreters rely on seismic pattern changes to identify and map geologic features of importance. The ability to recognize such features depends on the seismic resolution and characteristics of seismic waveforms. With the advancement of machine learning algorithms, new methods for interpreting seismic data are being developed. Among these algorithms, self-organizing maps (SOM) provides a different approach to extract geological information from a set of seismic attributes.

    SOM approximates the input patterns by a finite set of processing neurons arranged in a regular 2D grid of map nodes. In this way, it classifies multi-attribute seismic samples into natural clusters following an unsupervised approach. Because machine learning is unbiased, the classifications can contain both geological information and coherent noise. Thus, seismic interpretation evolves into broader geologic perspectives. Additionally, SOM partitions multi-attribute samples without a priori information to guide the process (e.g., well data).

    The SOM output is a new seismic attribute volume, in which geologic information is captured from the classification into winning neurons. Implicit and useful geological information is uncovered through an interactive visual inspection of winning neuron classifications. By doing so, interpreters build a classification model that helps them gain insight into complex relationships between attribute patterns and geological features.

    Despite all these benefits, there are interpretation challenges regarding whether there is an association between winning neurons and geological features. To address these issues, a bivariate statistical approach is proposed. To evaluate this analysis, three case scenarios are presented. In each case, the association between winning neurons and net reservoir (determined from petrophysical or well log properties) at well locations is analyzed. The results show that the statistical analysis not only aids in the identification of classification patterns; more importantly, reservoir/non-reservoir classification by classical petrophysical analysis strongly correlates with selected SOM winning neurons. Confidence in interpreted classification features is gained at the borehole, and interpretation is readily extended as geobodies away from the well.

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Gas Hydrates, Reefs, Channel Architecture, and Fizz Gas: SOM Applications in a Variety of Geologic Settings

    Students at the University of Oklahoma have been exploring the uses of SOM techniques for the last year. This presentation will review learnings and results from a few of these research projects. Two projects have investigated the ability of SOMs to aid in identification of pore space materials – both trying to qualitatively identify gas hydrates and under-saturated gas reservoirs. A third study investigated individual attributes and SOMs in recognizing various carbonate facies in a pinnacle reef in the Michigan Basin. The fourth study took a deep dive into various machine learning algorithms, of which SOMs will be discussed, to understand how much machine learning can aid in the identification of deepwater channel architectures.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Fabian Rada joined Petroleum Oil and Gas Services, Inc. (POGS) in January 2015 as Business Development Manager and Consultant to PEMEX. In Mexico, he has participated in several integrated oil and gas reservoir studies. He has consulted with PEMEX Activos and the G&G Technology group to apply the Paradise AI workbench and other tools. Since January 2015, he has been working with Geophysical Insights staff to provide and implement the multi-attribute analysis software Paradise in Petróleos Mexicanos (PEMEX), running a successful pilot test in the Litoral Tabasco Tsimin Xux Asset. Mr. Rada began his career at the Venezuelan National Foundation for Seismological Research, where he participated in several geophysical projects, including seismic and gravity data for microzonation surveys. He then joined China National Petroleum Corporation (CNPC) as a QC Geophysicist until he became Chief Geophysicist in the QA/QC Department. He then transitioned to a subsidiary of Petróleos de Venezuela (PDVSA) as a member of the QA/QC group and Chief of the Potential Field Methods section. Mr. Rada has also participated in processing land seismic data and in marine seismic/gravity acquisition surveys. Mr. Rada earned a B.S. in Geophysics from the Central University of Venezuela.

    Hal Green
    Director, Marketing & Business Development - Geophysical Insights

    Introduction to Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in Machine Learning (ML) are transforming seismic analysis. Using a combination of machine learning and deep learning applications in Paradise, geoscientists extract greater insights from seismic and well data quickly and effectively for these and other objectives:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Overlay fault images on stratigraphic analysis

    The brief introduction will orient you with the technology and examples of how machine learning is being applied to automate interpretation while generating new insights in the data.

    Sarah Stanley
    Senior Geoscientist and Lead Trainer

    Sarah Stanley joined Geophysical Insights in October, 2017 as a geoscience consultant, and became a full-time employee July 2018. Prior to Geophysical Insights, Sarah was employed by IHS Markit in various leadership positions from 2011 to her retirement in August 2017, including Director US Operations Training and Certification, the Operational Governance Team, and, prior to February 2013, Director of IHS Kingdom Training. Sarah joined SMT in May, 2002, and was the Director of Training for SMT until IHS Markit’s acquisition in 2011.

    Prior to joining SMT, Sarah was employed by GeoQuest, a subdivision of Schlumberger, from 1998 to 2002. Sarah was also Director of the Geoscience Technology Training Center, North Harris College, from 1995 to 1998, and served as a voluntary advisor on geoscience training centers to various geological societies. Sarah has over 37 years of industry experience and has worked as a petroleum geoscientist in various domestic and international plays since August of 1981. Her interpretation experience includes tight gas sands, coalbed methane, international exploration, and unconventional resources.

    Sarah holds a Bachelor of Science degree with majors in Biology and General Science and a minor in Earth Science, a Master of Arts in Education, and a Master of Science in Geology from Ball State University, Muncie, Indiana. Sarah is both a Certified Petroleum Geologist and a Registered Geologist with the State of Texas. Sarah holds teaching credentials in both Indiana and Texas.

    Sarah is a member of the Houston Geological Society and the American Association of Petroleum Geologists, where she currently serves in the AAPG House of Delegates. Sarah is a recipient of the AAPG Special Award, the AAPG House of Delegates Long Service Award, and the HGS President’s Award for her work in advancing training for petroleum geoscientists. She has served on the AAPG Continuing Education Committee and was Chairman of the AAPG Technical Training Center Committee. Sarah has also served as Secretary of the HGS and served two years as Editor for the AAPG Division of Professional Affairs Correlator.

    Dr. Tom Smith
    President & CEO

    Dr. Tom Smith received BS and MS degrees in Geology from Iowa State University. His graduate research focused on a shallow refraction investigation of the Manson astrobleme. In 1971, he joined Chevron Geophysical as a processing geophysicist but resigned in 1980 to complete his doctoral studies in 3D modeling and migration at the Seismic Acoustics Lab at the University of Houston. Upon graduating with a Ph.D. in Geophysics in 1981, he started a geophysical consulting practice and taught seminars in seismic interpretation, seismic acquisition and seismic processing. Dr. Smith founded Seismic Micro-Technology in 1984 to develop PC software to support training workshops, which subsequently led to development of the KINGDOM Software Suite for integrated geoscience interpretation with world-wide success.

    The Society of Exploration Geophysicists (SEG) recognized Dr. Smith’s work with the SEG Enterprise Award in 2000, and in 2010, the Geophysical Society of Houston (GSH) awarded him an Honorary Membership. Iowa State University (ISU) has recognized Dr. Smith throughout his career with the Distinguished Alumnus Lecturer Award in 1996, the Citation of Merit for National and International Recognition in 2002, and the highest alumni honor in 2015, the Distinguished Alumni Award. The University of Houston College of Natural Sciences and Mathematics recognized Dr. Smith with the 2017 Distinguished Alumni Award.

    In 2009, Dr. Smith founded Geophysical Insights, where he leads a team of geophysicists, geologists and computer scientists in developing advanced technologies for fundamental geophysical problems. The company launched the Paradise® multi-attribute analysis software in 2013, which uses Machine Learning and pattern recognition to extract greater information from seismic data.

    Dr. Smith has been a member of the SEG since 1967 and is a professional member of SEG, GSH, HGS, EAGE, SIPES, AAPG, Sigma XI, SSA and AGU. Dr. Smith served as Chairman of the SEG Foundation from 2010 to 2013. On January 25, 2016, he was recognized by the Houston Geological Society (HGS) as a geophysicist who has made significant contributions to the field of geology. He currently serves on the SEG President-Elect’s Strategy and Planning Committee and the ISU Foundation Campaign Committee for Forever True, For Iowa State.

    Carrie Laudon
    Senior Geophysical Consultant - Geophysical Insights

    Applying Machine Learning Technologies in the Niobrara Formation, DJ Basin, to Quickly Produce an Integrated Structural and Stratigraphic Seismic Classification Volume Calibrated to Wells

    This study will demonstrate an automated machine learning approach for fault detection in a 3D seismic volume. The result combines Deep Learning Convolutional Neural Networks (CNN) with a conventional data pre-processing step and an image-processing-based post-processing approach to produce high-quality fault attribute volumes of fault probability, fault dip magnitude and fault dip azimuth. These volumes are then combined with instantaneous attributes in an unsupervised machine learning classification, allowing the isolation of both structural and stratigraphic features into a single 3D volume. The workflow is illustrated on a 3D seismic volume from the Denver-Julesburg Basin and a statistical analysis is used to calibrate results to well data.
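    The classification idea in this abstract, combining a fault-attribute volume with instantaneous attributes and letting an unsupervised method sort samples into classes, can be sketched in a few lines. The sketch below uses plain k-means on synthetic values as a stand-in for the actual workflow; the attribute columns, cluster count, and seeding are illustrative assumptions, not the published method.

```python
import numpy as np

def kmeans(X, init, iters=100):
    """Plain k-means: assign each sample to its nearest center, then
    move each center to the mean of its members, until stable."""
    centers = init.astype(float).copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Hypothetical inputs, one row per voxel: a CNN-derived fault probability
# plus two instantaneous attributes. All values here are synthetic.
rng = np.random.default_rng(0)
fault_prob = np.r_[rng.uniform(0.8, 1.0, 100), rng.uniform(0.0, 0.2, 200)]
envelope   = np.r_[rng.normal(5.0, 0.3, 100), rng.normal(1.0, 0.3, 100),
                   rng.normal(9.0, 0.3, 100)]
phase      = np.r_[rng.normal(0.5, 0.05, 100), rng.normal(-0.5, 0.05, 100),
                   rng.normal(0.0, 0.05, 100)]
X = np.column_stack([fault_prob, envelope, phase])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # z-score so attributes share a scale

# Seed one center per suspected regime to keep the sketch deterministic.
labels, centers = kmeans(X, init=X[[0, 100, 200]])
```

    In the real study, the fault probability comes from a CNN and the classification from an unsupervised machine learning method run on real attribute volumes; the point here is only that structural and stratigraphic attributes can be stacked into one sample vector per voxel and classified together.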

    Ivan Marroquin
    Senior Research Geophysicist

    Iván Dimitri Marroquín is a 20-year veteran of data science research, consistently publishing in peer-reviewed journals and speaking at international conferences. Dr. Marroquín received a Ph.D. in geophysics from McGill University, where he conducted and participated in 3D seismic research projects. These projects focused on the development of interpretation techniques based on seismic attributes and seismic trace shape information to identify significant geological features or reservoir physical properties. Examples of his research include attribute-based modeling to predict coalbed thickness and permeability zones, combining spectral analysis with coherency imagery techniques to enhance interpretation of subtle geologic features, and implementing a visual-based data mining technique on clustering to match seismic trace shape variability to changes in reservoir properties.

    Dr. Marroquín has also conducted ground-breaking research on seismic facies classification and volume visualization. This led to his development of a visual-based framework that determines the optimal number of seismic facies to best reveal meaningful geologic trends in the seismic data. He proposed seismic facies classification as an alternative to data integration analysis to capture geologic information in the form of seismic facies groups. He has investigated the usefulness of mobile devices to locate, isolate, and understand the spatial relationships of important geologic features in a context-rich 3D environment. In this work, he demonstrated that mobile devices are capable of performing seismic volume visualization, facilitating the interpretation of imaged geologic features. This work suggests that mobile devices will eventually allow the visual examination of seismic data anywhere and at any time.

    In 2016, Dr. Marroquín joined Geophysical Insights as a senior researcher, where his efforts have been focused on developing machine learning solutions for the oil and gas industry. For his first project, he developed a novel procedure for lithofacies classification that combines a neural network with automated machine learning methods. In parallel, he implemented a machine learning pipeline to derive cluster centers from a trained neural network. The next step in the project is to correlate lithofacies classification to the outcome of seismic facies analysis. Other research interests include the application of diverse machine learning technologies for analyzing and discerning trends and patterns in oil and gas industry data.

    Dr. Jie Qi
    Research Geophysicist

    Dr. Jie Qi is a Research Geophysicist at Geophysical Insights, where he works closely with product development and geoscience consultants. His research interests include machine learning-based fault detection, seismic interpretation, pattern recognition, image processing, seismic attribute development and interpretation, and seismic facies analysis. Dr. Qi received a BS (2011) in Geoscience from the China University of Petroleum in Beijing, and an MS (2013) in Geophysics from the University of Houston. He earned a Ph.D. (2017) in Geophysics from the University of Oklahoma, Norman. His industry experience includes work as a Research Assistant (2011-2013) at the University of Houston and the University of Oklahoma (2013-2017). Dr. Qi was with Petroleum Geo-Services (PGS), Inc. in 2014 as a summer intern, where he worked on semi-supervised seismic facies analysis. From 2017 to 2020, he served as a postdoctoral Research Associate in the Attribute-Assisted Seismic Processing and Interpretation (AASPI) consortium at the University of Oklahoma.

    Rocky R. Roden
    Senior Consulting Geophysicist

    The Relationship of Self-Organization, Geology, and Machine Learning

    Self-organization is the nonlinear formation of spatial and temporal structures, patterns or functions in complex systems (Aschwanden et al., 2018). Simple examples of self-organization include flocks of birds, schools of fish, crystal development, formation of snowflakes, and fractals. What these examples have in common is the appearance of structure or patterns without centralized control. Self-organizing systems are typically governed by power laws, such as the Gutenberg-Richter law of earthquake frequency and magnitude. In addition, the time frames of such systems display a characteristic self-similar (fractal) response, where earthquakes or avalanches, for example, occur over all possible time scales (Baas, 2002).

    Nonlinear dynamic systems and ordered structures in the earth are well known, have been studied for centuries, and can appear as sedimentary features, layered and folded structures, stratigraphic formations, diapirs, eolian dune systems, channelized fluvial and deltaic systems, and many more (Budd et al., 2014; Dietrich and Jacob, 2018). Each of these geologic processes and features exhibits patterns through the action of undirected local dynamics and is generally termed “self-organization” (Paola, 2014).

    Artificial intelligence, and specifically neural networks, exhibit and reveal self-organization characteristics. The interest in applying neural networks stems from the fact that they are universal approximators for various kinds of nonlinear dynamical systems of arbitrary complexity (Pessa, 2008). A special class of artificial neural networks is aptly named the self-organizing map (SOM) (Kohonen, 1982). It has been found that SOM can identify significant organizational structure in the form of clusters from seismic attributes that relate to geologic features (Strecker and Uden, 2002; Coleou et al., 2003; de Matos, 2006; Roy et al., 2013; Roden et al., 2015; Zhao et al., 2016; Roden et al., 2017; Zhao et al., 2017; Roden and Chen, 2017; Sacrey and Roden, 2018; Leal et al., 2019; Hussein et al., 2020; Hardage et al., 2020; Manauchehri et al., 2020). As a consequence, SOM is an excellent machine learning approach that utilizes seismic attributes to help identify self-organization features and define natural geologic patterns not easily seen, or not seen at all, in the data.
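    The SOM behavior described above, prototype neurons organizing themselves to match clusters in attribute space, can be illustrated with a tiny self-contained sketch. This is a minimal 1-D Kohonen map written from scratch and run on two synthetic "facies" clusters; the neuron count, decay schedules, and data are illustrative assumptions, not any published workflow.

```python
import numpy as np

def train_som(samples, n_neurons=4, epochs=50, lr0=0.5, sigma0=1.0, seed=0):
    """Train a tiny 1-D self-organizing map (Kohonen, 1982). Each neuron
    is a prototype vector in attribute space; for each sample, the
    best-matching unit (BMU) and its grid neighbors move toward it."""
    rng = np.random.default_rng(seed)
    weights = rng.standard_normal((n_neurons, samples.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3     # shrinking neighborhood
        for x in samples:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist = np.abs(np.arange(n_neurons) - bmu)    # distance on the 1-D grid
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))  # neighborhood function
            weights += lr * h[:, None] * (x - weights)
    return weights

def winning_neurons(samples, weights):
    """Index of each sample's winning neuron (nearest prototype)."""
    d = np.linalg.norm(samples[:, None, :] - weights[None, :, :], axis=2)
    return d.argmin(axis=1)

# Two synthetic "facies" clusters in a 3-attribute space.
rng = np.random.default_rng(1)
facies_a = rng.normal([0, 0, 0], 0.1, (50, 3))
facies_b = rng.normal([3, 3, 3], 0.1, (50, 3))
samples = np.vstack([facies_a, facies_b])

weights = train_som(samples, n_neurons=4)
labels = winning_neurons(samples, weights)
```

    After training, samples from the two clusters map to disjoint sets of winning neurons, which is the same mechanism by which SOM isolates natural patterns in multi-attribute seismic samples.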

    Rocky R. Roden
    Senior Consulting Geophysicist

    Rocky R. Roden started his own consulting company, Rocky Ridge Resources Inc., in 2003 and works with several oil companies on technical and prospect evaluation issues. He is also a principal in the Rose and Associates DHI Risk Analysis Consortium and was Chief Consulting Geophysicist with Seismic Micro-technology. Rocky is a proven oil finder with 37 years in the industry and extensive knowledge of modern geoscience technical approaches.

    Rocky holds a BS in Oceanographic Technology-Geology from Lamar University and an MS in Geological and Geophysical Oceanography from Texas A&M University. As Chief Geophysicist and Director of Applied Technology for Repsol-YPF, his role comprised advising corporate officers, geoscientists, and managers on interpretation, strategy and technical analysis for exploration and development in offices in the U.S., Argentina, Spain, Egypt, Bolivia, Ecuador, Peru, Brazil, Venezuela, Malaysia, and Indonesia. He has been involved in the technical and economic evaluation of Gulf of Mexico lease sales, farmouts worldwide, and bid rounds in South America, Europe, and the Far East. Previous work experience includes exploration and development at Maxus Energy, Pogo Producing, Decca Survey, and Texaco. Rocky is a member of SEG, AAPG, HGS, GSH, EAGE, and SIPES; he is also a past Chairman of The Leading Edge Editorial Board.

    Bob A. Hardage

    Bob A. Hardage received a PhD in physics from Oklahoma State University. His thesis work focused on high-velocity micro-meteoroid impact on space vehicles, which required trips to Goddard Space Flight Center to do finite-difference modeling on dedicated computers. Upon completing his university studies, he worked at Phillips Petroleum Company for 23 years and was Exploration Manager for Asia and Latin America when he left Phillips. He moved to Western Atlas and worked 3 years as Vice President of Geophysical Development and Marketing. He then established a multicomponent seismic research laboratory at the Bureau of Economic Geology and served The University of Texas at Austin as a Senior Research Scientist for 28 years. He has published books on VSP, cross-well profiling, seismic stratigraphy, and multicomponent seismic technology. He was the first person to serve 6 years on the Board of Directors of the Society of Exploration Geophysicists (SEG). His Board service was as SEG Editor (2 years), followed by 1-year terms as First VP, President Elect, President, and Past President. SEG has awarded him a Special Commendation, Life Membership, and Honorary Membership. He wrote the AAPG Explorer column on geophysics for 6 years. AAPG honored him with a Distinguished Service award for promoting geophysics among the geological community.

    Bob A. Hardage

    Investigating the Internal Fabric of VSP data with Attribute Analysis and Unsupervised Machine Learning

    Examination of vertical seismic profile (VSP) data with unsupervised machine learning technology is a rigorous way to compare the fabric of down-going, illuminating, P and S wavefields with the fabric of up-going reflections and interbed multiples created by these wavefields. This concept is introduced in this paper by applying unsupervised learning to VSP data to better understand the physics of P and S reflection seismology. The zero-offset VSP data used in this investigation were acquired in a hard-rock, fast-velocity environment that caused the shallowest 2 or 3 geophones to be inside the near-field radiation zone of a vertical-vibrator baseplate. This study shows how to use instantaneous attributes to backtrack down-going direct-P and direct-S illuminating wavelets to the vibrator baseplate inside the near-field zone. This backtracking confirms that the points-of-origin of direct-P and direct-S are identical. The investigation then applies principal component analysis (PCA) to VSP data and shows that direct-S and direct-P wavefields that are created simultaneously at a vertical-vibrator baseplate have the same dominant principal components. A self-organizing map (SOM) approach is then taken to illustrate how unsupervised machine learning describes the fabric of down-going and up-going events embedded in vertical-geophone VSP data. These SOM results show that a small number of specific neurons build the down-going direct-P illuminating wavefield, and another small group of neurons build up-going P primary reflections and early-arriving down-going P multiples. The internal attribute fabric of these key down-going and up-going neurons is then compared to expose their similarities and differences. This initial study indicates that unsupervised machine learning, when applied to VSP data, is a powerful tool for understanding the physics of seismic reflectivity at a prospect. This research strategy of analyzing VSP data with unsupervised machine learning will now expand to horizontal-geophone VSP data.
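    The principal-component comparison mentioned in the abstract, checking whether two wavefields share the same dominant components, can be illustrated with a small synthetic sketch. The two "wavefields" below are built from a shared wavelet with random amplitudes and noise, standing in for direct-P and direct-S wavelets from one baseplate; the waveform, trace counts, and noise level are illustrative assumptions.

```python
import numpy as np

def principal_components(traces):
    """PCA via SVD. Rows are traces (observations), columns are time
    samples. Returns per-component variances and the components."""
    centered = traces - traces.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s ** 2 / (len(traces) - 1)
    return variances, vt

# A shared synthetic wavelet: a 10 Hz sinusoid under a Gaussian envelope.
t = np.linspace(0.0, 1.0, 200)
wavelet = np.sin(2 * np.pi * 10 * t) * np.exp(-20 * (t - 0.3) ** 2)

# Two wavefields from the same wavelet, with random amplitudes and noise.
rng = np.random.default_rng(0)
field_p = np.array([a * wavelet + rng.normal(0, 0.01, t.size)
                    for a in rng.uniform(0.5, 1.5, 30)])
field_s = np.array([a * wavelet + rng.normal(0, 0.01, t.size)
                    for a in rng.uniform(0.5, 1.5, 30)])

var_p, pc_p = principal_components(field_p)
var_s, pc_s = principal_components(field_s)

# Absolute cosine similarity of the two dominant components (sign of a
# principal component is arbitrary, hence the abs).
sim = abs(pc_p[0] @ pc_s[0])
```

    For these synthetic fields the dominant components align (similarity near 1) and each carries nearly all of its field's variance, which is the kind of evidence the abstract uses to argue that the two wavefields originate from the same source.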

    Tom Smith
    President and CEO, Geophysical Insights

    Machine Learning for Incomplete Geoscientists

    This presentation covers big-picture machine learning buzz words with humor and unassailable frankness. The goal of the material is for every geoscientist to gain confidence in these important concepts and how they add to our well-established practices, particularly seismic interpretation. Presentation topics include a machine learning historical perspective, what makes it different, a fish factory, Shazam, comparison of supervised and unsupervised machine learning methods with examples, tuning thickness, deep learning, hard/soft attribute spaces, multi-attribute samples, and several interpretation examples. After the presentation, you may not know how to run machine learning algorithms, but you should be able to appreciate their value and avoid some of their limitations.

    Deborah Sacrey
    Owner, Auburn Energy

    Deborah is a geologist/geophysicist with 44 years of oil and gas exploration experience in the Texas and Louisiana Gulf Coast and Mid-Continent areas of the US. She received her degree in Geology from the University of Oklahoma in 1976 and immediately started working for Gulf Oil in their Oklahoma City offices.

    She started her own company, Auburn Energy, in 1990 and built her first geophysical workstation using Kingdom software in 1996. Over 18 years she helped SMT/IHS develop and test the Kingdom software. She specializes in 2D and 3D interpretation for clients in the US and internationally. For the past nine years she has been part of a team working to bring the power of multi-attribute neural analysis of seismic data to the geoscience public, guided by Dr. Tom Smith, founder of SMT. She has become an expert in the use of Paradise software and has seven discoveries for clients using multi-attribute neural analysis.

    Deborah has been very active in the geological community. She is past national President of SIPES (Society of Independent Professional Earth Scientists), past President of the Division of Professional Affairs of AAPG (American Association of Petroleum Geologists), Past Treasurer of AAPG and Past President of the Houston Geological Society. She is also Past President of the Gulf Coast Association of Geological Societies and just ended a term as one of the GCAGS representatives on the AAPG Advisory Council. Deborah is also a DPA Certified Petroleum Geologist #4014 and DPA Certified Petroleum Geophysicist #2. She belongs to AAPG, SIPES, Houston Geological Society, South Texas Geological Society and the Oklahoma City Geological Society (OCGS).

    Mike Dunn
    Senior Vice President Business Development

    Michael A. Dunn is an exploration executive with extensive global experience including the Gulf of Mexico, Central America, Australia, China and North Africa. Mr. Dunn has a proven track record of successfully executing exploration strategies built on a foundation of new and innovative technologies. Currently, Michael serves as Senior Vice President of Business Development for Geophysical Insights.

    He joined Shell in 1979 as an exploration geophysicist and party chief and held increasing levels of responsibility including Manager of Interpretation Research. In 1997, he participated in the launch of Geokinetics, which completed an IPO on the AMEX in 2007. His extensive experience with oil companies (Shell and Woodside) and the service sector (Geokinetics and Halliburton) has provided him with a unique perspective on technology and applications in oil and gas. Michael received a B.S. in Geology from Rutgers University and an M.S. in Geophysics from the University of Chicago.

    Hal Green
    Director, Marketing & Business Development - Geophysical Insights

    Hal H. Green is a marketing executive and entrepreneur in the energy industry with more than 25 years of experience in starting and managing technology companies. He holds a B.S. in Electrical Engineering from Texas A&M University and an MBA from the University of Houston. He has invested his career at the intersection of marketing and technology, with a focus on business strategy, marketing, and effective selling practices. Mr. Green has a diverse portfolio of experience in marketing technology to the hydrocarbon supply chain – from upstream exploration through downstream refining & petrochemical. Throughout his career, Mr. Green has been a proven thought-leader and entrepreneur, while supporting several tech start-ups.

    He started his career as a process engineer in the semiconductor manufacturing industry in Dallas, Texas and later launched an engineering consulting and systems integration business. Following the sale of that business in the late 1980s, he joined Setpoint in Houston, Texas where he eventually led that company’s Manufacturing Systems business. Aspen Technology acquired Setpoint in January 1996 and Mr. Green continued as Director of Business Development for the Information Management and Polymer Business Units.

    In 2004, Mr. Green founded Advertas, a full-service marketing and public relations firm serving clients in energy and technology. In 2010, Geophysical Insights retained Advertas as their marketing firm. Dr. Tom Smith, President/CEO of Geophysical Insights, soon appointed Mr. Green as Director of Marketing and Business Development for Geophysical Insights, in which capacity he still serves today.

    Hana Kabazi
    Product Manager

    Hana Kabazi joined Geophysical Insights in October of 201, and is now one of our Product Managers for Paradise. Mrs. Kabazi has over 7 years of oil and gas experience, including 5 years at Halliburton – Landmark. During her time at Landmark she held positions as a consultant to many E&P companies, technical advisor to the QA organization, and product manager of Subsurface Mapping in DecisionSpace. Mrs. Kabazi has a B.S. in Geology from the University of Texas at Austin and an M.S. in Geology from the University of Houston.

    Dr. Carrie Laudon
    Senior Geophysical Consultant - Geophysical Insights

    Carolan (Carrie) Laudon holds a PhD in geophysics from the University of Minnesota and a BS in geology from the University of Wisconsin Eau Claire. She has been Senior Geophysical Consultant with Geophysical Insights since 2017 working with Paradise®, their machine learning platform. Prior roles include Vice President of Consulting Services and Microseismic Technology for Global Geophysical Services and 17 years with Schlumberger in technical, management and sales, starting in Alaska and including Aberdeen, Scotland, Houston, TX, Denver, CO and Reading, England. She spent five years early in her career with ARCO Alaska as a seismic interpreter for the Central North Slope exploration team.