Applying Machine Learning Technologies in the Niobrara Formation, DJ Basin, to Quickly Produce an Integrated Structural and Stratigraphic Seismic Classification Volume Calibrated to Wells

Carolan Laudon, Jie Qi, Yin-Kai Wang, Geophysical Research, LLC (d/b/a Geophysical Insights), University of Houston | Published with permission: Unconventional Resources Technology Conference (URTeC) DOI 10.15530 | June 2022

URTeC Best Paper Award

Copyright 2022, Unconventional Resources Technology Conference (URTeC) DOI 10.15530/urtec-2022-3701806

This Best of URTeC 2022 paper was prepared for presentation at the Unconventional Resources Technology Conference held in Houston, Texas, USA,
20-22 June 2022.


Objectives/Scope: This study demonstrates an automated machine learning approach for fault detection in a 3D seismic volume. The result combines deep learning Convolutional Neural Networks (CNNs) with a conventional data pre-processing step and an image-processing-based post-processing approach to produce high-quality fault attribute volumes of fault probability, fault dip magnitude, and fault dip azimuth. These volumes are then combined with instantaneous attributes in an unsupervised machine learning classification, allowing the isolation of both structural and stratigraphic features in a single 3D volume. The workflow is illustrated on a 3D seismic volume from the Denver-Julesburg Basin, and a statistical analysis is used to calibrate the results to well data.

Methods/Procedures/Process: Starting with a seismic amplitude volume, the method has four steps. Pre-processing produces the volume used as input to the CNN fault classification and the dip volumes used in post processing. Next, CNN applies a 3D synthetic fault engine to predict faults. Then, a directional 3D Laplacian of Gaussian filter enhances the faults in their primary direction and the final step, skeletonization, produces skeletonized probability, dip and azimuth. The result is higher quality when compared to the output from CNN alone (without pre and post processing). The fault volumes are next combined with instantaneous attributes in an unsupervised machine learning classification through Self-Organizing Maps (SOMs) to produce a classification volume from which faults and reservoir neurons can be isolated, calibrated to wells and converted to multi-attribute geobodies.

Results/Observations/Conclusions: The results provide a rapid, robust, and unbiased fault interpretation which can be used to create either fault plane or fault stick interpretations in a standard interpretation package. The SOM is preceded by principal component analysis to identify prominent attributes. These resolve the seismic character of the analysis interval (Top Niobrara to Top Greenhorn). In addition to enhanced fault identification, the Niobrara’s brittle chalk benches are easily distinguished from more ductile shale units, and the individual A, B, and C benches each have unique sets of characteristics that isolate them in the volume. Extractions from SOM volumes at wells confirm the statistical relationships between SOM neurons and reservoir properties.

Applications/Significance/Novelty: Traditional seismic interpretation, including fault interpretation and stratigraphic horizon picking, is poorly suited to the demands of unconventional drilling with its typically high well densities. Geophysicists devote much of their effort to well planning and working with the drilling team to land wells. Machine learning applied in seismic interpretation offers significant benefits by automating tedious and somewhat routine tasks such as fault and reservoir interpretation. Automation reduces the fault interpretation time from weeks/days to days/hours. Multi-attribute analysis accelerates the process of high-grading reservoir sweet spots within the 3D volume. Statistical measures make the task of calibrating the unsupervised results feasible.


As stated in the abstract, applying machine learning technologies to seismic interpretation tasks brings the promise of automation to generating fault volumes through supervised classification. The resulting volumes can subsequently be used to extract fault interpretations.

This methodology is demonstrated in this paper through application to a 100 square mile volume from the Denver-Julesburg Basin. The use of SOMs for isolating chalk reservoirs in the Niobrara was first demonstrated by Laudon and others, 2019. In this study we expand on the original work in two ways: we create a single 3D seismic volume which integrates the results of both machine learning applications, and we calibrate the resulting volumes to well logs via a bivariate statistical analysis following the methodology of Leal and others, 2019.

The seismic data are from Phase 5 of a 1580 square mile, contiguous 3D seismic survey conducted from 2011 through 2014 by Geophysical Pursuit, Inc. and Geokinetics (Fairfield Geotechnologies replaced Geokinetics as second data owner). In 2018, the data were provided to Geophysical Insights to conduct proof of concept studies on machine learning techniques for seismic interpretation. Figure 1 shows the location of the study area along with the full outline of the multi-client survey.

Figure 1: Map of Geophysical Pursuit, Inc. and Fairfield Geotechnologies multi-client program and study area outline.

Geologic Setting of the Niobrara and Surrounding Formations

The Niobrara formation is late Cretaceous in age and was deposited in the Western Interior Seaway (Kauffman, 1977). The Niobrara is subdivided into the basal Fort Hays limestone and the Smoky Hill members. The Smoky Hill member is further subdivided into three subunits informally termed Niobrara A, B, and C. These units consist of fractured chalk benches which are primary reservoirs, with marls and shales between the benches which comprise source rocks and secondary reservoir targets (Figure 2). The Niobrara unconformably overlies the Codell sandstone and is overlain by the Sharon Springs member of the Pierre shale. The Codell is also late Cretaceous in age, and unconformably underlies the Fort Hays member of the Niobrara formation. The interval used for the machine learning studies was Top Niobrara to Top Greenhorn. Figure 2 (Sonnenberg, 2015) shows a generalized stratigraphic column and a structure map for the Niobrara in the DJ Basin along with an outline of the basin, the location of the Wattenberg Field and the approximate location of the study area.

Figure 2: Outline of the DJ Basin with Niobrara structure contours and generalized stratigraphic column that shows the source rock and reservoir intervals for late Cretaceous units in the basin (from Sonnenberg, 2015).

The study area has large antiforms on its western edge, where the basin transitions into the Rocky Mountains. The area is normally faulted, with most faults trending northeast to southwest. Landon and others (2001) and Finn and Johnson (2005) state that the DJ Basin contains the richest Niobrara source rocks, with TOC contents reaching eight weight percent. Niobrara petroleum production depends on fractures in the hard, brittle, carbonate-rich zones. These zones are overlain and/or interbedded with soft, ductile marine shales that inhibit migration and seal the hydrocarbons in the fractured zones. Figure 3 shows the most positive curvature, K1, on the top Niobrara. The main fault trends can be seen, as well as the potential effect curvature may have on fracturing in the brittle chalk layers.

Figure 3: Most positive curvature, K1 on top Niobrara. There are antiforms present in the lower left (NW) and lower right (SW) portions of the image. The faulting and fractures are complex with both NE-SW and NW-SE trends apparent. Area shown is approximately 100 square miles. Multi-client data shown courtesy of Geophysical Pursuit, Inc. and Fairfield Geotechnologies (from Laudon and others, 2019).

Fault Detection Methodology

Seismic amplitude is the basis for machine learning fault detection, which uses deep learning Convolutional Neural Networks (CNNs), a form of supervised machine learning (Ronneberger and others, 2015; Wu and others, 2019; Zhao and Mukhopadhyay, 2018; Qi and others, 2020; Laudon and others, 2021). The results are fault volumes which can be used for seismic fault interpretation. There are different approaches to building a fault prediction engine. This study used CNN engines pre-trained on fully 3D synthetic fault models. A 3D synthetic model is unbiased, in contrast to manually interpreting faults to build the CNN fault engine (Wu and others, 2019). Synthetic models also remove the difficulty of picking faults in orientations oblique to the fault planes. Another advantage of a pre-trained engine is that fault prediction is very fast compared to the compute time required to train the engine. The machine learning results are also significantly improved by the post-processing steps of fault enhancement and skeletonization. While Qi and others, 2017, applied this technique to traditional edge detection volumes, we have found that CNN fault volumes are also ideally suited to this technology. Figure 4 diagrams the high-level workflow consisting of four steps.
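The synthetic-model idea can be sketched in a few lines. The toy example below is not the authors' engine; all names, dimensions, and parameters are illustrative. It builds one training pair by shearing a layered reflectivity volume across a vertical fault plane and recording the fault plane as the label mask; a production engine would generate thousands of randomized 3D models of this kind, with varying dips, throws, and wavelet convolution.

```python
import numpy as np

def synthetic_fault_volume(n=64, throw=4, seed=0):
    """Toy synthetic training pair: a layered volume sheared across a
    vertical fault plane, plus the binary fault-label mask.
    (Illustrative only -- real engines use many randomized 3D models.)"""
    rng = np.random.default_rng(seed)
    # 1D random reflectivity replicated into flat layers
    layers = rng.normal(size=n)
    vol = np.tile(layers, (n, n, 1))          # shape (x, y, z)
    # displace everything right of a fault at x = n // 2 down by `throw`
    fx = n // 2
    vol[fx:, :, :] = np.roll(vol[fx:, :, :], throw, axis=2)
    # label mask: 1 on the fault plane, 0 elsewhere
    mask = np.zeros_like(vol)
    mask[fx, :, :] = 1.0
    return vol, mask

vol, mask = synthetic_fault_volume()
```

Because both the amplitude volume and its fault mask come from the same model, no manual picking enters the training data, which is the source of the "unbiased" property noted above.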

Figure 4: Flow diagram for machine learning enhanced fault detection

In step 1, the post-stack amplitude data is run through structurally oriented filtering to sharpen discontinuities and to suppress noise or stratigraphic anomalies sub-parallel to reflector dip. This step significantly improves the fault prediction. This study employed principal component filtering utilizing the University of Oklahoma AASPI consortium algorithms.
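Conceptually, principal component structurally oriented filtering keeps the coherent, reflector-parallel part of a small window of traces and rejects the remainder as noise. The sketch below is a deliberately simplified, flat-dip caricature of that idea using a rank-1 SVD reconstruction; it is not the AASPI algorithm, which first aligns the analysis window along the estimated reflector dip.

```python
import numpy as np

def pc_filter(window):
    """Keep only the first principal component of a (traces x samples)
    window -- a flat-dip caricature of structure-oriented PC filtering."""
    u, s, vt = np.linalg.svd(window, full_matrices=False)
    # rank-1 reconstruction: the coherent, trace-consistent waveform
    filtered = np.outer(u[:, 0] * s[0], vt[0])
    return filtered, window - filtered   # filtered window, rejected noise

# five traces sharing one waveform, contaminated with random noise
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 6 * np.pi, 128))
win = np.tile(signal, (5, 1)) + 0.1 * rng.normal(size=(5, 128))
filtered, noise = pc_filter(win)
```

The filtered window is measurably closer to the shared waveform than the raw window, mirroring the amplitude/filtered/rejected-noise triplet of Figure 5.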

Figure 5: Example of Principal Component Structurally Oriented Filtering. Original amplitude, filtered amplitude, and rejected noise. Multi-client data presented with permission from Geophysical Pursuit, Inc. and Fairfield Geotechnologies.

Figure 5 illustrates a faulted amplitude section from the study volume before and after pre-process filtering, alongside the rejected noise. The filtering process produces additional volumes, namely Inline Dip, Crossline Dip, and Similarity Total Energy, which are required and employed in the post-processing steps three and four: fault image enhancement, and fault skeletonization and attribute computation (Qi and others, 2017; Qi and others, 2019).

Figure 6 shows the results for steps 2, 3, and 4 of the process. Step 2 is the machine learning fault prediction utilizing fault engines built from 3D synthetic seismic data using convolutional neural networks. Steps 3 and 4, the fault enhancement and skeletonization, follow the methodology of Qi and others, 2017 and Qi and others, 2019. This fault image post-processing technique computes and decomposes the second-moment tensor to find the orientation of fault anomalies in the CNN fault probability volume. A 3D directional Laplacian of Gaussian (LoG) filter is then applied to smooth planar features of the images. Fault anomalies are enhanced along the direction parallel to the faults by the Gaussian operator and sharpened by the Laplacian operator. The faults are then skeletonized along the direction perpendicular to each fault. This post-processing step enhances planar features such as faults and suppresses false positives from non-planar features that cut reflectors. The final outputs from the workflow are three skeletonized fault volumes: fault probability, fault dip magnitude, and fault dip azimuth. The output attributes can be combined to further isolate fault sets based on their geometric properties.
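A minimal 2D caricature of the directional LoG and skeletonization steps is given below, assuming a vertical fault so the "along-fault" and "across-fault" directions are simply the two image axes; the published workflow instead estimates these directions per voxel from the second-moment tensor. All sigmas and sizes here are arbitrary illustrative choices.

```python
import numpy as np
from scipy import ndimage

def enhance_and_skeletonize(prob, sigma_along=3.0, sigma_across=0.5):
    """Caricature of the directional LoG step on a 2D fault-probability
    slice: smooth ALONG the (assumed vertical) fault with a wide Gaussian,
    sharpen ACROSS it with a second derivative, then keep only the local
    maximum across the fault (a one-sample skeleton)."""
    # anisotropic Gaussian: strong smoothing along axis 0, little across axis 1
    smoothed = ndimage.gaussian_filter(prob, sigma=(sigma_along, sigma_across))
    # negative 2nd derivative across the fault peaks at the fault position
    log = -ndimage.gaussian_filter(smoothed, sigma=(0, 1.0), order=(0, 2))
    # skeletonize: retain only the across-fault local maxima
    skel = np.where(log == ndimage.maximum_filter(log, size=(1, 5)), log, 0.0)
    return np.clip(skel, 0, None)

# noisy vertical "fault" at column 10 of a 32x32 probability slice
rng = np.random.default_rng(2)
prob = 0.2 * rng.random((32, 32))
prob[:, 10] += 1.0
skel = enhance_and_skeletonize(prob)
```

The smoothing direction enhances planar (here, columnar) anomalies while the across-fault maximum test collapses the broadened anomaly back to a thin skeleton, which is the intent of steps 3 and 4.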

Figure 6: Output from CNN fault prediction, Output from post processing, final fault probability co-rendered with amplitude. Crossline 1399.
Multi-Client data presented with permission from Geophysical Pursuit, Inc. and Fairfield Geotechnologies.

Unsupervised Classification Utilizing Self-Organizing Maps (SOM)

In a previous study, this same seismic volume was used in a stratigraphic machine learning study which yielded detailed stratigraphic information via a multi-attribute classification technique, SOM (Laudon and others, 2019). In that study, nine instantaneous attributes from a suite of nineteen were selected via Principal Component Analysis (PCA). PCA is a linear dimensionality reduction technique frequently used in multi-attribute analysis to determine which attributes are most prominent in the data volume of interest (Roden and others, 2015). SOM is an unsupervised neural network classification which employs a non-linear approach to find natural clusters in multi-dimensional attribute space (Kohonen, 2001; Roden and others, 2015). SOM takes advantage of the organizational structure of the seismic data samples, which are highly continuous, greatly redundant, and significantly noisy (Coleou and others, 2003; Roden and others, 2015). The samples from multiple seismic attribute volumes exhibit natural clusters with significant organizational structure in the presence of noise. The SOM classifications of these natural clusters reveal important information about the structure of natural groups that is difficult to perceive any other way, exposing geologic features which are essential to the subsurface interpretation (Roden and others, 2015). When seismic attributes are organized in attribute space, the SOM algorithm introduces new samples, called neurons, which seek out natural clusters in attribute space. Through a series of cooperative and competitive training epochs that result in a fully trained set of winning neurons, each multi-attribute sample is classified to its nearest winning neuron in attribute space. The winning neurons form a 2D mesh that is illuminated in the final volume with a 2D color map for interactive evaluation.
One advantage of SOM over other clustering techniques is that winning neurons adjacent to each other in attribute space are also adjacent in the final 2D color map (Roden and others, 2015).
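A minimal SOM training loop, written to mirror the competitive and cooperative steps described above, is sketched below. It illustrates the algorithm only and is not the Paradise implementation; the mesh size, learning-rate schedule, and synthetic two-cluster input are all arbitrary choices.

```python
import numpy as np

def train_som(samples, rows=8, cols=8, epochs=20, seed=0):
    """Minimal SOM sketch: neurons on a rows x cols mesh compete for each
    multi-attribute sample; the winner and its mesh neighbors move toward
    the sample, so neurons adjacent in the 2D map end up adjacent in
    attribute space."""
    rng = np.random.default_rng(seed)
    n_neurons, n_attr = rows * cols, samples.shape[1]
    w = rng.normal(size=(n_neurons, n_attr))
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                 # shrinking step size
        radius = max(1.0, rows / 2 * (1 - epoch / epochs))
        for x in samples:
            win = np.argmin(((w - x) ** 2).sum(axis=1))   # competition
            d2 = ((grid - grid[win]) ** 2).sum(axis=1)    # mesh distance
            h = np.exp(-d2 / (2 * radius ** 2))           # cooperation
            w += lr * h[:, None] * (x - w)
    # classify: each sample -> index of its winning neuron
    labels = np.array([np.argmin(((w - x) ** 2).sum(axis=1)) for x in samples])
    return w, labels

rng = np.random.default_rng(3)
# two tight synthetic clusters in a 3-attribute space
samples = np.vstack([rng.normal(0, 0.1, (50, 3)), rng.normal(2, 0.1, (50, 3))])
w, labels = train_som(samples)
```

After training, the two synthetic clusters map to disjoint groups of winning neurons, which is the behavior exploited later when individual neuron groups are isolated with the 2D color map.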

Table 1 shows the attributes used in the SOM and their corresponding eigenvector (Laudon and others, 2019).
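The PCA-based attribute screening can be sketched as follows: standardize the attribute samples, form the covariance (here, correlation) matrix, and rank attributes by their loadings on the leading eigenvectors. The attribute matrix below is synthetic; in the study the input was the suite of nineteen instantaneous attributes.

```python
import numpy as np

def pca_attribute_ranking(attrs):
    """Rank attributes by the eigenvalues/eigenvectors of the standardized
    attribute covariance matrix (illustrative PCA sketch)."""
    z = (attrs - attrs.mean(axis=0)) / attrs.std(axis=0)   # standardize
    cov = np.cov(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)                 # ascending order
    order = np.argsort(eigvals)[::-1]                      # descending
    return eigvals[order], eigvecs[:, order]

rng = np.random.default_rng(4)
shared = rng.normal(size=(500, 1))
# three attributes dominated by one common signal, one independent attribute
attrs = np.hstack([shared + 0.1 * rng.normal(size=(500, 3)),
                   rng.normal(size=(500, 1))])
eigvals, eigvecs = pca_attribute_ranking(attrs)
```

The three correlated attributes load heavily on the first eigenvector while the independent attribute does not, which is the kind of prominence ranking summarized in Table 1.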

Figure 7 shows the original amplitude data on a N-S oriented inline with the zone of interest, Top Niobrara to Top Greenhorn highlighted and an 8×8 SOM for the same interval and line.

Figure 7: North-South inline showing the original amplitude data (upper) and the 8X8 SOM result (lower) from Top Niobrara through Greenhorn horizons (Laudon and others, 2019). Multi-client data shown courtesy of Geophysical Pursuit, Inc. and Fairfield Geotechnologies.

This inline runs through a well with a full suite of logs to correlate and calibrate the SOM results. In this part of the basin, there are three chalks of interest for drilling, A, B, and C, with the B bench having the best reservoir quality. The top Niobrara, on the amplitude section, is a strong peak followed by a broad trough through the highest-TOC shale section. The subsequent peak generally corresponds to the B bench. On the SOM classification, the B bench can be seen in the yellow and red neurons and, examined closely, appears mechanically different from the overlying marl. By extracting the red neurons only, the sweet spot within the bench can be visualized easily (Laudon and others, 2019). In this study, the SOM classification was repeated with the addition of the skeletonized fault attributes. The resulting volume combines the main structural elements with the same level of stratigraphic detail seen in the previous study. The same inline in Figure 7 is shown in Figure 8 with the new SOM results. Although the actual neuron numbers change between SOM runs, so the 2D color mapping differs, the stratigraphic picture is the same and the detail is enhanced.

There are several advantages to using SOM to isolate faults:

  • SOM normalizes the input volumes into discrete values allowing easy isolation and interrogation of the seismic volumes to visualize faults.
  • Fault volumes from multiple engines (aggressive and conservative) can be combined into a single volume.
  • Neuron-derived classifications can be converted into geobodies of common classification. These can be filtered by size if desired and used in a fault extraction workflow.

The SOM neurons which represent faults appear correct on the vertical section, but the real test of the result is the three-dimensional view of the fault neurons.

Figure 8: 8×8 SOM from inline combining instantaneous and fault attributes (top). Original 8×8 SOM utilizing only instantaneous attributes (Laudon and others 2019). Multi-client data shown courtesy of Geophysical Pursuit, Inc. and Fairfield Geotechnologies.

Figure 9 shows seismic samples classified by a group of SOM fault neurons, isolated within the 3D volume using a 2D color map. The position of the fault neurons on the color map demonstrates the self-organizing aspect of natural clustering. The fault neurons are displayed over the Top Greenhorn time structure. The fault results form well-defined fault planes and are generally superior to conventional edge detection attributes. The well locations shown are wells which have a full suite of logs and were used for the statistical calibration of the wells to the stratigraphic SOM results, as discussed in the next section.

Figure 9: 8×8 SOM result with only neurons representing faults displayed in the 3D volume displayed over the Top Greenhorn time structure. The neurons have been turned black in the 2D color map for contrast and visualization.

Calibration of SOM results to well logs using bivariate statistics

To calibrate the SOM results at well locations, the SOM neuron classifications were extracted at well locations and converted to measured depth using the same sampling interval as the wireline logs (0.5 ft). To combine the SOM neurons extracted from machine learning results with the reservoir properties discriminated from well logs, a contingency table consisting of two categorical variables, SOM neurons and Reservoir, was first created. Two subcategories of Reservoir, Non reservoir and Net reservoir, were determined by applying simple cutoffs to the petrophysical logs (Leal and others, 2019). The well log samples which pass the given petrophysical cutoffs were counted as Net reservoir; otherwise they were counted as Non reservoir. Figure 10 shows a schematic of data selection for further statistical analysis. Only data which are valid in both the SOM classification and the well logs are selected; others, such as null values and missing points, are excluded from the statistical analysis to prevent mis-estimation.
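The contingency-table construction described above can be sketched as follows, assuming null samples are flagged with -1 in both the depth-matched neuron series and the reservoir flag; the flag convention and the short series below are hypothetical.

```python
import numpy as np

def contingency_table(neurons, net_flag, n_neurons=64):
    """Build the Nx2 contingency table of SOM neuron vs. (Non, Net)
    reservoir from depth-matched samples, skipping nulls in either log."""
    valid = (neurons >= 0) & (net_flag >= 0)        # -1 marks null samples
    table = np.zeros((n_neurons, 2), dtype=int)     # col 0 = Non, col 1 = Net
    for n, f in zip(neurons[valid], net_flag[valid]):
        table[n, f] += 1
    return table

# hypothetical depth-matched series: neuron id per sample, reservoir flag
neurons = np.array([18, 18, 23, 59, 59, 18, -1, 23])
net     = np.array([ 1,  1,  0,  1,  1,  0,  1, -1])
table = contingency_table(neurons, net)
```

Only samples valid in both series enter the table, matching the data-selection schematic of Figure 10.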

The standard Chi-square statistical test of independence was first applied to establish the degree of association between the two categorical variables. This test compares the observed frequencies to the expected frequencies (the values expected if the null hypothesis is true) and determines whether there is a statistically significant relationship between the variables. In this study, the null hypothesis states that the two categorical variables are independent (no association between variables), against the alternative hypothesis that they are dependent (a statistical association between variables exists). If the calculated Pearson Chi-square (Chi2) value is higher than the theoretical Pearson Chi-square value (or the calculated p-value is less than the significance level), the null hypothesis is rejected and the alternative hypothesis is accepted. In other words, the occurrences of SOM neurons and the presence of Reservoir are tested for statistical dependence. Another standard test of independence applied in this study is the G-test, based on the natural logarithm of the likelihood ratio, which measures the difference in proportions between the two categorical variables.
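Both tests are available in standard statistical libraries; a sketch using scipy with a hypothetical 2×2 table is shown below. Passing lambda_="log-likelihood" to the same routine yields the G-test.

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical 2x2 table: rows = neuron group, cols = (Non reservoir, Net)
table = np.array([[30, 5],
                  [4, 41]])

# Pearson Chi-square test of independence
chi2, p, dof, expected = chi2_contingency(table, correction=False)
# G-test: same routine with the log-likelihood-ratio statistic
g, p_g, _, _ = chi2_contingency(table, lambda_="log-likelihood",
                                correction=False)
reject_null = p < 0.05          # dependent: association exists
```

For this illustrative table the Chi-square statistic far exceeds the dof = 1 critical value of 3.84, so the null hypothesis of independence would be rejected.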

Figure 10: The schematic of data selection for statistical analysis

While these two tests of independence indicate whether two nominal variables are dependent, their statistical values don’t measure the strength of the relationship between the variables. Hence, Cramér’s V, which quantifies the association between two variables with a value between 0 and 1, and the Bayes factor, which weighs the evidence of a statistical relationship between variables as a ratio of the likelihood of the data under each of the two hypotheses, were also calculated. Additionally, to avoid violating the assumptions of the tests, as well as the risk of an overly optimistic Chi-square value, the original Nx2 contingency table was converted to a 2×2 contingency table (Table 2), and the odds ratio, which quantifies the strength of the association between two categorical variables as the ratio of the odds of each category of one variable over the other, was calculated.
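Both association measures are simple to compute from the 2×2 table; the sketch below uses a hypothetical table. Cramér's V is sqrt(Chi2 / (n·k)) with k = min(rows, cols) - 1, and the odds ratio of [[a, b], [c, d]] is (a·d)/(b·c).

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V: strength of association between two nominal variables,
    from 0 (none) to 1 (perfect)."""
    chi2 = chi2_contingency(table, correction=False)[0]
    n = table.sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k))

def odds_ratio_2x2(table):
    """Odds ratio of a 2x2 table [[a, b], [c, d]] = (a*d) / (b*c)."""
    a, b = table[0]
    c, d = table[1]
    return (a * d) / (b * c)

table = np.array([[30, 5], [4, 41]])   # hypothetical 2x2 table
v = cramers_v(table)
orr = odds_ratio_2x2(table)
```

For this table the odds ratio is (30·41)/(5·4) = 61.5, and V is well above zero, both indicating a strong association.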

Table 2: The schematic of the converted 2×2 contingency table

Figure 11: Vertical wells available in study area. Wells highlighted in red were used in the statistical analysis. Wells in cross section (Figure 12) are also indicated.

Figure 11 is a map showing the vertical well control in the study area. Seven (7) wells highlighted in red had petrophysical results available for calibration (Holmes and others, 2019). Three wells circled in black in the map are shown in cross section in Figure 12 (A-A’).

The log template in Figure 12 contains 6 tracks, and the cross section is flattened on the Top Niobrara. Track 1 (from left to right) displays Gamma Ray (black) and Vshale (gray); Track 2 is measured depth; Track 3 contains an 8×8 SOM extracted at the well, with the 2D color map shown in the lower left, overlain by the Volume of Calcite (Vcalc); Track 4 is Total Organic Content (TOC); Track 5 is Effective Porosity (PHIE); and Track 6 is Deep Induction Resistivity (ILD). The Buxman 28-12 well is expanded to provide a more detailed view of the visual correlation between the individual logs and SOM neurons.

A visual examination shows that the neuron boundaries tie closely to formation tops and to transitions in the lithology indicator logs (GR, Vcalc, and TOC). The highest neuron numbers (reds and oranges) indicate high Vcalc, and there is a high-TOC zone near the Top Niobrara that corresponds to low neuron numbers (pink, purple) in the 2D color map.

Figure 12: Cross section A-A’ showing the Niobrara formation tops, well logs and SOM neurons. Note that the base Niobrara marker only includes the Smoky Hill member and excludes the Ft. Hays limestone.

Using the visual correlations, the logs selected for the statistical analyses were Vcalc with a cutoff of >0.3 and PHIE with a cutoff of >0.03. Net pay was also calculated using a cutoff of water saturation (Sw) <0.7.
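Applied sample-by-sample to depth-matched logs, these cutoffs reduce to simple boolean masks; the five log values below are hypothetical.

```python
import numpy as np

# hypothetical depth-sampled logs (0.5 ft increment)
vcalc = np.array([0.45, 0.25, 0.35, 0.50, 0.10])
phie  = np.array([0.05, 0.04, 0.02, 0.06, 0.01])
sw    = np.array([0.40, 0.60, 0.50, 0.80, 0.90])

# cutoffs from the text: Vcalc > 0.3, PHIE > 0.03, Sw < 0.7
net_reservoir = (vcalc > 0.3) & (phie > 0.03)
net_pay = net_reservoir & (sw < 0.7)
```

Each True sample counts toward Net reservoir (or Net pay) in the contingency analysis; each False sample counts as Non reservoir.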

The histogram and Chi2 table in Figure 13 are based on the Smoky Hill portion of the Niobrara section and present each of the 64 neurons in the SOM which were encountered by any of the 7 wells. The histogram bars indicate non-reservoir (brown), net reservoir (yellow), and net pay (green), with hit counts posted above each bar. The histograms give quick visual indicators of which neurons are the most prevalent (Neuron 18), as well as indicating specific neurons that are strictly non-reservoir (Neuron 23) and some which are almost entirely reservoir (Neuron 59).

The Chi2 value for this zone is 1172.9 and the theoretical Chi2 value is 61.6. Therefore, the null hypothesis is rejected, and we safely conclude that there is a strong statistical relationship between SOM neurons and the presence or absence of net reservoir (Leal and others, 2019). The converted 2×2 contingency table summarizes the number of neurons in the well samples which contain reservoir and the number which contain non-reservoir (45 and 41, respectively).

Figures 14-16 show the statistical results for each Niobrara chalk bench individually, as well as a 3D view of the neurons associated with each bench, A, B, and C. By extracting the SOMs at well locations, we can use the histograms to view the 3D distribution of the neuron assemblages for each bench as determined at well locations. It is worth noting that, since seismic samples have a much higher areal density than wells, there can be classified samples within a given zone that were never sampled by any well.

Figure 13: Histogram of the Smoky Hill Member of the Niobrara Formation sampled at 7 vertical well locations. The table beneath the histogram lists the SOM neurons used in the calculation, the logs used for petrophysical cutoffs, the confusion matrix, the Chi2 value (calculated and theoretical), likelihood ratio, degrees of freedom, Cramér’s V, P value, and odds ratio. The table also indicates whether the Null hypothesis is accepted or rejected.

Note that the results of the Chi2 tests for the B and C benches (Figures 15 and 16) indicate that the Null hypothesis is accepted. Statistically, the SOM classifications here do not correlate with the well classifications. We investigate further to understand what acceptance of the Null hypothesis means. Note from the histograms in these figures that there are many net reservoir samples but a lack of non-reservoir samples. While this may indicate excellent reservoir quality, it makes for poor statistics. A common rule of practice for the Chi2 test of independence is that at least 80% of cells in large tables have expected counts of 5 or more, and that no cell has a zero expected count (see Pearson's chi-squared test). As the histograms show, both the B and C benches have cells with no non-reservoir samples (in most cases), and several neuron cells have no reservoir samples. This leads to a failure of the test of independence, and it is due to pushing the statistics too far: these restricted zones of interest had too few samples of certain classifications for reliable statistical conclusions. We note, however, that these histograms are still valuable because, along with the 2D color map and a 3D display, reservoir elements are visualized with winning neurons for each bench.
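The rule of practice cited above is easy to check programmatically from the expected-frequency table. The sketch below flags tables, like those of the B and C benches, whose class imbalance violates it; both example tables are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

def chi2_assumptions_ok(table, min_expected=5, frac=0.8):
    """Check the rule of practice: at least `frac` of cells should have
    expected counts >= `min_expected`, and none should be zero."""
    table = np.asarray(table, dtype=float)
    # drop empty rows so margins (and expected counts) are nonzero
    table = table[table.sum(axis=1) > 0]
    expected = chi2_contingency(table, correction=False)[3]
    return bool(np.mean(expected >= min_expected) >= frac
                and not np.any(expected == 0))

# plenty of both classes: assumptions hold
ok = chi2_assumptions_ok([[20, 10], [8, 25], [15, 12]])
# almost no non-reservoir samples (like the B/C benches): assumptions fail
bad = chi2_assumptions_ok([[1, 40], [0, 35], [1, 30]])
```

When the check fails, the histograms and 3D neuron displays remain informative even though the formal test of independence is unreliable.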

Figure 14: Histogram, Chi2 table and 3D view of neurons in the histogram for the A bench. The Null hypothesis is rejected for this zone. The histogram aids in selecting neurons to isolate in the 3D view.

Figure 15: Histogram, Chi2 table and 3D view of neurons in the histogram for the B bench. The histogram aids in selecting neurons to isolate in the 3D view. Note that the Null hypothesis is accepted in this case, but the visual histogram establishes that the zone is almost 100% reservoir.

Figure 16: Histogram, Chi2 table and 3D view of neurons in the histogram for the C bench. The histogram aids in selecting neurons to isolate in the 3D view. Note that the Null hypothesis is accepted in this case, but the visual histogram establishes that the zone is almost 100% reservoir.

Each chalk bench is represented by a unique assemblage of neurons which is likely a reflection of differences in lithology, porosity and thickness. The neurons representing the A and C benches are closer in the 2D color map and share some of the same neurons meaning that in attribute space, the natural clusters represented by the neurons are nearer each other than those representing the B bench. In general, the B bench is a better reservoir with thicker chalk and slightly higher porosity when compared to A and C. By isolating the individual neurons for each bench, new seismic volumes representing each reservoir can be created and used for volumetrics and well planning.


This paper demonstrates that machine learning technologies, orchestrated through a succession of processes, can automatically isolate faults and stratigraphy within a single seismic volume and, further, link these results to well logs in a statistical, quantitative manner. The machine learning results shown provide a rapid, robust, and unbiased fault interpretation which can be used to create either fault plane or fault stick interpretations in a standard interpretation package. The SOM was preceded by principal component analysis to identify prominent instantaneous attributes. Two types of SOMs were created: one using only instantaneous attributes, which highlights stratigraphy, and another using instantaneous plus fault detection results, which highlights both faults and stratigraphy. Going forward, the recommended approach is to incorporate the fault volumes into SOMs to produce a single classification volume. The SOM results resolve the seismic character of the analysis interval (Top Niobrara to Top Greenhorn). In addition to enhanced fault identification, the Niobrara’s brittle chalk benches are easily distinguished from more ductile shale units, and the individual A, B, and C benches each have unique sets of neurons which can be isolated in the classification volume through a 2D color map. Extractions from SOM volumes at wells confirm the statistical relationships between SOM neurons and reservoir properties.


The authors thank Geophysical Insights for use of the Paradise AI workbench to conduct the analysis as well as insightful review and feedback. We thank Geophysical Pursuit, Inc. and Fairfield Geotechnologies for providing the seismic data and the permission to present the data and results.

Digital Formation created the petrophysical logs (Holmes and others, 2019).

Coleou, T., M. Poupon, and A. Kostia, 2003, Unsupervised seismic facies classification: A review and comparison of techniques and implementation, The Leading Edge, 22, 942–953.

Finn, T. M. and Johnson, R. C., 2005, Niobrara Total Petroleum System in the Southwestern Wyoming Province, Chapter 6 of Petroleum Systems and Geologic Assessment of Oil and Gas in the Southwestern Wyoming Province, Wyoming, Colorado, and Utah, USGS Southwestern Wyoming Province Assessment Team, U.S. Geological Survey Digital Data Series DDS–69–D.

Holmes, M., Holmes, A., and Holmes, D., 2019, A Methodology Using Triple-Combo Well Logs to Quantify In-Place Hydrocarbon Volumes for Inorganic and Organic Elements in Unconventional Reservoirs, Recognizing Differing Reservoir Wetting Characteristics – An Example from the Niobrara of the Denver-Julesburg, Colorado, URTeC 903, p. 4986-5001.

Kauffman, E.G., 1977, Geological and biological overview – Western Interior Cretaceous Basin, in Kauffman, E.G., ed., Cretaceous facies, faunas, and paleoenvironments across the Western Interior Basin: The Mountain Geologist, v. 14, nos. 3 and 4, p. 75–99.

Kohonen, T., 2001, Self-Organizing Maps, Third extended edition: Springer Series in Information Sciences, Springer.

Landon, S.M., Longman, M.W., and Luneau, B.A., 2001, Hydrocarbon source rock potential of the Upper Cretaceous Niobrara Formation, Western Interior Seaway of the Rocky Mountain region: The Mountain Geologist, v. 38, no. 1, p. 1–18.

Laudon, C., Qi, J., Rondon, A., Rouis, L., and Kabazi, H., 2021, An enhanced fault detection workflow combining machine learning and seismic attributes yields an improved fault model for Caspian Sea asset, First Break, v. 39, no. 10, p. 53-60.

Laudon, C., Stanley, S., and Santogrossi, P., 2019, Machine Learning Applied to 3D Seismic Data from the Denver-Julesburg Basin Improves Stratigraphic Resolution in the Niobrara, URTeC 337, p. 4353-4369.

Leal, J., Jerónimo, R., Rada, F., Voliroia, R., and Roden, R., 2019, Net reservoir discrimination through multi-attribute analysis at single sample scale, First Break, v. 37, No. 9, p. 77-86.

Qi, J., Lyu, B., Alali, A., Machado, G., Hu, Y., and Marfurt, K. J., 2019, Image processing of seismic attributes for automatic fault extraction: Geophysics, 84, no. 1, O25–O37.

Qi, J., Lyu, B., Wu, X., and Marfurt, K. J., 2020, Comparing convolutional neural networking and image processing seismic fault detection methods: 90th Annual International Meeting, SEG, Expanded Abstracts, 1111-1115.

Qi, J., Machado, G., and Marfurt, K. J., 2017, A workflow to skeletonize faults and stratigraphic features: Geophysics, 82, O57–O70.

Roden, R., Smith, T., and Sacrey, D., 2015, Geologic Pattern Recognition from Seismic Attributes: Principal Component Analysis and Self-Organizing Maps, Interpretation, 3, no. 4, SAE59-SAE83.

Ronneberger, O., P. Fischer, and T. Brox, 2015, U-Net: Convolutional networks for biomedical image segmentation: International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241.

Sonnenberg, S.A., 2015, New reserves in an old field, the Niobrara/Codell resource plays in the Wattenberg Field, Denver Basin, Colorado: First Break, v. 33, no. 12, p. 55-62.

Wu, X., Liang, L., Shi, Y., and Fomel, S., 2019, FaultSeg3D: Using synthetic data sets to train an end-to-end convolutional neural network for 3D seismic fault segmentation: Geophysics, 84, IM35–IM45.

Zhao, T., and Mukhopadhyay, P., 2018, A fault-detection workflow using deep learning and image processing: 88th Annual International Meeting, SEG, Expanded Abstracts, 1966–1970.



    Jan Van De Mortel, Geophysicist

    Jan is a geophysicist with a 30+ year international track record, including 20 years with Schlumberger, 4 years with Weatherford, and recent years actively involved in Machine Learning for both oilfield and non-oilfield applications. His work includes developing solutions and applications around transformer networks, probabilistic Machine Learning, etc. Jan currently works as a technical consultant at Geophysical Insights for Continental Europe, the Middle East, and Asia.

    Mike Powney, Geologist | Perceptum Ltd

    Mike began his career at SRC, a consultancy formed from ECL, where he worked extensively on seismic data offshore West Africa and the North Sea. Mike subsequently joined Geoex MCG, where he provides global G&G technical expertise across their data portfolio. He also heads up the technical expertise within Geoex MCG on CCUS and natural hydrogen. Within his role at Perceptum, Mike leads the Machine Learning project investigating seismic and well data, offshore Equatorial Guinea.

    Tim Gibbons, Sales Representative

    Tim has a BA in Physics from the University of Oxford and an MSc in Exploration Geophysics from Imperial College, London. He started work as a geophysicist for BP in 1988 in London before moving to Aberdeen. There he also worked for Elf Exploration before his love of technology brought a move into the service sector in 1997. Since then, he has worked for Landmark, Paradigm, and TGS in a variety of managerial, sales, and business development roles. Since 2018, he has worked for Geophysical Insights, promoting Paradise throughout the European region.

    Dr. Carrie Laudon, Senior Geophysical Consultant

    Applying Unsupervised Multi-Attribute Machine Learning for 3D Stratigraphic Facies Classification in a Carbonate Field, Offshore Brazil

    We present results of a multi-attribute, machine learning study over a pre-salt carbonate field in the Santos Basin, offshore Brazil. These results test the accuracy and potential of Self-organizing maps (SOM) for stratigraphic facies delineation. The study area has an existing detailed geological facies model containing predominantly reef facies in an elongated structure.

    Carrie Laudon, Senior Geophysical Consultant - Geophysical Insights

    Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in Machine Learning (ML) are transforming seismic analysis. Using these new tools, geoscientists can accomplish the following quickly and effectively:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Generate seismic volumes that capture structural and stratigraphic details

    Join us for ‘Lunch & Learn’ sessions daily at 11:00, where Dr. Carolan (“Carrie”) Laudon will review the theory and results of applying a combination of machine learning tools to obtain the above results. A detailed agenda follows.


    Automated Fault Detection using 3D CNN Deep Learning

    • Deep learning fault detection
    • Synthetic models
    • Fault image enhancement
    • Semi-supervised learning for visualization
    • Application results
      • Normal faults
      • Fault/fracture trends in complex reservoirs

    Demo of Paradise Fault Detection ThoughtFlow®

    Stratigraphic analysis using machine learning with fault detection

    • Attribute Selection using Principal Component Analysis (PCA)
    • Multi-Attribute Classification using Self-Organizing Maps (SOM)
    • Case studies – stratigraphic analysis and fault detection
      • Fault-karst and fracture examples, China
      • Niobrara – Stratigraphic analysis and thin beds, faults
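As a rough sketch of the PCA attribute-selection step listed in the agenda above (synthetic data; the 80% variance cutoff and the max-loading selection rule are illustrative choices, not Paradise's implementation):

```python
import numpy as np

# Hypothetical stack of seismic attributes: one column per attribute,
# one row per (inline, crossline, sample) location.
rng = np.random.default_rng(0)
attributes = rng.normal(size=(10_000, 8))  # 8 candidate attributes

# Standardize, then run PCA via the covariance eigendecomposition.
z = (attributes - attributes.mean(axis=0)) / attributes.std(axis=0)
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep the attributes that load most heavily on the principal
# components explaining (say) 80% of the total variance.
explained = np.cumsum(eigvals) / eigvals.sum()
n_pc = int(np.searchsorted(explained, 0.80)) + 1
loadings = np.abs(eigvecs[:, :n_pc])
selected = np.unique(np.argmax(loadings, axis=0))
print("PCs kept:", n_pc, "selected attribute indices:", selected)
```

The selected subset would then be passed to the SOM classification step rather than feeding all candidate attributes at once.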
    Thomas Chaparro, Senior Geophysicist - Geophysical Insights

    Paradise: A Day in The Life of the Geoscientist

    Over the last several years, the industry has invested heavily in Machine Learning (ML) for better predictions and automation. Dramatic results have been realized in exploration, field development, and production optimization. However, many of these applications have been single-use ‘point’ solutions. There is a growing body of evidence that seismic analysis is best served by a combination of ML tools applied to a specific objective, referred to as ML Orchestration. This talk demonstrates how the Paradise AI workbench applications are used in an integrated workflow to achieve superior results compared to traditional interpretation methods or single-purpose ML products. Using examples that combine ML-based fault detection and stratigraphic analysis, the talk will show how ML orchestration produces value for exploration and field development.

    Aldrin Rondon, Senior Geophysical Engineer - Dragon Oil

    Machine Learning Fault Detection: A Case Study

    An innovative fault pattern detection methodology has been carried out using a combination of machine learning techniques to produce a seismic volume suitable for fault interpretation in a structurally and stratigraphically complex field. Through theory and results, the main objective was to demonstrate that a combination of ML tools can generate superior results in comparison with traditional attribute extraction and data manipulation through conventional algorithms. The ML technologies applied are a supervised deep learning fault classification followed by an unsupervised, multi-attribute classification combining fault probability and instantaneous attributes.

    Thomas Chaparro, Senior Geophysicist - Geophysical Insights

    Thomas Chaparro is a Senior Geophysicist who specializes in training and preparing AI-based workflows. Thomas also has experience as a processing geophysicist in 2D and 3D seismic data processing. He has participated in projects in the Gulf of Mexico, offshore Africa, the North Sea, Australia, Alaska, and Brazil.

    Thomas holds a bachelor’s degree in Geology from Northern Arizona University and a Master’s in Geophysics from the University of California, San Diego. His research focus was computational geophysics and seismic anisotropy.

    Aldrin Rondon, Senior Geophysical Engineer - Dragon Oil

    Bachelor’s Degree in Geophysical Engineering from Central University in Venezuela with a specialization in Reservoir Characterization from Simon Bolivar University.

    Over 20 years of exploration and development geophysics experience with extensive 2D and 3D seismic interpretation, including acquisition and processing.

    Aldrin spent his formative years working on exploration activity in PDVSA Venezuela, followed by a period working for a major international consulting company (Landmark, Halliburton) in the Gulf of Mexico as a G&G consultant. Latterly he worked at Helix in Scotland, UK, on producing assets in the Central and South North Sea. From 2007 to 2021, he worked as a Senior Seismic Interpreter in Dubai, involved in different dedicated development projects in the Caspian Sea.

    Deborah Sacrey, Owner - Auburn Energy

    How to Use Paradise to Interpret Clastic Reservoirs

    The key to understanding clastic reservoirs in Paradise starts with good synthetic ties to the wavelet data. If one is not tied correctly, then it is easy to misinterpret the neurons as reservoir when they are not. Secondly, the workflow should utilize Principal Component Analysis to better understand the zone of interest and the attributes to use in the SOM analysis. An important part of interpretation is understanding “Halo” and “Trailing” neurons as part of the stack around a reservoir or potential reservoir. Deep, high-pressured reservoirs often “leak” or have vertical percolation into the seal. This changes the rock properties enough in the seal to create a “halo” effect in the SOM. Likewise, frequency changes in the seismic can cause a subtle “dim-out”, not necessarily observable in the wavelet data, but enough to create a different pattern in the Earth in terms of these rock property changes. Case histories for halo and trailing neural information include a deep, pressured Chris R reservoir in southern Louisiana, Frio pay in southeast Texas, and AVO properties in the Yegua of Wharton County. Additional case histories include thin-bed pays in Brazoria County, with updated information using CNN fault skeletonization. The interpretation discussion continues with a Wharton County case history on using low probability to help explore Wilcox reservoirs. Lastly, a look at using Paradise to help find sweet spots in unconventional reservoirs like the Eagle Ford, a case study provided by Patricia Santogrossi.

    Mike Dunn, Sr. Vice President of Business Development

    Machine Learning in the Cloud

    Machine Learning in the Cloud will address the capabilities of the Paradise AI Workbench, featuring on-demand access enabled by the flexible hardware and storage facilities available on Amazon Web Services (AWS) and other commercial cloud services. Like the on-premise instance, Paradise On-Demand provides guided workflows to address many geologic challenges and investigations. The presentation will show how geoscientists can accomplish the following workflows quickly and effectively using guided ThoughtFlows® in Paradise:

    • Identify and calibrate detailed stratigraphy using seismic and well logs
    • Classify seismic facies
    • Detect faults automatically
    • Distinguish thin beds below conventional tuning
    • Interpret Direct Hydrocarbon Indicators
    • Estimate reserves/resources

    Attend the talk to see how ML applications are combined through a process called "Machine Learning Orchestration," proven to extract more from seismic and well data than traditional means.

    Sarah Stanley
    Senior Geoscientist

    Stratton Field Case Study – New Solutions to Old Problems

    The Oligocene Frio gas-producing Stratton Field in south Texas is a well-known field. Like many onshore fields, the productive sand channels are difficult to identify using conventional seismic data. However, the productive channels can be easily defined by employing several Paradise modules, including unsupervised machine learning, Principal Component Analysis, Self-Organizing Maps, 3D visualization, and the new Well Log Cross Section and Well Log Crossplot tools. The Well Log Cross Section tool generates extracted seismic data, including SOMs, along the Cross Section boreholes and logs. This extraction process enables the interpreter to accurately identify the SOM neurons associated with pay versus neurons associated with non-pay intervals. The reservoir neurons can be visualized throughout the field in the Paradise 3D Viewer, with Geobodies generated from the neurons. With this ThoughtFlow®, pay intervals previously difficult to see in conventional seismic can finally be visualized and tied back to the well data.

    Laura Cuttill
    Practice Lead, Advertas

    Young Professionals – Managing Your Personal Brand to Level-up Your Career

    No matter where you are in your career, your online “personal brand” has a huge impact on providing opportunity for prospective jobs and garnering the respect and visibility needed for advancement. While geoscientists tackle ambitious projects, publish technical papers, and work hard to advance their careers, the value of these efforts often isn’t realized beyond their immediate professional circle. Learn how to…

    • Communicate who you are to high-level executives in exploration and development
    • Avoid common social media pitfalls
    • Optimize your online presence to best garner attention from recruiters
    • Stay relevant
    • Create content of interest
    • Establish yourself as a thought leader in your given area of specialization
    Laura Cuttill
    Practice Lead, Advertas

    As a 20-year marketing veteran in oil and gas and a serial entrepreneur, Laura has deep experience in bringing technology products to market and growing the sales pipeline. Armed with a marketing degree from Texas A&M, she began her career doing technical writing for Schlumberger and ExxonMobil in 2001. She started Advertas as a co-founder in 2004 and began to leverage her upstream experience in marketing. In 2006, she co-founded the cyber-security software company 2FA Technology. After growing 2FA from a startup to 75% market share in target industries, and the subsequent sale of the company, she returned to Advertas to continue working toward the success of her clients, such as Geophysical Insights. Today, she guides strategy for large-scale marketing programs, manages project execution, cultivates relationships with industry media, and advocates for data-driven, account-based marketing practices.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Statistical Calibration of SOM results with Well Log Data (Case Study)

    The first stage of the proposed statistical method has proven very useful for testing whether there is a relationship between two qualitative variables (nominal or ordinal) or categorical quantitative variables, as used in the fields of health and social sciences. Its application in the oil industry allows geoscientists not only to test dependence between discrete variables, but also to measure their degree of correlation (weak, moderate, or strong). This article shows its application to reveal the relationship between a SOM classification volume of a set of nine seismic attributes (whose vertical sampling interval is three meters) and different well data (sedimentary facies, net reservoir, and effective porosity grouped by ranges). The data were prepared to construct contingency tables, where the dependent (response) variable and independent (explanatory) variable were defined, the observed frequencies were obtained, the frequencies that would be expected if the variables were independent were calculated, and the difference between the two magnitudes was then tested using the Chi-Square contrast statistic. The second stage involves the calibration of the SOM volume extracted along the wellbore path through statistical analysis of the petrophysical properties VCL, PHIE, and SW for each neuron, which allowed the neurons with the best petrophysical values in a carbonate reservoir to be identified.
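The first-stage test described above can be sketched with an illustrative contingency table (the neuron and facies counts below are hypothetical, not the study's data); Cramér's V is one common way to grade the strength of association as weak, moderate, or strong:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are SOM winning neurons,
# columns are facies classes observed along the wellbore.
observed = np.array([
    [30,  5,  2],   # neuron 1
    [ 4, 25,  6],   # neuron 2
    [ 3,  7, 28],   # neuron 3
])

# Chi-Square test of independence: compares observed frequencies with
# the frequencies expected if neurons and facies were independent.
chi2, p_value, dof, expected = chi2_contingency(observed)

# Cramér's V measures the degree of association (0 = none, 1 = perfect).
n = observed.sum()
k = min(observed.shape) - 1
cramers_v = np.sqrt(chi2 / (n * k))

print(f"chi2={chi2:.1f}, p={p_value:.3g}, dof={dof}, V={cramers_v:.2f}")
```

A small p-value rejects independence between winning neuron and facies, and V then quantifies how strong the dependence is.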

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Heather Bedle received a B.S. (1999) in physics from Wake Forest University, and then worked as a systems engineer in the defense industry. She later received an M.S. (2005) and a Ph.D. (2008) from Northwestern University. After graduate school, she joined Chevron and worked as both a development geologist and geophysicist in the Gulf of Mexico before joining Chevron’s Energy Technology Company Unit in Houston, TX. In this position, she worked with the Rock Physics from Seismic team analyzing global assets in Chevron’s portfolio. Dr. Bedle is currently an assistant professor of applied geophysics at the University of Oklahoma’s School of Geosciences. She joined OU in 2018, after instructing at the University of Houston for two years. Dr. Bedle and her student research team at OU primarily work with seismic reflection data, using advanced techniques such as machine learning, attribute analysis, and rock physics to reveal additional structural, stratigraphic and tectonic insights of the subsurface.

    Jie Qi
    Research Geophysicist

    An Integrated Fault Detection Workflow

    Seismic fault detection is one of the most critical procedures in seismic interpretation. Identifying faults is essential for characterizing and finding potential oil and gas reservoirs. Seismic amplitude data exhibiting good resolution and a high signal-to-noise ratio are key to identifying structural discontinuities using seismic attributes or machine learning techniques, which in turn serve as input for automatic fault extraction. Deep learning Convolutional Neural Networks (CNNs) perform well on fault detection without any human-computer interactive work. This study shows an integrated CNN-based fault detection workflow that constructs fault images sufficiently smooth for subsequent automatic fault extraction. The objectives were to suppress noise and stratigraphic anomalies subparallel to reflector dip, and to sharpen faults and other discontinuities that cut reflectors, preconditioning the fault images for subsequent automatic extraction. A 2D continuous wavelet transform-based acquisition footprint suppression method was applied time slice by time slice to suppress wavenumber components, so the CNN fault detection method does not interpret the acquisition footprint as faults. To further suppress cross-cutting noise and sharpen fault edges, a principal component edge-preserving structure-oriented filter was also applied. The conditioned amplitude volume was then fed to a pre-trained CNN model to compute fault probability. Finally, a Laplacian of Gaussian filter was applied to the original CNN fault probability to enhance the fault images. The resulting fault probability volume compares favorably with fault interpretations generated by human interpreters on vertical slices through the seismic amplitude volume.
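The final enhancement step can be illustrated on a toy fault-probability volume with an isotropic Laplacian of Gaussian via scipy.ndimage.gaussian_laplace; the workflow's actual filter is directional, so this sketch (synthetic planar fault, arbitrary sigma) only shows the sharpening principle:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Hypothetical fault-probability volume with one planar "fault":
# high probability on a single crossline slice, low-level noise elsewhere.
prob = np.zeros((32, 32, 32), dtype=float)
prob[:, 15, :] = 1.0
prob += 0.05 * np.random.default_rng(1).random(prob.shape)

# Laplacian of Gaussian: smooth with a Gaussian, then take second
# derivatives.  The response is strongly negative at the centre of a
# bright sheet, so -LoG highlights the fault and suppresses background.
enhanced = -gaussian_laplace(prob, sigma=1.5)
enhanced = np.clip(enhanced, 0.0, None)   # keep only positive responses

# The strongest response sits on the fault plane (crossline index 15).
peak_crossline = int(np.unravel_index(enhanced.argmax(), enhanced.shape)[1])
print(peak_crossline)
```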

    Dr. Jie Qi
    Research Geophysicist

    An integrated machine learning-based fault classification workflow

    We introduce an integrated machine learning-based fault classification workflow that creates fault component classification volumes, greatly reducing the burden on the human interpreter. We first compute a 3D fault probability volume from pre-conditioned seismic amplitude data using a 3D convolutional neural network (CNN). However, the resulting “fault probability” volume also delineates non-fault edges such as angular unconformities, the base of mass transport complexes, and noise such as acquisition footprint. We find that image processing-based fault discontinuity enhancement and skeletonization methods can enhance the fault discontinuities and suppress many of the non-fault discontinuities. Although each fault is characterized by its dip and azimuth, these two properties are discontinuous at azimuths of φ=±180°, and for near-vertical faults the azimuths φ and φ+180° describe the same plane, requiring them to be parameterized as four continuous geodetic fault components. These four fault components, as well as the fault probability, can then be fed into a self-organizing map (SOM) to generate a fault component classification. We find that the final classification result can segment fault sets trending in interpreter-defined orientations and minimize the impact of stratigraphy and noise by selecting different neurons from the SOM 2D neuron color map.
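One way to build continuous components of the kind described above is a doubled-angle encoding of the azimuth together with sine/cosine of the dip. This is an illustrative parameterization only; the paper's exact four geodetic components may differ:

```python
import numpy as np

def fault_components(dip_deg, az_deg):
    """Encode fault dip/azimuth as four continuous components.

    Raw (dip, azimuth) pairs are discontinuous: azimuth wraps at
    ±180°, and a near-vertical fault dipping toward phi is the same
    plane as one dipping toward phi + 180°.  Doubling the azimuth
    removes both discontinuities; sin/cos of the dip keeps the dip
    continuous.  (Illustrative encoding, not the paper's exact one.)
    """
    th, ph = np.radians(dip_deg), np.radians(az_deg)
    return np.array([
        np.cos(2 * ph),
        np.sin(2 * ph),
        np.cos(th),
        np.sin(th),
    ])

# An azimuth and its 180°-rotated twin encode to the same point:
a = fault_components(88.0, 30.0)
b = fault_components(88.0, 210.0)
print(np.allclose(a, b))  # True
```

These four smooth values can be stacked with fault probability as input channels to the SOM classification.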

    Ivan Marroquin
    Senior Research Geophysicist

    Connecting Multi-attribute Classification to Reservoir Properties

    Interpreters rely on seismic pattern changes to identify and map geologic features of importance. The ability to recognize such features depends on the seismic resolution and characteristics of seismic waveforms. With the advancement of machine learning algorithms, new methods for interpreting seismic data are being developed. Among these algorithms, self-organizing maps (SOM) provides a different approach to extract geological information from a set of seismic attributes.

    SOM approximates the input patterns by a finite set of processing neurons arranged in a regular 2D grid of map nodes, such that it classifies multi-attribute seismic samples into natural clusters following an unsupervised approach. Because the classification is unbiased, it can contain both geological information and coherent noise; thus, seismic interpretation evolves into broader geologic perspectives. Additionally, SOM partitions multi-attribute samples without a priori information to guide the process (e.g., well data).
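The SOM classification described above can be sketched with a minimal NumPy training loop (toy data, grid size, and decay schedules are illustrative, not Paradise's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
samples = rng.normal(size=(500, 5))          # 500 multi-attribute samples
rows, cols, dim = 4, 4, samples.shape[1]     # 4x4 grid of neurons
weights = rng.normal(size=(rows * cols, dim))
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)              # decaying learning rate
    radius = 2.0 * (1 - epoch / 20) + 0.5    # decaying neighborhood radius
    for x in samples:
        # winning neuron = closest weight vector in attribute space
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        # neighborhood function acts on the 2D map, not attribute space
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * radius ** 2))
        weights += lr * h[:, None] * (x - weights)

# Classify every sample by its winning neuron
classification = np.argmin(
    ((samples[:, None, :] - weights[None]) ** 2).sum(axis=2), axis=1)
print(classification.shape)  # (500,)
```

Each winning-neuron index plays the role of a class in the output attribute volume, and neighboring neurons on the 2D map hold similar attribute patterns.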

    The SOM output is a new seismic attribute volume, in which geologic information is captured from the classification into winning neurons. Implicit and useful geological information is uncovered through an interactive visual inspection of winning neuron classifications. By doing so, interpreters build a classification model that helps them gain insight into complex relationships between attribute patterns and geological features.

    Despite all these benefits, there are interpretation challenges regarding whether there is an association between winning neurons and geological features. To address these issues, a bivariate statistical approach is proposed. To evaluate this analysis, three case scenarios are presented. In each case, the association between winning neurons and net reservoir (determined from petrophysical or well log properties) at well locations is analyzed. The results show that the statistical analysis not only aids in the identification of classification patterns but, more importantly, that reservoir/non-reservoir classification by classical petrophysical analysis strongly correlates with selected SOM winning neurons. Confidence in interpreted classification features is gained at the borehole, and interpretation is readily extended as geobodies away from the well.

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Gas Hydrates, Reefs, Channel Architecture, and Fizz Gas: SOM Applications in a Variety of Geologic Settings

    Students at the University of Oklahoma have been exploring the uses of SOM techniques for the last year. This presentation will review learnings and results from a few of these research projects. Two projects have investigated the ability of SOMs to aid in the identification of pore-space materials, one trying to qualitatively identify gas hydrates and the other under-saturated gas reservoirs. A third study investigated individual attributes and SOMs for recognizing various carbonate facies in a pinnacle reef in the Michigan Basin. The fourth study took a deep dive into various machine learning algorithms, of which SOMs will be discussed, to understand how much machine learning can aid in the identification of deepwater channel architectures.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Fabian Rada joined Petroleum Oil and Gas Services, Inc (POGS) in January 2015 as Business Development Manager and Consultant to PEMEX. In Mexico, he has participated in several integrated oil and gas reservoir studies. He has consulted with PEMEX Activos and the G&G Technology group to apply the Paradise AI workbench and other tools. Since January 2015, he has been working with Geophysical Insights staff to provide and implement the multi-attribute analysis software Paradise in Petróleos Mexicanos (PEMEX), running a successful pilot test in the Litoral Tabasco Tsimin Xux Asset. Mr. Rada began his career in the Venezuelan National Foundation for Seismological Research, where he participated in several geophysical projects, including seismic and gravity data for micro zonation surveys. He then joined China National Petroleum Corporation (CNPC) as a QC Geophysicist until he became Chief Geophysicist in the QA/QC Department. He then transitioned to a subsidiary of Petróleos de Venezuela (PDVSA) as a member of the QA/QC group and Chief of the Potential Field Methods section. Mr. Rada has also participated in processing land seismic data and marine seismic/gravity acquisition surveys. Mr. Rada earned a B.S. in Geophysics from the Central University of Venezuela.

    Hal Green, Director, Marketing & Business Development - Geophysical Insights

    Introduction to Automatic Fault Detection and Applying Machine Learning to Detect Thin Beds

    Rapid advances in Machine Learning (ML) are transforming seismic analysis. Using a combination of machine learning and deep learning applications, geoscientists apply Paradise to extract greater insights from seismic and well data for these and other objectives:

    • Run fault detection analysis in a few hours, not weeks
    • Identify thin beds down to a single seismic sample
    • Overlay fault images on stratigraphic analysis

    This brief introduction will orient you to the technology, with examples of how machine learning is being applied to automate interpretation while generating new insights from the data.

    Sarah Stanley
    Senior Geoscientist and Lead Trainer

    Sarah Stanley joined Geophysical Insights in October, 2017 as a geoscience consultant, and became a full-time employee July 2018. Prior to Geophysical Insights, Sarah was employed by IHS Markit in various leadership positions from 2011 to her retirement in August 2017, including Director US Operations Training and Certification, the Operational Governance Team, and, prior to February 2013, Director of IHS Kingdom Training. Sarah joined SMT in May, 2002, and was the Director of Training for SMT until IHS Markit’s acquisition in 2011.

    Prior to joining SMT Sarah was employed by GeoQuest, a subdivision of Schlumberger, from 1998 to 2002. Sarah was also Director of the Geoscience Technology Training Center, North Harris College from 1995 to 1998, and served as a voluntary advisor on geoscience training centers to various geological societies. Sarah has over 37 years of industry experience and has worked as a petroleum geoscientist in various domestic and international plays since August of 1981. Her interpretation experience includes tight gas sands, coalbed methane, international exploration, and unconventional resources.

    Sarah holds a Bachelor’s of Science degree with majors in Biology and General Science and minor in Earth Science, a Master’s of Arts in Education and Master’s of Science in Geology from Ball State University, Muncie, Indiana. Sarah is both a Certified Petroleum Geologist, and a Registered Geologist with the State of Texas. Sarah holds teaching credentials in both Indiana and Texas.

    Sarah is a member of the Houston Geological Society and the American Association of Petroleum Geologists, where she currently serves in the AAPG House of Delegates. Sarah is a recipient of the AAPG Special Award, the AAPG House of Delegates Long Service Award, and the HGS President’s award for her work in advancing training for petroleum geoscientists. She has served on the AAPG Continuing Education Committee and was Chairman of the AAPG Technical Training Center Committee. Sarah has also served as Secretary of the HGS and served two years as Editor for the AAPG Division of Professional Affairs Correlator.

    Dr. Tom Smith
    President & CEO

    Dr. Tom Smith received a BS and MS degree in Geology from Iowa State University. His graduate research focused on a shallow refraction investigation of the Manson astrobleme. In 1971, he joined Chevron Geophysical as a processing geophysicist but resigned in 1980 to complete his doctoral studies in 3D modeling and migration at the Seismic Acoustics Lab at the University of Houston. Upon graduation with the Ph.D. in Geophysics in 1981, he started a geophysical consulting practice and taught seminars in seismic interpretation, seismic acquisition and seismic processing. Dr. Smith founded Seismic Micro-Technology in 1984 to develop PC software to support training workshops which subsequently led to development of the KINGDOM Software Suite for integrated geoscience interpretation with world-wide success.

    The Society of Exploration Geophysicists (SEG) recognized Dr. Smith’s work with the SEG Enterprise Award in 2000, and in 2010, the Geophysical Society of Houston (GSH) awarded him an Honorary Membership. Iowa State University (ISU) has recognized Dr. Smith throughout his career with the Distinguished Alumnus Lecturer Award in 1996, the Citation of Merit for National and International Recognition in 2002, and the highest alumni honor in 2015, the Distinguished Alumni Award. The University of Houston College of Natural Sciences and Mathematics recognized Dr. Smith with the 2017 Distinguished Alumni Award.

    In 2009, Dr. Smith founded Geophysical Insights, where he leads a team of geophysicists, geologists and computer scientists in developing advanced technologies for fundamental geophysical problems. The company launched the Paradise® multi-attribute analysis software in 2013, which uses Machine Learning and pattern recognition to extract greater information from seismic data.

    Dr. Smith has been a member of the SEG since 1967 and is a professional member of SEG, GSH, HGS, EAGE, SIPES, AAPG, Sigma XI, SSA and AGU. Dr. Smith served as Chairman of the SEG Foundation from 2010 to 2013. On January 25, 2016, he was recognized by the Houston Geological Society (HGS) as a geophysicist who has made significant contributions to the field of geology. He currently serves on the SEG President-Elect’s Strategy and Planning Committee and the ISU Foundation Campaign Committee for Forever True, For Iowa State.

    Carrie Laudon, Senior Geophysical Consultant - Geophysical Insights

    Applying Machine Learning Technologies in the Niobrara Formation, DJ Basin, to Quickly Produce an Integrated Structural and Stratigraphic Seismic Classification Volume Calibrated to Wells

    This study will demonstrate an automated machine learning approach for fault detection in a 3D seismic volume. The result combines Deep Learning Convolution Neural Networks (CNN) with a conventional data pre-processing step and an image processing-based post processing approach to produce high quality fault attribute volumes of fault probability, fault dip magnitude and fault dip azimuth. These volumes are then combined with instantaneous attributes in an unsupervised machine learning classification, allowing the isolation of both structural and stratigraphic features into a single 3D volume. The workflow is illustrated on a 3D seismic volume from the Denver Julesburg Basin and a statistical analysis is used to calibrate results to well data.

    Ivan Marroquin
    Senior Research Geophysicist

    Iván Dimitri Marroquín is a 20-year veteran of data science research, consistently publishing in peer-reviewed journals and speaking at international conferences. Dr. Marroquín received a Ph.D. in geophysics from McGill University, where he conducted and participated in 3D seismic research projects. These projects focused on the development of interpretation techniques based on seismic attributes and seismic trace shape information to identify significant geological features or reservoir physical properties. Examples of his research work are attribute-based modeling to predict coalbed thickness and permeability zones, combining spectral analysis with coherency imagery techniques to enhance interpretation of subtle geologic features, and implementing a visual-based data mining technique on clustering to match seismic trace shape variability to changes in reservoir properties.

    Dr. Marroquín has also conducted ground-breaking research on seismic facies classification and volume visualization. This led to his development of a visual-based framework that determines the optimal number of seismic facies to best reveal meaningful geologic trends in the seismic data. He proposed seismic facies classification as an alternative to data integration analysis to capture geologic information in the form of seismic facies groups. He has also investigated the usefulness of mobile devices to locate, isolate, and understand the spatial relationships of important geologic features in a context-rich 3D environment. In this work, he demonstrated that mobile devices are capable of performing seismic volume visualization, facilitating the interpretation of imaged geologic features and suggesting that they will eventually allow the visual examination of seismic data anywhere and at any time.

    In 2016, Dr. Marroquín joined Geophysical Insights as a senior researcher, where his efforts have focused on developing machine learning solutions for the oil and gas industry. For his first project, he developed a novel procedure for lithofacies classification that combines a neural network with automated machine learning methods. In parallel, he implemented a machine learning pipeline to derive cluster centers from a trained neural network. The next step in the project is to correlate the lithofacies classification to the outcome of seismic facies analysis. His other research interests include the application of diverse machine learning technologies for analyzing and discerning trends and patterns in data related to the oil and gas industry.

    Dr. Jie Qi
    Research Geophysicist

    Dr. Jie Qi is a Research Geophysicist at Geophysical Insights, where he works closely with product development and geoscience consultants. His research interests include machine learning-based fault detection, seismic interpretation, pattern recognition, image processing, seismic attribute development and interpretation, and seismic facies analysis. Dr. Qi received a BS (2011) in Geoscience from the China University of Petroleum in Beijing, an MS (2013) in Geophysics from the University of Houston, and a Ph.D. (2017) in Geophysics from the University of Oklahoma, Norman. He worked as a Research Assistant at the University of Houston (2011-2013) and the University of Oklahoma (2013-2017), and in 2014 was a summer intern with Petroleum Geo-Services (PGS), Inc., where he worked on semi-supervised seismic facies analysis. From 2017 to 2020, he served as a postdoctoral Research Associate in the Attribute-Assisted Seismic Processing and Interpretation (AASPI) consortium at the University of Oklahoma.

    Rocky R. Roden
    Senior Consulting Geophysicist

    The Relationship of Self-Organization, Geology, and Machine Learning

    Self-organization is the nonlinear formation of spatial and temporal structures, patterns or functions in complex systems (Aschwanden et al., 2018). Simple examples of self-organization include flocks of birds, schools of fish, crystal development, formation of snowflakes, and fractals. What these examples have in common is the appearance of structure or patterns without centralized control. Self-organizing systems are typically governed by power laws, such as the Gutenberg-Richter law of earthquake frequency and magnitude. In addition, the time frames of such systems display a characteristic self-similar (fractal) response, where earthquakes or avalanches, for example, occur over all possible time scales (Baas, 2002).
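    The Gutenberg-Richter law mentioned above is a simple log-linear (power-law) relation between earthquake magnitude and frequency. A minimal sketch, with illustrative (not fitted) constants:

```python
# Gutenberg-Richter law: log10(N) = a - b*M, where N is the expected number
# of earthquakes of magnitude >= M in a given region and time window.
# The constants a and b below are illustrative placeholders, not fitted values.
def expected_count(magnitude, a=5.0, b=1.0):
    """Expected number of events at or above the given magnitude."""
    return 10 ** (a - b * magnitude)
```

With b = 1, each unit increase in magnitude corresponds to a tenfold decrease in event frequency, the self-similar scaling behavior characteristic of self-organizing systems.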

    Nonlinear dynamic systems and ordered structures in the earth are well known and have been studied for centuries. They appear as sedimentary features, layered and folded structures, stratigraphic formations, diapirs, eolian dune systems, channelized fluvial and deltaic systems, and many more (Budd et al., 2014; Dietrich and Jacob, 2018). Each of these geologic processes and features exhibits patterns through the action of undirected local dynamics, a behavior generally termed “self-organization” (Paola, 2014).

    Artificial intelligence, and specifically neural networks, exhibit and reveal self-organization characteristics. The interest in applying neural networks stems from the fact that they are universal approximators for various kinds of nonlinear dynamical systems of arbitrary complexity (Pessa, 2008). A special class of artificial neural network is aptly named the self-organizing map (SOM) (Kohonen, 1982). It has been found that SOM can identify significant organizational structure in the form of clusters from seismic attributes that relate to geologic features (Strecker and Uden, 2002; Coleou et al., 2003; de Matos, 2006; Roy et al., 2013; Roden et al., 2015; Zhao et al., 2016; Roden et al., 2017; Zhao et al., 2017; Roden and Chen, 2017; Sacrey and Roden, 2018; Leal et al., 2019; Hussein et al., 2020; Hardage et al., 2020; Manauchehri et al., 2020). As a consequence, SOM is an excellent machine learning approach that utilizes seismic attributes to help identify self-organization features and define natural geologic patterns not easily seen, or not seen at all, in the data.
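    To make the SOM idea concrete, the sketch below trains a small self-organizing map on multi-attribute sample vectors and assigns each sample to its winning neuron. This is a minimal illustration only, not the implementation used in any of the work cited above; the function names (train_som, classify), the grid size, and all parameter values are our own assumptions.

```python
import numpy as np

def train_som(samples, grid_h=4, grid_w=4, epochs=20, seed=0):
    """Train a minimal self-organizing map on multi-attribute samples.

    samples : (n_samples, n_attributes) array, e.g. a seismic attribute
              vector at each voxel.
    Returns neuron weights of shape (grid_h * grid_w, n_attributes).
    """
    rng = np.random.default_rng(seed)
    n, dim = samples.shape
    weights = rng.normal(size=(grid_h * grid_w, dim))
    # 2D grid position of each neuron, used by the neighborhood function
    coords = np.array([(r, c) for r in range(grid_h)
                       for c in range(grid_w)], dtype=float)
    for epoch in range(epochs):
        # Learning rate and neighborhood radius both decay over training
        lr = 0.5 * (1.0 - epoch / epochs)
        sigma = max(grid_h, grid_w) / 2.0 * (1.0 - epoch / epochs) + 0.5
        for x in samples[rng.permutation(n)]:
            # Winning neuron = closest weight vector (Euclidean distance)
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighborhood around the winner on the 2D grid
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def classify(samples, weights):
    """Assign each sample to the index of its best-matching neuron."""
    d = np.linalg.norm(samples[:, None, :] - weights[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```

After training, the winner indices form the classification volume: samples drawn from distinct natural clusters in attribute space map to distinct groups of neurons, which is what allows clusters to be related back to geologic features.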

    Rocky R. Roden
    Senior Consulting Geophysicist

    Rocky R. Roden started his own consulting company, Rocky Ridge Resources Inc., in 2003 and works with several oil companies on technical and prospect evaluation issues. He is also a principal in the Rose and Associates DHI Risk Analysis Consortium and was Chief Consulting Geophysicist with Seismic Micro-technology. Rocky is a proven oil finder with 37 years in the industry, gaining extensive knowledge of modern geoscience technical approaches.

    Rocky holds a BS in Oceanographic Technology-Geology from Lamar University and a MS in Geological and Geophysical Oceanography from Texas A&M University. As Chief Geophysicist and Director of Applied Technology for Repsol-YPF, his role comprised advising corporate officers, geoscientists, and managers on interpretation, strategy and technical analysis for exploration and development in offices in the U.S., Argentina, Spain, Egypt, Bolivia, Ecuador, Peru, Brazil, Venezuela, Malaysia, and Indonesia. He has been involved in the technical and economic evaluation of Gulf of Mexico lease sales, farmouts worldwide, and bid rounds in South America, Europe, and the Far East. Previous work experience includes exploration and development at Maxus Energy, Pogo Producing, Decca Survey, and Texaco. Rocky is a member of SEG, AAPG, HGS, GSH, EAGE, and SIPES; he is also a past Chairman of The Leading Edge Editorial Board.

    Bob A. Hardage

    Bob A. Hardage received a PhD in physics from Oklahoma State University. His thesis work focused on high-velocity micro-meteoroid impact on space vehicles, which required trips to Goddard Space Flight Center to do finite-difference modeling on dedicated computers. Upon completing his university studies, he worked at Phillips Petroleum Company for 23 years and was Exploration Manager for Asia and Latin America when he left Phillips. He moved to Western Atlas and worked 3 years as Vice President of Geophysical Development and Marketing. He then established a multicomponent seismic research laboratory at the Bureau of Economic Geology and served The University of Texas at Austin as a Senior Research Scientist for 28 years. He has published books on VSP, cross-well profiling, seismic stratigraphy, and multicomponent seismic technology. He was the first person to serve 6 years on the Board of Directors of the Society of Exploration Geophysicists (SEG). His Board service was as SEG Editor (2 years), followed by 1-year terms as First VP, President Elect, President, and Past President. SEG has awarded him a Special Commendation, Life Membership, and Honorary Membership. He wrote the AAPG Explorer column on geophysics for 6 years. AAPG honored him with a Distinguished Service award for promoting geophysics among the geological community.

    Bob A. Hardage

    Investigating the Internal Fabric of VSP data with Attribute Analysis and Unsupervised Machine Learning

    Examination of vertical seismic profile (VSP) data with unsupervised machine learning technology is a rigorous way to compare the fabric of down-going, illuminating, P and S wavefields with the fabric of up-going reflections and interbed multiples created by these wavefields. This concept is introduced in this paper by applying unsupervised learning to VSP data to better understand the physics of P and S reflection seismology. The zero-offset VSP data used in this investigation were acquired in a hard-rock, fast-velocity environment that caused the shallowest 2 or 3 geophones to be inside the near-field radiation zone of a vertical-vibrator baseplate. This study shows how to use instantaneous attributes to backtrack down-going direct-P and direct-S illuminating wavelets to the vibrator baseplate inside the near-field zone. This backtracking confirms that the points-of-origin of direct-P and direct-S are identical. The investigation then applies principal component analysis (PCA) to VSP data and shows that direct-S and direct-P wavefields that are created simultaneously at a vertical-vibrator baseplate have the same dominant principal components. A self-organizing map (SOM) approach is then taken to illustrate how unsupervised machine learning describes the fabric of down-going and up-going events embedded in vertical-geophone VSP data. These SOM results show that a small number of specific neurons build the down-going direct-P illuminating wavefield, and another small group of neurons build up-going P primary reflections and early-arriving down-going P multiples. The internal attribute fabric of these key down-going and up-going neurons is then compared to expose their similarities and differences. This initial study indicates that unsupervised machine learning, when applied to VSP data, is a powerful tool for understanding the physics of seismic reflectivity at a prospect.
This research strategy of analyzing VSP data with unsupervised machine learning will now expand to horizontal-geophone VSP data.
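    The kind of PCA comparison described above can be sketched as follows: arrange windowed traces as rows of a matrix, center them, and take the leading right singular vectors as principal components. Two wavefields can then be compared through the absolute dot product of their dominant components (each component is defined only up to sign). The function name dominant_components and the data layout are our assumptions, not those of the study.

```python
import numpy as np

def dominant_components(traces, n_components=3):
    """Return the leading principal components of a set of traces.

    traces : (n_traces, n_samples) array, e.g. windowed wavefield data
             with one trace per row.
    Returns (components, singular_values); each component is a unit-length
    time series summarizing a dominant waveform shape of the wavefield.
    """
    # Remove the mean trace so PCA captures variation about the average
    centered = traces - traces.mean(axis=0, keepdims=True)
    # Rows of vt are the principal components, ordered by singular value
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components], s[:n_components]
```

For example, if two wavefields are built from the same underlying wavelet, the absolute dot product of their first components will be close to 1, which is the sense in which two wavefields "have the same dominant principal components."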

    Tom Smith
    President and CEO, Geophysical Insights

    Machine Learning for Incomplete Geoscientists

    This presentation covers big-picture machine learning buzz words with humor and unassailable frankness. The goal of the material is for every geoscientist to gain confidence in these important concepts and how they add to our well-established practices, particularly seismic interpretation. Presentation topics include a machine learning historical perspective, what makes it different, a fish factory, Shazam, comparison of supervised and unsupervised machine learning methods with examples, tuning thickness, deep learning, hard/soft attribute spaces, multi-attribute samples, and several interpretation examples. After the presentation, you may not know how to run machine learning algorithms, but you should be able to appreciate their value and avoid some of their limitations.

    Deborah Sacrey
    Owner, Auburn Energy

    Deborah is a geologist/geophysicist with 44 years of oil and gas exploration experience in the Texas and Louisiana Gulf Coast and Mid-Continent areas of the US. She received her degree in Geology from the University of Oklahoma in 1976 and immediately started working for Gulf Oil in their Oklahoma City offices.

    She started her own company, Auburn Energy, in 1990 and built her first geophysical workstation using Kingdom software in 1996. For 18 years, she helped SMT/IHS develop and test the Kingdom software. She specializes in 2D and 3D interpretation for clients in the US and internationally. For the past nine years, she has been part of a team working to study and bring the power of multi-attribute neural analysis of seismic data to the geoscience public, guided by Dr. Tom Smith, founder of SMT. She has become an expert in the use of Paradise software and has seven discoveries for clients using multi-attribute neural analysis.

    Deborah has been very active in the geological community. She is past national President of SIPES (Society of Independent Professional Earth Scientists), past President of the Division of Professional Affairs of AAPG (American Association of Petroleum Geologists), Past Treasurer of AAPG and Past President of the Houston Geological Society. She is also Past President of the Gulf Coast Association of Geological Societies and just ended a term as one of the GCAGS representatives on the AAPG Advisory Council. Deborah is also a DPA Certified Petroleum Geologist #4014 and DPA Certified Petroleum Geophysicist #2. She belongs to AAPG, SIPES, Houston Geological Society, South Texas Geological Society and the Oklahoma City Geological Society (OCGS).

    Mike Dunn
    Senior Vice President Business Development

    Michael A. Dunn is an exploration executive with extensive global experience including the Gulf of Mexico, Central America, Australia, China and North Africa. Mr. Dunn has a proven track record of successfully executing exploration strategies built on a foundation of new and innovative technologies. Currently, Michael serves as Senior Vice President of Business Development for Geophysical Insights.

    He joined Shell in 1979 as an exploration geophysicist and party chief and held positions of increasing responsibility, including Manager of Interpretation Research. In 1997, he participated in the launch of Geokinetics, which completed an IPO on the AMEX in 2007. His extensive experience with oil companies (Shell and Woodside) and the service sector (Geokinetics and Halliburton) has provided him with a unique perspective on technology and applications in oil and gas. Michael received a B.S. in Geology from Rutgers University and an M.S. in Geophysics from the University of Chicago.

    Hal Green
    Director, Marketing & Business Development - Geophysical Insights

    Hal H. Green is a marketing executive and entrepreneur in the energy industry with more than 25 years of experience in starting and managing technology companies. He holds a B.S. in Electrical Engineering from Texas A&M University and an MBA from the University of Houston. He has invested his career at the intersection of marketing and technology, with a focus on business strategy, marketing, and effective selling practices. Mr. Green has a diverse portfolio of experience in marketing technology to the hydrocarbon supply chain – from upstream exploration through downstream refining & petrochemical. Throughout his career, Mr. Green has been a proven thought-leader and entrepreneur, while supporting several tech start-ups.

    He started his career as a process engineer in the semiconductor manufacturing industry in Dallas, Texas and later launched an engineering consulting and systems integration business. Following the sale of that business in the late 80’s, he joined Setpoint in Houston, Texas where he eventually led that company’s Manufacturing Systems business. Aspen Technology acquired Setpoint in January 1996 and Mr. Green continued as Director of Business Development for the Information Management and Polymer Business Units.

    In 2004, Mr. Green founded Advertas, a full-service marketing and public relations firm serving clients in energy and technology. In 2010, Geophysical Insights retained Advertas as their marketing firm. Dr. Tom Smith, President/CEO of Geophysical Insights, soon appointed Mr. Green as Director of Marketing and Business Development for Geophysical Insights, in which capacity he still serves today.

    Hana Kabazi
    Product Manager

    Hana Kabazi joined Geophysical Insights in October of 201, and is now one of our Product Managers for Paradise. Mrs. Kabazi has over 7 years of oil and gas experience, including 5 years at Halliburton – Landmark. During her time at Landmark she held positions as a consultant to many E&P companies, technical advisor to the QA organization, and product manager of Subsurface Mapping in DecisionSpace. Mrs. Kabazi has a B.S. in Geology from the University of Texas at Austin and an M.S. in Geology from the University of Houston.

    Dr. Carrie Laudon
    Senior Geophysical Consultant - Geophysical Insights

    Carolan (Carrie) Laudon holds a PhD in geophysics from the University of Minnesota and a BS in geology from the University of Wisconsin Eau Claire. She has been Senior Geophysical Consultant with Geophysical Insights since 2017 working with Paradise®, their machine learning platform. Prior roles include Vice President of Consulting Services and Microseismic Technology for Global Geophysical Services and 17 years with Schlumberger in technical, management and sales, starting in Alaska and including Aberdeen, Scotland, Houston, TX, Denver, CO and Reading, England. She spent five years early in her career with ARCO Alaska as a seismic interpreter for the Central North Slope exploration team.