Machine Learning Applied to 3D Seismic Data from the Denver-Julesburg Basin Improves Stratigraphic Resolution in the Niobrara

By Carolan Laudon, Sarah Stanley, and Patricia Santogrossi
Published with permission: Unconventional Resources Technology Conference (URTeC 2019)
July 2019

Abstract

Seismic attributes can be both powerful and challenging to incorporate into interpretation and analysis. Recent developments with machine learning have added new capabilities to multi-attribute seismic analysis. In 2018, Geophysical Insights conducted a proof of concept on 100 square miles of multi-client 3D data jointly owned by Geophysical Pursuit, Inc. (GPI) and Fairfield Geotechnologies (FFG) in the Denver-Julesburg Basin (DJ). The purpose of the study was to evaluate the effectiveness of a machine learning workflow to improve resolution within the reservoir intervals of the Niobrara and Codell formations, the primary targets for development in this portion of the basin.

The seismic data are from Phase 5 of the GPI/Fairfield Niobrara program in northern Colorado. A preliminary workflow which included synthetics, horizon picking and correlation of 28 wells was completed. The seismic volume was re-sampled from 2 ms to 1 ms. Detailed well time-depth charts were created for the Top Niobrara, Niobrara A, B and C benches, Fort Hays and Codell intervals. The interpretations, along with the seismic volume, were loaded into the Paradise® machine learning application, and two suites of attributes were generated, instantaneous and geometric. The first step in the machine learning workflow is Principal Component Analysis (PCA). PCA is a method of identifying attributes that have the greatest contribution to the data and that quantifies the relative contribution of each. PCA aids in the selection of which attributes are appropriate to use in a Self-Organizing Map (SOM). In this case, 15 instantaneous attribute volumes, plus the parent amplitude volume, were used in the PCA and eight were selected to use in SOMs. The SOM is a neural network-based machine learning process that is applied to multiple attribute volumes simultaneously. The SOM produces a non-linear classification of the data in a designated time or depth window.

For this study, a 60-ms interval that encompasses the Niobrara and Codell formations was evaluated using several SOM topologies. One of the main drilling targets, the B chalk, is approximately 30 feet thick, making horizontal well planning and execution a challenge for operators. An 8X8 SOM applied to 1 ms seismic data improves the stratigraphic resolution of the B bench. The neuron classification also images small but significant structural variations within the chalk bench, and these variations correlate visually with the geometric curvature attributes. This improved resolution allows for more precise planning of horizontal wells within the bench. The 25-foot-thick C bench and the 17- to 25-foot-thick Codell are also seismically resolved through SOM analysis. Petrophysical analyses from wireline logs run in seven wells within the survey, performed by Digital Formation, together with additional SOM results, show the capability to differentiate a high-TOC upper unit within the A marl, which presents an additional exploration target. Utilizing 2D color maps and geobodies extracted from the SOMs, combined with the petrophysical results, allows calculation of reserves for the individual reservoir units as well as for the newly identified high-TOC target within the A marl.

The results show that a multi-attribute machine learning workflow improves the seismic resolution within the Niobrara reservoirs of the DJ Basin and results can be utilized in both exploration and development.

Introduction and preliminary work

The Denver-Julesburg Basin is an asymmetrical foreland basin that covers approximately 70,000 square miles over parts of Colorado, Wyoming, Kansas and Nebraska. The basin has over 47,000 oil and gas wells with a production history that dates back to 1881 (Higley, 2015). In 2009, operators in the Wattenberg field began to drill and complete horizontal wells in the chalk benches of the Niobrara formation and within the Codell sandstone. As of October 2018, approximately 9500 horizontal wells have been drilled and completed within Colorado and Wyoming in the Niobrara and Codell formations (shaleprofile.com/2019/01/29/niobrara-co-wy-update-through-october-2018).

The transition to horizontal drilling necessitated the acquisition of modern 3D seismic data (long offset, wide azimuth) to properly image the complex faulting and fracturing within the basin. In 2011, Geophysical Pursuit, Inc., in partnership with the former Geokinetics Inc., embarked on a multi-year, multi-client seismic program that ultimately resulted in the acquisition of 1,580 square miles of contiguous 3D seismic data. In 2018, Geophysical Pursuit, Inc. (GPI) and joint-venture partner Fairfield Geotechnologies (FFG) provided Geophysical Insights with seismic data in the Denver-Julesburg Basin to conduct a proof-of-concept evaluation of the effectiveness of a machine learning workflow in improving resolution within the reservoir intervals of the Niobrara and Codell formations, currently the primary targets for development in this portion of the basin. The GPI/FFG seismic data analyzed are 100 square miles from the Niobrara Phase 5 multi-client 3D program in northern Colorado (Figure 1). Prior to the machine learning workflow, a preliminary interpretation workflow was carried out that included synthetics, horizon picking, and well correlation on 28 public wells with digital data. The seismic volume was resampled from 2 ms to 1 ms. Time-depth charts were made with detailed well ties for the Top Niobrara, Niobrara A, B, and C benches, Fort Hays, and Codell. The interpretations, along with the re-sampled seismic amplitude volume, were loaded into the Paradise® machine learning application. The machine learning software has several options for computing seismic attributes, and two suites were selected for the study: standard instantaneous attributes and geometric attributes from the AASPI (Attribute Assisted Seismic Processing and Interpretation) consortium (http://mcee.ou.edu/aaspi/).

Figure 1: Map of the GPI/FFG multi-client program and study area outline

Geologic Setting of the Niobrara and Surrounding Formations

The Niobrara formation is late Cretaceous in age and was deposited in the Western Interior Seaway (Kauffman, 1977). The Niobrara is subdivided into the basal Fort Hays limestone and the Smoky Hill member. The Smoky Hill member is further subdivided into three subunits informally termed Niobrara A, B, and C. These units consist of fractured chalk benches, which are the primary reservoirs, with marls and shales between the benches that comprise source rocks and secondary reservoir targets (Figure 2). The Niobrara unconformably overlies the Codell sandstone and is overlain by the Sharon Springs member of the Pierre shale.

The Codell is also late Cretaceous in age, and unconformably underlies the Fort Hays member of the Niobrara formation. In general, the Codell thins from north to south due to erosional truncation (Sterling, Bottjer and Smith, 2016). In the study area, the thickness of the Codell ranges from 18 to 25 feet. Lewis (2013) inferred an eastern provenance for the Codell with a limited area of deposition or subsequent erosion through much of the DJ Basin. Based upon geochemical analyses, Sterling and others (2016) state that hydrocarbons produced from the Codell are sourced from the Niobrara, primarily the C marl, and the thermal maturity provides evidence of migration into the Codell. The same study found that oil produced from the Niobrara C chalk was generated in-situ.

Figure 2 (Sonnenberg, 2015) shows a generalized stratigraphic column and a structure map for the Niobrara in the DJ Basin along with an outline of the DJ basin and the location of the Wattenberg Field within which the study area is contained.

Figure 2: Outline of the DJ Basin with Niobrara structure contours and generalized stratigraphic column that shows the source rock and reservoir intervals for late Cretaceous units in the basin (from Sonnenberg, 2015).

Figure 3 shows the structural setting of the Niobrara in the study area, as well as types of fractures which can be expected to provide storage capacity and permeability for reservoirs within the chalk benches (Friedman and others, 1992). The study area covers approximately 100 square miles and shows large antiforms on the western edge. The area is normally faulted with most faults trending northeast to southwest. The Top Niobrara time structure also shows extensive small-scale structural relief which is visualized in a curvature attribute volume as shown in Figure 4. This implies that a significant amount of fracturing is present within the Niobrara.

Figure 3: Gross structure of the Niobrara in the study area in seismic two-way travel time. Insets from Friedman and others, 1992, showing predicted fracture types from structural elements. Area shown is approximately 100 square miles.

Figure 4: Most positive curvature, K1 on top Niobrara. The faulting and fractures are complex with both NE-SW and NW-SE trends apparent. Area shown is approximately 100 square miles. Seismic data provided courtesy of GPI and FFG.

Meissner and others (1984) and Landon and others (2001) have stated that the Niobrara formation kerogen is Type-II and oil-prone. Landon and others, and Finn and Johnson (2005) have also stated that the DJ basin contains the richest Niobrara source rocks with TOC contents reaching eight weight percent. Niobrara petroleum production is dependent on fractures in the hard, brittle, carbonate-rich zones. These zones are overlain and/or interbedded with soft, ductile marine shales that inhibit migration and seal the hydrocarbons in the fractured zones.

Why Utilize Machine Learning?

In the study area, the Niobrara to Greenhorn section is represented by approximately 60 milliseconds of two-way travel time in the seismic data. Figure 5 shows an amplitude section through a well within the study area, and Figure 6 is an index map of wells used in the study with the Anderson 11-2 well highlighted in red. The top Niobrara is a well-resolved positive amplitude, or peak, which can be picked on either a normal amplitude section or an instantaneous phase display. The individual units within the Niobrara A bench, A marl, B bench, B marl, C bench, C marl, Fort Hays, and Codell present a significant challenge for an interpreter to resolve using only one or two attributes. The simultaneous use of multiple seismic attributes holds promise for resolving thin beds, and a machine learning approach is one methodology that has been documented to successfully resolve stratigraphy below tuning (Roden and others, 2015; Santogrossi, 2017).

Figure 5: Amplitude section shows the approximately 60 milliseconds between marked horizons which contain the Niobrara and Codell reservoirs. Trace spacing is 110 feet, vertical scale is two-way time in seconds. Seismic data are shown courtesy of GPI and FFG.

Figure 6: Index map of vertical wells used in study. The dashed lines connect well names to well locations. Wells were obtained from the Colorado Oil and Gas Conservation Commission public database.

Machine Learning Data Preparation

The Niobrara Phase 5 3D data used for this study consisted of a 32-bit seismic amplitude volume covering approximately 100 square miles. The survey contained 5.118 seconds of data with a bin spacing of 110 feet. Machine learning classification benefits from the sharper natural clusters of information that result from one level of finer trace sampling, and sample-by-sample classification further improves resolution compared with conventional wavelet-based analysis. Therefore, the data were upsampled by Geophysical Insights from the original 2 ms sample interval to 1 ms. The 1 ms amplitude data were used for seismic attribute generation.
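As an illustration of this kind of trace upsampling, the minimal sketch below doubles the sample rate of a single trace using simple linear interpolation in NumPy. The array names and the choice of interpolator are assumptions for illustration only; the interpolation method actually used in the study is not stated, and a production workflow would typically apply a band-limited (sinc) interpolator.

```python
import numpy as np

def upsample_trace(trace, dt_in=0.002, dt_out=0.001):
    """Resample one seismic trace from dt_in to dt_out (seconds).

    Linear interpolation is used here purely for illustration; a real
    workflow would normally apply band-limited (sinc) interpolation.
    """
    t_in = np.arange(trace.size) * dt_in                 # original time axis
    t_out = np.arange(0.0, t_in[-1] + dt_out, dt_out)    # finer time axis
    return np.interp(t_out, t_in, trace)

# Example: a synthetic 2 ms trace upsampled to 1 ms
trace_2ms = np.random.randn(2560)                        # ~5.118 s of 2 ms data
trace_1ms = upsample_trace(trace_2ms)
print(trace_2ms.size, "->", trace_1ms.size)
```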

Focus should be placed on the time interval that encompasses the geologic units of interest. The time interval selected for this study was 0.5 seconds to 2.2 seconds.

A total of 44 digital wells were obtained, 40 of which were within the seismic survey.

Classification by Principal Component Analysis (PCA)

Multi-dimensional analysis and multi-attribute analysis go hand in hand. Because people are grounded in three-dimensional space, it is difficult to visualize what data look like in a higher-dimensional space. Fortunately, mathematics does not have this limitation, and the results can be readily understood with conventional 2D and 3D viewers.

Working with multiple instantaneous or geometric seismic attributes generates tremendous volumes of data. These volumes contain huge numbers of data points which may be highly continuous, greatly redundant, and/or noisy (Coleou et al., 2003). Principal Component Analysis (PCA) is a linear technique for data reduction that maintains the variation associated with the larger data sets (Guo and others, 2009; Haykin, 2009; Roden and others, 2015). PCA can separate attribute types by frequency, distribution, and even character. PCA is used to determine which attributes to use and which may be ignored because of their very low impact on the neural network solutions.

Figure 7 illustrates the analysis of a data cluster in two directions offset by 90 degrees. The first principal component (eigenvector 1) aligns with the longest axis of the data cluster, and the second principal component (eigenvector 2) captures the variation perpendicular to the first. As noted in the diagram, each eigenvector is associated with an eigenvalue that indicates how much of the data variance it explains.
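A minimal sketch of this idea, assuming the attribute volumes have already been flattened into a samples-by-attributes matrix, is shown below using scikit-learn. Every array name and size here is hypothetical; each row stands for one seismic sample and each column for one attribute.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: one row per seismic sample, one column per attribute
# (e.g. envelope, sweetness, relative acoustic impedance, thin bed, ...).
attribute_matrix = np.random.randn(100_000, 16)

# Standardize so no single attribute dominates purely because of its units.
X = StandardScaler().fit_transform(attribute_matrix)

pca = PCA()
pca.fit(X)

# Eigenvalues: how much variance each principal component (eigenvector) explains.
print("explained variance ratio:", pca.explained_variance_ratio_[:4])

# Eigenvector loadings: the contribution of each attribute to the first component.
print("attribute loadings, eigenvector 1:", pca.components_[0])
```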

Figure 7: Two-attribute data set demonstrating the concept of PCA

Eigenvectors and eigenvalues from inline 1683 were consistently used for the Principal Component Analysis because line 1683 bisects the deepest well in the study area, in which the entire pre-Niobrara, Niobrara, Codell, and post-Niobrara section was penetrated.

PCA results for the first two eigenvectors for the interval Top Niobrara to Top Greenhorn are shown in Figure 8. Results show the most significant attributes in the first eigenvector are Sweetness, Envelope, and Relative Acoustic Impedance; each contributes approximately 60% of the maximum value for the eigenvector. PCA results for the second eigenvector show Thin Bed and Instantaneous Frequency are the most significant attributes. Figure 9 shows instantaneous attributes from the first eigenvector (sweetness) and second eigenvector (thin bed indicator) extracted near the B chalk of the Niobrara. The table shown in Figure 9 lists the instantaneous attributes that PCA indicated contain the most significance in the survey and the eigenvector associated with the attribute. This selection of attributes comprises a ‘recipe’ for input to the Self-Organizing Maps for the interval Niobrara to Greenhorn.
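The 50%-of-maximum selection rule described for Figure 8 can be sketched as a simple threshold on the eigenvector loadings, as below. The loading values and attribute names are made up for illustration and are not the study's actual PCA output.

```python
import numpy as np

# Hypothetical loadings for one eigenvector (one value per attribute), for illustration.
attribute_names = np.array([
    "Sweetness", "Envelope", "Relative Acoustic Impedance",
    "Thin Bed", "Instantaneous Frequency", "Instantaneous Phase",
    "Hilbert", "Instantaneous Q",
])
eigenvector_1 = np.array([0.58, 0.57, 0.55, 0.12, 0.10, 0.05, 0.08, 0.03])

def select_attributes(eigenvector, names, cutoff=0.5):
    """Keep attributes whose |loading| is at least `cutoff` times the maximum |loading|."""
    loading = np.abs(eigenvector)
    return list(names[loading >= cutoff * loading.max()])

print(select_attributes(eigenvector_1, attribute_names))
# -> ['Sweetness', 'Envelope', 'Relative Acoustic Impedance']
```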

Figure 8: Eigenvalue charts for Eigenvectors 1 and 2 from PCA for Top Niobrara to Top Greenhorn. Attributes that contribute more than 50% of the maximum were selected for input to SOM

Figure 9: Instantaneous attributes near the Niobrara B chalk. These are prominent attributes in Eigenvectors 1 and 2. On the right of the figure is a list of eight selected attributes for SOM analysis. Seismic data is shown courtesy of GPI and FFG.

Self-Organizing Maps

Teuvo Kohonen, a Finnish mathematician, invented the concept of the Self-Organizing Map (SOM) in 1982 (Kohonen, 2001). Self-Organizing Maps employ unsupervised neural networks to reduce very high-dimensional data to a scale that can be easily visualized (Roden and others, 2015). Another important aspect of SOMs is that every seismic sample is used as input to the classification, as opposed to wavelet-based classification.

Figures 10 and 11 illustrate classification by SOM. Within the 3D seismic survey, samples are first organized into attribute points with similar properties, called natural clusters, in attribute space. New, empty multi-attribute samples, named neurons, are then introduced. The SOM neurons seek out the natural clusters of like characteristics in the seismic data and produce a 2D mesh that can be illustrated with a two-dimensional color map.

Figure 10: Example SOM classification of two attributes into 4 clusters (neurons)

In other words, the neurons “learn” the characteristics of a data cluster through an iterative process (epochs) of cooperative, then competitive, training. When the learning is complete, each unique cluster is assigned to a neuron number and every seismic sample is classified (Smith, 2016).

Figure 11: Illustration of how SOM works with 3D seismic volumes

Note that the two-dimensional color map in Figure 11 shows an 8X8 topology. Topology matters: the finer the topology of the two-dimensional color map, the finer the data clusters associated with each neuron become. For example, an 8X8 topology distributes 64 neurons throughout an attribute set, while a 12X12 topology distributes 144 neurons. Finer topologies help to resolve variations in lithology, porosity, and other reservoir characteristics. Although there is no theoretical limit to a two-dimensional map topology, experience has shown that there is a practical limit beyond which additional neurons add little geological resolution. Conversely, a coarser neuron topology is associated with much larger data clusters and helps to define structural features. For the Niobrara project, an 8X8 topology appeared to give the best stratigraphic resolution for the instantaneous attributes, and a 5X5 topology resolved the geometric attributes most effectively.
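For readers who want to experiment with the general technique, the sketch below trains an 8X8 SOM on a hypothetical samples-by-attributes matrix using the open-source MiniSom package. It is a stand-in for, not a reproduction of, the commercial workflow described here; the input data, training parameters, and neuron numbering are all assumptions.

```python
import numpy as np
from minisom import MiniSom  # open-source SOM implementation (pip install minisom)

# Hypothetical input: one row per seismic sample, one column per selected attribute.
samples = np.random.randn(50_000, 8)

# 8X8 topology -> 64 neurons, matching the topology discussed in the text.
som = MiniSom(8, 8, input_len=samples.shape[1], sigma=1.0, learning_rate=0.5)
som.random_weights_init(samples)
som.train_random(samples, num_iteration=10_000)    # iterative training "epochs"

# Each sample is classified by its winning neuron; flatten (row, col) to a neuron number 1..64.
winners = np.array([som.winner(s) for s in samples])
neuron_number = winners[:, 0] * 8 + winners[:, 1] + 1
print(np.bincount(neuron_number)[1:])              # samples assigned to each neuron
```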

SOM Results for the Survey and their Interpretation

The SOM topology selected to best resolve the sub-Niobrara stratigraphy from the eight instantaneous attributes is an 8X8 hexagonal grid, which yields 64 individual neurons. The SOM interval selected was Top Niobrara to Top Greenhorn. The next sequence of figures highlights the improved resolution provided by the SOM when compared to the original amplitude data. Figure 12 shows a north-south inline through the survey and through the Rotharmel 11-33 well, one of the wells selected for petrophysical analysis. The original amplitude data is shown along with the SOM result for the interval.

Figure 12: North-South inline showing the original amplitude data (upper) and the 8X8 SOM result (lower) from Top Niobrara through Greenhorn horizons. Seismic data is shown courtesy of GPI and FFG.

The next image, Figure 13, zooms into the SOM and highlights the correlation with lithology from petrophysical analysis. The B chalk is noted by a stacked pattern of yellow-red-yellow neurons, with the red representing the maximum carbonate content within the middle of the chalk bench.

Figure 13: 8X8 Instantaneous SOM through Rotharmel 11-33 with well log composite. The B bench, highlighted in green on the wellbore, ties the yellow-red-yellow sequence of neurons. Seismic data is shown courtesy of GPI and FFG.

The SOM shows the sweet spot within the B chalk and reveals a fair amount of small-scale structural relief. These results aid in the resolution of structural offset within the reservoir away from well control, which is critical for staying in a 20- to 30-foot zone when drilling horizontally. Each classified sample is 1 ms thick in two-way time, which, converted to depth, equates to roughly 7 feet.
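The 7-foot figure follows from simple arithmetic, sketched below. The interval velocity is not stated in the text; the value used here is an assumption chosen only to show the conversion and because it is consistent with the quoted thickness.

```python
# Back-of-the-envelope check of the ~7 ft sample thickness quoted above.
# A 1 ms sample of two-way time covers 0.5 ms of one-way travel; at an
# assumed Niobrara interval velocity of ~14,000 ft/s (not given in the text):
interval_velocity_ft_per_s = 14_000
two_way_time_s = 0.001
thickness_ft = interval_velocity_ft_per_s * two_way_time_s / 2
print(thickness_ft)   # 7.0 ft
```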

Figure 14 shows the K2 curvature attribute co-rendered with the SOM results in vertical sections. The Rotharmel 11-33 is at the intersection of the vertical sections. The curvature is extracted at the middle of the B chalk and shows good agreement with the SOM. The entire B bench is represented by only 5-6 ms of seismic data.

Figure 14: Most negative curvature, K2 rendered at the middle of the B chalk. Vertical sections are an 8X8 instantaneous SOM Top Niobrara to Top Greenhorn. Seismic data is shown courtesy of GPI and FFG.

A Marl Results

Seven wells within the survey were sent to a third party for petrophysical analysis (Figure 15). The analysis identified zones of interest within the Niobrara marls which are typically considered source rocks. The calculations show a high TOC zone in the upper A marl which the analysis identifies as shale pay (Figure 16). A seismic cross-section of the 8X8 instantaneous SOM (Figure 16) through the three wells depicted shows that this zone is well imaged. The neurons can be isolated and volumetric calculations derived from the representative neurons.
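Conceptually, isolating neurons and deriving volumetrics from them can be sketched as below. The classification array, the neuron numbers chosen to represent the A marl, and the per-sample thickness are all hypothetical placeholders, not values from the study.

```python
import numpy as np

# Hypothetical SOM classification volume: neuron number (1..64) per sample,
# with shape (inlines, crosslines, time samples).
som_class = np.random.randint(1, 65, size=(200, 200, 60))

# Neurons interpreted to represent the high-TOC upper A marl (hypothetical picks).
target_neurons = [23, 24, 31]

mask = np.isin(som_class, target_neurons)

# Gross rock volume: voxel count x bin area x per-sample thickness.
bin_area_ft2 = 110.0 * 110.0      # 110 ft bin spacing
sample_thickness_ft = 7.0         # ~7 ft per 1 ms classified sample (see above)
gross_rock_volume_ft3 = mask.sum() * bin_area_ft2 * sample_thickness_ft

acre_ft = gross_rock_volume_ft3 / 43_560.0    # 43,560 ft^2 per acre
print(f"gross rock volume: {acre_ft:,.0f} acre-ft")
```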

Figure 15: Index map for wells used in petrophysical analysis (in red)

Figure 16: Petrophysical results and SOM for three wells in the study area. The TOC curve (Track 12) and Shale pay curve (Track 10), highlighted in yellow, indicate the Upper A marl is both a rich source rock and a potential shale reservoir. Seismic data is shown courtesy of GPI and FFG.

Codell Results

The Codell sandstone in general, and within the study area, shows more heterogeneity in reservoir properties than the Niobrara chalk benches. The petrophysical analysis of the seven wells shows net pay ranging from zero to three feet, with gross thickness ranging from 17 feet to 25 feet. The SOM results reflect this heterogeneity, resolve the Codell gross interval throughout most of the study area, and thus can be useful for horizontal well planning.

Figures 17 and 18 show inline 60 through a well with the Top Niobrara to Greenhorn 8X8 SOM results. The 2D color map has been manipulated to emphasize the lower interval, from approximately the base Niobrara through the Codell. Figure 18 zooms into the well and shows the specific neurons associated with the Codell interval. Figure 19 shows a N-S traverse through four wells, again with the Codell interval highlighted through use of a 2D color map. The western and southwestern areas of the survey show a much more continuous character in the classification, with only two neurons representing the Codell interval (6 and 48). Figure 20 shows both the N-S traverse and a crossline through the anomaly.

Figure 17: Instantaneous 8X8 SOM, Top Niobrara to Greenhorn. Seismic data is shown courtesy of GPI and FFG.

Figure 18: Detailed look at the Codell portion of the SOM at the Haythorn 4-12 with GR in background. The 2D color map shows how neurons can be isolated to show a specific stratigraphic interval. Seismic data is shown courtesy of GPI and FFG.

Figure 19: Traverse through 4 wells in the western part of the study area showing the isolation of the Codell sandstone within the SOM. The south west part of the line shows the Codell being represented by only 2 neurons (6 and 48). The colormap can be interrogated to determine which attributes contribute to any given neuron. Seismic data is shown courtesy of GPI and FFG.

Figure 20: View of the SW Codell anomaly where the neuron stacking pattern changes to two neurons only (6 and 47). Seismic data is shown courtesy of GPI and FFG.

Figure 21: 3D view of neurons isolated from the SOM in the Codell interval. The areas where red is prominent and continuous show the extent of Codell represented by neurons 6 and 47 only. Also, an area in the eastern part of the study is outlined. The Codell is not represented in this area by the six neurons highlighted in the 2D color map. Seismic data is shown courtesy of GPI and FFG.

Unfortunately, vertical well control was not available through this southwestern anomaly. To examine the extent of individual neurons within the SOM at Codell level, the next image, Figure 21, shows a 3D view of the isolated Codell neurons. The southwest anomaly is apparent as well as similar anomalies in the northern portion of the survey. What is also immediately apparent is that in the east-central portion of the survey, the Codell is not represented by the six neurons (6,7,47, 48, 55, 56) previously used to isolate it within the volume. Figure 22 takes a closer look at the SOM results through this area and also utilizes the original amplitude data. Both the SOM and the amplitude data show a change in character throughout the entire section, but the SOM results only change significantly in the lower Niobrara to Greenhorn portion of the interval.

The machine learning application has a feature in which individual neurons can be queried for statistics on how individual seismic attributes contribute to the cluster that makes up the neuron. Queries were run on all of the neurons within the Codell; Figure 23 shows the results for neuron 6, one of the two neurons characteristic of the southwestern Codell anomaly, and for neuron 61 in the area where the SOM changes significantly. Neuron 6 has equal contributions from Instantaneous Frequency, Hilbert, Thin Bed, and Relative Acoustic Impedance. Neuron 61 shows Instantaneous Q as the top attribute, which is consistent with the interpretation of the section being structurally disturbed or highly fractured.
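The per-neuron attribute query can be approximated outside the application with a simple statistic computed over each neuron's cluster, as sketched below. The contribution measure, attribute names, and data are assumptions and not the software's actual calculation.

```python
import numpy as np

def neuron_attribute_contributions(samples, neuron_number, names, target):
    """Rank attribute contributions for one neuron.

    samples       : (n_samples, n_attributes) standardized attribute values
    neuron_number : (n_samples,) winning-neuron index for each sample
    names         : attribute names, one per column
    target        : neuron number to query

    Contribution is taken here as the mean |standardized value| of each
    attribute over the samples in that neuron's cluster -- a simple proxy
    for the statistics the commercial application reports.
    """
    cluster = samples[neuron_number == target]
    contribution = np.abs(cluster).mean(axis=0)
    order = np.argsort(contribution)[::-1]
    return [(names[i], float(contribution[i])) for i in order]

# Hypothetical data for illustration
names = ["Inst. Frequency", "Hilbert", "Thin Bed", "Rel. Acoustic Impedance",
         "Sweetness", "Envelope", "Inst. Q", "Amplitude"]
samples = np.random.randn(10_000, len(names))
neuron_number = np.random.randint(1, 65, size=10_000)

for name, value in neuron_attribute_contributions(samples, neuron_number, names, target=6)[:4]:
    print(f"{name:<26}{value:.2f}")
```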

Figure 22: West-East crossline through two wells showing the SOM and amplitude data through the blank area from Figure 21. The seismic character and classification results differ significantly in this portion of the survey for the lower Niobrara, Fort Hays and Codell. This area is interpreted to be highly fractured. Seismic data is shown courtesy of GPI and FFG.

Figure 23: Example of attribute details for individual neurons (6 and 61). This shows the contribution of individual attributes to the neuron.

Structural Attributes

The machine learning workflow can also be applied to geometric attributes. The PCA and SOM need to be run separately from the instantaneous attributes because PCA assumes a Gaussian distribution of the attributes. This assumption does not hold for geometric attributes, but the SOM process assumes no particular distribution and thus still finds patterns in the data. To produce a structural SOM, four attributes were selected from PCA: Curvature_K1, Energy Ratio Similarity, Texture Entropy, and Texture Homogeneity. These were combined with the original amplitude data to generate SOMs over the Top Niobrara to Top Greenhorn interval. Several SOM topologies were generated with the geometric attributes, and a 5X5 topology yielded good results. Figure 24 shows the geometric SOM results at the Top Niobrara, B bench, and Codell levels. The Top Niobrara level shows major faults, but not nearly as much structural disturbance as the mid-Niobrara B bench or the Codell level. The eastern part of the survey, where the instantaneous classification changed, also shows significant differences between the B bench and the Codell and agrees with the interpretation that this is a highly fractured area for the lower Niobrara and Codell. The B bench appears more structurally disrupted than the Top Niobrara but shows fewer areal changes compared to the Codell. Pressure and production data could help confirm how these features relate to reservoir quality.

Figure 24: 5X5 structural SOM at three levels. There are significant changes both vertically and areally.

Conclusions

Seismic multi-attribute analysis has always held the promise of improving interpretations via the integration of attributes that respond to subsurface conditions such as stratigraphy, lithology, faulting, fracturing, fluids, and pressure. Machine learning augments traditional interpretation and attribute analysis by utilizing attribute space to simultaneously classify suites of attributes into sample-based, high-dimensional clusters that are subsequently visualized and further interpreted within the 3D seismic survey. 2D color maps aid in their interpretation and visualization.

In the DJ Basin, we have resolved the primary reservoir targets, the Niobrara chalk benches and the Codell formation, represented within approximately 60 ms of two-way time, to the level of one to five neurons, which corresponds to approximately 7 to 35 feet in thickness. Structural SOM classifications using a suite of geometric attributes better image the complex faulting and fracturing and its variations throughout the reservoir interval. The classification volumes are designed to aid in drilling target identification, reserves calculations, and horizontal well planning.

Acknowledgements

The authors would like to thank their colleagues at Geophysical Insights for their valuable insight and suggestions and Digital Formation for the petrophysical analysis. We also thank Geophysical Pursuit, Inc. and Fairfield Geotechnologies for use of their data and permission to publish this paper.

References

Coleou, T., M. Poupon, and A. Kostia, 2003, Unsupervised seismic facies classification: A review and comparison
of techniques and implementation: The Leading Edge, 22, 942–953, doi: 10.1190/1.1623635.

Finn, T. M. and Johnson, R. C., 2005, Niobrara Total Petroleum System in the Southwestern Wyoming Province, Chapter 6 of Petroleum Systems and Geologic Assessment of Oil and Gas in the Southwestern Wyoming Province, Wyoming, Colorado, and Utah, USGS Southwestern Wyoming Province Assessment Team, U.S. Geological Survey Digital Data Series DDS–69–D.

Guo, H., K. J. Marfurt, and J. Liu, 2009, Principal component spectral analysis: Geophysics, 74, no. 4, p. 35–43.

Haykin, S., 2009, Neural networks and learning machines, 3rd ed.: Pearson.

Kauffman, E.G., 1977, Geological and biological overview— Western Interior Cretaceous Basin, in Kauffman, E.G., ed., Cretaceous facies, faunas, and paleoenvironments across the Western Interior Basin: The Mountain Geologist, v. 14, nos. 3 and 4, p. 75–99.

Kohonen, T., 2001, Self-organizing maps: 3rd extended edition, Springer, Springer Series in Information Sciences, Vol. 30.

Landon, S.M., Longman, M.W., and Luneau, B.A., 2001, Hydrocarbon source rock potential of the Upper Cretaceous Niobrara Formation, Western Interior Seaway of the Rocky Mountain region: The Mountain Geologist, v. 38, no. 1, p. 1–18.

Lewis, R.K., 2013, Stratigraphy and Depositional Environments of the Late Cretaceous (Late Turonian) Codell Sandstone and Juana Lopez Member of the Carlile Shale, Southeast Colorado: Colorado School of Mines MS Thesis, 190 p.

Longman, M.W., Luneau, B.A., and Landon, S.M., 1998, Nature and distribution of Niobrara lithologies in the Cretaceous Western Interior Seaway of the Rocky Mountain Region: The Mountain Geologist, v. 35, no. 4, p. 137–170.

Luneau, B., Longman, M., Kaufman, P., and Landon, S., 2011, Stratigraphy and Petrophysical Characteristics of the Niobrara Formation in the Denver Basin, Colorado and Wyoming, AAPG Search and Discovery Article #50469.

Meissner, F.F., Woodward, J., and Clayton, J.L., 1984, Stratigraphic relationships and distribution of source rocks in the greater Rocky Mountain region, in Woodward, J., Meissner, F.F., and Clayton, J.L., eds., Hydrocarbon source rocks of the greater Rocky Mountain region: Rocky Mountain Association of Geologists Guidebook, p. 1–34.

Molenaar, C.M., and Rice, D.D., 1988, Cretaceous rocks of the Western Interior Basin, in Sloss, L.L., ed., Sedimentary cover-North American craton, U.S.: Geological Society of America, The Geology of North America, v. D–2, p. 77–82.

Roden, R., Smith, T., and Sacrey, D., 2015, Geologic pattern recognition from seismic attributes: Principal component analysis and self-organizing maps, Interpretation, Vol. 3, No. 4, p. SAE59-SAE83.

Santogrossi, P., 2017, Classification/Corroboration of Facies Architecture in the Eagle Ford Group: A Case Study in Thin Bed Resolution, URTeC 2696775, doi: 10.15530/urtec-2017-2696775.

Smith, T., 2016, Why SOM is an Appealing Learning Machine, Internal Geophysical Insights Paper.

Sonnenberg, S.A., 2015. Geologic Factors Controlling Production in the Codell Sandstone, Wattenberg Field, Colorado. URTeC Paper 2145312 presented at the Unconventional Resources Technology Conference, San Antonio, TX, July 20-22.

Sonnenberg, S.A., 2015. New reserves in an old field, the Niobrara/Codell resource plays in the Wattenberg Field, Denver Basin, Colorado. EAGE First Break, v. 33, p. 55-62.

Sterling, R., Bottjer, R. and Smith, K., 2016, Codell SS, A review of the Northern DJ oil resource play Laramie County, WY and Weld, County, CO, AAPG Search and Discovery Article #10754.

Machine Learning with Deborah Sacrey – AAPG Energy Insights Podcast

One of our very own esteemed geoscientists, Deborah Sacrey, sat down with Vern Stefanic to talk about Machine Learning in the energy industry.

Transcript of podcast

VERN STEFANIC: Hi, I’m Vern Stefanic. And welcome to another edition of AAPG’s Podcast, Energy Insights, where we talk to the leaders and the people of the energy industry who are making things happen and bringing the world more energy, the energy that it needs to keep going.

Today, we’re very happy to have as our guest Deborah Sacrey of Auburn Energy, a consultant working outside of Houston, Texas, and somebody who’s got experience working in the energy industry for a long time, who’s been through many changes in the industry, and who keeps evolving to find new ways to make herself valuable to the profession and to the industry going forward. Deborah, welcome, and thank you for being here with us today.

DEBORAH SACREY: I’m delighted. Thank you so much for inviting me.

VERN STEFANIC: Well one reason– we’re doing this from the AAPG Annual Convention in San Antonio, where you have been one of the featured speakers. And you were talking about what the future of petroleum geologist is going to be. Which was perfect, because you found yourself somebody who’s had to sort of evolve and change your focus, the focus of your career several times. Could you tell us a little bit about your journey?

DEBORAH SACREY: Well, what I found is that every time there’s a major technology change, a paradigm shift in the way we look at data, there are consequences to that. There are benefits and consequences. If you’re not prepared to accept that technology change, you get left behind. And it makes it hard for you to find a job.

But if you accept that technology change, and embrace it, and learn about it, then you can morph yourself into a very successful career, until the next time the technology changes again. So you’re constantly– you have ups and downs, and you’re constantly morphing yourself and evolving yourself to embrace new technology changes as they come along.

VERN STEFANIC: Which is important, because we live in– in the industry right now, there have been rapid change, which we’re going to talk about some of the places where we’re going on that. But because of that we always hear stories of a lot of petroleum geologists or professional geoscientists, who find themselves awkwardly lost in the shuffle somewhere and not knowing what to do. What I love about your message is that it’s the understanding of how technology is driving all of this and being aware of that. You’ve experienced this several times in your career, is that right?

DEBORAH SACREY: Oh, absolutely. I’ve been– I got out of school in 1976. So this makes my 43rd year in the career. What I’ve gone through is, we had a digital transformation. When I got out of school, we were always looking at paper seismic records. And I went to work for Gulf, and they’d be rolled up every night, and they’d be put in a tube, and they’d be locked behind the door. And during the day, you’d go check them out and take them to your office and work on paper.

So the digital transformation is when we moved from paper records into workstations, where we could actually scale the seismic, and can see the seismic, and blow it up, and do different things with it. That was a huge transformation. When we went from paper logs, which a lot of people still use today, to something that you can see on the screen, and blow it up and see all the nuances, of the information in the well.

Then the next major transformation came in the middle ’90s, when software was available for the smaller clients and independents to start looking at 3D. So we transition from the 2D world into the 3D world. And that was huge. I mean, it’s amazing to me that there’s any space left on the Gulf Coast that doesn’t have a 3D covering yet at this point. And now, we’re getting ready to go into another major transformation. And it’s all about data.

VERN STEFANIC: People have been told that, I think, maybe a couple of times, that oh, yeah, I understand that I have to change, and I have to be aware of it. But they really not have the skills or the insight on how to make some of those changes happen. I’m just curious, in your career do you recall some of the ways that you had to– just some of the realizations that you had. First, not just that you had to change, but some of the steps that you did to make it happen.

DEBORAH SACREY: Well, I think a lot of it, and what was important to me, is when I could see the changes coming. I had to educate myself. I didn’t have a resource to go to. I wasn’t working for a big company that had– that would send you off to classes. So it’s a matter of doing the research and understanding the technology that you’re facing.

And what I told people yesterday in my talk is that the AAPG Convention or any convention is an excellent resource for free education. Go out and look what the vendors are trying to do. And that’s your insight into how the technology is changing. And people can walk around the convention center. And they can listen to presentations for free and try to get an inkling of what’s getting ready to happen.

VERN STEFANIC: That’s great advice. That’s great insight. By the way, I’ve noticed that too in myself, in walking around the convention floor. That’s where I heard many things for the first time. Thought, oh, when I was at the Explorer, thought, oh, I ought to do a story about this.

DEBORAH SACREY: Right, exactly. And I think it’s especially important for the young people, the early career people, or the kids coming out of school, to understand that their lives will not always be with one company. When I went to work for Gulf in 1976, the gentleman who interviewed me on campus, looked me in the eye and said, Gulf will be your place of employment for life.

And I referenced yesterday a really good book. It’s called Who Moved my Cheese. So our cheese in our careers is constantly getting moved. And we have to be able to accept that and adapt to it. And you can only do it through education.

VERN STEFANIC: When did you realize, or was there a moment, when you saw that, oh, big data is important? Because it seems very obvious that we would see that. But I’m not sure everybody clicks on to– not just big data is going to be the name of the game, but this is what I’m going to do about it. What was your experience with that?

DEBORAH SACREY: Well, in 2011, a gentleman whom I’d been working with for a long time, Tom Smith, and he was the– Dr. Smith was the guy who started SMT, or Kingdom. When he sold Kingdom, he started doing research into ways that we could extend our understanding of seismic data and do applications using seismic attributes.

So he brought me in to help work with the developers to make this software geoscience friendly. Because our brains are wired a little bit differently from other people, other industries. And the technology he was using is machine learning, but it’s cluster analysis and it’s pattern recognition. Now what’s happened in the big data world is, all these companies, all the majors, all the large independents, have been drilling wells for years. And a lot of times, they’ve just been shoving the logs, and the drilling reports, and everything in a file.

So that’s all this paper that’s out there, that they’re just now starting to digitize, but you have to get it in a way that’s easily retrievable. So the big data– every time you drill a well now, you’re generating 10 gigabytes of information. And think about the wells that are being drilled, and how that information is being organized, and how it’s being put in– so if you use a keyword, like 24% porosity, you can go in and retrieve information on wells where they’ve determined that there’s 24% porosity in reservoirs. And that’s some of the data transformation we’re getting ready to go through, to make it accessible, because there’s so much out there.

VERN STEFANIC: OK, so understanding that having data is the key to having more knowledge, is the key to actually being a success, not just with your company, but also with actually bringing energy to the world.

DEBORAH SACREY: Right, I mean it’s not getting any easier to find. So we’re having to use advanced methods of technology and understanding the data information to be able to find the more subtle traps.

VERN STEFANIC: So– and I don’t know if this is too much of a jump– in fact, we can fill in the blanks if it is– but today we’re talking about machine learning and its applications and implications for the energy industry. And I know you are somebody who has been a little bit ahead of the curve on this one, in recognizing the need to understand what this is all about. So for some of us who don’t understand like you do, could you talk a little bit about that?

DEBORAH SACREY: Well, I can be specific about the technology that I’ve been using for the last five years.

VERN STEFANIC: OK, yeah.

DEBORAH SACREY: And like I said earlier, Tom brought me in to help guide the developers. But the basics behind the software I’ve been using is that instead of looking at the wavelet in the seismic data, I’m parsing the data down to a sample level. I’m looking at sample statistics.

So if your wavelet, if you’re in low frequency data, and your wavelet’s 30 milliseconds between the trough and the peak, I may be looking at 2 millisecond sample intervals. So I’m parsing the data 15 times as densely as you would if you were looking at the wavelet. What this allows me to do is, it allows me to see very thin beds at depth. Because I’m not looking at conventional seismic tuning anymore. I’m looking at statistics and cluster analysis that comes back to the workstation. Because every sample has an X, Y, and Z. So it has its place in the earth. And then I’m looking at true lithology patterns, like we’ve never been able to see before.

VERN STEFANIC: OK, well, never been able to see before is a remarkable statement. Are we talking about a game change for the profession at this point?

DEBORAH SACREY: Most definitely. I give a lot of talks on case histories. I’ve probably worked on a hundred 3Ds in the last five years, all around the world. And I have one example in the Eagle Ford set. The Eagle Ford is only a 30 millisecond thick formation in most of Texas. And so you’re looking at a peak and a trough, and you’re looking at two zero crossings. That’s four sample points.

But when you’re looking at that kind of discrete information that I can get out of it, I can see all six facies strats from the clay base, up through the brittle zone and the ash top, right underneath the Austin Chalk. Well, it’s the brittle zone, in the middle, where the higher TOC is, and what people are trying to stay in when they’re drilling the Eagle.

If you can define that and you can isolate it, then you can geosteer better. You can get better results from your well. But you’re talking about something that’s only 150 feet deep. And you’re trying to discern a very special part of that, where the hydrocarbons are really located. And so that’s going to be a game-changer.

VERN STEFANIC: So what would you say to people– but this is still you– you’re bringing your skills, your talents, everything that has brought you to this point in your career, and applying them with this new technology. What about the criticism, which may be completely invalid, but what about the criticism that people say that because of machine learning, we’re headed to a place where the very nature of the jobs of the professional geologists are going to be threatened? Is that a possibility? Is that something that we should even think about?

DEBORAH SACREY: Well, I think it’s a possibility. And why I say that is because a lot of the machine learning applications that are being developed out there are really improving efficiency, especially when it comes to the field and monitoring pressure gauges and things like that. They’re doing it remotely. And they’re getting into the artificial intelligence aspect of it. But the efficiency that you can bring to the field and operations will get rid of some of the people who go down and check the wells every day. Because they’ll be able to monitor it– they’ll know when rest is getting to one day or whether bad weather comes through and they’ve had problems. They’ll be able to know immediately without having to send someone out to the field.

Now, when you relate it to the geosciences, especially on the seismic side, you’re going to still need the experience. Because it’s a matter of maybe having a different way to view the data, but someone’s still going to have to interpret it. Someone’s still going to have to have knowledge about the attributes to use in the first place. That takes a person with some experience. And it’s not something that’s usually learned overnight. So I think some aspects of it will improve efficiency in the industry and get rid of some jobs, and then some other aspects will not get rid of jobs.

VERN STEFANIC: Well, let me go down a difficult path then in our conversation, we all are aware of demographics within the industry, within the profession. And so, let’s start first with the baby boomers. Right, so there is an example. We have a case history of how we can approach that. From your perspective, though, it’s actually just being aware that change is necessary.

DEBORAH SACREY: Yeah, and you know, there are a lot of people out there who are in denial. And they think they can keep on working that same square of earth all the rest of their career. And those will be the guys who get left behind. One of the things I tried to emphasize yesterday is that old dogs can learn new tricks. And this is not that hard.

This kind of technology has been on Wall Street, it’s been in the medical industry for years. We’re just now getting to the point where we’re applying it to the oil and gas industry, to the energy industry.

VERN STEFANIC: Is there any advice that you could give to maybe younger, mid-career, the Gen X, or even kind of YPs, who are just now getting into the industry, special things they should be looking for or trying to do to enhance their careers?

DEBORAH SACREY: Well, certainly, if they’re working for a large company, American companies have already started making the shift to machine learning or artificial intelligence data mining. I’ve done a lot of work with Anadarko. They put a whole business unit together some years ago specifically to look into methodologies to improve the efficiencies on how they can get more out of this data. All the big companies have research departments. They’re getting into it.

I have a friend who just got a PhD several years ago in data mining. And she said her company is looking at and screening all the new resumes coming in for any kind of statistics, any kind of data mining technology, or any kind of advanced machine learning. They need a reference showing that the candidate has had exposure to it; that’s becoming a discriminator for finding a job in some instances, because they’re all making the digital transformation to efficiency and machine learning.

VERN STEFANIC: I don’t want to– I don’t want to overlook what might be obvious to some people, but I’d like to put it on the record. Auburn Energy, you, in recognizing and embracing the need to evolve along with the industry, as technology changes, you’ve had a little bit of success at this.

DEBORAH SACREY: I’ve been very lucky. I’ve been blessed in life with the successes I’ve had. I was getting bored with mapping and 3D. And so several years ago, at the time I was involved in this, I started looking into different attributes, and what kind of responses to the rock properties you get from these different attributes, which is why this machine learning technology came along at the right time for me. It’s just a natural progression. I’m not looking at one attribute at a time, I’m looking at 10 at the same time.

And in doing so, and in looking at the earth in a different way, I’ve been able to pick up some nuances that people have missed and had discoveries. I had a two million barrel field I found a couple years ago. I had an 80 bcf field that I found a couple years ago. I just had a discovery in Mississippi and in Oklahoma, in southern Oklahoma. And we’re expanding our lease activities to pick up on what I’m seeing in my technology there. So not only has it revitalized my love of digging in and looking at seismics, but it proves to be profitable as well.

VERN STEFANIC: So let me go ahead and maybe put you on the spot. Don’t mean to be– but because you are a person who’s gone through many stages of the industry, what can you see happening next? Do you have any kind of crystal ball look out to– or even just to say this is what needs to happen next to help you do your job better.

DEBORAH SACREY: Well, certainly the message I’m trying to get out to people of all ages is that this paradigm shift that we’re getting ready to go through, and you hear it over and over at all the conventions, is going to substantially change their lives. And we need to get on the train before it leaves the station or they will be left behind. And each time we’ve had a major paradigm shift, there have been some people who’ve been reluctant or didn’t want to get outside their box. And they wake up five or six years later and don’t recognize the world. Their world has completely changed and people have moved on.

And each time that happens, you can lose a certain part of the brain power and people who have knowledge of one county and one piece of Texas, because they just didn’t want to make– they didn’t want to bother themselves. And so I’m trying to get the word out to people that this change is coming. And it’s something that can be easily embraced and you should not be afraid of it, and just get on the bandwagon. I mean, it’s not that hard.

The technology and the software that I’m seeing being developed out there, it’s a piece of cake to use. You just have to have some knowledge of the seismic or logs. There’s a technology called convolutional neural network and it’s being used to map faults through 3D. So you may go in and map the faults in 10 lines out of 80 blocks of actual data. And the machine goes in and learns what certain kinds of faults look like from the 10 lines that you’ve mapped. And it will finish mapping all the faults in the whole bunch of blocks in the offshore data.

VERN STEFANIC: Wow.

DEBORAH SACREY: It’s scary. But fault picking is like one of the most boring things we can do in seismic data. So if you can find– if you can find an animal out there that will crawl through that data, and pick out the faults for you, that’s wonderful. That saves tons of human hours. And it’s good for stratigraphy. You give it some learning lines where you’ve mapped out blocks of clastics or carbonates, or turbidites, something like that, and it learns from that. And then it goes and maps that stratigraphy anywhere it can find it in the 3D. It’s very unique. You need to start educating yourself about what’s out there.

VERN STEFANIC: Well, you’re absolutely right. I try to in the world that I work with, but I’m always impressed that in the world that you’re part of, that there’s so much change that keeps coming. And it’s just fast. And it’s again, and again, and again. And the ability that people, such as yourself, has had to embrace that and to use technology in the new way– in fact, I’m going to guess– I’m going to guess that– have you offered suggestions to anyone, who are developing technology, have you got to the point where you say, you know what we need, we need now for it to do this?

DEBORAH SACREY: I’m still on a development team for the software I’m using.

VERN STEFANIC: You’re on the development– OK.

DEBORAH SACREY: Yeah, so we’re forward thinking two years down the road what kind of– what can we anticipate the technology needs to be doing two to three years down the road.

VERN STEFANIC: Can you talk about any of that?

DEBORAH SACREY: Well, I mean, I can. And certainly, this CNN technology is part of it. We’ve been approached by several larger companies to put this into our software. And they’re willing to help pay for the effort to do that, because it would take their departments too long. We’re too far advanced where we are. And it would take them too long to recreate the wheel.

So they’d rather support us to get the technology that they need, that they need for their data. And the beauty of all this is that you don’t have to shoot anything new. You don’t even necessarily have to reprocess it. You’re just getting more out of it than you’ve ever been able to get before.

VERN STEFANIC: That is beauty.

DEBORAH SACREY: It is cool. Because a lot of people don’t have the money to go shoot more data or reprocess it. They just want to take advantage of the stuff they already have in their archives.

VERN STEFANIC: When people talk about the industry being a sunset industry, I think they’re not giving it proper credit for what’s going on.

DEBORAH SACREY: Oh, I see this totally revitalizing– one of the examples I showed yesterday was the two million barrel oil field that I found in Brazoria County, Texas. And it’s from a six-foot-thick offshore bar at 10,800 feet. Well, that reflector is so weak– I mean, it’s not a bright spot. It doesn’t show up. People would have ignored it for drilling for a long time, and have ignored it for a long time.

But I can prove that there’s two million barrels of oil in that six-foot thick sand that covers about 1,900 acres. So how many of those little things that we’ve ignored for years and years are still out there to be found. That’s what I’m saying. This technology is going to give us another little push. It’ll make us more efficient in the unconventional world. It will definitely help us find the subtle traps in the conventional world.

VERN STEFANIC: So there you have it. If you’re part of this profession now, you’re part of this industry now, don’t be discouraged. There’s actually great work to be done.

DEBORAH SACREY: Oh, there’s a lot of stuff left to find. We haven’t begun to quit finding yet. I mean, it’s just like Oklahoma– I grew up in Oklahoma. And for years and years, all the structural traps had been drilled, and all the plays had been gone through. And everyone said, well, Oklahoma’s had it. And we turn around and there’s a new play. And you turn around, there’s the unconventional. There’s the SCOOP and STACK. There’s all the Woodford. There’s all these things that reenergized Oklahoma. And it’s been poked and punched for over 100 years. And people are still finding stuff. So we just– we just have to put better glasses on.

VERN STEFANIC: Yeah.

DEBORAH SACREY: We have to sharpen our goggles. And get in there and see what’s left.

VERN STEFANIC: Great words. Deb, thanks for this conversation today.

DEBORAH SACREY: You’re welcome.

VERN STEFANIC: Thank you. I hope it’s a conversation that we’ll continue. We’ll continue having this talk, because it sounds like there’s going to be new chapters added to the story.

DEBORAH SACREY: Oh, yeah, and I’m really– you know, I’m 66 years old, but I’m not ready to give it up yet. I’m having way too much fun.

VERN STEFANIC: That’s great. Thank you.

DEBORAH SACREY: You’re welcome.

VERN STEFANIC: And thank you for being part of this edition of Energy Insights, the AAPG Podcast, coming to you on the AAPG website, but now coming to you on platforms wherever you want to look. Look up a AAPG Energy Insights, we’ll be there. And we’re glad you’re part of it. But for now, thanks for listening.

The Oil Industry’s Cyber-Transformation Is Closer Than You Think

By David Brown, Explorer Correspondent
Published with permission: AAPG Explorer
June 2019

The concept of digital transformation in the oil and gas industry gets talked about a lot these days, even though the phrase seems to have little specific meaning.

So, will there really be some kind of extensive cyber-transformation of the industry over the next decade?

“No,” said Tom Smith, president and CEO of Geophysical Insights in Houston.

Instead, it will happen “over the next three years,” he predicted.

Machine Learning

Much of the industry’s transformation will come from advances in machine learning, as well as continuing developments in computing and data analysis going on outside of oil and gas, Smith said.

Through machine learning, computers can develop, modify and apply algorithms and statistical models to perform tasks without explicit instructions.

“There’s basically been two types of machine learning. There’s ‘machine learning’ where you are training the machine to learn and adapt. After that’s done, you can take that little nugget (of adapted programming) and use it on other data. That’s supervised machine learning,” Smith explained.

“What makes machine learning so profoundly different is this concept that the program itself will be modified by the data. That’s profound,” he said.

Smith earned his master’s degree in geology from Iowa State University, then joined Chevron Geophysical as a processing geophysicist. He later left to complete his doctoral studies in 3-D modeling and migration at the University of Houston.

In 1984, he founded the company Seismic Micro-Technology, which led to development of the KINGDOM software suite for integrated geoscience interpretation. Smith launched Geophysical Insights in 2009 and introduced the Paradise analysis software, which uses machine learning and pattern recognition to extract information from seismic data.

He’s been named a distinguished alumnus of both Iowa State and the University of Houston College of Natural Sciences and Mathematics, and received the Society of Exploration Geophysicists Enterprise Award in 2000.

Smith sees two primary objectives for machine learning: replacing repetitive tasks with machines – essentially, doing things faster – and discovery, or identifying something new.

“Doing things faster, that’s the low-hanging fruit. We see that happening now,” Smith said.

Machine learning is “very susceptible to nuances of the data that may not be apparent to you and I. That’s part of the ‘discovery’ aspect of it,” he noted. “It isn’t replacing anybody, but it’s the whole process of the data changing the program.”

Most machine learning now uses supervised learning, which employs an algorithm and a training dataset to “teach” improvement. Through repeated processing, prediction and correction, the machine learns to achieve correct outcomes.

“Another aspect is that the first, fundamental application of supervised machine learning is in classification,” Smith said.

But, “in the geosciences, we’re not looking for more of the same thing. We’re looking for anomalies,” he observed.

Multidimensional Analysis

The next step in machine learning is unsupervised learning. Its primary goal is to learn more about datasets by modeling the structure or distribution of the data – “to self-discover the characteristics of the data,” Smith said.

“If there are concentrations of information in the data, the unsupervised machine learning will gravitate toward those concentrations,” he explained.

As a result of changes in geology and stratigraphy, patterns are created in the amplitude and attributes generated from the seismic response. Those patterns correspond to subsurface conditions and can be understood using machine-learning and deep-learning techniques, Smith said.

Human seismic interpreters can see only in three dimensions, he noted, but the patterns resulting from multiple seismic attributes are multidimensional. He used the term “attribute space” to distinguish from three-dimensional seismic volumes.

In geophysics, unsupervised machine learning was first used to analyze multiple seismic attributes and classify these patterns, which emerge as concentrations of neurons in attribute space.

“We see the effectiveness of (using multiple) attributes to resolve thin beds in unconventional plays and to expose direct hydrocarbon indicators in conventional settings. Existing computing hardware and software now routinely handle multiple-attribute analysis, with 5 to 10 being typical numbers,” he said.

Machine-learning and deep-learning technology, such as the use of convolutional neural networks (CNN), has important practical applications in oil and gas, Smith noted. For instance, the “subtleties of shale-sand fan sequences are highly suited” to analysis by machine learning-enhanced neural networks, he said.

“Seismic facies classification and fault detection are just two of the important applications of CNN technology that we are putting into our Paradise machine-learning workbench this year,” he said.

A New Commodity

Just as a seismic shoot or a seismic imaging program has monetary value, algorithms enhanced by machine-learning systems are also valuable for the industry, Smith explained.

In the future, “people will be able to buy, sell and exchange machine-learning changes in algorithms. There will be industry standards for exchanging these ‘machine-learning engines,’ if you will,” he said.

As information technology continues to advance, those developments will affect computing and data analysis in oil and gas. Smith said he’s been pleased to see the industry “embracing the cloud” as a shared computing-and-data-storage space.

“An important aspect of this is, the way our industry does business and the way the world does business are very different,” Smith noted.

“When you look at any analysis of Web data, you are looking at many, many terabytes of information that’s constantly changing,” he said.

In a way, the oil and gas industry went to school on very large sets of seismic data when huge datasets were not all that common. Now the industry has some catching up to do with today’s dynamic data-and-processing approach.

For an industry accustomed to thinking in terms of static, captured datasets and proprietary algorithms, that kind of mind-shift could be a challenge.

“There are two things we’re going to have to give up. The first thing is giving up the concept of being able to ‘freeze’ all the input data,” Smith noted.

“The second thing we have to give up is, there’s been quite a shift to using public algorithms. They’re cheap, but they are constantly changing,” he said.

Moving the Industry Forward

Smith will serve as moderator of the opening plenary session, “Business Breakthroughs with Digital Transformation Crossing Disciplines,” at the upcoming Energy in Data conference in Austin, Texas.

Presentations at the Energy in Data conference will provide information and insights for geologists, geophysicists and petroleum engineers, but its real importance will be in moving the industry forward toward an integrated digital transformation, Smith said.

“We have to focus on the aspects of machine-learning impact not just on these three, major disciplines, but on the broader perspective,” Smith explained. “The real value of this event, in my mind, has to be the integration, the symbiosis of these disciplines.”

While the conference should appeal to everyone from a company’s chief information officer on down, recent graduates will probably find the concepts most accessible, Smith said.

“Early-career professionals will get it. Mid-managers will find it valuable if they dig a little deeper into things,” he said.

And whether it’s a transformation or simply part of a larger transition, the coming change in computing and data in oil and gas will be one of many steps forward, Smith said.

“Three years from now we’re going to say, ‘Gosh, we were in the Dark Ages three years ago,’” he said. “And it’s not going to be over.”

Applications of Machine Learning for Geoscientists – Permian Basin

Applications of Machine Learning for Geoscientists – Permian Basin

By Carrie Laudon
Published with permission: Permian Basin Geophysical Society 60th Annual Exploration Meeting
May 2019

Abstract

Over the last few years, because of the increase in low-cost computer power, individuals and companies have stepped up investigations into the use of machine learning in many areas of E&P. For the geosciences, the emphasis has been in reservoir characterization, seismic data processing, and to a lesser extent interpretation. The benefits of using machine learning (whether supervised or unsupervised) have been demonstrated throughout the literature, and yet the technology is still not a standard workflow for most seismic interpreters. This lack of uptake can be attributed to several factors, including a lack of software tools, clear and well-defined case histories, and training. Fortunately, all these factors are being mitigated as the technology matures. Rather than being treated as an adjunct to the traditional interpretation methodology, machine learning techniques should be considered the first step in the interpretation workflow.

By using statistical tools such as Principal Component Analysis (PCA) and Self-Organizing Maps (SOM), a multi-attribute 3D seismic volume can be “classified”. The PCA reduces a large set of seismic attributes, both instantaneous and geometric, to those that are the most meaningful. The output of the PCA serves as the input to the SOM, a form of unsupervised neural network, which, when combined with a 2D color map, facilitates the identification of clustering within the data volume. When the correct “recipe” is selected, the clustered or classified volume allows the interpreter to view and separate geological and geophysical features that are not observable in traditional seismic amplitude volumes. Seismic facies, detailed stratigraphy, direct hydrocarbon indicators, faulting trends, and thin beds are all features that can be enhanced by using a classified volume.

The tuning-bed thickness or vertical resolution of seismic data traditionally is based on the frequency content of the data and the associated wavelet. Seismic interpretation of thin beds routinely involves estimation of tuning thickness and the subsequent scaling of amplitude or inversion information below tuning. These traditional below-tuning-thickness estimation approaches have limitations and require assumptions that limit accuracy. Below-tuning effects are a result of the interference of wavelets, which is a function of the geology as it changes vertically and laterally. However, numerous instantaneous attributes exhibit effects at and below tuning, but these are seldom incorporated in thin-bed analyses. A seismic multi-attribute approach employs self-organizing maps to identify natural clusters from combinations of attributes that exhibit below-tuning effects. These results may exhibit changes as thin as a single sample interval in thickness. Self-organizing maps employed in this fashion analyze associated seismic attributes on a sample-by-sample basis and identify the natural patterns or clusters produced by thin beds. Examples of this approach to improve stratigraphic resolution in both the Eagle Ford play and the Niobrara reservoir of the Denver-Julesburg Basin will be used to illustrate the workflow.

Introduction

Seismic multi-attribute analysis has always held the promise of improving interpretations via the integration of attributes which respond to subsurface conditions such as stratigraphy, lithology, faulting, fracturing, fluids, pressure, etc. The benefits of using machine learning (whether supervised or unsupervised) have been demonstrated throughout the literature, and yet the technology is still not a standard workflow for most seismic interpreters. This lack of uptake can be attributed to several factors, including a lack of software tools, clear and well-defined case histories, and training. This paper focuses on an unsupervised machine learning workflow utilizing Self-Organizing Maps (Kohonen, 2001) in combination with Principal Component Analysis to produce classified seismic volumes from multiple instantaneous attribute volumes. The workflow addresses several significant issues in seismic interpretation: it analyzes large amounts of data simultaneously; it determines relationships between different types of data; it is sample-based and produces high-resolution results; and it reveals geologic features that are difficult to see in conventional approaches.

Principal Component Analysis (PCA)

Multi-dimensional analysis and multi-attribute analysis go hand in hand. Because individuals are grounded in three-dimensional space, it is difficult to visualize what data in a higher-dimensional space look like. Fortunately, mathematics doesn’t have this limitation, and the results can be easily understood with conventional 2D and 3D viewers.

Working with multiple instantaneous or geometric seismic attributes generates tremendous volumes of data. These volumes contain huge numbers of data points which may be highly continuous, greatly redundant, and/or noisy (Coleou et al., 2003). Principal Component Analysis (PCA) is a linear technique for data reduction which maintains the variation associated with the larger data sets (Guo and others, 2009; Haykin, 2009; Roden and others, 2015). PCA can separate attribute types by frequency, distribution, and even character. PCA technology is used to determine which attributes may be ignored due to their very low impact on neural network solutions and which attributes are most prominent in the data. Figure 1 illustrates the analysis of a data cluster in two directions, offset by 90 degrees. The first principal component (eigenvector 1) analyzes the data cluster along its longest axis. The second principal component (eigenvector 2) analyzes the data cluster variations perpendicular to the first principal component. As stated in the diagram, each eigenvector is associated with an eigenvalue, which shows how much variance there is in the data along that direction.


Figure 1. Two attribute data set illustrating the concept of PCA
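To make the PCA step concrete, the following is a minimal Python sketch, assuming the attribute volumes have already been flattened into a samples-by-attributes matrix and standardized; the data here are random placeholders rather than survey data, and the attribute count simply mirrors the 19 attributes mentioned below.

```python
# Minimal PCA sketch on a multi-attribute sample matrix (placeholder data).
# Each row is one seismic sample; each column is one attribute volume
# (e.g., envelope, sweetness, relative acoustic impedance), z-scored so
# that no single attribute dominates the variance.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_attributes = 100_000, 19            # e.g., 19 instantaneous attributes
X = rng.normal(size=(n_samples, n_attributes))   # stand-in for real attribute samples

X_std = StandardScaler().fit_transform(X)        # standardize each attribute
pca = PCA().fit(X_std)

# The explained-variance ratios are the normalized eigenvalues: they show
# how much of the total variance each eigenvector accounts for.
for i, ratio in enumerate(pca.explained_variance_ratio_[:6], start=1):
    print(f"Eigenvector {i}: {100 * ratio:.1f}% of total variance")

# pca.components_[i] holds the attribute loadings for eigenvector i + 1;
# large absolute loadings flag the attributes that contribute most.
```

The loadings in `pca.components_` play the same role as the attribute contributions reviewed in the eigenspectrum step that follows.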

The next step in PCA analysis is to review the eigenspectrum to select the most prominent attributes in a data set. The following example is taken from a suite of instantaneous attributes over the Niobrara formation within the Denver-Julesburg Basin. Results for eigenvector 1 are shown, with three attributes (sweetness, envelope, and relative acoustic impedance) being the most prominent.


Figure 2. Results from PCA for first eigenvector in a seismic attribute data set

Utilizing a cutoff of 60% in this example, attributes were selected from the PCA for input to the neural network classification. For the Niobrara, eight instantaneous attributes from four of the first six eigenvectors were chosen and are shown in Table 1. The PCA allowed identification of the most significant attributes from an initial group of 19 attributes.


Table 1: Results from PCA for Niobrara Interval shows which instantaneous attributes will be used in a Self-Organizing Map (SOM).
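One plausible reading of the 60% cutoff described above is to keep, within each prominent eigenvector, any attribute whose loading is at least 60% of that eigenvector's largest loading. The sketch below illustrates that rule; the attribute names and loading values are hypothetical and are not the Table 1 results.

```python
# Illustrative attribute selection within one eigenvector: keep any attribute
# whose absolute loading is at least 60% of the largest absolute loading.
# (Names and numbers are examples only, not the actual PCA output.)
import numpy as np

attribute_names = ["sweetness", "envelope", "relative acoustic impedance",
                   "instantaneous frequency", "thin bed indicator"]
loadings = np.array([0.55, 0.48, 0.42, 0.18, 0.12])   # loadings for one eigenvector

cutoff = 0.60 * np.abs(loadings).max()
selected = [name for name, w in zip(attribute_names, loadings) if abs(w) >= cutoff]
print(selected)   # ['sweetness', 'envelope', 'relative acoustic impedance']
```

Repeating this over the first several eigenvectors and pooling the survivors yields the kind of attribute short list shown in Table 1.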

Self-Organizing Maps

Teuvo Kohonen, a Finnish mathematician, invented the concepts of Self-Organizing Maps (SOM) in 1982 (Kohonen, T., 2001). Self-Organizing Maps employ the use of unsupervised neural networks to reduce very high dimensions of data to a classification volume that can be easily visualized (Roden and others, 2015). Another important aspect of SOMs is that every seismic sample is used as input to classification as opposed to wavelet-based classification.

Figure 3 diagrams the SOM concept for 10 attributes derived from a 3D seismic amplitude volume. Within the 3D seismic survey, samples are first organized into attribute points with similar properties called natural clusters in attribute space. Within each cluster, new, empty, multi-attribute samples, named neurons, are introduced. The SOM neurons will seek out natural clusters of like characteristics in the seismic data and produce a 2D mesh that can be illustrated with a two-dimensional color map. In other words, the neurons “learn” the characteristics of a data cluster through an iterative process (epochs) of cooperative then competitive training. When the learning is completed, each unique cluster is assigned to a neuron number and each seismic sample is now classified (Smith, 2016).


Figure 3. Illustration of the concept of a Self-Organizing Map
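The following is a compact NumPy sketch of the SOM idea just described: an 8x8 mesh of neurons competes for each multi-attribute sample (the winner is the best-matching unit) and cooperates through a shrinking neighborhood. It is only an illustration of the algorithm under simple assumptions, not the Paradise implementation.

```python
# Minimal self-organizing map sketch (NumPy only): an 8x8 neuron mesh trained
# on an (n_samples, n_attributes) array of multi-attribute seismic samples.
import numpy as np

def train_som(samples, rows=8, cols=8, epochs=10, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    n_attr = samples.shape[1]
    weights = rng.normal(size=(rows, cols, n_attr))          # neuron weight vectors
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)      # neuron (row, col) positions

    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                     # decaying learning rate
        sigma = sigma0 * (1.0 - epoch / epochs) + 0.5         # shrinking neighborhood
        for x in samples[rng.permutation(len(samples))]:
            # competitive step: find the best-matching unit (BMU)
            bmu = np.unravel_index(
                np.argmin(np.linalg.norm(weights - x, axis=-1)), (rows, cols))
            # cooperative step: pull the BMU and its neighbors toward the sample
            d_grid = np.linalg.norm(grid - np.array(bmu), axis=-1)
            h = np.exp(-d_grid ** 2 / (2.0 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights

def classify(samples, weights):
    # assign every seismic sample its winning neuron number (0 .. rows*cols - 1)
    d = np.linalg.norm(weights[None, ...] - samples[:, None, None, :], axis=-1)
    return d.reshape(len(samples), -1).argmin(axis=1)
```

The neuron numbers returned by `classify` are what a 2D color map turns into the classified volumes shown in the examples below.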

Figures 4 and 5 show a simple example using two attributes, amplitude and the Hilbert transform, on a synthetic example. Synthetic reflection coefficients are convolved with a simple wavelet, 100 traces are created, and noise is added. When the attributes are cross plotted, clusters of points can be seen in the cross plot. The colored cross plot shows the attributes after SOM classification into 64 neurons with random colors assigned. In Figure 5, the individual clusters are identified and mapped back to the events on the synthetic. The SOM has correctly distinguished each event in the synthetic.


Figure 4. Two attribute synthetic example of a Self-Organizing Map. The amplitude and Hilbert transform are cross plotted. The colored cross plot shows the attributes after classification into 64 neurons by SOM.


Figure 5. Synthetic SOM example with neurons identified by number and mapped back to the original synthetic data
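A sketch of how such a two-attribute synthetic might be constructed is shown below. The wavelet, reflection coefficients, and noise level are assumptions for illustration (the originals are not specified); SciPy's analytic signal supplies the Hilbert transform as the second attribute.

```python
# Build a simple two-attribute synthetic: sparse reflection coefficients
# convolved with a Ricker wavelet, noise added, and the Hilbert transform
# computed so amplitude vs. Hilbert can be cross plotted or fed to a SOM.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
dt, n_samp, n_traces = 0.001, 201, 100                 # 1 ms sampling, 100 traces

# assumed 30 Hz Ricker wavelet
f0 = 30.0
t = np.arange(-0.05, 0.05 + dt, dt)
wavelet = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)

# sparse reflection-coefficient series, identical events on every trace
rc = np.zeros(n_samp)
rc[[40, 80, 120, 160]] = [0.8, -0.5, 0.6, -0.7]

traces = np.array([np.convolve(rc, wavelet, mode="same")
                   + 0.02 * rng.normal(size=n_samp) for _ in range(n_traces)])

amplitude = traces                                      # attribute 1: amplitude
hilb = np.imag(hilbert(traces, axis=1))                 # attribute 2: Hilbert transform

# stack the two attributes into samples ready for cross plotting or SOM training
samples = np.column_stack([amplitude.ravel(), hilb.ravel()])
```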

Results for Niobrara and Eagle Ford

In 2018, Geophysical Insights conducted a proof of concept on 100 square miles of multi-client 3D data jointly owned by Geophysical Pursuit, Inc. (GPI) and Fairfield Geotechnologies (FFG) in the Denver-Julesburg Basin (DJ). The purpose of the study was to evaluate the effectiveness of a machine learning workflow to improve resolution within the reservoir intervals of the Niobrara and Codell formations, the primary targets for development in this portion of the basin. An amplitude volume was resampled from 2 ms to 1 ms and, along with horizons, loaded into the Paradise® machine learning application, and attributes were generated. PCA was used to identify which attributes were most significant in the data, and these were used in a SOM to evaluate the interval Top Niobrara to Greenhorn (Laudon and others, 2019).

Figure 6 shows the results of an 8X8 SOM classification of eight instantaneous attributes over the Niobrara interval along with the original amplitude data. Figure 7 shows the same result with a well composite focused on the B chalk, the best section of the reservoir, which is difficult to resolve with individual seismic attributes. The SOM classification has resolved the chalk bench as well as other stratigraphic features within the interval.


Figure 6. North-South Inline showing the original amplitude data (upper) and the 8X8 SOM result (lower) from Top Niobrara through Greenhorn horizons. Seismic data is shown courtesy of GPI and FFG.


Figure 7. 8X8 Instantaneous SOM through Rotharmel 11-33 with well log composite. The B bench, highlighted in green on the wellbore, ties the yellow-red-yellow sequence of neurons. Seismic data is shown courtesy of GPI and FFG.

 


Figure 8. 8X8 SOM results through the Eagle Ford. The primary target, the Lower Eagle Ford shale, had 16 neuron classes over 14-29 milliseconds of data. Seismic data shown courtesy of Seitel.

The results shown in Figure 9 reveal non-layer cake facies bands that include details in the Eagle Ford’s basal clay-rich shale, high resistivity and low resistivity Eagle Ford shale objectives, the Eagle Ford ash, and the upper Eagle Ford marl, which are overlain disconformably by the Austin Chalk.


Figure 9. Eagle Ford SOM classification shown with well results. The SOM resolves a high resistivity interval, overlain by a thin ash layer and finally a low resistivity layer. The SOM also resolves complex 3-dimensional relationships between these facies.

Convolutional Neural Networks (CNN)

A promising development in machine learning is supervised classification via the application of convolutional neural networks (CNNs). Supervised methods have, in the past, not been efficient due to the laborious task of training the neural network. CNN is a deep learning approach to seismic classification, and we apply it here to fault detection on seismic data. The examples that follow show CNN fault detection results which did not require any interpreter-picked faults for training; rather, the network was trained using synthetic data. Two results are shown, one from the North Sea (Figure 10) and one from the Great South Basin, New Zealand (Figure 11).


Figure 10. Side by side comparison of coherence attribute to CNN fault probability attribute, North Sea


Figure 11. Comparison of Coherence to CNN fault probability attribute, New Zealand
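For readers who want a sense of the kind of network involved, here is a minimal PyTorch sketch of a small 3D CNN that maps an amplitude patch to a voxel-wise fault probability and trains on synthetic labels. The architecture, patch size, and random training data are placeholders; this is not the network used to produce Figures 10 and 11.

```python
# Illustrative 3D CNN for voxel-wise fault probability (PyTorch).
import torch
import torch.nn as nn

class FaultNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),            # one logit per voxel
        )

    def forward(self, x):                               # x: (batch, 1, z, inline, xline)
        return torch.sigmoid(self.net(x))               # fault probability per voxel

# synthetic stand-ins for amplitude patches and binary fault labels
amp = torch.randn(4, 1, 32, 32, 32)
labels = (torch.rand(4, 1, 32, 32, 32) > 0.95).float()

model = FaultNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(10):                                  # tiny training loop for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(amp), labels)
    loss.backward()
    optimizer.step()
```

In practice the training volumes would be synthetically faulted seismic models with known fault masks, which is what allows the approach to avoid interpreter-picked training data.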

Conclusions

Advances in compute power and algorithms are making the use of machine learning available on the desktop to seismic interpreters to augment their interpretation workflow. Taking advantage of today’s computing technology, visualization techniques, and an understanding of machine learning as applied to seismic data, PCA combined with SOMs efficiently distill multiple seismic attributes into classification volumes. When applied on a multi-attribute seismic sample basis, SOM is a powerful nonlinear cluster analysis and pattern recognition machine learning approach that helps interpreters identify geologic patterns in the data and has been able to reveal stratigraphy well below conventional tuning thickness.

In the fault interpretation domain, recent development of a Convolutional Neural Network that works directly on amplitude data shows promise to efficiently create fault probability volumes without the requirement of a labor-intensive training effort.

References

Coleou, T., M. Poupon, and A. Kostia, 2003, Unsupervised seismic facies classification: A review and comparison of techniques and implementation: The Leading Edge, 22, 942–953, doi: 10.1190/1.1623635.

Guo, H., K. J. Marfurt, and J. Liu, 2009, Principal component spectral analysis: Geophysics, 74, no. 4, 35–43.

Haykin, S., 2009, Neural networks and learning machines, 3rd ed.: Pearson.

Kohonen, T., 2001, Self-organizing maps: Third extended edition, Springer, Series in Information Sciences, Vol. 30.

Laudon, C., Stanley, S., and Santogrossi, P., 2019, Machine Learning Applied to 3D Seismic Data from the Denver-Julesburg Basin Improves Stratigraphic Resolution in the Niobrara, URTeC 337, in press.

Roden, R., and Santogrossi, P., 2017, Significant Advancements in Seismic Reservoir Characterization with Machine Learning, The First, v. 3, p. 14-19

Roden, R., Smith, T., and Sacrey, D., 2015, Geologic pattern recognition from seismic attributes: Principal component analysis and self-organizing maps, Interpretation, Vol. 3, No. 4, p. SAE59-SAE83.

Santogrossi, P., 2017, Classification/Corroboration of Facies Architecture in the Eagle Ford Group: A Case Study in Thin Bed Resolution, URTeC 2696775, doi: 10.15530/urtec-2017-2696775.