Applications of Machine Learning for Geoscientists – Permian Basin

By Carrie Laudon
Published with permission: Permian Basin Geophysical Society 60th Annual Exploration Meeting
May 2019

Abstract

Over the last few years, because of the increase in low-cost computer power, individuals and companies have stepped up investigations into the use of machine learning in many areas of E&P. For the geosciences, the emphasis has been on reservoir characterization, seismic data processing, and, to a lesser extent, interpretation. The benefits of using machine learning (whether supervised or unsupervised) have been demonstrated throughout the literature, and yet the technology is still not a standard workflow for most seismic interpreters. This lack of uptake can be attributed to several factors, including a lack of software tools, clear and well-defined case histories, and training. Fortunately, all these factors are being mitigated as the technology matures. Rather than looking at machine learning as an adjunct to the traditional interpretation methodology, machine learning techniques should be considered the first step in the interpretation workflow.

By using statistical tools such as Principal Component Analysis (PCA) and Self-Organizing Maps (SOM), a multi-attribute 3D seismic volume can be “classified”. The PCA reduces a large set of seismic attributes, both instantaneous and geometric, to those that are the most meaningful. The output of the PCA serves as the input to the SOM, a form of unsupervised neural network, which, when combined with a 2D color map, facilitates the identification of clustering within the data volume. When the correct “recipe” is selected, the clustered or classified volume allows the interpreter to view and separate geological and geophysical features that are not observable in traditional seismic amplitude volumes. Seismic facies, detailed stratigraphy, direct hydrocarbon indicators, faulting trends, and thin beds are all features that can be enhanced by using a classified volume.

The tuning-bed thickness, or vertical resolution, of seismic data traditionally is based on the frequency content of the data and the associated wavelet. Seismic interpretation of thin beds routinely involves estimation of tuning thickness and the subsequent scaling of amplitude or inversion information below tuning. These traditional below-tuning-thickness estimation approaches have limitations and require assumptions that limit accuracy. Below-tuning effects are a result of the interference of wavelets, which is a function of the geology as it changes vertically and laterally. Numerous instantaneous attributes exhibit effects at and below tuning, but these are seldom incorporated in thin-bed analyses. A seismic multi-attribute approach employs self-organizing maps to identify natural clusters from combinations of attributes that exhibit below-tuning effects. These results may exhibit changes as thin as a single sample interval in thickness. Self-organizing maps employed in this fashion analyze associated seismic attributes on a sample-by-sample basis and identify the natural patterns or clusters produced by thin beds. Examples of this approach to improve stratigraphic resolution in both the Eagle Ford play and the Niobrara reservoir of the Denver-Julesburg Basin will be used to illustrate the workflow.

Introduction

Seismic multi-attribute analysis has always held the promise of improving interpretations via the integration of attributes which respond to subsurface conditions such as stratigraphy, lithology, faulting, fracturing, fluids, pressure, etc. The benefits of using machine learning (whether supervised or unsupervised) have been demonstrated throughout the literature, and yet the technology is still not a standard workflow for most seismic interpreters. This lack of uptake can be attributed to several factors, including a lack of software tools, clear and well-defined case histories, and training. This paper focuses on an unsupervised machine learning workflow utilizing Self-Organizing Maps (Kohonen, 2001) in combination with Principal Component Analysis to produce classified seismic volumes from multiple instantaneous attribute volumes. The workflow addresses several significant issues in seismic interpretation: it analyzes large amounts of data simultaneously; it determines relationships between different types of data; it is sample-based and produces high-resolution results; and it reveals geologic features that are difficult to see with conventional approaches.

Principal Component Analysis (PCA)

Multi-dimensional analysis and multi-attribute analysis go hand in hand. Because individuals are grounded in three-dimensional space, it is difficult to visualize what data look like in a higher-dimensional space. Fortunately, mathematics doesn’t have this limitation, and the results can be easily understood with conventional 2D and 3D viewers.

Working with multiple instantaneous or geometric seismic attributes generates tremendous volumes of data. These volumes contain huge numbers of data points which may be highly continuous, greatly redundant, and/or noisy (Coleou et al., 2003). Principal Component Analysis (PCA) is a linear technique for data reduction which maintains the variation associated with the larger data sets (Guo and others, 2009; Haykin, 2009; Roden and others, 2015). PCA can separate attribute types by frequency, distribution, and even character. PCA technology is used to determine which attributes may be ignored due to their very low impact on neural network solutions and which attributes are most prominent in the data. Figure 1 illustrates the analysis of a data cluster in two directions, offset by 90 degrees. The first principal component (eigenvector 1) analyzes the data cluster along its longest axis. The second principal component (eigenvector 2) analyzes the data cluster variations perpendicular to the first principal component. As stated in the diagram, each eigenvector is associated with an eigenvalue, which shows how much variance there is in the data along that direction.


Figure 1. Two attribute data set illustrating the concept of PCA
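
To make the concept in Figure 1 concrete, the short Python sketch below builds a synthetic two-attribute data cluster and computes its eigenvectors and eigenvalues directly from the covariance matrix. The attribute names, correlation, and sample count are illustrative assumptions, not data from this study.

```python
# Minimal sketch: eigenvectors and eigenvalues of a two-attribute data cluster,
# illustrating the PCA concept in Figure 1. The attribute values are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Two correlated seismic attributes (hypothetical names), 5,000 samples
n = 5000
attr1 = rng.normal(0.0, 1.0, n)                 # e.g., "envelope"
attr2 = 0.8 * attr1 + rng.normal(0.0, 0.4, n)   # e.g., "sweetness"
X = np.column_stack([attr1, attr2])

# Center the data and compute the covariance matrix
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)

# Eigen-decomposition: eigenvectors are the principal directions,
# eigenvalues measure the variance along each direction
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]               # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("First principal component (longest axis):", eigvecs[:, 0])
print("Fraction of variance per component:", eigvals / eigvals.sum())
```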

The next step in PCA analysis is to review the eigen spectrum to select the most prominent attributes in a data set. The following example is taken from a suite of instantaneous attributes over the Niobrara Formation within the Denver-Julesburg Basin. Results for eigenvector 1 are shown in Figure 2, with three attributes, sweetness, envelope, and relative acoustic impedance, being the most prominent.


Figure 2. Results from PCA for first eigenvector in a seismic attribute data set

Utilizing a cutoff of 60% in this example, attributes were selected from PCA for input to the neural network classification. For the Niobrara, eight instantaneous attributes from four of the first six eigenvectors were chosen and are shown in Table 1. The PCA allowed identification of the most significant attributes from an initial group of 19 attributes.


Table 1: Results from PCA for Niobrara Interval shows which instantaneous attributes will be used in a Self-Organizing Map (SOM).
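
One way to picture the cutoff-based selection described above is sketched below in Python. The selection rule (keep attributes whose contribution is at least 60 percent of the largest contribution in a prominent eigenvector), the attribute names, and the loading values are illustrative assumptions; the actual selection was made from the PCA results summarized in Table 1.

```python
# Hypothetical sketch of one possible attribute-selection rule after PCA:
# within a prominent eigenvector, keep attributes whose contribution is at
# least 60% of that eigenvector's largest contribution. Names and values are
# illustrative only.
import numpy as np

attributes = np.array([
    "sweetness", "envelope", "relative acoustic impedance",
    "instantaneous frequency", "thin bed indicator", "instantaneous phase",
])

# Absolute attribute contributions (loadings) for one eigenvector (made up)
loadings = np.array([0.92, 0.88, 0.75, 0.31, 0.22, 0.10])

cutoff = 0.60
selected = attributes[loadings >= cutoff * loadings.max()]
print("Attributes passed to the SOM:", list(selected))
```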

Self-Organizing Maps

Teuvo Kohonen, a Finnish mathematician, invented the concept of Self-Organizing Maps (SOM) in 1982 (Kohonen, 2001). Self-Organizing Maps employ unsupervised neural networks to reduce very high dimensions of data to a classification volume that can be easily visualized (Roden and others, 2015). Another important aspect of SOMs is that every seismic sample is used as input to the classification, as opposed to wavelet-based classification.

Figure 3 diagrams the SOM concept for 10 attributes derived from a 3D seismic amplitude volume. Within the 3D seismic survey, samples are first organized into attribute points with similar properties, called natural clusters, in attribute space. Within each cluster, new, empty, multi-attribute samples, named neurons, are introduced. The SOM neurons seek out natural clusters of like characteristics in the seismic data and produce a 2D mesh that can be illustrated with a two-dimensional color map. In other words, the neurons “learn” the characteristics of a data cluster through an iterative process (epochs) of cooperative, then competitive, training. When the learning is completed, each unique cluster is assigned to a neuron number and each seismic sample is classified (Smith, 2016).


Figure 3. Illustration of the concept of a Self-Organizing Map
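
For readers who want to see the cooperative and competitive steps written out, the following Python sketch implements a toy SOM trainer and classifier with NumPy. It is a simplified illustration of the algorithm described above, not the implementation used to produce the results in this paper; the grid size, learning rate, and neighborhood schedule are arbitrary defaults.

```python
# Minimal NumPy sketch of SOM training on multi-attribute seismic samples.
# A toy illustration of the iterative cooperative/competitive learning
# described above, not the software used in this study.
import numpy as np

def train_som(samples, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_attr = samples.shape[1]
    # Neuron weights: one multi-attribute prototype per node of the 2D mesh
    weights = rng.normal(size=(rows, cols, n_attr))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs)    # shrinking neighborhood
        for x in samples[rng.permutation(len(samples))]:
            # Competitive step: find the best-matching (winning) neuron
            d = np.linalg.norm(weights - x, axis=-1)
            winner = np.unravel_index(np.argmin(d), d.shape)
            # Cooperative step: pull the winner and its neighbors toward the sample
            h = np.exp(-np.sum((grid - np.array(winner)) ** 2, axis=-1) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

def classify(samples, weights):
    """Assign each multi-attribute sample to its winning neuron number."""
    d = np.linalg.norm(weights[None] - samples[:, None, None, :], axis=-1)
    return d.reshape(len(samples), -1).argmin(axis=1)   # 0 .. rows*cols - 1
```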

Figures 4 and 5 show a simple synthetic example using two attributes, amplitude and the Hilbert transform. Synthetic reflection coefficients are convolved with a simple wavelet, 100 traces are created, and noise is added. When the attributes are cross plotted, clusters of points can be seen. The colored cross plot shows the attributes after SOM classification into 64 neurons with random colors assigned. In Figure 5, the individual clusters are identified and mapped back to the events on the synthetic. The SOM has correctly distinguished each event in the synthetic.


Figure 4. Two attribute synthetic example of a Self-Organizing Map. The amplitude and Hilbert transform are cross plotted. The colored cross plot shows the attributes after classification into 64 neurons by SOM.


Figure 5. Synthetic SOM example with neurons identified by number and mapped back to the original synthetic data
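
The synthetic experiment in Figures 4 and 5 can be reproduced in outline with the sketch below: reflection coefficients are convolved with a simple wavelet, noise is added over 100 traces, and amplitude and Hilbert-transform attributes are extracted sample by sample. The wavelet frequency, reflector positions, and noise level are assumed values chosen only for illustration.

```python
# Sketch of the two-attribute synthetic in Figures 4 and 5: reflection
# coefficients convolved with a simple wavelet, noise added, then amplitude
# and Hilbert-transform attributes gathered per sample. Parameters are illustrative.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)

# 1) Sparse synthetic reflection-coefficient series (one trace, 1 ms sampling)
dt, nt = 0.001, 500
rc = np.zeros(nt)
rc[[100, 180, 260, 340, 420]] = [0.3, -0.25, 0.2, -0.15, 0.1]

# 2) Simple zero-phase Ricker wavelet, then convolve and add noise
f0 = 30.0                                   # dominant frequency in Hz (assumed)
t = np.arange(-0.05, 0.05, dt)
wavelet = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)

traces = []
for _ in range(100):                        # 100 noisy realizations of the trace
    trace = np.convolve(rc, wavelet, mode="same")
    traces.append(trace + rng.normal(0.0, 0.02, nt))
traces = np.array(traces)

# 3) Two attributes per seismic sample: amplitude and its Hilbert transform
amp = traces
hilb = np.imag(hilbert(traces, axis=1))

# Each sample becomes one point in two-dimensional attribute space, ready for
# cross plotting or classification (e.g., with the train_som sketch above)
samples = np.column_stack([amp.ravel(), hilb.ravel()])
print(samples.shape)                        # (100 traces x 500 samples, 2 attributes)
```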

Results for Niobrara and Eagle Ford

In 2018, Geophysical Insights conducted a proof of concept on 100 square miles of multi-client 3D data jointly owned by Geophysical Pursuit, Inc. (GPI) and Fairfield Geotechnologies (FFG) in the Denver-Julesburg (DJ) Basin. The purpose of the study was to evaluate the effectiveness of a machine learning workflow to improve resolution within the reservoir intervals of the Niobrara and Codell formations, the primary targets for development in this portion of the basin. An amplitude volume was resampled from 2 ms to 1 ms and, along with horizons, loaded into the Paradise® machine learning application, where attributes were generated. PCA was used to identify which attributes were most significant in the data, and these were used in a SOM to evaluate the interval from the Top Niobrara to the Greenhorn (Laudon and others, 2019).
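
Two of the preparation steps mentioned above, resampling from 2 ms to 1 ms and generating an instantaneous attribute, are sketched below with SciPy. The trace itself is a random stand-in; the study performed these steps inside Paradise.

```python
# Hedged, minimal sketch of two preprocessing steps: resampling a trace from
# 2 ms to 1 ms and computing one instantaneous attribute (envelope).
# Illustration only; not the Paradise implementation.
import numpy as np
from scipy.signal import resample, hilbert

rng = np.random.default_rng(1)
trace_2ms = rng.normal(size=1500)            # stand-in amplitude trace, 3 s at 2 ms

# Fourier-domain resampling from 2 ms to 1 ms doubles the sample count
trace_1ms = resample(trace_2ms, 2 * len(trace_2ms))

# Instantaneous envelope, one of the attributes screened by PCA
envelope = np.abs(hilbert(trace_1ms))
print(len(trace_2ms), "->", len(trace_1ms), "samples; envelope max:", envelope.max())
```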

Figure 6 shows the results of an 8X8 SOM classification of eight instantaneous attributes over the Niobrara interval, along with the original amplitude data. Figure 7 shows the same results with a well composite focused on the B chalk, the best section of the reservoir, which is difficult to resolve with individual seismic attributes. The SOM classification has resolved the chalk bench as well as other stratigraphic features within the interval.


Figure 6. North-South Inline showing the original amplitude data (upper) and the 8X8 SOM result (lower) from Top Niobrara through Greenhorn horizons. Seismic data is shown courtesy of GPI and FFG.


Figure 7. 8X8 Instantaneous SOM through Rotharmel 11-33 with well log composite. The B bench, highlighted in green on the wellbore, ties the yellow-red-yellow sequence of neurons. Seismic data is shown courtesy of GPI and FFG.


Figure 8. 8X8 SOM results through the Eagle Ford. The primary target, the Lower Eagle Ford shale, had 16 neuron classes over 14-29 milliseconds of data. Seismic data shown courtesy of Seitel.

The results shown in Figure 9 reveal non-layer-cake facies bands that include details in the Eagle Ford’s basal clay-rich shale, the high-resistivity and low-resistivity Eagle Ford shale objectives, the Eagle Ford ash, and the upper Eagle Ford marl, which are overlain disconformably by the Austin Chalk.


Figure 9. Eagle Ford SOM classification shown with well results. The SOM resolves a high-resistivity interval, overlain by a thin ash layer and finally a low-resistivity layer. The SOM also resolves complex three-dimensional relationships between these facies.

Convolutional Neural Networks (CNN)

A promising development in machine learning is supervised classification via the application of convolutional neural networks (CNNs). Supervised methods have, in the past, not been efficient because of the laborious task of training the neural network. A CNN is a deep learning approach to seismic classification. Here, CNN is applied to fault detection on seismic data. The examples that follow show CNN fault detection results that did not require any interpreter-picked faults for training; rather, the network was trained using synthetic data. Two results are shown, one from the North Sea, Figure 10, and one from the Great South Basin, New Zealand, Figure 11.


Figure 10. Side by side comparison of coherence attribute to CNN fault probability attribute, North Sea


Figure 11. Side by side comparison of coherence attribute to CNN fault probability attribute, Great South Basin, New Zealand
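
As a rough illustration of the kind of network described above, the sketch below defines a small Keras encoder-decoder that maps an amplitude patch to a fault-probability patch and trains it on synthetic labels. The library choice, architecture, patch size, and training data are assumptions for illustration only; this is not the network that produced Figures 10 and 11.

```python
# Minimal Keras sketch of a patch-based fault-probability CNN trained on
# synthetic labels, so no interpreter-picked faults are required.
# Architecture and data are illustrative placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_fault_cnn(patch=64):
    """Tiny encoder-decoder mapping an amplitude patch to per-sample fault probability."""
    inputs = layers.Input(shape=(patch, patch, 1))
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)   # fault probability per sample
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Stand-in synthetic amplitude patches and fault masks for a single training step
model = build_fault_cnn()
synthetic_patches = np.random.rand(8, 64, 64, 1).astype("float32")
synthetic_masks = (np.random.rand(8, 64, 64, 1) > 0.95).astype("float32")
model.fit(synthetic_patches, synthetic_masks, epochs=1, verbose=0)
```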

Conclusions

Advances in compute power and algorithms are making machine learning available on the desktop for seismic interpreters to augment their interpretation workflows. Taking advantage of today’s computing technology, visualization techniques, and an understanding of machine learning as applied to seismic data, PCA combined with SOMs efficiently distills multiple seismic attributes into classification volumes. When applied on a multi-attribute seismic sample basis, SOM is a powerful nonlinear cluster analysis and pattern recognition machine learning approach that helps interpreters identify geologic patterns in the data and has been able to reveal stratigraphy well below conventional tuning thickness.

In the fault interpretation domain, recent development of a Convolutional Neural Network that works directly on amplitude data shows promise to efficiently create fault probability volumes without the requirement of a labor-intensive training effort.

References

Coleou, T., M. Poupon, and A. Kostia, 2003, Unsupervised seismic facies classification: A review and comparison of techniques and implementation: The Leading Edge, 22, 942–953, doi: 10.1190/1.1623635.

Guo, H., K. J. Marfurt, and J. Liu, 2009, Principal component spectral analysis: Geophysics, 74, no. 4, 35–43.

Haykin, S., 2009, Neural networks and learning machines, 3rd ed.: Pearson.

Kohonen, T., 2001, Self-organizing maps: Third extended edition, Springer, Series in Information Sciences, Vol. 30.

Laudon, C., Stanley, S., and Santogrossi, P., 2019, Machine Learning Applied to 3D Seismic Data from the Denver-Julesburg Basin Improves Stratigraphic Resolution in the Niobrara, URTeC 337, in press.

Roden, R., and Santogrossi, P., 2017, Significant Advancements in Seismic Reservoir Characterization with Machine Learning, The First, v. 3, p. 14-19.

Roden, R., Smith, T., and Sacrey, D., 2015, Geologic pattern recognition from seismic attributes: Principal component analysis and self-organizing maps, Interpretation, Vol. 3, No. 4, p. SAE59-SAE83.

Santogrossi, P., 2017, Classification/Corroboration of Facies Architecture in the Eagle Ford Group: A Case Study in Thin Bed Resolution, URTeC 2696775, doi: 10.15530/urtec-2017-2696775.

Approach Aids Multiattribute Analysis

By: Rocky Roden, Geophysical Insights, and Deborah Sacrey, Auburn Energy
Published with permission: American Oil and Gas Reporter
September 2015

Seismic attributes, which are any measurable properties of seismic data, aid interpreters in identifying geologic features that are not understood clearly in the original data. However, the enormous amount of information generated from seismic attributes, and the difficulty of understanding how these attributes, when combined, define geology, require another approach in the interpretation workflow.

To address these issues, “machine learning” to evaluate seismic attributes has evolved over the last few years. Machine learning uses computer algorithms that learn iteratively from the data and adapt independently to produce reliable, repeatable results. Applying current computing technology and visualization techniques, machine learning addresses two significant issues in seismic interpretation:

• The big data problem of trying to interpret dozens, if not hundreds, of volumes of data; and

• The fact that humans cannot understand the relationship of several types of data all at once.

Principal component analysis (PCA) and self-organizing maps (SOMs) are machine learning approaches that when applied to seismic multiattribute analysis are producing results that reveal geologic features not previously identified or easily interpreted. Applying principal component analysis can help interpreters identify seismic attributes that show the most variance in the data for a given geologic setting, which helps determine which attributes to use in a multiattribute analysis using self-organizing maps. SOM analysis enables interpreters to identify the natural organizational patterns in the data from multiple seismic attributes.

Multiple-attribute analyses are beneficial when single attributes are indistinct. These natural patterns or clusters represent geologic information embedded in the data and can help identify geologic features, geobodies, and aspects of geology that often cannot be interpreted by any other means. SOM evaluations have proven to be beneficial in essentially all geologic settings, including unconventional resource plays, moderately compacted onshore regions, and offshore unconsolidated sediments.

This indicates the appropriate seismic attributes to employ in any SOM evaluation should be based on the interpretation problem to be solved and the associated geologic setting. Applying PCA and SOM can not only identify geologic patterns not seen previously in the seismic data, it also can increase or decrease confidence in features already interpreted. In other words, this multiattribute approach provides a methodology to produce a more accurate risk assessment of a geoscientist’s interpretation and may represent the next generation of advanced interpretation.

Seismic Attributes

A seismic attribute can be defined as any measure of the data that helps to visually enhance or quantify features of interpretation interest. There are hundreds of types of attributes, but Table 1 shows a composite list of seismic attributes and associated categories routinely employed in seismic interpretation. Interpreters wrestle continuously with evaluating the numerous seismic attribute volumes, including visually co-blending two or three attributes and even generating attributes from other attributes in an effort to better interpret their data.

This is where machine learning approaches such as PCA and SOM can help interpreters evaluate their data more efficiently, and help them understand the relationships between numerous seismic attributes to produce more accurate results.

Principal Component Analysis

Principal component analysis is a linear mathematical technique for reducing a large set of seismic attributes to a small set that still contains most of the variation in the large set. In other words, PCA is a good approach for identifying the combination of the most meaningful seismic attributes generated from an original volume.


Results from Principal Component Analysis in Paradise® utilizing 18 instantaneous seismic attributes are shown here. 1A shows histograms of the highest eigenvalues for in-lines in the seismic 3-D volume, with red histograms representing eigenvalues over the field. 1B shows the average of eigenvalues over the field (red), with the first principal component in orange and associated seismic attribute contributions to the right. 1C shows the second principal component over the field with the seismic attribute contributions to the right. The top five attributes in 1B were run in SOM A and the top four attributes in 1C were run in SOM B.

The first principal component accounts for as much of the variability in the data as possible, and each succeeding component (orthogonal to each preceding component) accounts for as much of the remaining variability. Given a set of seismic attributes generated from the same original volume, PCA can identify the attributes producing the largest variability in the data, suggesting these combinations of attributes will better identify specific geologic features of interest.

Even though the first principal component represents the largest linear attribute combinations best representing the variability of the bulk of the data, it may not identify specific features of interest. The interpreter should evaluate succeeding principal components also because they may be associated with other important aspects of the data and geologic features not identified with the first principal component.
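
The point about succeeding principal components can be examined directly with scikit-learn, as sketched below. The attribute matrix is a random stand-in for a windowed set of 18 instantaneous attributes; the standardization step and the choice to inspect the top five loadings are illustrative assumptions, not the workflow used for the figures in this article.

```python
# Sketch of inspecting the first and succeeding principal components with
# scikit-learn. The attribute matrix is a random stand-in for a window of
# 18 instantaneous attributes extracted around a mapped horizon.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_attributes = 20000, 18
X = rng.normal(size=(n_samples, n_attributes))        # stand-in attribute samples

# Standardize so no single attribute dominates purely by amplitude scale
Xs = StandardScaler().fit_transform(X)

pca = PCA().fit(Xs)
print("Variance explained by each component:", pca.explained_variance_ratio_)

# Attribute contributions (loadings) to the first and second principal components;
# the second component can highlight features the first does not.
for k in (0, 1):
    loadings = np.abs(pca.components_[k])
    top = np.argsort(loadings)[::-1][:5]
    print(f"Top attribute indices for PC{k + 1}:", top)
```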

In other words, PCA is a tool that, when employed in an interpretation workflow, can give direction to meaningful seismic attributes and improve interpretation results. It is logical, therefore, that a PCA evaluation may provide important information on appropriate seismic attributes to take into generating a self-organizing map.

Self-Organizing Maps

The next level of interpretation requires pattern recognition and classification of the often subtle information embedded in the seismic attributes. Taking advantage of today’s computing technology, visualization techniques and understanding of appropriate parameters, self-organizing maps distill multiple seismic attributes efficiently into classification and probability volumes. SOM is a powerful non-linear cluster analysis and pattern recognition approach that helps interpreters identify patterns in their data that can relate to desired geologic characteristics such as those listed in Table 1.

Seismic data contain huge numbers of samples and are highly continuous, greatly redundant and significantly noisy. The tremendous number of samples from numerous seismic attributes exhibits significant organizational structure in the midst of noise. SOM analysis identifies these natural organizational structures in the form of clusters. These clusters reveal significant information about the classification structure of natural groups that is difficult to view any other way. The natural groups and patterns in the data identified by clusters reveal the geology and aspects of the data that are difficult to interpret otherwise.

Offshore Case Study


This shows SOM A results from Paradise on a north-south in-line through the field. 2A shows the original stacked amplitude. 2B shows SOM results with the associated five-by-five color map displaying all 25 neurons. 2C shows SOM results with four neurons selected that isolate attenuation effects.


SOM B results from Paradise are shown on the same in-line as Figure 2. 3A is the original stacked amplitude. 3B shows SOM results with the associated five-by-five color map. 3C is the SOM results with a color map showing two neurons that highlight flat spots in the data.


A case study is provided by a lease located in the Gulf of Mexico offshore Louisiana in 470 feet of water. This shallow field (approximately 3,900 feet) has two producing wells that were drilled on the upthrown side of an east-west trending normal fault and into an amplitude anomaly identified on the available 3-D seismic data. The normally pressured reservoir is approximately 100 feet thick and is located in a typical “bright spot” setting, i.e. a Class 3 AVO geologic setting (Rutherford and Williams, 1989).

The goal of this multiattribute analysis is to more clearly identify possible direct hydrocarbon indicator characteristics such as flat spots (hydrocarbon contacts) and attenuation effects and to better understand the reservoir and provide important approaches for decreasing the risk of future exploration in the area.

Initially, 18 instantaneous seismic attributes were generated from the 3-D data in the area. These were put into a PCA evaluation to determine which produced the largest variation in the data and the most meaningful attributes for SOM analysis.

The PCA was computed in a window 20 milliseconds above and 150 milliseconds below the mapped top of the reservoir over the entire survey, which encompassed approximately 10 square miles. Each bar in Figure 1A represents the highest eigenvalue on its associated in-line over the portion of the survey displayed.

An eigenvalue shows how much variance there is in its associated eigenvector, and an eigenvector is a direction showing the spread in the data. The red bars in Figure 1A specifically denote the in-lines that cover the areal extent of the amplitude feature, and the average of their eigenvalue results are displayed in Figures 1B and 1C.
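
A simplified version of the per-in-line eigenvalue calculation behind Figure 1A is sketched below. The function name, window shapes, and stand-in data are assumptions; only the idea of running PCA on each in-line's attribute window and keeping its largest eigenvalue follows the text.

```python
# Hedged sketch of the per-in-line eigenvalue display in Figure 1A: run PCA on
# the attribute samples from each in-line's analysis window and keep the
# largest eigenvalue. Data shapes and names are illustrative.
import numpy as np
from sklearn.decomposition import PCA

def highest_eigenvalue_per_inline(attribute_windows):
    """attribute_windows: list of (n_samples, n_attributes) arrays, one per in-line."""
    eigenvalues = []
    for X in attribute_windows:
        pca = PCA(n_components=1).fit(X - X.mean(axis=0))
        eigenvalues.append(pca.explained_variance_[0])   # largest eigenvalue
    return np.array(eigenvalues)

# Example with random stand-in data for 50 in-lines
rng = np.random.default_rng(0)
windows = [rng.normal(size=(4000, 18)) for _ in range(50)]
print(highest_eigenvalue_per_inline(windows)[:5])
```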

Figure 1B displays the principal components from the selected in-lines over the anomalous feature with the highest eigenvalue (first principal component), indicating the percentage of seismic attributes contributing to this largest variation in the data. In this first principal component, the top seismic attributes include trace envelope, envelope modulated phase, envelope second derivative, sweetness and average energy, all of which account for more than 63 percent of the variance of all the instantaneous attributes in this PCA evaluation.

Figure 1C displays the PCA results, but this time the second highest eigenvalue was selected and produced a different set of seismic attributes. The top seismic attributes from the second principal component include instantaneous frequency, thin bed indicator, acceleration of phase, and dominant frequency, which total almost 70 percent of the variance of the 18 instantaneous seismic attributes analyzed. These results suggest that when applied to a SOM analysis, perhaps the two sets of seismic attributes for the first and second principal components will help define different types of anomalous features or different characteristics of the same feature.

The first SOM analysis (SOM A) incorporates the seismic attributes defined by the PCA with the highest variation in the data, i.e., the five highest percentage contributing attributes in Figure 1B.

Several neuron counts for SOM analyses were run on the data, and lower count matrices revealed broad, discrete features, while the higher counts displayed more detail and less variation. The SOM results from a five-by-five matrix of neurons (25) were selected for this article.


Detecting Attenuation

The north-south line through the field in Figures 2 and 3 shows the original stacked amplitude data and the classification results from the SOM analyses. In Figure 2B, the color map associated with the SOM classification results indicates all 25 neurons are displayed. Figure 2C shows results with four interpreted neurons highlighted.

Based on the location of the hydrocarbons determined from well control, it is interpreted from the SOM results that attenuation in the reservoir is very pronounced. As Figures 2B and 2C reveal, there is apparent absorption banding in the reservoir above the known hydrocarbon contacts defined by the wells in the field. This makes sense because the seismic attributes employed are sensitive to relatively low-frequency, broad variations in the seismic signal often associated with attenuation effects.

This combination of seismic attributes employed in the SOM analysis generates a more pronounced and clearer picture of attenuation in the reservoir than any of the seismic attributes or the original amplitude volume individually. Downdip of the field is another undrilled anomaly that also reveals apparent attenuation effects.

The second SOM evaluation (SOM B) includes the seismic attributes with the highest percentages from the second principal component, based on the PCA (see Figure 1). It is important to note that these attributes are different from the attributes determined from the first principal component. With a five-by-five neuron matrix, Figure 3 shows the classification results from this SOM evaluation on the same north-south line as Figure 2, and it identifies clearly several hydrocarbon contacts in the form of flat spots. These hydrocarbon contacts are confirmed by the well control.

Figure 3B defines three apparent flat spots that are further isolated in Figure 3C, which displays these features with two neurons. The gas/oil contact in the field was very difficult to see in the original seismic data, but is well defined and can be mapped from this SOM analysis.

The oil/water contact in the field is represented by a flat spot that defines the overall base of the hydrocarbon reservoir. Hints of this oil/water contact were interpreted from the original amplitude data, but the second SOM classification provides important information to clearly define the areal extent of the reservoir.

Downdip of the field is another apparent flat spot event that is undrilled and is similar to the flat spots identified in the field. Based on SOM evaluations A and B in the field, which reveal similar known attenuation and flat spot results, respectively, there is a high probability this undrilled feature contains hydrocarbons.

West Texas Case Study

Unlike the Gulf of Mexico case study, attribute analyses on the Fasken Ranch in the Permian Basin involved using a “recipe” of seismic attributes, based on their ability to sort out fluid properties, porosity trends and hydrocarbon sensitivities. Rather than use principal component analysis to see which attributes had the greatest variation in the data, targeted use of specific attributes helped solve an issue regarding conventional porosity zones within an unconventional depositional environment in the Spraberry and Wolfcamp formations.

The Fasken Ranch is located in portions of Andrews, Ector, Martin and Midland counties, Tx. The approximately 165,000-acre property, which consists of surface and mineral rights, is held privately. This case study shows the SOM analysis results for one well, the Fasken Oil and Ranch No. 303 FEE BI, which was drilled as a straight hole to a depth of 11,195 feet. The well was drilled through the Spraberry and Wolfcamp formations and encountered a porosity zone from 8,245 to 8,270 feet measured depth.

This enabled the well to produce more than four times the normal cumulative production found in a typical vertical Spraberry well. The problem was being able to find that zone using conventional attribute analysis in the seismic data. Figure 4A depicts cross-line 516, which trends north-south and shows the intersection with well 303. The porosity zone is highlighted with a red circle.


4A is the bandwidth extension amplitude volume, highlighting the No. 303 well and porosity zone. Wiggle trace overlay is from the amplitude volume. 4B is the SOM classification volume, highlighting the No. 303 well and porosity zone. Topology was 10-by-10 neurons with a 30-millisecond window above and below the zone of interest. Wiggle trace overlay is from the amplitude volume.

Seven attributes were used in the neural analysis: attenuation, BE14-100 (amplitude volume), average energy, envelope time derivative, density (derived through prestack inversion), spectral decomposition envelope sub-band at 67.3 hertz, and sweetness.
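
For illustration, the sketch below shows how seven attribute volumes of this kind could be stacked into a sample-by-attribute matrix ahead of a 10-by-10 SOM classification. The volume names, dimensions, and random values are placeholders; the actual analysis was performed in Paradise.

```python
# Illustrative sketch only: stacking seven attribute volumes into one
# sample-by-attribute matrix for a 10x10 (100 neuron) SOM classification.
# Volume names, shapes, and values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100, 60)          # in-lines x cross-lines x samples in the analysis window

attribute_volumes = {
    "attenuation": rng.normal(size=shape),
    "BE14-100_amplitude": rng.normal(size=shape),
    "average_energy": rng.normal(size=shape),
    "envelope_time_derivative": rng.normal(size=shape),
    "density_prestack_inversion": rng.normal(size=shape),
    "specdecomp_envelope_67p3Hz": rng.normal(size=shape),
    "sweetness": rng.normal(size=shape),
}

# Every seismic sample becomes one 7-dimensional point in attribute space,
# which a 10x10 SOM would then classify into 100 neuron numbers.
samples = np.column_stack([v.ravel() for v in attribute_volumes.values()])
print(samples.shape)            # (100*100*60 samples, 7 attributes)
```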

Figure 4B is the same cross-line 516, showing the results of classifying the seven attributes referenced. The red ellipse shows the pattern in the data that best represents the actual porosity zone encountered in the well, but could not be identified readily by conventional attribute analysis.

Figure 5 is a 3-D view of the cluster of neurons that best represent porosity. The ability to isolate specific neurons enables one to more easily visualize specific stratigraphic events in the data.


This SOM classification volume in 3-D view shows the combination of a neural “cluster” that represents the porosity zone seen in the No. 303 well, but not seen in surrounding wells.


Conclusions

Seismic attributes help identify numerous geologic features in conventional seismic data. Applying principal component analysis can help interpreters identify seismic attributes that show the most variance in the data for a given geologic setting, and help them determine which attributes to use in a multiattribute analysis using self-organizing maps. Applying current computing technology, visualization techniques, and understanding of appropriate parameters for SOM enables interpreters to take multiple seismic attributes and identify the natural organizational patterns in the data.

Multiple-attribute analyses are beneficial when single attributes are indistinct. These natural patterns or clusters represent geologic information embedded in the data and can help identify geologic features that often cannot be interpreted by any other means. Applying SOM to bring out geologic features and anomalies of significance may indicate this approach represents the next generation of advanced interpretation.


Editor’s Note

The authors wish to thank the staff of Geophysical Insights for researching and developing the applications used in this article. The seismic data for the Gulf of Mexico case study is courtesy of Petroleum Geo-Services. Thanks to T. Englehart for insight into the Gulf of Mexico case study. The authors also would like to acknowledge Glenn Winters and Dexter Harmon of Fasken Ranch for the use of the Midland Merge 3-D seismic survey in the West Texas case study.

ROCKY RODEN runs his own consulting company, Rocky Ridge Resources Inc., and works with oil companies around the world on interpretation technical issues, prospect generation, risk analysis evaluations, and reserve/resource calculations. He is a senior consulting geophysicist with Houston-based Geophysical Insights, helping develop advanced geophysical technology for interpretation. He also is a principal in the Rose and Associates DHI Risk Analysis Consortium, which is developing a seismic amplitude risk analysis program and worldwide prospect database. Roden also has worked with Seismic Microtechnology and Rock Solid Images on integrating advanced geophysical software applications. He holds a B.S. in oceanographic technology-geology from Lamar University and a M.S. in geological and geophysical oceanography from Texas A&M University.
DEBORAH KING SACREY is a geologist/geophysicist with 39 years of oil and gas exploration experience in the Texas and Louisiana Gulf Coast and Mid-Continent areas. For the past three years, she has been part of a Geophysical Insights team working to bring the power of multiattribute neural analysis of seismic data to the geoscience public. Sacrey received a degree in geology from the University of Oklahoma in 1976, and immediately started working for Gulf Oil. She started her own company, Auburn Energy, in 1990, and built her first geophysical workstation using Kingdom software in 1995. She specializes in 2-D and 3-D interpretation for clients in the United States and internationally. Sacrey is a DPA certified petroleum geologist and DPA certified petroleum geophysicist.