What is Machine Learning?

If you’re new to Machine Learning, let’s start at the top. The field of artificial intelligence is broadly divided into two categories – Strong AI and Narrow AI.

Strong AI is the effort to build a machine that looks and behaves like a person. Narrow AI focuses on specific tasks; its “neural networks” attempt to duplicate the brain’s neurological processes, which have been refined over millions of years of biological development.

Machine Learning is a subset of Narrow AI that performs pattern classification. It’s an engine – an algorithm that learns without explicit programming. It learns from the data. What does that mean? Given one set of data, it will come up with one answer; given a different set of data, it will come up with a different one.

A Self-Organizing Map is a type of neural network that adjusts to training data, yet it makes no assumptions about the characteristics of the data. If you look at the whole field of artificial intelligence, and at machine learning as a subset of it, there are two parts: supervised neural networks and unsupervised neural networks. Unsupervised is where you feed the network the data and say “you go figure it out.” In supervised neural networks, you give it both the data and the right answer. Examples of supervised neural networks include convolutional neural networks and deep learning algorithms. A convolutional network is a classical type of supervised neural network, where for every data sample we know the answer.

Here’s a classical example of a supervised neural network: your uncle just passed away and left you his canning operation in Cordova, Alaska. You go there and observe the employees taking fish off the conveyor and manually sorting them by type – buckets for eels, buckets for flounder, and so forth. Can you use AI (machine learning) to do something more efficient, and perhaps have those employees do something more productive? Absolutely! As the eels come along, you weigh them, take a picture of them, note the scales and general texture, and get some idea of their general shape. That’s three properties already. You continue running eels through and maybe get up to four or five properties, including measurements. The neural network is then trained on eels. Then you do the same thing with all the flounder. There will be variations, of course, but with those four or five properties measured for each fish, the two types will wind up in different clusters in attribute space. And that’s how we tell the difference between eels and flounder. Everything else that you can’t classify very well, you don’t know. All of that goes into the algorithm. That’s the difference between supervised neural networks and unsupervised neural networks.
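
The sorting story above can be sketched as a minimal supervised classifier in Python. The fish measurements and the distance threshold are invented for illustration, and a simple nearest-centroid rule stands in for a trained neural network:

```python
import numpy as np

# Hypothetical training data: rows are fish, columns are measured
# properties (weight in kg, length in cm, texture score). All numbers
# are invented for illustration.
eels = np.array([[0.5, 80.0, 0.20],
                 [0.6, 90.0, 0.25],
                 [0.4, 75.0, 0.15]])
flounder = np.array([[1.2, 35.0, 0.80],
                     [1.5, 40.0, 0.85],
                     [1.1, 33.0, 0.75]])

# "Training": remember where each labeled cluster sits in attribute space.
centroids = {"eel": eels.mean(axis=0), "flounder": flounder.mean(axis=0)}

def classify(sample, max_dist=30.0):
    """Assign the nearest cluster's label; far-away samples stay unknown."""
    dists = {name: float(np.linalg.norm(sample - c))
             for name, c in centroids.items()}
    name = min(dists, key=dists.get)
    return name if dists[name] < max_dist else "unknown"

print(classify(np.array([0.55, 85.0, 0.20])))   # lands in the eel cluster
print(classify(np.array([1.30, 37.0, 0.80])))   # lands in the flounder cluster
```

Samples far from both clusters fall into the "unknown" bucket, mirroring the fish that can't be classified very well.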

At Geophysical Insights, we believe we should be able to query our seismic data for information with learning machines just as effortlessly and with as much reliability as we query the web for the nearest gas station.

Self-Organizing Neural Nets for Automatic Anomaly Identification

By Tom Smith, Geophysical Insights and Sven Treitel, TriDekon

Self-organizing maps are a practical way to identify natural clusters in multi-attribute seismic data. A curvature measure distinguishes neurons that have found natural clusters from those that have not. Harvesting is a methodology for measuring consistency and delivering the most consistent classification. Those portions of the classification with low probability are an indicator of multi-attribute anomalies which warrant further investigation.

Introduction

Over the past several years, seismic data volumes have grown many times over. Often a prospect is evaluated with a primary 3D survey along with 5 to 25 attributes, which serve both general and unique purposes; these are well laid out by Chopra and Marfurt (2007). Self-organizing maps (Kohonen, 2001), or SOM for short, are a type of unsupervised neural network which fit themselves to the pattern of information in multi-dimensional data in an orderly fashion.

Multi-attributes and natural clusters

We organize a 3D seismic survey as a data volume regularly sampled in location X, Y and time T (or depth estimate Z). Each survey sample is represented by a number of attributes, f1, f2, …, fF, so an individual sample value carries four subscripts. Together, the samples constitute the survey space K:

K = { x_{c,d,e,f} },    (1)

where the indices c, d, e and f represent time, trace, line number and attribute number, respectively. It is convenient to represent a sample fixed in space as a vector of F attributes in attribute space. Let this set of attribute samples {x1, x2, …, xi, …, xI} be taken from K, with index i ranging from 1 to I. The layout of survey space representing a 3D attribute space is illustrated in Figure 1. An attribute sample, marked as resting on the top of pins, consists of a vector of three attribute values at a fixed location.
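
This indexing can be sketched as a small numpy example; the array shapes are arbitrary illustrations:

```python
import numpy as np

# A toy survey space, regularly sampled in time, trace and line, with F
# attribute values at every sample -- so each scalar carries four indices
# (c, d, e, f). The shapes here are arbitrary.
n_time, n_trace, n_line, F = 100, 20, 10, 3
survey = np.zeros((n_time, n_trace, n_line, F))

# A sample fixed in space is the vector of F attribute values there:
# one point in F-dimensional attribute space.
x = survey[50, 7, 3, :]
print(x.shape)               # an F-dimensional vector

# Collapsing the three survey axes yields the sample set {x_1, ..., x_I}.
samples = survey.reshape(-1, F)
I = samples.shape[0]
print(I == n_time * n_trace * n_line)
```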

Figure 1: Three 3D surveys are bricked together in a survey space comprising 3 attributes. Marked in the survey labeled Attribute 1 is a blue-green data sample. Connected to it are samples in the other attributes at the same position in the survey.

Figure 2: An example attribute space is marked here as Amplitude, Semblance and Frequency. The data sample in Figure 1 is located in the cluster of other blue-green data samples. Also shown are natural clusters of red samples (lower) and white samples (upper).

The sample of Figure 1 resides in attribute space as shown in Figure 2. Included in the illustration are other samples with similar properties. These natural clusters are regions of higher density which can constitute various seismic events with varying attribute characteristics. A natural cluster would register as a maximum in a probability distribution function. However, a large number of attributes entails a histogram of impractically high dimensionality.

Self-Organizing Map (SOM)

A SOM neuron lies in attribute space alongside the data samples, so a neuron is also an F-dimensional vector, noted here in bold as w. The neuron w lies in a topology of its own called the neuron space. At this point in the discussion the topology is unspecified, so we use the single subscript j as a place marker for any number of dimensions. Whereas data samples remain fixed in attribute space, neurons are allowed to move freely there; they are progressively drawn toward the data samples.

A neuron “learns” by adjusting its position within attribute space as it is drawn toward nearby data samples. Let us then define a self-organizing neural network as a collection of neurons

{w1, w2, …, wj, …, wJ},    (2)

with index j ranging from 1 through J. The neural network learns as its neurons adjust to natural clusters in attribute space. In general, the problem is to discover and identify an unknown number of natural clusters distributed in attribute space, given the following information: I data samples in survey space, F attributes in attribute space, and J neurons in neuron space. The SOM was invented by T. Kohonen (Kohonen, 2001). It addresses such issues as a classic problem in statistical classification.

Figure 3: The winning neuron is the one which is closest to the selected data point.

In Figure 3 we place three neurons at arbitrary locations in attribute space from Figure 2. A simple process of learning proceeds as follows. Given the first sample, one computes the distance from the data sample to each of the 3 neurons and selects the closest one. We choose the Euclidean distance as our measure of distance.

The winning neuron, with subscript k, is defined by

‖x − w_k‖ = min_j ‖x − w_j‖,    (3)

where j ranges over all neurons. In Figure 3, the neuron on the left is identified as the winning neuron. The winning neuron advances toward the data sample by a fraction of the distance between them. Then the second sample is selected and the process is repeated. In this example, the neuron marked as the winning neuron may end up in the leftmost cluster of the figure, the lowermost neuron near the center of the lower right cluster, and the third neuron in the cluster at the upper right. This type of learning is called competitive because only the winning neuron moves toward the data.
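
The competitive step can be sketched in a few lines of Python; the neuron positions, data sample, and learning fraction are made-up values:

```python
import numpy as np

# Three neurons at arbitrary positions in a 3-attribute space, and one
# data sample; all values are invented for illustration.
neurons = np.array([[0.10, 0.20, 0.10],
                    [0.80, 0.10, 0.30],
                    [0.50, 0.90, 0.70]])
x = np.array([0.15, 0.25, 0.12])

# Competitive step: Euclidean distance to every neuron; the smallest wins.
dists = np.linalg.norm(neurons - x, axis=1)
k = int(np.argmin(dists))
print("winning neuron:", k)

# The winner advances a fraction eta of the way toward the sample.
eta = 0.3
neurons[k] += eta * (x - neurons[k])
```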

A key point to note is that after one complete pass through the data samples, although some neurons may not have moved at all, every sample has one and only one winning neuron. A complete pass through the data is called an epoch. Many epochs may be required before the neurons have completed their clustering task.

The process just described is the basis of the SOM. A SOM neuron adjusts itself by the following recursion:

w_j(n+1) = w_j(n) + η(n) h_{j,k}(n) [x − w_j(n)],    (4)

where w_j(n) is the attribute-space position of neuron j at time step n and k is the winning neuron number. The recursion proceeds from time step n to step n + 1. The update is in the direction toward x along the “error” direction x − w_j(n). The amount of displacement is controlled by the learning control parameters, η and h, which are both scalars.

The η term grows smaller with each time step, so large neuron adjustments during early epochs smoothly taper to smaller adjustments later.

The h term embodies still another type of learning, which is also part of the SOM learning process:

h_{j,k}(n) = exp( −d²_{j,k} / 2σ²(n) ),    (6)

where the neighborhood width σ(n), like η, decreases with time step n. Here d_{j,k} is the Euclidean distance between neurons in the neuron space introduced in equation (2):

d_{j,k} = ‖y_j − y_k‖.    (7)

Figure 4: An assortment of neuron network topologies is shown here. Let the neuron position be r(p), with the distance between a neuron and its nearest neighbor set to 1 unit.

In equation (7), y is a positional vector in the neuron topology. Several options for neuron topology in neuron space are shown in Figure 4.

From equation (6) we observe that not only does the winning neuron move toward a data sample; the neurons around the winning neuron move as well. These neurons constitute the winning neuron’s neighborhood.

In the hexagonal topology of Figure 4, note that the marked neuron has 6 nearest neighbors. If this neuron is selected as a winning neuron, equations (6) and (7) indicate that the 6 nearest neurons move toward the data sample by a like amount. More distant neurons from the winning neuron move a lesser amount.

Neighborhoods of neuron movement constitute cooperative learning. For a 2D neuron space, hexagonal topology offers the maximum number of similarly distant neurons, so we have chosen a hexagonal neural network because it maximizes cooperative learning in 2D. The SOM embodies both competitive and cooperative learning rules (Haykin, 2009).
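
The combined competitive and cooperative rules can be sketched as a toy SOM training loop. A square 4 x 4 grid is used here for brevity rather than the hexagonal topology of Figure 4, and exponential decay of the learning controls is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two natural clusters in a 2-attribute space.
data = np.vstack([rng.normal(0.2, 0.05, (50, 2)),
                  rng.normal(0.8, 0.05, (50, 2))])

# 4 x 4 neuron grid; grid holds each neuron's position in neuron space.
grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
W = rng.random((16, 2))              # neuron positions in attribute space

eta0, sigma0, n_epochs = 0.3, 1.5, 40
for epoch in range(n_epochs):
    # Both learning controls taper smoothly with time (assumed exponential).
    eta = eta0 * np.exp(-epoch / n_epochs)
    sigma = sigma0 * np.exp(-epoch / n_epochs)
    for x in data:
        # Competitive: the closest neuron in attribute space wins.
        k = int(np.argmin(np.linalg.norm(W - x, axis=1)))
        # Cooperative: neighbors in *neuron space* move too, scaled by h.
        d2 = np.sum((grid - grid[k]) ** 2, axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))
        W += eta * h[:, None] * (x - W)

# After training, every sample should lie close to some winning neuron.
errors = [float(np.min(np.linalg.norm(W - x, axis=1))) for x in data]
print(round(float(np.mean(errors)), 3))
```

The distinction between the two spaces is the heart of the SOM: distances to the data are measured in attribute space, while the neighborhood h is measured in neuron space.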

Curvature measure

To search for natural clusters and to avoid the curse of dimensionality (Bishop, 2007), we allow the SOM to find them for us. However, there is no assurance that at the end of such a SOM analysis the neurons have come to rest at or near the centers of natural clusters. To address this issue, we turn to the simple definition of a maximum. A natural cluster is by definition a denser region of attribute space. It is identified as a maximum in a probability distribution function through analysis of a histogram. In 1D the histogram has a maximum; in 2D the histogram is a maximum in 2 orthogonal directions and so on.

In F-dimensional attribute space, a natural cluster is revealed by a peak in the probability distribution function of all F attributes. Recall that at the end of an epoch there is a one-to-one relationship between a data sample and its winning neuron. That implies that to every winning neuron there corresponds a set of one or more data samples.

Then for some winning neuron with index k, there exists the set of data samples

X_k = { x : ‖x − w_k‖ ≤ ‖x − w_j‖ for all j },    (9)

where X_k ⊂ K comprises the samples drawn from the survey space K for that winning neuron. Some winning neurons in equation (9) have a small collection of samples, while others will have a larger collection.

In the set X_k for a winning neuron w_k, we can compute a histogram for each of the F attributes. If the histogram has a peak, we have found a local maximum of the probability distribution in that particular dimension, and we score the attribute as a success. We count all attributes in this way (1 for success, 0 for failure) and divide the result by the number of attributes. The resulting curvature measure reflects density and lies in the range [0, 1]. Each neuron, and each attribute, has a curvature measure.
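
A sketch of this counting procedure, under a simplified reading in which a “peak” means the histogram maximum falls away from the edge bins; the data are synthetic:

```python
import numpy as np

def curvature_measure(samples, bins=10):
    """Fraction of attributes whose histogram peaks away from the edges.

    `samples` holds the data vectors captured by one winning neuron,
    shape (n_samples, F). Scoring an attribute 1 only when the histogram
    maximum is an interior bin is a simplifying assumption.
    """
    _, F = samples.shape
    successes = 0
    for f in range(F):
        hist, _ = np.histogram(samples[:, f], bins=bins)
        peak = int(np.argmax(hist))
        if 0 < peak < bins - 1:      # a maximum in this dimension
            successes += 1
    return successes / F

rng = np.random.default_rng(1)
# Attribute 0 is clustered (interior peak); attribute 1 piles up at an
# edge bin and scores 0, so the measure lands at 0.5 for this pair.
clustered = rng.normal(0.5, 0.05, 500)
edge_peaked = rng.random(500) ** 0.2
cm = curvature_measure(np.column_stack([clustered, edge_peaked]))
print(cm)
```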

Harvesting and consistency

A harvesting process consists of three steps. First, unsupervised neural network analyses are run on independent sets of data drawn from a 2D or 3D survey. A rule is then used to decide which candidate is the best solution. Finally, the set of neurons of the best solution is used to classify the entire survey.

We have conducted a series of SOM analysis and classification steps for the Stratton Field 3D seismic survey (provided courtesy of the Bureau of Economic Geology and the University of Texas). A time window of 1.2 to 1.8 s was selected for SOM analysis with an 8 x 8 hexagonal network and 100 learning epochs. We measured performance by the standard deviation of the error between data samples and their winning neurons; the reduction in this error was typically 35%. A separate SOM analysis was conducted on each of the 100 lines in order to assess the consistency of results. The SOM results were highly consistent, with a variation in final standard deviation of error of only 1.5% of the mean. The rule used here is to select the SOM solution for the line which best fits its data, that is, the one with the smallest error of fit.
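
The selection rule can be sketched as follows; the candidate “solutions” here are random stand-ins for trained neuron sets, not real SOM output, and the shapes merely mirror an 8 x 8 network:

```python
import numpy as np

def fit_error(W, data):
    """Standard deviation of sample-to-winning-neuron distances."""
    d = np.array([np.min(np.linalg.norm(W - x, axis=1)) for x in data])
    return float(d.std())

# Hypothetical harvest: one candidate neuron set per line, each scored
# against its own line's data. Random arrays stand in for SOM solutions.
rng = np.random.default_rng(2)
lines = [rng.random((200, 5)) for _ in range(3)]       # 3 lines, 5 attributes
solutions = [rng.random((64, 5)) for _ in range(3)]    # 8 x 8 = 64 neurons

errors = [fit_error(W, d) for W, d in zip(solutions, lines)]
best = int(np.argmin(errors))    # harvesting rule: smallest error of fit
print(best, round(errors[best], 4))
```

The neuron set `solutions[best]` would then classify the entire survey.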

Figure 5: SOM classification of Line 53. Timing lines at 1.3 and 1.6 s are marked by arrows.

Figure 6: Hexagonal colorbar for Figure 5.

The SOM results form a new attribute volume of winning neuron classifications. Every sample in this new volume is the index of the winning neuron number for a given data sample. The SOM analysis was derived from 50 attributes which included basic trace attributes, geometric attributes and spectral decompositions. A solution line is shown in Figure 5.

First observe that the SOM results more or less track the geology, as shown by flat reflections near 1.3 s. At the well are shown SP and GR well log curves with lithologic shading, formation top picks, as well as a synthetic seismogram (white WTVA trace along the borehole). A reflector above 1.6 s is a second key marker. Also notice the green patches near 1.5 s, which were identified as patches with lateral consistency; we have no geologic interpretation for them at this time. In Figure 6 we show the colorbar patterned to the neuron topology and colored to assist in identification of regions of neuron clusters.

Figure 7: Vertical section through a 3D survey of a Gulf of Mexico salt dome.

Figure 8: SOM classification with p > 0.1, corresponding to the same slice as Figure 7.

Figure 9: Time slice for Figure 8.

The curvature measure (CM) of the neurons fell in the range 0.72 to 0.9, except for one neuron with a curvature measure of 0.26. It was found that this winning neuron resulted from only 8 samples, while the others had 17 to 139.

We have also investigated the per-attribute curvature measure and found that there were 3 poorly performing attributes (CM < 0.2), 4 attributes which we consider questionable (CM ≈ 0.5), and 43 strong attributes (CM > 0.8).

Automatic anomaly identification

Equation (9) is the basis on which we classify the quality of classification samples. For any winning neuron, we select samples whose probability p exceeds a threshold p_min,

p(x) = exp( −‖x − w_k‖ ),  with x selected when p(x) > p_min,    (10)

so that probability is based on the distance between the winning neuron and its samples. Those samples that are near their winning neuron have higher probability.

The classification of Figure 8 results from a SOM analysis of the data of Figure 7 with an 8 x 8 neuron network, F = 13 attributes and 100 epochs. Those samples whose probabilities lie below p_min are anomalous and are assigned a white color. The rest of the classification uses the colorbar of Figure 6.
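
The flagging step can be sketched numerically. The Gaussian fall-off used for p, and all shapes and values, are assumptions here; the key property is only that p decays with distance to the winning neuron:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.random((64, 13))             # 8 x 8 neurons, F = 13 attributes
data = rng.random((1000, 13))        # classified samples (synthetic)

# Distance from each sample to its winning neuron.
dists = np.array([np.min(np.linalg.norm(W - x, axis=1)) for x in data])

# Turn distance into a probability in (0, 1]: a Gaussian fall-off is
# assumed; near the winning neuron, p approaches 1.
sigma = float(dists.std())
p = np.exp(-dists ** 2 / (2 * sigma ** 2))

p_min = 0.1
anomalous = p < p_min                # these samples would be colored white
print(int(anomalous.sum()), "anomalous samples out of", len(data))
```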

The red lines of Figures 8 and 9 register the two views, so the white anomaly on the right side of the salt in Figure 8 has an areal extent in Figure 9 that appears to be bounded by the salt. We are not suggesting that all anomalies are of geologic interest. However, those of sufficient size are worthy of investigation for geologic merit.

Conclusion

This presentation investigates the areas within a 3D survey where multiple attributes are of unusual character. Self-organizing maps assist with an automatic identification process. While these results are encouraging, it is readily apparent that additional investigations must be made into appropriate learning controls. Various structural and stratigraphic questions which might be posed to SOM analysis will require careful selection of appropriate attributes. It is also clear that calibration of SOM analyses with borehole information offers an attractive area of investigation.

Acknowledgment

Tury Taner was our inspiration and pioneer in this area.

THOMAS A. SMITH received BS and MS degrees in Geology from Iowa State University. In 1971, he joined Chevron Geophysical as a processing geophysicist. In 1980 he left to pursue doctoral studies in Geophysics at the University of Houston. Dr. Smith founded Seismic Micro-Technology in 1984 and there led the development of the KINGDOM software suite for seismic interpretation. In 2007, he sold the majority position in the company but retained a position on the Board of Directors. SMT is in the process of being acquired by IHS; on completion, the SMT Board will be dissolved. In 2008, he founded Geophysical Insights, where he and several other geophysicists are developing advanced technologies for fundamental geophysical problems.

The SEG awarded Tom the SEG Enterprise Award in 2000, and in 2010 the GSH awarded him the Honorary Membership Award. Iowa State University awarded him its Distinguished Alumnus Lecturer Award in 1996 and its Citation of Merit for National and International Recognition in 2002. Seismic Micro-Technology received a GSH Corporate Star Award in 2005. Dr. Smith has been a member of the SEG since 1967 and is also a member of the HGS, EAGE, SIPES, AAPG, GSH, Sigma Xi, SSA, and AGU.

Distillation of Seismic Attributes to Geologic Significance

By: Rocky Roden, Geophysical Insights
Published with permission: Offshore Technology Conference
May 2015

Abstract

The generation of seismic attributes has enabled geoscientists to better understand certain geologic features in their seismic data. A seismic attribute is a measurable property of seismic data, such as amplitude, dip, frequency, phase, and polarity. Attributes can be measured at one instant in time/depth or over a time/depth window, and may be measured on a single trace, on a set of traces, or on a surface interpreted from the seismic data. Commonly employed categories of seismic attributes include instantaneous, AVO, spectral decomposition, inversion, geometric, and amplitude accentuating. However, the industry abounds with dozens, if not hundreds, of seismic attributes that at times are difficult to understand, and not all have interpretive significance. Over the last few years there have been efforts to distill these numerous seismic attributes into volumes that can be easily evaluated to determine their geologic significance and improve seismic interpretations. With increased computer power and research that has determined appropriate parameters, self-organizing maps (SOM), a form of unsupervised neural network, have proven to be an excellent method to take many of these seismic attributes and produce meaningful and easily interpretable results. SOM analysis reveals the natural clustering and patterns in the data and has been beneficial in defining stratigraphy, seismic facies, pressure, DHI features, and sweet spots for shale plays. Recent work utilizing SOM, along with principal component analysis (PCA), has revealed geologic features not previously identified or easily interpreted from the data. The ultimate goal of this multi-attribute analysis is to enable the geoscientist to produce a more accurate interpretation and reduce exploration and development risk.

Introduction

The object of seismic interpretation is to extract all the geological information possible from the data as it relates to structure, stratigraphy, rock properties, and perhaps reservoir fluid changes in space and time (Liner, 1999). Over the last two decades the industry has seen significant advancements in interpretation capabilities, strongly driven by increased computer power and associated visualization technology. Advanced picking and tracking algorithms for horizons and faults, integration of pre-stack and post-stack seismic data, detailed mapping capabilities, integration of well data, development of geological models, seismic analysis and fluid modeling, and generation of seismic attributes are all part of the seismic interpreter’s toolkit. What is the next advancement in seismic interpretation?

A significant issue in today’s interpretation environment is the enormous amount of data that is employed and generated in and for our workstations. Seismic gathers, regional 3D surveys with numerous processing versions, large populations of wells and associated data, and dozens if not hundreds of seismic attributes routinely produce quantities of data in the terabytes. The ability of the interpreter to make meaningful interpretations from these huge projects can be difficult and at times quite inefficient. Is the next step in the advancement of interpretation the ability to interpret large quantities of seismic data more effectively and potentially derive more meaningful information from the data?

This paper describes methodologies to analyze combinations of seismic attributes for meaningful patterns that correspond to geological features. A seismic attribute is any measurable property of seismic data, such as amplitude, dip, phase, frequency, and polarity, and can be measured at one instant in time/depth or over a time/depth window, on a single trace, on a set of traces, or on a surface interpreted from the seismic data (Schlumberger Oilfield Glossary). Seismic attributes reveal features, relationships, and patterns in the seismic data that otherwise might not be noticed (Chopra and Marfurt, 2007). It is therefore logical to deduce that a multi-attribute approach with the proper input parameters can produce even more meaningful results and help reduce risk in prospects and projects. Principal Component Analysis (PCA) and Self-Organizing Maps (SOM) provide multi-attribute analyses that have proven to be an excellent pattern recognition approach in the seismic interpretation workflow.

Seismic Attributes

Balch (1971) and Anstey at Seiscom-Delta in the early 1970s are credited with producing some of the first generation of seismic attributes, and they stimulated the industry to rethink standard methodology when these results were presented in color. Further development was advanced with the publications by Taner and Sheriff (1977) and Taner et al. (1979), who presented complex trace attributes to display aspects of seismic data in color not seen before, at least in the interpretation community. The primary complex trace attributes, including reflection strength (envelope), instantaneous phase, and instantaneous frequency, inspired several generations of new seismic attributes that evolved as visualization and computer power improved. Since the 1970s there has been an explosion of seismic attributes, to such an extent that there is no standard approach to categorizing them. Table 1 is a composite list of seismic attributes and associated categories routinely employed in seismic interpretation today. There are of course many more seismic attributes, and combinations of seismic attributes, than listed in Table 1, but as Barnes (2006) suggests, if you don’t know what an attribute means or is used for, discard it. Barnes prefers attributes with geological or geophysical significance and avoids attributes with purely mathematical meaning.

In an effort to improve interpretation of seismic attributes, interpreters began to co-blend two and three attributes together to better visualize features of interest. Even the generation of attributes on attributes has been employed. Abele and Roden (2012) describe an example of this where dip of maximum similarity, a type of coherency, was generated for two spectral decomposition volumes (high and low bands) which displayed high energy at certain frequencies in the Eagle Ford Shale interval of South Texas. The similarity results at the Eagle Ford from the high frequency data showed more detail of fault and fracture trends than the similarity volume of the full frequency data. Even the low frequency similarity results displayed better regional trends than the original full frequency data. From the evolution of ever more seismic attributes that multiply the information to interpret, we investigate principal component analysis and self-organizing maps to derive more useful information from multi-attribute data in the search for oil and gas.

Table 1— Typical seismic attribute categories and types and their associated interpretive uses

Principal Component Analysis

The first step in a seismic multi-attribute analysis is to determine which seismic attributes to select for the SOM. Interpreters familiar with seismic attributes and what they reveal (see Table 1) in their geologic setting may select a group of attributes and run a SOM. If it is unclear which attributes to select, a principal component analysis (PCA) may be beneficial. PCA is a linear mathematical technique to reduce a large set of variables (seismic attributes) to a small set that still contains most of the variation in the large set.

Figure 1 —Principal Component Analysis (PCA) results displayed in Paradise® with top histograms displaying highest eigenvalues for 3D inlines and bottom portion displaying the highest eigenvalue at the red histogram location above. The bottom right display indicates the percentage contribution of the attributes in the first principal component.

In other words, PCA helps determine the most meaningful seismic attributes. Figure 1 displays a PCA analysis where the blue histograms on top show the highest eigenvalues for every inline in that seismic survey. An eigenvalue measures how much variance there is along its associated eigenvector, and an eigenvector is a direction showing the spread in the data. The interpreter looks for the seismic attributes that make up the highest eigenvalues to determine appropriate seismic attributes to input into a SOM run. The selected eigenvalue (in red) at the top of Figure 1 is expanded by showing all eigenvalues (largest to smallest, left to right) in the lower leftmost portion of the figure. The seismic attributes of the largest eigenvector show their contribution to the largest variance in the data. In this example, S impedance, MuRho, and Young’s brittleness make up over 95% of the highest eigenvalue. This suggests these three attributes capture significant variance in the overall set of nine attributes employed in this PCA and may be important attributes for a SOM analysis. The highest-ranking attributes of the highest, and perhaps the second highest, eigenvalues are evaluated to determine the consistency of the seismic attributes contributing to the PCA. This process enables the interpreter to determine appropriate seismic attributes for the SOM evaluation.
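
A numerical sketch of reading attribute contributions off the first principal component; the attribute names and the synthetic covariance structure are invented for illustration:

```python
import numpy as np

# PCA sketch: eigen-decompose the covariance of standardized attributes
# and read off each attribute's contribution to the first principal
# component. Names and data are placeholders.
rng = np.random.default_rng(4)
names = ["S impedance", "MuRho", "Young's brittleness", "Envelope"]
base = rng.normal(size=(500, 1))
# Three attributes co-vary strongly; the fourth is independent noise.
X = np.hstack([base + 0.1 * rng.normal(size=(500, 3)),
               rng.normal(size=(500, 1))])

Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize each attribute
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]               # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Percent contribution of each attribute to the first eigenvector.
contrib = 100 * eigvecs[:, 0] ** 2 / np.sum(eigvecs[:, 0] ** 2)
for name, c in zip(names, contrib):
    print(f"{name}: {c:.1f}%")
```

The three correlated attributes dominate the first principal component, mirroring how an interpreter would shortlist attributes for a SOM run.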

Self-Organizing Maps

The next level of interpretation requires pattern recognition and classification of the often subtle information embedded in the seismic attributes. Taking advantage of today’s computing technology, visualization techniques, and understanding of appropriate parameters, Self-Organizing Maps (SOMs) (Kohonen, 2001) efficiently distill multiple seismic attributes into classification and probability volumes (Smith and Taner, 2010). SOM is a powerful non-linear cluster analysis and pattern recognition approach that helps interpreters identify patterns in their data that relate to desired geologic characteristics, as listed in Table 1. Seismic data contains huge numbers of samples and is highly continuous, greatly redundant, and significantly noisy (Coleou et al., 2003). The tremendous volume of samples from numerous seismic attributes nevertheless exhibits significant organizational structure in the midst of noise (Taner, Treitel, and Smith, 2009). SOM analysis identifies these natural organizational structures in the form of clusters, which reveal significant information about the classification structure of natural groups that is difficult to view any other way. The natural groups and patterns identified by the clusters reveal the geology and aspects of the data that are difficult to interpret otherwise.

Figure 2—Classification map at the Yegua sand level and classification line through the successful well (OTC-25718-MS). Source: Images courtesy of Deborah Sacrey of Auburn Energy.

Figure 3—Volume rendered displays at the Yegua sand with 2D colormaps in Paradise®. Specific clusters are identified by the 2D colormaps. Source: Images courtesy of Deborah Sacrey of Auburn Energy.

Case Study Examples

Once a set or perhaps several sets of seismic attributes are selected, often from a PCA evaluation, these sets of seismic attributes are input into separate SOM analyses. The SOM setup allows the interpreter to select the number of clusters, window size, and various training parameters for a SOM evaluation. Figure 2 displays the classification results from an onshore Texas geologic setting exploring for prospective Yegua sands. Hydrocarbon Yegua sands in this area typically produce Class 2 AVO seismic responses and the AVO seismic attributes employed in the SOM analysis are listed in Figure 2. The SOM classification map shows an anomalous area downthrown to a northeast-southwest trending fault which was drilled and found to be productive. The line displays the SOM anomaly through the field. Figure 3 displays volume rendered results of the SOM analysis where specific clusters or patterns are identified by associated 2D colormaps. An additional successful well was drilled north of the original well where a similar SOM anomaly was identified. The 2D colormaps are unique visualization approaches to identify geologic features and anomalous areas from SOM classification volumes.

Figure 4—SOM classification line employing seismic attributes specifically for flat spots. This line clearly identifies hydrocarbon contacts in the reservoir.

Figure 5—SOM classification line employing seismic attributes to define hydrocarbon attenuation. The attenuation effects in the reservoir are prominent. Seismic data provided courtesy of Petroleum Geo-Services (PGS).

In a shallow-water offshore Gulf of Mexico setting, anomalous seismic amplitudes were evaluated for DHI characteristics, such as possible hydrocarbon contacts (flat spots) and attenuation, with various SOM analyses. With input from a PCA evaluation, Figure 4 lists the seismic attributes employed in an effort to identify flat spots. The SOM analysis for flat spots clearly denotes not only a gas/oil contact but also an oil/water contact, both corroborated by two wells in the field. These hydrocarbon contacts were not clearly defined or identified from the conventional seismic data alone. To further evaluate this anomaly, a series of seismic attributes was selected to define attenuation, an important DHI characteristic indicative of the presence of hydrocarbons. Figure 5 lists the seismic attributes employed in this SOM analysis. As the SOM classification line of Figure 5 displays, the anomalous attenuation effects in the hydrocarbon sand reservoir are very prominent. Figures 4 and 5 indicate that, with the appropriate selection of seismic attributes and SOM parameters, DHI characteristics such as flat spots and attenuation can be more easily identified with SOM analyses, ultimately decreasing the risk in prospective targets for this geologic setting.

Conclusions

Seismic attributes help identify numerous geologic features in conventional seismic data. The application of Principal Component Analysis (PCA) can help interpreters identify the seismic attributes that show the most variance in the data for a given geologic setting and help determine which attributes to use in a multi-attribute analysis using Self-Organizing Maps (SOMs). Applying current computing technology, visualization techniques, and an understanding of appropriate SOM parameters enables interpreters to take multiple seismic attributes and identify the natural organizational patterns in the data. Multiple-attribute analyses are beneficial when single attributes are indistinct. These natural patterns or clusters represent geologic information embedded in the data and can help identify geologic features that often cannot be interpreted by any other means. The application of SOM to bring out geologic features and anomalies of significance may indicate that this approach represents the next generation of advanced interpretation.

Acknowledgements

The author would like to thank the staff of Geophysical Insights for the research and development of the PCA and SOM applications. Thanks also to Deborah Sacrey for providing the information for the Yegua case study.

References

Abele, S. and R. Roden, 2012, Fracture detection interpretation beyond conventional seismic approaches: Poster AAPG-ICE, Milan.

Balch, A. H., 1971, Color sonograms: a new dimension in seismic data interpretation: Geophysics, 36, 1074–1098.

Barnes, A., 2006, Too many seismic attributes?: CSEG Recorder, March, 41–45.

Chopra, S. and K. Marfurt, 2007, Seismic attributes for prospect identification and reservoir characterization: SEG Geophysical Development Series No. 11.

Coleou, T., M. Poupon, and A. Kostia, 2003, Unsupervised seismic facies classification: A review and comparison of techniques and implementation: The Leading Edge, 22, 942–953.

Kohonen, T., 2001, Self-Organizing Maps: third extended edition, Springer Series in Information Sciences, Vol. 30.

Liner, C., 1999, Elements of 3-D Seismology: PennWell.

Schlumberger Oilfield Glossary, online reference.

Smith, T. and M. T. Taner, 2010, Natural Clusters in Multi-Attribute Seismics Found With Self-Organizing Maps: Extended Abstracts, Robinson-Treitel Spring Symposium by GSH/SEG, March 10-11, 2010, Houston, Tx.

Taner, M. T., F. Koehler, and R. E. Sheriff, 1979, Complex seismic trace analysis: Geophysics, 44, 1041–1063.

Taner, M. T., and R. E. Sheriff, 1977, Application of amplitude, frequency, and other attributes to stratigraphic and hydrocarbon determination, in C. E. Payton, ed., Applications to hydrocarbon exploration: AAPG Memoir 26, 301–327.

Taner, M.T., S. Treitel, and T. Smith, 2009, Self-Organizing Maps of Multi-Attribute 3D Seismic Reflection Surveys: SEG 2009 Workshop on “What’s New In Seismic Interpretation?,” Houston, Tx.


ROCKY RODEN owns his own consulting company, Rocky Ridge Resources Inc., and works with several oil companies on technical and prospect evaluation issues. He also is a principal in the Rose and Associates DHI Risk Analysis Consortium and was Chief Consulting Geophysicist with Seismic Micro-technology. He is a proven oil finder (36 years in the industry) with extensive knowledge of modern geoscience technical approaches (past Chairman – The Leading Edge Editorial Board). As Chief Geophysicist and Director of Applied Technology for Repsol-YPF, his role comprised advising corporate officers, geoscientists, and managers on interpretation, strategy and technical analysis for exploration and development in offices in the U.S., Argentina, Spain, Egypt, Bolivia, Ecuador, Peru, Brazil, Venezuela, Malaysia, and Indonesia. He has been involved in the technical and economic evaluation of Gulf of Mexico lease sales, farmouts worldwide, and bid rounds in South America, Europe, and the Far East. Previous work experience includes exploration and development at Maxus Energy, Pogo Producing, Decca Survey, and Texaco. He holds a BS in Oceanographic Technology-Geology from Lamar University and a MS in Geological and Geophysical Oceanography from Texas A&M University. Rocky is a member of SEG, AAPG, HGS, GSH, EAGE, and SIPES.


Introduction to Self-Organizing Maps in Multi-Attribute Seismic Data


By Tom Smith and Sven Treitel
Published with permission: Geophysical Society of Houston
January 2011

An unsupervised neural network searches multi-dimensional data for natural clusters; neurons are attracted to areas of higher information density. SOM analysis relates these clusters to subsurface geometry and rock properties, and compares the multi-attribute seismic properties at the wells, which correlate to rock lithologies, with those away from the wells.

Computers that think like a human are well beyond our current capabilities but computers that learn are not. They are around us every day. Pocket cameras identify faces in a live digital image and automatically adjust the focus when the shutter is pressed. Post offices scan the mail and route the documents appropriately. Offices scan documents as bitmaps and convert them to text documents for editing. Web documents are indexed for content, while search engines deliver these documents through key word searches in unprecedented detail and with extraordinary speed.

We have seen tremendous growth in the size of 3D seismic survey data volumes, and it is common today for both 2D and 3D seismic surveys to be integrated into the interpretation. Moreover, the primary survey of reflection amplitude is interpreted along with derived surveys of perhaps 5 to 25 attributes. The attributes of both 2D and 3D surveys represent multidimensional data, and the problem is to keep all of this data in one’s head while trying to find oil and gas. Much interpretation effort is devoted to building a geologic framework from the seismic data, identifying key reflecting intervals where oil and gas might be found, and finding an interesting anomaly. At this point attributes are the framework in which we evaluate the anomaly. But this is also the point where we can easily mislead ourselves: it is quite easy to build a plausible model for a prospect using only those attributes which fit our model and to ignore the rest. This is bad enough, but there is an even greater crime. Lurking in the data may be combinations of attributes which constitute legitimate anomalies but which are never found at all.

Learning machines are artificial neural networks which can construct an experience database from multidimensional data such as multi-attribute seismic surveys. There are two main classes of neural networks – supervised and unsupervised. With supervised neural networks, the network classifies data into groups sharing characteristics that have already been identified by an expert. Synthetic seismograms prepared at well sites after careful processing serve as the expert’s data, and the neural network is trained to classify these data at the wells. After training, the neural network then scans the seismic data to classify areas which might be similar, in some given sense, to the models developed at the well locations.

Alternatively, an unsupervised neural network searches multidimensional data for natural clusters. Neurons are attracted to areas of higher information density. The most popular unsupervised neural network, the self-organizing map (SOM), was introduced by Teuvo Kohonen in 1981 [1]. SOM was successfully applied to seismic facies analysis by Poupon, Azbel and Ingram in 1999 (Stratimagic) [2]. We preface recent efforts to bring SOM to bear on multi-attribute seismic interpretation with a simple SOM example used by Kohonen to illustrate some of its basic features.

Quality of Life

An early problem considered by Kohonen and his research team was to identify natural clusters as they relate to quality-of-life factors based on World Bank data. A study that included 126 countries considered a total of 39 measurements describing the level of poverty found in each country. While the data matrix was somewhat limited by incomplete reporting, the SOM results are still quite interesting. Shown in Figure 1 is the SOM which resulted from the learning process. Canada (CAN) and the United States of America (USA) clustered at the same neuron location, shown at the 6th row of the 1st column in the figure. Ethiopia (ETH) is found on the right edge at column 13, row 5. Other country abbreviations and further details are in [3].


Figure 1: Self-organizing map (SOM) of World Bank quality of life data.

The reason that countries of similar quality of life cluster in similar neuron areas has to do with learning principles that are built into SOM. In this study, every country is a sample and that sample is a column vector of 39 elements. In other words, there are 39 attributes in this problem. Countries of similar characteristics (a natural cluster) plot in about the same place in attribute space. At the beginning of the learning process, neurons of 39 dimensions are assigned random numbers. During the learning process, the neurons move toward natural clusters. The data points never move. The mathematics of SOM learning define both competitive and cooperative learning. For a given data sample, the Euclidean distance is computed between the sample and each neuron. The neuron which is “nearest” to the data sample is declared the “winning” neuron and allowed to advance a fraction of the distance toward the data sample. The neuron movement is the essence of machine learning. Competitive learning is embodied in the strategy that the winning neuron moves toward the data sample.
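The competitive step described above — find the Euclidean-nearest neuron and move it a fraction of the way toward the sample — can be sketched in a few lines of numpy. This is a minimal illustration, not the authors’ implementation; the two 2-D neurons and the 0.5 learning rate are arbitrary.

```python
import numpy as np

def winning_neuron(sample, neurons):
    """Competitive step: index of the Euclidean-nearest neuron."""
    return int(np.argmin(np.linalg.norm(neurons - sample, axis=1)))

def update_winner(sample, neurons, winner, rate=0.5):
    """Move the winning neuron a fraction of the distance to the sample."""
    neurons[winner] += rate * (sample - neurons[winner])

neurons = np.array([[0.0, 0.0], [10.0, 10.0]])  # two 2-D neurons
sample = np.array([1.0, 1.0])
w = winning_neuron(sample, neurons)             # neuron 0 is nearest
update_winner(sample, neurons, w)
print(w, neurons[w])  # prints: 0 [0.5 0.5]
```

Note that the data sample itself never moves; only the winning neuron advances toward it.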

Cooperative learning, in turn, is related to the layout of the neural network. In SOM learning, the neural network is commonly a 2D hexagonal grid. This constitutes the neuron topology; the reason for choosing a hexagonal grid rather than a rectangular one will be apparent shortly. When a winning neuron has been found, cooperative learning takes place because the neurons in the vicinity of the winning neuron (the neighborhood) are also allowed to move toward the data sample, but by an amount less than the winning neuron. In fact, the further a neighboring neuron is from the winning neuron, the less it is allowed to move. Hexagonal grids move more neurons than rectangular grids because they have 6 points of contact with their immediate neighbors instead of 4. Learning continues as winning and neighborhood neurons move toward each sample in turn until the entire set of samples has been processed. At this point, one epoch of learning has been completed; the event is marked as one time step in the learning process. For each subsequent epoch, the distance a winning neuron may move toward a data sample is reduced slightly, and the size of the neighborhood is also reduced. The learning process terminates when there is no further appreciable movement of the neurons. Often the number of such epochs runs into the hundreds or thousands.
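Putting the competitive and cooperative steps together, a bare-bones SOM training loop might look like the following sketch. The hexagonal layout, Gaussian neighborhood weights, and linear decay schedules are common textbook choices, offered here as an illustrative assumption rather than the implementation behind the figures.

```python
import numpy as np

def hex_positions(rows, cols):
    """Lay out neurons on a hexagonal grid: odd rows are offset by half
    a unit and rows are sqrt(3)/2 apart, giving 6 equidistant neighbors."""
    pos = np.zeros((rows * cols, 2))
    for r in range(rows):
        for c in range(cols):
            pos[r * cols + c] = (c + 0.5 * (r % 2), r * np.sqrt(3) / 2)
    return pos

def train_som(data, rows=8, cols=8, epochs=100, rate0=0.5, radius0=4.0, seed=0):
    """Bare-bones SOM training: competitive winner selection plus a
    Gaussian cooperative neighborhood, both decaying linearly per epoch."""
    rng = np.random.default_rng(seed)
    neurons = rng.normal(size=(rows * cols, data.shape[1]))  # random start
    grid = hex_positions(rows, cols)
    for epoch in range(epochs):
        frac = epoch / epochs
        rate = rate0 * (1.0 - frac)             # learning rate shrinks
        radius = radius0 * (1.0 - frac) + 0.01  # neighborhood shrinks
        for sample in rng.permutation(data):    # sample order is irrelevant
            d = np.linalg.norm(neurons - sample, axis=1)
            w = int(np.argmin(d))               # competitive: winning neuron
            # cooperative: neighbors move too, less the farther away they are
            gdist = np.linalg.norm(grid - grid[w], axis=1)
            h = np.exp(-gdist**2 / (2.0 * radius**2))
            neurons += rate * h[:, None] * (sample - neurons)
    return neurons
```

An 8 x 8 grid trained for 100 epochs, as in the salt dome example below, would call `train_som(data)` with the defaults; the decay schedule plays the role of the per-epoch reductions described above.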

As demonstrated in Figure 1, natural clustering of countries with like quality of life arises from both competitive and cooperative learning. But one may ask: how is SOM learning unsupervised when the SOM map displays country labels? The answer is that in the steps just described, there is no need to order the sequence of samples in the SOM learning process. The Ethiopia sample may be processed between the samples for Canada and the USA with no effect on the outcome; the sample order of countries may be scrambled randomly.


Figure 2: Classification of quality of life data


Figure 3: Gulf of Mexico Salt Dome

However, in Kohonen’s analysis of the World Bank data, the names of the countries are known. When the SOM learning process is completed, the neuron which is closest to each country sample is labeled with the country label, as shown in Figure 1. The neuron colors are arbitrary. Figure 2 is a world map in which each country is colored with the color scale used in Figure 1. Countries with similar quality of life are therefore colored similarly. Several countries which did not contribute data for the report are colored gray (Russia, Iceland, Cuba and several others). Figure 2 illustrates how the results of neural network analysis are used to classify the data. We shall see in the next section how SOM analysis and classification is an important addition to seismic interpretation.

Gulf of Mexico Salt Dome Survey

A SOM analysis was conducted on a 3D survey in the Gulf of Mexico provided by FairfieldNodal. See [4] for a description of SOM theory and a discussion of the processing steps. In particular, the introduction of a so-called curvature measure and the harvesting process are particularly relevant. Figure 3 is a vertical amplitude section across the center of the salt. Figure 4 shows the SOM analysis of 13 attributes across the same location. The SOM map is a 2D colorbar based on an 8 x 8 hexagonal grid. There are 100 epochs in the present analysis. It is readily apparent that the SOM classification is tracking seismic reflections.


Figure 4: SOM classification and map. Red horizontal line marks the time of Figure 5.


Figure 5: Time slice. Red line marks the location of Figure 4.

Shown in Figure 4 are white portions in which data have been “declassified”, a concept which we now explain. After the SOM analysis is completed, every sample in the survey is associated with a winning neuron. This implies that every neuron is associated with a given set of samples.

For any particular neuron, some samples are nearby in attribute space and others are far away. This means that there is a statistical population of distances on which to declassify what we shall call “outliers”. When a neuron is near a data sample, the probability that the sample is correctly classified is high. If a neuron and sample coincide, the probability is 100%. In Figure 4, those samples for which the probability is less than 10% are not assigned any classification. We identify such outliers as SOM anomalies. SOM anomalies are scattered about the section, with several which are larger and more compact. The horizontal red line marks the time of the time slice shown in Figure 5.
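One way to sketch this declassification step: assign every sample to its winning neuron, then use each neuron’s own population of winner distances to flag the far tail as outliers. The percentile test below is a simple stand-in for the probability estimate, which the article does not spell out; the 10% cutoff matches the text.

```python
import numpy as np

def classify_with_declassification(samples, neurons, cutoff=0.10):
    """Assign each sample to its winning (nearest) neuron, then declassify
    samples in the far tail of that neuron's own distance population.
    The quantile test is a stand-in for the probability estimate."""
    # distance from every sample to every neuron
    d = np.linalg.norm(samples[:, None, :] - neurons[None, :, :], axis=2)
    winners = np.argmin(d, axis=1)
    win_d = d[np.arange(len(samples)), winners]
    labels = winners.astype(float)
    for n in range(len(neurons)):
        mask = winners == n
        if not mask.any():
            continue
        # samples beyond the (1 - cutoff) quantile become outliers,
        # i.e. SOM anomalies left unclassified (white in Figure 4)
        thresh = np.quantile(win_d[mask], 1.0 - cutoff)
        labels[mask & (win_d > thresh)] = np.nan  # declassified
    return labels
```

Samples that coincide with their neuron always keep their classification; only the distant tail of each neuron’s population is declassified.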

The horizontal line in Figure 5 marks the location of the section in Figure 4. Notice that the white area to the right of the salt dome crossed by the red line in Figure 4 is the same white area to the right of the salt dome crossed by the red line in Figure 5. We note that the SOM anomaly is a discrete geobody which appears to be related to the upturned beds flanking the salt. By geobody, we mean a contiguous region of samples in the survey which share some characteristic.
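Extracting geobodies from the declassified samples amounts to connected-component labeling of the anomaly mask. A minimal 2D flood-fill sketch follows; a real survey would use 3D connectivity, and the small mask here is purely illustrative.

```python
import numpy as np
from collections import deque

def label_geobodies(anomaly_mask):
    """Label contiguous runs of anomalous samples (4-connected, 2D slice).
    Each connected region is one candidate geobody."""
    labels = np.zeros_like(anomaly_mask, dtype=int)
    current = 0
    rows, cols = anomaly_mask.shape
    for i in range(rows):
        for j in range(cols):
            if anomaly_mask[i, j] and labels[i, j] == 0:
                current += 1                     # start a new geobody
                queue = deque([(i, j)])
                labels[i, j] = current
                while queue:                     # breadth-first flood fill
                    r, c = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and anomaly_mask[rr, cc]
                                and labels[rr, cc] == 0):
                            labels[rr, cc] = current
                            queue.append((rr, cc))
    return labels, current

mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1],
                 [1, 0, 0, 0]], dtype=bool)
labels, n = label_geobodies(mask)
print(n)  # prints: 3
```

Each nonzero label in the output identifies one contiguous candidate geobody; isolated single samples get their own label and can be filtered out by size.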

Figure 6: Arbitrary line through 3D survey passing through a gas-show well (left) and a producing gas well (right)


Figure 7: SOM classification of 3D survey. Red horizontal line marks the time for the time slice of Figure 8

Wharton County Survey

A SOM analysis was also conducted on a 3D survey in Wharton County, Texas provided by Auburn Energy. Details of this study are found in [5]. An arbitrary line through the survey between two wells is shown in Figure 6. The well at the left presented a gas show while the well at the right developed a single-well gas field. Note the association of gas with faults F1, F2 and F3.

Figure 7 shows a portion of the results of a SOM classification run designed in the same way as in the previous example, namely by use of the same 13 attributes, an 8 x 8 hexagonal topology of neurons and a probability cut-off of 10%. Notice that this selection of attributes did not delineate the faults very well, yet SOM anomalies are found near both wells. The time slice of Figure 8 confirms that the SOM anomaly to the left of the gas-show well (left) is a geobody. A smaller second SOM anomaly is shown right of the F2 fault. Figure 9 is a time slice through the lower SOM anomaly near the gas well (right) of Figure 7. Notice that it too is a geobody. Prior to the present SOM analysis, an earlier thorough interpretation had been conducted with all available geophysical and geological data. A large set of attributes was used, including AVO gathers, offset stacks, advanced processing as well as some proprietary attributes. As a result, four wells were drilled. Two wells had no gas shows and are not marked here. No SOM anomaly was found at or near either of the two dry wells.


Figure 8: Time slice through the upper SOM anomaly of Figure 7. The red line marks the location of the arbitrary line.


Figure 9: Time slice through the lower SOM anomaly of Figure 7.

Further Work

The next step in this work is to gain a better understanding of how the patterns obtained with SOM analysis relate to subsurface geometry and rock properties. Research is currently underway in an attempt to answer questions of this kind. It is also important to further address the relationship between the multi-attribute seismic properties at the wells, which correlate to rock lithologies, and those away from the wells.

Conclusion

Computerized information management has become an indispensable tool for organizing and presenting geophysical and geological data for seismic interpretation. Databases provide the underlying environment to achieve this goal. Machine learning is another area in which computers may one day offer an indispensable tool as well. The point is particularly germane in light of successes achieved by machine learning in other fields. The engines to help us reach this objective could well be neural networks that adapt to the data and present its various structures in a way that is meaningful to the interpreter. We believe that neural networks offer many advantages which our industry is just now recognizing.

References

1. Kohonen, T., 2001, Self-Organizing Maps, 3rd edition: Springer

2. Poupon, M., Azbel, K. and Ingram, J., 1999, Integrating seismic facies and petro-acoustic modeling: World Oil Magazine, June 1999

3. http://www.cis.hut.fi/research/som-research/worldmap.html accessed 10 November, 2010

4. Smith, T. and Treitel, S., 2010, Self-organizing artificial neural nets for automatic anomaly identification: SEG International Convention (Denver) Extended Abstracts

5. Smith, T., 2010, Unsupervised neural networks – disruptive technology for seismic interpretation: Oil & Gas Journal, Oct. 4, 2010

THOMAS A. SMITH

Tom Smith received BS and MS degrees in Geology from Iowa State University. In 1971, he joined Chevron Geophysical as a processing geophysicist, and in 1980 he left to pursue doctoral studies in Geophysics at the University of Houston. Dr. Smith founded Seismic Micro-Technology in 1984 and there led the development of the KINGDOM software suite for seismic interpretation. In 2007, he sold the majority position in the company but retained a position on the Board of Directors. SMT is in the process of being acquired by IHS; on completion, the SMT Board will be dissolved. In 2008, he founded Geophysical Insights, where he and several other geophysicists are developing advanced technologies for fundamental geophysical problems.

The SEG awarded Tom the SEG Enterprise Award in 2000, and in 2010 the GSH awarded him its Honorary Membership Award. Iowa State University awarded him its Distinguished Alumnus Lecturer Award in 1996 and a Citation of Merit for National and International Recognition in 2002. Seismic Micro-Technology received a GSH Corporate Star Award in 2005. Dr. Smith has been a member of the SEG since 1967 and is also a member of the HGS, EAGE, SIPES, AAPG, GSH, Sigma Xi, SSA, and AGU.