Self-Organizing Neural Nets for Automatic Anomaly Identification

By Tom Smith, Geophysical Insights and Sven Treitel, TriDekon

Self-organizing maps are a practical way to identify natural clusters in multi-attribute seismic data. A curvature measure distinguishes neurons that have found natural clusters from those that have not. Harvesting is a methodology for measuring consistency and delivering the most consistent classification. Those portions of the classification with low probability are an indicator of multi-attribute anomalies which warrant further investigation.


Over the past several years, seismic data volumes have grown many times over. Often a prospect is evaluated with a primary 3D survey along with 5 to 25 attributes which serve both general and unique purposes. These are well laid out by Chopra and Marfurt (2007). Self-organizing maps (Kohonen, 2001), or SOM for short, are a type of unsupervised neural network that fits itself to the pattern of information in multi-dimensional data in an orderly fashion.

Multi-attributes and natural clusters

We organize a 3D seismic survey as a data volume regularly sampled in location X, Y and time T (or depth estimate Z). Each survey sample is represented by a number of attributes, f1, f2, …, fF. An individual sample is represented in bold as a vector with four subscripts. Together, the samples represent the survey space $\mathcal{K}$:

$$\mathcal{K} = \{\,\mathbf{a}_{c,d,e,f}\,\},$$

where the indices c, d, e and f represent time, trace, line number and attribute number, respectively. It is convenient to represent a sample fixed in space as a vector of F attributes in attribute space. Let this set of attribute samples {x1, x2, …, xi, …, xI}, drawn from $\mathcal{K}$, be indexed from 1 to I. The layout of survey space representing a 3D attribute space is illustrated in Figure 1. An attribute sample, marked as resting on the top of pins, consists of a vector of three attribute values at a fixed location.

Figure 1: Three 3D surveys are bricked together in a survey space comprising 3 attributes. Marked in the survey labeled Attribute 1 is a blue-green data sample. Connected to it are samples in the other attributes at the same position in the survey.

Figure 2: An example attribute space is marked here as Amplitude, Semblance and Frequency. The data sample in Figure 1 is located in the cluster of other blue-green data samples. Also shown are natural clusters of red samples (lower) and white samples (upper).

The sample of Figure 1 resides in attribute space as shown in Figure 2. Included in the illustration are other samples with similar properties. These natural clusters are regions of higher density which can constitute various seismic events with varying attribute characteristics. A natural cluster would register as a maximum in a probability distribution function. However, a large number of attributes entails a histogram of impractically high dimensionality.

Self-Organizing Map (SOM)

A SOM neuron lies in attribute space alongside the data samples. Therefore, a neuron is also an F-dimensional vector, noted here as w in bold. The neuron w lies in a topology called the neuron space. At this point in the discussion the topology is unspecified, so we use a single subscript j as a place marker for any number of dimensions. Whereas data samples remain fixed in attribute space, neurons are allowed to move freely in attribute space. They are then progressively drawn toward the data samples.

A neuron “learns” by adjusting its position within the attribute space as it is drawn toward nearby data points. Then let us define a self-organizing neural network as a collection of neurons {w1, w2, …, wj, …, wJ} with index j ranging from 1 through J. The neural network learns as its neurons adjust to natural clusters in attribute space. In general the problem is to discover and identify an unknown number of natural clusters distributed in attribute space, given the following information: I data samples in survey space, F attributes in attribute space, and J neurons in neuron space. The SOM was invented by T. Kohonen (Kohonen, 2001); it addresses such issues as a classic problem in statistical classification.

Figure 3: The winning neuron is the one which is closest to the selected data point.

In Figure 3 we place three neurons at arbitrary locations in attribute space from Figure 2. A simple process of learning proceeds as follows. Given the first sample, one computes the distance from the data sample to each of the 3 neurons and selects the closest one. We choose the Euclidean distance as our measure of distance.

The winning neuron with subscript k is defined by

$$k = \arg\min_{j} \|\mathbf{x} - \mathbf{w}_j\|,$$

where j ranges over all neurons. In Figure 3, the neuron on the left is identified as the winning neuron. The winning neuron advances toward the data sample along a vector which is a fraction of the distance between them. Then the second sample is selected and the process is repeated. In this example, the neuron marked as the winning neuron may end up in the leftmost cluster of the figure, the lowermost neuron may end up near the center of the lowermost cluster on the right, and the third neuron might end up in the cluster in the upper right of the figure. This type of learning is called competitive because only the winning neuron moves toward the data.
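
This winner-take-all step is easy to sketch in a few lines of NumPy; the function and variable names here are ours, purely illustrative:

```python
import numpy as np

def winning_neuron(x, w):
    """Index k of the neuron closest to sample x (Euclidean distance).

    x : (F,) attribute vector of one data sample
    w : (J, F) positions of the J neurons in attribute space
    """
    distances = np.linalg.norm(w - x, axis=1)  # distance to each neuron
    return int(np.argmin(distances))           # winner = smallest distance

# Three neurons in a 2-attribute space; the sample lies near the first one
w = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])
x = np.array([0.3, -0.2])
print(winning_neuron(x, w))  # -> 0
```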

A key point to note is that after one complete pass through the data samples, every sample has one and only one winning neuron, even though some neurons may not have moved toward any data point. A complete pass through the data is called an epoch. Many epochs may be required before the neurons have completed their clustering task.

The process just described is the basis of the SOM. A SOM neuron adjusts itself by the following recursion:

$$\mathbf{w}_j(n+1) = \mathbf{w}_j(n) + \eta(n)\,h(n)\,[\mathbf{x} - \mathbf{w}_j(n)],$$

where wj(n) is the attribute position of neuron j at time step n and k is the winning neuron number. The recursion proceeds from time step n to step n + 1. The update is in the direction toward x along the “error” direction x − wj(n). The amount of displacement is controlled by the learning control parameters, η and h, which are both scalars.

The η term grows smaller with each time step, so large neuron adjustments during early epochs smoothly taper to smaller adjustments later.

The h term embodies still another type of learning that is also part of the SOM learning process. It is a neighborhood function that decays with distance between neurons in neuron space, for example the Gaussian

$$h(n) = \exp\!\left(-\,\frac{d^2}{2\sigma^2(n)}\right),$$

where d is the Euclidean distance between neurons in the neuron space introduced in equation (2), and σ(n) is a neighborhood width that shrinks with each time step.
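
Putting the recursion together with the η and h controls, a single SOM update step might be sketched as follows; the exponential decay forms and constants are typical textbook choices, not necessarily those of our implementation:

```python
import numpy as np

def som_update(w, grid, x, n, eta0=0.5, tau=50.0, sigma0=2.0):
    """One SOM update for data sample x at time step n (returns updated copy).

    w    : (J, F) neuron positions in attribute space
    grid : (J, 2) fixed neuron coordinates in neuron space
    """
    k = int(np.argmin(np.linalg.norm(w - x, axis=1)))   # winning neuron
    eta = eta0 * np.exp(-n / tau)                       # learning rate, shrinks with n
    sigma = sigma0 * np.exp(-n / tau)                   # neighborhood width, shrinks with n
    d = np.linalg.norm(grid - grid[k], axis=1)          # distances in neuron space
    h = np.exp(-d**2 / (2.0 * sigma**2))                # Gaussian neighborhood term
    return w + eta * h[:, None] * (x - w)               # move winner and neighbors toward x
```

Note that the winner moves the full fraction η of its error, while neighbors move a smaller fraction η·h of theirs.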


Figure 4: An assortment of neuron network topologies is shown here. Let the neuron position be r(p), with the distance between a neuron and its nearest neighbor set to 1 unit.

In equation (7), y is a positional vector in the neuron topology. Several options for neuron topology in neuron space are shown in Figure 4.

From equation (6) we observe that not only is the winning neuron moving toward a data point; other neurons around the winning neuron move as well. These neurons constitute the winning neuron's neighborhood.

In the hexagonal topology of Figure 4, note that the marked neuron has 6 nearest neighbors. If this neuron is selected as a winning neuron, equations (6) and (7) indicate that the 6 nearest neurons move toward the data sample by a like amount. More distant neurons from the winning neuron move a lesser amount.

Neighborhoods of neuron movement constitute cooperative learning. For a 2D neuron space, hexagonal topology offers the maximum number of similarly distant neurons. Here we have chosen a hexagonal neural network because it maximizes cooperative learning in 2D. The SOM embodies both competitive and cooperative learning rules (Haykin, 2009).
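
The 6-neighbor property of the hexagonal grid can be checked numerically; the axial layout below is one standard construction, assumed here for illustration:

```python
import numpy as np

def hex_positions(rows, cols):
    """Unit-spaced hexagonal grid: odd rows shifted half a unit,
    rows separated by sqrt(3)/2, so nearest neighbors sit 1 unit apart."""
    pts = [(c + 0.5 * (r % 2), r * np.sqrt(3) / 2)
           for r in range(rows) for c in range(cols)]
    return np.array(pts)

pts = hex_positions(8, 8)               # 8 x 8 network as used later in the text
center = pts[4 * 8 + 4]                 # an interior neuron (row 4, col 4)
d = np.linalg.norm(pts - center, axis=1)
print(int(np.sum(np.isclose(d, 1.0))))  # -> 6 nearest neighbors
```

A rectangular grid built the same way would report only 4 neighbors at unit distance.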

Curvature measure

To search for natural clusters while avoiding the curse of dimensionality (Bishop, 2007), we allow the SOM to find them for us. However, there is no assurance that at the end of such a SOM analysis the neurons have come to rest at or near the centers of natural clusters. To address this issue, we turn to the simple definition of a maximum. A natural cluster is by definition a denser region of attribute space. It is identified as a maximum in a probability distribution function through analysis of a histogram. In 1D the histogram has a maximum; in 2D the histogram has a maximum in 2 orthogonal directions, and so on.

In F-dimensional attribute space, a natural cluster is revealed by a peak in the probability distribution function of all F attributes. Recall that at the end of an epoch there is a one-to-one relationship between a data sample and its winning neuron. That implies that to every winning neuron there corresponds a set of one or more data samples.

Then for some winning neuron with index k, there exists a set of data samples x for which

$$X_k = \{\,\mathbf{x} : \|\mathbf{x} - \mathbf{w}_k\| \le \|\mathbf{x} - \mathbf{w}_j\| \ \text{for all } j\,\},$$

where the samples of X_k are drawn from the survey space for that winning neuron. Some winning neurons in equation (9) have a small collection of x samples while others will have a larger collection.

In the sample set of a winning neuron wk, we can compute a histogram for each of the F attributes. If there is a peak in the histogram, we have found a local maximum of the probability distribution in that particular dimension, and so score this attribute as a success. We count all attributes in this way (1 for success, 0 for failure) and divide the result by the number of attributes. The resulting curvature measure reflects density and lies in the range [0, 1]. Each neuron, and also each attribute, has a curvature measure.
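
As an illustrative sketch of this counting scheme (the peak test here, an interior histogram bin strictly exceeding both neighbors, is our stand-in for the exact criterion):

```python
import numpy as np

def curvature_measure(samples, bins=10):
    """Fraction of attributes whose histogram has an interior peak.

    samples : (N, F) attribute vectors captured by one winning neuron.
    A 'peak' here is an interior bin strictly exceeding both neighbors --
    an illustrative criterion, not necessarily the authors' exact test.
    """
    F = samples.shape[1]
    successes = 0
    for f in range(F):
        counts, _ = np.histogram(samples[:, f], bins=bins)
        peak = any(counts[i] > counts[i - 1] and counts[i] > counts[i + 1]
                   for i in range(1, bins - 1))
        successes += int(peak)                 # 1 for success, 0 for failure
    return successes / F                       # curvature measure in [0, 1]
```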

Harvesting and consistency

A harvesting process consists of three steps. First, unsupervised neural network analyses are run on independent sets of data drawn from a 2D or 3D survey. A rule is then used to decide which candidate is the best solution. Finally, the set of neurons of the best solution is used to classify the entire survey.

We have conducted a series of SOM analysis and classification steps for the Stratton Field 3D seismic survey (provided courtesy of the Bureau of Economic Geology, The University of Texas). A time window of 1.2 to 1.8s was selected for SOM analysis with an 8 x 8 hexagonal network and 100 learning epochs. We measured performance by the standard deviation of the error between data samples and their winning neurons; the reduction in this error was typically 35%. A separate SOM analysis was conducted on each of the 100 lines in order to assess consistency of results. The SOM results were highly consistent, with the final standard deviation of error varying by only 1.5% of the mean. The rule used here is to select the SOM solution of the line which best fits its data, that is, the one with the smallest error of fit.

Figure 5: SOM classification of Line 53. Timing lines at 1.3 and 1.6s are marked by arrows.

Figure 6: Hexagonal colorbar for Figure 5.

The SOM results form a new attribute volume of winning neuron classifications. Every sample in this new volume is the index of the winning neuron number for a given data sample. The SOM analysis was derived from 50 attributes which included basic trace attributes, geometric attributes and spectral decompositions. A solution line is shown in Figure 5.

First observe that the SOM results more or less track geology, as shown by flat reflections near 1.3s. At the well are shown SP and GR well log curves with lithologic shading, formation top picks, as well as a synthetic seismogram (white WTVA trace along the borehole). A reflector above 1.6s is a second key marker. Also notice the green patches near 1.5s; these were identified as patches with lateral consistency, for which we have no geologic interpretation at this time. In Figure 6 we show the colorbar patterned after the neuron topology and colored to assist in identification of regions of neuron clusters.


Figure 7: Vertical section through a 3D survey of a Gulf of Mexico salt dome.

Figure 8: SOM classification with p > 0.1, corresponding to the same slice as Figure 7.

Figure 9: Time slice for Figure 8.

Curvature measures (CM) of the neurons fell in the range 0.72 to 0.9, except for one neuron with a curvature measure of 0.26. This winning neuron turned out to be supported by only 8 samples, while the others were supported by 17 to 139.

We have investigated the per-attribute curvature measure and found 3 poorly performing attributes (CM < 0.2), 4 attributes which we consider questionable (CM ≈ 0.5), and 43 strong attributes (CM > 0.8).

Automatic anomaly identification

Equation (9) is the basis on which we classify the quality of classification samples. For any winning neuron, we select samples whose probability p exceeds a threshold pmin, where p (equation 10) is based on the distance between the winning neuron and its samples. Those samples that are near their winning neuron have higher probability.
The classification of Figure 8, covering the same section as Figure 7, results from a SOM analysis with an 8 x 8 neuron network, F = 13 attributes and 100 epochs. Those samples whose probabilities lie below pmin are anomalous and are assigned a white color. The rest of the classification uses the colorbar of Figure 6.

The red lines of Figures 8 and 9 register their views so the white area anomaly on the right side of the salt in Figure 8 has an areal extent that appears to be bounded by the salt in Figure 9. We are not suggesting that all anomalies are of geologic interest. However, those of sufficient size are worthy of investigation for geologic merit.
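
A hedged sketch of this declassification step: since the exact form of equation 10 is not reproduced here, the probability below is an empirical stand-in based on each neuron's own population of distances.

```python
import numpy as np

def flag_anomalies(dist, winners, p_min=0.1):
    """Flag low-probability samples as multi-attribute anomalies.

    dist    : (N,) distance from each sample to its winning neuron
    winners : (N,) index of the winning neuron for each sample
    p is an empirical stand-in for equation 10: the fraction of a
    neuron's own samples lying farther away, so near samples get
    probabilities close to 1 and distant outliers close to 0.
    """
    p = np.empty_like(dist, dtype=float)
    for k in np.unique(winners):
        idx = winners == k
        d = dist[idx]
        p[idx] = np.array([(d > di).mean() for di in d])
    return p < p_min       # True = anomalous; such samples are plotted white
```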


This presentation investigates the areas within a 3D survey where multiple attributes are of unusual character. Self-organizing maps assist with an automatic identification process. While these results are encouraging, it is readily apparent that additional investigations must be made into appropriate learning controls. Various structural and stratigraphic questions which might be posed to SOM analysis will require careful selection of appropriate attributes. It is also clear that calibration of SOM analyses with borehole information offers an attractive area of investigation.


Tury Taner was our inspiration and a pioneer in this area.

THOMAS A. SMITH received BS and MS degrees in Geology from Iowa State University. In 1971, he joined Chevron Geophysical as a processing geophysicist. In 1980 he left to pursue doctoral studies in Geophysics at the University of Houston. Dr. Smith founded Seismic Micro-Technology in 1984 and there led the development of the KINGDOM software suite for seismic interpretation. In 2007, he sold the majority position in the company but retained a position on the Board of Directors. SMT is in the process of being acquired by IHS; on completion, the SMT Board will be dissolved. In 2008, he founded Geophysical Insights, where he and several other geophysicists are developing advanced technologies for fundamental geophysical problems.

The SEG awarded Tom the SEG Enterprise Award in 2000, and in 2010 the GSH awarded him the Honorary Membership Award. Iowa State University awarded him the Distinguished Alumnus Lecturer Award in 1996 and a Citation of Merit for National and International Recognition in 2002. Seismic Micro-Technology received a GSH Corporate Star Award in 2005. Dr. Smith has been a member of the SEG since 1967 and is also a member of the HGS, EAGE, SIPES, AAPG, GSH, Sigma Xi, SSA, and AGU.

Introduction to Self-Organizing Maps in Multi-Attribute Seismic Data

By Tom Smith and Sven Treitel
Published with permission: Geophysical Society of Houston
January 2011

An unsupervised neural network searches multi-dimensional data for natural clusters, and its neurons are attracted to areas of higher information density. SOM analysis relates seismic patterns to subsurface geometry and rock properties, and connects multi-attribute seismic properties at the wells, which correlate to rock lithologies, with those away from the wells.

Computers that think like a human are well beyond our current capabilities but computers that learn are not. They are around us every day. Pocket cameras identify faces in a live digital image and automatically adjust the focus when the shutter is pressed. Post offices scan the mail and route the documents appropriately. Offices scan documents as bitmaps and convert them to text documents for editing. Web documents are indexed for content, while search engines deliver these documents through key word searches in unprecedented detail and with extraordinary speed.

We have seen a tremendous growth in the size of 3D survey seismic data volumes, and it is common today for both 2D and 3D seismic surveys to be integrated into the interpretation. Moreover, the primary survey of reflection amplitude is interpreted along with derived surveys of perhaps 5 to 25 attributes. The attributes of both 2D and 3D surveys represent multidimensional data. The problem is to keep all this data in one’s head while trying to find oil and gas. Much interpretation effort is devoted to building a geologic framework from the seismic data, identifying key reflecting intervals where oil and gas might be found and finding an interesting anomaly. At this point attributes are the framework in which we evaluate the anomaly. But this is the point where we can easily mislead ourselves. It is quite easy to build a plausible model for a prospect using only those attributes which fit our model and ignore the rest. This is bad enough, but there is even a greater crime. Lurking in the data may be combinations of attributes which are legitimate anomalies but which are never found at all.

Learning machines are artificial neural networks which can construct an experience data base from multidimensional data such as multi-attribute seismic surveys. There are two main classes of neural networks – supervised and unsupervised. With supervised neural networks, a network classifies data into groups sharing given characteristics that have already been identified by an expert. After careful processing, synthetic seismograms prepared at well sites serve as the expert's data, and the neural network is trained to classify these data at the wells. After training, the neural network roams the seismic data to classify areas which might be similar in some given sense to models developed at the well locations.

Alternatively, an unsupervised neural network searches multidimensional data for natural clusters. Neurons are attracted to areas of higher information density. The most popular unsupervised neural network, the self-organizing map (SOM), was introduced by Teuvo Kohonen in 1981 [1]. SOM was successfully applied to seismic facies analysis by Poupon, Azbel and Ingram in 1999 (Stratimagic) [2]. We preface recent efforts to bring SOM to bear on multi-attribute seismic interpretation with a simple SOM example used by Kohonen to illustrate some of its basic features.

Quality of Life

An early problem considered by Kohonen and his research team was to identify natural clusters as they relate to quality of life factors, based on World Bank data. The study included 126 countries and considered a total of 39 measurements describing the level of poverty found in each country. While the data matrix was somewhat limited by incomplete reporting, the SOM results are still quite interesting. Shown in Figure 1 is the SOM which resulted from the learning process. Canada (CAN) and the United States of America (USA) clustered at the same neuron location, shown at the 6th row of the 1st column in the figure. Ethiopia (ETH) is found on the right edge at column 13, row 5. Other country abbreviations and further details are in [3].



Figure 1: Self-organizing map (SOM) of World Bank quality of life data.

The reason that countries of similar quality of life cluster in similar neuron areas has to do with learning principles that are built into SOM. In this study, every country is a sample and that sample is a column vector of 39 elements. In other words, there are 39 attributes in this problem. Countries of similar characteristics (a natural cluster) plot in about the same place in attribute space. At the beginning of the learning process, neurons of 39 dimensions are assigned random numbers. During the learning process, the neurons move toward natural clusters. The data points never move. The mathematics of SOM learning define both competitive and cooperative learning. For a given data sample, the Euclidean distance is computed between the sample and each neuron. The neuron which is “nearest” to the data sample is declared the “winning” neuron and allowed to advance a fraction of the distance toward the data sample. The neuron movement is the essence of machine learning. Competitive learning is embodied in the strategy that the winning neuron moves toward the data sample.

Cooperative learning is related to the layout of the neural network. In SOM learning, the neural network is commonly a 2D hexagonal grid. This constitutes the neuron topology; the reason for choosing a hexagonal grid rather than a rectangular grid will be apparent shortly. When a winning neuron has been found, cooperative learning takes place because the neurons in the vicinity of the winning neuron (the neighborhood) are also allowed to move toward the data sample, but by an amount less than the winning neuron. In fact, the further a neighbor is from the winning neuron, the less it is allowed to move. Hexagonal grids move more neurons than rectangular grids because they have 6 points of contact with their immediate neighbors instead of 4. Learning continues as winning and neighborhood neurons move toward each sample in turn until the entire set of samples has been processed. At this point, one epoch of learning has been completed. The event is marked as one time step in the learning process. For each subsequent epoch, the distance a winning neuron may move toward a data sample is reduced slightly, and the size of the neighborhood is also reduced. The learning process terminates when there is no further appreciable movement of the neurons. Often the number of such epochs can be in the hundreds or thousands.
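
The learning loop just described, with shuffled sample order and shrinking learning controls, can be sketched as follows (the linear decay schedules and all names are our illustrative choices):

```python
import numpy as np

def train_som(samples, grid, epochs=100, eta0=0.5, sigma0=2.0, seed=0):
    """Multi-epoch SOM learning: shuffled samples, shrinking controls."""
    rng = np.random.default_rng(seed)
    w = rng.random((grid.shape[0], samples.shape[1]))     # random initial neurons
    for n in range(epochs):
        eta = eta0 * (1.0 - n / epochs)                   # step size shrinks per epoch
        sigma = max(sigma0 * (1.0 - n / epochs), 0.25)    # neighborhood shrinks too
        for x in rng.permutation(samples):                # sample order scrambled
            k = np.argmin(np.linalg.norm(w - x, axis=1))  # winning neuron
            d = np.linalg.norm(grid - grid[k], axis=1)    # neuron-space distances
            h = np.exp(-d**2 / (2 * sigma**2))            # cooperative neighborhood
            w += eta * h[:, None] * (x - w)               # winner and neighbors move
    return w
```

Because the samples are shuffled each epoch, their order has no bearing on the outcome, which is why the learning is unsupervised.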

As demonstrated in Figure 1, natural clustering of like-quality of life countries arises from both competitive and cooperative learning. But one may ask how is SOM learning unsupervised when the SOM map displays country labels? The answer is that in the steps just described, there is no need to order the sequence of samples in the SOM learning process. The Ethiopia sample may be processed between samples for Canada and USA with no effect on the outcome. The sample order of countries may be scrambled randomly.


Figure 2: Classification of quality of life data


Figure 3: Gulf of Mexico Salt Dome

In Kohonen’s analysis of the World Bank data, the names of countries are known, however. When the SOM learning process is completed, the neuron which is closest to each country sample is labeled by the country label as shown in Figure 1. The neuron colors are arbitrary. Figure 2 is a world map in which each country is colored with the color scale used in Figure 1. Countries with similar quality of life are therefore colored similarly. Several countries which did not contribute data for the report are colored gray (Russia, Iceland, Cuba and several others). Figure 2 illustrates how the results of neural network analysis are used to classify the data. We shall see in the next section how SOM analysis and classification is an important addition to seismic interpretation.

Gulf of Mexico Salt Dome Survey

A SOM analysis was conducted on a 3D survey in the Gulf of Mexico provided by FairfieldNodal. See [4] for a description of SOM theory and a discussion of the processing steps; the introduction of a so-called curvature measure and the harvesting process are particularly relevant. Figure 3 is a vertical amplitude section across the center of the salt. Figure 4 shows the SOM analysis of 13 attributes across the same location. The SOM map is a 2D colorbar based on an 8 x 8 hexagonal grid, and the present analysis used 100 epochs. It is readily apparent that the SOM classification is tracking seismic reflections.

Figure 4: SOM classification and map. The red horizontal line marks the time of Figure 5.

Figure 5: Time slice. The red line marks the location of Figure 4.

Shown in Figure 4 are white portions in which data have been “declassified”, a concept which we now explain. After the SOM analysis is completed, every sample in the survey is associated with a winning neuron. This implies that every neuron is associated with a given set of samples.

For any particular neuron, some samples are nearby in attribute space and others are far away. This means that there is a statistical population of distances on which to declassify what we shall call “outliers”. When a neuron is near a data sample, the probability that the sample is correctly classified is high. If a neuron and sample coincide, the probability is 100%. In Figure 4, those samples for which the probability is less than 10% are not assigned any classification. We identify such outliers as SOM anomalies. SOM anomalies are scattered about the section, with several which are larger and more compact. The horizontal red line marks the time of the time slice shown in Figure 5.

The horizontal line in Figure 5 marks the location of the section in Figure 4. Notice that the white area to the right of the salt dome crossed by the red line in Figure 4 is identified as the same white area right of the salt dome and crossed by the red line in Figure 5. We note that the SOM anomaly is a discrete geobody which appears to be related to the upturned beds flanking the salt. By geobody, we mean a contiguous region of samples in the survey which share some characteristic.

Figure 6: Arbitrary line through the 3D survey passing through a gas-show well (left) and a producing gas well (right).

Figure 7: SOM classification of the 3D survey. The red horizontal line marks the time for the time slice of Figure 8.

Wharton County Survey

A SOM analysis was also conducted on a 3D survey in Wharton County, Texas provided by Auburn Energy. Details of this study are found in [5]. An arbitrary line through the survey between two wells is shown in Figure 6.
The well at the left presented a gas show while the well at the right developed a single-well gas field. Note the association of gas with faults F1, F2 and F3.

Figure 7 shows a portion of the results of a SOM classification run designed in the same way as in the previous example, namely by use of the same 13 attributes, an 8 x 8 hexagonal topology of neurons and a probability cut-off of 10%. Notice that this selection of attributes did not delineate the faults very well, yet SOM anomalies are found near both wells. The time slice of Figure 8 confirms that the SOM anomaly to the left of the gas-show well (left) is a geobody. A smaller second SOM anomaly is shown right of the F2 fault. Figure 9 is a time slice through the lower SOM anomaly near the gas well (right) of Figure 7. Notice that it too is a geobody. Prior to the present SOM analysis, an earlier thorough interpretation had been conducted with all available geophysical and geological data. A large set of attributes was used, including AVO gathers, offset stacks, advanced processing as well as some proprietary attributes. As a result, four wells were drilled. Two wells had no gas shows and are not marked here. No SOM anomaly was found at or near either of the two dry wells.

Figure 8: Time slice through the upper SOM anomaly of Figure 7. The red line marks the location of the arbitrary line.

Figure 9: Time slice through the lower SOM anomaly of Figure 7.

Further Work

The next step in this work is to gain a better understanding of how the patterns obtained with SOM analysis relate to subsurface geometry and its rock properties. Research is currently underway to answer questions of this kind. It is also important to further address the relationship between multi-attribute seismic properties at the wells, which correlate to rock lithologies, and those away from the wells.


Computerized information management has become an indispensable tool for organizing and presenting geophysical and geological data for seismic interpretation. Databases provide the underlying environment to achieve this goal. Machine learning is another area in which computers may one day offer an indispensable tool as well. The point is particularly germane in light of successes achieved by machine learning in other fields. The engines to help us reach this objective could well be neural networks that adapt to the data and present its various structures in a way that is meaningful to the interpreter. We believe that neural networks offer many advantages which our industry is just now recognizing.


1. Kohonen, T., 2001, Self-Organizing Maps, 3rd edition: Springer

2. Poupon, M., Azbel K. and Ingram, J., 1999, Integrating seismic facies and petro-acoustic modeling: World Oil Magazine, June, 1999

3. accessed 10 November, 2010

4. Smith, T. and Treitel, S., 2010, Self-organizing artificial neural nets for automatic anomaly identification: SEG International Convention (Denver) Extended Abstracts

5. Smith, T., 2010, Unsupervised neural networks – disruptive technology for seismic interpretation: Oil & Gas Journal, Oct. 4, 2010
