Solving Interpretation Problems using Machine Learning on Multi-Attribute, Sample-Based Seismic Data

Presented by Deborah Sacrey, Owner of Auburn Energy
Challenges addressed in this webinar include:

  • Reducing risk in drilling marginal or dry holes
  • Interpretation of thin bedded reservoirs far below conventional seismic tuning
  • How to better understand reservoir characteristics
  • Interpretation of reservoirs in deep, pressured environments
  • Using the classification process to help with correlations in difficult stratigraphic or structural environments

The webinar is open to those interested in learning more about how the application of machine learning is key to seismic interpretation.

 
Deborah Sacrey

Owner

Auburn Energy

Deborah Sacrey is a geologist/geophysicist with 41 years of oil and gas exploration experience in the Texas, Louisiana Gulf Coast, and Mid-Continent areas of the US. Deborah specializes in 2D and 3D interpretation for clients in the US and internationally.

She received her degree in Geology from the University of Oklahoma in 1976 and began her career with Gulf Oil in Oklahoma City. She started Auburn Energy in 1990 and built her first geophysical workstation using the Kingdom software in 1996. Deborah then worked closely with SMT (now part of IHS) for 18 years developing and testing Kingdom. For the past eight years, she has been part of a team to study and bring the power of multi-attribute neural analysis of seismic data to the geoscience community, guided by Dr. Tom Smith, founder of SMT. Deborah has become an expert in the use of the Paradise® software and has over five discoveries for clients using the technology.

Deborah is very active in the geological community. She is past national President of SIPES (Society of Independent Professional Earth Scientists), past President of the Division of Professional Affairs of AAPG (American Association of Petroleum Geologists), Past Treasurer of AAPG and Past President of the Houston Geological Society. She is currently the incoming President of the Gulf Coast Association of Geological Societies (GCAGS) and is a member of the GCAGS representation on the AAPG Advisory Council. Deborah is also a DPA Certified Petroleum Geologist #4014 and DPA Certified Petroleum Geophysicist #2. She is active in the Houston Geological Society, South Texas Geological Society and the Oklahoma City Geological Society (OCGS).

Machine Learning Revolutionizing Seismic Interpretation


By Thomas A. Smith and Kurt J. Marfurt
Published with permission: The American Oil & Gas Reporter
July 2017

The science of petroleum geophysics is changing, driven by the nature of the technical and business demands facing geoscientists as oil and gas activity pivots toward a new phase of unconventional reservoir development in an economic environment that rewards efficiency and risk mitigation. At the same time, fast-evolving technologies such as machine learning and multiattribute data analysis are introducing powerful new capabilities in investigating and interpreting the seismic record.

Through it all, however, the core mission of the interpreter remains the same: extracting insights from seismic data to describe the subsurface and predict geology between existing well locations, whether those locations are separated by tens of feet on the same horizontal well pad or tens of miles in adjacent deepwater blocks. Distilled to its fundamental level, the job of the data interpreter is to determine where (and where not) to drill and complete a well. Getting that million-dollar question right gives oil and gas companies a competitive edge, and the ability to arrive at the right answer in the timeliest manner possible is invariably the force that pushes technological boundaries in seismic imaging and interpretation.

The state of the art in seismic interpretation is being redefined partly by the volume and richness of high-density, full-azimuth 3-D surveying methods, and partly by processing techniques such as reverse time migration and anisotropic tomography. Combined, these solutions bring a resolution and clarity to processed subsurface images that simply are unachievable with conventional imaging methods. In data interpretation, analytical tools such as machine learning, pattern recognition, multiattribute analysis and self-organizing maps are enhancing the interpreter’s ability to classify, model and manipulate data in multidimensional space.

As crucial as these technological advancements are, however, it is clear that the future of petroleum geophysics is being shaped largely by the demands of North American unconventional resource plays. Optimizing the economic performance of tight oil and shale gas projects is not only shaping the development of geophysical technology, but also dictating the skill sets that the next generation of successful interpreters must possess. Resource plays shift the focus of geophysics to reservoir development, challenging the relevance of seismic-based methods in an engineering-dominated business environment.
Engineering holds the purse strings in resource plays, and the problems geoscientists are asked to solve with 3-D seismic are very different from those of conventional exploration geophysics. Identifying shallow drilling hazards overlying a targeted source rock, mapping the orientation of natural fractures or faults, and characterizing changes in stress profiles or rock properties are related as much to engineering as to geophysics.

Given the requirements of unconventional plays, there are four practical steps to creating value with seismic analysis methods.

The first and most obvious step is for oil and gas companies to acquire 3-D seismic and incorporate the data into their digital databases. Some operators active in unconventional plays fully embrace 3-D technology, while others apply it only selectively. If interpreters do not have access to high-quality data and the tools to evaluate that information, they cannot possibly add value to the company’s bottom line.

The second step is to break the conventional resolution barrier on the seismic reflection wavelet, the so-called quarter-wavelength limit. This barrier arises from the overlapping reflections of seismic energy from the top and bottom of a layer, and depends on layer velocity, layer thickness and wavelet frequencies. Below the quarter-wavelength, the wavelets overlap in time and interfere with one another, making it impossible by conventional means to resolve the separate events.

The third step is correlating seismic reflection data, including compressional wave energy, shear wave energy and density, to quantitative rock property and geomechanical information from geology and petrophysics. Connecting seismic data to the variety of very detailed information available at the borehole lowers risk and provides a clearer picture of the subsurface between wells, which is fundamentally the purpose of acquiring a 3-D survey.

The final step is conducting a broad, multiscaled analysis that integrates all available data into a single rock volume encompassing geophysical, geologic and petrophysical features. Whether in an unconventional shale or a conventional carbonate, bringing all the data together in a unified rock volume resolves issues in subsurface modeling and enables more realistic interpretations of geological characteristics.
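
The quarter-wavelength tuning limit in the second step can be computed directly from layer velocity and the dominant wavelet frequency. A minimal sketch (the velocity and frequency values below are illustrative, not from the article):

```python
# Quarter-wavelength tuning thickness: below this bed thickness, reflections
# from the top and bottom of a layer overlap and interfere.
def tuning_thickness(velocity_ft_s, dominant_freq_hz):
    wavelength = velocity_ft_s / dominant_freq_hz  # ft
    return wavelength / 4.0

# A hypothetical 10,000 ft/s layer imaged with a 40 Hz dominant frequency:
print(tuning_thickness(10_000, 40))  # 62.5 ft
```

Beds thinner than this cannot be resolved by conventional means, which is why sample-based multi-attribute classification below tuning is of such interest.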

The Role of Technology

Every company faces pressures to economize, and the pressures to run an efficient business only ratchet up at lower commodity prices. The business challenges also relate to the personnel side of the equation, and that should never be dismissed. Companies are trying to bridge the gap between older geoscientists who seemingly know everything and the ones entering the business who have little experience but benefit from mentoring, education and training.

One potential solution is using information technology to capture best practices across a business unit, and then keeping a scorecard of those practices in a database that can offer expert recommendations based on past experience. Keylogger applications can help by tracking how experienced geoscientists use data and tools in their day-to-day workflows. However, there is no good substitute for a seasoned interpreter. Technologies such as machine learning and pattern recognition have game-changing possibilities in statistical analysis, but as petroleum geologist Wallace Pratt pointed out in the 1950s, oil is first found in the human mind. The role of computing technology is to augment, not replace, the interpreter’s creativity and intuitive reasoning (i.e., the “geopsychology” of interpretation).

Delivering Value

A self-organizing map (SOM) is a neural network-based machine learning process that is applied simultaneously to multiple seismic attribute volumes. The example shown here is a class II amplitude-variation-with-offset response from the top of gas sands, the specific conventional geological setting where most direct hydrocarbon indicator characteristics are found. The top image shows a contoured time structure map from the top of the producing reservoir, overlain by amplitudes in color. The bottom image is a SOM classification with low probability (less than 1 percent) denoted by white areas; the yellow line is the downdip edge of the high-amplitude zone designated in the top image.

Seismic data interpreters need to make the estimates they derive from geophysical data more quantitative and more relatable to the petroleum engineer. Whether it is impedance inversion or anisotropic velocity modeling, the predicted results must add some measure of accuracy and risk estimation. It is not enough to simply predict higher porosity at a certain reservoir depth. To be of consequence to engineering workflows, porosity predictions must be reliably delivered within a range of a few percentage points, at depths estimated on a scale of plus or minus a specific number of feet.

Class II amplitude-variation-with-offset response from the top of gas sand.

Machine learning techniques apply statistics-based algorithms that learn iteratively from the data and adapt independently to produce repeatable results. The goal is to address the big data problem of interpreting massive volumes of data while helping the interpreter better understand the relationships among the different types of attributes contained within 3-D data. The technology classifies attributes by breaking data into what computer scientists call “objects” to accelerate the evaluation of large datasets and allow the interpreter to reach conclusions much faster.

Some computer scientists believe “deep learning” concepts can be applied directly to 3-D prestack seismic data volumes, with an algorithm figuring out the relations between seismic amplitude data patterns and the desired property of interest. While Amazon, Alphabet and others are successfully using deep learning in marketing and other functions, those applications have access to millions of data interactions a day. Given the far smaller number of seismic interpreters in the world, and the much greater sensitivity of 3-D data volumes, there may never be sufficient access to training data to develop deep learning algorithms for 3-D interpretation. The concept of “shallow learning” mitigates this problem.
 
Stratigraphy above the Buda

Conventional amplitude seismic display from a northwest-to-southeast seismic section across a well location is contrasted with SOM results using multiple instantaneous attributes.

First, 3-D seismic data volumes are converted to well-established relations that represent waveform shape, continuity, orientation and response with offsets and azimuths that have proven relations (“attributes”) to porosity, thickness, brittleness, fractures and/or the presence of hydrocarbons. This greatly simplifies the problem, with the machine learning algorithms needing to find only the simpler (i.e., shallower) relations between the attributes and the properties of interest.

In resource plays, seismic data interpretations increasingly are based on statistical rather than deterministic predictions. In development projects with hundreds of wells within a 3-D seismic survey area, operators rely on the interpreter to identify where to drill and to predict how a well will complete and produce. Given the many known and unknown variables that can impact drilling, completion and production performance, the challenge lies in figuring out how to use statistical tools to apply data measurements from previous wells to estimate the performance of the next well drilled within the 3-D survey area. Therein lies the value proposition of any kind of science, geophysics included.

The value of applying machine learning-based interpretation boils down to one word: prediction. The goal is not to score 100 percent accuracy, but to enhance the predictions made from seismic analysis to avoid drilling uneconomic or underproductive wells. Avoiding investments in just a couple of bad wells can pay for all the geophysics needed to make those predictions. And because the statistical models are updated with new data as each well is drilled and completed, the results continually become more quantitative, with prediction accuracy improving over time.

New Functionalities

In terms of particular interpretation functionalities, three specific concepts are being developed around machine learning capabilities:

  • Evaluating multiple seismic attributes simultaneously using self-organizing maps (multiattribute analysis);
  • Relating in multidimensional space natural clusters or groupings of attributes that represent geologic information embedded in the data; and
  • Graphically representing the clustered information as geobodies to quantify the relative contributions of each attribute in a given seismic volume in a form that is intrinsic to geoscientific workflows.

A 3-D seismic volume contains numerous attributes, each expressed as a mathematical construct representing a class of data from simultaneous analysis. An individual class of data can be any measurable property that is used to identify geologic features, such as rock brittleness, total organic carbon or formation layering. Supported by machine learning and neural networks, multiattribute technology enhances the geoscientist’s ability to quickly investigate large data volumes and delineate anomalies for further analysis, locate fracture trends and sweet spots in shale plays, identify geologic and stratigraphic features, map subtle changes in facies at or even below conventional seismic resolution, and more. The key breakthrough is that the new technology works on machine learning analysis of multiattribute seismic samples.

While applied exclusively to seismic data at present, there are many types of attributes contained within geologic, petrophysical and engineering datasets. In fact, virtually any type of data that can be put into rows and columns in a spreadsheet is applicable to multiattribute analysis. Eventually, multiattribute analysis will incorporate information from different disciplines and allow all of it to be investigated within the same multidimensional space.

That leads to the second concept: using machine learning to organize and evaluate natural clusters of attribute classes. If an interpreter is analyzing eight attributes in an eight-dimensional space, the attributes can be grouped into natural clusters that populate that space. The third component is delivering the information found in the clusters in high-dimensionality space in a form that quantifies the relative contribution of the attributes to each class of data, such as simple geobodies displayed with a 2-D color index map.
This approach allows multiple attributes to be mapped over large areas to obtain a much more complete picture of the subsurface, and has demonstrated the ability to achieve resolution below conventional seismic tuning thickness. For example, in an application in the Eagle Ford Shale in South Texas, multiattribute analysis was able to match 24 classes of attributes within a 150-foot vertical section across 200 square miles of a 3-D survey. Using these results, a stratigraphic diagram of the seismic facies has been developed over the entire survey area to improve geologic predictions between boreholes, and ultimately, correlate seismic facies with rock properties measured at the boreholes. Importantly, the mathematical foundation now exists to demonstrate the relationships of the different attributes and how they tie with pixel components in geobody form using machine learning. Understanding how the attribute data mathematically relate to one another and to geological properties gives geoscientists confidence in the interpretation results.

Leveraging Integration

The term “exploration geophysics” is becoming almost a misnomer in North America, given the focus on unconventional reservoirs and the way seismic methods are being used in these plays to develop, rather than find, reservoirs. With seismic reflection data being applied across the board in a variety of ways and at different resolutions in unconventional development programs, operators are combining 3-D seismic with data from other disciplines into a single integrated subsurface model. Fully leveraging the new sets of statistical and analytical tools to make better predictions from integrated multidisciplinary datasets is crucial to reducing drilling and completion risk and improving operational decision making.

Multidimensional classifiers and attribute selection lists using principal component analysis and independent component analysis can be used with geophysical, geological, engineering, petrophysical and other attributes to create general-purpose multidisciplinary tools of benefit to all oil and gas company departments and disciplines. As noted, the integrated models used in resource plays increasingly are based on statistics, so any evaluation to develop the models also needs to be statistical. In the future, a basic part of conducting a successful analysis will be the ability to understand statistical data and how they can be organized to build more tightly integrated models.

And if oil and gas companies require more integrated interpretations, it follows that interpreters will have to possess more integrated skills and knowledge. The geoscientist of tomorrow may need to be a multidisciplinary professional with the blended capabilities of a geologist, geophysicist, engineer and applied statistician. But whether a geoscientist is exploring, appraising or developing reservoirs, he or she can only be as good as the prediction of the final model.
By applying technologies such as machine learning and multiattribute analysis during the workup, interpreters can use their creative energies to extract more knowledge from their data and make more knowledgeable predictions about undrilled locations.

THOMAS A. SMITH is president and chief executive officer of Geophysical Insights, which he founded in 2008 to develop machine learning processes for multiattribute seismic analysis. Smith founded Seismic Micro-Technology in 1984, focused on personal computer-based seismic interpretation. He began his career in 1971 as a processing geophysicist at Chevron Geophysical. Smith is a recipient of the Society of Exploration Geophysicists’ Enterprise Award, Iowa State University’s Distinguished Alumni Award and the University of Houston’s Distinguished Alumni Award for Natural Sciences and Mathematics. He holds a B.S. and an M.S. in geology from Iowa State, and a Ph.D. in geophysics from the University of Houston.
KURT J. MARFURT is the Frank and Henrietta Schultz Chair and Professor of Geophysics in the ConocoPhillips School of Geology & Geophysics at the University of Oklahoma. He has devoted his career to seismic processing, seismic interpretation and reservoir characterization, including attribute analysis, multicomponent 3-D, coherence and spectral decomposition. Marfurt began his career at Amoco in 1981. After 18 years of service in geophysical research, he became director of the University of Houston’s Center for Applied Geosciences & Energy. He joined the University of Oklahoma in 2007. Marfurt holds an M.S. and a Ph.D. in applied geophysics from Columbia University.

Geobody Interpretation Through Multi-Attribute Surveys, Natural Clusters and Machine Learning

By Thomas A. Smith 
June 2017

Summary

Multi-attribute seismic samples (even as entire attribute surveys), Principal Component Analysis (PCA), attribute selection lists, and natural clusters in attribute space are candidate inputs to machine learning engines that can operate on these data to train neural network topologies and generate autopicked geobodies. This paper sets out a unified mathematical framework for the process from seismic samples to geobodies.  SOM is discussed in the context of inversion as a dimensionality-reducing classifier to deliver a winning neuron set.  PCA is a means to more clearly illuminate features of a particular class of geologic geobodies.  These principles are demonstrated with geobody autopicking below conventional thin bed resolution on a standard wedge model.
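
As a small illustration of PCA’s role as an attribute-illumination aid, the first principal component of a toy two-attribute sample set can be computed in closed form from the 2x2 covariance matrix (a sketch that assumes just two attributes; real attribute spaces have many more dimensions):

```python
import math

def first_principal_component(samples):
    """Return the unit eigenvector of the largest eigenvalue of the
    2x2 covariance matrix of (attribute1, attribute2) samples."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    sxx = sum((s[0] - mx) ** 2 for s in samples) / n
    syy = sum((s[1] - my) ** 2 for s in samples) / n
    sxy = sum((s[0] - mx) * (s[1] - my) for s in samples) / n
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = 0.5 * (sxx + syy + math.hypot(sxx - syy, 2.0 * sxy))
    vx, vy = sxy, lam - sxx          # eigenvector (valid when sxy != 0)
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

# Two perfectly correlated attributes: PC1 points along (1, 1)/sqrt(2).
print(first_principal_component([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]))
```

The principal component ranks the direction in attribute space along which the samples vary most, which is what makes PCA useful for trimming an attribute selection list.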

Introduction

Seismic attributes are now an integral component of nearly every 3D seismic interpretation.  Early development in seismic attributes is traced to Taner and Sheriff (1977).  Attributes have a variety of purposes for both general exploration and reservoir characterization, as laid out clearly by Chopra and Marfurt (2007).  Taner (2003) summarizes attribute mathematics with a discussion of usage.

Self-Organizing Maps (SOM) are a type of unsupervised neural network that self-trains in the sense that it obtains information directly from the data.  The SOM neural network is completely self-taught, in contrast to the perceptron and its various cousins, which undergo supervised training.  The winning neuron set that results from training then tests itself by classifying the training samples, finding the nearest (winning) neuron to each training sample.  In addition, other data may be classified as well.  First discovered by Kohonen (1984), then advanced and expanded through its success in a number of areas (Kohonen, 2001; Laaksonen, 2011), SOM has become part of several established neural network textbooks, namely Haykin (2009) and Duda, Hart and Stork (2001).  Although the style of SOM discussed here has been used commercially for several years, only recently have results on conventional DHI plays been published (Roden, Smith and Sacrey, 2015).
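
After training, SOM classification reduces to a nearest-neighbor search in attribute space: each sample is assigned to the closest winning neuron. A minimal sketch with made-up two-attribute data:

```python
import math

def nearest_neuron(sample, neurons):
    """Index of the neuron closest to the sample by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(neurons)), key=lambda j: dist(sample, neurons[j]))

# Hypothetical two-neuron winning set in a 2-D attribute space:
neurons = [(0.0, 0.0), (1.0, 1.0)]
print(nearest_neuron((0.9, 0.8), neurons))  # 1
```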

Three Spaces

The concept of framing seismic attributes as multi-attribute seismic samples for SOM training and classification was presented by Taner, Treitel, and Smith (2009) in an SEG Workshop.  In that presentation, survey data and their computed attributes reside in survey space.  The neural network resides in neuron topology space.  These two meet in attribute space where neurons hunt for natural clusters and learn their characteristics.

Results were shown for 3D surveys over the venerable Stratton Field and a Gulf of Mexico salt dome.  The Stratton Field SOM results clearly demonstrated that there are continuous geobody events in the weak reflectivity zone between C38 and F11 events, some of which are well below seismic tuning thickness, that could be tied to conventional reflections and which correlated with wireline logs at the wells.  Studies of SOM machine learning of seismic models were presented by Smith and Taner (2010).  They showed how winning neurons distribute themselves in attribute space in proportion to the density of multi-attribute samples.  Finally, interpretation of SOM salt dome results found a low probability zone where multi-attribute samples of poor fit correlated with an apparent salt seal and DHI down-dip conformance (Smith and Treitel, 2010).

Survey Space to Attribute Space:

Ordinary seismic samples of amplitude traces in a 3D survey may be described as an ordered set {x_(c,d,e)}, where the subscripts c, d and e are the sample, trace and line indices.  A multi-attribute survey is a “Super 3D Survey” constructed by combining a number of attribute surveys with the amplitude survey.  This adds another dimension to the set and another subscript, so the new set of samples including the additional attributes is {x_(c,d,e,f)}, where f counts the attributes.  These data may be thought of as separate surveys or, equivalently, separate samples within one survey.  Within a single survey, each sample is a multi-attribute vector.  This reduces the subscripts by one count, so the set of multi-attribute vectors is {x_(c,d,e)}, where each x is a vector of attribute values.

Next, a two-way mapping function may be defined that references the location of any sample in the 3D survey by single and triplet indices, i <-> (c,d,e).  Now the three survey coordinates may be gathered into a single index, so the multi-attribute vector samples also form an unordered set {x_i} in attribute space.  The index map is a way to find a sample in attribute space from survey space and vice versa.
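
The two-way index map between survey space (c, d, e) and attribute space (single index i) is simple bookkeeping. A sketch with hypothetical survey dimensions:

```python
# Hypothetical survey: C samples per trace, D traces per line, E lines.
C, D, E = 4, 3, 2

def to_single(c, d, e):
    """Map survey triplet (c, d, e) to a single attribute-space index i."""
    return c + C * (d + D * e)

def to_triplet(i):
    """Inverse map: recover the survey triplet from i."""
    return i % C, (i // C) % D, i // (C * D)

i = to_single(2, 1, 1)
print(i, to_triplet(i))  # 18 (2, 1, 1)
```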

Multi-attribute sample and set in attribute space: 

A multi-attribute seismic sample is a column vector in an ordered set with three subscripts c, d, e representing the sample index, trace index and line index.  Survey bins refer to indices d and e.  These samples may also be organized into an unordered set {x_i} with subscript i.  They are members of an F-dimensional real space, where F is the number of attributes.  The attribute data are normalized, so in fact multi-attribute samples reside in a scaled attribute space.
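
The normalization into scaled attribute space can be sketched as a per-attribute z-score, so that no single attribute dominates Euclidean distances (a minimal sketch; actual implementations may scale differently):

```python
def normalize_attributes(samples):
    """Z-score each attribute column of a list of multi-attribute samples."""
    n, f = len(samples), len(samples[0])
    means = [sum(s[k] for s in samples) / n for k in range(f)]
    # population standard deviation; guard against a constant attribute
    stds = [(sum((s[k] - means[k]) ** 2 for s in samples) / n) ** 0.5 or 1.0
            for k in range(f)]
    return [[(s[k] - means[k]) / stds[k] for k in range(f)] for s in samples]

print(normalize_attributes([[1.0, 10.0], [3.0, 30.0]]))
# [[-1.0, -1.0], [1.0, 1.0]]
```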

Natural clusters in attribute space: 

Just as there are reflecting horizons in survey space, there must be clusters of coherent energy in attribute space.  Random samples, which carry no information, are uniformly distributed in attribute space just as in survey space.  The set {C_m} of natural clusters in attribute space is unordered and contains M members, with index m in the range [1, M].  The natural clusters may reside anywhere in attribute space, but attribute space is filled with multi-attribute samples, only some of which belong to meaningful natural clusters.  Natural clusters may be big or small, tightly packed or diffuse.  The rest of the samples are scattered throughout F-space.  Natural clusters are discovered in attribute space by learning machines imbued with simple training rules and aided by properties of their neural networks.

A single natural cluster: 

A natural cluster C_m has N(m) elements in it, and every natural cluster is expected to have a different number of multi-attribute samples associated with it.  Each element is taken from the pool of the set of all multi-attribute samples {x_i}.  Every natural cluster has its own unique properties described by the subset of samples associated with it.  Some sample subsets associated with a winning neuron are small (“not so popular”) and some are large (“very popular”).  The distribution of Euclidean distances within a subset may be tight (“packed”) or loose (“diffuse”).

Geobody sample and geobody set in survey space: 

For this presentation, a geobody G_b is defined as a contiguous region in survey space composed of elements identified by members g.  The members of a geobody form an ordered set {g_(c,d,e)} which registers with the coordinates of the members of the multi-attribute seismic survey.

A geobody member is just an identification number (id), an integer b in the range [1, B].  Although the 3D seismic survey is a fully populated “brick” of members, the geobody members register at certain contiguous locations within the brick, but not all of them.  The geobody G_b is an amorphous, but contiguous, “blob” within the “brick” of the 3D survey.  The coordinates of the geobody blob in the earth are those survey locations (c,d,e) whose members carry the geobody id b.  By this, all the multi-attribute samples in the geobody may be found, given the id and the three survey coordinates of a seed point.

A single geobody in survey space

Each geobody G_b is a set of N(b) geobody members with the same id; that is, there are N(b) members in G_b.  The geobody members for this geobody are taken from the pool of all geobody samples, the set {g_i}.  Some geobodies are small and others large.  Some are tabular, some lenticular, some channels, faults, columns, etc.  So how are geobodies and natural clusters related?

A geobody is not a natural cluster

{G_b : b in [1, B]} ≠ {C_m : m in [1, M]}

This expression is short but sweet.  It says a lot.  On the left is the set of all B geobodies.  On the right is the set of M natural clusters.  The expression says that these two sets are not the same.  On the left, the geobody members are id numbers in survey space.  On the right, the natural clusters reside in attribute space.  What this means is that geobodies are not directly revealed by natural clusters.  So, what is missing?

Interpretation is conducted in survey space.  Machine learning is conducted in attribute space.  Someone has to pick the list of attributes.  The attributes must be tailored to the geological question at hand.  And a good geological question is always the best starting point for any interpretation.

A natural cluster is an imaged geobody

Here, a natural cluster C_m is defined as an unordered set of two kinds of objects: an illumination function I applied to a set of geobodies {G_i}, plus random noise N.  That is, C_m = I({G_i}) + N.  The number of geobodies I is unspecified.  The illumination function places the geobodies in attribute space, and it is defined by the choice of attributes, that is, the attribute selection list.  The number of geobodies contributing to a natural cluster C_m is zero or more, 0 <= i <= I.  The geobodies are distributed throughout the 3D survey.

The natural cluster concentrates geobodies of similar illumination properties.  If there are no geobodies, or there is no illumination with a particular attribute selection list, then C_m = N and the set is only noise.  The attribute selection list is a critically important part of multi-attribute seismic interpretation.  The wrong attribute list may not illuminate any geobodies at all.

Geobody inversion from a math perspective

Multi-attribute seismic interpretation proceeds from the preceding equation in three parts.  First, as part of an inversion process, a natural cluster C_m is statistically estimated by a machine learning classifier, such as a SOM with a neural network topology.  See Chopra, Castagna and Potniaguie (2006) for a contrasting inversion methodology.  Second, SOM employs a simple training rule: the neuron nearest a selected training sample is declared the winner, and the winning neuron advances toward the sample a small amount.  Neurons are trained by attraction to samples.  One complete pass through the training samples is called an epoch.  Other machine learning algorithms have other training rules by which they adapt to data.  Finally, SOM has a dimensionality-reducing feature, because information contained in natural clusters is transferred (imperfectly) to the winning neuron set in the finalized neural network topology through cooperative learning: neurons in the winning neuron’s topological neighborhood move along with the winning neuron in attribute space.  SOM training is also dynamic in that the size of the neighborhood decreases with each training time step, so that eventually the neighborhood shrinks until all subsequent training steps are competitive.
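
The training rule and shrinking neighborhood described above can be sketched with a toy 1-D neuron chain in a 2-D attribute space (all parameters here are illustrative, not those of any production SOM implementation):

```python
import random

def train_som(samples, n_neurons=4, epochs=20, lr=0.5, seed=0):
    """Toy SOM: a 1-D chain of neurons adapts to 2-D attribute samples."""
    rng = random.Random(seed)
    neurons = [[rng.random(), rng.random()] for _ in range(n_neurons)]
    for epoch in range(epochs):
        # cooperative phase: the neighborhood radius shrinks each epoch,
        # eventually reaching zero so training becomes purely competitive
        radius = max(0, n_neurons // 2 - epoch)
        for s in rng.sample(samples, len(samples)):  # shuffled epoch
            win = min(range(n_neurons),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(s, neurons[j])))
            for j in range(n_neurons):
                if abs(j - win) <= radius:  # winner and topological neighbors
                    neurons[j] = [w + lr * (a - w)
                                  for w, a in zip(neurons[j], s)]
    return neurons

# Two natural clusters at (0,0) and (1,1); neurons migrate toward them.
print(train_som([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0]]))
```

After training, some neurons sit near each cluster center while unpopular neurons stop moving once the neighborhood has collapsed, mirroring the “popular” and “not so popular” winning neurons discussed earlier.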

Because the estimate of C_m is statistical, let it be called the estimate of the “signal” part of the natural cluster.  The true geobody is independent of any illumination function.  The dimensionality reduction associated with multi-attribute interpretation has as its purpose geobody recognition through identification, dimensionality reduction and classification.  In fact, the chain of steps amounts to a mapping and un-mapping process, from geobodies in survey space to illuminated natural clusters in attribute space and back to estimated geobodies, with no guarantee that the original geobody will be recovered.

However, the image function may be inappropriate to illuminate the geobody in F-space because of a poor choice of attributes.  So at best, the geobodies are illuminated by an imperfect set of attributes and detected by a primitive classifier.  The results must often be combined, edited and packaged into useful, interpreted geobody units, ready to be incorporated into an evolving geomodel on which the interpretation will rest.

Attribute Space Illumination

One fundamental aspect of machine learning is dimensionality reduction from attribute space because its dimensions are usually beyond our grasp.  The approach taken here is from the perspective of manifolds which are defined as spaces with the property of “mapability” where Euclidean coordinates may be safely employed within any local neighborhood (Haykin, 2009, p.437-442).

The manifold assumption is important because SOM learning is routinely conducted on multi-attribute samples in attribute space using Euclidean distances to move neurons during training.  One of the first concerns of dimensionality reduction is the potential to lose details in natural clusters.  In practice, it has been found that halving the original amplitude sample interval is advantageous, but further downsampling has not proven to be beneficial.  Infilling a natural cluster allows neurons during competitive training to adapt to subtle details that might be missed in the original data.
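The infilling idea can be illustrated by halving the sample interval of a single trace with linear interpolation. The 4 ms interval and sine waveform below are invented for the example; any base-survey trace would do.

```python
import numpy as np

# hypothetical trace sampled at 4 ms
t = np.arange(0.0, 40.0, 4.0)            # times in ms
trace = np.sin(2.0 * np.pi * t / 40.0)   # invented waveform

# halve the sample interval to 2 ms by linear interpolation,
# infilling the natural cluster with intermediate samples
t_fine = np.arange(0.0, t[-1] + 2.0, 2.0)
trace_fine = np.interp(t_fine, t, trace)
```

The interpolated samples fall between the originals, densifying the natural cluster in attribute space so competitive training can adapt to subtler detail.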

Curse of Dimensionality

The Curse of Dimensionality (Haykin, 2009) is, in fact, many curses.  One problem is that uniformly sampled space increases dramatically with increasing dimensionality.  This has implications when gathering training samples for a neural network.  For example, cutting a unit length bar (1-D) with a sample interval of .01 results in 100 samples.  Dividing a unit hypercube in 10-D at the same sample interval results in 10^20 samples (100^10).  If the nature of attribute space requires uniform sampling across a broad numerical range, then a large number of attributes may not be practical.  However, uniform sampling is not an issue here because the objective is to locate and detail features of natural clusters.
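The sample-count arithmetic is easy to check:

```python
# uniform sampling at interval 0.01 along each unit-length axis
per_axis = int(1.0 / 0.01)       # 100 samples along one axis
bar_1d = per_axis ** 1           # 1-D unit bar: 100 samples
hypercube_10d = per_axis ** 10   # 10-D unit hypercube: 100**10 = 10**20 samples
```

The exponential growth is why uniform sampling of a high-dimensional attribute space quickly becomes impractical.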

Also, not all attributes are important.  In the hunt for natural clusters, PCA (Haykin, 2009) is often a valuable tool to assess the relative merits of each attribute in a SOM attribute selection list.  Depending on geologic objectives, several dominant attributes may be picked from the first, second or even third principal eigenvector, or all attributes may be picked from a single principal eigenvector.
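A minimal sketch of ranking candidate attributes by their loadings on the dominant eigenvector might look like the following. The attribute names and toy data are invented for illustration; a production workflow would operate on real attribute volumes.

```python
import numpy as np

def rank_attributes(samples, names):
    """Rank attributes by |loading| on the first principal eigenvector."""
    # standardize so no attribute dominates by scale alone
    x = (samples - samples.mean(axis=0)) / samples.std(axis=0)
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    first = evecs[:, -1]                 # dominant (largest-eigenvalue) eigenvector
    order = np.argsort(-np.abs(first))
    return [(names[i], abs(first[i])) for i in order]

# toy samples: attributes "a" and "b" are correlated, "c" is pure noise
rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = a + 0.1 * rng.normal(size=500)
c = rng.normal(size=500)
ranked = rank_attributes(np.column_stack([a, b, c]), ["a", "b", "c"])
```

The correlated pair loads heavily on the first eigenvector while the noise attribute ranks last, mirroring how PCA helps prune an attribute selection list.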

Geobody inversion from an interpretation perspective

Multi-attribute seismic interpretation is finding geobodies in survey space aided by machine learning tools that hunt for natural clusters in attribute space.  The interpreter’s critical role in this process is the following:

  • Choose questions that carry exploration toward meaningful conclusions.
  • Be creative with seismic attributes so as to effectively address illumination of geologic geobodies.
  • Pick attribute selection lists with the assistance of PCA.
  • Review the results of machine learning, which may identify interesting geobodies in natural clusters autopicked by SOM.
  • Look through the noise to edit and build geobodies with a workbench of visualization displays and a variety of statistical decision-making tools.
  • Construct geomodels by combining autopicked geobodies which in turn allow predictions on where to make better drilling decisions.

The Geomodel

After classification, picking geobodies from their winning neurons starts by filling an empty geomodel.  Natural clusters are consolidators of geobodies with common properties in attribute space, so the number of natural clusters M is less than the number of geobodies B; in fact, it is often found that M << B.  That is, geobodies “stack” in attribute space.  Seismic data is noisy, so natural clusters are consequently statistical.  Although SOM classifies every sample, not every sample classified by a winning neuron is important; samples that are a poor fit are probably noise.  Construction of a sensible geomodel depends on posing well-thought-out geological questions, phrased through selection of appropriate attribute lists.
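The notion that every sample is classified but poor fits are probably noise can be sketched as a distance threshold on the winning-neuron misfit. The 95th-percentile cutoff below is an arbitrary illustrative choice, not a prescribed value.

```python
import numpy as np

def classify_with_noise(samples, neurons, pct=95):
    """Assign every sample to its winning neuron; flag poor fits as noise."""
    # squared distance from every sample to every neuron in attribute space
    d2 = ((samples[:, None, :] - neurons[None, :, :]) ** 2).sum(axis=2)
    winner = d2.argmin(axis=1)
    dist = np.sqrt(d2[np.arange(len(samples)), winner])
    # samples beyond the pct-th percentile of misfit are treated as noise
    noise = dist > np.percentile(dist, pct)
    return winner, noise

# toy cluster near one of two trained neurons
rng = np.random.default_rng(2)
samples = rng.normal(0.0, 0.1, (300, 2))
neurons = np.array([[0.0, 0.0], [1.0, 1.0]])
winner, noise = classify_with_noise(samples, neurons)
```

Samples flagged as noise would be edited out before the remaining classified samples are consolidated into geobodies for the geomodel.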

Working below classic seismic tuning thickness

Classical seismic tuning thickness is λ/4.  Combining vertical-incidence layer thickness d = λ/4 with λ = V/f leads to a critical layer thickness d = V/(4f).  Resolution below classical seismic tuning thickness has been demonstrated with multi-attribute seismic samples and a machine learning classifier operating on those samples in scaled attribute space (Roden et al., 2015).  High-quality natural clusters in attribute space imply tight, dense balls (low entropy, high density).  SOM training and classification of a classical wedge model at three noise levels are shown in Figures 1 and 2, which show tracking well below tuning thickness.
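The critical layer thickness V/(4f) is a one-line computation; the 3,000 m/s velocity and 30 Hz dominant frequency below are merely illustrative values.

```python
def tuning_thickness(velocity, frequency):
    """Classical tuning thickness: lambda/4 = V / (4 f)."""
    return velocity / (4.0 * frequency)

# e.g. a 3,000 m/s interval at 30 Hz dominant frequency -> 25 m
d = tuning_thickness(3000.0, 30.0)
```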

Several practical guidelines follow:

  • Seismic processing: Processing the survey at a fine sample interval is preferred over resampling the final survey to a fine sample interval.  The highest S/N ratio is always preferred.
  • Preprocessing: A fine sample interval in the base survey is the preferred way to raise the density of natural clusters; compute attributes afterward.  Do not compute attributes and then resample, because some attributes are not continuous functions.  Derive all attributes from a single base survey in order to avoid misties.
  • Attribute selection list: Prefer attributes that address the specific properties of the intended geologic geobody.  When working below tuning, prefer instantaneous attributes over attributes requiring spatial sampling.

Thin bed results on 3D surveys in the Eagle Ford Shale facies of South Texas and in the Alibel horizon of the Middle Frio Group, onshore Texas, were corroborated with extensive well control, verifying consistent results for more accurate mapping of facies below tuning without the usual frequency assumptions (Roden, Smith, Santogrossi and Sacrey, personal communication, 2017).

Conclusion

There is a firm mathematical basis for a unified treatment of multi-attribute seismic samples, natural clusters, geobodies and machine learning classifiers such as SOM.  Interpretation of multi-attribute seismic data is showing great promise, having demonstrated resolution well below conventional seismic thin bed resolution due to high-quality natural clusters in attribute space which have been detected by a robust classifier such as SOM.

Acknowledgments

I am thankful to have worked with two great geoscientists, Tury Taner and Sven Treitel during the genesis of these ideas.  I am also grateful to work with an inspired and inspiring team of coworkers who are equally committed to excellence.  In particular, Rocky Roden and Deborah Sacrey are longstanding associates with a shared curiosity to understand things and colleagues of a hunter’s spirit.

Figure 1: Wedge models for three noise levels trained and classified by SOM with attribute list of amplitude and Hilbert transform (not shown) on 8 x 8 hexagonal neuron topology. Upper displays are amplitude. Middle displays are SOM classifications with a smooth color map. Lower displays are SOM classifications with a random color map. The rightmost vertical column is an enlargement of wedge model tips at highest noise level.  Multi-attribute classification samples are clearly tracking well below tuning thickness which is left of the center in the right column displays.

Figure 2: Attribute space for three wedge models with horizontal axis of amplitude and vertical axis of Hilbert transform. Upper displays are multi-attribute samples before SOM training and lower displays after training and samples classified by winning neurons in lower left with smooth color map.  Upper right is an enlargement of tip of third noise level wedge model from Figure 1 where below-tuning bed thickness is right of the thick vertical black line.

References

Chopra, S., J. Castagna and O. Potniaguine, 2006, Thin-bed reflectivity inversion: Extended abstracts, SEG Annual Meeting, New Orleans.

Chopra, S. and K.J. Marfurt, 2007, Seismic attributes for prospect identification and reservoir characterization, Geophysical Developments No. 11, SEG.

Duda, R.O., P.E. Hart and D.G. Stork, 2001, Pattern classification, 2nd ed.: Wiley.

Haykin, S., 2009, Neural networks and learning machines, 3rd ed.: Pearson.

Kohonen, T., 1984, Self-organization and associative memory: Springer-Verlag, Berlin, 125-245.

Kohonen, T., 2001, Self-organizing maps, 3rd extended ed.: Springer, Series in Information Sciences.

Laaksonen, J. and T. Honkela, 2011, Advances in self-organizing maps, 8th International Workshop, WSOM 2011 Espoo, Finland, Springer.

Ma, Y. and Y. Fu, 2012, Manifold Learning Theory and Applications, CRC Press, Boca Raton.

Roden, R., T. Smith and D. Sacrey, 2015, Geologic pattern recognition from seismic attributes, principal component analysis and self-organizing maps, Interpretation, SEG, November 2015, SAE59-83.

Smith, T., and M.T. Taner, 2010, Natural clusters in multi-attribute seismics found with self-organizing maps: Source and signal processing section paper 5: Presented at Robinson-Treitel Spring Symposium by GSH/SEG, Extended Abstracts.

Smith, T. and S. Treitel, 2010, Self-organizing artificial neural nets for automatic anomaly identification, Expanded abstracts, SEG Annual Convention, Denver.

Taner, M.T., 2003, Attributes revisited, http://www.rocksolidimages.com/attributes-revisited/, accessed 22 March 2017.

Taner, M.T., and R.E. Sheriff, 1977, Application of amplitude, frequency, and other attributes, to stratigraphic and hydrocarbon  determination, in C.E. Payton, ed., Applications to hydrocarbon exploration: AAPG Memoir 26, 301–327.

Taner, M.T., S. Treitel, and T. Smith, 2009, Self-organizing maps of multi-attribute 3D seismic reflection surveys, Presented at the 79th International SEG Convention, SEG 2009 Workshop on “What’s New in Seismic Interpretation,” Paper no. 6.

THOMAS A. SMITH is president and chief executive officer of Geophysical Insights, which he founded in 2008 to develop machine learning processes for multiattribute seismic analysis. Smith founded Seismic Micro-Technology in 1984, focused on personal computer-based seismic interpretation. He began his career in 1971 as a processing geophysicist at Chevron Geophysical. Smith is a recipient of the Society of Exploration Geophysicists’ Enterprise Award, Iowa State University’s Distinguished Alumni Award and the University of Houston’s Distinguished Alumni Award for Natural Sciences and Mathematics. He holds a B.S. and an M.S. in geology from Iowa State, and a Ph.D. in geophysics from the University of Houston.
   

Comparison of Seismic Inversion and SOM Seismic Multi-Attribute Analysis

Self-Organizing Maps (SOM) is a relatively new approach for seismic interpretation in our industry and should not be confused with seismic inversion or rock modeling.  The descriptions below differentiate SOM, which is a statistical classifier, from seismic inversion.

Seismic Inversion
The purpose of seismic inversion is to transform seismic reflection data into rock and fluid properties.  This is done by converting reflectivity data (interface properties) to layer properties.  If elastic parameters are desired, then inversion of the prestack (AVO) reflectivity must be performed.  The most basic inversion calculates the acoustic impedance (density × velocity) of layers, from which predictions about lithology and porosity can be made.  The more advanced inversion methods attempt to discriminate specifically between lithology, porosity, and fluid effects.  Inversions can be grouped into categories: pre-stack vs. post-stack, deterministic vs. geostatistical, or relative vs. absolute.  Necessary for most inversions is the estimation of the wavelet and a calculation of the low frequency trend obtained from well control and velocity information.  Without an accurate calibration of these parameters, the inversion is non-unique.  Inversion requires a stringent set of data conditions from the well logs and seismic.  The accuracy of inversion results is directly related to good quality well control, usually requiring numerous wells in the same stratigraphic interval for reasonable results.
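As a sketch of the most basic post-stack approach, acoustic impedance can be recovered from reflection coefficients by the classical recursive relation Z(i+1) = Z(i) * (1 + r(i)) / (1 - r(i)). The toy round trip below omits the wavelet removal and low-frequency trend that real inversion requires; the impedance values are invented.

```python
import numpy as np

def recursive_impedance(reflectivity, z0):
    """Basic recursive inversion: given reflection coefficients
    r_i = (Z_{i+1} - Z_i) / (Z_{i+1} + Z_i) and a starting impedance z0,
    recover the layer impedances Z_{i+1} = Z_i * (1 + r_i) / (1 - r_i)."""
    z = [z0]
    for r in reflectivity:
        z.append(z[-1] * (1.0 + r) / (1.0 - r))
    return np.array(z)

# round trip on a toy impedance log (density * velocity, SI units)
z_true = np.array([4.0e6, 5.0e6, 4.5e6, 6.0e6])
refl = (z_true[1:] - z_true[:-1]) / (z_true[1:] + z_true[:-1])
z_est = recursive_impedance(refl, z_true[0])
```

Note that the starting impedance z0 plays the role of the low-frequency trend: without it, only relative impedance changes are recoverable, which is one reason inversion is non-unique without well calibration.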

SOM Seismic Multi-Attribute Analysis
Self-Organizing Maps (SOM) is a non-linear mathematical approach that classifies data into patterns or clusters.  It is an artificial neural network that employs unsupervised learning.  SOM requires no previous information for training; it evaluates the natural patterns and clusters present in the data.  A seismic multi-attribute approach involves selecting several attributes that potentially reveal aspects of geology and evaluating how these data form natural organizational patterns with SOM.  The results of a SOM analysis are revealed by a 2D color map that identifies the patterns present in the multi-attribute data set.  The data for SOM are any type of seismic attribute, which is any measurable property of the seismic.  Any type of inversion is an attribute that can be included in a SOM analysis.  A SOM analysis will reveal geologic features in the data, as dictated by the type of seismic attributes employed.  The SOM classification patterns can relate to stratigraphy, seismic facies, direct hydrocarbon indicators, thin beds, and aspects of shale plays such as fault/fracture trends and sweet spots.  The primary considerations for SOM are the sample rate, the seismic attributes employed, and seismic data quality.  SOM addresses the issues of evaluating dozens of seismic attribute volumes (Big Data) and understanding how these numerous volumes are inter-related.

Seismic inversion attempts to predict rock and fluid properties by converting seismic data from interface properties into layer properties.  Numerous wells with good quality well information in the appropriate zone are necessary for successful inversion calculations; otherwise solutions are non-unique.  For successful inversions, wavelet effects must be removed and the low frequency trend must be accurate.

SOM identifies the natural organizational patterns in the data through a multi-attribute classification approach.  Geologic features and geobodies exhibit natural patterns or clusters which can be corroborated with well control where present, although wells are not necessary for the SOM analysis.  For a successful SOM analysis, the appropriate seismic attributes must be selected.

Rocky Roden

Sr. Consulting Geophysicist | Geophysical Insights

ROCKY R. RODEN has extensive knowledge of modern geoscience technical approaches (past Chairman-The Leading Edge Editorial Board).  As former Chief Geophysicist and Director of Applied Technology for Repsol-YPF, his role comprised advising corporate officers, geoscientists, and managers on interpretation, strategy and technical analysis for exploration and development in offices in the U.S., Argentina, Spain, Egypt, Bolivia, Ecuador, Peru, Brazil, Venezuela, Malaysia, and Indonesia.  He has been involved in the technical and economic evaluation of Gulf of Mexico lease sales, farmouts worldwide, and bid rounds in South America, Europe, and the Far East.  Previous work experience includes exploration and development at Maxus Energy, Pogo Producing, Decca Survey, and Texaco.  He holds a B.S. in Oceanographic Technology-Geology from Lamar University and a M.S. in Geological and Geophysical Oceanography from Texas A&M University.