Applications of Convolutional Neural Networks (CNN) to Seismic Interpretation

As part of our quarterly series on machine learning, we were delighted to have Dr. Tao Zhao present applications of Convolutional Neural Networks (CNN) in a worldwide webinar on 20 March 2019 that was attended by participants on every continent.  Dr. Zhao highlighted applications of CNN technology in seismic facies classification, fault detection, and the extraction of large scale channels.  If you missed the webinar, no problem!  A video of the webinar can be streamed via the video player below.  Please provide your name and business email address so that we may invite you to future webinars and other events.  The abstract for Dr. Zhao’s talk follows:

We welcome your comments and questions and look forward to discussions on this timely topic.

Abstract:  Leveraging Deep Learning in Extracting Features of Interest from Seismic Data

Mapping and extracting features of interest is one of the most important objectives in seismic data interpretation. Due to the complexity of seismic data, geologic features that interpreters identify using visualization techniques are often challenging to extract. With the rapid growth in GPU computing power and the success achieved in computer vision, deep learning techniques, represented by convolutional neural networks (CNN), have begun to attract seismic interpreters in various applications. The main advantages of CNN over other supervised machine learning methods are its spatial awareness and automatic attribute extraction. The high flexibility of CNN architectures enables researchers to design different CNN models to identify different features of interest. In this webinar, using several seismic surveys acquired from different regions, I will discuss three CNN applications in seismic interpretation: seismic facies classification, fault detection, and channel extraction. Seismic facies classification aims to classify seismic data into several user-defined, distinct facies of interest. Conventional machine learning methods often produce a highly fragmented facies classification result, which requires a considerable amount of post-editing before it can be used as geobodies. In the first application, I will demonstrate that a properly built CNN model can generate seismic facies with higher purity and continuity. In the second application, I deploy a CNN model built for fault detection which, compared with traditional seismic attributes, provides smoother fault images and greater robustness to noise. The third application demonstrates the effectiveness of extracting large scale channels using CNN. These examples demonstrate that CNN models are capable of capturing the complex reflection patterns in seismic data, providing clean images of geologic features of interest, while also carrying a low computational cost.
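To give a feel for why a CNN is "spatially aware", here is a minimal toy sketch in NumPy of a one-layer convolutional classifier's forward pass. The patch, filters, and class weights are random stand-ins for trained values; this is an illustration of the mechanics, not Dr. Zhao's model.

```python
import numpy as np

def conv2d(patch, kernel):
    """Valid 2D cross-correlation of a seismic patch with one filter."""
    kh, kw = kernel.shape
    h, w = patch.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output sample sees a local window of the patch,
            # which is what gives the network its spatial awareness.
            out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_forward(patch, kernels, weights):
    """One conv layer + ReLU + global average pooling + softmax classifier."""
    features = np.array([conv2d(patch, k) for k in kernels])  # (n_kernels, h', w')
    features = np.maximum(features, 0.0)                      # ReLU
    pooled = features.mean(axis=(1, 2))                       # global average pool
    logits = weights @ pooled                                 # (n_classes,)
    e = np.exp(logits - logits.max())
    return e / e.sum()                                        # class probabilities

rng = np.random.default_rng(0)
patch = rng.standard_normal((16, 16))      # toy amplitude patch
kernels = rng.standard_normal((4, 3, 3))   # 4 "learned" 3x3 filters
weights = rng.standard_normal((3, 4))      # 3 hypothetical facies classes
probs = cnn_forward(patch, kernels, weights)
print(probs)  # probabilities over the 3 facies classes, summing to 1
```

In a real application the filters and weights are learned from labeled patches, and many such layers are stacked, but the local windowing shown here is the core idea.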

 

Future of Seismic Interpretation with Machine Learning and Deep Learning

By: Iván Marroquín, Ph.D. – Senior Research Geophysicist

I am very excited to participate as a speaker in the workshop on Big Data and Machine Learning organized by the European Association of Geoscientists & Engineers. My presentation is about using machine learning and deep learning to advance the seismic interpretation process for the benefit of hydrocarbon exploration and production.

Companies in the oil and gas industry invest millions of dollars in an effort to improve their understanding of their reservoir characteristics and predict their future behavior. An integral part of this effort consists of using traditional workflows to interpret large volumes of seismic data. Geoscientists are required to manually define relationships between geological features and seismic patterns. As a result, the task of finding significant seismic responses to recognize reservoir characteristics can be overwhelming.

In this era of big data revolution, we are at the beginning of the next fundamental shift in seismic interpretation. Knowledge discovery, based on machine learning and deep learning, supports geoscientists in two ways. First, it interrogates volumes of seismic data without preconceptions. The objective is to automatically find key insights, hidden patterns, and correlations, so that geoscientists gain visibility into complex relationships between geologic features and seismic data. To illustrate this point, Figure 1a shows a thin bed reservoir scenario from Texas (USA). In the seismic data, it is difficult to discern the presence of the seismic event associated with the producing zone at the well location. Using machine learning to derive a seismic classification output (Figure 1b) brought forward much richer stratigraphic information. Upon closer examination using time slice views (Figure 1c), it becomes apparent that the reservoir is an offshore bar. Note how well oil production matches the extent of the reservoir body.

Figure 1. Seismic classification result using machine learning (result provided by Deborah Sacrey, senior geologist with Geophysical Insights).  

Another way knowledge discovery can help geoscientists is by automating elements of the seismic interpretation process. At the rate machine learning and deep learning can consume large amounts of seismic data, it becomes possible to constantly review, modify, and take appropriate actions at the right time. With these possibilities, geoscientists are free to focus on other, more valuable tasks. The following example demonstrates that a deep learning model can be trained on seismic data or derived attributes (e.g., seismic classification, instantaneous, geometric, etc.) to identify desired outcomes, such as fault locations. In this case, a seismic classification volume (Figure 2a) was generated from seismic amplitude data (Taranaki Basin, west coast of New Zealand). Figure 2b shows the predicted faults displayed against the classification volume. To corroborate the quality of the prediction, the faults are also displayed against the seismic amplitude data (Figure 2c). It is important to note that the seismic classification volume provides an additional benefit to the process of seismic interpretation. It has the potential to expose stratigraphic information not readily apparent in seismic amplitude data.

Figure 2. Fault location predictions using deep learning (result provided by Dr. Tao Zhao, research geophysicist with Geophysical Insights).

Unsupervised vs. Supervised classifiers – Comparing classification results

By: Ivan Marroquin, Ph.D. – Senior Research Geophysicist

In machine learning, there is a very interesting challenge in comparing the quality of the classification result generated by either unsupervised or supervised classifiers. Most of the time, we opt for one technique over the other. Sometimes, we perform a comparison study and use a visual examination to decide which classifier produced the best outcome.

Can we do better than this? I believe so! Let’s assume that we have a dataset that consists of three well-defined groups of data points. Then, we use an unsupervised classifier to generate three clusters. The algorithm produces two outputs: (1) cluster centers and (2) the membership of each data point to its closest cluster center. As a consequence, we get the boundaries of the clusters (see figure A below). If we present the same data to a supervised classifier, assuming that the data points already have a class label assigned to them, the algorithm generates boundaries that separate the classes from one another (see figure B below). So far, you might still think: I cannot compare the classification outputs. However, there is a common trait between these results: the presence of boundaries. What if I told you that we can take advantage of the notion of boundaries in the context of supervised classifiers? It can help us derive cluster centers associated with each predicted class (see cluster center symbols with dashed patterns in red in figure B below).
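To make the idea concrete, here is a minimal NumPy sketch. A hypothetical nearest-center rule stands in for any trained supervised classifier; the point is that once we have predicted class labels, the "cluster center" of each class is simply the centroid of the points assigned to it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three well-separated groups of 2-D data points (a toy stand-in for real data).
true_centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
X = np.vstack([c + rng.standard_normal((50, 2)) * 0.5 for c in true_centers])
y = np.repeat([0, 1, 2], 50)  # class labels a supervised classifier would learn

def predict(X, centers):
    """Stand-in for any trained supervised classifier's predict() method."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)

y_pred = predict(X, true_centers)

# Derive a cluster center for each predicted class: the centroid of the
# points that the classifier assigned to that class.
derived_centers = np.array([X[y_pred == k].mean(axis=0) for k in range(3)])
print(np.round(derived_centers, 1))
```

With centers in hand for both the unsupervised and the supervised result, the two classifications can be compared on equal footing.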

Figures A and B. Cluster boundaries from an unsupervised classifier (A) and class boundaries, with derived cluster centers, from a supervised classifier (B).

Among the many different types of classification problems, I focused on the case of lithofacies classification from wireline well log data. I used this data to implement a machine learning pipeline to derive cluster centers. The pipeline consists of three steps (see diagram below): (1) generate a lithofacies classification, (2) derive cluster centers from the lithofacies classification result, and (3) validate the cluster centers. Each of these steps was addressed with a specific machine learning algorithm. For the first step, a multi-class feedforward neural network was used. In the second step, an evolutionary algorithm was used. And in the last step, I used a metric learning algorithm. To ensure that the best performing model in each step of the pipeline was obtained, the algorithms interacted with an automated machine learning method. New research efforts in machine learning have brought forward a concept known as “automated machine learning”. The objective of this shift is to move away from the manual adjustment of hyperparameters and instead use machine learning to optimize another machine learning model by finding its best hyperparameter configuration.
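The automated hyperparameter search at the heart of that last step can be sketched in miniature. Here a toy ridge regression stands in for the pipeline's real models, and a random search over its regularization strength replaces manual tuning; the data, search range, and validation split are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy supervised task standing in for the lithofacies classifier.
X = rng.standard_normal((200, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ w_true + rng.standard_normal(200) * 0.1

def fit_ridge(X, y, alpha):
    """Closed-form ridge solution: (X'X + alpha*I)^-1 X'y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

def val_error(alpha):
    """Train on the first half of the data, score on the second half."""
    w = fit_ridge(X[:100], y[:100], alpha)
    return np.mean((X[100:] @ w - y[100:]) ** 2)

# "Automated machine learning" in miniature: a random search over the
# hyperparameter, scored by held-out error, instead of manual adjustment.
candidates = 10.0 ** rng.uniform(-4, 2, size=30)
best_alpha = min(candidates, key=val_error)
print(best_alpha, val_error(best_alpha))
```

Real automated machine learning methods use smarter search strategies (e.g., evolutionary or Bayesian), but the loop is the same: propose a configuration, train, score, keep the best.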

Diagram. The three-step machine learning pipeline for deriving cluster centers.

To demonstrate the effectiveness of the proposed machine learning pipeline and the quality of the obtained cluster centers, a lithofacies classification was produced from the derived cluster centers. In the next figure, from left to right, the first four panels show the wireline log data used to train the neural network. The following panel displays the neural network-based lithofacies classification. Note that three lithofacies classes were predicted: reservoir sand (bands in yellow), tight sand (bands in cyan), and floodplain rocks (bands in gray). The last panel displays the lithofacies classification from the derived cluster centers. There is a strong match between the two classifications, not only in the occurrence of reservoir sands but also in the lithofacies sequence and boundaries. I am thankful to Geophysical Insights for granting permission to present this research work at the upcoming SEG-SBGf Workshop on Machine Learning.

Figure. Wireline log data alongside the neural network-based and cluster center-based lithofacies classifications.

If you are interested in learning how we extract meaningful geological information from seismic data with machine learning, and how our technology has helped geoscientists find hydrocarbons, please visit us at https://www.geoinsights.com/.

Or, if you desire further information, feel free to contact us.

Comparison of Seismic Inversion and SOM Seismic Multi-Attribute Analysis

Self-Organizing Maps (SOM) is a relatively new approach for seismic interpretation in our industry and should not be confused with seismic inversion or rock modeling.  The descriptions below differentiate SOM, which is a statistical classifier, from seismic inversion.

Seismic Inversion
The purpose of seismic inversion is to transform seismic reflection data into rock and fluid properties.  This is done by trying to convert reflectivity data (interface properties) to layer properties.  If elastic parameters are desired, then inversion of the prestack (AVO) reflectivity must be performed.  The most basic inversion calculates the acoustic impedance (density × velocity) of layers, from which predictions about lithology and porosity can be made.  The more advanced inversion methods attempt to discriminate specifically between lithology, porosity, and fluid effects.  Inversions can be grouped into categories: pre-stack vs. post-stack, deterministic vs. geostatistical, or relative vs. absolute.  Necessary for most inversions is the estimation of the wavelet and a calculation of the low frequency trend obtained from well control and velocity information.  Without an accurate calibration of these parameters, the inversion is non-unique.  Inversion requires a stringent set of data conditions from the well logs and seismic.  The accuracy of inversion results is directly related to good quality well control, usually requiring numerous wells in the same stratigraphic interval for reasonable results.
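The basic arithmetic behind the interface-to-layer conversion can be sketched in a few lines. Given hypothetical layer densities and velocities, impedance and normal-incidence reflectivity follow directly, and running the recursion in reverse shows why inversion needs an absolute starting value (the low frequency trend).

```python
import numpy as np

# Acoustic impedance of each layer: Z = density * velocity.
density = np.array([2.20, 2.40, 2.30])      # g/cc (hypothetical layers)
velocity = np.array([2500.0, 3200.0, 2800.0])  # m/s (hypothetical layers)
Z = density * velocity

# Normal-incidence reflection coefficient at each interface:
# r_i = (Z_{i+1} - Z_i) / (Z_{i+1} + Z_i)
r = (Z[1:] - Z[:-1]) / (Z[1:] + Z[:-1])
print(np.round(r, 3))

# Inversion runs the other way: recover layer impedances from reflectivity.
# Note it only works given the impedance of the first layer, which is the
# absolute information that the low frequency trend must supply.
Z_rec = [Z[0]]
for ri in r:
    Z_rec.append(Z_rec[-1] * (1 + ri) / (1 - ri))
```

Real inversion must additionally remove the wavelet from the seismic trace before the reflectivity is available, which is where the data-conditioning requirements come from.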

SOM Seismic Multi-Attribute Analysis
The Self-Organizing Map (SOM) is a non-linear mathematical approach that classifies data into patterns or clusters.  It is an artificial neural network that employs unsupervised learning.  SOM requires no previous information for training; it evaluates the natural patterns and clusters present in the data.  A seismic multi-attribute approach involves selecting several attributes that potentially reveal aspects of geology and evaluating how these data form natural organizational patterns with SOM.  The results from a SOM analysis are revealed by a 2D color map that identifies the patterns present in the multi-attribute data set.  The data for SOM can be any type of seismic attribute, which is any measurable property of the seismic.  Any type of inversion is an attribute type that can be included in a SOM analysis.  A SOM analysis will reveal geologic features in the data, as dictated by the type of seismic attributes employed.  The SOM classification patterns can relate to defining stratigraphy, seismic facies, direct hydrocarbon indicators, thin beds, and aspects of shale plays such as fault/fracture trends and sweet spots.  The primary considerations for SOM are the sample rate, the seismic attributes employed, and seismic data quality.  SOM addresses the issues of evaluating dozens of seismic attribute volumes (Big Data) and understanding how these numerous volumes are inter-related.
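For readers curious about the mechanics, here is a minimal, self-contained SOM sketch in NumPy. The two-cluster "attribute" data, grid size, learning rate, and neighborhood schedule are illustrative choices for a toy example, not the Paradise implementation.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organizing Map: each grid node holds a prototype vector."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    nodes = rng.standard_normal((rows * cols, data.shape[1]))
    # (row, col) coordinate of each node, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * (1 - t)               # learning rate decays over training
            sigma = sigma0 * (1 - t) + 1e-3  # neighborhood shrinks over training
            bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))  # best-matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))       # neighborhood weights
            nodes += lr * h[:, None] * (x - nodes)   # pull nodes toward the sample
            step += 1
    return nodes

def classify(data, nodes):
    """Assign each multi-attribute sample to its winning node (its map color)."""
    d = np.linalg.norm(data[:, None, :] - nodes[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(3)
# Two distinct patterns standing in for multi-attribute seismic samples.
data = np.vstack([rng.normal(0, 0.3, (100, 3)), rng.normal(3, 0.3, (100, 3))])
nodes = train_som(data)
labels = classify(data, nodes)
```

After training, each sample maps to a node on the 2D grid; coloring samples by their winning node is what produces the SOM classification volume, with nearby nodes representing similar multi-attribute patterns.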

Seismic inversion attempts to transform seismic data into rock and fluid properties by converting it from interface properties into layer properties.  Numerous wells with good quality well information in the appropriate zone are necessary for successful inversion calculations; otherwise, solutions are non-unique.  For successful inversions, wavelet effects must be removed and the low frequency trend must be accurate.

SOM identifies the natural organizational patterns in a multi-attribute classification approach.  Geologic features and geobodies exhibit natural patterns or clusters which can be corroborated with well control if present, though well control is not necessary for the SOM analysis.  For a successful SOM analysis, the appropriate seismic attributes must be selected.

Rocky Roden

Sr. Consulting Geophysicist | Geophysical Insights

ROCKY R. RODEN has extensive knowledge of modern geoscience technical approaches (past Chairman-The Leading Edge Editorial Board).  As former Chief Geophysicist and Director of Applied Technology for Repsol-YPF, his role comprised advising corporate officers, geoscientists, and managers on interpretation, strategy and technical analysis for exploration and development in offices in the U.S., Argentina, Spain, Egypt, Bolivia, Ecuador, Peru, Brazil, Venezuela, Malaysia, and Indonesia.  He has been involved in the technical and economic evaluation of Gulf of Mexico lease sales, farmouts worldwide, and bid rounds in South America, Europe, and the Far East.  Previous work experience includes exploration and development at Maxus Energy, Pogo Producing, Decca Survey, and Texaco.  He holds a B.S. in Oceanographic Technology-Geology from Lamar University and a M.S. in Geological and Geophysical Oceanography from Texas A&M University.

The Value of Instantaneous Attributes

Patricia Santogrossi

Sr. Geoscientist | Geophysical Insights

Patricia Santogrossi is a geoscientist who has enjoyed 40 years in the oil business. She is currently a Consultant to Geophysical Insights, producer of the Paradise multi-attribute analysis software platform. Formerly, she was a Leading Reservoir Geoscientist and Non-operated Projects Manager with Statoil USA E & P. In this role Ms. Santogrossi was engaged for nearly nine years in Gulf of Mexico business development, corporate integration, prospect maturation, and multiple appraisal projects in the deep and ultra-deepwater Gulf of Mexico. Ms. Santogrossi has previously worked with domestic and international Shell Companies, Marathon Oil Company, and Arco/Vastar Resources in research, exploration, leasehold and field appraisal as well as staff development. She has also been Chief Geologist for Chroma Energy, who possessed proprietary 3D voxel multi-attribute visualization technology, and for Knowledge Reservoir, a reservoir characterization and simulation firm that specialized in Deepwater project evaluations. A longtime member of SEPM, AAPG, GCSSEPM, HGS and SEG, Ms. Santogrossi has held various elected and appointed positions in these industry organizations. She has recently begun her fourth three-year term as a representative to the AAPG House of Delegates from the Houston Geological Society (HGS). In addition, she has been invited to continue her role this fall on the University of Illinois’ Department of Geology Alumni Board. Ms. Santogrossi was born, raised, and educated in Illinois before she headed to Texas to work for Shell after she received her MS in Geology from the University of Illinois, Champaign-Urbana. Her other ‘foreign assignments’ have included New Orleans and London. She resides in Houston with her husband of twenty-four years, Joe Delasko.

Machine Learning – The Next Generation Seismic Interpretation

Most people associate neural networks, big data, and big number crunching with a single paradigm for accessing web information. Articulate a query and wait for a list of answers. But in the oil and gas exploration and reservoir replacement business – particularly at this time – we “must” place neural networks and big data tools in the hands of seismic interpreters.

Seismic interpreters are accustomed to working on interactive workstations, not using web-based queries. We suggest this “must” not because seismic interpreters are a narrow-minded bunch who are unwilling to work with newer web-styled tools. No – there are strong technical reasons which will allow us to dig deeper into our seismic data. The key is not the platform, be it an interactive workstation or the web. The real reason is that we simply do not and cannot understand much about multi-attribute seismic data with conventional seismic interpretation methods – we don’t know the semantics of the words in these data. Instead, we must let neural networks fly through the seismic data unattended to discover whatever properties they might discover. In other words, multi-attribute seismic data ain’t English. We don’t know the language yet. Today, we are starting to use web-style tools to find things in seismic data in just the same way they search through web pages looking for word patterns.

Someday the integration of machine learning and seismic interpretation will evolve into a practice no longer considered novel but mundane. Then, articulating queries like, “In this 3D survey, are there any geobodies with statistical properties of Miocene fluvial channel sands?” will be practical. In fact, a large multinational oil company with a worldwide petabyte seismic library might ask, “In the eastern Gulf of Mexico, do we have any 3D surveys with geobodies that might be Miocene fluvial channel sands?”

Queries like this will take lots of number crunching – recognizing and indexing geobodies in 3D surveys and all this will be done long before queries are made – but the web paradigm of today applies directly to the seismic interpretation of tomorrow. In terms of technology crossover, it’s easier than falling off a log. Just time, people and money…We happen to be a little ahead of the pack.

We have been told that at a presentation at a major oil company, a particular slide drew much attention.  On that slide were drawn two curves – curves that must be part of some internal assessment.  One curve was the total volume of annual digital seismic data acquired and processed, and the second curve was the volume of these data that had actually been inspected. The two curves diverged, and they were concerned. They realize that the solution to this kind of problem is addressed by Big Data, and therefore presentations have been made by HP, IBM, and now us, Geophysical Insights.

The beauty of web information access today is that it is non-confrontational. For a well-posed question, there are often many potential answers. These are ranked in decreasing order of importance and suspected relevance.  We’ll be able to do the same with seismic data.

So what we are building in Paradise® today is a workbench for practitioners. The process of neural network analysis is not run once to deliver some magic answer.  It is run many times, with each run under the careful eye of an interpreter.  And our Geophysical Insights geoscientists say that the most challenging part of learning to use this new technology is to appreciate the results. The more seasoned the interpreter, the higher the likelihood that the results of machine learning will have some meaning in the eyes of the interpreter. These new tools are part of a new kind of seismic interpretation where fresh eyes bring new insight. Sometimes significant results are not obvious, but often the obvious oil & gas prospects have already been evaluated. Subtle but significant results are valuable too, but only if the drill proves out a promising prediction.

Dr. Tom Smith

President/CEO | Geophysical Insights

Dr. Tom Smith received a BS and MS degree in Geology from Iowa State University. His graduate research focused on a shallow refraction investigation of the Manson astrobleme. In 1971, he joined Chevron Geophysical as a processing geophysicist but resigned in 1980 to complete his doctoral studies in 3D modeling and migration at the Seismic Acoustics Lab at the University of Houston. Upon graduation with the Ph.D. in Geophysics in 1981, he started a geophysical consulting practice and taught seminars in seismic interpretation, seismic acquisition and seismic processing. Dr. Smith founded Seismic Micro-Technology in 1984 to develop PC software to support training workshops, which subsequently led to the development of the KINGDOM software suite for seismic interpretation.

The Society of Exploration Geophysicists (SEG) recognized Dr. Smith’s work with the SEG Enterprise Award in 2000, and in 2010, the Geophysical Society of Houston (GSH) awarded him an Honorary Membership. Iowa State University has recognized Dr. Smith throughout his career with the Distinguished Alumnus Lecturer Award in 1996, the Citation of Merit for National and International Recognition in 2002, and the highest alumni honor in 2015, the Distinguished Alumni Award.

In 2009, Dr. Smith founded Geophysical Insights, where he leads a team of geophysicists and computer scientists in developing advanced technologies for fundamental geophysical problems. Following over 3 years of development, Geophysical Insights launched the Paradise® multi-attribute analysis software, which uses machine learning and pattern recognition to extract greater information from seismic data. Dr. Smith has been a member of the SEG since 1967 and is also a member of the GSH, HGS, EAGE, SIPES, AAPG, Sigma Xi, SSA, and AGU. Dr. Smith served as Chairman of the SEG Foundation from 2010 to 2013. On January 25, 2016, he was recognized by the Houston Geological Society (HGS) as a geophysicist who has made significant contributions to the field of geology.