Geophysical Insights Announces Call for Abstracts – University Challenge


Geophysical Insights – University Challenge Topics

Call for Abstracts

The following “Challenge Topics” are offered to universities that are part of the Paradise University Program. Those universities are encouraged to pursue one or more of the topics below in their research work with Paradise® and related interpretation technologies. Students interested in researching and publishing on one or more of these topics are welcome to submit an abstract to Geophysical Insights, including an explanation of their interest in the topic. Geophysical Insights management will select the best abstract per Challenge Topic and provide a grant of $1,000 to each student upon completion of the research work. Students who undertake the research can count on additional forms of support from Geophysical Insights, including:

    • Potential job interview after graduation
    • Special recognition at the Geophysical Insights booth at a future SEG
    • Occasional collaboration via web meeting, email, or phone with a senior geoscientist
    • Inclusion in invitations to webinars hosted by Geophysical Insights on geoscience topics

Challenge Research Topics

Develop a geophysical basis for the identification of thin beds below classic seismic tuning

The research on this topic will investigate applications of new levels of seismic resolution afforded by multi-attribute Self-Organizing Maps (SOM), the unsupervised machine learning process in the Paradise software. The mathematical basis of detecting events below classical seismic tuning through simultaneous multi-attribute analysis – using machine learning – has been reported by Smith (2017) in an abstract submitted to SEG 2018. (The abstract has subsequently been placed online as a white paper resource.) Examples of thin-bed resolution have been documented in a Frio onshore Texas reservoir and in the Texas Eagle Ford Shale by Roden et al. (2017). The researcher is therefore challenged to develop a better understanding of the physical basis for the resolution of events below seismic tuning versus results from wavelet-based methods. Additional empirical results on the detection of thin beds are also welcome. This approach has wide potential for both exploration and development in the interpretation of facies and stratigraphy, with impact on reserve/resource calculations. For unconventional plays, thin-bed delineation will have a significant influence on directional drilling programs.
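
As background for this topic, the core SOM mechanics can be sketched in a few lines. The following is a minimal, illustrative Python/NumPy implementation of a 1D SOM over multi-attribute samples; it is not the Paradise implementation, and the function names and parameters are hypothetical:

```python
import numpy as np

def train_som(samples, n_neurons=8, epochs=50, lr0=0.5, seed=0):
    """Minimal 1D self-organizing map over multi-attribute seismic samples.

    samples: (n_samples, n_attributes) array; each row is one voxel's
    attribute vector. Returns the trained neuron weight vectors.
    """
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(n_neurons, samples.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                        # decaying learning rate
        sigma = max(n_neurons / 2 * (1 - epoch / epochs), 0.5)   # shrinking neighborhood
        for x in samples:
            # winning neuron = closest weight vector in attribute space
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # pull the winner and its topological neighbors toward the sample
            dist = np.abs(np.arange(n_neurons) - bmu)
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def classify(samples, weights):
    """Assign each sample to its winning neuron (its SOM class)."""
    d = np.linalg.norm(samples[:, None, :] - weights[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```

Running `classify` over every voxel of an attribute volume yields the multi-attribute classification volume; the resolution argument above rests on the fact that each voxel is classified from several attributes simultaneously rather than from a single wavelet-based trace.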

Determine the effectiveness of ‘machine learning’ determined geobodies in estimating reserves/resources and reservoir properties

The Paradise software has the capability of isolating and quantifying geobodies that result from a SOM machine learning process. Initial studies conducted with the technology suggest that the estimated reservoir volume approximates what is realized through the life of the field. This Challenge is to apply the geobody tool in Paradise, along with other reservoir modeling techniques and field data, to determine the effectiveness of geobodies in estimating reserves. If this proves correct, reserves could be estimated from geobodies early in the lifecycle of the field, saving engineering time while reducing risk.

Corroborate SOM classification results to well logs or lithofacies

A challenge to cluster-based classification techniques is corroborating well log curves to lithofacies. Up to this point, such corroboration has been an iterative process of running different neural configurations and visually comparing each classification result to “ground truth”. Some geoscientists (results yet to be published) have used bivariate statistical analysis from petrophysical well logs in combination with the SOM classification results to develop a representation of the static reservoir properties, including reservoir distribution and storage capacity. The challenge is to develop a methodology incorporating SOM seismic results with lithofacies determination from well logs.

Explore the significance of SOM low-probability anomalies (DHIs, anomalous features, etc.)

In addition to a standard classification volume resulting from a SOM analysis, Paradise also produces a “Probability” volume that is composed of a probability value at each voxel for a given neural class (neuron). This measure gauges the consistency of a feature with the surrounding region. Direct Hydrocarbon Indicators (DHIs) tend to be identified in the Paradise software as “low probability” or “anomalous” events because their properties are often inconsistent with those of the region. These SOM low-probability features have been documented by Roden et al. (2015) and Roden and Chen (2017). However, the Probability volume changes with the size of the region analyzed, and so does its expression of DHIs and anomalous features. This Challenge is to determine the effectiveness of using the probability measure from a SOM result as a valid gauge of DHIs, and to set out the relationships among the optimum neural configuration, the size of the region, and the extent of the DHIs.
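
One plausible way to derive such a per-voxel probability is sketched below for illustration only. This is an assumed rank-based formulation, not the actual Paradise algorithm: samples that sit far from their winning neuron, relative to the other members of the same class, receive low probability and are flagged as anomalous.

```python
import numpy as np

def som_probability(samples, weights):
    """Assign each sample a 'probability' of fit to its winning neuron.

    Samples far from their neuron (in attribute space), relative to the
    other members of the same class, receive values near 0, flagging
    anomalous features such as potential DHIs.
    """
    d = np.linalg.norm(samples[:, None, :] - weights[None, :, :], axis=2)
    winner = np.argmin(d, axis=1)
    best = d[np.arange(len(samples)), winner]
    prob = np.empty(len(samples))
    for k in np.unique(winner):
        members = winner == k
        # rank-based score within the class: 1.0 = closest fit to the
        # neuron, values near 0 = most anomalous member of the class
        ranks = best[members].argsort().argsort()
        prob[members] = 1.0 - ranks / max(members.sum() - 1, 1)
    return prob
```

With this formulation the dependence on region size noted above is explicit: enlarging the analyzed region changes which samples fall in each class and therefore changes every sample's rank.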

Map detailed facies distribution from SOM results

SOM results have proven to provide detailed information on the delineation and distribution of facies in essentially any geologic setting (Roden et al., 2015; Roden and Santogrossi, 2017; Santogrossi, 2017). Due to the high-resolution output of an appropriate SOM analysis, individual facies units can often be defined in much more detail than with conventional interpretation approaches. Research topics should be related to determining facies distribution in different geological environments utilizing the SOM process, available well log curves, and regional knowledge of stratigraphy.

For more information on Paradise or the University Challenge Program, please contact:

Hal Green
Email: [email protected]
Mobile:  713.480.2260

Video: Leveraging Deep Learning in Extracting Features of Interest from Seismic Data

Abstract:

Mapping and extracting features of interest is one of the most important objectives in seismic data interpretation. Due to the complexity of seismic data, geologic features identified by interpreters on seismic data using visualization techniques are often challenging to extract. With the rapid development in GPU computing power and the success obtained in computer vision, deep learning techniques, represented by convolutional neural networks (CNN), have started to entice seismic interpreters in various applications. The main advantages of CNN over other supervised machine learning methods are its spatial awareness and automatic attribute extraction. The high flexibility in CNN architecture enables researchers to design different CNN models to identify different features of interest. In this webinar, using several seismic surveys acquired from different regions, I will discuss three CNN applications in seismic interpretation: seismic facies classification, fault detection, and channel extraction. Seismic facies classification aims at classifying seismic data into several user-defined, distinct facies of interest. Conventional machine learning methods often produce a highly fragmented facies classification result, which requires a considerable amount of post-editing before it can be used as geobodies. In the first application, I will demonstrate that a properly built CNN model can generate seismic facies with higher purity and continuity. In the second application, I deploy a CNN model built for fault detection which, compared with traditional seismic attributes, provides smooth fault images and robust suppression of noise. The third application demonstrates the effectiveness of extracting large-scale channels using CNN. These examples demonstrate that CNN models are capable of capturing the complex reflection patterns in seismic data, providing clean images of geologic features of interest, while also carrying a low computational cost.

Tao Zhao

Research Geophysicist | Geophysical Insights

TAO ZHAO joined Geophysical Insights in 2017. As a Research Geophysicist, Dr. Zhao develops and applies shallow and deep machine learning techniques on seismic and well log data, and advances multiattribute seismic interpretation workflows. He received a B.S. in Exploration Geophysics from the China University of Petroleum in 2011, an M.S. in Geophysics from the University of Tulsa in 2013, and a Ph.D. in Geophysics from the University of Oklahoma in 2017. During his Ph.D. work at the University of Oklahoma, Dr. Zhao was an active member of the Attribute-Assisted Seismic Processing and Interpretation (AASPI) Consortium, developing pattern recognition and seismic attribute algorithms.

Webinar: Leveraging Deep Learning in Extracting Features of Interest from Seismic Data


20 March 2019

We are delighted to kick off our quarterly machine learning webinar series this year with a presentation on applications of Convolutional Neural Networks (CNN) to seismic interpretation by Dr. Tao Zhao, Sr. Research Geophysicist with Geophysical Insights in Houston, Texas. Enabled by the development of GPU computing, CNN, a form of deep learning, is proving to be an effective tool for many pattern recognition problems. Dr. Zhao will focus on three important applications of CNN that can have an immediate impact on interpretation workflows:

  • Seismic facies classification
  • Channel extraction
  • Fault detection

The webinar is free and runs an hour, including time for questions and answers at the end of the presentation.

Register today to attend the free webinar on Applications of Convolutional Neural Networks to Seismic Interpretation by Dr. Tao Zhao via the link below. Upon registering, you will receive a confirmation and details about how to log into the event. See below for the time of the webinar in your region.


Seismic Facies Classification Using Deep Convolutional Neural Networks


By Tao Zhao
Published with permission: SEG International Exposition and 88th Annual Meeting
October 2018

Summary

Convolutional neural networks (CNNs) are a supervised learning technique that can be applied directly to amplitude data for seismic data classification. The high flexibility in CNN architecture enables researchers to design different models for specific problems. In this study, I introduce an encoder-decoder CNN model for seismic facies classification, which classifies all samples in a seismic line simultaneously and provides superior seismic facies quality compared with traditional patch-based CNN methods. I compare the encoder-decoder model with a traditional patch-based model to assess the usability of both CNN architectures.

Introduction

With the rapid development in GPU computing and the success obtained in the computer vision domain, deep learning techniques, represented by convolutional neural networks (CNNs), have started to entice seismic interpreters in the application of supervised seismic facies classification. A comprehensive review of deep learning techniques is provided in LeCun et al. (2015). Although still in its infancy, CNN-based seismic classification has been successfully applied on both prestack (Araya-Polo et al., 2017) and poststack (Waldeland and Solberg, 2017; Huang et al., 2017; Lewis and Vigh, 2017) data for fault and salt interpretation, identifying different wave characteristics (Serfaty et al., 2017), as well as estimating velocity models (Araya-Polo et al., 2018).

The main advantages of CNN over other supervised classification methods are its spatial awareness and automatic feature extraction. For image classification problems, rather than using the intensity values at each pixel individually, CNN analyzes the patterns among pixels in an image and automatically generates features (in seismic data, attributes) suitable for classification. Because seismic data are 3D tomographic images, we would expect CNN to be naturally adaptable to seismic data classification. However, there are some distinct characteristics of seismic classification that make it more challenging than other image classification problems. First, classical image classification aims at distinguishing different images, while seismic classification aims at distinguishing different geological objects within the same image. Therefore, from an image processing point of view, seismic classification is really a segmentation problem (partitioning an image into blocky pixel shapes with a coarser set of colors). Second, training data for seismic classification are much sparser compared with classical image classification problems, for which massive data are publicly available. Third, in seismic data, all features are represented by different patterns of reflectors, and the boundaries between different features are rarely explicitly defined. In contrast, features in an image from computer artwork or photography are usually well defined. Finally, because of the uncertainty in seismic data and the nature of manual interpretation, the training data in seismic classification are always contaminated by noise.

To address the first challenge, most, if not all, published studies to date on CNN-based seismic facies classification perform classification on small patches of data to infer the class label of the seismic sample at the patch center. In this fashion, seismic facies classification is done by traversing through patches centered at every sample in a seismic volume. An alternative approach, although less discussed, is to use CNN models designed for image segmentation tasks (Long et al., 2015; Badrinarayanan et al., 2017; Chen et al., 2018) to obtain sample-level labels for a 2D profile (e.g., an inline) simultaneously, and then traverse through all 2D profiles in a volume.
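
The difference between the two inference patterns can be sketched as follows, with `model` standing in for a trained classifier (the function names and stand-in models are illustrative, not from the study):

```python
import numpy as np

def predict_patch_based(section, model, n=16):
    """Patch-based inference: one forward pass per sample, each on an
    n-by-n window centered at that sample (edge samples skipped here
    for brevity). `model` maps a patch to a single class label."""
    h, w = section.shape
    r = n // 2
    labels = np.zeros((h, w), dtype=int)
    for i in range(r, h - r):
        for j in range(r, w - r):
            labels[i, j] = model(section[i - r:i + r, j - r:j + r])
    return labels

def predict_encoder_decoder(section, model):
    """Segmentation-style inference: a single forward pass labels the
    whole 2D profile at once. `model` maps a section to a label section."""
    return model(section)
```

The nested loop in the first function is exactly the per-sample traversal described above; the second function is called once per 2D profile.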

In this study, I use an encoder-decoder CNN model as an implementation of the aforementioned second approach. I apply both the encoder-decoder model and patch-based model to seismic facies classification using data from the North Sea, with the objective of demonstrating the strengths and weaknesses of the two CNN models. I conclude that the encoder-decoder model provides much better classification quality, whereas the patch-based model is more flexible on training data, possibly making it easier to use in production.

The Two Convolutional Neural Networks (CNN) Models

Patch-based model

A basic patch-based model consists of several convolutional layers, pooling (downsampling) layers, and fully-connected layers. For an input image (for seismic data, amplitudes in a small 3D window), a CNN model first automatically extracts several high-level abstractions of the image (similar to seismic attributes) using the convolutional and pooling layers, then classifies the extracted attributes using the fully-connected layers, which are similar to traditional multilayer perceptron networks. The output from the network is a single value representing the facies label of the seismic sample at the center of the input patch. An example of a patch-based model architecture is provided in Figure 1a. In this example, the network is employed to classify salt versus non-salt from seismic amplitude in the SEAM synthetic data (Fehler and Larner, 2008). One input instance is a small patch of data bounded by the red box, and the corresponding output is a class label for this whole patch, which is then assigned to the sample at the patch center. The sample marked as the red dot is classified as non-salt.
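
The conv → pool → fully-connected flow of such a patch-based model can be illustrated with a minimal single-channel NumPy sketch (2D rather than 3D for brevity; the kernel and weight shapes are hypothetical):

```python
import numpy as np

def conv2d_valid(x, k):
    """Single-channel 'valid' convolution (really cross-correlation,
    as in most deep learning frameworks)."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2(x):
    """2x2 max pooling with stride 2."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def patch_forward(patch, kernel, dense_w, dense_b):
    """conv -> ReLU -> pool -> flatten -> dense: one label per patch."""
    a = np.maximum(conv2d_valid(patch, kernel), 0.0)   # attribute extraction
    p = maxpool2(a)                                    # downsampling
    logits = p.ravel() @ dense_w + dense_b             # fully-connected stage
    return int(np.argmax(logits))                      # label for the patch center
```

Note the output is a single integer per patch, which is why a full section requires one forward pass per sample.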


Figure 1. Sketches for CNN architecture of a) 2D patch-based model and b) encoder-decoder model. In the 2D patch-based model, each input data instance is a small 2D patch of seismic amplitude centered at the sample to be classified. The corresponding output is then a class label for the whole 2D patch (in this case, non-salt), which is usually assigned to the sample at the center. In the encoder-decoder model, each input data instance is a whole inline (or crossline/time slice) of seismic amplitude. The corresponding output is a whole line of class labels, so that each sample is assigned a label (in this case, some samples are salt and others are non-salt). Different types of layers are denoted in different colors, with layer types marked at their first appearance in the network. The size of the cuboids approximately represents the output size of each layer.

Encoder-decoder model

Encoder-decoder is a popular network structure for tackling image segmentation tasks. Encoder-decoder models share a similar idea: first extract high-level abstractions of the input images using convolutional layers, then recover sample-level class labels by “deconvolution” operations. Chen et al. (2018) introduce a current state-of-the-art encoder-decoder model while concisely reviewing some popular predecessors. An example of an encoder-decoder model architecture is provided in Figure 1b. Similar to the patch-based example, this encoder-decoder network is employed to classify salt versus non-salt from seismic amplitude in the SEAM synthetic data. Unlike the patch-based network, in the encoder-decoder network one input instance is a whole line of seismic amplitude, and the corresponding output is a whole line of class labels of the same dimension as the input data. In this case, all samples in the middle of the line are classified as salt (marked in red), and the other samples are classified as non-salt (marked in white), with minimal error.
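
The defining property of the encoder-decoder structure, that the output label grid has the same dimensions as the input, can be demonstrated with a toy NumPy round-trip. Nearest-neighbor upsampling stands in for the learned “deconvolution”, and the thresholding “labels” are purely illustrative:

```python
import numpy as np

def downsample2(x):
    """Encoder side: 2x2 average pooling halves each spatial dimension."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Decoder side: nearest-neighbor upsampling (a stand-in for learned
    'deconvolution') doubles each spatial dimension."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def encoder_decoder_shape_demo(section):
    """Round-trip through two encode/decode levels: the output has the
    same dimensions as the input, so every sample gets its own label."""
    z = downsample2(downsample2(section))    # coarse abstraction
    y = upsample2(upsample2(z))              # recover the sample-level grid
    return (y > section.mean()).astype(int)  # toy per-sample labels
```

A real encoder-decoder interleaves convolutions with these resampling steps, but the shape bookkeeping is the same: one input line in, one line of labels out.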

Application of the Two CNN Models

For demonstration purposes, I use the F3 seismic survey acquired in the North Sea, offshore Netherlands, which is freely accessible to the geoscience research community. In this study, I am interested in automatically extracting seismic facies that have specific seismic amplitude patterns. To remove potential disagreement on the geological meaning of the facies to extract, I name the facies purely based on their reflection characteristics. Table 1 provides a list of the extracted facies. There are eight seismic facies with distinct amplitude patterns; a ninth facies (“everything else”) is used for samples not belonging to the eight target facies.

Table 1. List of extracted seismic facies.

Facies number   Facies name
1               Varies amplitude steeply dipping
2               Random
3               Low coherence
4               Low amplitude deformed
5               Low amplitude dipping
6               High amplitude deformed
7               Moderate amplitude continuous
8               Chaotic
0               Everything else

To generate training data for the seismic facies listed above, different picking scenarios are employed to accommodate the different input data formats required by the two CNN models (small 3D patches versus whole 2D lines). For the patch-based model, 3D patches of seismic amplitude data are extracted around seed points within user-defined polygons. Approximately 400,000 3D patches of size 65×65×65 are generated for the patch-based model, which is a reasonable amount for seismic data of this size. Figure 2a shows an example line on which seed point locations are defined in the co-rendered polygons.

The encoder-decoder model requires much more effort to generate labeled data. I manually interpret the target facies on 40 inlines across the seismic survey and use these for building the network. Although the total number of seismic samples in 40 lines is enormous, the encoder-decoder model only sees them as 40 input instances, which is in fact a very small training set for a CNN. Figure 2b shows an interpreted line used in training the network.

In both tests, I randomly use 90% of the generated training data to train the network and the remaining 10% for testing. On an Nvidia Quadro M5000 GPU with 8GB memory, the patch-based model takes about 30 minutes to converge, whereas the encoder-decoder model needs about 500 minutes. Besides training faster, the patch-based model also has a higher test accuracy at almost 100% (99.9988%, to be exact) versus 94.1% for the encoder-decoder model. However, this accuracy measurement can be misleading. For a patch-based model, when picking the training and testing data, interpreters usually pick the most representative samples of each facies, those in which they have the most confidence, resulting in high-quality training (and testing) data that are less noisy; most of the ambiguous samples that are challenging for the classifier are excluded from testing. In contrast, to use an encoder-decoder model, interpreters have to interpret all the target facies in a training line. For example, if the target is faults, one needs to pick all faults in a training line, otherwise unlabeled faults will be considered “non-fault” and confuse the classifier. Therefore, interpreters have to make some not-so-confident interpretations when generating training and testing data. Figures 2c and 2d show seismic facies predicted from the two CNN models on the same line shown in Figures 2a and 2b. We observe better-defined facies from the encoder-decoder model compared with the patch-based model.
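
The random 90%/10% hold-out described above can be sketched as follows (an assumed implementation, not the code used in the study):

```python
import numpy as np

def train_test_split(n_instances, test_fraction=0.1, seed=0):
    """Randomly hold out a fraction of instances for testing, as in the
    90%/10% split applied to both CNN models. Returns index arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_instances)           # shuffle instance indices
    n_test = max(1, int(round(n_instances * test_fraction)))
    return idx[n_test:], idx[:n_test]            # (train indices, test indices)
```

For the patch-based model an "instance" is one 3D patch (so roughly 360,000 train / 40,000 test); for the encoder-decoder model it is one interpreted line, which is why 40 lines make such a small training set.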

Figure 3 shows prediction results from the two networks on a line away from the training lines, and Figure 4 shows prediction results on a crossline. As with the predictions on the training line, compared with the patch-based model, the encoder-decoder model provides facies as cleaner geobodies that require much less post-editing for regional stratigraphic classification (Figure 5). This can be attributed to the encoder-decoder model being able to capture the large-scale spatial arrangement of facies, whereas the patch-based model only senses patterns in small 3D windows. To form such windows, the patch-based model also needs to pad, or simply skip, samples close to the edge of a 3D seismic volume. Moreover, although training is much faster for a patch-based model, its prediction stage is very computationally intensive, because it processes a data volume N×N×N times the size of the original seismic volume (where N is the patch size along each dimension). In this study, the patch-based method takes about 400 seconds to predict a line, compared with less than 1 second for the encoder-decoder model.
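
The prediction-cost argument is simple arithmetic: every sample in a line drags an N×N×N patch through the network, so the data actually processed is N³ times the line itself. A sketch, with a hypothetical line size:

```python
def patch_prediction_cost(n_samples_per_line, patch_size):
    """Samples pushed through the network when predicting one line with
    a patch-based model: each of the line's samples is classified from
    its own patch_size**3 window."""
    return n_samples_per_line * patch_size ** 3

# A hypothetical 500-trace x 400-sample line classified with 65^3 patches:
line_samples = 500 * 400
ratio = patch_prediction_cost(line_samples, 65) / line_samples   # = 65^3
```

With N = 65 the ratio is 274,625, which is consistent with the roughly 400-second versus sub-second prediction times reported above.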

Conclusion

In this study, I compared two types of CNN models in the application of seismic facies classification. The more commonly used patch-based model requires much less effort in generating labeled data, but its classification result is suboptimal compared with the encoder-decoder model, and its prediction stage can be very time consuming. The encoder-decoder model generates a superior classification result at near real-time speed, at the expense of more tedious labeled-data picking and a longer training time.

Acknowledgements

The author thanks Geophysical Insights for permission to publish this work, dGB Earth Sciences for providing the F3 North Sea seismic data to the public, and ConocoPhillips for sharing the MalenoV project for public use, which was referenced when generating the training data. The CNN models discussed in this study are implemented in TensorFlow, an open-source library from Google.

Figure 2. Example of seismic amplitude co-rendered with training data picked on inline 340 used for a) patch-based model and b) encoder-decoder model. The prediction result from c) patch-based model, and d) from the encoder-decoder model. Target facies are colored in colder to warmer colors in the order shown in Table 1. Compare Facies 5, 6 and 8.

Figure 3. Prediction results from the two networks on a line away from the training lines. a) Predicted facies from the patch-based model. b) Predicted facies from the encoder-decoder based model. Target facies are colored in colder to warmer colors in the order shown in Table 1. The yellow dotted line marks the location of the crossline shown in Figure 4. Compare Facies 1, 5 and 8.

Figure 4. Prediction results from the two networks on a crossline. a) Predicted facies from the patch-based model. b) Predicted facies from the encoder-decoder model. Target facies are colored in colder to warmer colors in the order shown in Table 1. The yellow dotted lines mark the location of the inlines shown in Figure 2 and 3. Compare Facies 5 and 8.

Figure 5. Volumetric display of the predicted facies from the encoder-decoder model. The facies volume is visually cropped for display purpose. An inline and a crossline of seismic amplitude co-rendered with predicted facies are also displayed to show a broader distribution of the facies. Target facies are colored in colder to warmer colors in the order shown in Table 1.

References

Araya-Polo, M., T. Dahlke, C. Frogner, C. Zhang, T. Poggio, and D. Hohl, 2017, Automated fault detection without seismic processing: The Leading Edge, 36, 208–214.

Araya-Polo, M., J. Jennings, A. Adler, and T. Dahlke, 2018, Deep-learning tomography: The Leading Edge, 37, 58–66.

Badrinarayanan, V., A. Kendall, and R. Cipolla, 2017, SegNet: A deep convolutional encoder-decoder architecture for image segmentation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 39, 2481–2495.

Chen, L. C., G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, 2018, DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs: IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 834–848.

Chen, L. C., Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, 2018, Encoder-decoder with atrous separable convolution for semantic image segmentation: arXiv preprint, arXiv:1802.02611v2.

Fehler, M., and K. Larner, 2008, SEG advanced modeling (SEAM): Phase I first year update: The Leading Edge, 27, 1006–1007.

Huang, L., X. Dong, and T. E. Clee, 2017, A scalable deep learning platform for identifying geologic features from seismic attributes: The Leading Edge, 36, 249–256.

LeCun, Y., Y. Bengio, and G. Hinton, 2015, Deep learning: Nature, 521, 436–444.

Lewis, W., and D. Vigh, 2017, Deep learning prior models from seismic images for full-waveform inversion: 87th Annual International Meeting, SEG, Expanded Abstracts, 1512–1517.

Long, J., E. Shelhamer, and T. Darrell, 2015, Fully convolutional networks for semantic segmentation: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440.

Serfaty, Y., L. Itan, D. Chase, and Z. Koren, 2017, Wavefield separation via principle component analysis and deep learning in the local angle domain: 87th Annual International Meeting, SEG, Expanded Abstracts, 991–995.

Waldeland, A. U., and A. H. S. Solberg, 2017, Salt classification using deep learning: 79th Annual International Conference and Exhibition, EAGE, Extended Abstracts, Tu-B4-12.

A Fault Detection Workflow Using Deep Learning and Image Processing


By Tao Zhao
Published with permission: SEG International Exposition and 88th Annual Meeting
October 2018

Summary

Within the last couple of years, deep learning techniques, represented by convolutional neural networks (CNNs), have been applied to fault detection problems on seismic data with impressive results. As is true for all supervised learning techniques, the performance of a CNN fault detector depends highly on the training data, and post-classification regularization may greatly improve the result. Sometimes a pure CNN-based fault detector that works perfectly on synthetic data may not perform well on field data. In this study, we investigate a fault detection workflow using both CNN and directional smoothing/sharpening. Applying the workflow to a realistic synthetic fault model based on the SEAM (SEG Advanced Modeling) model and to field data from the Great South Basin, offshore New Zealand, we demonstrate that the proposed fault detection workflow performs well on challenging synthetic and field data.

Introduction

Benefiting from their high flexibility in network architecture, convolutional neural networks (CNNs) are a supervised learning technique that can be designed to solve many challenging problems in exploration geophysics. Among these problems, detection of particular seismic facies of interest might be the most straightforward application of CNNs. The first published study applying CNNs to seismic data might be Waldeland and Solberg (2017), in which the authors used a CNN model to classify salt versus non-salt features in a seismic volume. At about the same time, Araya-Polo et al. (2017) and Huang et al. (2017) reported success in fault detection using CNN models.

From a computer vision perspective, faults in seismic data are a special group of edges. CNN has been applied to more general edge detection problems with great success (El-Sayed et al., 2013; Xie and Tu, 2015). However, faults in seismic data are fundamentally different from edges in images used in the computer vision domain. The regions separated by edges in a traditional computer vision image are relatively homogeneous, whereas in seismic data such regions are defined by patterns of reflectors. Moreover, not all edges in seismic data are faults. In practice, although they provide excellent fault images, traditional edge detection attributes such as coherence (Marfurt et al., 1999) are also sensitive to stratigraphic edges such as unconformities, channel banks, and karst collapses. Wu and Hale (2016) proposed a brilliant workflow for automatically extracting fault surfaces, in which a crucial step is computing the fault likelihood. CNN-based fault detection methods can be used as an alternative approach to generating such fault likelihood volumes, and the fault strike and dip can then be computed from the fault likelihood.

One drawback of supervised machine learning-based fault detection is its brute-force nature: instead of detecting faults following geological/geophysical principles, the detection depends purely on the training data. In reality, we will never have training data that covers all possible appearances of faults in seismic data, nor are our data noise-free. Therefore, although the raw output from the CNN classifier may adequately represent faults in synthetic data of simple structure and low noise, some post-processing steps are needed for the result to be useful on field data. Based on the traditional coherence attribute, Qi et al. (2017) introduced an image processing-based workflow to skeletonize faults. In this study, we regularize the raw output from a CNN fault detector with an image processing workflow built on Qi et al. (2017) to improve the fault images.

We use both realistic synthetic data and field data to investigate the effectiveness of the proposed workflow. The synthetic data should ideally be a good approximation of field data while providing full control over the parameter set. We build our synthetic data from the SEAM model (Fehler and Larner, 2008) by taking sub-volumes of the impedance model and inserting faults. After verifying the performance on the synthetic data, we then move on to field data acquired from the Great South Basin, offshore New Zealand, where extensive faulting occurs. Results on both synthetic and field data show the great potential of the proposed fault detection workflow, which provides very clean fault images.

Proposed Workflow

The proposed workflow starts with a CNN classifier, which is used to produce a raw image of faults. In this study, we adopt a 3D patch-based CNN model that classifies each seismic sample using samples within a 3D window. An example of the CNN architecture used in this study is provided in Figure 1. A basic patch-based CNN model consists of several convolutional layers, pooling (downsampling) layers, and fully-connected layers. Given a 3D patch of seismic amplitudes, a CNN model first automatically extracts several high-level abstractions of the image (similar to seismic attributes) using the convolutional and pooling layers, then classifies the extracted attributes using the fully-connected layers, which behave similarly to a traditional multilayer perceptron network. The output from the network is a single value representing the facies label of the seismic sample centered at the 3D patch. In this study, the label is binary, representing “fault” or “non-fault”.
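As a concrete illustration of such a patch-based classifier, the following sketch builds a small 3D CNN in TensorFlow/Keras (TensorFlow being the library noted in the acknowledgements). The patch size, layer widths, and training settings here are assumptions made for the sketch, not the configuration actually used in this study.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_patch_cnn(patch_shape=(16, 16, 16)):
    """Binary fault/non-fault classifier for a single 3D amplitude patch."""
    model = models.Sequential([
        layers.Input(shape=patch_shape + (1,)),       # single-channel amplitude patch
        layers.Conv3D(16, 3, activation="relu", padding="same"),  # convolutional layer
        layers.MaxPooling3D(2),                       # pooling (downsampling) layer
        layers.Conv3D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling3D(2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),          # fully-connected layers
        layers.Dense(1, activation="sigmoid"),        # fault probability of center sample
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = build_patch_cnn()
# One dummy patch in, one fault probability in [0, 1] out (untrained weights).
prob = model.predict(np.zeros((1, 16, 16, 16, 1)), verbose=0)
```

In practice the model would be trained with `model.fit` on labeled fault/non-fault patches, and then applied to a patch centered at every sample of the volume to produce the raw fault-probability image.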

Figure 1. Sketch of a 2D patch-based CNN architecture. In this demonstration case, each input data instance is a small 2D patch of seismic amplitude centered at the sample to be classified. The corresponding output is a class label for the patch (in this case, fault), which is usually assigned to the center sample. Different types of layers are denoted in different colors, with layer types marked at their first appearance in the network. The size of the cuboids approximately represents the output size of each layer.

We then use a suite of image processing techniques to improve the quality of the fault images. First, we use a directional Laplacian of Gaussian (LoG) filter (Machado et al., 2016) to enhance lineaments that are at a high angle to the layering reflectors and suppress anomalies close to the reflector dip, while calculating the dip, azimuth, and dip magnitude of the faults. Taking these data, we then apply a skeletonization step, redistributing the fault anomalies within a fault damage zone to the most likely fault plane. We then apply a threshold to generate a binary fault image. Optionally, if the result is still noisy, we can continue with a median filter to reduce random noise and iteratively perform the directional LoG filtering and skeletonization until we achieve a desirable result. Figure 2 summarizes the regularization workflow.
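The enhance/threshold/median-filter flow can be sketched with SciPy's image-processing routines. A plain (non-directional) Laplacian-of-Gaussian filter stands in for the directional LoG of Machado et al. (2016), and the skeletonization of Qi et al. (2017) is omitted, so this is only an illustration of the chain's structure, with assumed threshold and filter parameters.

```python
import numpy as np
from scipy import ndimage

def regularize(fault_prob, threshold=0.5, sigma=1.0, use_median=False):
    """Simplified regularization of a fault-probability volume."""
    # LoG responds strongly at sharp changes in the probability field;
    # subtracting it sharpens the fault anomalies.
    log = ndimage.gaussian_laplace(fault_prob, sigma=sigma)
    enhanced = fault_prob - log
    # Threshold to a binary fault image ("1" = fault, "0" = non-fault).
    binary = (enhanced > threshold).astype(np.uint8)
    if use_median:
        # Optional step: median filter to suppress remaining speckle noise.
        binary = ndimage.median_filter(binary, size=3)
    return binary

vol = np.zeros((32, 32, 32))
vol[:, 16, :] = 1.0                 # a synthetic one-voxel-thick "fault" plane
faults = regularize(vol)
```

In the actual workflow the LoG filter is directional, steered by the estimated fault dip and azimuth, and the loop over LoG filtering and skeletonization can be repeated until the fault image is acceptably clean.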

Synthetic Test

We first test the proposed workflow on synthetic data built from the SEAM model. To make the model a good approximation of real field data, we select a portion of the SEAM model where stacked channels and turbidites exist. We then randomly insert faults in the impedance model and convolve with a 40 Hz Ricker wavelet to generate seismic volumes. The parameters used in the random generation of five reverse faults in the 3D volume are provided in Table 1. Figure 3a shows one line from the generated synthetic data with faults highlighted in red. In this model, we observe strong layer deformation with amplitude changes along reflectors due to the turbidites in the model. Such synthetic data are therefore quite challenging for a fault detection algorithm, because of the existence of other types of discontinuities.
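The forward-modeling step, converting an impedance trace to a synthetic seismic trace via a 40 Hz Ricker wavelet, can be sketched as follows. The 2 ms sampling interval and wavelet length are assumptions for the sketch.

```python
import numpy as np

def ricker(f=40.0, dt=0.002, length=0.128):
    """Ricker wavelet of peak frequency f (Hz) sampled at interval dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def impedance_to_seismic(impedance, wavelet):
    """Normal-incidence reflectivity from impedance, convolved with the wavelet."""
    rc = np.diff(impedance) / (impedance[1:] + impedance[:-1])
    return np.convolve(rc, wavelet, mode="same")

imp = np.full(200, 3000.0)
imp[100:] = 4500.0                      # a single impedance contrast
trace = impedance_to_seismic(imp, ricker())
```

The single contrast produces one wavelet-shaped event centered at the interface; in the study, the faulted SEAM impedance sub-volumes are processed trace by trace in the same fashion.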

We randomly use 20% of the samples on the fault planes and approximately the same number of non-fault samples to train the CNN model. The total number of training samples is about 350,000, which represents less than 1% of the total samples in the seismic volume. Figure 3b shows the raw output from the CNN fault detector on the same line shown in Figure 3a. We observe that instead of sticks, faults appear as small zones. Also, as expected, there are some misclassifications where the data are quite challenging. We then perform the regularization steps, excluding the optional steps in Figure 2. Figure 3c shows the result after the directional LoG filter and skeletonization. Notice that these two steps have cleaned up much of the noise, and the faults are now thinner and more continuous. Finally, we apply a threshold to generate a fault map where faults are labeled as “1” and everywhere else as “0” (Figure 3d). Figure 4 shows the fault detection result on a less challenging line. We observe that the result on such a line is nearly perfect.
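The balanced sampling described above can be sketched in NumPy; the label volume, its size, and the random seed below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

labels = np.zeros((64, 64, 64), dtype=np.uint8)
labels[:, 30, :] = 1                    # stand-in for a labeled fault plane

# Flat indices of fault and non-fault samples in the volume.
fault_idx = np.flatnonzero(labels.ravel() == 1)
nonfault_idx = np.flatnonzero(labels.ravel() == 0)

# Take 20% of the fault samples and an equal number of non-fault samples,
# keeping the two classes balanced in the training set.
n = int(0.2 * fault_idx.size)
train_fault = rng.choice(fault_idx, size=n, replace=False)
train_nonfault = rng.choice(nonfault_idx, size=n, replace=False)
train_idx = np.concatenate([train_fault, train_nonfault])
```

Each selected index would then be expanded into the 3D amplitude patch centered at that sample before being fed to the CNN.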

Figure 2. The regularization workflow used to improve the fault images after CNN fault detection.

Fault attribute          Value range
Dip angle (degrees)      -15 to 15
Strike angle (degrees)   -25 to 25
Displacement (m)         25 to 75

Table 1. Parameter ranges used in generating faults in the synthetic model.

Field Data Test

We further verify the proposed workflow on field data from the Great South Basin, offshore New Zealand. The seismic data contain extensive faulting with complex geometry, as well as other types of coherence anomalies, as shown in Figure 5. In this case, we manually picked training data on five seismic lines for regions representing fault and non-fault. An example line is given in Figure 6. Although the training data cover only a very limited portion of the whole volume, we try to include the most representative samples for the two classes. On the field data, we use the whole regularization workflow, including the optional steps. Figure 7 gives the final output from the proposed workflow, together with the result of using coherence in lieu of the raw CNN output in the workflow. We observe that the result from the CNN plus regularization gives clean fault planes with very limited noise from other types of discontinuities.

Conclusion

In this study, we introduce a fault detection workflow using both CNN-based classification and image processing regularization. We are able to train a CNN classifier to be sensitive only to faults, which greatly reduces the mixing between faults and other discontinuities in the produced fault images. To improve the resolution and further suppress non-fault features in the raw fault images, we then use an image processing-based regularization workflow to enhance the fault planes. The proposed workflow shows great potential on both challenging synthetic data and field data.

Acknowledgements

The authors thank Geophysical Insights for the permission to publish this work. We thank New Zealand Petroleum and Minerals for providing the Great South Basin seismic data to the public. The CNN fault detector used in this study is implemented in TensorFlow, an open source library from Google. The authors also thank Gary Jones at Geophysical Insights for valuable discussions on the SEAM model.

Figure 3. Line A from the synthetic data showing seismic amplitude with a) artificially created faults highlighted in red; b) raw output from the CNN fault detector; c) CNN-detected faults after directional LoG and skeletonization; and d) final fault image after thresholding.

Figure 4. Line B from the synthetic data showing seismic amplitude co-rendered with a) randomly created faults highlighted in red and b) final result from the fault detection workflow, in which predicted faults are marked in red.

Figure 5. Coherence attribute along t = 1.492 s. Coherence shows discontinuities not limited to faults, posing challenges to obtaining fault-only images.

Figure 6. A vertical slice from the field seismic amplitude data with manually picked regions for training the CNN fault detector. Green regions represent fault and red regions represent non-fault.

References

Araya-Polo, M., T. Dahlke, C. Frogner, C. Zhang, T. Poggio, and D. Hohl, 2017, Automated fault detection without seismic processing: The Leading Edge, 36, 208–214.

El-Sayed, M. A., Y. A. Estaitia, and M. A. Khafagy, 2013, Automated edge detection using convolutional neural network: International Journal of Advanced Computer Science and Applications, 4, 11–17.

Fehler, M., and K. Larner, 2008, SEG Advanced Modeling (SEAM). Phase I first year update: The Leading Edge, 27, 1006–1007.

Huang, L., X. Dong, and T. E. Clee, 2017, A scalable deep learning platform for identifying geologic features from seismic attributes: The Leading Edge, 36, 249–256.

Machado, G., A. Alali, B. Hutchinson, O. Olorunsola, and K. J. Marfurt, 2016, Display and enhancement of volumetric fault images: Interpretation, 4, no. 1, SB51–SB61.

Marfurt, K. J., V. Sudhaker, A. Gersztenkorn, K. D. Crawford, and S. E. Nissen, 1999, Coherency calculations in the presence of structural dip: Geophysics, 64, 104–111.

Qi, J., G. Machado, and K. Marfurt, 2017, A workflow to skeletonize faults and stratigraphic features: Geophysics, 82, no. 4, O57–O70.

Waldeland, A. U., and A. H. S. Solberg, 2017, Salt classification using deep learning: 79th Annual International Conference and Exhibition, EAGE, Extended Abstracts, Tu-B4-12.

Wu, X., and D. Hale, 2016, 3D seismic image processing for faults: Geophysics, 81, no. 2, IM1–IM11.

Xie, S., and Z. Tu, 2015, Holistically-nested edge detection: Proceedings of the IEEE International Conference on Computer Vision, 1395–1403.