Net Reservoir Discrimination through Multi-Attribute Analysis at Single Sample Scale

By Jonathan Leal, Rafael Jerónimo, Fabian Rada, Reinaldo Viloria and Rocky Roden
Published with permission: First Break
Volume 37, September 2019

Abstract

A new approach has been applied to discriminate Net Reservoir using multi-attribute seismic analysis at single sample resolution, complemented by bivariate statistical analysis from petrophysical well logs. The combination of these techniques was used to calibrate the multi-attribute analysis to ground truth, thereby ensuring an accurate representation of the reservoir static properties and reducing the uncertainty related to reservoir distribution and storage capacity. Geographically, the study area is located in the south of Mexico. The reservoir rock consists of sandstones from the Upper Miocene age in a slope fan environment.

The first method in the process was the application of Principal Component Analysis (PCA), which was employed to identify the most prominent attributes for detecting lithological changes that might be associated with the Net Reservoir. The second method was the application of the Kohonen Self-Organizing Map (SOM) Neural Network Classification at voxel scale (i.e., sample rate and bin size dimensions from seismic data), instead of waveform shape classification. The sample-level analysis revealed significant new information from different seismic attributes, providing greater insight into the characteristics of the reservoir distribution in a shaly sandstone. The third method was a data analysis technique based on contingency tables and the Chi-Square test, which revealed relationships between two categorical variables (SOM volume neurons and Net Reservoir). Finally, a comparison between a SOM of simultaneous seismic inversion attributes and a SOM of traditional attributes was made, corroborating the delineated prospective areas. The authors found the SOM classification results beneficial to the refinement of the sedimentary model, more accurately identifying the lateral and vertical distribution of the facies of economic interest, enabling decisions for new well locations and reducing the uncertainty associated with field exploitation. However, the Lithological Contrast SOM results from traditional attributes showed a better level of detail than the seismic inversion SOM.

Introduction

Self-Organizing Maps (SOM) is an unsupervised neural network – a form of machine learning – that has been used in multi-attribute seismic analysis to extract more information from the seismic response than would be practical using only single attributes. The most common use is in automated facies mapping. It is expected that every neuron or group of neurons can be associated with a single depositional environment, the reservoir's lateral and vertical extension, porosity changes, or fluid content (Marroquín et al., 2009). Of course, the SOM results must be calibrated with available well logs. In this paper, the authors generated petrophysical labels to apply statistical validation techniques between well logs and SOM results. Based on the application of PCA to a larger set of attributes, a smaller, distilled set of attributes was classified using the SOM process to identify lithological changes in the reservoir (Roden et al., 2015).

A bivariate statistical approach was then conducted to reveal the relationship between two categorical variables: the individual neurons comprising the SOM classification volume and Net Reservoir determined from petrophysical properties (percentage of occurrence of each neuron versus Net Reservoir).

The Chi-Square test compares the behavior of the observed frequencies (Agresti, 2002) for each SOM neuron lithological contrast against the Net Reservoir variable (grouped in “Net Reservoir” and “no reservoir” categories). Additional data analysis was conducted to determine which neurons responded to the presence of hydrocarbons using box plots showing Water Saturation, Clay Volume, and Effective Porosity as Net Pay indicators. The combination of these methods demonstrated an effective means of identifying the approximate region of the reservoir.

About the Study Area

The reservoir rock consists of sandstones from the Upper Miocene age in a slope fan environment. These sandstones correspond to channel facies, and slope lobes constituted mainly of quartz and potassium feldspars cemented in calcareous material of medium maturity. The submarine slope fans were deposited at the beginning of the deceleration of the relative sea-level fall, and consist of complex deposits associated with gravitational mass movements.

Stratigraphy and Sedimentology

The stratigraphic chart comprises tertiary terrigenous rocks from Upper Miocene to Holocene. The litho-stratigraphic units are described in Table 1.

Table 1: Stratigraphic Epoch Chart of Study Area

 

Figure 1. Left: Regional depositional facies. Right: Electrofacies and theoretical model, Muti (1978).

Figure 1 (left) shows the facies distribution map of the sequence, corresponding to the first platform-basin system established in the region. The two dashed lines – one red and one dark brown – represent the platform edge at different times according to several regional integrated studies in the area. The predominant direction of contribution for studied Field W is south-north, which is consistent with the current regional sedimentary model. The field covers an area of approximately 46 km2 and is located in facies of distributary channels northeast of the main channel. The reservoir sand is also well-sorted and consolidated in a clay matrix, a texture thought to correspond to the middle portion of the turbidite system. The observed electrofacies of the reservoir are box-shaped in wells W-2, W-4, W-5, and W-6, derived from gamma ray logs and associated with facies of distributary channels that exhibit the highest average porosity. In contrast, wells W-3 and W-1 are different – associated with lobular facies – according to gamma ray logs. Figure 1 (right) shows a sedimentary scheme of submarine fans proposed by Muti (1978).

Petrophysics

The Stieber model was used to classify Clay Volume (VCL). Effective Porosity (PIGN) was obtained using the Neutron-Density model, and non-clay intergranular Water Saturation (SUWI) was determined with the Simandoux model, assuming a water salinity of 45,000 ppm. The petrophysical cut-off values used for Net Reservoir and Net Pay estimations were 0.45 (VCL), 0.10 (PIGN), and 0.65 (SUWI), respectively.
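As an illustration, the sample-wise application of these cut-offs can be sketched as follows; the log values below are hypothetical, and only the cut-off values come from the study:

```python
import numpy as np

# Cut-off values quoted in the text.
VCL_CUT, PIGN_CUT, SUWI_CUT = 0.45, 0.10, 0.65

# Hypothetical log samples (fractions), one value per depth sample.
vcl  = np.array([0.30, 0.50, 0.20, 0.40])  # Clay Volume
pign = np.array([0.15, 0.12, 0.08, 0.18])  # Effective Porosity
suwi = np.array([0.40, 0.30, 0.90, 0.70])  # Water Saturation

# Net Reservoir: clean enough and porous enough to store fluids.
net_reservoir = (vcl < VCL_CUT) & (pign > PIGN_CUT)
# Net Pay: Net Reservoir that is also hydrocarbon-bearing.
net_pay = net_reservoir & (suwi < SUWI_CUT)
```

This is the same flagging logic used later in the paper to build the discrete "Net Reservoir" / "not reservoir" well logs.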

Reservoir Information

The reservoir rock corresponds to sands with Net Pay thickness ranging from 9-12 m, porosity between 18-25%, average permeability of 8-15 mD, and Water Saturation of approximately 25%. The initial pressure was 790 kg / cm2 with the current pressure is 516 kg/cm2. The main problems affecting productivity in this volumetric reservoir are pressure drop, being the mechanism of displacement the rock-fluid expansion, and gas in solution. Additionally, there are sanding problems and asphaltene precipitation.

Methodology

Multidisciplinary information was collected and validated to carry out seismic multi-attribute analysis. Static and dynamic characterization studies were conducted in the study area, revealing the most relevant reservoir characteristics and yielding a better sense of the proposed drilling locations. At present, six wells have been drilled.

The original available seismic volume and associated gathers employed in the generation of multiple attributes and for simultaneous inversion were determined to be of adequate quality. At target depth, the dominant frequency approaches 14 Hz, and the interval velocity is close to 3,300 m/s. Therefore, the vertical seismic resolution is 58 m. The production sand has an average thickness of 13 m, so it cannot be resolved with conventional seismic amplitude data.
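The quoted vertical resolution follows from the quarter-wavelength (λ/4) criterion; a quick check with the stated values gives about 59 m, in line with the 58 m quoted above:

```python
f_dom = 14.0    # dominant frequency at target depth (Hz)
v_int = 3300.0  # interval velocity (m/s)

wavelength = v_int / f_dom     # ~236 m
resolution = wavelength / 4.0  # lambda/4 tuning thickness, ~59 m
```

Since the production sand averages only 13 m, it sits well below this tuning thickness, which is the motivation for the sample-scale multi-attribute analysis.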

Principal Component Analysis (PCA)

Principal Component Analysis (PCA) is one of the most common descriptive statistics procedures used to synthesize the information contained in a set of variables (volumes of seismic attributes) and to reduce the dimensionality of a problem. Applied to a collection of seismic attributes, PCA can be used to identify the seismic attributes that have the greatest “contribution,” based on the extent of their relative variance to a region of interest. Attributes identified through the use of PCA are responsive to specific geological features, e.g., lithological contrast, fracture zones, among others. The output of PCA is an Eigen spectrum that quantifies the relative contribution or energy of each seismic attribute to the studied characteristic.
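A minimal numpy sketch of this PCA workflow on a synthetic attribute matrix (the data here are random stand-ins for the study's attribute volumes, with one correlated pair injected so a dominant component emerges):

```python
import numpy as np

rng = np.random.default_rng(0)
# Rows are seismic samples (voxels), columns are attributes.
n_samples, n_attrs = 500, 8
X = rng.normal(size=(n_samples, n_attrs))
X[:, 0] = 2.0 * X[:, 1] + 0.1 * X[:, 0]  # make attributes 0 and 1 correlated

# Center, form the covariance matrix, and eigendecompose it.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]  # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# "Eigen spectrum": relative contribution (energy) of each component.
explained = eigvals / eigvals.sum()
# Attribute loadings on PC1 show which attributes dominate it.
loadings_pc1 = np.abs(eigvecs[:, 0])
```

In the paper's terms, attributes with a large loading on a high-energy principal component are the candidates carried forward to the SOM.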

PCA Applied for Lithological Contrast Detection

The PCA process was applied to the following attributes to identify those most significant for detecting lithological contrasts at the depth of interest: Thin Bed Indicator, Envelope, Instantaneous Frequency, Imaginary Part, Relative Acoustic Impedance, Sweetness, Amplitude, and Real Part. Of the entire seismic volume, only the voxels (seismic samples) in a time window delimited by the horizon of interest were analyzed, specifically 56 milliseconds above and 32 milliseconds below the horizon. The results are shown for each principal component. In this case, the criterion was to select those attributes whose maximum percentage contribution to a principal component was greater than or equal to 80%. Using this selection technique, the first five principal components were reviewed in the Eigen spectrum. In the end, six attributes from the first two principal components were selected (Figure 2).

Figure 2. PCA results for lithological contrast detection.

Simultaneous Classification of Seismic Attributes Using a Self-Organizing Maps (SOM) Neural Network (Voxel Scale)

The SOM method is an unsupervised classification process in that the network is trained from the input data alone. A SOM consists of components (vectors) called neurons or classes and input vectors that have a position on the map. The values are compared employing neurons that are capable of detecting groupings through training (machine learning) and mapping. The SOM process non-linearly maps the neurons to a two-dimensional, hexagonal or rectangular grid, describing a mapping from a larger space to a smaller one. The procedure for locating an input vector on the map is to find the neuron whose weight vector is closest (smallest metric distance) to the data-space vector. (This analysis accounted for seismic samples located within a time window covering several samples above and below the target horizon throughout the study area.) It is important to classify together attributes that share a common interpretive use, such as lithological indicators, fault delineation, among others. The SOM revealed patterns and identified natural organizational structures present in the data that are difficult to detect in any other way (Roden et al., 2015). Since the SOM classification used in this study is applied to individual samples (using sample rate and bin size from seismic data; Figure 2, lower right box), it detects features below conventional seismic resolution, in contrast with traditional wavelet-based classification methods.
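The training procedure described above can be sketched in a few lines of numpy. This is a generic Kohonen SOM (decaying learning rate and Gaussian neighborhood), not the specific implementation used in the study; grid size, decay schedule, and epochs are illustrative choices:

```python
import numpy as np

def train_som(data, rows=5, cols=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen SOM; each data row is a multi-attribute seismic sample."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(rows * cols, data.shape[1]))
    # Grid coordinates of each neuron, used by the neighborhood function.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)              # decaying learning rate
            sigma = sigma0 * (1 - step / n_steps) + 0.5  # shrinking neighborhood
            # Best-matching unit: neuron whose weight vector is closest to x.
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))           # neighborhood kernel
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

def classify(data, weights):
    # Winning neuron (class index) for every sample.
    return np.argmin(((data[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
```

The neighborhood kernel is what makes adjacent neurons on the 5x5 grid respond to similar data, the property exploited later by the 2D colormap visualization.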

SOM Classification for Lithological Contrast Detection

The following six attributes were input to the SOM process with 25 classes (5 X 5) stipulated as the desired output: Envelope, Hilbert, Relative Acoustic Impedance, Sweetness, Amplitude, and Real Part.

As in the PCA analysis, the SOM was delimited to seismic samples (voxels) in a time window following the horizon of interest, specifically 56 milliseconds above to 32 milliseconds below. The resulting SOM classification volume was examined with several visualization and statistical analysis techniques to associate SOM classification patterns with reservoir rock.

3D and Plan Views

One way of identifying patterns or trends coherent with the sedimentary model of the area is visualizing all samples grouped by each neuron in 3D and plan views using a stratal-slicing technique throughout the reservoir. The Kohonen SOM and the 2D Colormap in Figure 3 (lower right) ensure that the characteristics of neighboring neurons are similar. The upper part of Figure 3 shows groupings classified by all 5x5 (25) neurons comprising the neural network, while the lower part shows groupings interpreted to be associated with the reservoir, classified by the few neurons consistent with the regional sedimentary model, i.e., neurons N12, N13, N16, N17, N22, and N23.

Figure 3. Plan view with geological significance preliminary geobodies from Lithological Contrast SOM. Below: only neurons associated with reservoir are shown.

Vertical Seismic Section Showing Lithological Contrast SOM

The observed lithology in the reservoir sand is predominantly clayey sandstone. A discrete log for Net Reservoir was generated to calibrate the results of the Lithological Contrast SOM, using cut-off values for Clay Volume and Effective Porosity. Figure 4 shows the SOM classification of Lithological Contrast with available well data and a plan view. The samples grouped by neurons N17, N21, and N22 match the Net Reservoir discrete logs. It is notable that only well W-3 (a minor producer) intersected the samples grouped by neuron N17 (light blue); the rest of the wells intersected only neurons N21 and N22. It is important to note that these features are not observed on the conventional seismic amplitude data (wiggle traces).

Figure 4. Vertical section composed by the SOM of Lithological Contrast, Amplitude attribute (wiggle), and Net Reservoir discrete property along wells.

Stratigraphic Well Section

A cross-section containing the wells (Figure 5) shows logs of Gamma Ray, Clay Volume, perforations, resistivity, Effective Porosity, Net Reservoir with lithological contrast SOM classification, and Net Pay.
The results of the SOM were compared by observation with discrete well log data, relating specific neurons to the reservoir. At target zone depth, only the neurons N16, N17, N21, and N22 are present. It is noteworthy that only the W-3 well (a minor producer) intersects clusters formed by neuron N17 (light blue); the rest of the wells intersect neurons N16, N21, N22, and N23.

Statistical Analysis Vertical Proportion Curve (VPC)

Traditionally, Vertical Proportion Curves (VPC) are qualitative and quantitative tools used by some sedimentologists to define succession, division, and variability of sedimentary sequences from well data, since logs describe vertical and lateral evolution of facies (Viloria et al., 2002). A VPC can be modeled as an accumulative histogram where the bars represent the facies proportion present at a given level in a stratigraphic unit. As part of the quality control and revision of the SOM classification volume for Lithological Contrasts, this statistical technique was used to identify whether in the stratigraphic unit or in the window of interest, a certain degree of succession and vertical distribution of specific neurons observed could be related to the reservoir.

The main objective of this statistical method is to identify how specific neurons are vertically concentrated along one or more logs. As an illustration of the technique, a diagram of the stratigraphic grid is shown in Figure 6. The VPC was extracted from the whole 3D grid of the SOM classification volume for Lithological Contrast by counting the occurrences of each of the 25 neurons or classes in every stratigraphic layer. The VPC of SOM neurons exhibits remarkably slowly-varying characteristics indicative of geologic depositional patterns. The reservoir top corresponds to stratigraphic layer No. 16. In the VPC on the right, only neurons N16, N17, N21, and N22 are present. These neurons have a higher percentage occurrence relative to all 25 classes from the top of the target sand downwards. Corroborating the statistics, these same neural classes appear in the map view in Figure 3 and the vertical section shown in Figure 4. The stratigraphic well section in Figure 5 also supports the statistical results. It is important to note that these neurons also detected seismic samples above the top of the sand, although in a lesser proportion. This effect is consistent with the existence of layers with similar lithological characteristics, which can be seen from the well logs.
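The per-layer counting that builds a VPC is straightforward to sketch; the SOM class grid below is synthetic, with neuron N21 forced to dominate the interval below the reservoir top so the resulting proportions mimic the behavior described:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical SOM class grid: axis 0 = stratigraphic layer, axis 1 = traces.
som_layers = rng.integers(1, 26, size=(30, 200))  # neuron classes N1..N25
som_layers[16:, :120] = 21  # reservoir interval dominated by neuron N21

n_neurons = 25
# Vertical Proportion Curve: per-layer fraction of each neuron class.
vpc = np.zeros((som_layers.shape[0], n_neurons))
for layer in range(som_layers.shape[0]):
    counts = np.bincount(som_layers[layer], minlength=n_neurons + 1)[1:]
    vpc[layer] = counts / counts.sum()
```

Plotted as stacked bars per layer, `vpc` is the cumulative histogram the text describes, and the jump in the N21 column below layer 16 is the reservoir signature.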

Figure 6. Vertical proportion Curve to identify neurons related to reservoir rock.

Bivariate Statistical Analysis Cross Tabs

The first step in this methodology is a bivariate analysis through cross-tabs (contingency tables) to determine whether two categorical variables are related, based on observing the extent to which the occurrence of one variable is repeated in the categories of the second. Given that one variable is analyzed in terms of another, a distinction must be made between dependent and independent variables. Cross-tab analysis extends the possibilities beyond separate frequency analyses of each variable to analyses of joint frequencies, in which the analysis unit is defined by the combination of two variables.

The result was obtained by extracting the SOM classification volume along the well paths and constructing a discrete well log with two categories: “Net Reservoir” and “not reservoir.” The distinction between the two simply means that the rock might or might not have hydrocarbon storage capacity. In this case, the dependent variable corresponds to the neurons of the SOM classification volume for Lithological Contrast. It is of ordinal type, since it has an established internal order (the neurons go from N1 to N25) but the spacing between categories is not uniform. The independent variable, Net Reservoir, is also ordinal. In the table, the values organized in rows correspond to neurons from the SOM classification volume for Lithological Contrast, and the columns hold the discrete “Net Reservoir” and “not reservoir” counts for each neuron. Table 2 shows that the highest Net Reservoir counts are associated with neurons N21 and N22, at 47.0% and 28.2% respectively. Conversely, lower Net Reservoir counts are associated with neurons N17 (8.9%), N16 (7.8%), and N23 (8.0%).
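The construction of such a contingency table can be sketched directly; the neuron and Net Reservoir labels below are hypothetical stand-ins for the values extracted along the well paths:

```python
import numpy as np

# Hypothetical per-sample labels along the well paths.
neurons = np.array([21, 21, 22, 17, 16, 21, 22, 23, 21, 17])
net_res = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 0])  # 1 = Net Reservoir

classes = sorted(set(neurons.tolist()))
# Contingency table: rows = neurons, columns = (not reservoir, Net Reservoir).
table = np.array([[np.sum((neurons == k) & (net_res == c)) for c in (0, 1)]
                  for k in classes])
# Share of all Net Reservoir samples captured by each neuron
# (the percentages reported in Table 2 are of this kind).
row_pct = table[:, 1] / table[:, 1].sum()
```

With the real data, `classes` would span N1..N25 and `row_pct` would reproduce the 47.0% / 28.2% pattern for N21 and N22.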

Table 2. Cross Tab for Lithological Contrast SOM versus Net reservoir.

Neuron N21 was detected at reservoir depth in wells W-2 (producer), W-4 (abandoned for technical reasons during drilling), W-5 (producer) and W-6 (producer). N21 showed higher percentages of occurrence in Net Reservoir, so this neuron could be identified as indicating the highest storage capacity. N22 was present in wells W-1 and W-6 at target sand depth but also detected in wells W-2, W-4 and W-5 in clay-sandy bodies overlying the highest quality zone in the reservoir. N22 was also detected in the upper section of target sand horizontally navigated by the W-6 well, which has no petrophysical evaluation. N17 was only detected in well W-3, a minor producer of oil, which was sedimentologically cataloged as lobular facies and had the lowest reservoir rock quality. N16 was detected in a very small proportion in wells W-4 (abandoned for technical reasons during drilling) and W-5 (producer). Finally, N23 was only detected towards the top of the sand in well W-6, and in clayey layers overlying it in the other wells. This is consistent with the observed percentage of 8% Net Reservoir, as shown in Table 2.

Chi-Square Independence Hypothesis Testing

After applying the cross-tab evaluation, this classified information was the basis of a Chi-Square test of independence to determine whether there is an association between the two categorical variables: Net Reservoir and SOM neurons. That is, the test aims to detect the absence of a relationship between the variables. The Chi-Square test compared the observed frequencies for each Lithological Contrast neuron with respect to the Net Reservoir variable (grouped into “Net Reservoir” and “no reservoir”) against the frequency distribution theoretically expected under the null hypothesis.

As a starting point, the null hypothesis was that the Lithological Contrast SOM neuron occurrences are independent of the presence of Net Reservoir. If the calculated Chi-Square value is equal to or greater than the critical theoretical value, the null hypothesis must be rejected and the alternative hypothesis accepted. The results in Table 3 show that the calculated Chi-Square is greater than the theoretical critical value (296 ≥ 9.4, with four degrees of freedom at the 5% significance level), so the null hypothesis of independence between Net Reservoir and the SOM neurons is rejected, establishing a relationship between the Net Reservoir and Lithological Contrast SOM variables.
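The mechanics of the test are easy to reproduce. The counts below are hypothetical, but the setup matches the paper's: a 5x2 table (five neurons by two reservoir states), hence (5−1)(2−1) = 4 degrees of freedom and a 5% critical value of about 9.49:

```python
import numpy as np

# Observed counts (hypothetical): rows = SOM neurons,
# columns = (no reservoir, Net Reservoir).
observed = np.array([[40,  8],
                     [35,  9],
                     [10, 47],
                     [15, 28],
                     [42,  8]], dtype=float)

row_tot = observed.sum(axis=1, keepdims=True)
col_tot = observed.sum(axis=0, keepdims=True)
n = observed.sum()
expected = row_tot * col_tot / n  # expected counts under independence
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)  # = 4 here

# Reject independence at the 5% level if chi2 exceeds the critical value.
CRITICAL_5PCT_DOF4 = 9.488
reject = chi2 > CRITICAL_5PCT_DOF4
```

In the study's case the calculated statistic (296) clears the critical value by a wide margin, which is why the association is accepted.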

However, the test does not report the magnitude of the association (substantial, moderate, or poor). To measure the degree of correlation between both variables, Pearson’s Phi (φ) and Cramer’s V (ν) measures were computed. Pearson’s φ coefficient was estimated from Eq. 1.1:

φ = √(χ² / n)    (Eq. 1.1)

where χ² is the calculated Chi-Square value and n is the number of cases.

Additionally, Cramer’s V was estimated using Eq. 1.2.

ν = √(χ² / (n · min(r − 1, c − 1)))    (Eq. 1.2)

where r and c are the numbers of rows and columns of the contingency table.

In both cases, values near zero indicate a poor or weak relationship, while values close to one indicate a strong relationship. The authors obtained φ and Cramer's ν values equal to 0.559 (Table 3); note that for a two-column table min(r − 1, c − 1) = 1, so the two measures coincide. Based on this result, a moderate relationship between both variables can be interpreted.
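Both association measures follow directly from the contingency table; a small sketch, using a perfectly associated 2x2 table so that both measures come out to 1:

```python
import numpy as np

def phi_and_cramers_v(table):
    """Pearson's phi (Eq. 1.1) and Cramer's V (Eq. 1.2) from a contingency table."""
    table = np.asarray(table, float)
    n = table.sum()
    expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    phi = np.sqrt(chi2 / n)
    k = min(table.shape[0] - 1, table.shape[1] - 1)
    v = np.sqrt(chi2 / (n * k))
    return phi, v

# Perfect association: every row maps to exactly one column.
phi, v = phi_and_cramers_v([[10, 0], [0, 10]])
```

For the paper's 5x2 table, k = 1 and the two measures coincide, consistent with the single 0.559 value reported.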

Table 3. Calculated and theoretical Chi-Square values and its correlation measures.

Box-and-Whisker Plots

Box-and-whisker plots were constructed to compare and understand the behavior of petrophysical properties for the range that each neuron intersects the well paths in the SOM volume. Also, these quantify which neurons of interest respond to Net Reservoir and Net Pay properties (Figure 7). Five descriptive measures are shown for a box-and-whisker plot of each property:

• Median (thick black horizontal line)
• First quartile (lower limit of the box)
• Third quartile (upper limit of the box)
• Maximum value (upper end of the whisker)
• Minimum value (lower end of the whisker)

The graphs provide information about data dispersion and symmetry: the longer the box and whiskers, the greater the dispersion. If the median is relatively centered in the box, the distribution is symmetrical; if, on the contrary, it approaches the first or third quartile, the distribution is skewed toward that quartile. Finally, these graphs identify outlier observations that depart from the rest of the data in an unusual way (represented by dots and asterisks according to their distance from the data center). The horizontal dashed green line is the cut-off value for Effective Porosity (PIGN > 0.10), the dashed blue line represents the cut-off value for Clay Volume (VCL > 0.45), and the dashed beige line is the cut-off value for Water Saturation (SUWI < 0.65).
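The five descriptive measures, together with the usual Tukey-style fences used to flag outlier dots, can be computed directly; the porosity samples below are hypothetical:

```python
import numpy as np

# Hypothetical Effective Porosity samples intersected by one neuron.
pign = np.array([0.08, 0.12, 0.15, 0.16, 0.18, 0.19, 0.21, 0.24, 0.26, 0.40])

q1, med, q3 = np.percentile(pign, [25, 50, 75])
iqr = q3 - q1
# Whisker ends in the Tukey convention; points beyond them are outliers.
lo_whisk = pign[pign >= q1 - 1.5 * iqr].min()
hi_whisk = pign[pign <= q3 + 1.5 * iqr].max()
outliers = pign[(pign < q1 - 1.5 * iqr) | (pign > q3 + 1.5 * iqr)]
```

Comparing `med` against the PIGN cut-off of 0.10 is the numerical equivalent of reading the box against the dashed green line in Figure 7.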

Based on these data and the resulting analysis, it can be inferred that neurons N16, N17, N21, N22, and N23 respond positively to Net Reservoir. Of these, the most valuable predictors are N21 and N22, since they present lower clay content in comparison with neurons N16 and N23, along with the higher Effective Porosity associated with neurons N16, N17, and N23 (Figure 7a). Neurons N21 and N22 are therefore ascertained to represent the best reservoir rock quality. Finally, neuron N23 (Figure 7b) can be associated with rock that has storage capacity but is clayey and has high Water Saturation, which justifies discarding it as a significant neuron. It is important to note that this analysis was conducted by accounting for the simultaneous occurrence of the petrophysical values (VCL, PIGN, and SUWI) on the neurons initially intersected (Figure 7a), then on the portion of the neurons that pass Net Reservoir cut-off values (Figure 7b), and finally on the portion of the neurons that pass Net Pay cut-off values (Figure 7c). For all these petrophysical reasons, the neurons to be considered as a reference to estimate the lateral and vertical distribution of Net Reservoir associated with the target sand are, in order of importance, N21, N22, N16, and N17.

Figure 7. Comparison between neurons according to petrophysical properties: VCL (Clay Volume), PIGN (Effective Porosity) and SUWI (Water Saturation). a) SOM neurons for lithological contrast detection, b) Those that pass Net Reservoir cut-off and c) Those that pass Net Pay cut-off.

Simultaneous Seismic Inversion

During this study, a simultaneous prestack inversion was performed using 3D seismic data and sonic logs in order to estimate seismic petrophysical attributes such as Acoustic Impedance (Zp), Shear Impedance (Zs), and Density (Rho), as well as P- and S-wave velocities, among others. These are commonly used as indicators of lithology, possible fluids, and geomechanical properties. Figure 8a shows a scatter plot from well data of the Lambda Rho to Mu Rho ratio versus Clay Volume (VCL), with the Vp/Vs ratio as discriminator. The target sand corresponds to low Vp/Vs and Lambda/Mu values (circled in the figure). Another discriminator in the reservoir was the S-wave impedance (Zs) (Figure 8b). From this, the seismic inversion attributes selected for classification by SOM neural network analysis were the Vp/Vs ratio, the Lambda Rho/Mu Rho ratio, and Zs.
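These inversion attributes are related algebraically: since Zp = ρVp and Zs = ρVs, the density cancels in Zp/Zs, and the Lamé attributes follow as λρ = Zp² − 2Zs² and μρ = Zs². A quick sketch with hypothetical impedance values:

```python
import numpy as np

# Hypothetical inversion outputs in (m/s)*(g/cm^3).
zp = np.array([7200.0, 6800.0, 7500.0])  # P-impedance
zs = np.array([4000.0, 3400.0, 3600.0])  # S-impedance

vpvs = zp / zs                   # density cancels: Zp/Zs = Vp/Vs
mu_rho = zs ** 2
lambda_rho = zp ** 2 - 2.0 * zs ** 2
lmr_ratio = lambda_rho / mu_rho  # identically (Vp/Vs)**2 - 2
```

The identity `lmr_ratio == vpvs**2 - 2` explains why low Vp/Vs and low Lambda/Mu pick out the same points in Figure 8a: the two discriminators are monotonic transforms of each other.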

Figure 8. Scatter plots: a) Lambda Rho to Mu Rho ratio versus VCL and Vp/Vs, and b) Zs versus VCL and Vp/Vs.

Self-Organizing Map (SOM) Comparison

Figure 9 is a plan view of neuron-extracted geobodies associated with the sand reservoir. The upper part shows a SOM classification for Lithological Contrast detection obtained from six traditional seismic attributes; the lower part shows a different SOM classification for Lithological Contrast detection obtained from three simultaneous inversion attributes. Both results are very similar. The selection of SOM classification neurons from inversion attributes was done through spatial pattern recognition, i.e., identifying the geometry/shape of the clusters related to each of the 25 neurons congruent with the sedimentary model, and by using a stratigraphic well section that includes both SOM classification tracks.

Figure 9. Plan view of neurons with geological meaning. Up: SOM Classification from traditional attributes. Down: SOM Classification from simultaneous inversion attributes.

Figure 10 shows a well section that includes tracks for the Net Reservoir and Net Pay classifications along with the SOM classification from traditional attributes and a second SOM from simultaneous inversion attributes, both extracted at the intersections of the SOM volumes with the well paths. Only the neurons with geological meaning are shown.

Figure 10. Well section showing the target zone with tracks for discrete logs from Net Reservoir, Net Pay and both SOM classifications.

Discussion and Conclusions

Principal Component Analysis (PCA) identified the most significant seismic attributes to be classified by a Self-Organizing Maps (SOM) neural network on a single-sample basis to detect features associated with lithological contrast and recognize the lateral and vertical extension of the reservoir. The interpretation of SOM classification volumes was supported by multidisciplinary sources (geological, petrophysical, and dynamic data). In this way, the clusters detected by certain neurons became the inputs for geobody interpretation. The statistical analysis and visualization techniques enabled the estimation of Net Reservoir for each neuron. Finally, the extension of reservoir rock geobodies derived from SOM classification of traditional attributes was corroborated by the SOM acting on simultaneous inversion attributes. Multi-attribute machine learning analysis of both traditional attributes and seismic inversion attributes enables refinement of the sedimentary model to reveal more precisely the lateral and vertical distribution of facies. However, the Lithological Contrast SOM results from traditional attributes showed a better level of detail than the seismic inversion SOM.

Collectively, the workflow may reduce uncertainty in proposing new drilling locations. Additionally, this methodology might be applied using specific attributes to identify faults and fracture zones, identify absorption phenomena, porosity changes, and direct hydrocarbon indicator features, and determine reservoir characteristics.

Acknowledgments

The authors thank Pemex and Oil and Gas Optimization for providing software and technical resources. Thanks are also extended to Geophysical Insights for the research and development of the Paradise® AI workbench and the machine learning applications used in this paper. Finally, thanks to Reinaldo Michelena, María Jerónimo, Tom Smith, and Hal Green for their review of the manuscript.

References

Agresti, A., 2002, Categorical Data Analysis: John Wiley & Sons.

Marroquín I., J.J. Brault and B. Hart, 2009, A visual data mining methodology to conduct seismic facies analysis: Part 2 – Application to 3D seismic data: Geophysics, 1, 13-23.

Roden R., T. Smith and D. Sacrey, 2015, Geologic pattern recognition from seismic attributes: Principal component analysis and self-organizing maps: Interpretation, 4, 59-83.

Viloria R. and M. Taheri, 2002, Metodología para la Integración de la Interpretación Sedimentológica en el Modelaje Estocástico de Facies Sedimentarias, (INT-ID-9973, 2002). Technical Report INTEVEP-PDVSA.

Seismic Facies Classification Using Deep Convolutional Neural Networks


By Tao Zhao
Published with permission: SEG International Exposition and 88th Annual Meeting
October 2018

Summary

Convolutional neural networks (CNNs) are a type of supervised learning technique that can be applied directly to amplitude data for seismic data classification. The high flexibility of CNN architectures enables researchers to design different models for specific problems. In this study, I introduce an encoder-decoder CNN model for seismic facies classification, which classifies all samples in a seismic line simultaneously and provides superior seismic facies quality compared to traditional patch-based CNN methods. I compare the encoder-decoder model with a traditional patch-based model to assess the usability of both CNN architectures.

Introduction

With the rapid development of GPU computing and the success obtained in the computer vision domain, deep learning techniques, represented by convolutional neural networks (CNNs), have started to entice seismic interpreters into applying supervised seismic facies classification. A comprehensive review of deep learning techniques is provided in LeCun et al. (2015). Although still in its infancy, CNN-based seismic classification has been successfully applied to both prestack (Araya-Polo et al., 2017) and poststack (Waldeland and Solberg, 2017; Huang et al., 2017; Lewis and Vigh, 2017) data for fault and salt interpretation, for identifying different wave characteristics (Serfaty et al., 2017), and for estimating velocity models (Araya-Polo et al., 2018).

The main advantages of CNN over other supervised classification methods are its spatial awareness and automatic feature extraction. For image classification problems, rather than using the intensity values at each pixel individually, a CNN analyzes the patterns among pixels in an image and automatically generates features (in seismic data, attributes) suitable for classification. Because seismic data are 3D tomographic images, we would expect CNN to be naturally adaptable to seismic data classification. However, some distinct characteristics of seismic classification make it more challenging than other image classification problems. First, classical image classification aims at distinguishing different images, while seismic classification aims at distinguishing different geological objects within the same image. Therefore, from an image processing point of view, seismic classification is really a segmentation problem (partitioning an image into regions that share a coarser set of class labels). Second, training data for seismic classification are much sparser compared to classical image classification problems, for which massive data are publicly available. Third, in seismic data, all features are represented by different patterns of reflectors, and the boundaries between different features are rarely explicitly defined; in contrast, features in an image from computer artwork or photography are usually well defined. Finally, because of the uncertainty in seismic data and the nature of manual interpretation, the training data in seismic classification are always contaminated by noise.

To address the first challenge, to date most, if not all, published studies on CNN-based seismic facies classification perform classification on small patches of data to infer the class label of the seismic sample at the patch center. In this fashion, seismic facies classification is done by traversing patches centered at every sample in a seismic volume. An alternative, although less discussed, approach is to use CNN models designed for image segmentation tasks (Long et al., 2015; Badrinarayanan et al., 2017; Chen et al., 2018) to obtain sample-level labels for a whole 2D profile (e.g., an inline) simultaneously, then traverse all 2D profiles in the volume.

In this study, I use an encoder-decoder CNN model as an implementation of this second approach. I apply both the encoder-decoder model and the patch-based model to seismic facies classification using data from the North Sea, with the objective of demonstrating the strengths and weaknesses of the two CNN models. I conclude that the encoder-decoder model provides much better classification quality, whereas the patch-based model is more flexible in its training data requirements, possibly making it easier to use in production.

The Two Convolutional Neural Networks (CNN) Models

Patch-based model

A basic patch-based model consists of several convolutional layers, pooling (downsampling) layers, and fully-connected layers. For an input image (for seismic data, amplitudes in a small 3D window), a CNN model first automatically extracts several high-level abstractions of the image (similar to seismic attributes) using the convolutional and pooling layers, then classifies the extracted attributes using the fully-connected layers, which are similar to traditional multilayer perceptron networks. The output from the network is a single value representing the facies label of the seismic sample at the center of the input patch. An example of patch-based model architecture is provided in Figure 1a. In this example, the network is employed to classify salt versus non-salt from seismic amplitude in the SEAM synthetic data (Fehler and Larner, 2008). One input instance is a small patch of data bounded by the red box, and the corresponding output is a class label for this whole patch, which is then assigned to the sample at the patch center. The sample marked as the red dot is classified as non-salt.
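The convolution-pool-classify pipeline described above can be illustrated with a toy forward pass. The following NumPy sketch is not the author's network; the layer sizes, random weights, and function names are illustrative assumptions, but the structure (convolution with ReLU, max pooling, then a fully-connected layer producing one label per patch) mirrors the patch-based model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid 2D convolution of a single-channel image with several kernels,
    followed by a ReLU activation (illustrative, unoptimized loops)."""
    kh, kw = kernels.shape[1:]
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], h, w))
    for k, kern in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kern)
    return np.maximum(out, 0.0)

def maxpool2(x):
    """2x2 max pooling (downsampling) on a stack of feature maps."""
    c, h, w = x.shape
    return x[:, :h // 2 * 2, :w // 2 * 2].reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

# One input instance: a small patch of "amplitudes" (random stand-in data)
patch = rng.standard_normal((16, 16))

features = maxpool2(conv2d(patch, rng.standard_normal((4, 3, 3))))  # 4 maps of 7x7
flat = features.reshape(-1)                          # extracted "attribute" vector
logits = rng.standard_normal((2, flat.size)) @ flat  # fully-connected layer, 2 classes
label = int(np.argmax(logits))  # single label assigned to the patch-center sample
```

The key point is the output shape: however large the input patch, the network emits exactly one class label, which is why a full volume must be classified patch by patch.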


Figure 1. Sketches for CNN architecture of a) 2D patch-based model and b) encoder-decoder model. In the 2D patch-based model, each input data instance is a small 2D patch of seismic amplitude centered at the sample to be classified. The corresponding output is then a class label for the whole 2D patch (in this case, non-salt), which is usually assigned to the sample at the center. In the encoder-decoder model, each input data instance is a whole inline (or crossline/time slice) of seismic amplitude. The corresponding output is a whole line of class labels, so that each sample is assigned a label (in this case, some samples are salt and others are non-salt). Different types of layers are denoted in different colors, with layer types marked at their first appearance in the network. The size of the cuboids approximately represents the output size of each layer.

Encoder-decoder model

Encoder-decoder is a popular network structure for tackling image segmentation tasks. Encoder-decoder models share a similar idea, which is first extracting high-level abstractions of input images using convolutional layers, then recovering sample-level class labels by “deconvolution” operations. Chen et al. (2018) introduce a current state-of-the-art encoder-decoder model while concisely reviewing some popular predecessors. An example of encoder-decoder model architecture is provided in Figure 1b. Similar to the patch-based example, this encoder-decoder network is employed to classify salt versus non-salt from seismic amplitude in the SEAM synthetic data. Unlike the patch-based network, in the encoder-decoder network one input instance is a whole line of seismic amplitude, and the corresponding output is a whole line of class labels, which has the same dimension as the input data. In this case, all samples in the middle of the line are classified as salt (marked in red), and other samples are classified as non-salt (marked in white), with minimum error.
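The defining property of the encoder-decoder structure, downsampling to a coarse abstraction and then upsampling back so the label map matches the input dimensions, can be sketched without any learned weights. This toy NumPy illustration (max pooling standing in for the encoder's learned convolutions, nearest-neighbour repetition standing in for the learned "deconvolution"; all names and sizes are my own assumptions) shows the shape bookkeeping:

```python
import numpy as np

def encode(x, levels=2):
    """Encoder: repeatedly 2x2 max-pool a 2D line to a coarse abstraction."""
    for _ in range(levels):
        h, w = x.shape
        x = x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
    return x

def decode(x, levels=2):
    """Decoder: nearest-neighbour upsampling back to the input resolution
    (standing in for learned "deconvolution" layers)."""
    for _ in range(levels):
        x = x.repeat(2, axis=0).repeat(2, axis=1)
    return x

# One input instance: a whole "line" of amplitudes (random stand-in data)
line = np.random.default_rng(1).standard_normal((64, 128))

coarse = encode(line)          # 16 x 32 high-level abstraction
labels = decode(coarse) > 0.0  # per-sample binary class labels, same size as input
```

In a real network the decoder also uses learned filters and skip connections, but the contract is the same: one whole line in, one label per sample out.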

Application of the Two CNN Models

For demonstration purposes, I use the F3 seismic survey acquired in the North Sea, offshore Netherlands, which is freely accessible to the geoscience research community. In this study, I am interested in automatically extracting seismic facies that have specific seismic amplitude patterns. To avoid potential disagreement on the geological meaning of the facies to extract, I name the facies purely based on their reflection characteristics. Table 1 provides a list of the extracted facies. There are eight seismic facies with distinct amplitude patterns; a ninth facies (“everything else”) is used for samples not belonging to the eight target facies.

Table 1. The extracted seismic facies.

  Facies number   Facies name
  1               Variable amplitude, steeply dipping
  2               Random
  3               Low coherence
  4               Low amplitude, deformed
  5               Low amplitude, dipping
  6               High amplitude, deformed
  7               Moderate amplitude, continuous
  8               Chaotic
  0               Everything else

To generate training data for the seismic facies listed above, different picking scenarios are employed to compensate for the different input data format required in the two CNN models (small 3D patches versus whole 2D lines). For the patch-based model, 3D patches of seismic amplitude data are extracted around seed points within some user-defined polygons. There are approximately 400,000 3D patches of size 65×65×65 generated for the patch-based model, which is a reasonable amount for seismic data of this size. Figure 2a shows an example line on which seed point locations are defined in the co-rendered polygons.
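The patch-generation step described above, cutting fixed-size amplitude windows centered on seed points, can be sketched as follows. This is a toy NumPy illustration, not the author's code: the array sizes, the reflection padding for edge seeds, and the function name are all my own assumptions (the study used 65×65×65 patches; a smaller size is used here for brevity):

```python
import numpy as np

def extract_patches(volume, seeds, size=65):
    """Cut size^3 amplitude patches centered on each seed point, padding the
    volume edges by reflection so that border seeds remain usable."""
    half = size // 2
    padded = np.pad(volume, half, mode="reflect")
    patches = []
    for i, j, k in seeds:  # seed coordinates in (inline, crossline, time) order
        patches.append(padded[i:i + size, j:j + size, k:k + size])
    return np.stack(patches)

rng = np.random.default_rng(2)
vol = rng.standard_normal((40, 40, 40))          # toy stand-in for a seismic volume
seeds = [(0, 0, 0), (20, 20, 20), (39, 39, 39)]  # points picked inside labeled polygons
batch = extract_patches(vol, seeds, size=9)      # one training instance per seed
```

Each seed yields one labeled training instance, which is how a modest set of interpreted polygons can expand into hundreds of thousands of patches.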

The encoder-decoder model requires much more effort to generate labeled data. I manually interpret the target facies on 40 inlines across the seismic survey and use these for building the network. Although the total number of seismic samples in 40 lines is enormous, the encoder-decoder model considers them only 40 input instances, which is in fact a very small training set for a CNN. Figure 2b shows an interpreted line used in training the network.

In both tests, I randomly use 90% of the generated training data to train the network and the remaining 10% for testing. On an Nvidia Quadro M5000 GPU with 8GB memory, the patch-based model takes about 30 minutes to converge, whereas the encoder-decoder model needs about 500 minutes. Besides the faster training, the patch-based model also achieves a higher test accuracy, at almost 100% (99.9988%, to be exact), versus 94.1% for the encoder-decoder model. However, this accuracy measurement can be misleading. For a patch-based model, when picking the training and testing data, interpreters usually pick the most representative samples of each facies, those in which they have the most confidence, resulting in high-quality training (and testing) data that are less noisy; most of the ambiguous samples that are challenging for the classifier are excluded from testing. In contrast, to use an encoder-decoder model, interpreters have to interpret all of the target facies in a training line. For example, if the target is faults, one needs to pick all faults in a training line; otherwise unlabeled faults will be treated as “non-fault” and confuse the classifier. Therefore, interpreters have to make some less confident interpretations when generating training and testing data. Figures 2c and 2d show seismic facies predicted by the two CNN models on the same line shown in Figures 2a and 2b. We observe better defined facies from the encoder-decoder model than from the patch-based model.
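The 90/10 random split and the test-accuracy metric used above are standard; a minimal sketch (my own function names, not the author's code) makes the procedure concrete:

```python
import numpy as np

rng = np.random.default_rng(3)

def split_train_test(n_instances, train_frac=0.9):
    """Randomly assign instance indices to disjoint train and test sets."""
    idx = rng.permutation(n_instances)
    cut = int(train_frac * n_instances)
    return idx[:cut], idx[cut:]

def accuracy(pred, true):
    """Test accuracy: fraction of held-out samples classified correctly."""
    return float(np.mean(pred == true))

train_idx, test_idx = split_train_test(1000)  # toy stand-in for the instance count
```

Note that the metric itself is identical for both models; the caveat in the text is about which samples end up in the held-out set, not about how accuracy is computed.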

Figure 3 shows prediction results from the two networks on a line away from the training lines, and Figure 4 shows prediction results from the two networks on a crossline. Similar to the prediction results on the training line, the encoder-decoder model, compared to the patch-based model, provides facies as cleaner geobodies that require much less post-editing for regional stratigraphic classification (Figure 5). This can be attributed to the encoder-decoder model being able to capture the large-scale spatial arrangement of facies, whereas the patch-based model only senses patterns in small 3D windows. To form such windows, the patch-based model also needs to pad, or simply skip, samples close to the edge of the 3D seismic volume. Moreover, although training is much faster for a patch-based model, its prediction stage is very computationally intensive, because it processes a data volume N×N×N times the size of the original seismic volume (where N is the patch size along each dimension). In this study, the patch-based method takes about 400 seconds to predict a line, compared to less than 1 second for the encoder-decoder model.
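The N×N×N prediction-time overhead is easy to quantify for the patch size used in this study:

```python
# Patch-based prediction revisits each voxel many times: classifying one sample
# requires reading its entire N x N x N neighbourhood, so the data processed is
# N^3 times the size of the original seismic volume.
N = 65               # patch size along each dimension, as used in this study
redundancy = N ** 3  # amplitude values read per classified sample
print(redundancy)    # 274625, i.e. ~275,000x the original data volume
```

This redundancy, rather than the network size itself, is what makes patch-based prediction so much slower than a single encoder-decoder pass over each line.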

Conclusion

In this study, I compared two types of CNN models for seismic facies classification. The more commonly used patch-based model requires much less effort to generate labeled data, but its classification results are suboptimal compared to the encoder-decoder model, and its prediction stage can be very time consuming. The encoder-decoder model generates superior classification results at near real-time speed, at the expense of more tedious labeled-data picking and longer training time.

Acknowledgements

The author thanks Geophysical Insights for permission to publish this work, dGB Earth Sciences for providing the F3 North Sea seismic data to the public, and ConocoPhillips for sharing the MalenoV project for public use, which was referenced when generating the training data. The CNN models discussed in this study are implemented in TensorFlow, an open source library from Google.

Figure 2. Example of seismic amplitude co-rendered with training data picked on inline 340 used for a) patch-based model and b) encoder-decoder model. The prediction result from c) patch-based model, and d) from the encoder-decoder model. Target facies are colored in colder to warmer colors in the order shown in Table 1. Compare Facies 5, 6 and 8.

Figure 3. Prediction results from the two networks on a line away from the training lines. a) Predicted facies from the patch-based model. b) Predicted facies from the encoder-decoder based model. Target facies are colored in colder to warmer colors in the order shown in Table 1. The yellow dotted line marks the location of the crossline shown in Figure 4. Compare Facies 1, 5 and 8.

Figure 4. Prediction results from the two networks on a crossline. a) Predicted facies from the patch-based model. b) Predicted facies from the encoder-decoder model. Target facies are colored in colder to warmer colors in the order shown in Table 1. The yellow dotted lines mark the location of the inlines shown in Figure 2 and 3. Compare Facies 5 and 8.

Figure 5. Volumetric display of the predicted facies from the encoder-decoder model. The facies volume is visually cropped for display purpose. An inline and a crossline of seismic amplitude co-rendered with predicted facies are also displayed to show a broader distribution of the facies. Target facies are colored in colder to warmer colors in the order shown in Table 1.

References

Araya-Polo, M., T. Dahlke, C. Frogner, C. Zhang, T. Poggio, and D. Hohl, 2017, Automated fault detection without seismic processing: The Leading Edge, 36, 208–214.

Araya-Polo, M., J. Jennings, A. Adler, and T. Dahlke, 2018, Deep-learning tomography: The Leading Edge, 37, 58–66.

Badrinarayanan, V., A. Kendall, and R. Cipolla, 2017, SegNet: A deep convolutional encoder-decoder architecture for image segmentation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 39, 2481–2495.

Chen, L. C., G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, 2018, DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs: IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 834–848.

Chen, L. C., Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, 2018, Encoder-decoder with atrous separable convolution for semantic image segmentation: arXiv preprint, arXiv:1802.02611v2.

Fehler, M., and K. Larner, 2008, SEG advanced modeling (SEAM): Phase I first year update: The Leading Edge, 27, 1006–1007.

Huang, L., X. Dong, and T. E. Clee, 2017, A scalable deep learning platform for identifying geologic features from seismic attributes: The Leading Edge, 36, 249–256.

LeCun, Y., Y. Bengio, and G. Hinton, 2015, Deep learning: Nature, 521, 436–444.

Lewis, W., and D. Vigh, 2017, Deep learning prior models from seismic images for full-waveform inversion: 87th Annual International Meeting, SEG, Expanded Abstracts, 1512–1517.

Long, J., E. Shelhamer, and T. Darrell, 2015, Fully convolutional networks for semantic segmentation: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440.

Serfaty, Y., L. Itan, D. Chase, and Z. Koren, 2017, Wavefield separation via principle component analysis and deep learning in the local angle domain: 87th Annual International Meeting, SEG, Expanded Abstracts, 991–995.

Waldeland, A. U., and A. H. S. Solberg, 2017, Salt classification using deep learning: 79th Annual International Conference and Exhibition, EAGE, Extended Abstracts, Tu-B4-12.