Net Reservoir Discrimination through Multi-Attribute Analysis at Single Sample Scale

By Jonathan Leal, Rafael Jerónimo, Fabian Rada, Reinaldo Viloria and Rocky Roden
Published with permission: First Break
Volume 37, September 2019

Abstract

A new approach has been applied to discriminate Net Reservoir using multi-attribute seismic analysis at single sample resolution, complemented by bivariate statistical analysis from petrophysical well logs. The combination of these techniques was used to calibrate the multi-attribute analysis to ground truth, thereby ensuring an accurate representation of the reservoir static properties and reducing the uncertainty related to reservoir distribution and storage capacity. Geographically, the study area is located in the south of Mexico. The reservoir rock consists of sandstones from the Upper Miocene age in a slope fan environment.

The first method in the process was the application of Principal Component Analysis (PCA), which was employed to identify the most prominent attributes for detecting lithological changes that might be associated with the Net Reservoir. The second method was the application of the Kohonen Self-Organizing Map (SOM) Neural Network Classification at voxel scale (i.e., sample rate and bin size dimensions from seismic data), instead of using waveform shape classification. The sample-level analysis revealed significant new information from different seismic attributes, providing greater insights into the characteristics of the reservoir distribution in a shaly sandstone. The third method was a data analysis technique based on contingency tables and the Chi-Square test, which revealed relationships between two categorical variables (SOM volume neurons and Net Reservoir). Finally, a comparison between a SOM of simultaneous seismic inversion attributes and a SOM of traditional attributes was made, corroborating the delineated prospective areas. The authors found the SOM classification results beneficial to the refinement of the sedimentary model, more accurately identifying the lateral and vertical distribution of the facies of economic interest, enabling decisions for new well locations, and reducing the uncertainty associated with field exploitation. However, the Lithological Contrast SOM results from traditional attributes showed a better level of detail compared with the seismic inversion SOM.

Introduction

Self-Organizing Maps (SOM) is an unsupervised neural network – a form of machine learning – that has been used in multi-attribute seismic analysis to extract more information from the seismic response than would be practical using only single attributes. The most common use is in automated facies mapping. It is expected that every neuron or group of neurons can be associated with a single depositional environment, the reservoir's lateral and vertical extension, porosity changes, or fluid content (Marroquín et al., 2009). Of course, the SOM results must be calibrated with available well logs. In this paper, the authors generated petrophysical labels to apply statistical validation techniques between well logs and SOM results. Based on the application of PCA to a larger set of attributes, a smaller, distilled set of attributes was classified using the SOM process to identify lithological changes in the reservoir (Roden et al., 2015).

A bivariate statistical approach was then conducted to reveal the relationship between two categorical variables: the individual neurons comprising the SOM classification volume and Net Reservoir determined from petrophysical properties (percentage of occurrence of each neuron versus Net Reservoir).

The Chi-Square test compares the behavior of the observed frequencies (Agresti, 2002) for each SOM neuron lithological contrast against the Net Reservoir variable (grouped in “Net Reservoir” and “no reservoir” categories). Additional data analysis was conducted to determine which neurons responded to the presence of hydrocarbons using box plots showing Water Saturation, Clay Volume, and Effective Porosity as Net Pay indicators. The combination of these methods demonstrated an effective means of identifying the approximate region of the reservoir.

About the Study Area

The reservoir rock consists of sandstones from the Upper Miocene age in a slope fan environment. These sandstones correspond to channel and slope-lobe facies, constituted mainly of quartz and potassium feldspars cemented in calcareous material, and are of medium maturity. The submarine slope fans were deposited at the beginning of the deceleration of the relative sea-level fall and consist of complex deposits associated with gravitational mass movements.

Stratigraphy and Sedimentology

The stratigraphic chart comprises tertiary terrigenous rocks from Upper Miocene to Holocene. The litho-stratigraphic units are described in Table 1.

Table 1: Stratigraphic Epoch Chart of Study Area

 

Figure 1. Left: Regional depositional facies. Right: Electrofacies and theoretical model, Mutti (1978).

Figure 1 (left) shows the facies distribution map of the sequence, corresponding to the first platform-basin system established in the region. The two dashed lines – one red and one dark brown – represent the platform edge at different times according to several regional integrated studies in the area. The predominant direction of sediment contribution for the studied Field W is south to north, which is consistent with the current regional sedimentary model. The field covers an area of approximately 46 km2 and is located in facies of distributary channels northeast of the main channel. The reservoir sand is also well sorted and consolidated in a clay matrix, a texture thought to correspond to the middle portion of the turbidite system. The electrofacies of the reservoir, derived from gamma ray logs, are box-shaped in wells W-2, W-4, W-5, and W-6 and are associated with distributary-channel facies that exhibit the highest average porosity. In contrast, wells W-3 and W-1 are associated with lobular facies according to their gamma ray logs. Figure 1 (right) shows the sedimentary scheme of submarine fans proposed by Mutti (1978).

Petrophysics

The Stieber model was used to classify Clay Volume (VCL). Effective Porosity (PIGN) was obtained using the Neutron-Density model, and the intergranular (non-clay) Water Saturation (SUWI) was determined with the Simandoux model using a water salinity of 45,000 ppm. The petrophysical cut-off values used for the Net Reservoir and Net Pay estimations were 0.45 (VCL), 0.10 (PIGN), and 0.65 (SUWI), respectively.
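To make the cut-off logic concrete, the short sketch below applies these thresholds sample by sample; the threshold values and curve names (VCL, PIGN, SUWI) come from the text, while the arrays, the inequality directions, and the Net Reservoir/Net Pay combination follow the usual petrophysical convention and are illustrative rather than the authors' exact implementation.

```python
import numpy as np

# Hypothetical log samples; replace with the real VCL, PIGN and SUWI curves.
vcl  = np.array([0.30, 0.50, 0.20, 0.40])   # Clay Volume (fraction)
pign = np.array([0.15, 0.08, 0.22, 0.12])   # Effective Porosity (fraction)
suwi = np.array([0.40, 0.70, 0.30, 0.60])   # Water Saturation (fraction)

# Cut-offs quoted in the text: 0.45 (VCL), 0.10 (PIGN), 0.65 (SUWI).
net_reservoir = (vcl < 0.45) & (pign > 0.10)        # storage-capacity flag
net_pay = net_reservoir & (suwi < 0.65)             # hydrocarbon-bearing flag

print(net_reservoir)   # [ True False  True  True]
print(net_pay)         # [ True False  True  True]
```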

Reservoir Information

The reservoir rock corresponds to sands with Net Pay thickness ranging from 9 to 12 m, porosity between 18 and 25%, average permeability of 8-15 mD, and Water Saturation of approximately 25%. The initial pressure was 790 kg/cm2, and the current pressure is 516 kg/cm2. The main problem affecting productivity in this volumetric reservoir is the pressure drop, the displacement mechanisms being rock-fluid expansion and solution gas. Additionally, there are sanding problems and asphaltene precipitation.

Methodology

Multidisciplinary information was collected and validated to carry out seismic multi-attribute analysis. Static and dynamic characterization studies were conducted in the study area, revealing the most relevant reservoir characteristics and yielding a better sense of the proposed drilling locations. At present, six wells have been drilled.

The original available seismic volume and associated gathers employed in the generation of multiple attributes and for simultaneous inversion were determined to be of adequate quality. At target depth, the dominant frequency approaches 14 Hz, and the interval velocity is close to 3,300 m/s. Therefore, the vertical seismic resolution is 58 m. The production sand has an average thickness of 13 m, so it cannot be resolved with conventional seismic amplitude data.
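As a rough cross-check, the quoted resolution follows from the standard quarter-wavelength rule of thumb applied to the values above (this is only the usual approximation, not a statement from the authors):

\[
\frac{\lambda}{4} \;=\; \frac{v}{4f} \;=\; \frac{3{,}300\ \text{m/s}}{4 \times 14\ \text{Hz}} \;\approx\; 59\ \text{m},
\]

which is in line with the ~58 m stated above.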

Principal Component Analysis (PCA)

Principal Component Analysis (PCA) is one of the most common descriptive statistics procedures used to synthesize the information contained in a set of variables (volumes of seismic attributes) and to reduce the dimensionality of a problem. Applied to a collection of seismic attributes, PCA can be used to identify the seismic attributes that have the greatest “contribution,” based on their relative variance, within a region of interest. Attributes identified through the use of PCA are responsive to specific geological features, e.g., lithological contrast, fracture zones, among others. The output of PCA is an eigenspectrum that quantifies the relative contribution or energy of each seismic attribute to the studied characteristic.

PCA Applied for Lithological Contrast Detection

The PCA process was applied to the following attributes to identify those most significant for detecting lithological contrasts at the depth of interest: Thin Bed Indicator, Envelope, Instantaneous Frequency, Imaginary Part, Relative Acoustic Impedance, Sweetness, Amplitude, and Real Part. Of the entire seismic volume, only the voxels (seismic samples) in a time window delimited by the horizon of interest were analyzed, specifically 56 milliseconds above and 32 milliseconds below the horizon. The results are shown for each principal component. In this case, the selection criterion was a maximum percentage contribution to the principal component greater than or equal to 80%. Using this selection technique, the first five principal components were reviewed in the eigenspectrum. In the end, six (6) attributes from the first two principal components were selected (Figure 2).
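The sketch below illustrates this kind of screening with scikit-learn; the attribute names, the analysis window, and the 80%-of-maximum criterion are from the text, while the data matrix, the standardization, and the loop structure are illustrative assumptions rather than the software actually used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA

attribute_names = ["Thin Bed Indicator", "Envelope", "Instantaneous Frequency",
                   "Imaginary Part", "Relative Acoustic Impedance", "Sweetness",
                   "Amplitude", "Real Part"]

# X: one row per voxel in the -56/+32 ms window, one column per attribute.
# Random placeholder standing in for the real attribute samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, len(attribute_names)))

# Standardize each attribute, then fit PCA.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA().fit(Xz)

# For each leading principal component, express every attribute's loading as a
# percentage of the largest loading in that component and keep those >= 80%.
for pc in range(2):
    loadings = np.abs(pca.components_[pc])
    percent_of_max = 100.0 * loadings / loadings.max()
    selected = [name for name, p in zip(attribute_names, percent_of_max) if p >= 80.0]
    print(f"PC{pc + 1}: variance {pca.explained_variance_ratio_[pc]:.1%}, selected: {selected}")
```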

Figure 2. PCA results for lithological contrast detection.

Simultaneous Classification of Seismic Attributes Using a Self-Organizing Maps (SOM) Neural Network (Voxel Scale)

The SOM method is an unsupervised classification process in that the network is trained from the input data alone. A SOM consists of components (vectors) called neurons or classes and input vectors that have a position on the map. The input vectors are compared with the neurons, which are capable of detecting groupings through training (machine learning) and mapping. The SOM process non-linearly maps the neurons to a two-dimensional, hexagonal or rectangular grid; in other words, a SOM describes a mapping of a larger space onto a smaller one. The procedure for locating a vector from the data space on the map is to find the neuron whose weight vector is closest (smallest metric distance) to the data-space vector. (This analysis accounted for the seismic samples located within a time window covering several samples above and below the target horizon throughout the study area.) It is important to classify together attributes that have the same common interpretive use, such as lithological indicators, fault delineation, among others. The SOM revealed patterns and identified natural organizational structures present in the data that are difficult to detect in any other way (Roden et al., 2015). Because the SOM classification used in this study is applied to individual samples (using the sample rate and bin size of the seismic data; Figure 2, lower right box), it detects features below conventional seismic resolution, in contrast with traditional wavelet-based classification methods.
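For readers unfamiliar with the mechanics, the toy implementation below shows the training and classification steps described in this paragraph (best-matching-unit search followed by a cooperative neighborhood update). It is a minimal from-scratch sketch with illustrative data and parameters, not the commercial implementation used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 6))            # one row per seismic sample, 6 attributes
rows, cols = 5, 5                          # 5x5 topology -> 25 neurons
W = rng.normal(size=(rows * cols, X.shape[1]))                 # neuron weight vectors
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

n_epochs, sigma0, lr0 = 20, 2.0, 0.5
for epoch in range(n_epochs):
    sigma = sigma0 * (1 - epoch / n_epochs) + 0.5              # shrinking neighborhood
    lr = lr0 * (1 - epoch / n_epochs) + 0.01                   # decaying learning rate
    for x in X[rng.permutation(len(X))]:
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))         # best-matching unit
        d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)           # squared grid distance to BMU
        h = np.exp(-d2 / (2 * sigma ** 2))                     # neighborhood weights
        W += lr * h[:, None] * (x - W)                         # cooperative update

# Classification: label every sample with its winning neuron (1..25).
labels = 1 + np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)
```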

SOM Classification for Lithological Contrast Detection

The following six attributes were input to the SOM process with 25 classes (5 X 5) stipulated as the desired output: Envelope, Hilbert, Relative Acoustic Impedance, Sweetness, Amplitude, and Real Part.

As in the PCA analysis, the SOM was delimited to seismic samples (voxels) in a time window following the horizon of interest, specifically 56 milliseconds above to 32 milliseconds below. The resulting SOM classification volume was examined with several visualization and statistical analysis techniques to associate SOM classification patterns with reservoir rock.

3D and Plan Views

One way of identifying patterns or trends coherent with the sedimentary model of the area is to visualize all samples grouped by each neuron in 3D and plan views using a stratal-slicing technique throughout the reservoir. The Kohonen SOM and the 2D colormap in Figure 3 (lower right) ensure that the characteristics of neighboring neurons are similar. The upper part of Figure 3 shows groupings classified by all 5x5 (25) neurons comprising the neural network, while the lower part shows groupings interpreted to be associated with the reservoir, classified by a few neurons that are consistent with the regional sedimentary model, i.e., neurons N12, N13, N16, N17, N22, and N23.

Figure 3. Plan view of preliminary geobodies with geological significance from the Lithological Contrast SOM. Below: only neurons associated with the reservoir are shown.

Vertical Seismic Section Showing Lithological Contrast SOM

The observed lithology in the reservoir sand is predominantly made up of clayey sandstone. A discrete log for Net Reservoir was generated to calibrate the results of the Lithological Contrast SOM, using cut-off values for Clay Volume and Effective Porosity. Figure 4 shows the SOM classification of Lithological Contrast with the available well data in vertical section and plan view. The samples grouped by neurons N17, N21, and N22 match the Net Reservoir discrete logs. It is notable that only well W-3 (a minor producer) intersected the samples grouped by neuron N17 (light blue); the rest of the wells only intersected neurons N21 and N22. It is important to note that these features are not observed on the conventional seismic amplitude data (wiggle traces).

Figure 4. Vertical section composed of the Lithological Contrast SOM, the Amplitude attribute (wiggle), and the Net Reservoir discrete property along wells.

Stratigraphic Well Section

A cross-section containing the wells (Figure 5) shows logs of Gamma Ray, Clay Volume, perforations, resistivity, Effective Porosity, Net Reservoir with lithological contrast SOM classification, and Net Pay.
The results of the SOM were compared by observation with discrete well log data, relating specific neurons to the reservoir. At the target zone depth, only neurons N16, N17, N21, and N22 are present. It is noteworthy that only well W-3 (a minor producer) intersects the clusters formed by neuron N17 (light blue). The rest of the wells intersect neurons N16, N21, N22, and N23.

Statistical Analysis Vertical Proportion Curve (VPC)

Traditionally, Vertical Proportion Curves (VPC) are qualitative and quantitative tools used by some sedimentologists to define succession, division, and variability of sedimentary sequences from well data, since logs describe vertical and lateral evolution of facies (Viloria et al., 2002). A VPC can be modeled as an accumulative histogram where the bars represent the facies proportion present at a given level in a stratigraphic unit. As part of the quality control and revision of the SOM classification volume for Lithological Contrasts, this statistical technique was used to identify whether in the stratigraphic unit or in the window of interest, a certain degree of succession and vertical distribution of specific neurons observed could be related to the reservoir.

The main objective of this statistical method is to identify how specific neurons are vertically concentrated along one or more logs. As an illustration of the technique, a diagram of the stratigraphic grid is shown in Figure 6. The VPC was extracted from the whole 3D grid of the SOM classification volume for Lithological Contrast by counting the occurrences of each of the 25 neurons (classes) in every stratigraphic layer. The VPC of SOM neurons exhibits remarkably slowly varying characteristics indicative of geologic depositional patterns. The reservoir top corresponds to stratigraphic layer No. 16. In the VPC on the right, only neurons N16, N17, N21, and N22 are present. These neurons have a higher percentage occurrence relative to all 25 classes from the top of the target sand downwards. Corroborating the statistics, these same neural classes appear in the map view in Figure 3 and the vertical section shown in Figure 4. The stratigraphic well section in Figure 5 also supports the statistical results. It is important to note that these neurons also detected seismic samples above the top of the sand, although in a lesser proportion. This effect is consistent with the existence of layers with similar lithological characteristics, which can be seen from the well logs.
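A vertical proportion curve of this kind reduces to counting, per stratigraphic layer, how often each neuron occurs. The sketch below shows that bookkeeping on a placeholder grid; the 25-neuron count matches the text, while the grid shape and data are illustrative.

```python
import numpy as np

# som: SOM classification grid (stratigraphic layers x inlines x crosslines),
# neuron IDs 1..25; random placeholder standing in for the real grid.
rng = np.random.default_rng(0)
som = rng.integers(1, 26, size=(40, 100, 100))

n_neurons = 25
vpc = np.zeros((som.shape[0], n_neurons))
for k in range(som.shape[0]):
    counts = np.bincount(som[k].ravel(), minlength=n_neurons + 1)[1:]
    vpc[k] = counts / counts.sum()          # proportion of each neuron in layer k

# e.g., proportion of neuron N21 per layer, from the top of the grid downwards:
print(np.round(vpc[:, 20], 3))
```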

Figure 6. Vertical Proportion Curve to identify neurons related to reservoir rock.

Bivariate Statistical Analysis Cross Tabs

The first step in this methodology is a bivariate analysis through cross-tabs (contingency tables) to determine if two categorical variables are related, based on observing the extent to which the occurrence of one variable is repeated in the categories of the second. Given that one variable is analyzed in terms of another, a distinction must be made between dependent and independent variables. With cross-tab analysis, the possibilities are extended (in addition to the frequency analyses of each variable separately) to analyses of joint frequencies, in which the analysis unit is defined by the combination of two variables.

The result was obtained by extracting the SOM classification volume along the well paths and constructing a discrete well log with two categories: “Net Reservoir” and “not reservoir.” The distinction between “Net Reservoir” and “not reservoir” simply means that the dependent variable might have hydrocarbon storage capacity or not. In this case, the dependent variable corresponds to the neurons of the SOM classification volume for Lithological Contrast. It is an ordinal variable, since it has an established internal order, although the change from one category to another is not uniform; the neurons go from N1 to N25, organized in rows. The independent variable is Net Reservoir, which is also an ordinal variable. In this table, the values organized in rows correspond to neurons from the SOM classification volume for Lithological Contrast, and the columns hold the discrete “Net Reservoir” and “not reservoir” counts for each neuron. Table 2 shows that the highest Net Reservoir counts are associated with neurons N21 and N22, at 47.0% and 28.2% respectively. Conversely, lower counts of Net Reservoir are associated with neurons N17 (8.9%), N16 (7.8%), and N23 (8.0%).
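The contingency table itself amounts to a cross-tabulation of the paired categories once the SOM neuron and the discrete log state are matched sample by sample. The snippet below only illustrates that step; the per-sample pairs (and therefore the counts) are made up, not the values behind Table 2.

```python
import pandas as pd

# Illustrative per-sample pairs: (neuron along the well path, discrete log state).
df = pd.DataFrame({
    "neuron": ["N16", "N17", "N21", "N21", "N22", "N22", "N23", "N21", "N22", "N16"],
    "state":  ["Net Reservoir", "not reservoir", "Net Reservoir", "Net Reservoir",
               "Net Reservoir", "not reservoir", "not reservoir", "Net Reservoir",
               "Net Reservoir", "not reservoir"],
})

table = pd.crosstab(df["neuron"], df["state"])                           # raw counts
percent = pd.crosstab(df["neuron"], df["state"], normalize="columns") * 100
print(table)
print(percent.round(1))    # column-wise percentages, analogous to Table 2
```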

Table 2. Cross Tab for Lithological Contrast SOM versus Net reservoir.

Neuron N21 was detected at reservoir depth in wells W-2 (producer), W-4 (abandoned for technical reasons during drilling), W-5 (producer) and W-6 (producer). N21 showed higher percentages of occurrence in Net Reservoir, so this neuron could be identified as indicating the highest storage capacity. N22 was present in wells W-1 and W-6 at target sand depth but also detected in wells W-2, W-4 and W-5 in clay-sandy bodies overlying the highest quality zone in the reservoir. N22 was also detected in the upper section of target sand horizontally navigated by the W-6 well, which has no petrophysical evaluation. N17 was only detected in well W-3, a minor producer of oil, which was sedimentologically cataloged as lobular facies and had the lowest reservoir rock quality. N16 was detected in a very small proportion in wells W-4 (abandoned for technical reasons during drilling) and W-5 (producer). Finally, N23 was only detected towards the top of the sand in well W-6, and in clayey layers overlying it in the other wells. This is consistent with the observed percentage of 8% Net Reservoir, as shown in Table 2.

Chi-Square Independence Hypothesis Testing

After applying the cross-tab evaluation, this classified information was the basis of a Chi-Square test of independence to assess the association between the two categorical variables: Net Reservoir and SOM neurons. That is, the null hypothesis posits the absence of a relationship between the variables. The Chi-Square test compared the behavior of the observed frequencies for each Lithological Contrast neuron with respect to the Net Reservoir variable (grouped into “Net Reservoir” and “not reservoir”) against the frequency distribution theoretically expected under the null hypothesis.

As a starting point, the null hypothesis was that the Lithological Contrast SOM neuron occurrences are independent of the presence of Net Reservoir. If the calculated Chi-Square value is equal to or greater than the critical theoretical value, the null hypothesis must be rejected and the alternative hypothesis accepted. The results in Table 3 show that the calculated Chi-Square is greater than the theoretical critical value (296 ≥ 9.4, with four degrees of freedom at a 5% significance level), so the null hypothesis of independence between Net Reservoir and the SOM neurons is rejected, indicating a relationship between the Net Reservoir and Lithological Contrast SOM variables.

However, the test does not report the strength of the association (substantial, moderate, or poor). To measure the degree of correlation between the two variables, Pearson's Phi (φ) and Cramer's V (ν) were computed. Pearson's φ coefficient was estimated from Eq. 1.1.

\[
\varphi = \sqrt{\frac{\chi^2}{n}} \qquad \text{(Eq. 1.1)}
\]

where χ² is the calculated Chi-Square statistic and n is the number of cases.

Additionally, Cramer’s V was estimated using Eq. 1.2.

\[
\nu = \sqrt{\frac{\chi^2}{n \cdot \min(r-1,\ c-1)}} \qquad \text{(Eq. 1.2)}
\]

where r and c are the numbers of rows and columns in the contingency table.

In both cases, values near zero indicate a poor or weak relationship, while values close to one indicate a strong relationship. The authors obtained a value of 0.559 for both φ and Cramer's ν (Table 3). Based on this result, a moderate relationship between the two variables can be interpreted.
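For reference, the test statistic, its critical value, and both association measures can be computed directly from any contingency table with SciPy; the table below is illustrative (not the study's counts), so the printed numbers will not reproduce Table 3.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Rows = neurons (e.g. N16, N17, N21, N22, N23); columns = (Net Reservoir, not reservoir).
# Illustrative counts only.
observed = np.array([
    [ 78,  40],
    [ 89,  60],
    [470, 120],
    [282, 150],
    [ 80, 200],
])

chi2_stat, p_value, dof, expected = chi2_contingency(observed)
critical = chi2.ppf(0.95, dof)                     # theoretical value at 5% significance

n = observed.sum()
phi = np.sqrt(chi2_stat / n)                                           # Eq. 1.1
cramers_v = np.sqrt(chi2_stat / (n * (min(observed.shape) - 1)))       # Eq. 1.2

print(f"chi2 = {chi2_stat:.1f}, critical = {critical:.2f}, dof = {dof}, p = {p_value:.3g}")
print(f"phi = {phi:.3f}, Cramer's V = {cramers_v:.3f}")
```

With only two columns, min(r-1, c-1) = 1, so φ and Cramer's V coincide, which is why Table 3 reports the same value (0.559) for both.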

Table 3. Calculated and theoretical Chi-Square values and the associated correlation measures.

Box-and-Whisker Plots

Box-and-whisker plots were constructed to compare and understand the behavior of the petrophysical properties over the intervals where each neuron in the SOM volume intersects the well paths. These plots also quantify which neurons of interest respond to Net Reservoir and Net Pay properties (Figure 7). Five descriptive measures are shown in the box-and-whisker plot of each property:

• Median (thick black horizontal line)
• First quartile (lower limit of the box)
• Third quartile (upper limit of the box)
• Maximum value (upper end of the whisker)
• Minimum value (lower end of the whisker)

The graphs provide information about data dispersion (the longer the box and whiskers, the greater the dispersion) and also about data symmetry. If the median is relatively centered within the box, the distribution is symmetrical; if, on the contrary, it approaches the first or third quartile, the distribution is skewed toward that quartile. Finally, these graphs identify outlier observations that depart from the rest of the data in an unusual way (represented by dots and asterisks according to their distance from the data center). The horizontal dashed green line is the cut-off value for Effective Porosity (PIGN > 0.10), the dashed blue line represents the cut-off value for Clay Volume (VCL < 0.45), and the dashed beige line is the cut-off value for Water Saturation (SUWI < 0.65).

Based on these data and the resulting analysis, it can be inferred that neurons N16, N17, N21, N22, and N23 respond positively to Net Reservoir. Of these neurons, the most valuable predictors are N21 and N22, since they present lower clay content than neurons N16 and N23, which, together with N17, show higher Effective Porosity (Figure 7a). Neurons N21 and N22 are ascertained to represent the best reservoir rock quality. Finally, neuron N23 (Figure 7b) can be associated with rock that has storage capacity but is clayey and has high Water Saturation, which allows it to be discarded as a significant neuron. It is important to note that this analysis was conducted by accounting for the simultaneous occurrence of the petrophysical values (VCL, PIGN, and SUWI) on the neurons initially intersected (Figure 7a), then on the portion of the neurons that pass the Net Reservoir cut-off values (Figure 7b), and finally on the portion of the neurons that pass the Net Pay cut-off values (Figure 7c). For all these petrophysical reasons, the neurons to be considered as a reference to estimate the lateral and vertical distribution of Net Reservoir associated with the target sand are, in order of importance, N21, N22, N16, and N17.
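A box-and-whisker comparison like Figure 7 can be assembled by grouping a petrophysical property by the neuron it intersects and drawing the corresponding cut-off line. The matplotlib sketch below uses made-up Effective Porosity values purely to show the construction; only the neuron names and the 0.10 cut-off come from the text.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
neurons = ["N16", "N17", "N21", "N22", "N23"]
# Illustrative PIGN values sampled where each neuron intersects a well path.
pign_by_neuron = [rng.normal(loc=m, scale=0.03, size=200).clip(0, 0.35)
                  for m in (0.14, 0.16, 0.12, 0.11, 0.15)]

fig, ax = plt.subplots()
ax.boxplot(pign_by_neuron, labels=neurons, showfliers=True)
ax.axhline(0.10, linestyle="--", color="green", label="PIGN cut-off (0.10)")
ax.set_xlabel("SOM neuron")
ax.set_ylabel("Effective Porosity (PIGN)")
ax.legend()
plt.show()
```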

Figure 7. Comparison between neurons according to petrophysical properties: VCL (Clay Volume), PIGN (Effective Porosity) and SUWI (Water Saturation). a) SOM neurons for lithological contrast detection, b) Those that pass Net Reservoir cut-off and c) Those that pass Net Pay cut-off.

Simultaneous Seismic Inversion

During this study, a simultaneous prestack inversion was performed using the 3D seismic data and sonic logs in order to estimate seismic petrophysical attributes such as Acoustic Impedance (Zp), Shear Impedance (Zs), Density (Rho), and P- and S-wave velocities, among others. These are commonly used as indicators of lithology, possible fluids, and geomechanical properties. Figure 8a shows a scatter plot from well data of the Lambda Rho/Mu Rho ratio versus Clay Volume (VCL), with the Vp/Vs ratio as discriminator. The target sand corresponds to low Vp/Vs and Lambda/Mu values (circled in the figure). Another discriminator in the reservoir was the S-wave impedance (Zs) (Figure 8b). From this, the seismic inversion attributes selected for classification by SOM neural network analysis were the Vp/Vs ratio, the Lambda Rho/Mu Rho ratio, and Zs.

Figure 8. Scatter plots: a) Lambda Rho/Mu Rho ratio versus VCL and Vp/Vs, and b) Zs versus VCL and Vp/Vs.

Self-Organizing Map (SOM) Comparison

Figure 9 is a plan view of neuron-extracted geobodies associated with the sand reservoir. In the upper part, a SOM classification for Lithological Contrast detection obtained from six traditional seismic attributes is shown; in the lower part, a different SOM classification for Lithological Contrast detection was obtained from three simultaneous inversion attributes. Both results are very similar. The selection of SOM classification neurons from the inversion attributes was done through spatial pattern recognition, i.e., identifying the geometry/shape of the clusters related to each of the 25 neurons congruent with the sedimentary model, and by using a stratigraphic well section that includes both SOM classification tracks.

Figure 9. Plan view of neurons with geological meaning. Up: SOM Classification from traditional attributes. Down: SOM Classification from simultaneous inversion attributes.

Figure 10 shows a well section that includes tracks for the Net Reservoir and Net Pay classifications along with the SOM classification from traditional attributes and a second SOM from simultaneous inversion attributes, both defined from the intersection of the SOM volumes with the well paths. Only the neuron numbers with geological meaning are shown.

Figure 10. Well section showing the target zone with tracks for discrete logs from Net Reservoir, Net Pay and both SOM classifications.

Discussion and Conclusions

Principal Component Analysis (PCA) identified the most significant seismic attributes to be classified by the Self-Organizing Map (SOM) neural network on a single-sample basis to detect features associated with lithological contrast and to recognize the lateral and vertical extension of the reservoir. The interpretation of the SOM classification volumes was supported by multidisciplinary sources (geological, petrophysical, and dynamic data). In this way, the clusters detected by certain neurons became the inputs for geobody interpretation. The statistical analysis and visualization techniques enabled the estimation of Net Reservoir for each neuron. Finally, the extension of reservoir rock geobodies derived from the SOM classification of traditional attributes was corroborated by the SOM acting on simultaneous inversion attributes. Multi-attribute machine learning analysis of both traditional attributes and seismic inversion attributes enables refinement of the sedimentary model to reveal more precisely the lateral and vertical distribution of facies. However, the Lithological Contrast SOM results from traditional attributes showed a better level of detail compared with the seismic inversion SOM.

Collectively, the workflow may reduce uncertainty in proposing new drilling locations. Additionally, this methodology might be applied with specific attributes to identify faults and fracture zones, absorption phenomena, porosity changes, and direct hydrocarbon indicators, and to determine reservoir characteristics.

Acknowledgments

The authors thank Pemex and Oil and Gas Optimization for providing software and technical resources. Thanks are also extended to Geophysical Insights for the research and development of the Paradise® AI workbench and the machine learning applications used in this paper. Finally, thanks to Reinaldo Michelena, María Jerónimo, Tom Smith, and Hal Green for their review of the manuscript.

References

Agresti, A., 2002, Categorical Data Analysis: John Wiley & Sons.

Marroquín I., J.J. Brault and B. Hart, 2009, A visual data mining methodology to conduct seismic facies analysis: Part 2 – Application to 3D seismic data: Geophysics, 1, 13-23.

Roden R., T. Smith and D. Sacrey, 2015, Geologic pattern recognition from seismic attributes: Principal component analysis and self-organizing maps: Interpretation, 4, 59-83.

Viloria R. and M. Taheri, 2002, Metodología para la Integración de la Interpretación Sedimentológica en el Modelaje Estocástico de Facies Sedimentarias, (INT-ID-9973, 2002). Technical Report INTEVEP-PDVSA.

Machine Learning Applied to 3D Seismic Data from the Denver-Julesburg Basin Improves Stratigraphic Resolution in the Niobrara

By Carolan Laudon, Sarah Stanley, and Patricia Santogrossi
Published with permission: Unconventional Resources Technology Conference (URTeC 2019)
July 2019

Abstract

Seismic attributes can be both powerful and challenging to incorporate into interpretation and analysis. Recent developments with machine learning have added new capabilities to multi-attribute seismic analysis. In 2018, Geophysical Insights conducted a proof of concept on 100 square miles of multi-client 3D data jointly owned by Geophysical Pursuit, Inc. (GPI) and Fairfield Geotechnologies (FFG) in the Denver-Julesburg Basin (DJ). The purpose of the study was to evaluate the effectiveness of a machine learning workflow to improve resolution within the reservoir intervals of the Niobrara and Codell formations, the primary targets for development in this portion of the basin.

The seismic data are from Phase 5 of the GPI/Fairfield Niobrara program in northern Colorado. A preliminary workflow which included synthetics, horizon picking and correlation of 28 wells was completed. The seismic volume was re-sampled from 2 ms to 1 ms. Detailed well time-depth charts were created for the Top Niobrara, Niobrara A, B and C benches, Fort Hays and Codell intervals. The interpretations, along with the seismic volume, were loaded into the Paradise® machine learning application, and two suites of attributes were generated, instantaneous and geometric. The first step in the machine learning workflow is Principal Component Analysis (PCA). PCA is a method of identifying attributes that have the greatest contribution to the data and that quantifies the relative contribution of each. PCA aids in the selection of which attributes are appropriate to use in a Self-Organizing Map (SOM). In this case, 15 instantaneous attribute volumes, plus the parent amplitude volume, were used in the PCA and eight were selected to use in SOMs. The SOM is a neural network-based machine learning process that is applied to multiple attribute volumes simultaneously. The SOM produces a non-linear classification of the data in a designated time or depth window.

For this study, a 60-ms interval that encompasses the Niobrara and Codell formations was evaluated using several SOM topologies. One of the main drilling targets, the B chalk, is approximately 30 feet thick, making horizontal well planning and execution a challenge for operators. An 8X8 SOM applied to 1 ms seismic data improves the stratigraphic resolution of the B bench. The neuron classification also images small but significant structural variations within the chalk bench. These variations correlate visually with the geometric curvature attributes. This improved resolution allows for precise well planning for horizontals within the bench. The 25-foot-thick C bench and the 17- to 25-foot-thick Codell are also seismically resolved via SOM analysis. Petrophysical analyses from wireline logs run in seven wells within the survey by Digital Formation, together with additional results from the SOMs, show the capability to differentiate a high-TOC upper unit within the A marl, which presents an additional exploration target. Utilizing 2D color maps and geobodies extracted from the SOMs, combined with the petrophysical results, allows calculation of reserves for the individual reservoir units as well as for the recently identified high-TOC target within the A marl.

The results show that a multi-attribute machine learning workflow improves the seismic resolution within the Niobrara reservoirs of the DJ Basin and results can be utilized in both exploration and development.

Introduction and Preliminary Work

The Denver-Julesburg Basin is an asymmetrical foreland basin that covers approximately 70,000 square miles over parts of Colorado, Wyoming, Kansas and Nebraska. The basin has over 47,000 oil and gas wells with a production history that dates back to 1881 (Higley, 2015). In 2009, operators in the Wattenberg field began to drill and complete horizontal wells in the chalk benches of the Niobrara formation and within the Codell sandstone. As of October 2018, approximately 9500 horizontal wells have been drilled and completed within Colorado and Wyoming in the Niobrara and Codell formations (shaleprofile.com/2019/01/29/niobrara-co-wy-update-through-october-2018).

The transition to horizontal drilling necessitated the acquisition of modern, 3D seismic data (long offset, wide azimuth) to properly image the complex faulting and fracturing within the basin. In 2011, Geophysical Pursuit, Inc., in partnership with the former Geokinetics Inc., embarked on a multi-year, multi-client seismic program that ultimately resulted in the acquisition of 1580 square miles of contiguous 3D seismic data. In 2018, Geophysical Pursuit, Inc. (GPI) and joint-venture partner Fairfield Geotechnologies (FFG) provided Geophysical Insights with seismic data in the Denver-Julesburg Basin to conduct a proof-of-concept evaluation of the effectiveness of a machine learning workflow to improve resolution within the reservoir intervals of the Niobrara and Codell formations, currently the primary targets for development in this portion of the basin. The GPI/FFG seismic data analyzed are 100 square miles from the Niobrara Phase 5 multi-client 3D program in northern Colorado (Figure 1). Prior to the machine learning workflow, a preliminary interpretation workflow was carried out, which included synthetics, horizon picking and well correlation on 28 public wells with digital data. The seismic volume was resampled from 2 ms to 1 ms. Time-depth charts were made with detailed well ties for the Top Niobrara, Niobrara A, B, and C benches, Fort Hays and Codell. The interpretations, along with the re-sampled seismic amplitude volume, were loaded into the Paradise® machine learning application. The machine learning software has several options for computing seismic attributes, and two suites were selected for the study: standard instantaneous attributes and geometric attributes from the AASPI (Attribute Assisted Seismic Processing and Interpretation) consortium (http://mcee.ou.edu/aaspi/).

Figure 1: Map of GPI FFG multi-client program and study area outline

Geologic Setting of the Niobrara and Surrounding Formations

The Niobrara formation is late Cretaceous in age and was deposited in the Western Interior Seaway (Kauffman, 1977). The Niobrara is subdivided into the basal Fort Hays limestone and the Smoky Hill member. The Smoky Hill member is further subdivided into three subunits informally termed Niobrara A, B, and C. These units consist of fractured chalk benches, which are the primary reservoirs, with marls and shales between the benches, which comprise source rocks and secondary reservoir targets (Figure 2). The Niobrara unconformably overlies the Codell sandstone and is overlain by the Sharon Springs member of the Pierre shale.

The Codell is also late Cretaceous in age, and unconformably underlies the Fort Hays member of the Niobrara formation. In general, the Codell thins from north to south due to erosional truncation (Sterling, Bottjer and Smith, 2016). In the study area, the thickness of the Codell ranges from 18 to 25 feet. Lewis (2013) inferred an eastern provenance for the Codell with a limited area of deposition or subsequent erosion through much of the DJ Basin. Based upon geochemical analyses, Sterling and others (2016) state that hydrocarbons produced from the Codell are sourced from the Niobrara, primarily the C marl, and the thermal maturity provides evidence of migration into the Codell. The same study found that oil produced from the Niobrara C chalk was generated in-situ.

Figure 2 (Sonnenberg, 2015) shows a generalized stratigraphic column and a structure map for the Niobrara in the DJ Basin along with an outline of the DJ basin and the location of the Wattenberg Field within which the study area is contained.

Figure 2: Outline of the DJ Basin with Niobrara structure contours and generalized stratigraphic column that shows the source rock and reservoir intervals for late Cretaceous units in the basin (from Sonnenberg, 2015).

Figure 3 shows the structural setting of the Niobrara in the study area, as well as types of fractures which can be expected to provide storage capacity and permeability for reservoirs within the chalk benches (Friedman and others, 1992). The study area covers approximately 100 square miles and shows large antiforms on the western edge. The area is normally faulted with most faults trending northeast to southwest. The Top Niobrara time structure also shows extensive small-scale structural relief which is visualized in a curvature attribute volume as shown in Figure 4. This implies that a significant amount of fracturing is present within the Niobrara.

Figure 3: Gross structure of the Niobrara in the study area in seismic two-way travel time. Insets from Friedman and others, 1992, showing predicted fracture types from structural elements. Area shown is approximately 100 square miles.

Figure 4: Most positive curvature, K1 on top Niobrara. The faulting and fractures are complex with both NE-SW and NW-SE trends apparent. Area shown is approximately 100 square miles. Seismic data provided courtesy of GPI and FFG.

Meissner and others (1984) and Landon and others (2001) have stated that the Niobrara formation kerogen is Type-II and oil-prone. Landon and others, and Finn and Johnson (2005) have also stated that the DJ basin contains the richest Niobrara source rocks with TOC contents reaching eight weight percent. Niobrara petroleum production is dependent on fractures in the hard, brittle, carbonate-rich zones. These zones are overlain and/or interbedded with soft, ductile marine shales that inhibit migration and seal the hydrocarbons in the fractured zones.

Why Utilize Machine Learning?

In the study area, the Niobrara to Greenhorn section is represented in approximately 60 milliseconds of two-way travel time in the seismic data. Figure 5 shows an amplitude section through a well within the study area. Figure 6 is an index map of the wells used in the study, with the Anderson 11-2 well highlighted in red. It is apparent that the top Niobrara is a well-resolved positive amplitude or peak which can be picked on either a normal amplitude section or an instantaneous phase display. The individual units within the Niobrara A bench, A marl, B bench, B marl, C bench, C marl, Fort Hays and Codell present a significant challenge for an interpreter to resolve using only one or two attributes. The use of simultaneous multiple seismic attributes holds promise to resolve thin beds, and a machine learning approach is one methodology which has been documented to successfully resolve stratigraphy below tuning (Roden and others, 2015; Santogrossi, 2017).

Figure 5: Amplitude section shows the approximately 60 milliseconds between marked horizons which contain the Niobrara and Codell reservoirs. Trace spacing is 110 feet, vertical scale is two-way time in seconds. Seismic data are shown courtesy of GPI and FFG.

Figure 6: Index map of vertical wells used in study. The dashed lines connect well names to well locations. Wells were obtained from the Colorado Oil and Gas Conservation Commission public database.

Machine Learning Data Preparation

The Niobrara Phase 5 3D data used for this study consisted of a 32-bit seismic amplitude volume that covers approximately 100 square miles. The survey contained 5.118 seconds of data with a bin spacing of 110 feet. Machine learning classifications benefit from sharper natural clusters of information through one level of finer trace sampling, and machine-learned seismic resolution also benefits from sample-by-sample classification when compared to conventional wavelet analysis. Therefore, the data were upsampled from their original 2 ms interval to 1 ms by Geophysical Insights. The 1 ms amplitude data were used for seismic attribute generation.
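The 2 ms to 1 ms upsampling is conceptually a band-limited interpolation of each trace. The sketch below shows the idea on a single synthetic trace with SciPy; the actual resampling was done inside the interpretation software, so this is only an illustration.

```python
import numpy as np
from scipy.signal import resample

dt_in, dt_out = 0.002, 0.001                  # 2 ms in, 1 ms out
t_in = np.arange(0, 2.2, dt_in)
trace = np.sin(2 * np.pi * 14.0 * t_in)       # toy 14 Hz trace standing in for real data

# Band-limited (FFT-based) resampling to twice the number of samples.
trace_1ms = resample(trace, num=2 * len(trace))
t_out = np.arange(len(trace_1ms)) * dt_out
print(len(trace), "samples at 2 ms ->", len(trace_1ms), "samples at 1 ms")
```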

Focus should be placed on the time interval that encompasses the geologic units of interest. The time interval selected for this study was 0.5 seconds to 2.2 seconds.

A total of 44 digital wells were obtained, 40 of which were within the seismic survey.

Classification by Principal Component Analysis (PCA)

Multi-dimensional analysis and multi-attribute analysis go hand in hand. Because people are grounded in three-dimensional space, it is difficult to visualize what data in a higher-dimensional space look like. Fortunately, mathematics doesn't have this limitation, and the results can be easily understood with conventional 2D and 3D viewers.

Working with multiple instantaneous or geometric seismic attributes generates tremendous volumes of data. These volumes contain huge numbers of data points which may be highly continuous, greatly redundant, and/or noisy (Coleou et al., 2003). Principal Component Analysis (PCA) is a linear technique for data reduction which maintains the variation associated with the larger data sets (Guo and others, 2009; Haykin, 2009; Roden and others, 2015). PCA has the ability to separate attribute types by frequency, distribution, and even character. PCA technology is used to determine which attributes to use and which may be ignored due to their very low impact on neural network solutions.

Figure 7 illustrates the analysis of a data cluster in two directions offset by 90 degrees. The first principal component (eigenvector 1) analyzes the data cluster along its longest axis. The second principal component (eigenvector 2) analyzes the data cluster variations perpendicular to the first principal component. As stated in the diagram, each eigenvector is associated with an eigenvalue which shows how much variance there is in the data.

Figure 7: Two-attribute data set demonstrating the concept of PCA

Eigenvectors and eigenvalues from inline 1683 were consistently used for Principal Component Analysis because line 1683 bisected the deepest well in the study area. The entire pre-Niobrara, Niobrara, Codell, and post-Niobrara depositional events were present in the borehole.

PCA results for the first two eigenvectors for the interval Top Niobrara to Top Greenhorn are shown in Figure 8. Results show the most significant attributes in the first eigenvector are Sweetness, Envelope, and Relative Acoustic Impedance; each contributes approximately 60% of the maximum value for the eigenvector. PCA results for the second eigenvector show Thin Bed and Instantaneous Frequency are the most significant attributes. Figure 9 shows instantaneous attributes from the first eigenvector (sweetness) and second eigenvector (thin bed indicator) extracted near the B chalk of the Niobrara. The table shown in Figure 9 lists the instantaneous attributes that PCA indicated contain the most significance in the survey and the eigenvector associated with the attribute. This selection of attributes comprises a ‘recipe’ for input to the Self-Organizing Maps for the interval Niobrara to Greenhorn.

Figure 8: Eigenvalue charts for Eigenvectors 1 and 2 from PCA for Top Niobrara to Top Greenhorn. Attributes that contribute more than 50% of the maximum were selected for input to SOM

Figure 9: Instantaneous attributes near the Niobrara B chalk. These are prominent attributes in Eigenvectors 1 and 2. On the right of the figure is a list of eight selected attributes for SOM analysis. Seismic data is shown courtesy of GPI and FFG.

Self-Organizing Maps

Teuvo Kohonen, a Finnish mathematician, invented the concept of Self-Organizing Maps (SOM) in 1982 (Kohonen, 2001). Self-Organizing Maps employ unsupervised neural networks to reduce very high dimensions of data to a scale that can be easily visualized (Roden and others, 2015). Another important aspect of SOMs is that every seismic sample is used as input to the classification, as opposed to wavelet-based classification.

Figures 10 and 11 illustrate classification by SOM. Within the 3D seismic survey, samples are first organized into attribute points with similar properties, called natural clusters, in attribute space. Within each cluster, new, empty multi-attribute samples, named neurons, are introduced. The SOM neurons seek out natural clusters of like characteristics in the seismic data and produce a 2D mesh that can be illustrated with a two-dimensional color map.

Figure 10: Example SOM classification of two attributes into 4 clusters (neurons)

In other words, the neurons “learn” the characteristics of a data cluster through an iterative process (epochs) of cooperative and then competitive training. When the learning is completed, each unique cluster is assigned a neuron number, and each seismic sample is then classified (Smith, 2016).

Figure 11: Illustration of how SOM works with 3D seismic volumes

Note that the two-dimensional color map in Figure 11 shows an 8X8 topology. Topology is important: the finer the topology of the two-dimensional color map, the finer the data clusters associated with each neuron become. For example, an 8X8 topology distributes 64 neurons throughout an attribute set, while a 12X12 topology distributes 144 neurons. Finer topologies help to refine variations in lithologies, porosity, and other reservoir characteristics. Although there is no theoretical limit to a two-dimensional map topology, experience has shown that there is a practical limit to how fine a neuron topology remains useful for geological resolution. Conversely, a coarser neuron topology is associated with much larger data clusters and helps to define structural features. For the Niobrara project an 8X8 topology appeared to give the best stratigraphic resolution for instantaneous attributes, and a 5X5 topology resolved the geometric attributes most effectively.

SOM Results for the Survey and their Interpretation

The SOM topology selected to best resolve the sub-Niobrara stratigraphy from the eight instantaneous attributes is an 8X8 hexagonal which yields 64 individual neurons. The SOM interval selected was Top Niobrara to Top Greenhorn. The next sequence of figures highlights the improved resolution provided by the SOM when compared to the original amplitude data. Figure 12 shows a north-south inline through the survey and through the Rotharmel 11-33 well which was one of the wells selected for petrophysical analysis. The original amplitude data is shown along with the SOM result for the interval.

Figure 12: North-South inline showing the original amplitude data (upper) and the 8X8 SOM result (lower) from Top Niobrara through Greenhorn horizons. Seismic data is shown courtesy of GPI and FFG.

The next image, Figure 13, zooms into the SOM and highlights the correlation with lithology from petrophysical analysis. The B chalk is noted by a stacked pattern of yellow-red-yellow neurons, with the red representing the maximum carbonate content within the middle of the chalk bench.

Figure 13: 8X8 Instantaneous SOM through Rotharmel 11-33 with well log composite. The B bench, highlighted in green on the wellbore, ties the yellow-red-yellow sequence of neurons. Seismic data is shown courtesy of GPI and FFG.

One can see on the SOM the sweet spot within the B chalk and that there is a fair amount of small-scale structural relief present. These results aid in the resolution of structural offset within the reservoir away from well control, which is critical for staying in a 20 to 30 foot zone when drilling horizontally. Each classified sample is 1 ms in two-way time, which converted to depth equates to roughly 7 feet.
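That thickness figure follows from the two-way-time conversion Δz = vΔt/2. The interval velocity is not given in the paper; a value on the order of 14,000 ft/s, assumed here only to reproduce the quoted number, gives:

\[
\Delta z \;=\; \frac{v\,\Delta t}{2} \;\approx\; \frac{14{,}000\ \text{ft/s} \times 0.001\ \text{s}}{2} \;=\; 7\ \text{ft}.
\]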

Figure 14 shows the K2 curvature attribute co-rendered with the SOM results in vertical sections. The Rotharmel 11-33 is at the intersection of the vertical sections. The curvature is extracted at the middle of the B chalk and shows good agreement with the SOM. The entire B bench is represented by only 5-6 ms of seismic data.

Figure 14: Most negative curvature, K2 rendered at the middle of the B chalk. Vertical sections are an 8X8 instantaneous SOM Top Niobrara to Top Greenhorn. Seismic data is shown courtesy of GPI and FFG.

A Marl Results

Seven wells within the survey were sent to a third party for petrophysical analysis (Figure 15). The analysis identified zones of interest within the Niobrara marls which are typically considered source rocks. The calculations show a high TOC zone in the upper A marl which the analysis identifies as shale pay (Figure 16). A seismic cross-section of the 8X8 instantaneous SOM (Figure 16) through the three wells depicted shows that this zone is well imaged. The neurons can be isolated and volumetric calculations derived from the representative neurons.

Figure 15: Index map for wells used in petrophysical analysis (in red)

Figure 16: Petrophysical results and SOM for three wells in the study area. The TOC curve (Track 12) and Shale pay curve (Track 10), highlighted in yellow, indicate the Upper A marl is both a rich source rock and a potential shale reservoir. Seismic data is shown courtesy of GPI and FFG.

Codell Results

The Codell sandstone in general and within the study area shows more heterogeneity in reservoir properties than the Niobrara chalk benches. The petrophysical analysis on 7 wells shows net pay ranging from zero feet to three feet. The gross thickness ranges from 17 feet to 25 feet. The SOM results reflect this heterogeneity, resolve the Codell gross interval throughout most of the study area, and thus, can be useful for horizontal well planning.

Figures 17 and 18 show inline 60 through a well with the Top Niobrara to Greenhorn 8X8 SOM results. The 2D color map has been manipulated to emphasize the lower interval, from approximately the base Niobrara through the Codell. Figure 18 zooms into the well and shows the specific neurons associated with the Codell interval. Figure 19 shows a N-S traverse through four wells, again with the Codell interval highlighted through use of a 2D color map. The western and southwestern areas of the survey show a much more continuous character to the classification, with only two neurons representing the Codell interval (6 and 48). Figure 20 shows both the N-S traverse and a crossline through the anomaly.

Figure 17: Instantaneous 8X8 SOM, Top Niobrara to Greenhorn. Seismic data is shown courtesy of GPI and FFG.

Figure 18: Detailed look at the Codell portion of the SOM at the Haythorn 4-12 with GR in background. The 2D color map shows how neurons can be isolated to show a specific stratigraphic interval. Seismic data is shown courtesy of GPI and FFG.

Figure 19: Traverse through 4 wells in the western part of the study area showing the isolation of the Codell sandstone within the SOM. The southwest part of the line shows the Codell being represented by only 2 neurons (6 and 48). The colormap can be interrogated to determine which attributes contribute to any given neuron. Seismic data is shown courtesy of GPI and FFG.

Figure 20: View of the SW Codell anomaly where the neuron stacking pattern changes to two neurons only (6 and 47). Seismic data is shown courtesy of GPI and FFG.

Figure 21: 3D view of neurons isolated from the SOM in the Codell interval. The areas where red is prominent and continuous show the extent of Codell represented by neurons 6 and 47 only. Also, an area in the eastern part of the study is outlined. The Codell is not represented in this area by the six neurons highlighted in the 2D color map. Seismic data is shown courtesy of GPI and FFG.

Unfortunately, vertical well control was not available through this southwestern anomaly. To examine the extent of individual neurons within the SOM at Codell level, the next image, Figure 21, shows a 3D view of the isolated Codell neurons. The southwest anomaly is apparent as well as similar anomalies in the northern portion of the survey. What is also immediately apparent is that in the east-central portion of the survey, the Codell is not represented by the six neurons (6,7,47, 48, 55, 56) previously used to isolate it within the volume. Figure 22 takes a closer look at the SOM results through this area and also utilizes the original amplitude data. Both the SOM and the amplitude data show a change in character throughout the entire section, but the SOM results only change significantly in the lower Niobrara to Greenhorn portion of the interval.

The machine learning application has a feature in which individual neurons can be queried for statistics on how individual seismic attributes contribute to the cluster which makes up the neuron. Queries were done on all of the neurons within the Codell; shown are the results for neuron 6, which is one of the two neurons characteristic of the southwestern Codell anomaly, and for neuron 61 in the area where the SOM changes significantly (Figure 23). Neuron 6 has equal contributions from Instantaneous Frequency, Hilbert, Thin Bed, and Relative Acoustic Impedance. Neuron 61 shows Instantaneous Q as the top attribute, which is consistent with the interpretation of the section being structurally disturbed or highly fractured.
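One generic way to approximate that kind of per-neuron query outside the application is to group the classified samples by winning neuron and summarize the standardized attribute values; a large mean |z-score| flags an attribute that sets the neuron's cluster apart. This is only a stand-in for the software's built-in statistics, with made-up data and with attribute names taken from the text.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
attrs = ["Instantaneous Frequency", "Hilbert", "Thin Bed",
         "Relative Acoustic Impedance", "Instantaneous Q"]

# One row per classified sample: attribute values plus its winning neuron (8x8 -> 1..64).
df = pd.DataFrame(rng.normal(size=(5_000, len(attrs))), columns=attrs)
df["neuron"] = rng.integers(1, 65, size=len(df))

# Z-score each attribute, then average per neuron; the absolute mean is a rough
# indicator of how strongly each attribute characterizes that neuron's cluster.
z = (df[attrs] - df[attrs].mean()) / df[attrs].std()
contribution = z.groupby(df["neuron"]).mean().abs()

print(contribution.loc[[6, 61]].round(2))     # e.g. inspect neurons 6 and 61
```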

Figure 22: West-East crossline through two wells showing the SOM and amplitude data through the blank area from Figure 23. The seismic character and classification results differ significantly in this portion of the survey for the lower Niobrara, Fort Hays and Codell. This area is interpreted to be highly fractured. Seismic data is shown courtesy of GPI and FFG.

Figure 23: Example of attribute details for individual neurons (6 and 61). This shows the contribution of individual attributes to the neuron.

Structural Attributes

The machine learning workflow can also be applied to geometric attributes. PCA and SOM need to be run separately from the instantaneous attributes, since PCA assumes a Gaussian distribution of the attributes. This assumption doesn't hold for geometric attributes, but the SOM process does not assume any distribution and thus still finds patterns in the data. To produce a structural SOM, four attributes were selected from PCA: Curvature_K1, Energy Ratio Similarity, Texture Entropy, and Texture Homogeneity. These were combined with the original amplitude data to generate SOMs over the Top Niobrara to Top Greenhorn interval. Several SOM topologies were generated with the geometric attributes, and a 5X5 yielded good results. Figure 24 shows the geometric SOM results at the Top Niobrara, B bench, and Codell levels. The Top Niobrara level shows the major faults, but not nearly as much structural disturbance as the mid-Niobrara B bench or the Codell level. The eastern part of the survey where the instantaneous classification changed also shows significant differences between the B bench and the Codell, and agrees with the interpretation that this is a highly fractured area for the lower Niobrara and Codell. The B bench appears more structurally disrupted than the Top Niobrara but shows fewer areal changes compared to the Codell. Pressure and production data could help confirm how these features relate to reservoir quality.

Figure 24: 5X5 structural SOM at three levels. There are significant changes both vertically and areally.

Conclusions

Seismic multi-attribute analysis has always held the promise of improving interpretations via the integration of attributes which respond to subsurface conditions such as stratigraphy, lithology, faulting, fracturing, fluids, pressure, etc. Machine learning augments traditional interpretation and attribute analysis by utilizing attribute space to simultaneously classify suites of attributes into sample-based, high-dimension clusters that are subsequently visualized and further interpreted in the 3D seismic survey. 2D colormaps aid in their interpretation and visualization.

In the DJ Basin, we have resolved the primary reservoir targets, the Niobrara chalk benches and the Codell formation, represented within approximately 60 ms of data in two-way time, to the level of one to five neurons, which is approximately 7 to 35 feet in thickness. Structural SOM classifications with a suite of geometric attributes better image the complex faulting and fracturing and its variations throughout the reservoir interval. The classification volumes are designed to aid in drilling target identification, reserves calculations and horizontal well planning.

Acknowledgements

The authors would like to thank their colleagues at Geophysical Insights for their valuable insight and suggestions and Digital Formation for the petrophysical analysis. We also thank Geophysical Pursuit, Inc. and Fairfield Geotechnologies for use of their data and permission to publish this paper.

References

Coleou, T., M. Poupon, and A. Kostia, 2003, Unsupervised seismic facies classification: A review and comparison of techniques and implementation: The Leading Edge, 22, 942–953, doi: 10.1190/1.1623635.

Finn, T. M. and Johnson, R. C., 2005, Niobrara Total Petroleum System in the Southwestern Wyoming Province, Chapter 6 of Petroleum Systems and Geologic Assessment of Oil and Gas in the Southwestern Wyoming Province, Wyoming, Colorado, and Utah, USGS Southwestern Wyoming Province Assessment Team, U.S. Geological Survey Digital Data Series DDS–69–D.

Guo, H., K. J. Marfurt, and J. Liu, 2009, Principal component spectral analysis: Geophysics, 74, no. 4, p. 35–43.

Haykin, S., 2009, Neural networks and learning machines, 3rd ed.: Pearson.

Kauffman, E.G., 1977, Geological and biological overview— Western Interior Cretaceous Basin, in Kauffman, E.G., ed., Cretaceous facies, faunas, and paleoenvironments across the Western Interior Basin: The Mountain Geologist, v. 14, nos. 3 and 4, p. 75–99.

Kohonen, T., 2001, Self-organizing maps: Third extended edition, Springer Series in Information Sciences, Vol. 30.

Landon, S.M., Longman, M.W., and Luneau, B.A., 2001, Hydrocarbon source rock potential of the Upper Cretaceous Niobrara Formation, Western Interior Seaway of the Rocky Mountain region: The Mountain Geologist, v. 38, no. 1, p. 1–18.

Lewis, R.K., 2013, Stratigraphy and Depositional Environments of the Late Cretaceous (Late Turonian) Codell Sandstone and Juana Lopez Member of the Carlile Shale, Southeast Colorado: Colorado School of Mines MS Thesis, 190 p.

Longman, M.W., Luneau, B.A., and Landon, S.M., 1998, Nature and distribution of Niobrara lithologies in the Cretaceous Western Interior Seaway of the Rocky Mountain Region: The Mountain Geologist, v. 35, no. 4, p. 137–170.

Luneau, B., Longman, M., Kaufman, P., and Landon, S., 2011, Stratigraphy and Petrophysical Characteristics of the Niobrara Formation in the Denver Basin, Colorado and Wyoming, AAPG Search and Discovery Article #50469.

Meissner, F.F., Woodward, J., and Clayton, J.L., 1984, Stratigraphic relationships and distribution of source rocks in the greater Rocky Mountain region, in Woodward, J., Meissner, F.F., and Clayton, J.L., eds., Hydrocarbon source rocks of the greater Rocky Mountain region: Rocky Mountain Association of Geologists Guidebook, p. 1–34.

Molenaar, C.M., and Rice, D.D., 1988, Cretaceous rocks of the Western Interior Basin, in Sloss, L.L., ed., Sedimentary cover-North American craton, U.S.: Geological Society of America, The Geology of North America, v. D–2, p. 77–82.

Roden, R., Smith, T., and Sacrey, D., 2015, Geologic pattern recognition from seismic attributes: Principal component analysis and self-organizing maps, Interpretation, Vol. 3, No. 4, p. SAE59-SAE83.

Santogrossi, P., 2017, Classification/Corroboration of Facies Architecture in the Eagle Ford Group: A Case Study in Thin Bed Resolution, URTeC 2696775, doi: 10.15530/urtec-2017-2696775.

Smith, T., 2016, Why SOM is an Appealing Learning Machine, Internal Geophysical Insights Paper.

Sonnenberg, S.A., 2015, Geologic Factors Controlling Production in the Codell Sandstone, Wattenberg Field, Colorado: URTeC Paper 2145312 presented at the Unconventional Resources Technology Conference, San Antonio, TX, July 20-22.

Sonnenberg, S.A., 2015, New reserves in an old field, the Niobrara/Codell resource plays in the Wattenberg Field, Denver Basin, Colorado: EAGE First Break, v. 33, p. 55-62.

Sterling, R., Bottjer, R. and Smith, K., 2016, Codell SS, A review of the Northern DJ oil resource play Laramie County, WY and Weld, County, CO, AAPG Search and Discovery Article #10754.

Applications of Machine Learning for Geoscientists – Permian Basin


By Carrie Laudon
Published with permission: Permian Basin Geophysical Society 60th Annual Exploration Meeting
May 2019

Abstract

Over the last few years, because of the increase in low-cost computer power, individuals and companies have stepped up investigations into the use of machine learning in many areas of E&P. For the geosciences, the emphasis has been on reservoir characterization, seismic data processing, and, to a lesser extent, interpretation. The benefits of using machine learning (whether supervised or unsupervised) have been demonstrated throughout the literature, and yet the technology is still not a standard workflow for most seismic interpreters. This lack of uptake can be attributed to several factors, including a lack of software tools, clear and well-defined case histories, and training. Fortunately, all these factors are being mitigated as the technology matures. Rather than looking at machine learning as an adjunct to the traditional interpretation methodology, machine learning techniques should be considered the first step in the interpretation workflow.

By using statistical tools such as Principal Component Analysis (PCA) and Self-Organizing Maps (SOM), a multi-attribute 3D seismic volume can be “classified”. The PCA reduces a large set of seismic attributes, both instantaneous and geometric, to those that are the most meaningful. The output of the PCA serves as the input to the SOM, a form of unsupervised neural network, which, when combined with a 2D color map, facilitates the identification of clustering within the data volume. When the correct “recipe” is selected, the clustered or classified volume allows the interpreter to view and separate geological and geophysical features that are not observable in traditional seismic amplitude volumes. Seismic facies, detailed stratigraphy, direct hydrocarbon indicators, faulting trends, and thin beds are all features that can be enhanced by using a classified volume.

The tuning-bed thickness or vertical resolution of seismic data traditionally is based on the frequency content of the data and the associated wavelet. Seismic interpretation of thin beds routinely involves estimation of tuning thickness and the subsequent scaling of amplitude or inversion information below tuning. These traditional below-tuning-thickness estimation approaches have limitations and require assumptions that limit accuracy. The below-tuning effects are a result of the interference of wavelets, which are a function of the geology as it changes vertically and laterally. However, numerous instantaneous attributes exhibit effects at and below tuning, but these are seldom incorporated in thin-bed analyses. A seismic multi-attribute approach employs self-organizing maps to identify natural clusters from combinations of attributes that exhibit below-tuning effects. These results may exhibit changes as thin as a single sample interval in thickness. Self-organizing maps employed in this fashion analyze associated seismic attributes on a sample-by-sample basis and identify the natural patterns or clusters produced by thin beds. Examples of this approach to improve stratigraphic resolution in both the Eagle Ford play and the Niobrara reservoir of the Denver-Julesburg Basin are used to illustrate the workflow.

Introduction

Seismic multi-attribute analysis has always held the promise of improving interpretations via the integration of attributes which respond to subsurface conditions such as stratigraphy, lithology, faulting, fracturing, fluids, pressure, etc. The benefits of using machine learning (whether supervised or unsupervised) have been demonstrated throughout the literature, and yet the technology is still not a standard workflow for most seismic interpreters. This lack of uptake can be attributed to several factors, including a lack of software tools, clear and well-defined case histories, and training. This paper focuses on an unsupervised machine learning workflow utilizing Self-Organizing Maps (Kohonen, 2001) in combination with Principal Component Analysis to produce classified seismic volumes from multiple instantaneous attribute volumes. The workflow addresses several significant issues in seismic interpretation: it analyzes large amounts of data simultaneously; it determines relationships between different types of data; it is sample based and produces high-resolution results; and it reveals geologic features that are difficult to see in conventional approaches.

Principal Component Analysis (PCA)

Multi-dimensional analysis and multi-attribute analysis go hand in hand. Because individuals are grounded in three-dimensional space, it is difficult to visualize what data in a higher-dimensional space look like. Fortunately, mathematics does not have this limitation, and the results can be easily understood with conventional 2D and 3D viewers.

Working with multiple instantaneous or geometric seismic attributes generates tremendous volumes of data. These volumes contain huge numbers of data points which may be highly continuous, greatly redundant, and/or noisy (Coleou et al., 2003). Principal Component Analysis (PCA) is a linear technique for data reduction which maintains the variation associated with the larger data sets (Guo and others, 2009; Haykin, 2009; Roden and others, 2015). PCA can separate attribute types by frequency, distribution, and even character. PCA technology is used to determine which attributes may be ignored due to their very low impact on neural network solutions and which attributes are most prominent in the data. Figure 1 illustrates the analysis of a data cluster in two directions, offset by 90 degrees. The first principal component (eigenvector 1) analyzes the data cluster along its longest axis. The second principal component (eigenvector 2) analyzes the data cluster variations perpendicular to the first principal component. As stated in the diagram, each eigenvector is associated with an eigenvalue which indicates how much of the variance in the data it accounts for.

Figure 1. Two-attribute data set illustrating the concept of PCA.
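The idea in Figure 1 can be reproduced with a few lines of code. The sketch below, a minimal illustration using numpy on a synthetic two-attribute point cloud (the attribute values are invented for demonstration and are not survey data), eigen-decomposes the covariance matrix to recover the two principal directions and their variances.

```python
# Minimal sketch of the PCA concept in Figure 1: find the two principal
# directions of a synthetic two-attribute point cloud (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)
attr1 = rng.normal(0.0, 1.0, 1000)                  # synthetic attribute 1
attr2 = 0.8 * attr1 + rng.normal(0.0, 0.4, 1000)    # correlated attribute 2
X = np.column_stack([attr1, attr2])

# Center the data, then eigen-decompose the covariance matrix
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)              # ascending order
order = np.argsort(eigvals)[::-1]                   # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("eigenvector 1 (longest axis):", eigvecs[:, 0])
print("eigenvector 2 (perpendicular):", eigvecs[:, 1])
print("fraction of variance explained:", eigvals / eigvals.sum())
```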

The next step in the PCA analysis is to review the eigen spectrum to select the most prominent attributes in a data set. The following example is taken from a suite of instantaneous attributes over the Niobrara formation within the Denver-Julesburg Basin. Results for eigenvector 1 are shown in Figure 2, with three attributes (sweetness, envelope, and relative acoustic impedance) being the most prominent.

Figure 2. Results from PCA for first eigenvector in a seismic attribute data set

Utilizing a cutoff of 60% in this example, attributes were selected from the PCA for input to the neural network classification. For the Niobrara, eight instantaneous attributes from four of the first six eigenvectors were chosen and are shown in Table 1. The PCA allowed identification of the most significant attributes from an initial group of 19 attributes.
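How the 60% cutoff is applied is not spelled out in the text, so the snippet below is only one plausible reading: within each prominent eigenvector, keep the attributes whose absolute loading is at least 60% of that eigenvector's largest loading. The attribute names and random data are hypothetical stand-ins.

```python
# Hypothetical sketch of a 60% loading cutoff for attribute selection.
# The exact cutoff convention used in Paradise is assumed, not documented here.
import numpy as np
from sklearn.decomposition import PCA

attribute_names = ["sweetness", "envelope", "rel_acoustic_impedance",
                   "inst_frequency", "thin_bed", "hilbert"]       # illustrative
X = np.random.default_rng(1).normal(size=(5000, len(attribute_names)))

pca = PCA().fit(X)
selected = set()
for vec in pca.components_[:4]:                   # examine the first few eigenvectors
    loadings = np.abs(vec)
    keep = loadings >= 0.6 * loadings.max()       # 60% cutoff within this eigenvector
    selected.update(np.array(attribute_names)[keep])
print(sorted(selected))                           # attributes passed to the SOM
```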

Table 1: Results from PCA for the Niobrara interval showing which instantaneous attributes will be used in the Self-Organizing Map (SOM).

Self-Organizing Maps

Teuvo Kohonen, a Finnish mathematician, invented the concepts of Self-Organizing Maps (SOM) in 1982 (Kohonen, 2001). Self-Organizing Maps employ unsupervised neural networks to reduce very high dimensions of data to a classification volume that can be easily visualized (Roden and others, 2015). Another important aspect of SOMs is that every seismic sample is used as input to the classification, as opposed to waveform-shape classification.

Figure 3 diagrams the SOM concept for 10 attributes derived from a 3D seismic amplitude volume. Within the 3D seismic survey, samples are first organized into attribute points with similar properties, called natural clusters, in attribute space. Within each cluster, new, empty, multi-attribute samples, named neurons, are introduced. The SOM neurons seek out the natural clusters of like characteristics in the seismic data and produce a 2D mesh that can be illustrated with a two-dimensional color map. In other words, the neurons “learn” the characteristics of a data cluster through an iterative process (epochs) of cooperative then competitive training. When the learning is completed, each unique cluster is assigned to a neuron number and each seismic sample is then classified (Smith, 2016).

Figure 3. Illustration of the concept of a Self-Organizing Map
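The competitive and cooperative steps described above can be written compactly. The following numpy sketch is a bare-bones SOM training loop on an assumed samples-by-attributes matrix; it illustrates the mechanics only and omits the refinements (convergence tests, splitting of training epochs, harvesting) used in commercial implementations such as Paradise.

```python
# Compact numpy sketch of SOM training on multi-attribute samples.
# X is (n_samples, n_attributes); the map is an 8x8 grid of neurons.
import numpy as np

def train_som(X, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    W = rng.normal(size=(rows * cols, X.shape[1]))     # random initial neurons
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)             # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs)       # shrinking neighborhood
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # competitive step: winner
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)    # distance on the 2D map
            h = np.exp(-d2 / (2 * sigma ** 2))            # cooperative neighborhood
            W += lr * h[:, None] * (x - W)                # pull neurons toward sample
    return W, grid

# Classify each sample by its best-matching neuron (its winning cluster)
X = np.random.default_rng(1).normal(size=(2000, 10))      # hypothetical 10 attributes
W, grid = train_som(X)
labels = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
```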

Figures 4 and 5 show a simple example using two attributes, amplitude and the Hilbert transform, on a synthetic data set. Synthetic reflection coefficients are convolved with a simple wavelet, 100 traces are created, and noise is added. When the attributes are cross plotted, clusters of points can be seen in the cross plot. The colored cross plot shows the attributes after SOM classification into 64 neurons with random colors assigned. In Figure 5, the individual clusters are identified and mapped back to the events on the synthetic. The SOM has correctly distinguished each event in the synthetic.

Figure 4. Two attribute synthetic example of a Self-Organizing Map. The amplitude and Hilbert transform are cross plotted. The colored cross plot shows the attributes after classification into 64 neurons by SOM.

Figure 5. Synthetic SOM example with neurons identified by number and mapped back to the original synthetic data
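A synthetic of the kind used in Figures 4 and 5 can be generated in a few lines. The sketch below builds a sparse reflectivity series, convolves it with an assumed 30 Hz Ricker wavelet (the paper only says "a simple wavelet"), adds noise, and computes the Hilbert transform as the second attribute; the resulting sample pairs are what would be cross plotted.

```python
# Sketch of the two-attribute synthetic in Figures 4 and 5: reflection
# coefficients convolved with a wavelet, noise added, and the Hilbert
# transform computed as the second attribute.  Parameters are assumptions.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
dt, nsamp, ntraces = 0.002, 500, 100

# Assumed 30 Hz Ricker wavelet standing in for "a simple wavelet"
f0 = 30.0
tw = np.arange(-0.1, 0.1, dt)
wavelet = (1 - 2 * (np.pi * f0 * tw) ** 2) * np.exp(-(np.pi * f0 * tw) ** 2)

# Sparse synthetic reflectivity, identical for all traces, plus noise
refl = np.zeros(nsamp)
refl[[100, 180, 260, 340, 420]] = [0.8, -0.6, 0.5, -0.4, 0.7]
traces = np.array([np.convolve(refl, wavelet, mode="same")
                   + rng.normal(0, 0.02, nsamp) for _ in range(ntraces)])

amplitude = traces                                   # attribute 1
hilbert_attr = np.imag(hilbert(traces, axis=1))      # attribute 2 (90-degree phase)
# Cross plotting amplitude against hilbert_attr sample by sample shows the clusters
```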

Results for Niobrara and Eagle Ford

In 2018, Geophysical Insights conducted a proof of concept on 100 square miles of multi-client 3D data jointly owned by Geophysical Pursuit, Inc. (GPI) and Fairfield Geotechnologies (FFG) in the Denver-Julesburg Basin (DJ). The purpose of the study was to evaluate the effectiveness of a machine learning workflow to improve resolution within the reservoir intervals of the Niobrara and Codell formations, the primary targets for development in this portion of the basin. An amplitude volume was resampled from 2 ms to 1 ms and, along with horizons, loaded into the Paradise® machine learning application, where attributes were generated. PCA was used to identify which attributes were most significant in the data, and these were used in a SOM to evaluate the interval Top Niobrara to Greenhorn (Laudon and others, 2019).

Figure 6 shows the results of an 8X8 SOM classification of eight instantaneous attributes over the Niobrara interval along with the original amplitude data. Figure 7 shows the same results with a well composite focused on the B chalk, the best section of the reservoir, which is difficult to resolve with individual seismic attributes. The SOM classification has resolved the chalk bench as well as other stratigraphic features within the interval.

Figure 6. North-South Inline showing the original amplitude data (upper) and the 8X8 SOM result (lower) from Top Niobrara through Greenhorn horizons. Seismic data is shown courtesy of GPI and FFG.

Figure 7. 8X8 Instantaneous SOM through Rotharmel 11-33 with well log composite. The B bench, highlighted in green on the wellbore, ties the yellow-red-yellow sequence of neurons. Seismic data is shown courtesy of GPI and FFG.

Figure 8. 8X8 SOM results through the Eagle Ford. The primary target, the Lower Eagle Ford shale, had 16 neuron classes over 14-29 milliseconds of data. Seismic data shown courtesy of Seitel.

The results shown in Figure 9 reveal non-layer-cake facies bands that include details in the Eagle Ford's basal clay-rich shale, the high-resistivity and low-resistivity Eagle Ford shale objectives, the Eagle Ford ash, and the upper Eagle Ford marl, which are overlain disconformably by the Austin Chalk.

Figure 9. Eagle Ford SOM classification shown with well results. The SOM resolves a high resistivity interval, overlain by a thin ash layer and finally a low resistivity layer. The SOM also resolves complex 3-dimensional relationships between these facies

Convolutional Neural Networks (CNN)

A promising development in machine learning is supervised classification via the application of convolutional neural networks (CNNs). Supervised methods have, in the past, not been efficient due to the laborious task of training the neural network. CNN is a deep learning approach to seismic classification; here it is applied to fault detection on seismic data. The examples that follow show CNN fault detection results which did not require any interpreter-picked faults for training; rather, the network was trained using synthetic data. Two results are shown, one from the North Sea, Figure 10, and one from the Great South Basin, New Zealand, Figure 11.

Figure 10. Side by side comparison of coherence attribute to CNN fault probability attribute, North Sea

Figure 11. Comparison of Coherence to CNN fault probability attribute, New Zealand

Conclusions

Advances in compute power and algorithms are making the use of machine learning available on the desktop to seismic interpreters to augment their interpretation workflow. Taking advantage of today’s computing technology, visualization techniques, and an understanding of machine learning as applied to seismic data, PCA combined with SOMs efficiently distill multiple seismic attributes into classification volumes. When applied on a multi-attribute seismic sample basis, SOM is a powerful nonlinear cluster analysis and pattern recognition machine learning approach that helps interpreters identify geologic patterns in the data and has been able to reveal stratigraphy well below conventional tuning thickness.

In the fault interpretation domain, recent development of a Convolutional Neural Network that works directly on amplitude data shows promise to efficiently create fault probability volumes without the requirement of a labor-intensive training effort.

References

Coleou, T., M. Poupon, and A. Kostia, 2003, Unsupervised seismic facies classification: A review and comparison of techniques and implementation: The Leading Edge, 22, 942–953, doi: 10.1190/1.1623635.

Guo, H., K. J. Marfurt, and J. Liu, 2009, Principal component spectral analysis: Geophysics, 74, no. 4, 35–43.

Haykin, S., 2009, Neural networks and learning machines, 3rd ed.: Pearson.

Kohonen, T., 2001, Self-organizing maps: Third extended edition, Springer Series in Information Sciences, Vol. 30.

Laudon, C., Stanley, S., and Santogrossi, P., 2019, Machine Learning Applied to 3D Seismic Data from the Denver-Julesburg Basin Improves Stratigraphic Resolution in the Niobrara, URTeC 337, in press.

Roden, R., and Santogrossi, P., 2017, Significant Advancements in Seismic Reservoir Characterization with Machine Learning, The First, v. 3, p. 14-19

Roden, R., Smith, T., and Sacrey, D., 2015, Geologic pattern recognition from seismic attributes: Principal component analysis and self-organizing maps, Interpretation, Vol. 3, No. 4, p. SAE59-SAE83.

Santogrossi, P., 2017, Classification/Corroboration of Facies Architecture in the Eagle Ford Group: A Case Study in Thin Bed Resolution, URTeC 2696775, doi: 10.15530/urtec-2017-2696775.

Seismic Facies Classification Using Deep Convolutional Neural Networks


By Tao Zhao
Published with permission: SEG International Exposition and 88th Annual Meeting
October 2018

Summary

Convolutional neural networks (CNNs) are a type of supervised learning technique that can be directly applied to amplitude data for seismic data classification. The high flexibility in CNN architecture enables researchers to design different models for specific problems. In this study, I introduce an encoder-decoder CNN model for seismic facies classification, which classifies all samples in a seismic line simultaneously and provides superior seismic facies quality compared to traditional patch-based CNN methods. I compare the encoder-decoder model with a traditional patch-based model to assess the usability of both CNN architectures.

Introduction

With the rapid development in GPU computing and success obtained in computer vision domain, deep learning techniques, represented by convolutional neural networks (CNNs), start to entice seismic interpreters in the application of supervised seismic facies classification. A comprehensive review of deep learning techniques is provided in LeCun et al. (2015). Although still in its infancy, CNN-based seismic classification is successfully applied on both prestack (Araya-Polo et al., 2017) and poststack (Waldeland and Solberg, 2017; Huang et al., 2017; Lewis and Vigh, 2017) data for fault and salt interpretation, identifying different wave characteristics (Serfaty et al., 2017), as well as estimating velocity models (Araya-Polo et al., 2018).

The main advantages of CNN over other supervised classification methods are its spatial awareness and automatic feature extraction. For image classification problems, rather than using the intensity values at each pixel individually, CNN analyzes the patterns among pixels in an image and automatically generates features (in seismic data, attributes) suitable for classification. Because seismic data are 3D tomographic images, we would expect CNN to be naturally adaptable to seismic data classification. However, there are some distinct characteristics of seismic classification that make it more challenging than other image classification problems. Firstly, classical image classification aims at distinguishing different images, while seismic classification aims at distinguishing different geological objects within the same image. Therefore, from an image processing point of view, seismic classification is indeed a segmentation problem (partitioning an image into blocky pixel shapes with a coarser set of colors) rather than a classification problem. Secondly, training data for seismic classification are much sparser compared to classical image classification problems, for which massive data are publicly available. Thirdly, in seismic data, all features are represented by different patterns of reflectors, and the boundaries between different features are rarely explicitly defined. In contrast, features in an image from computer artwork or photography are usually well defined. Finally, because of the uncertainty in seismic data and the nature of manual interpretation, the training data in seismic classification are always contaminated by noise.

To address the first challenge, until today, most, if not all, published studies on CNN-based seismic facies classification perform classification on small patches of data to infer the class label of the seismic sample at the patch center. In this fashion, seismic facies classification is done by traversing through patches centered at every sample in a seismic volume. An alternative approach, although less discussed, is to use CNN models designed for image segmentation tasks (Long et al., 2015; Badrinarayanan et al., 2017; Chen et al., 2018) to obtain sample-level labels in a 2D profile (e.g. an inline) simultaneously, then traversing through all 2D profiles in a volume.

In this study, I use an encoder-decoder CNN model as an implementation of the aforementioned second approach. I apply both the encoder-decoder model and patch-based model to seismic facies classification using data from the North Sea, with the objective of demonstrating the strengths and weaknesses of the two CNN models. I conclude that the encoder-decoder model provides much better classification quality, whereas the patch-based model is more flexible on training data, possibly making it easier to use in production.

The Two Convolutional Neural Networks (CNN) Models

Patch-based model

A basic patch-based model consists of several convolutional layers, pooling (downsampling) layers, and fully-connected layers. For an input image (for seismic data, amplitudes in a small 3D window), a CNN model first automatically extracts several high-level abstractions of the image (similar to seismic attributes) using the convolutional and pooling layers, then classifies the extracted attributes using the fully-connected layers, which are similar to traditional multilayer perceptron networks. The output from the network is a single value representing the facies label of the seismic sample at the center of the input patch. An example of a patch-based model architecture is provided in Figure 1a. In this example, the network is employed to classify salt versus non-salt from seismic amplitude in the SEAM synthetic data (Fehler and Larner, 2008). One input instance is a small patch of data bounded by the red box, and the corresponding output is a class label for this whole patch, which is then assigned to the sample at the patch center. The sample marked as the red dot is classified as non-salt.

Figure 1. Sketches for CNN architecture of a) 2D patch-based model and b) encoder-decoder model. In the 2D patch-based model, each input data instance is a small 2D patch of seismic amplitude centered at the sample to be classified. The corresponding output is then a class label for the whole 2D patch (in this case, non-salt), which is usually assigned to the sample at the center. In the encoder-decoder model, each input data instance is a whole inline (or crossline/time slice) of seismic amplitude. The corresponding output is a whole line of class labels, so that each sample is assigned a label (in this case, some samples are salt and others are non-salt). Different types of layers are denoted in different colors, with layer types marked at their first appearance in the network. The size of the cuboids approximately represents the output size of each layer.
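To make the patch-based idea concrete, the sketch below is an illustrative 2D patch-based classifier written with tf.keras (the paper states its models are implemented in TensorFlow, but the layer counts and sizes here are assumptions, not the architecture in Figure 1a).

```python
# Illustrative tf.keras sketch of a 2D patch-based CNN: the input is a small
# amplitude patch, the output is one class label for the patch-center sample.
# Layer sizes are assumptions, not the architecture used in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_patch_model(patch_size=65, n_classes=2):
    return models.Sequential([
        layers.Input(shape=(patch_size, patch_size, 1)),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),  # label for the patch center
    ])

model = build_patch_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(patches, center_labels, validation_split=0.1, epochs=10)
```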

Encoder-decoder model

Encoder-decoder is a popular network structure for tackling image segmentation tasks. Encoder-decoder models share a similar idea, which is first extracting high-level abstractions of the input images using convolutional layers, then recovering sample-level class labels by “deconvolution” operations. Chen et al. (2018) introduce a current state-of-the-art encoder-decoder model while concisely reviewing some popular predecessors. An example of an encoder-decoder model architecture is provided in Figure 1b. Similar to the patch-based example, this encoder-decoder network is employed to classify salt versus non-salt from seismic amplitude in the SEAM synthetic data. Unlike the patch-based network, in the encoder-decoder network one input instance is a whole line of seismic amplitude, and the corresponding output is a whole line of class labels, which has the same dimension as the input data. In this case, all samples in the middle of the line are classified as salt (marked in red), and other samples are classified as non-salt (marked in white), with minimum error.
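For contrast with the patch-based sketch above, the following is a generic downsample/upsample encoder-decoder in tf.keras that outputs one label per sample of a 2D line. It is a simplified stand-in, not the specific network of Figure 1b (which builds on the models cited above); the input size and layer widths are assumptions.

```python
# Illustrative tf.keras sketch of an encoder-decoder model: the input is a
# whole 2D line of amplitudes, the output is a class label for every sample.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_encoder_decoder(height=256, width=512, n_classes=2):
    inp = layers.Input(shape=(height, width, 1))
    # Encoder: extract high-level abstractions at reduced resolution
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    # Decoder: recover sample-level labels by learned upsampling
    x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same")(x)
    out = layers.Conv2D(n_classes, 1, activation="softmax", padding="same")(x)
    return models.Model(inp, out)

model = build_encoder_decoder()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(lines, per_sample_labels)   # labels share the input's spatial shape
```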

Application of the Two CNN Models

For demonstration purposes, I use the F3 seismic survey acquired in the North Sea, offshore Netherlands, which is freely accessible to the geoscience research community. In this study, I am interested in automatically extracting seismic facies that have specific seismic amplitude patterns. To remove potential disagreement on the geological meaning of the facies to extract, I name the facies purely based on their reflection characteristics. Table 1 provides a list of the extracted facies. There are eight seismic facies with distinct amplitude patterns; another facies (“everything else”) is used for samples not belonging to the eight target facies.

Facies number Facies name
1 Varies amplitude steeply dipping
2 Random
3 Low coherence
4 Low amplitude deformed
5 Low amplitude dipping
6 High amplitude deformed
7 Moderate amplitude continuous
8 Chaotic
0 Everything else

To generate training data for the seismic facies listed above, different picking scenarios are employed to compensate for the different input data format required in the two CNN models (small 3D patches versus whole 2D lines). For the patch-based model, 3D patches of seismic amplitude data are extracted around seed points within some user-defined polygons. There are approximately 400,000 3D patches of size 65×65×65 generated for the patch-based model, which is a reasonable amount for seismic data of this size. Figure 2a shows an example line on which seed point locations are defined in the co-rendered polygons.
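A sketch of the patch-generation step is shown below: given an amplitude volume as a numpy array and a list of seed-point indices (both hypothetical stand-ins for the picks inside the polygons), it cuts 65x65x65 windows centered on each seed.

```python
# Sketch of extracting 3D amplitude patches around interpreter seed points.
# `volume` is assumed to be (inlines, xlines, samples); seeds are index triplets.
import numpy as np

def extract_patches(volume, seeds, half=32):
    """Return (n, 65, 65, 65) patches centered on seeds that fit inside the volume."""
    patches = []
    for i, j, k in seeds:
        if (half <= i < volume.shape[0] - half and
                half <= j < volume.shape[1] - half and
                half <= k < volume.shape[2] - half):
            patches.append(volume[i - half:i + half + 1,
                                  j - half:j + half + 1,
                                  k - half:k + half + 1])
    return np.asarray(patches)

volume = np.random.default_rng(0).normal(size=(200, 200, 300)).astype("float32")
seeds = [(100, 80, 150), (120, 90, 160)]       # hypothetical picks inside polygons
patches = extract_patches(volume, seeds)        # shape (2, 65, 65, 65)
```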

The encoder-decoder model requires much more effort for generating labeled data. I manually interpret the target facies on 40 inlines across the seismic survey and use these for building the network. Although the total number of seismic samples in 40 lines is enormous, the encoder-decoder model only considers them as 40 input instances, which is in fact a very small training set for a CNN. Figure 2b shows an interpreted line which is used in training the network.

In both tests, I randomly use 90% of the generated training data to train the network and use the remaining 10% for testing. On an Nvidia Quadro M5000 GPU with 8GB memory, the patch-based model takes about 30 minutes to converge, whereas the encoder-decoder model needs about 500 minutes. Besides the faster training, the patch-based model also has a higher test accuracy at almost 100% (99.9988%, to be exact) versus 94.1% from the encoder-decoder model. However, this accuracy measurement can be misleading. For a patch-based model, when picking the training and testing data, interpreters usually pick the most representative samples of each facies for which they have the most confidence, resulting in high-quality training (and testing) data that are less noisy, while most of the ambiguous samples which are challenging for the classifier are excluded from testing. In contrast, to use an encoder-decoder model, interpreters have to interpret all the target facies in a training line. For example, if the target is faults, one needs to pick all faults in a training line, otherwise unlabeled faults will be considered as “non-fault” and confuse the classifier. Therefore, interpreters have to make some not-so-confident interpretations when generating training and testing data. Figures 2c and 2d show seismic facies predicted from the two CNN models on the same line shown in Figures 2a and 2b. We observe better defined facies from the encoder-decoder model compared to the patch-based model.

Figure 3 shows prediction results from the two networks on a line away from the training lines, and Figure 4 shows prediction results from the two networks on a crossline. Similar to the prediction results on the training line, compared to the patch-based model, the encoder-decoder model provides facies as cleaner geobodies that require much less post-editing for regional stratigraphic classification (Figure 5). This can be attributed to the encoder-decoder model being able to capture the large-scale spatial arrangement of facies, whereas the patch-based model only senses patterns in small 3D windows. To form such windows, the patch-based model also needs to pad or simply skip samples close to the edge of a 3D seismic volume. Moreover, although the training is much faster in a patch-based model, the prediction stage is very computationally intensive, because it processes N×N×N times the data size of the original seismic volume (N is the patch size along each dimension). In this study, the patch-based method takes about 400 seconds to predict a line, compared to less than 1 second required by the encoder-decoder model.

Conclusion

In this study, I compared two types of CNN models in the application of seismic facies classification. The more commonly used patch-based model requires much less effort in generating labeled data, but the classification result is suboptimal compared to the encoder-decoder model, and the prediction stage can be very time consuming. The encoder-decoder model generates a superior classification result at near real-time speed, at the expense of more tedious labeled-data picking and longer training time.

Acknowledgements

The author thanks Geophysical Insights for permission to publish this work, dGB Earth Sciences for providing the F3 North Sea seismic data to the public, and ConocoPhillips for sharing the MalenoV project for public use, which was referenced when generating the training data. The CNN models discussed in this study are implemented in TensorFlow, an open source library from Google.

Figure 2. Example of seismic amplitude co-rendered with training data picked on inline 340 used for a) patch-based model and b) encoder-decoder model. The prediction result from c) patch-based model, and d) from the encoder-decoder model. Target facies are colored in colder to warmer colors in the order shown in Table 1. Compare Facies 5, 6 and 8.

Figure 3. Prediction results from the two networks on a line away from the training lines. a) Predicted facies from the patch-based model. b) Predicted facies from the encoder-decoder based model. Target facies are colored in colder to warmer colors in the order shown in Table 1. The yellow dotted line marks the location of the crossline shown in Figure 4. Compare Facies 1, 5 and 8.

Figure 4. Prediction results from the two networks on a crossline. a) Predicted facies from the patch-based model. b) Predicted facies from the encoder-decoder model. Target facies are colored in colder to warmer colors in the order shown in Table 1. The yellow dotted lines mark the location of the inlines shown in Figure 2 and 3. Compare Facies 5 and 8.

Figure 5. Volumetric display of the predicted facies from the encoder-decoder model. The facies volume is visually cropped for display purpose. An inline and a crossline of seismic amplitude co-rendered with predicted facies are also displayed to show a broader distribution of the facies. Target facies are colored in colder to warmer colors in the order shown in Table 1.

References

Araya-Polo, M., T. Dahlke, C. Frogner, C. Zhang, T. Poggio, and D. Hohl, 2017, Automated fault detection without seismic processing: The Leading Edge, 36, 208–214.

Araya-Polo, M., J. Jennings, A. Adler, and T. Dahlke, 2018, Deep-learning tomography: The Leading Edge, 37, 58–66.

Badrinarayanan, V., A. Kendall, and R. Cipolla, 2017, SegNet: A deep convolutional encoder-decoder architecture for image segmentation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 39, 2481–2495.

Chen, L. C., G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, 2018, DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs: IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 834–848.

Chen, L. C., Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, 2018, Encoder-decoder with atrous separable convolution for semantic image segmentation: arXiv preprint, arXiv:1802.02611v2.

Fehler, M., and K. Larner, 2008, SEG advanced modeling (SEAM): Phase I first year update: The Leading Edge, 27, 1006–1007.

Huang, L., X. Dong, and T. E. Clee, 2017, A scalable deep learning platform for identifying geologic features from seismic attributes: The Leading Edge, 36, 249–256.

LeCun, Y., Y. Bengio, and G. Hinton, 2015, Deep learning: Nature, 521, 436–444.

Lewis, W., and D. Vigh, 2017, Deep learning prior models from seismic images for full-waveform inversion: 87th Annual International Meeting, SEG, Expanded Abstracts, 1512–1517.

Long, J., E. Shelhamer, and T. Darrell, 2015, Fully convolutional networks for semantic segmentation: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440.

Serfaty, Y., L. Itan, D. Chase, and Z. Koren, 2017, Wavefield separation via principle component analysis and deep learning in the local angle domain: 87th Annual International Meeting, SEG, Expanded Abstracts, 991–995.

Waldeland, A. U., and A. H. S. S. Solberg, 2017, Salt classification using deep learning: 79th Annual International Conference and Exhibition, EAGE, Extended Abstracts, Tu-B4-12.

A Fault Detection Workflow Using Deep Learning and Image Processing


By Tao Zhao
Published with permission: SEG International Exposition and 88th Annual Meeting
October 2018

Summary

Within the last couple of years, deep learning techniques, represented by convolutional neural networks (CNNs), have been applied to fault detection problems on seismic data with impressive outcomes. As is true for all supervised learning techniques, the performance of a CNN fault detector highly depends on the training data, and post-classification regularization may greatly improve the result. Sometimes, a pure CNN-based fault detector that works perfectly on synthetic data may not perform well on field data. In this study, we investigate a fault detection workflow using both CNN and directional smoothing/sharpening. Applying both to a realistic synthetic fault model based on the SEAM (SEG Advanced Modeling) model and also to field data from the Great South Basin, offshore New Zealand, we demonstrate that the proposed fault detection workflow can perform well on challenging synthetic and field data.

Introduction

Benefiting from their high flexibility in network architecture, convolutional neural networks (CNNs) are a supervised learning technique that can be designed to solve many challenging problems in exploration geophysics. Among these problems, detection of particular seismic facies of interest might be the most straightforward application of CNNs. The first published study applying CNN to seismic data might be Waldeland and Solberg (2017), in which the authors used a CNN model to classify salt versus non-salt features in a seismic volume. At about the same time as Waldeland and Solberg (2017), Araya-Polo et al. (2017) and Huang et al. (2017) reported success in fault detection using CNN models.

From a computer vision perspective, faults in seismic data are a special group of edges. CNN has been applied to more general edge detection problems with great success (El-Sayed et al., 2013; Xie and Tu, 2015). However, faults in seismic data are fundamentally different from edges in images used in the computer vision domain. The regions separated by edges in a traditional computer vision image are relatively homogeneous, whereas in seismic data such regions are defined by patterns of reflectors. Moreover, not all edges in seismic data are faults. In practice, although providing excellent fault images, traditional edge detection attributes such as coherence (Marfurt et al., 1999) are also sensitive to stratigraphic edges such as unconformities, channel banks, and karst collapses. Wu and Hale (2016) proposed a brilliant workflow for automatically extracting fault surfaces, in which a crucial step is computing the fault likelihood. CNN-based fault detection methods can be used as an alternative approach to generate such fault likelihood volumes, and the fault strike and dip can then be computed from the fault likelihood.

One drawback of supervised machine learning-based fault detection is its brute-force nature, meaning that instead of detecting faults following geological/geophysical principles, the detection purely depends on the training data. In reality, we will never have training data that covers all possible appearances of faults in seismic data, nor are our data noise-free. Therefore, although the raw output from the CNN classifier may adequately represent faults in synthetic data of simple structure and low noise, some post-processing steps are needed for the result to be useful on field data. Based on the traditional coherence attribute, Qi et al. (2017) introduced an image processing-based workflow to skeletonize faults. In this study, we regularize the raw output from a CNN fault detector with an image processing workflow built on Qi et al. (2017) to improve the fault images.

We use both realistic synthetic data and field data to investigate the effectiveness of the proposed workflow. The synthetic data should ideally be a good approximation of field data and provide full control over the parameter set. We build our synthetic data based on the SEAM model (Fehler and Larner, 2008) by taking sub-volumes from the impedance model and inserting faults. After verifying the performance on the synthetic data, we then move on to field data acquired from the Great South Basin, offshore New Zealand, where an extensive amount of faulting occurs. Results on both synthetic and field data show the great potential of the proposed fault detection workflow, which provides very clean fault images.

Proposed Workflow

The proposed workflow starts with a CNN classifier which is used to produce a raw image of faults. In this study, we adopt a 3D patch-based CNN model that classifies each seismic sample using samples within a 3D window. An example of the CNN architecture used in this study is provided in Figure 1. A basic patch-based CNN model consists of several convolutional layers, pooling (downsampling) layers, and fully-connected layers. Given a 3D patch of seismic amplitudes, a CNN model first automatically extracts several high-level abstractions of the image (similar to seismic attributes) using the convolutional and pooling layers, then classifies the extracted attributes using the fully-connected layers, which behave similarly to a traditional multilayer perceptron network. The output from the network is then a single value representing the facies label of the seismic sample centered at the 3D patch. In this study, the label is binary, representing “fault” or “non-fault”.

Figure 1. Sketches of a 2D patch-based CNN architecture. In this demo case, each input data instance is a small 2D patch of seismic amplitude centered at the sample to be classified. The corresponding output is a class label representing the patch (in this case, fault), which is usually assigned to the center sample. Different types of layers are denoted in different colors, with layer types marked at their first appearance in the network. The size of the cuboids approximately represents the output size of each layer.

We then use a suite of image processing techniques to improve the quality of the fault images. First, we use a directional Laplacian of Gaussian (LoG) filter (Machado et al., 2016) to enhance lineaments that are at a high angle to the layering reflectors and suppress anomalies close to reflector dip, while calculating the dip, azimuth, and dip magnitude of the faults. Taking these data, we then use a skeletonization step, redistributing the fault anomalies within a fault damage zone to the most likely fault plane. We then apply a threshold to generate a binary fault image. Optionally, if the result is still noisy, we can continue with a median filter to reduce the random noise and iteratively perform the directional LoG and skeletonization to achieve a desirable result. Figure 2 summarizes the regularization workflow.
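To give a feel for the regularization chain, the sketch below applies analogous open-source operations to a 2D slice of raw fault probability. A standard isotropic Laplacian of Gaussian stands in for the directional LoG of Machado et al. (2016), which is not available off the shelf, so this is only a rough approximation of the published workflow.

```python
# Sketch of the regularization steps on a 2D fault-probability slice, using an
# isotropic LoG in place of the directional LoG described in the text.
import numpy as np
from scipy.ndimage import gaussian_laplace, median_filter
from skimage.morphology import skeletonize

def regularize_fault_slice(prob, threshold=0.5):
    """prob: 2D array of raw CNN fault probabilities in [0, 1]."""
    smoothed = median_filter(prob, size=3)             # optional noise suppression
    enhanced = -gaussian_laplace(smoothed, sigma=2)     # LoG enhances thin lineaments
    enhanced = np.clip(enhanced, 0, None)
    enhanced /= enhanced.max() + 1e-12                  # rescale to [0, 1]
    binary = enhanced > threshold                        # thresholding to a fault mask
    return skeletonize(binary)                           # thin zones to one-sample sticks

prob = np.random.default_rng(0).random((300, 400))       # stand-in for raw CNN output
faults = regularize_fault_slice(prob)
```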

Synthetic Test

We first test the proposed workflow on synthetic data built on the SEAM model. To make the model a good approximation of real field data, we select a portion in the SEAM model where stacked channels and turbidites exist. We then randomly insert faults in the impedance model and convolve with a 40Hz Ricker wavelet to generate seismic volumes. The parameters used in random generation of five reverse faults in the 3D volume are provided in Table 1. Figure 3a shows one line from the generated synthetic data with faults highlighted in red. In this model, we observe strong layer deformation with amplitude change along reflectors due to the turbidites in the model. Therefore, such synthetic data are in fact quite challenging for a fault detection algorithm, because of the existence of other types of discontinuities.
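The forward-modeling step described above (reflectivity from impedance, convolved with a 40 Hz Ricker wavelet) can be sketched as follows; fault insertion, i.e., shifting part of the impedance model by a displacement, is omitted here, and the impedance trace is a synthetic stand-in rather than SEAM data.

```python
# Sketch of turning an impedance trace into synthetic seismic: normal-incidence
# reflectivity from impedance contrasts, convolved with a 40 Hz Ricker wavelet.
import numpy as np

def ricker(f0, dt, length=0.128):
    t = np.arange(-length / 2, length / 2, dt)
    return (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)

def impedance_to_seismic(imp, dt=0.004, f0=40.0):
    refl = np.diff(imp) / (imp[1:] + imp[:-1])        # (Z2 - Z1) / (Z2 + Z1)
    return np.convolve(refl, ricker(f0, dt), mode="same")

imp = np.linspace(4e6, 8e6, 600) + np.random.default_rng(0).normal(0, 2e5, 600)
trace = impedance_to_seismic(imp)                     # one synthetic trace
```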

We randomly use 20% of the samples on the fault planes and approximately the same number of non-fault samples to train the CNN model. The total number of training samples is about 350,000, which represents less than 1% of the total samples in the seismic volume. Figure 3b shows the raw output from the CNN fault detector on the same line shown in Figure 3a. We observe that instead of sticks, faults appear as small zones. Also, as expected, there are some misclassifications where the data are quite challenging. We then perform the regularization steps, excluding the optional steps in Figure 2. Figure 3c shows the result after the directional LoG filter and skeletonization. Notice that these two steps have cleaned up much of the noise, and the faults are now thinner and more continuous. Finally, we perform a thresholding to generate a fault map where faults are labeled as “1” and everywhere else as “0” (Figure 3d). Figure 4 shows the fault detection result on a less challenging line. We observe that the result on such a line is nearly perfect.

Figure 2. The regularization workflow used to improve the fault images after CNN fault detection.

Fault attribute	Value range
Dip angle (degrees)	-15 to 15
Strike angle (degrees)	-25 to 25
Displacement (m)	25 to 75

Table 1. Parameter ranges used in generating faults in the synthetic model.

Field Data Test

We further verify the proposed workflow on field data from the Great South Basin, offshore New Zealand. The seismic data contain extensive faulting with complex geometry, as well as other types of coherence anomalies, as shown in Figure 5. In this case, we manually picked training data on five seismic lines for regions representing fault and non-fault. An example line is given in Figure 6. As one may notice, although the training data provide very limited coverage of the whole volume, we try to include the most representative samples for the two classes. On the field data, we use the whole regularization workflow, including the optional steps. Figure 7 gives the final output from the proposed workflow, and the result from using coherence in lieu of the raw CNN output in the workflow. We observe that the result from CNN plus regularization gives clean fault planes with very limited noise from other types of discontinuities.

Conclusion

In this study, we introduce a fault detection workflow using both CNN-based classification and image processing regularization. We are able to train a CNN classifier to be sensitive only to faults, which greatly reduces the mixing between faults and other discontinuities in the produced fault images. To improve the resolution and further suppress non-fault features in the raw fault images, we then use an image processing-based regularization workflow to enhance the fault planes. The proposed workflow shows great potential on both challenging synthetic data and field data.

Acknowledgements

The authors thank Geophysical Insights for the permission to publish this work. We thank New Zealand Petroleum and Minerals for providing the Great South Basin seismic data to the public. The CNN fault detector used in this study is implemented in TensorFlow, an open source library from Google. The authors also thank Gary Jones at Geophysical Insights for valuable discussions on the SEAM model.

Figure 3. Line A from the synthetic data showing seismic amplitude with a) artificially created faults highlighted in red; b) raw output from CNN fault detector; c) CNN detected faults after directional LoG and skeletonization; and d) final fault after thresholding.

Figure 4. Line B from the synthetic data showing seismic amplitude co-rendered with a) randomly created faults highlighted in red and b) final result from the fault detection workflow, in which predicted faults are marked in red.

Figure 5. Coherence attribute along t = 1.492s. Coherence shows discontinuities not limited to faults, posting challenges to obtain only fault images.

Figure 6. A vertical slice from the field seismic amplitude data with manually picked regions for training the CNN fault detector. Green regions represent fault and red regions represent non-fault.

References

Araya-Polo, M., T. Dahlke, C. Frogner, C. Zhang, T. Poggio, and D. Hohl, 2017, Automated fault detection without seismic processing: The Leading Edge, 36, 208–214.

El-Sayed, M. A., Y. A. Estaitia, and M. A. Khafagy, 2013, Automated edge detection using convolutional neural network: International Journal of Advanced Computer Science and Applications, 4, 11–17.

Fehler, M., and K. Larner, 2008, SEG Advanced Modeling (SEAM): Phase I first year update: The Leading Edge, 27, 1006–1007.

Huang, L., X. Dong, and T. E. Clee, 2017, A scalable deep learning platform for identifying geologic features from seismic attributes: The Leading Edge, 36, 249–256.

Machado, G., A. Alali, B. Hutchinson, O. Olorunsola, and K. J. Marfurt, 2016, Display and enhancement of volumetric fault images: Interpretation, 4, 1, SB51–SB61.

Marfurt, K. J., V. Sudhaker, A. Gersztenkorn, K. D. Crawford, and S. E. Nissen, 1999, Coherency calculations in the presence of structural dip: Geophysics, 64, 104–111.

Qi, J., G. Machado, and K. Marfurt, 2017, A workflow to skeletonize faults and stratigraphic features: Geophysics, 82, no. 4, O57–O70.

Waldeland, A. U., and A. H. S. S. Solberg, 2017, Salt classification using deep learning: 79th Annual International Conference and Exhibition, EAGE, Extended Abstracts, Tu-B4-12.

Wu, X., and D. Hale, 2016, 3D seismic image processing for faults: Geophysics, 81, no. 2, IM1–IM11.

Xie, S., and Z. Tu, 2015, Holistically-nested edge detection: Proceedings of the IEEE International Conference on Computer Vision, 1395–1403.

Solving Exploration Problems with Machine Learning


By Deborah Sacrey and Rocky Roden
Published with permission: First Break
Volume 36, June 2018

Introduction

Over the past eight years the evolution of machine learning in the form of unsupervised neural networks has been applied to improve and gain more insights from the seismic interpretation process (Smith and Taner, 2010; Roden et al., 2015; Santogrossi, 2016; Roden and Chen, 2017; Roden et al., 2017). Today’s interpretation environment involves an enormous amount of seismic data, including regional 3D surveys with numerous processing versions and dozens if not hundreds of seismic attributes. This ‘Big Data’ issue poses problems for geoscientists attempting to make accurate and efficient interpretations. Multi-attribute machine learning approaches such as self-organizing maps (SOMs), an unsupervised learning approach, not only incorporate numerous seismic attributes, but often reveal details in the data not previously identified. The reason for this improved interpretation process is that the SOM analyzes the data at every sample (sample interval × bin) across the multiple seismic attributes simultaneously, searching for natural patterns or clusters. The scale of the patterns identified by this machine learning process is on a sample basis, unlike conventional amplitude data where resolution is limited by the associated wavelet (Roden et al., 2017).

Figure 1 illustrates how all the sample points from the multiple attributes are placed in attribute space, where they are standardized or normalized to the same scale. In this case, ten attributes are employed. Neurons, which are points that identify the patterns or clusters, are initially randomly located in attribute space, where the SOM process proceeds to identify patterns in this multi-attribute space. When completed, the results are nonlinearly mapped to a 2D colormap where hexagons representing each neuron identify the associated natural patterns in the data in 3D space. This 3D visualization is how the geoscientist interprets geological features of interest.

Figure 1. Illustration of the SOM process where samples from ten seismic attributes are placed in attribute space, normalized, then 64 neurons identify 64 patterns from the data in the SOM process. The interpreter selects one or a combination of neurons from the 2D colourmap to identify geologic features of interest.
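The first step shown in Figure 1, placing every sample into attribute space on a common scale, amounts to stacking the attribute volumes into a samples-by-attributes matrix and standardizing each column. The sketch below illustrates this with random stand-in volumes; the array sizes are arbitrary assumptions.

```python
# Sketch of placing ten attribute volumes into attribute space: every sample
# becomes a row of an (n_samples, n_attributes) matrix, then each attribute is
# standardized (z-scored) so no single attribute dominates the SOM distances.
import numpy as np

rng = np.random.default_rng(0)
n_attributes = 10
volumes = [rng.normal(size=(50, 60, 200)) for _ in range(n_attributes)]  # stand-ins

X = np.stack([v.ravel() for v in volumes], axis=1)      # (n_samples, 10)
X_std = (X - X.mean(axis=0)) / X.std(axis=0)              # normalize to the same scale
# X_std is the SOM input; each row keeps its (inline, xline, sample) index
```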

The three case studies in this paper are real-world examples of using this machine learning approach to make better interpretations.

Case History 1 – Defining Reservoir in Deep, Pressured Sands with Poor Data Quality

The Tuscaloosa reservoir in Southern Louisiana, USA, is a low-resistivity sand at a depth of approximately 5180 to 6100 m (17,000 to 20,000 ft). It is primarily a gas reservoir, but it does have a liquids component as well; the average liquid ratio is 50 barrels to 1 MMcfg. The problem is being able to identify the reservoir around a well which has been producing since the early 1980s, but has never been offset because of poor well control and poor seismic data quality at that depth. The well in question has produced more than 50 BCF of gas and approximately 1.2 MMBO, and is still producing at a very steady rate. The operator wanted to know if the classification process could wade through the poor data quality and see the reservoir from which the well had been producing, to calculate the depleted area and look for additional drilling locations.

The initial quality of the seismic data, which was shot long after the well started producing, is shown in Figure 2. Instantaneous and hydrocarbon-indicating attributes were created to be used in a multi-attribute classification analysis to help define the laminated-sand reservoir. The 3D seismic data areal coverage was approximately 72.5 km², and there were 12 wells in the area for calibration, not including the key well in question. The attributes were calculated over the entire survey, from 1.0 to 4.0 seconds, which covers the zone of interest as well as some possible shallow pay zones. Many of the wells had been drilled after the 3D data was shot, and ranged from dry holes to wells which have produced more than 35 BCFE since the early 2000s.

Figure 2. Seismic amplitude line through Tuscaloosa Sands at 5790 m. This key well has been producing for more than 35 years from 6 m of perforations.

A synthetic was created to tie the well in question to the seismic data. It was noted that the data was out-of-phase after tying to the Austin Chalk. The workflow was to rotate the data to zero-phase with U.S. polarity convention, up-sample the data from 4 ms to 2 ms, which allows for better statistical analysis of the attributes for classification purposes, and create the attributes from the parent PSTM volume. While reviewing the attributes, the appearance of a flat event seemed to be evident in the Attenuation data.
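The conditioning steps described above (a phase rotation and up-sampling before attribute generation) can be sketched as below. The 180-degree rotation value is inferred from the attribute name "PSTM-Enh_180 rot" mentioned later; the rotation formula and sign convention are assumptions, and for 180 degrees the operation reduces to a polarity flip.

```python
# Sketch of the data-conditioning steps: rotate the phase of a trace and
# up-sample it to a finer sample rate before computing attributes.
import numpy as np
from scipy.signal import hilbert, resample

def rotate_phase(trace, degrees):
    theta = np.deg2rad(degrees)
    analytic = hilbert(trace)                     # analytic signal of the trace
    return np.real(analytic) * np.cos(theta) - np.imag(analytic) * np.sin(theta)

trace_4ms = np.random.default_rng(0).normal(size=750)   # stand-in trace, 3 s at 4 ms
trace_rot = rotate_phase(trace_4ms, 180.0)               # equivalent to -trace_4ms
trace_2ms = resample(trace_rot, 2 * len(trace_rot))      # 4 ms -> 2 ms sample rate
```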

Figure 3. The appearance of a ‘flat spot’ in the Attenuation attribute.

Figure 3 shows what looks like a ‘flat spot’ against regional dip. The appearance of this flat event suggests that a combination of appropriate seismic attributes, chosen to delineate sands from shales, hydrocarbon indicators and general stratigraphy, may be able to define this reservoir in the classification process. The eight attributes used were: Attenuation, Envelope Bands on Envelope Breaks, Envelope Bands on Phase Breaks, Envelope 2nd Derivative, Envelope Slope, Instantaneous Frequency, PSTM-Enh_180 rot, and Trace Envelope. A 10x10 matrix topology (100 neurons) was used in the SOM analysis to look for patterns in the data that would break out the reservoir in a 200 ms thick portion of the volume, the zone in which the Tuscaloosa sands occur in this area.

Figure 4. Time slice through the perforated interval from the SOM results showing a) areal extent of the reservoir and the appearance of a braided-channel system in the thinly laminated sands and b) only the key neural classification components shown from which the reservoir is better defined and extent can be easily measured. The yellow circle denotes the key well reservoir.

Figure 4a shows a time slice from the SOM results through the perforations, with all the neural patterns turned on in 3D space, together with the well bores in the area. Because this time slice cuts regional dip in a thinly laminated reservoir, evidence of a braided stream system can readily be seen. Figure 4b shows the same time slice with only the key neural patterns turned on in the 2D colourmap of neurons, isolating the sand reservoir from the background information.

The reservoir for the key well calculates out to 526 hectares, which explains the well's long life and strong production. The reservoir also appears to extend beyond the edge of the 3D survey, which could add significantly to its extent.
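The depleted-area number quoted above comes from counting classified samples on the slice; the fragment below shows one way such an area could be computed from a time slice of the classification volume. The bin dimensions and neuron indices are placeholders, not the survey's actual geometry.

```python
# Sketch: estimate reservoir area from a classified time slice by counting
# bins assigned to the interpreter-selected neurons. Bin size and the neuron
# list are hypothetical placeholders.
import numpy as np

def reservoir_area_hectares(slice_classes, reservoir_neurons, bin_x_m, bin_y_m):
    """Area of all bins on the slice whose winning neuron is in the reservoir set."""
    mask = np.isin(slice_classes, list(reservoir_neurons))
    return mask.sum() * bin_x_m * bin_y_m / 10_000.0   # m^2 -> hectares

# slice_classes: 2D array of winning-neuron indices (0-99 for a 10x10 topology).
slice_classes = np.random.randint(0, 100, size=(400, 400))
area_ha = reservoir_area_hectares(slice_classes, reservoir_neurons={42, 43, 53},
                                  bin_x_m=25.0, bin_y_m=25.0)
print(f"Estimated reservoir area: {area_ha:.0f} ha")
```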

Figure 5. Seismic amplitude dip line showing the synthetic tie and the location of the perforations, which are associated with a very weak peak event.

Case History 2 – Finding Hydrocarbons in Thin-Bed Environments Well Below Seismic Tuning

In this case, the goal was to find an extension of a reservoir tied to a well that had produced more than 450 MBO from a thin Frio (Tertiary age) sand at 3289 m. The sand thickness was just over 2 m, well below seismic tuning (20 m) at that depth. Careful attention to synthetic creation showed that the well tied to a weak peak event; Figure 5 shows this weak amplitude and the tie to the key well. Again, the data was up-sampled from a 4 ms to a 2 ms sample rate for better statistics in the SOM classification process, and the attributes were then calculated from the up-sampled PSTM-enhanced volume.
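To put the 2 m sand in context, the quarter-wavelength rule of thumb below shows how a tuning thickness of roughly 20 m arises at this depth; the velocity and dominant frequency used are illustrative assumptions, not measured values.

```python
# Quarter-wavelength tuning-thickness check with illustrative values only.
velocity = 3200.0        # m/s, assumed interval velocity at reservoir depth
dominant_freq = 40.0     # Hz, assumed dominant frequency at ~3.3 km
tuning = velocity / (4.0 * dominant_freq)   # lambda / 4
print(f"Tuning thickness ~ {tuning:.0f} m vs. a sand thickness of ~2 m")
# -> roughly 20 m, i.e., the 2 m sand is about a tenth of tuning thickness
```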

The reservoir was situated within a fault block, bounded to the southeast by a 125 m throw fault and along the northwest side by a 50 m throw fault. Three wells in the southwest portion of the fault block had poor production and were deemed mechanical failures. The key producer was about 4.5 km northeast of these wells, along strike within the fault block. Figure 6 shows an amplitude strike line that connects the marginal wells to the key well. The green horizon was mapped on the trough event that was consistent over the area. The black horizon is 17 ms below the green horizon, lies within the perforated zone of the key producing well and is associated with a weak peak event.

A neural topology of 64 neurons (8x8) was used, along with the following eight attributes, to help determine the thickness and areal extent: Envelope, Imaginary Part, Instantaneous Frequency, PSTM-Enh, Relative Acoustic Impedance, Thin Bed Indicator, Sweetness and Hilbert. Close examination of the SOM results (Figure 7) indicated that the level associated with the weak peak event resembled offshore bar development. Fairly flat reflectors below the event and indications of ‘drape’ over the event led to the conclusion that this sand was not a blanket sand, as the client perceived, but had finite limits. Figure 7 shows the flattened time slice through the perforated zone after the neural analysis. A possible tidal channel cut can be seen, which could compartmentalize the existing production. It is also apparent that the three wells labelled as mechanical failures were actually drilled on very small areas of sand, indicating limited reservoir extent.

Figure 6. Amplitude strike line along fault block showing marginal wells on left and key producer on right. Mapped trough event is shown as well as a horizon 17 ms below the trough which is flattened and displayed in Figure 7.

Figure 7. Flattened time slice 17 ms below trough event after SOM analysis. A possible tidal channel cut through the bar can be interpreted. Note the three marginal wells to the southwest are on the fringes or separated from the main producing area.

Figure 8 is a SOM dip seismic section through the discovery well after completion. Three metres of sand were encountered in the well bore and 2 m were perforated for an initial rate of 250 BOPD and 1.1 MMcfgpd. The SOM results identified these thin sands with light-blue-to-green neurons, each neuron representing about 2-3 m of thickness. The process identified thin beds well below the conventional tuning thickness of 20 m. It is estimated that another 2 MMBOE remain in this reservoir.
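The 2-3 m per neuron quoted above is consistent with the 2 ms up-sampled interval; a quick check, using an assumed interval velocity for the Frio sand, is shown below.

```python
# Approximate vertical interval represented by one 2 ms (two-way) sample,
# using an assumed interval velocity for the Frio sand.
interval_velocity = 2800.0           # m/s, illustrative assumption
dt_two_way = 0.002                   # s, up-sampled 2 ms sample interval
thickness_per_sample = interval_velocity * dt_two_way / 2.0
print(f"~{thickness_per_sample:.1f} m of rock per 2 ms sample")   # ~2.8 m
```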

Figure 8. SOM dip line showing tie of thin sand to bar.

Case History 3 – Using the Classification Process to Help With Interpreting Difficult Depositional Environments

There are many places around the world where seismic data is hard to interpret because of multiple episodes of exposure and erosion, transgressive and regressive sequences, and multi-period faulting. This case is in a portion of the Permian Basin of West Texas and Southeastern New Mexico, an area that has been structurally deformed by episodes of expansion and contraction and has been repeatedly exposed and buried over millions of years, from the Mississippian down through the Silurian. There are multiple unconformable surfaces as well as turbidite and debris flows and both carbonate and clastic deposition, so the section is a challenge to interpret.

The 3D survey has both a PSTM stack from gathers and a high-resolution version of the PSTM. The workflow was to use the high-resolution volume with a low-topology SOM classification of attributes chosen to accentuate the stratigraphy. Figure 9a is an example of the PSTM produced from gathers and Figure 9b is the same line in the high-resolution version. The two horizons shown are the Lower Mississippian in yellow and the Upper Devonian in green; both were interpreted on the PSTM stack from gathers. In the high-resolution volume it is clear that the horizons no longer follow consistent events, and additional detail in the data is needed.

Figure 9. Seismic amplitude line in wiggle trace variable area format going through key producing well; a) PSTM from stack and b) high resolution PSTM. Upper horizon (yellow) is Lower Mississippian and lower horizon (green) is Upper Devonian picked from data in a).

The workflow here was to use multi-attribute classification with a low-topology (fewer neurons, and therefore fewer patterns to interpret) unsupervised Self-Organizing Map (SOM). The lower neuron count tends to combine the natural patterns in the data into a more ‘regional’ view, making the result easier to interpret. A 4x4 neural matrix was used, so the result contained only 16 patterns. The attributes, picked for their ability to sort out stratigraphic events, were Instantaneous Phase, Instantaneous Frequency and Normalized Amplitude.
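A hedged sketch of this low-topology run is shown below: a 4x4 MiniSom map (again used as an open-source stand-in) is trained on the stacked attribute samples, and a hit map then shows how the samples distribute over the 16 classes. The attribute volumes and training parameters are illustrative placeholders.

```python
# Sketch of the low-topology (4x4 = 16 pattern) run, with MiniSom as an
# open-source stand-in. The three attribute volumes are random placeholders.
import numpy as np
from minisom import MiniSom

shape = (40, 40, 150)
attrs = [np.random.randn(*shape) for _ in range(3)]   # stand-ins for Inst. Phase,
                                                      # Inst. Frequency, Norm. Amplitude
samples = np.stack([a.ravel() for a in attrs], axis=1)
samples = (samples - samples.mean(axis=0)) / samples.std(axis=0)

som = MiniSom(4, 4, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=7)
som.train_random(samples, num_iteration=5000)

# Hit map: how many samples each of the 16 neurons wins. Fewer, broader classes
# give the more 'regional' view described above.
print(som.activation_response(samples).astype(int))
```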

Figure 10 shows the result of the interpretation in the SOM classification volume. Both the newly interpreted horizons and the old horizons are shown to illustrate how much more accurately the classification process defines stratigraphic events. In this section, one can see karsting, debris flows and possibly some reefs.

Figure 10. Results of the classification using the high-resolution seismic data in a low-topology learning process. Both the new and old horizon interpretations are shown. Many interesting stratigraphic features are shown, including karsting, possible reefs and debris flows.

Conclusion

In this article, three scenarios were presented in which multi-attribute neural analysis of seismic data helped solve problems geoscientists face when interpreting their data. SOM classification was used to 1) define reservoirs in deep, pressured areas with poor data quality, 2) identify thin-bedded reservoirs while exploring or developing fields, and 3) interpret data in difficult stratigraphic environments. The classification process, or machine learning, is the next wave of technology designed to analyze seismic data in ways the human eye cannot.

Acknowledgments

The authors would like to thank Geophysical Insights for the research and development of the Paradise® AI workbench and the machine learning applications used in this paper.
