Net Reservoir Discrimination through Multi-Attribute Analysis at Single Sample Scale

By Jonathan Leal, Rafael Jerónimo, Fabian Rada, Reinaldo Viloria and Rocky Roden
Published with permission: First Break
Volume 37, September 2019

Abstract

A new approach has been applied to discriminate Net Reservoir using multi-attribute seismic analysis at single sample resolution, complemented by bivariate statistical analysis from petrophysical well logs. The combination of these techniques was used to calibrate the multi-attribute analysis to ground truth, thereby ensuring an accurate representation of the reservoir static properties and reducing the uncertainty related to reservoir distribution and storage capacity. Geographically, the study area is located in the south of Mexico. The reservoir rock consists of sandstones from the Upper Miocene age in a slope fan environment.

The first method in the process was the application of Principal Component Analysis (PCA), which was employed to identify the most prominent attributes for detecting lithological changes that might be associated with the Net Reservoir. The second method was the application of the Kohonen Self-Organizing Map (SOM) neural network classification at voxel scale (i.e., at the sample-rate and bin-size dimensions of the seismic data) rather than waveform-shape classification. The sample-level analysis revealed significant new information from the different seismic attributes, providing greater insight into the distribution of the reservoir in a shaly sandstone. The third method was a data analysis technique based on contingency tables and the Chi-Square test, which revealed relationships between two categorical variables (SOM volume neurons and Net Reservoir). Finally, a comparison between a SOM of simultaneous seismic inversion attributes and the SOM of traditional attributes was made, corroborating the delineated prospective areas. The authors found the SOM classification results beneficial to the refinement of the sedimentary model, more accurately identifying the lateral and vertical distribution of the facies of economic interest, enabling decisions on new well locations and reducing the uncertainty associated with field exploitation. However, the Lithological Contrast SOM results from traditional attributes showed a better level of detail than the seismic inversion SOM.

Introduction

Self-Organizing Maps (SOM) is an unsupervised neural network – a form of machine learning – that has been used in multi-attribute seismic analysis to extract more information from the seismic response than would be practical using only single attributes. The most common use is in automated facies mapping, where it is expected that every neuron or group of neurons can be associated with a single depositional environment, the reservoir's lateral and vertical extension, porosity changes, or fluid content (Marroquín et al., 2009). Of course, the SOM results must be calibrated with available well logs. In this paper, the authors generated petrophysical labels to apply statistical validation techniques between well logs and SOM results. Based on the application of PCA to a larger set of attributes, a smaller, distilled set of attributes was classified using the SOM process to identify lithological changes in the reservoir (Roden et al., 2015).

A bivariate statistical approach was then conducted to reveal the relationship between two categorical variables: the individual neurons comprising the SOM classification volume and Net Reservoir determined from petrophysical properties (percentage of occurrence of each neuron versus Net Reservoir).

The Chi-Square test compares the observed frequencies (Agresti, 2002) of each SOM lithological contrast neuron against the Net Reservoir variable (grouped into “Net Reservoir” and “not reservoir” categories). Additional data analysis was conducted to determine which neurons responded to the presence of hydrocarbons, using box plots of Water Saturation, Clay Volume, and Effective Porosity as Net Pay indicators. The combination of these methods proved an effective means of identifying the approximate extent of the reservoir.

About the Study Area

The reservoir rock consists of sandstones of Upper Miocene age deposited in a slope fan environment. These sandstones correspond to channel and slope-lobe facies, composed mainly of quartz and potassium feldspar cemented by calcareous material of medium maturity. The submarine slope fans were deposited at the beginning of the deceleration of the relative sea-level fall and consist of complex deposits associated with gravitational mass movements.

Stratigraphy and Sedimentology

The stratigraphic chart comprises tertiary terrigenous rocks from Upper Miocene to Holocene. The litho-stratigraphic units are described in Table 1.

Table 1: Stratigraphic Epoch Chart of Study Area

 

Figure 1. Left: Regional depositional facies. Right: Electrofacies and theoretical model, Mutti (1978).

Figure 1 (left) shows the facies distribution map of the sequence, corresponding to the first platform-basin system established in the region. The two dashed lines – one red and one dark brown – represent the platform edge at different times, according to several regional integrated studies in the area. The predominant direction of sediment supply for the studied Field W is south to north, consistent with the current regional sedimentary model. The field covers an area of approximately 46 km² and is located in distributary-channel facies northeast of the main channel. The reservoir sand is well sorted and consolidated in a clay matrix, a texture thought to correspond to the middle portion of the turbidite system. In wells W-2, W-4, W-5, and W-6, the electrofacies derived from gamma ray logs are box-shaped and associated with distributary-channel facies, which exhibit the highest average porosity. In contrast, wells W-3 and W-1 are associated with lobular facies according to their gamma ray logs. Figure 1 (right) shows the sedimentary scheme of submarine fans proposed by Mutti (1978).

Petrophysics

The Stieber model was used to classify Clay Volume (VCL). Effective Porosity (PIGN) was obtained using the Neutron-Density model, and non-clay intergranular Water Saturation (SUWI) was determined using the Simandoux model with a formation water salinity of 45,000 ppm. The petrophysical cut-off values used in the Net Reservoir and Net Pay estimations were VCL < 0.45, PIGN > 0.10, and SUWI < 0.65, respectively.
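As an illustration of how such cut-offs translate into discrete flags, the sketch below applies them to a hypothetical well-log table with pandas; the values are invented and the column names simply reuse the mnemonics above, so this is a sketch under stated assumptions rather than the study's actual workflow.

```python
import pandas as pd

# Hypothetical per-depth log values (fractions); not the study's data.
logs = pd.DataFrame({
    "VCL":  [0.30, 0.55, 0.40, 0.20],   # Clay Volume
    "PIGN": [0.14, 0.08, 0.12, 0.22],   # Effective Porosity
    "SUWI": [0.35, 0.80, 0.50, 0.25],   # Water Saturation
})

# Net Reservoir: rock-quality cut-offs only (VCL and PIGN).
logs["net_reservoir"] = (logs["VCL"] < 0.45) & (logs["PIGN"] > 0.10)

# Net Pay: Net Reservoir that also passes the saturation cut-off.
logs["net_pay"] = logs["net_reservoir"] & (logs["SUWI"] < 0.65)
print(logs)
```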

Reservoir Information

The reservoir rock corresponds to sands with Net Pay thickness ranging from 9 to 12 m, porosity between 18 and 25%, average permeability of 8-15 mD, and Water Saturation of approximately 25%. The initial pressure was 790 kg/cm², and the current pressure is 516 kg/cm². The main problem affecting productivity in this volumetric reservoir is pressure drop, the displacement mechanisms being rock-fluid expansion and solution gas. Additionally, there are sanding problems and asphaltene precipitation.

Methodology

Multidisciplinary information was collected and validated to carry out seismic multi-attribute analysis. Static and dynamic characterization studies were conducted in the study area, revealing the most relevant reservoir characteristics and yielding a better sense of the proposed drilling locations. At present, six wells have been drilled.

The original available seismic volume and associated gathers employed in the generation of multiple attributes and for simultaneous inversion were determined to be of adequate quality. At target depth, the dominant frequency approaches 14 Hz, and the interval velocity is close to 3,300 m/s. Therefore, the vertical seismic resolution is 58 m. The production sand has an average thickness of 13 m, so it cannot be resolved with conventional seismic amplitude data.
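The quoted resolution follows from the quarter-wavelength tuning criterion and can be checked directly:

$$\frac{\lambda}{4} = \frac{v}{4f} = \frac{3{,}300\ \text{m/s}}{4 \times 14\ \text{Hz}} \approx 59\ \text{m},$$

in agreement, to within rounding, with the 58 m quoted above.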

Principal Component Analysis (PCA)

Principal Component Analysis (PCA) is one of the most common descriptive statistics procedures used to synthesize the information contained in a set of variables (volumes of seismic attributes) and to reduce the dimensionality of a problem. Applied to a collection of seismic attributes, PCA can be used to identify the attributes that make the greatest contribution, based on their relative variance, within a region of interest. Attributes identified through the use of PCA are responsive to specific geological features, e.g., lithological contrast, fracture zones, among others. The output of PCA is an eigenspectrum that quantifies the relative contribution, or energy, of each seismic attribute to the studied characteristic.

PCA Applied for Lithological Contrast Detection

The PCA process was applied to the following attributes to identify those most significant for detecting lithological contrasts at the depth of interest: Thin Bed Indicator, Envelope, Instantaneous Frequency, Imaginary Part, Relative Acoustic Impedance, Sweetness, Amplitude, and Real Part. Of the entire seismic volume, only the voxels (seismic samples) in a time window delimited by the horizon of interest were analyzed, specifically 56 milliseconds above and 32 milliseconds below the horizon. The results are shown for each principal component. The criterion used was to select attributes whose maximum percentage contribution to the principal component was greater than or equal to 80%. Using this selection technique, the first five principal components were reviewed in the eigenspectrum. In the end, six (6) attributes from the first two principal components were selected (Figure 2).

Figure 2. PCA results for lithological contrast detection.
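For readers who wish to experiment, the fragment below sketches this attribute-selection step with scikit-learn on a stand-in sample matrix. The attribute names, the random data, and the reading of the 80% criterion (keeping attributes whose loading reaches at least 80% of the largest loading in a component) are assumptions for illustration, not the implementation used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA

# Windowed samples of each attribute volume stacked into a matrix of shape
# (n_samples, n_attributes); the names and random values are stand-ins.
names = ["ThinBedIndicator", "Envelope", "InstFrequency", "ImaginaryPart",
         "RelAcousticImp", "Sweetness", "Amplitude", "RealPart"]
X = np.random.rand(20000, len(names))

X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each attribute
pca = PCA().fit(X)
print(pca.explained_variance_ratio_)       # the eigenspectrum

# One reading of the 80% rule: within a principal component, keep the
# attributes whose absolute loading is at least 80% of the largest loading.
for i in range(2):                         # first two PCs, as in the text
    load = np.abs(pca.components_[i])
    kept = [n for n, l in zip(names, load) if l >= 0.8 * load.max()]
    print(f"PC{i + 1}: {kept}")
```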

Simultaneous Classification of Seismic Attributes Using a Self-Organizing Maps (SOM) Neural Network (Voxel Scale)

The SOM method is an unsupervised classification process in that the network is trained from the input data alone. A SOM consists of components (vectors) called neurons or classes and of input vectors that have a position on the map. Input values are compared with the neurons, which detect groupings through training (machine learning) and mapping. The SOM process nonlinearly maps the neurons to a two-dimensional hexagonal or rectangular grid; that is, SOM describes a mapping from a larger space to a smaller one. A vector from the data space is located on the map by finding the neuron whose weight vector is closest (smallest metric distance) to that data vector. (The analysis here considered the seismic samples located within a time window spanning several samples above and below the target horizon throughout the study area.) It is important to classify together attributes that have the same common interpretive use, such as lithological indicators, fault delineation, among others. The SOM revealed patterns and identified natural organizational structures present in the data that are difficult to detect in any other way (Roden et al., 2015). Because the SOM classification in this study is applied to individual samples (using the sample rate and bin size of the seismic data; Figure 2, lower right box), it detects features below conventional seismic resolution, in contrast with traditional wavelet-based classification methods.
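A minimal numpy sketch of the Kohonen training loop just described is shown below, for a 5x5 map (25 neurons) on an (n_samples, n_attributes) matrix. It illustrates only the best-matching-unit search and Gaussian neighborhood update; the decay schedules and parameters are arbitrary choices, and this is not the commercial implementation used in the study.

```python
import numpy as np

def train_som(data, rows=5, cols=5, n_iter=10000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen SOM trained on an (n_samples, n_attributes) matrix."""
    rng = np.random.default_rng(seed)
    weights = rng.random((rows, cols, data.shape[1]))
    # Map coordinates of every neuron, used by the neighborhood function.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)              # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5  # decaying neighborhood radius
        x = data[rng.integers(len(data))]    # random training sample
        # Best-matching unit: neuron whose weight vector is closest to x.
        bmu = np.unravel_index(
            np.argmin(np.linalg.norm(weights - x, axis=-1)), (rows, cols))
        # Gaussian neighborhood on the 2D map, centered on the BMU.
        d2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)    # pull neighbors toward x
    return weights

def classify(data, weights):
    """Winning-neuron index (0..24 for a 5x5 map) for every sample."""
    d = np.linalg.norm(weights[None, ...] - data[:, None, None, :], axis=-1)
    return d.reshape(len(data), -1).argmin(axis=1)

# Example: classify 10,000 six-attribute samples (stand-in random data).
X = np.random.rand(10000, 6)
labels = classify(X, train_som(X))
```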

SOM Classification for Lithological Contrast Detection

The following six attributes were input to the SOM process with 25 classes (5 X 5) stipulated as the desired output: Envelope, Hilbert, Relative Acoustic Impedance, Sweetness, Amplitude, and Real Part.

As in the PCA analysis, the SOM was delimited to seismic samples (voxels) in a time window following the horizon of interest, specifically 56 milliseconds above to 32 milliseconds below. The resulting SOM classification volume was examined with several visualization and statistical analysis techniques to associate SOM classification patterns with reservoir rock.

3D and Plan Views

One way of identifying patterns or trends coherent with the sedimentary model of the area is to visualize all samples grouped by each neuron in 3D and plan views, using a stratal-slicing technique throughout the reservoir. The Kohonen SOM and the 2D colormap in Figure 3 (lower right) ensure that the characteristics of neighboring neurons are similar. The upper part of Figure 3 shows groupings classified by all 5x5 (25) neurons comprising the neural network, while the lower part shows groupings interpreted to be associated with the reservoir, classified by the few neurons consistent with the regional sedimentary model, i.e., neurons N12, N13, N16, N17, N22, and N23.

Figure 3. Plan view of preliminary geobodies with geological significance from the Lithological Contrast SOM. Below: only the neurons associated with the reservoir are shown.

Vertical Seismic Section Showing Lithological Contrast SOM

The observed lithology in the reservoir sand is predominantly shaly sandstone. A discrete log for Net Reservoir was generated to calibrate the results of the Lithological Contrast SOM, using cut-off values for Clay Volume and Effective Porosity. Figure 4 shows the Lithological Contrast SOM classification in vertical section and plan view together with the available well data. The samples grouped by neurons N17, N21, and N22 match the Net Reservoir discrete logs. It is notable that only well W-3 (a minor producer) intersected the samples grouped by neuron N17 (light blue); the rest of the wells intersected only neurons N21 and N22. It is important to note that these features are not observed on the conventional seismic amplitude data (wiggle traces).

Figure 4. Vertical section showing the Lithological Contrast SOM, the Amplitude attribute (wiggle), and the Net Reservoir discrete property along wells.

Stratigraphic Well Section

A cross-section containing the wells (Figure 5) shows logs of Gamma Ray, Clay Volume, perforations, resistivity, Effective Porosity, Net Reservoir with the Lithological Contrast SOM classification, and Net Pay.
The SOM results were compared by observation with the discrete well log data, relating specific neurons to the reservoir. At the depth of the target zone, only neurons N16, N17, N21, and N22 are present. It is noteworthy that only well W-3 (a minor producer) intersects the clusters formed by neuron N17 (light blue); the rest of the wells intersect neurons N16, N21, N22, and N23.

Statistical Analysis: Vertical Proportion Curve (VPC)

Traditionally, Vertical Proportion Curves (VPC) are qualitative and quantitative tools used by some sedimentologists to define succession, division, and variability of sedimentary sequences from well data, since logs describe vertical and lateral evolution of facies (Viloria et al., 2002). A VPC can be modeled as an accumulative histogram where the bars represent the facies proportion present at a given level in a stratigraphic unit. As part of the quality control and revision of the SOM classification volume for Lithological Contrasts, this statistical technique was used to identify whether in the stratigraphic unit or in the window of interest, a certain degree of succession and vertical distribution of specific neurons observed could be related to the reservoir.

The main objective of this statistical method is to identify how specific neurons are vertically concentrated along one or more logs. As an illustration of the technique, a diagram of the stratigraphic grid is shown in Figure 6. The VPC was extracted from the whole 3D grid of the Lithological Contrast SOM classification volume by counting the occurrences of each of the 25 neurons or classes in every stratigraphic layer. The VPC of SOM neurons exhibits remarkable slowly-varying characteristics indicative of geologic depositional patterns. The reservoir top corresponds to stratigraphic layer No. 16. In the VPC on the right, only neurons N16, N17, N21, and N22 are present; these neurons have a higher percentage of occurrence, relative to all 25 classes, from the top of the target sand downwards. Corroborating the statistics, these same neural classes appear in the map view in Figure 3 and the vertical section shown in Figure 4. The stratigraphic well section in Figure 5 also supports the statistical results. It is important to note that these neurons also detected seismic samples above the top of the sand, although in a lesser proportion. This effect is consistent with the existence of overlying layers with similar lithological characteristics, which can be seen in the well logs.

Figure 6. Vertical Proportion Curve to identify neurons related to reservoir rock.
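The layer-by-layer counting that builds such a curve amounts to a grouped frequency table. A small pandas sketch is shown below; the layer indices and neuron labels are invented for illustration.

```python
import pandas as pd

# Hypothetical extraction: one row per seismic sample inside the window,
# with its stratigraphic layer index and winning SOM neuron.
samples = pd.DataFrame({
    "layer":  [15, 16, 16, 17, 17, 17, 18],
    "neuron": ["N12", "N16", "N21", "N21", "N22", "N17", "N21"],
})

# Proportion of each neuron per stratigraphic layer -- the bars of the VPC.
vpc = (samples.groupby("layer")["neuron"]
              .value_counts(normalize=True)
              .unstack(fill_value=0.0))
print(vpc)
```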

Bivariate Statistical Analysis: Cross Tabs

The first step in this methodology is a bivariate analysis through cross-tabs (contingency tables) to determine whether two categorical variables are related, based on observing the extent to which the occurrence of one variable repeats across the categories of the second. Given that one variable is analyzed in terms of another, a distinction must be made between dependent and independent variables. Cross-tab analysis extends the possibilities beyond separate frequency analyses of each variable to analyses of joint frequencies, in which the unit of analysis is defined by the combination of the two variables.

The result was obtained by extracting the SOM classification volume along the well paths and constructing a discrete well log with two categories: “Net Reservoir” and “not reservoir.” The distinction simply indicates whether or not the rock might have hydrocarbon storage capacity. In this case, the dependent variable corresponds to the neurons of the Lithological Contrast SOM classification volume. It is of ordinal type, since it has an established internal order (the neurons go from N1 to N25, organized in rows), though the change from one category to another is not uniform. The independent variable, Net Reservoir, is also ordinal. In the cross-tab, the rows correspond to the neurons of the Lithological Contrast SOM classification volume, and the columns hold the “Net Reservoir” and “not reservoir” counts for each neuron. Table 2 shows that the highest Net Reservoir counts are associated with neurons N21 and N22, at 47.0% and 28.2% respectively. Conversely, lower Net Reservoir counts are associated with neurons N17 (8.9%), N16 (7.8%), and N23 (8.0%).

Table 2. Cross Tab for Lithological Contrast SOM versus Net reservoir.
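A contingency table of this kind can be reproduced with pandas' crosstab on the per-sample extraction; the values below are invented for illustration, not the paper's counts.

```python
import pandas as pd

# Hypothetical per-sample extraction along the well paths: winning neuron
# versus the discrete Net Reservoir flag from the petrophysical logs.
df = pd.DataFrame({
    "neuron": ["N21", "N21", "N22", "N17", "N16", "N23", "N21", "N22"],
    "net":    ["Net Reservoir", "Net Reservoir", "Net Reservoir",
               "not reservoir", "not reservoir", "not reservoir",
               "Net Reservoir", "not reservoir"],
})

table = pd.crosstab(df["neuron"], df["net"], margins=True)
pct = pd.crosstab(df["neuron"], df["net"], normalize="columns") * 100
print(table)
print(pct.round(1))   # percentage of occurrence of each neuron per category
```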

Neuron N21 was detected at reservoir depth in wells W-2 (producer), W-4 (abandoned for technical reasons during drilling), W-5 (producer) and W-6 (producer). N21 showed higher percentages of occurrence in Net Reservoir, so this neuron could be identified as indicating the highest storage capacity. N22 was present in wells W-1 and W-6 at target sand depth but also detected in wells W-2, W-4 and W-5 in clay-sandy bodies overlying the highest quality zone in the reservoir. N22 was also detected in the upper section of target sand horizontally navigated by the W-6 well, which has no petrophysical evaluation. N17 was only detected in well W-3, a minor producer of oil, which was sedimentologically cataloged as lobular facies and had the lowest reservoir rock quality. N16 was detected in a very small proportion in wells W-4 (abandoned for technical reasons during drilling) and W-5 (producer). Finally, N23 was only detected towards the top of the sand in well W-6, and in clayey layers overlying it in the other wells. This is consistent with the observed percentage of 8% Net Reservoir, as shown in Table 2.

Chi-Square Independence Hypothesis Testing

After the cross-tab evaluation, this classified information was the basis of a Chi-Square test of independence to determine whether the two categorical variables, Net Reservoir and SOM neurons, are associated. That is, the test examines whether a relationship between the variables can be ruled out. The Chi-Square test compared the observed frequencies of each Lithological Contrast neuron with respect to the Net Reservoir variable (grouped into “Net Reservoir” and “not reservoir”) against the frequencies theoretically expected under the null hypothesis.

As a starting point, the null hypothesis was that the Lithological Contrast SOM neuron occurrences are independent of the presence of Net Reservoir. If the calculated Chi-Square value is equal to or greater than the critical theoretical value, the null hypothesis must be rejected and the alternative hypothesis accepted. The results in Table 3 show that the calculated Chi-Square is greater than the theoretical critical value (296 ≥ 9.49, with four degrees of freedom at the 5% significance level), so the null hypothesis of independence between Net Reservoir and the SOM neurons is rejected, establishing a relationship between the Net Reservoir and Lithological Contrast SOM variables.
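In Python, the same test can be sketched with scipy; the observed counts below are illustrative stand-ins, not the paper's table, but the comparison against the critical value mirrors the logic just described.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Illustrative observed counts (rows: neurons N16, N17, N21, N22, N23;
# columns: Net Reservoir, not reservoir) -- not the paper's actual table.
observed = np.array([
    [ 78,  60],
    [ 89,  70],
    [470, 120],
    [282, 150],
    [ 80, 200],
])

stat, p, dof, expected = chi2_contingency(observed)
critical = chi2.ppf(0.95, dof)   # 5% significance level; dof = 4 here
print(f"chi2 = {stat:.1f}, dof = {dof}, critical = {critical:.2f}, p = {p:.3g}")
if stat >= critical:
    print("Reject independence: neurons and Net Reservoir are related.")
```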

However, the test does not report the strength of the association (substantial, moderate, or poor). To measure the degree of correlation between the two variables, Pearson's Phi (φ) and Cramer's V coefficients were computed. Pearson's φ coefficient was estimated from Eq. 1.1:

$$\varphi = \sqrt{\frac{\chi^{2}}{n}} \qquad \text{(Eq. 1.1)}$$

where χ² is the calculated Chi-Square statistic and n is the number of cases.

Additionally, Cramer’s V was estimated using Eq. 1.2.

$$V = \sqrt{\frac{\chi^{2}}{n \cdot \min(r-1,\, c-1)}} \qquad \text{(Eq. 1.2)}$$

where r and c are the number of rows and columns of the contingency table.

In both cases, values near zero indicate a poor or weak relationship, while values close to one indicate a strong relationship. The authors obtained a value of 0.559 for both φ and Cramer's V (Table 3); with only two columns in the table, the two measures coincide. Based on this result, a moderate relationship between the two variables can be interpreted.

Table 3. Calculated and theoretical Chi-Square values and the associated correlation measures.
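Given the observed table, both association measures follow in a few lines; this sketch simply restates Eqs. 1.1 and 1.2 on the same illustrative counts as above.

```python
import numpy as np
from scipy.stats import chi2_contingency

def phi_and_cramers_v(observed):
    """Association measures for a contingency table (Eqs. 1.1 and 1.2)."""
    chi2_stat = chi2_contingency(observed)[0]
    n = observed.sum()
    phi = np.sqrt(chi2_stat / n)                          # Eq. 1.1
    r, c = observed.shape
    v = np.sqrt(chi2_stat / (n * min(r - 1, c - 1)))      # Eq. 1.2
    return phi, v

# Two columns -> min(r-1, c-1) = 1, so phi and V coincide, as in the single
# 0.559 value reported above (illustrative table, not the paper's data).
table = np.array([[78, 60], [89, 70], [470, 120], [282, 150], [80, 200]])
print(phi_and_cramers_v(table))
```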

Box-and-Whisker Plots

Box-and-whisker plots were constructed to compare and understand the behavior of the petrophysical properties over the intervals where each neuron intersects the well paths in the SOM volume, and to quantify which neurons of interest respond to Net Reservoir and Net Pay properties (Figure 7). Five descriptive measures are shown in the box-and-whisker plot of each property:

• Median (thick black horizontal line)
• First quartile (lower limit of the box)
• Third quartile (upper limit of the box)
• Maximum value (upper end of the whisker)
• Minimum value (lower end of the whisker)

The graphs provide information about data dispersion (the longer the box and whiskers, the greater the dispersion) and about data symmetry: if the median is relatively centered in the box, the distribution is symmetrical; if, on the contrary, it approaches the first or third quartile, the distribution is skewed toward that quartile. Finally, these graphs identify outlier observations that depart from the rest of the data in an unusual way (represented by dots and asterisks according to their distance from the center of the data). The horizontal dashed green line is the cut-off value for Effective Porosity (PIGN > 0.10), the dashed blue line represents the cut-off value for Clay Volume (VCL > 0.45), and the dashed beige line is the cut-off value for Water Saturation (SUWI < 0.65).
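A plot of this kind can be sketched with pandas and matplotlib, grouping a petrophysical curve by intersected neuron and overlaying the cut-off line; the values below are invented for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative per-sample Effective Porosity values grouped by the SOM
# neuron intersected along the well paths (not the paper's data).
logs = pd.DataFrame({
    "neuron": ["N16", "N16", "N17", "N17", "N21", "N21", "N22", "N22"],
    "PIGN":   [0.09, 0.12, 0.11, 0.13, 0.16, 0.20, 0.15, 0.18],
})

ax = logs.boxplot(column="PIGN", by="neuron")
ax.axhline(0.10, color="green", linestyle="--")  # PIGN cut-off line
ax.set_ylabel("Effective Porosity (frac)")
plt.suptitle("")   # drop pandas' automatic group title
plt.show()
```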

Based on these data and the resulting analysis, it can be inferred that neurons N16, N17, N21, N22, and N23 respond positively to Net Reservoir. Of these, the most valuable predictors are N21 and N22, since they present lower clay content than neurons N16 and N23, along with the higher Effective Porosity also shown by neurons N16, N17, and N23 (Figure 7a). Neurons N21 and N22 are therefore ascertained to represent the best reservoir rock quality. Finally, neuron N23 (Figure 7b) can be associated with rock that has storage capacity but is clayey and has high Water Saturation, which justifies discarding it as a significant neuron. It is important to note that this analysis accounted for the simultaneous occurrence of the petrophysical values (VCL, PIGN, and SUWI), first on the neurons initially intersected (Figure 7a), then on the portion of the neurons that pass the Net Reservoir cut-off values (Figure 7b), and finally on the portion that pass the Net Pay cut-off values (Figure 7c). For all these petrophysical reasons, the neurons to be considered as a reference for estimating the lateral and vertical distribution of Net Reservoir associated with the target sand are, in order of importance, N21, N22, N16, and N17.

Figure 7. Comparison between neurons according to petrophysical properties: VCL (Clay Volume), PIGN (Effective Porosity) and SUWI (Water Saturation). a) SOM neurons for lithological contrast detection, b) Those that pass Net Reservoir cut-off and c) Those that pass Net Pay cut-off.

Simultaneous Seismic Inversion

During this study, a simultaneous prestack inversion was performed using the 3D seismic data and sonic logs in order to estimate seismic petrophysical attributes such as Acoustic Impedance (Zp), Shear Impedance (Zs), and Density (Rho), as well as P- and S-wave velocities, among others. These attributes are commonly used as indicators of lithology, possible fluids, and geomechanical properties. Figure 8a shows a scatter plot from well data of the Lambda Rho to Mu Rho ratio versus Clay Volume (VCL), with the Vp/Vs ratio as discriminator. The target sand corresponds to low Vp/Vs and Lambda/Mu values (circled in the figure). Another discriminator in the reservoir was S-wave impedance (Zs) (Figure 8b). From this, the seismic inversion attributes selected for classification by SOM neural network analysis were the Vp/Vs ratio, the Lambda Rho/Mu Rho ratio, and Zs.

Figure 8. Scatter plots: a) Lambda Rho/Mu Rho ratio versus VCL and Vp/Vs, and b) Zs versus VCL and Vp/Vs.

Self-Organizing Map (SOM) Comparison

Figure 9 is a plan view of neuron-extracted geobodies associated with the sand reservoir. The upper part shows a SOM classification for Lithological Contrast detection obtained from six traditional seismic attributes; the lower part shows a different SOM classification for Lithological Contrast detection obtained from three simultaneous inversion attributes. Both results are very similar. The SOM classification neurons from the inversion attributes were selected through spatial pattern recognition, i.e., by identifying cluster geometries, related to each of the 25 neurons, that are congruent with the sedimentary model, and by using a stratigraphic well section that includes both SOM classification tracks.

Figure 9. Plan view of neurons with geological meaning. Up: SOM Classification from traditional attributes. Down: SOM Classification from simultaneous inversion attributes.

Figure 10 shows a well section that includes tracks for the Net Reservoir and Net Pay classifications along with the SOM classification from traditional attributes and a second SOM from simultaneous inversion attributes, both defined from the intersection of the SOM volumes with the well paths. Only the neurons with geological meaning are shown.

Figure 10. Well section showing the target zone with tracks for discrete logs from Net Reservoir, Net Pay and both SOM classifications.

Discussion and Conclusions

Principal Component Analysis (PCA) identified the most significant seismic attributes, which were then classified by a Self-Organizing Maps (SOM) neural network on a single-sample basis to detect features associated with lithological contrast and to recognize their lateral and vertical extension in the reservoir. The interpretation of the SOM classification volumes was supported by multidisciplinary sources (geological, petrophysical, and dynamic data). In this way, the clusters detected by certain neurons became the inputs for geobody interpretation. The statistical analysis and visualization techniques enabled the estimation of Net Reservoir for each neuron. Finally, the extent of the reservoir rock geobodies derived from the SOM classification of traditional attributes was corroborated by the SOM of simultaneous inversion attributes. Both multi-attribute machine learning analyses, of traditional attributes and of seismic inversion attributes, enable refinement of the sedimentary model to reveal more precisely the lateral and vertical distribution of facies. However, the Lithological Contrast SOM results from traditional attributes showed a better level of detail than the seismic inversion SOM.

Collectively, the workflow may reduce uncertainty in proposing new drilling locations. Additionally, this methodology might be applied using specific attributes to identify faults and fracture zones, identify absorption phenomena, porosity changes, and direct hydrocarbon indicator features, and determine reservoir characteristics.

Acknowledgments

The authors thank Pemex and Oil and Gas Optimization for providing software and technical resources. Thanks are also extended to Geophysical Insights for the research and development of the Paradise® AI workbench and the machine learning applications used in this paper. Finally, thanks to Reinaldo Michelena, María Jerónimo, Tom Smith, and Hal Green for their review of the manuscript.

References

Agresti, A., 2002, Categorical Data Analysis: John Wiley & Sons.

Marroquín I., J.J. Brault and B. Hart, 2009, A visual data mining methodology to conduct seismic facies analysis: Part 2 – Application to 3D seismic data: Geophysics, 1, 13-23.

Roden R., T. Smith and D. Sacrey, 2015, Geologic pattern recognition from seismic attributes: Principal component analysis and self-organizing maps: Interpretation, 3, SAE59-SAE83.

Viloria R. and M. Taheri, 2002, Metodología para la Integración de la Interpretación Sedimentológica en el Modelaje Estocástico de Facies Sedimentarias, (INT-ID-9973, 2002). Technical Report INTEVEP-PDVSA.

Solving Interpretation Problems using Machine Learning on Multi-Attribute, Sample-Based Seismic Data
Presented by Deborah Sacrey, Owner of Auburn Energy
Challenges addressed in this webinar include:

  • Reducing risk in drilling marginal or dry holes
  • Interpretation of thin bedded reservoirs far below conventional seismic tuning
  • How to better understand reservoir characteristics
  • Interpretation of reservoirs in deep, pressured environments
  • Using the classification process to help with correlations in difficult stratigraphic or structural environments

The webinar is open to those interested in learning more about how the application of machine learning is key to seismic interpretation.

 
Deborah Sacrey
Owner, Auburn Energy

Deborah Sacrey is a geologist/geophysicist with 41 years of oil and gas exploration experience in the Texas, Louisiana Gulf Coast, and Mid-Continent areas of the US. Deborah specializes in 2D and 3D interpretation for clients in the US and internationally.

She received her degree in Geology from the University of Oklahoma in 1976 and began her career with Gulf Oil in Oklahoma City. She started Auburn Energy in 1990 and built her first geophysical workstation using the Kingdom software in 1996. Deborah then worked closely with SMT (now part of IHS) for 18 years developing and testing Kingdom. For the past eight years, she has been part of a team to study and bring the power of multi-attribute neural analysis of seismic data to the geoscience community, guided by Dr. Tom Smith, founder of SMT. Deborah has become an expert in the use of the Paradise® software and has over five discoveries for clients using the technology.

Deborah is very active in the geological community. She is past national President of SIPES (Society of Independent Professional Earth Scientists), past President of the Division of Professional Affairs of AAPG (American Association of Petroleum Geologists), Past Treasurer of AAPG and Past President of the Houston Geological Society. She is currently the incoming President of the Gulf Coast Association of Geological Societies (GCAGS) and is a member of the GCAGS representation on the AAPG Advisory Council. Deborah is also a DPA Certified Petroleum Geologist #4014 and DPA Certified Petroleum Geophysicist #2. She is active in the Houston Geological Society, South Texas Geological Society and the Oklahoma City Geological Society (OCGS).

Solving Exploration Problems with Machine Learning

By: Deborah Sacrey and Rocky Roden
Published with permission: First Break
Volume 36, June 2018

Introduction

Over the past eight years, the evolution of machine learning in the form of unsupervised neural networks has been applied to improve and gain more insights from the seismic interpretation process (Smith and Taner, 2010; Roden et al., 2015; Santogrossi, 2017; Roden and Chen, 2017; Roden et al., 2017). Today's interpretation environment involves an enormous amount of seismic data, including regional 3D surveys with numerous processing versions and dozens if not hundreds of seismic attributes. This ‘Big Data’ issue poses problems for geoscientists attempting to make accurate and efficient interpretations. Multi-attribute machine learning approaches such as self-organizing maps (SOMs), an unsupervised learning approach, not only incorporate numerous seismic attributes, but often reveal details in the data not previously identified. The reason for this improved interpretation process is that SOM analyses the data at each sample (sample interval × bin) for the multiple seismic attributes that are simultaneously analysed for natural patterns or clusters. The scale of the patterns identified by this machine learning process is on a sample basis, unlike conventional amplitude data, where resolution is limited by the associated wavelet (Roden et al., 2017).

Figure 1 illustrates how all the sample points from the multiple attributes are placed in attribute space, where they are standardized or normalized to the same scale. In this case, ten attributes are employed. Neurons, the points that identify patterns or clusters, are initially located randomly in attribute space, and the SOM process then proceeds to identify patterns in this multi-attribute space. When completed, the results are nonlinearly mapped to a 2D colormap, where hexagons representing each neuron identify the associated natural patterns in the data in 3D space. This 3D visualization is how the geoscientist interprets geological features of interest.

Figure 1. Illustration of the SOM process where samples from ten seismic attributes are placed in attribute space, normalized, then 64 neurons identify 64 patterns from the data in the SOM process. The interpreter selects one or a combination of neurons from the 2D colourmap to identify geologic features of interest.
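The standardization step mentioned above is a simple per-attribute rescaling; a sketch is shown below (z-score per column, one common choice among several, applied to stand-in data).

```python
import numpy as np

def standardize(X):
    """Z-score each attribute column of an (n_samples, n_attrs) matrix."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Attributes live on very different scales (e.g., Hz versus impedance), so
# each is rescaled before the distance-based SOM comparison.
X = np.random.rand(5000, 10)   # stand-in for ten attributes' samples
Xz = standardize(X)
print(Xz.mean(axis=0).round(6), Xz.std(axis=0).round(6))  # ~0 and ~1
```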

The three case studies in this paper are real-world examples of using this machine learning approach to make better interpretations.

Case History 1 – Defining Reservoir in Deep, Pressured Sands with Poor Data Quality

The Tuscaloosa reservoir in Southern Louisiana, USA, is a low-resistivity sand at a depth of approximately 5180 to 6100 m (17,000 to 20,000 ft). It is primarily a gas reservoir, but has a liquids component as well, with an average ratio of 50 barrels of liquid per MMcf of gas. The problem was being able to identify the reservoir around a well which had been producing since the early 1980s but had never been offset because of poor well control and poor seismic data quality at that depth. The well in question has produced more than 50 BCF of gas and approximately 1.2 MMBO, and is still producing at a very steady rate. The operator wanted to know if the classification process could wade through the poor data quality, see the reservoir from which the well had been producing, calculate the depleted area, and look for additional drilling locations.

The initial quality of the seismic data, which was shot long after the well started producing, is shown in Figure 2. Instantaneous and hydrocarbon-indicating attributes were created to be used in a multi-attribute classification analysis to help define the laminated-sand reservoir. The 3D seismic data covered approximately 72.5 km², and there were 12 wells in the area for calibration, not including the key well in question. The attributes were calculated over the entire survey, from 1.0 to 4.0 seconds, which covers the zone of interest as well as some possible shallow pay zones. Many of the wells had been drilled after the 3D data was shot, and ranged from dry holes to wells which have produced more than 35 BCFE since the early 2000s.

Figure 2. Seismic amplitude line through Tuscaloosa Sands at 5790 m. This key well has been producing for more than 35 years from 6 m of perforations.

A synthetic was created to tie the well in question to the seismic data. It was noted that the data was out of phase after tying to the Austin Chalk. The workflow was to rotate the data to zero phase with the U.S. polarity convention, up-sample the data from 4 ms to 2 ms (which allows for better statistical analysis of the attributes for classification purposes), and create the attributes from the parent PSTM volume. While reviewing the attributes, a flat event appeared to be evident in the Attenuation data.
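The up-sampling step can be sketched with scipy's polyphase resampler. Note that halving the sample interval adds no new frequency content; it only provides more samples for the per-sample classification statistics. The trace length below is an arbitrary stand-in.

```python
import numpy as np
from scipy.signal import resample_poly

# Up-sample a 4 ms trace to 2 ms (factor of 2) with polyphase filtering.
trace_4ms = np.random.randn(751)             # stand-in for a 3 s trace at 4 ms
trace_2ms = resample_poly(trace_4ms, up=2, down=1)
print(len(trace_4ms), "->", len(trace_2ms))  # 751 -> 1502
```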

Figure 3. The appearance of a ‘flat spot’ in the Attenuation attribute.

Figure 3 shows what looks like a ‘flat spot’ against regional dip. The appearance of this flat event suggests that a combination of appropriate seismic attributes used in the classification process designed to delineate sands from shales, hydrocarbon indicators and general stratigraphy, may be able to define this reservoir. The eight attributes used were: Attenuation, Envelope Bands on Envelope Breaks, Envelope Bands on Phase Breaks, Envelope 2nd Derivative, Envelope Slope, Instantaneous Frequency, PSTM-Enh_180 rot, and Trace Envelope. A 10x10 matrix topology (100 neurons) was used in the SOM analysis to look for patterns in the data that would break out the reservoir in a 200 ms thick portion of the volume, a zone in which the Tuscaloosa sands occur in this area.

Figure 4. Time slice through the perforated interval from the SOM results showing a) areal extent of the reservoir and the appearance of a braided-channel system in the thinly laminated sands and b) only the key neural classification components shown from which the reservoir is better defined and extent can be easily measured. The yellow circle denotes the key well reservoir.

Figure 4a shows a time slice from the SOM results through the perforations, with all the neural patterns turned on in 3D space along with the well bores in the area. Figure 4b shows the same time slice with only the key neural patterns turned on in the 2D map matrix of neurons, isolating the sand reservoir from the background information. Because this time slice cuts regional dip in a thinly-laminated reservoir, evidence of a braided stream system can readily be seen in the slice.

The result is that the reservoir for the key well calculated out to 526 hectares, which explains the long life and great production. It also appears to extend off the edge of the 3D survey, which could add significantly to the reservoir extent.

Figure 5. Seismic amplitude dip line showing a synthetic tie that denotes locations of perforations, which is associated with a very weak peak event.

Case History 2 – Finding Hydrocarbons in Thin-Bed Environments Well Below Seismic Tuning

In this case, the goal was to find an extension of a reservoir tied to a well which had produced more than 450 MBO from a thin Frio (Tertiary Age) sand at 3289 m. The sand thickness was just a little over 2 m, well below seismic tuning (20 m) at that depth. Careful attention to synthetic creation showed that the well tied to a weak peak event. Figure 5 shows this weak amplitude and the tie to the key well. Again, the data was up-sampled from a 4 ms sample rate to a 2 ms sample rate for better statistics in the SOM classification process, then the attributes were calculated from the up-sampled PSTM-enhanced volume.

The reservoir was situated within a fault block, bounded to the southeast by a 125 m throw fault and along the northwest side by a 50 m throw fault. There were three wells in the southwest portion of the fault block which had poor production and were deemed to be mechanical failures. The key producer was about 4.5 km northeast of the three wells along strike within the fault block. Figure 6 shows an amplitude strike line that connects the marginal wells to the key well. The green horizon was mapped in the trough event that was consistent over the area. The black horizon is 17 ms below the green horizon and is located in the actual zone of perforations in the key producing well and is associated with a weak peak event.

A neural topology of 64 neurons (8x8) was used, along with the following eight attributes, to help determine the thickness and areal extent: Envelope, Imaginary Part, Instantaneous Frequency, PSTM-Enh, Relative Acoustic Impedance, Thin Bed Indicator, Sweetness and Hilbert. Close examination of the SOM results (Figure 7) indicated that the level associated with the weak peak event resembled offshore bar development. Fairly flat reflectors below the event and indications of ‘drape’ over it led to the conclusion that this sand was not a blanket sand, as the client perceived, but had finite limits. Figure 7 shows the flattened time slice in the perforated zone through the event after the neural analysis. One can see the possibility of a tidal channel cut, which could compartmentalize the existing production. Also notable is that the three wells which had been labelled as mechanical failures were actually on very small areas of sand, indicating limited reservoir extent.

Figure 6. Amplitude strike line along fault block showing marginal wells on left and key producer on right. Mapped trough event is shown as well as a horizon 17 ms below the trough which is flattened and displayed in Figure 7.

Figure 7. Flattened time slice 17 ms below trough event after SOM analysis. A possible tidal channel cut through the bar can be interpreted. Note the three marginal wells to the southwest are on the fringes or separated from the main producing area.

Figure 8 is a SOM dip seismic section showing the discovery well after completion. Three m of sand was in the well bore and 2 m were perforated for an initial rate of 250 BOPD and 1.1 MMcfgpd. The SOM results identified these thin sands with light-blue-to-green neurons, with each neuron representing about 2-3 m thickness. This process has identified the thin beds well below the conventional tuning thickness of 20 m. It is estimated that there is another 2 MMBOE remaining in this reservoir.

Figure 8. SOM dip line showing tie of thin sand to bar.

Case History 3 – Using the Classification Process to Help With Interpreting Difficult Depositional Environments

There are many places around the world where the seismic data is hard to interpret because of multiple episodes of exposure, erosion, transgressive and regressive sequences and multi-period faulting. This case is in a portion of the Permian Basin of West Texas and Southeastern New Mexico. This is an area which has been structurally deformed from episodes of expansion and contraction as well as being exposed and buried over millions of years in the Mississippian through to the Silurian ages. There are multiple unconformable surfaces as well as turbidite and debris flows, carbonates and clastic deposition, so it is a challenge to interpret.

The 3D survey has both a PSTM stack from gathers and a high-resolution version of the PSTM. The workflow was to use the high-resolution volume with a low-topology SOM classification of attributes which would help accentuate the stratigraphy. Figure 9a is an example of the PSTM from the initial volume produced from gathers, and Figure 9b is the same line in the high-resolution version. The two horizons shown are the Lower Mississippian in yellow and the Upper Devonian in green; both were interpreted on the PSTM stack from gathers. One can see in the high-resolution volume that neither horizon follows the same events, and additional detail in the data is desired.

Figure 9. Seismic amplitude line in wiggle trace variable area format going through key producing well; a) PSTM from stack and b) high resolution PSTM. Upper horizon (yellow) is Lower Mississippian and lower horizon (green) is Upper Devonian picked from data in a).

The workflow here was to use multi-attribute classification with a low-topology (fewer neurons, so fewer patterns to interpret) unsupervised Self-Organizing Map (SOM). The lower neural count tends to combine the natural patterns in the data into a more ‘regional’ view and makes them easier to interpret. A 4x4 neural matrix was used, so the result had only 16 patterns to interpret. The attributes used were likewise picked for their ability to sort out stratigraphic events: Instantaneous Phase, Instantaneous Frequency, and Normalized Amplitude.

Figure 10 is the result of the interpretation process in the SOM classification volume. Both the newly interpreted horizons and the old horizons are shown to illustrate how much more accurately the classification process defines stratigraphic events. In this section, one can see karsting, debris flows, and possibly some reefs.

Figure 10. Results of the classification using the high-resolution seismic data in a low-topology learning process. Both the new and old horizon interpretations are shown. Many interesting stratigraphic features are shown, including karsting, possible reefs and debris flows.

Conclusion

In this article, three different scenarios were given where multi-attribute neural analysis of data can aid in solving some of the many issues geoscientists face when trying to interpret their data. SOM classification was used to help define 1) reservoirs in deep, pressured, poor-data-quality areas, 2) thin-bedded reservoirs while exploring or developing fields, and 3) stratigraphy in difficult depositional environments. The classification process, or machine learning, is the next wave of technology designed to analyze seismic data in ways that the human eye cannot.

Acknowledgments

The authors would like to thank Geophysical Insights for the research and development of the Paradise® AI workbench and the machine learning applications used in this paper.

References

Roden, R. and Chen, C., 2017, Interpretation of DHI Characteristics with machine learning. First Break, 35, 55-63.

Roden, R., Smith, T. and Sacrey, D., 2015, Geologic pattern recognition from seismic attributes: Principal component analysis and self-organizing maps. Interpretation, 3, SAE59-SAE83.

Roden, R., Smith, T., Santogrossi, P., Sacrey, D. and Jones, G., 2017, Seismic interpretation below tuning with multi-attribute analysis. The Leading Edge, 36, 330-339.

Santogrossi, P., 2017, Technology reveals Eagle Ford insights. American Oil & Gas Reporter, January.

Smith, T. and Taner, M.T., 2010, Natural clusters in multi-attribute seismics found with self-organizing maps. Extended Abstracts, Robinson-Treitel Spring Symposium by GSH/SEG, March 10-11, 2010, Houston, Tx.

Machine Learning Revolutionizing Seismic Interpretation

By Thomas A. Smith and Kurt J. Marfurt
Published with permission: The American Oil & Gas Reporter
July 2017

The science of petroleum geophysics is changing, driven by the nature of the technical and business demands facing geoscientists as oil and gas activity pivots toward a new phase of unconventional reservoir development in an economic environment that rewards efficiency and risk mitigation. At the same time, fast-evolving technologies such as machine learning and multiattribute data analysis are introducing powerful new capabilities in investigating and interpreting the seismic record.

Through it all, however, the core mission of the interpreter remains the same as ever: extracting insights from seismic data to describe the subsurface and predict geology between existing well locations – whether they are separated by tens of feet on the same horizontal well pad or tens of miles in adjacent deepwater blocks. Distilled to its fundamental level, the job of the data interpreter is to determine where (and where not) to drill and complete a well. Getting the answer correct to that million-dollar question gives oil and gas companies a competitive edge.

The ability to arrive at the right answers in the timeliest manner possible is invariably the force that pushes technological boundaries in seismic imaging and interpretation. The state of the art in seismic interpretation is being redefined partly by the volume and richness of high-density, full-azimuth 3-D surveying methods and processing techniques such as reverse time migration and anisotropic tomography. Combined, these solutions bring new resolution and clarity to processed subsurface images that simply are unachievable using conventional imaging methods.

In data interpretation, analytical tools such as machine learning, pattern recognition, multiattribute analysis and self-organizing maps are enhancing the interpreter's ability to classify, model and manipulate data in multidimensional space.

As crucial as the technological advancements are, however, it is clear that the future of petroleum geophysics is being shaped largely by the demands of North American unconventional resource plays. Optimizing the economic performance of tight oil and shale gas projects is not only impacting the development of geophysical technology, but also dictating the skill sets that the next generation of successful interpreters must possess. Resource plays shift the focus of geophysics to reservoir development, challenging the relevance of seismic-based methods in an engineering-dominated business environment. Engineering holds the purse strings in resource plays, and the problems geoscientists are asked to solve with 3-D seismic are very different than in conventional exploration geophysics. Identifying shallow drilling hazards overlying a targeted source rock, mapping the orientation of natural fractures or faults, and characterizing changes in stress profiles or rock properties is related as much to engineering as to geophysics.

Given the requirements in unconventional plays, there are four practical steps to creating value with seismic analysis methods.

The first and obvious step is for oil and gas companies to acquire 3-D seismic and incorporate the data into their digital databases. Some operators active in unconventional plays fully embrace 3-D technology, while others apply it only selectively. If interpreters do not have access to high-quality data and the tools to evaluate that information, they cannot possibly add value to the company's bottom line.

The second step is to break the conventional resolution barrier on the seismic reflection wavelet, the so-called quarter-wavelength limit. This barrier is based on the overlapping reflections of seismic energy from the top and bottom of a layer, and depends on layer velocity, thickness, and wavelet frequencies. Below the quarter-wavelength, the wavelets start to overlap in time and interfere with one another, making it impossible by conventional means to resolve separate events.

The third step is correlating seismic reflection data – including compressional wave energy, shear wave energy and density – to quantitative rock property and geomechanical information from geology and petrophysics. Connecting seismic data to the variety of very detailed information available at the borehole lowers risk and provides a clearer picture of the subsurface between wells, which is fundamentally the purpose of acquiring a 3-D survey.

The final step is conducting a broad, multiscaled analysis that fully integrates all available data into a single rock volume encompassing geophysical, geologic and petrophysical features. Whether for an unconventional shale or a conventional carbonate, bringing all the data together in a unified rock volume resolves issues in subsurface modeling and enables more realistic interpretations of geological characteristics.

The Role of Technology

Every company faces pressures to economize, and the pressures to run an efficient business only ratchet up at lower commodity prices. The business challenges also relate to the personnel side of the equation, and that should never be dismissed. Companies are trying to bridge the gap between older geoscientists who seemingly know everything and the ones entering the business who have little experience but benefit from mentoring, education and training. One potential solution is using information technology to capture best practices across a business unit, and then keeping a scorecard of those practices in a database that can offer expert recommendations based on past experience. Keylogger applications can help by tracking how experienced geoscientists use data and tools in their day-to-day workflows. However, there is no good substitute for a seasoned interpreter. Technologies such as machine learning and pattern recognition have game-changing possibilities in statistical analysis, but as petroleum geologist Wallace Pratt pointed out in the 1950s, oil is first found in the human mind. The role of computing technology is to augment, not replace, the interpreter’s creativity and intuitive reasoning (i.e., the “geopsychology” of interpretation).

Delivering Value

A self-organizing map (SOM) is a neural network-based machine learning process that is applied simultaneously to multiple seismic attribute volumes. The example in the accompanying figure shows a class II amplitude-variation-with-offset response from the top of gas sands, representing the specific conventional geological settings where most direct hydrocarbon indicator characteristics are found. From the top of the producing reservoir, the top image shows a contoured time structure map overlain by amplitudes in color. The bottom image is a SOM classification with low probability (less than 1 percent) denoted by white areas. The yellow line is the downdip edge of the high-amplitude zone designated in the top image.

Seismic data interpreters need to make the estimates they derive from geophysical data more quantitative and more relatable for the petroleum engineer. Whether it is impedance inversion or anisotropic velocity modeling, the predicted results must add some measure of accuracy and risk estimation. It is not enough to simply predict a higher porosity at a certain reservoir depth. To be of consequence to engineering workflows, porosity predictions must be reliably delivered within a range of a few percentage points at depths estimated on a scale of plus or minus a specific number of feet.


Class II amplitude-variation-with-offset response from the top of gas sand.

Machine learning techniques apply statistics-based algorithms that learn iteratively from the data and adapt independently to produce repeatable results. The goal is to address the big data problem of interpreting massive volumes of data while helping the interpreter better understand the interrelated relationships of the different types of attributes contained within 3-D data. The technology classifies attributes by breaking data into what computer scientists call “objects” to accelerate the evaluation of large datasets and allow the interpreter to reach conclusions much faster.

Some computer scientists believe “deep learning” concepts can be applied directly to 3-D prestack seismic data volumes, with an algorithm figuring out the relations between seismic amplitude data patterns and the desired property of interest. While Amazon, Alphabet and others are successfully using deep learning in marketing and other functions, those applications have access to millions of data interactions a day. Given the significantly smaller number of seismic interpreters in the world, and the much greater sensitivity of 3-D data volumes, there may never be sufficient access to training data to develop deep learning algorithms for 3-D interpretation. The concept of “shallow learning” mitigates this problem.
 
Stratigraphy above the Buda

Conventional amplitude seismic display from a northwest-to-southeast seismic section across a well location is contrasted with SOM results using multiple instantaneous attributes.

First, 3-D seismic data volumes are converted to attributes: well-established measures of waveform shape, continuity, orientation and response with offset and azimuth that have proven relations to porosity, thickness, brittleness, fractures and/or the presence of hydrocarbons. This greatly simplifies the problem, with the machine learning algorithms only needing to find simpler (i.e., shallower) relations between the attributes and the properties of interest, as illustrated in the sketch below.

In resource plays, seismic data interpretations increasingly are based on statistical rather than deterministic predictions. In development projects with hundreds of wells within a 3-D seismic survey area, operators rely on the interpreter to identify where to drill and to predict how a well will complete and produce. Given the many known and unknown variables that can impact drilling, completion and production performance, the challenge lies in figuring out how to use statistical tools to apply data measurements from previous wells to estimate the performance of the next well drilled within the 3-D survey area. Therein lies the value proposition of any science, geophysics included. The value of applying machine learning-based interpretation boils down to one word: prediction. The goal is not to score 100 percent accuracy, but to enhance the predictions made from seismic analysis to avoid drilling uneconomic or underproductive wells. Avoiding investment in even a couple of bad wells can pay for all the geophysics needed to make those predictions. And because the statistical models are updated with new data as each well is drilled and completed, the results continually become more quantitative, improving prediction accuracy over time.
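As a minimal illustration of this attribute-first, shallow-learning idea, the Python sketch below derives a few standard instantaneous attributes from a single seismic trace using the Hilbert transform, then assembles them into a per-sample feature matrix ready for a clustering or classification step. The trace here is synthetic, the attribute list is deliberately short, and names such as sample_rate are illustrative assumptions rather than anything prescribed by the article.

    import numpy as np
    from scipy.signal import hilbert

    # Synthetic stand-in for one seismic trace (1,000 samples at 2 ms).
    sample_rate = 0.002                      # seconds per sample (assumed)
    t = np.arange(1000) * sample_rate
    trace = np.sin(2 * np.pi * 30 * t) * np.exp(-2 * t)

    # Complex (analytic) trace via the Hilbert transform.
    analytic = hilbert(trace)

    # Classic instantaneous attributes, one value per seismic sample.
    envelope = np.abs(analytic)                    # reflection strength
    phase = np.unwrap(np.angle(analytic))          # instantaneous phase
    frequency = np.gradient(phase) / (2 * np.pi * sample_rate)  # inst. frequency in Hz

    # Per-sample feature matrix: rows are samples, columns are attributes.
    # A "shallow" learner (SOM, k-means, etc.) now only has to relate these
    # few engineered columns to the property of interest.
    features = np.column_stack([envelope, np.cos(phase), frequency])
    print(features.shape)    # (1000, 3)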

New Functionalities

In terms of particular interpretation functionalities, three specific concepts are being developed around machine learning capabilities:

  • Evaluating multiple seismic attributes simultaneously using self-organizing maps (multiattribute analysis);
  • Relating in multidimensional space natural clusters or groupings of attributes that represent geologic information embedded in the data; and
  • Graphically representing the clustered information as geobodies to quantify the relative contributions of each attribute in a given seismic volume in a form that is intrinsic to geoscientific workflows.

A 3-D seismic volume contains numerous attributes, each a mathematical construct representing a class of data. An individual class of data can be any measurable property that is used to identify geologic features, such as rock brittleness, total organic carbon or formation layering. Supported by machine learning and neural networks, multiattribute technology enhances the geoscientist’s ability to quickly investigate large data volumes and delineate anomalies for further analysis, locate fracture trends and sweet spots in shale plays, identify geologic and stratigraphic features, map subtle changes in facies at or even below conventional seismic resolution, and more. The key breakthrough is that the new technology works on machine learning analysis of multiattribute seismic samples.

While applied exclusively to seismic data at present, there are many types of attributes contained within geologic, petrophysical and engineering datasets. In fact, any type of data that can be put into rows and columns on a spreadsheet is applicable to multiattribute analysis. Eventually, multiattribute analysis will incorporate information from different disciplines and allow all of it to be investigated within the same multidimensional space. That leads to the second concept: using machine learning to organize and evaluate natural clusters of attribute classes. If an interpreter is analyzing eight attributes, each seismic sample becomes a point in an eight-dimensional space, and the samples can be grouped into natural clusters that populate that space, as sketched below.

The third component is delivering the information found in the clusters in high-dimensionality space in a form that quantifies the relative contribution of each attribute to the class of data, such as simple geobodies displayed with a 2-D color index map. This approach allows multiple attributes to be mapped over large areas to obtain a much more complete picture of the subsurface, and it has demonstrated the ability to achieve resolution below conventional seismic tuning thickness. For example, in an application in the Eagle Ford Shale in South Texas, multiattribute analysis was able to match 24 classes of attributes within a 150-foot vertical section across 200 square miles of a 3-D survey. Using these results, a stratigraphic diagram of the seismic facies was developed over the entire survey area to improve geologic predictions between boreholes and, ultimately, to correlate seismic facies with rock properties measured at the boreholes. Importantly, the mathematical foundation now exists to demonstrate how the different attributes relate to one another and how they tie with pixel components in geobody form using machine learning. Understanding how the attribute data mathematically relate to one another and to geological properties gives geoscientists confidence in the interpretation results.
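Here is a minimal sketch of that clustering step using the open-source MiniSom package (an assumption; the article does not name a library). Eight standardized attribute columns are fed to a small 2-D SOM, and each sample’s winning neuron becomes its class, whose (row, col) position can index a 2-D color map for display. The array names, map dimensions and training settings are illustrative.

    import numpy as np
    from minisom import MiniSom   # pip install minisom

    rng = np.random.default_rng(0)

    # Stand-in for real data: 10,000 seismic samples x 8 attributes,
    # standardized so no single attribute dominates the distance metric.
    samples = rng.normal(size=(10_000, 8))
    samples = (samples - samples.mean(axis=0)) / samples.std(axis=0)

    # An 8x8 map of neurons; each neuron is a prototype vector in 8-D
    # attribute space that migrates toward a natural cluster during training.
    som = MiniSom(8, 8, input_len=8, sigma=1.5, learning_rate=0.5, random_seed=0)
    som.random_weights_init(samples)
    som.train_random(samples, num_iteration=5_000)

    # Classify: each sample's winning neuron (row, col) doubles as an
    # index into a 2-D color map for display back in the seismic volume.
    classes = np.array([som.winner(s) for s in samples])
    print(classes[:5])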

Leveraging Integration

The term “exploration geophysics” is becoming almost a misnomer in North America, given the focus on unconventional reservoirs and the way seismic methods are being used in these plays to develop rather than find reservoirs. With seismic reflection data being applied across the board in a variety of ways and at different resolutions in unconventional development programs, operators are combining 3-D seismic with data from other disciplines into a single integrated subsurface model. Fully leveraging the new sets of statistical and analytical tools to make better predictions from integrated multidisciplinary datasets is crucial to reducing drilling and completion risk and improving operational decision making. Multidimensional classifiers and attribute selection lists built with principal component analysis and independent component analysis can be used with geophysical, geological, engineering, petrophysical and other attributes to create general-purpose multidisciplinary tools of benefit to every oil and gas company department and discipline, as sketched in the example below.

As noted, the integrated models used in resource plays increasingly are based on statistics, so any evaluation used to develop those models also needs to be statistical. In the future, a basic part of conducting a successful analysis will be the ability to understand statistical data and how the data can be organized to build more tightly integrated models. And if oil and gas companies require more integrated interpretations, it follows that interpreters will have to possess more integrated skills and knowledge. The geoscientist of tomorrow may need to be a multidisciplinary professional with the blended capabilities of a geologist, geophysicist, engineer and applied statistician.

But whether a geoscientist is exploring, appraising or developing reservoirs, he or she can only be as good as the prediction of the final model. By applying technologies such as machine learning and multiattribute analysis during the workup, interpreters can use their creative energies to extract more knowledge from their data and make more knowledgeable predictions about undrilled locations.
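As a hedged sketch of the PCA-based attribute-selection idea mentioned above, the snippet below uses scikit-learn to rank a candidate attribute list by how strongly each attribute loads on the principal components that carry most of the variance. The attribute names, the random stand-in data and the 60 percent variance threshold are illustrative assumptions, not values from the article.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Stand-in attribute table: rows are samples, columns are candidate
    # attributes from any discipline (seismic, petrophysical, engineering).
    names = ["envelope", "inst_freq", "coherence", "curvature",
             "impedance", "thickness", "brittleness", "toc"]
    rng = np.random.default_rng(1)
    X = StandardScaler().fit_transform(rng.normal(size=(5000, len(names))))

    pca = PCA().fit(X)

    # Keep the leading components that together explain ~60% of the
    # variance (an illustrative threshold, not a rule from the article).
    n_keep = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.60)) + 1

    # Score each attribute by its absolute loadings on those components,
    # weighted by how much variance each component explains.
    weights = pca.explained_variance_ratio_[:n_keep]
    scores = np.abs(pca.components_[:n_keep]).T @ weights
    for name, score in sorted(zip(names, scores), key=lambda p: -p[1]):
        print(f"{name:12s} {score:.3f}")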

THOMAS A. SMITH is president and chief executive officer of Geophysical Insights, which he founded in 2008 to develop machine learning processes for multiattribute seismic analysis. Smith founded Seismic Micro-Technology in 1984, focused on personal computer-based seismic interpretation. He began his career in 1971 as a processing geophysicist at Chevron Geophysical. Smith is a recipient of the Society of Exploration Geophysicists’ Enterprise Award, Iowa State University’s Distinguished Alumni Award and the University of Houston’s Distinguished Alumni Award for Natural Sciences and Mathematics. He holds a B.S. and an M.S. in geology from Iowa State, and a Ph.D. in geophysics from the University of Houston.
KURT J. MARFURT is the Frank and Henrietta Schultz Chair and Professor of Geophysics in the ConocoPhillips School of Geology & Geophysics at the University of Oklahoma. He has devoted his career to seismic processing, seismic interpretation and reservoir characterization, including attribute analysis, multicomponent 3-D, coherence and spectral decomposition. Marfurt began his career at Amoco in 1981. After 18 years of service in geophysical research, he became director of the University of Houston’s Center for Applied Geosciences & Energy. He joined the University of Oklahoma in 2007. Marfurt holds an M.S. and a Ph.D. in applied geophysics from Columbia University.