Geologic Pattern Recognition from Seismic Attributes: Principal Component Analysis and Self-Organizing Maps

By Rocky Roden, Thomas A. Smith, and Deborah Sacrey | Published with permission: Interpretation Journal | November 2015

Abstract

Interpretation of seismic reflection data routinely involves powerful multiple-central-processing-unit computers, advanced visualization techniques, and generation of numerous seismic data types and attributes. Even with these technologies at the disposal of interpreters, there are additional techniques to derive even more useful information from our data. Over the last few years, there have been efforts to distill numerous seismic attributes into volumes that are easily evaluated for their geologic significance and improved seismic interpretation. Seismic attributes are any measurable property of seismic data. Commonly used categories of seismic attributes include instantaneous, geometric, amplitude accentuating, amplitude variation with offset, spectral decomposition, and inversion. Principal Component Analysis (PCA), a linear quantitative technique, has proven to be an excellent approach for use in understanding which seismic attributes or combination of seismic attributes has interpretive significance. The PCA reduces a large set of seismic attributes to indicate variations in the data, which often relate to geologic features of interest. PCA, as a tool used in an interpretation workflow, can help to determine meaningful seismic attributes. In turn, these attributes are input to self-organizing-map (SOM) training. The SOM, a form of unsupervised neural network, has proven to take many of these seismic attributes and produce meaningful and easily interpretable results. SOM analysis reveals the natural clustering and patterns in data and has been beneficial in defining stratigraphy, seismic facies, direct hydrocarbon indicator features, and aspects of shale plays, such as fault/fracture trends and sweet spots. With modern visualization capabilities and the application of 2D color maps, SOM routinely identifies meaningful geologic patterns.
Recent work using SOM and PCA has revealed geologic features that were not previously identified or easily interpreted from the seismic data. The ultimate goal in this multiattribute analysis is to enable the geoscientist to produce a more accurate interpretation and reduce exploration and development risk.

 


Introduction

The object of seismic interpretation is to extract all the geologic information possible from the data as it relates to structure, stratigraphy, rock properties, and perhaps reservoir fluid changes in space and time (Liner, 1999). Over the past two decades, the industry has seen significant advancements in interpretation capabilities, strongly driven by increased computer power and associated visualization technology. Advanced picking and tracking algorithms for horizons and faults, integration of prestack and poststack seismic data, detailed mapping capabilities, integration of well data, development of geologic models, seismic analysis and fluid modeling, and generation of seismic attributes are all part of the seismic interpreter’s toolkit. What is the next advancement in seismic interpretation? A significant issue in today’s interpretation environment is the enormous amount of data that is used and generated in and for our workstations. Seismic gathers, regional 3D surveys with numerous processing versions, large populations of wells and associated data, and dozens if not hundreds of seismic attributes routinely produce quantities of data measured in terabytes. Making meaningful interpretations from these huge projects can be difficult and at times quite inefficient. Is the next step in the advancement of interpretation the ability to interpret large quantities of seismic data more effectively and potentially derive more meaningful information from the data?

For example, is there a more efficient methodology to analyze prestack data whether interpreting gathers, offset/angle stacks, or amplitude variation with offset (AVO) attributes? Can the numerous volumes of data produced by spectral decomposition be efficiently analyzed to determine which frequencies contain the most meaningful information? Is it possible to derive more geologic information from the numerous seismic attributes generated by interpreters by evaluating numerous attributes all at once and not each one individually? This paper describes the methodologies to analyze combinations of seismic attributes of any kind for meaningful patterns that correspond to geologic features. Principal component analysis (PCA) and self-organizing maps (SOMs) provide multiattribute analyses that have proven to be an excellent pattern recognition approach in the seismic interpretation workflow. A seismic attribute is any measurable property of seismic data, such as amplitude, dip, phase, frequency, and polarity that can be measured at one instant in time/depth over a time/depth window, on a single trace, on a set of traces, or on a surface interpreted from the seismic data (Schlumberger Oilfield Glossary, 2015). Seismic attributes reveal features, relationships, and patterns in the seismic data that otherwise might not be noticed (Chopra and Marfurt, 2007). Therefore, it is only logical to deduce that a multiattribute approach with the proper input parameters can produce even more meaningful results and help to reduce risk in prospects and projects.

Evolution of seismic attributes

Balch (1971) and Anstey at Seiscom-Delta in the early 1970s are credited with producing some of the first generation of seismic attributes and stimulated the industry to rethink standard methodology when these results were presented in color. Further development was advanced with the publications by Taner and Sheriff (1977) and Taner et al. (1979), who presented complex trace attributes to display aspects of seismic data in color not seen before, at least in the interpretation community. The primary complex trace attributes, including reflection strength/envelope, instantaneous phase, and instantaneous frequency, inspired several generations of new seismic attributes that evolved as our visualization and computer power improved. Since the 1970s, there has been an explosion of seismic attributes to such an extent that there is not a standard approach to categorize these attributes. Brown (1996) categorizes seismic attributes by time, amplitude, frequency, and attenuation in prestack and poststack modes. Chen and Sidney (1997) categorize seismic attributes by wave kinematics/dynamics and by reservoir features. Taner (2003) further categorizes seismic attributes by prestack and by poststack, which is further divided into instantaneous, physical, geometric, wavelet, reflective, and transmissive. Table 1 is a composite list of seismic attributes and associated categories routinely used in seismic interpretation today. There are of course many more seismic attributes and combinations of seismic attributes than listed in Table 1, but as Barnes (2006) suggests, if you do not know what an attribute means or is used for, discard it. Barnes (2006) prefers attributes with geologic or geophysical significance and avoids attributes with purely mathematical meaning.
In a similar vein, Kalkomey (1997) indicates that when correlating well control with seismic attributes, there is a high probability of spurious correlations if the number of well measurements is small or the number of independent seismic attributes considered is large. Therefore, the recommendation when the number of well measurements is small is that only those seismic attributes that have a physically justifiable relationship with the reservoir property be considered as candidates for predictors.

Seismic Attribute Categories and Corresponding Types and Interpretive Uses
Table 1. A composite list of seismic attributes and associated categories routinely used in seismic interpretation today.

In an effort to improve the interpretation of seismic attributes, interpreters began to coblend two and three attributes together to better visualize features of interest. Even the generation of attributes on attributes has been used. Abele and Roden (2012) describe an example of this in which dip of maximum similarity, a type of coherency, was generated for two spectral decomposition volumes (high and low frequency bands), which displayed high energy at certain frequencies in the Eagle Ford Shale interval of South Texas. The similarity results at the Eagle Ford from the high-frequency data showed more detail of fault and fracture trends than the similarity volume of the full-frequency data. Even the low-frequency similarity results displayed better regional trends than the original full-frequency data. From the evolution of ever more seismic attributes that multiply the information to interpret, we investigate PCA and self-organizing maps to derive more useful information from multiattribute data in the search for oil and gas.

Principal component analysis

PCA is a linear mathematical technique used to reduce a large set of seismic attributes to a small set that still contains most of the variation in the large set. In other words, PCA is a good approach to identify the combination of most meaningful seismic attributes generated from an original volume. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component (orthogonal to each preceding) accounts for as much of the remaining variability (Guo et al., 2009; Haykin, 2009). Given a set of seismic attributes generated from the same original volume, PCA can identify the attributes producing the largest variability in the data, suggesting these combinations of attributes will better identify specific geologic features of interest. Even though the first principal component represents the linear combination of attributes that best represents the variability of the bulk of the data, it may not identify specific features of interest to the interpreter.

The interpreter should also evaluate succeeding principal components because they may be associated with other important aspects of the data and geologic features not identified with the first principal component. In other words, PCA is a tool that, used in an interpretation workflow, can give direction to meaningful seismic attributes and improve interpretation results. It is logical, therefore, that a PCA evaluation may provide important information on appropriate seismic attributes to take into a SOM generation.

Natural clusters

Several challenges and potential benefits of multiple attributes for interpretation are illustrated in Figure 1. Geologic features are revealed through attributes as coherent energy. When there is more than one attribute, we call these centers of coherent energy natural clusters. Identification and isolation of natural clusters are important parts of interpretation when working with multiple attributes. In Figure 1, we illustrate natural clusters in blue with 1000 samples of a Gaussian distribution in 2D. Figure 1a shows two natural clusters that project to either attribute scale (horizontal or vertical), so they would both be clearly visible in the seismic data, given a sufficient signal-to-noise ratio. Figure 1b shows four natural clusters that project to both attribute axes, but in this case, the horizontal attribute would see three clusters and would separate the left and right natural clusters clearly. The six natural clusters would be difficult to isolate in Figure 1c, and even the two natural clusters in Figure 1d, while clearly separated, have a special challenge. In Figure 1d, the right natural cluster is clearly separated and details could be resolved by attribute value. However, the left natural cluster would be separated by the attribute on the vertical axis, and resolution cannot be as accurate because the higher values are obscured by projection from the right natural cluster. For each of the 2D cluster examples in Figure 1a–1d, PCA has determined the direction of largest variation, labeled one on the red lines; this is the first principal component. The second principal component is labeled two on the red lines.

Figure 1. Natural clusters are illustrated in four situations with sets of 1000-sample Gaussian distributions shown in blue. From PCA, the principal components are drawn in red from an origin marked at the mean data values in x and y. The first principal components point in the direction of maximum variability. The first eigenvalues, as variance in these directions, are (a) 4.72, (b) 4.90, (c) 4.70, and (d) 0.47. The second principal components are perpendicular to the first and have respective eigenvalues of (a) 0.22, (b) 1.34, (c) 2.35, and (d) 0.26.
Overview of principal component analysis

We illustrate PCA in 2D with the natural clusters of Figure 1, although the concepts extend to any dimension. Each dimension counts as an attribute. We first consider the two natural clusters in Figure 1a and find the centroid (average x and y points). We draw an axis through the centroid at some angle and project all the attribute points onto the axis. Different angles result in different distributions of points on the axis. For each angle, the projected distances from the centroid have a variance. PCA finds the angle that maximizes that variance (labeled one on the red line). This direction is the first principal component, and it is called the first eigenvector.

The value of the variance is the first eigenvalue. Interestingly, there is an error for each data point that projects on the axis, and mathematically, the first principal component is also a least-squares fit to the data. If we subtract the least-squares fit, reducing the dimension by one, the second principal component (labeled two on red line) and second eigenvalue fit the residual. In Figure 1a, the first principal component passes through the centers of the natural clusters and the projection distribution is spread along the axis (the eigenvalue is 4.72). The first principal component is nearly equal parts of attribute one (50%) and attribute two (50%) because it lies nearly along a 45° line. We judge that both attributes are important and worth consideration.

The second principal component is perpendicular to the first, and the projections are restricted (eigenvalue is 0.22). Because the second principal component is so much smaller than the first (5%), we discard it. It is clear that the PCA alone reveals the importance of both attributes. In Figure 1b, the first principal component eigenvector is nearly horizontal. The x-component is 0.978, and the y-component is 0.208, so the first principal eigenvector is composed of 82% attribute one (horizontal axis) and 18% attribute two (vertical axis). The second eigenvalue is 27% of the first and is significant. In Figure 1c and 1d, the first principal components are also nearly horizontal, with component mixes of 81%, 19% and 86%, 14%, respectively. The second eigenvalues were 50% and 55% of the first, respectively. We demonstrate PCA on these natural cluster models to illustrate that it is a valuable tool to evaluate the relative importance of attributes, although our data typically have many more natural clusters than attributes, and we must resort to automatic tools, such as SOM, to hunt for natural clusters after a suitable suite of attributes has been selected.
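The behavior illustrated in Figure 1 can be reproduced with a small numerical sketch. The cluster positions, spreads, and sample counts below are illustrative assumptions, not the values behind the figure; PCA is computed by diagonalizing the covariance matrix of the centered samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic natural clusters in a 2D attribute space, loosely
# mimicking Figure 1a (locations and spreads are illustrative only).
cluster_a = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(500, 2))
cluster_b = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(500, 2))
samples = np.vstack([cluster_a, cluster_b])

# Center on the centroid, then diagonalize the covariance matrix.
centered = samples - samples.mean(axis=0)
cov = centered.T @ centered / len(centered)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]        # re-sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The first principal component points along the cluster separation
# (near 45 degrees), so both attributes contribute roughly equally.
first_pc = eigvecs[:, 0]
print(eigvals[0] > eigvals[1])                      # True: PC1 dominates
print(abs(abs(first_pc[0]) - abs(first_pc[1])) < 0.1)  # True: ~equal mix
```

The eigenvalue ratio plays the same role as in the figure caption: a second eigenvalue that is a small fraction of the first suggests one dominant direction of variability.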

Survey and attribute spaces

For this discussion, seismic data are represented by a 3D seismic survey data volume regularly sampled in location (x, y) and in time t or in depth z. A 2D seismic line is treated as a 3D survey of one line. Each survey is represented by several attributes, f1, f2, …, fF. For example, the attributes might include the amplitude, Hilbert transform, envelope, phase, frequency, etc. As such, an individual sample x is represented in bold as an attribute vector of F dimensions in survey space:
$$\mathbf{x}_{cde} = [x_{cde1},\, x_{cde2},\, \ldots,\, x_{cdeF}]^{T} \in X = \{\mathbf{x}_{cde}\}, \tag{1}$$
where ∈ reads “is a member of” and {..} is a set. Indices c, d, e, and f are indices of time or depth, trace, line number, and attribute, respectively. A sample drawn from the survey space with c, d, and e indices is a vector of attributes in a space R^F. It is important to note for later use that x does not change position in attribute space. The samples in a 3D survey drawn from X may lie in a fixed time or depth interval, a fixed interval offset from a horizon, or a variable interval between a pair of horizons. Moreover, samples may be restricted to a specific geographic area.

Normalization and covariance matrix

PCA starts with computation of the covariance matrix, which estimates the statistical correlation between pairs of attributes. Because the number range of an attribute is unconstrained, the mean and standard deviation of each attribute are computed and corresponding attribute samples are normalized by these two constants. In statistical calculations, these normalized attribute values are known as standard scores or Z scores. We note that the mean and standard deviation are often associated with a Gaussian distribution, but here we make no such assumption because it does not underlie all attributes. The phase attribute, for example, is often uniformly distributed across all angles, and envelopes are often lognormally distributed across amplitude values. However, standardization is a way to assure that all attributes are treated equally in the analyses to follow. Letting T stand for transpose, a multi-attribute sample on a single time or depth trace is represented as a column vector of F attributes:
$$\mathbf{x}_i = [x_{i1},\, x_{i2},\, \ldots,\, x_{iF}]^{T}, \quad i = 1 \text{ to } I, \tag{2}$$
where each component is a standardized attribute value and where the selected samples in the PCA analysis range from i = 1 to I. The covariance matrix is estimated by summing products of pairs of standardized attribute values over all I samples selected for the PCA analysis. That is, the covariance matrix is

$$\mathbf{C} = \frac{1}{I} \sum_{i=1}^{I} \mathbf{x}_i\, \mathbf{x}_i^{T}, \tag{3}$$

where each term is the outer product of a standardized sample with itself. The covariance matrix is symmetric, positive semidefinite, and of dimension F × F.
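A minimal sketch of the standardization and covariance estimate just described, using synthetic attribute values with deliberately mixed ranges in place of real survey data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical survey: I samples, each an F-dimensional attribute vector,
# with very different numeric ranges per attribute (as in real attributes).
I, F = 10_000, 5
raw = rng.normal(size=(I, F)) * [1.0, 50.0, 3.0, 0.1, 10.0]

# Standard (Z) scores: subtract each attribute's mean, divide by its std,
# so every attribute is treated equally in the analysis that follows.
z = (raw - raw.mean(axis=0)) / raw.std(axis=0)

# Covariance estimate: average of outer products over all I samples.
cov = z.T @ z / I                                 # F x F

print(cov.shape)                                  # (5, 5)
print(np.allclose(cov, cov.T))                    # True: symmetric
print(np.all(np.linalg.eigvalsh(cov) >= -1e-10))  # True: positive semidefinite
```

Because the inputs are Z scores, the diagonal of the covariance matrix is 1, and the off-diagonal entries are attribute-pair correlations.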

Eigenvalues and eigenvectors

The PCA proceeds by computing the set of eigenvalues and eigenvectors for the covariance matrix. That is, for the covariance matrix, there are a set of F eigenvalues λ and eigenvectors v, which satisfy
$$\mathbf{C}\,\mathbf{v} = \lambda\,\mathbf{v}. \tag{4}$$

This is a well-posed problem for which there are many stable numerical algorithms. For small F (≤ 10), the Jacobi method of diagonalization is convenient. For larger matrices, Householder transforms are used to reduce the matrix to tridiagonal form, followed by QR/QL deflation. Note that an eigenvector and eigenvalue form a matched pair. Eigenvectors are all orthogonal to each other and orthonormal when they are each of unit length. Mathematically, vi · vj = 0 for i ≠ j and vi · vi = 1. The algorithms mentioned above compute orthonormal eigenvectors.

The list of eigenvalues is inspected to better understand how many attributes are important. The eigenvalue list sorted in decreasing order will be called the eigen-spectrum. We adopt the notation that the pair of eigenvalue and eigenvector with the largest eigenvalue is {λ1, v1} and that the pair with the smallest eigenvalue is {λF, vF}. A plot of the eigen-spectrum is drawn with a horizontal axis numbered one through F from left to right and a vertical axis of increasing eigenvalue. For a multi-attribute seismic survey, a plot of the corresponding eigen-spectrum is often shaped like a decreasing exponential function (see Figure 3). The point where the eigen-spectrum generally flattens is particularly important. To the right of this point, additional eigenvalues are insignificant. Inspection of the eigen-spectrum constitutes the first and often the most important step in PCA (Figure 3b and 3c).
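The eigen-spectrum inspection can be sketched numerically. In this illustration the data are synthetic, and the 5% cut-off for the "flattening point" is an assumption for demonstration, not a rule from this paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic standardized attributes: 8 attributes, of which the last
# four carry almost no variance (noise-like).
a = rng.normal(size=(1000, 8))
a[:, 4:] *= 0.05
cov = np.cov(a, rowvar=False)

# eigh is appropriate for symmetric matrices; it returns ascending order.
eigvals = np.linalg.eigvalsh(cov)
spectrum = eigvals[::-1]                  # eigen-spectrum, descending

# Crude flattening-point estimate: count eigenvalues above a small
# fraction of the largest one (threshold choice is illustrative).
significant = int(np.sum(spectrum > 0.05 * spectrum[0]))
print(significant)                        # 4: the data occupy a smaller space
```

Here the eigen-spectrum flattens after four eigenvalues, mirroring the idea that samples nominally in R^F actually occupy a smaller space R^L.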

Unfortunately, eigenvalues reveal nothing about which attributes are important. On the other hand, simple identification of the number of attributes that are important is of considerable value. If L of F attributes are important, then F − L attributes are unimportant. Now, in general, seismic samples lie in an attribute space R^F, but the PCA indicates that the data actually occupy a smaller space R^L. The space R^(F−L) is just noise.

The second step is to inspect eigenvectors. We proceed by picking the eigenvector corresponding to the largest eigenvalue, {λ1, v1}. This eigenvector, as a linear combination of attributes, points in the direction of maximum variance. The coefficients of the attribute components reveal the relative importance of the attributes. For example, suppose that there are four attributes, of which two components are nearly zero and two are of equal value. We conclude that for this eigenvector, we can identify two attributes that are important and two that are not. We find that a review of the eigenvectors for the first few eigenvalues of the eigen-spectrum reveals those attributes that are important in understanding the data (Figure 3b and 3c). Often, the attributes of importance found in this second step match the number of significant attributes estimated in the first step.
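A toy illustration of this second step, with hypothetical attribute names and made-up eigenvector coefficients; using squared loadings as the relative-importance measure is our choice for the sketch, not a prescription from the paper:

```python
import numpy as np

# Hypothetical first eigenvector for four attributes (names illustrative).
attributes = ["envelope", "inst_phase", "inst_freq", "coherency"]
v1 = np.array([0.70, 0.01, 0.71, 0.02])   # two components nearly zero

# Squared loadings sum to ~1 for a unit eigenvector and give each
# attribute's share of the variance along this principal component.
weights = v1**2 / np.sum(v1**2)
important = [a for a, w in zip(attributes, weights) if w > 0.1]
print(important)                           # ['envelope', 'inst_freq']
```

Two attributes dominate the first principal component and two are negligible, matching the four-attribute example in the text.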

Self-organizing maps

The self-organizing map (SOM) is a data visualization technique invented in 1982 by Kohonen (2001). This nonlinear approach reduces the dimensions of data through the use of unsupervised neural networks. SOM attempts to solve the issue that humans cannot visualize high-dimensional data. In other words, we cannot understand the relationship between numerous types of data all at once. SOM reduces dimensions by producing a 2D map that plots the similarities of the data by grouping similar data items together. Therefore, SOM analysis reduces dimensions and displays similarities. SOM approaches have been used in numerous fields, such as finance, industrial control, speech analysis, and astronomy (Fraser and Dickson, 2007). Roy et al. (2013) describe how neural networks have been used since the late 1990s in the industry to resolve various geoscience interpretation problems. In seismic interpretation, SOM is an ideal approach to understand how numerous seismic attributes relate and to classify various patterns in the data. Seismic data contain huge amounts of data samples, and they are highly continuous, greatly redundant, and significantly noisy (Coleou et al., 2003). The tremendous amount of samples from numerous seismic attributes exhibits significant organizational structure in the midst of noise. SOM analysis identifies these natural organizational structures in the form of clusters. These clusters reveal important information about the classification structure of natural groups that are difficult to view any other way. Dimensionality reduction properties of SOM are well known (Haykin, 2009). These natural groups and patterns in the data identified by the SOM analysis routinely reveal geologic features important in the interpretation process.

Overview of self-organizing maps

In a time or depth seismic survey, samples are first organized into multi-attribute samples, so all attributes are analyzed together as points. If there are F attributes, there are F numbers in each multi-attribute sample. SOM is a nonlinear mathematical process that introduces several new, empty multi-attribute samples called neurons.

These SOM neurons will hunt for natural clusters of energy in the seismic data. The neurons discussed in this article form a 2D mesh that will be illuminated in the data with a 2D color map.

Figure 2. Offshore West Africa 2D seismic line processed by SOM analysis. An 8 × 8 mesh of neurons was trained on 13 instantaneous attributes with 100 epochs of unsupervised learning. A SOM of 64 neurons resulted. In the figure inset, each neuron is shown as a unique color in the 2D color map. After training, each multiattribute seismic sample was classified by finding the neuron closest to the sample by Euclidean distance. The color of that neuron is assigned to the seismic sample in the display. A great deal of geologic detail is evident in the classification by SOM neurons.

The SOM assigns initial values to the neurons; then, for each multi-attribute sample, it finds the neuron closest to that sample by the Euclidean distance and advances it toward the sample by a small amount. Other neurons nearby in the mesh are also advanced. This process is repeated for each sample in the training set, thus completing one epoch of SOM learning. The extent of neuron movement during an epoch is an indicator of the level of SOM learning during that epoch. If an additional epoch is worthwhile, adjustments are made to the learning parameters and the next epoch is undertaken. When learning becomes insignificant, the process is complete. Figure 2 presents a portion of the results of SOM learning on a 2D seismic line offshore West Africa. For these results, the 8 × 8 mesh of neurons is hexagonally connected, so each interior neuron has six adjacent touching neurons. Thirteen single-trace (instantaneous) attributes were selected for this analysis, so there was no communication between traces. These early results demonstrated that SOM learning was able to identify a great many geologic features. The 2D color map identifies different neurons with shades of green, blue, red, and yellow. The advantage of a 2D color map is that neurons that are adjacent to each other in the SOM analysis have similar shades of color. The figure reveals water-bottom reflections, shelf-edge peak and trough reflections, unconformities, onlaps/offlaps, and normal faults. These features are readily apparent on the SOM classification section, where amplitude is only one of 13 attributes used. Therefore, a SOM evaluation can incorporate several appropriate types of seismic attributes to define geology not easily interpreted from conventional seismic amplitude displays alone.
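The training loop just described can be sketched compactly with NumPy on synthetic samples. For simplicity this sketch uses a rectangular 8 × 8 mesh with Gaussian neighborhood weights rather than the hexagonally connected mesh used in the paper, and the epoch count and learning-control values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for multi-attribute seismic samples: I samples, F attributes.
I, F = 2000, 4
samples = rng.normal(size=(I, F))

# An 8 x 8 mesh of neurons; each neuron is an F-vector in attribute space.
rows, cols = 8, 8
neurons = rng.normal(size=(rows * cols, F)) * 0.1
grid = np.array([[r, c] for r in range(rows) for c in range(cols)], float)

epochs, eta0, sigma0 = 20, 0.5, 3.0
for n in range(epochs):
    eta = eta0 * np.exp(-n / epochs)       # decaying learning rate
    sigma = sigma0 * np.exp(-n / epochs)   # shrinking neighborhood
    for x in samples:
        # Winning neuron: nearest in Euclidean distance.
        k = np.argmin(np.sum((neurons - x) ** 2, axis=1))
        # Neighborhood weights from distance in the neuron mesh.
        d2 = np.sum((grid - grid[k]) ** 2, axis=1)
        h = np.exp(-d2 / (2.0 * sigma**2))
        # Pull the winner and its mesh neighbors toward the sample.
        neurons += eta * h[:, None] * (x - neurons)

# Classify: each sample takes the index of its nearest trained neuron,
# which would then be displayed through a 2D color map.
classes = np.array([np.argmin(np.sum((neurons - x) ** 2, axis=1))
                    for x in samples])
print(classes.shape)            # (2000,): one neuron index per sample
```

In a real workflow the final indices are mapped to the 2D color map, so samples drawn to nearby neurons receive similar colors.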

Self-organizing map neurons

Mathematically, a SOM neuron (loosely following notation by Haykin, 2009) lies in attribute space alongside the normalized data samples, which together lie in R^F. Therefore, a neuron is also an F-dimensional column vector, noted here as w in bold. Neurons learn or adapt to the attribute data, but they also learn from each other. A neuron w lies in a mesh, which may be 1D, 2D, or 3D, and the connections between neurons are also specified. The neuron mesh is a topology of neuron connections in a neuron space. At this point in the discussion, the topology is unspecified, so we use a single subscript j as a place marker for counting neurons just as we use a single subscript i to count selected multi-attribute samples for SOM analysis.
$$\mathbf{w}_j = [w_{j1},\, w_{j2},\, \ldots,\, w_{jF}]^{T}, \quad j = 1 \text{ to } J. \tag{5}$$
A neuron learns by adjusting its position within attribute space as it is drawn toward nearby data samples. In general, the problem is to discover and identify an unknown number of natural clusters distributed in attribute space given the following: I data samples in survey space, F attributes in attribute space, and J neurons in neuron space. We are justified in searching for natural clusters because they are the multi-attribute seismic expression of seismic reflections, seismic facies, faults, and other geobodies, which we recognize in our data as geologic features. For example, faults are often identified by the single attribute of coherency. Flat spots are identified because they are flat. The AVO anomalies are identified as classes 1, 2, 2p, 3, or 4 by classifying several AVO attributes.

The number of multi-attribute samples is often in the millions, whereas the number of neurons is often in the dozens or hundreds. That is, the number of neurons J ≪ I. Were it not so, detection of natural clusters in attribute space would be hopeless. The task of a neural network is to assist us in our understanding of the data.

Neuron topology was first inspired by observation of the brain and other neural tissue. We present here results based on a neuron topology W that is 2D, so W lies in R^2. Results shown in this paper have neuron meshes that are hexagonally connected (six adjacent points of contact).

Self-organizing-map learning

We now turn from a survey space perspective to an operational one to consider SOM learning as a process. SOM learning takes place during a series of time steps, but these time steps are not the familiar time steps of a seismic trace. Rather, these are time steps of learning. During SOM learning, neurons advance toward multi-attribute data samples, thus reducing error and thereby advancing learning. A SOM neuron adjusts itself by the following recursion:
$$\mathbf{w}_j(n+1) = \mathbf{w}_j(n) + \eta(n)\, h(n)\, [\mathbf{x}_i - \mathbf{w}_j(n)], \tag{6}$$

where wj(n + 1) is the attribute position of neuron j at time step n + 1. The recursion proceeds from time step n to n + 1. The update is in the direction toward xi along the “error” direction xi − wj(n). This is the direction that pulls the winning neuron toward the data sample. The amount of displacement is controlled by the learning controls, η and h, which are discussed below.

Equation 6 depends on w and x, so either select an x, and then use some strategy to select w, or vice versa. We elect to have all x participate in training, so we select x and use the following strategy to select w. The neuron nearest xi is the one for which the squared Euclidean distance,
$$d^2(\mathbf{x}_i, \mathbf{w}_j) = \|\mathbf{x}_i - \mathbf{w}_j\|^2 = \sum_{f=1}^{F} (x_{if} - w_{jf})^2, \tag{7}$$
is the smallest of all wj . This neuron is called the winning neuron, and this selection rule is central to a competitive learning strategy, which will be discussed shortly. For data sample xi, the resulting winning neuron will have subscript j, which results from scanning over all neurons with free subscript s under the minimum condition noted as
$$\min_{s}\; d^2(\mathbf{x}_i, \mathbf{w}_s), \quad s = 1 \text{ to } J. \tag{8}$$

Now, the winning neuron for xi is found from

$$\mathbf{w}_j = \left\{ \mathbf{w}_s \,\middle|\, \min_{s}\, d^2(\mathbf{x}_i, \mathbf{w}_s),\ \forall\, s = 1 \text{ to } J \right\}, \tag{9}$$
where the bar | reads “given” and the inverted A ∀ reads “for all.” That is, the winning neuron for data sample xi is wj. We observe that for every data sample, there is a winning neuron. One complete pass through all data samples {xi | i = 1 to I} is called one epoch. One epoch completes a single time step of learning. We typically exercise 60–100 epochs of training. It is noted that “epoch” as a unit of machine learning shares a sense of time with a geologic epoch, which is a division of time between periods and ages.
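The winning-neuron selection above amounts to an argmin over squared distances. A minimal sketch with made-up numbers for one sample and three neurons:

```python
import numpy as np

# One multi-attribute sample x_i and J = 3 neurons w_j in R^3.
x = np.array([0.2, -1.0, 0.5])
w = np.array([[0.0, 0.0, 0.0],
              [0.3, -0.9, 0.6],     # closest to x
              [2.0, 2.0, 2.0]])

# Squared Euclidean distances to every neuron; the winner is the argmin.
d2 = np.sum((w - x) ** 2, axis=1)
winner = int(np.argmin(d2))
print(winner)                        # 1
```

Applying this selection once to every sample in the training set constitutes one epoch.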

Returning to learning controls of equation 6, let the first term η change with time so as to be an adjustable learning rate control. We choose
$$\eta(n) = \eta_0 \exp(-n/\tau_2), \tag{10}$$
with η0 as some initial learning rate and τ2 as a learning decay factor. As time progresses, the learning control in equation 10 diminishes. As a result, neurons that move large distances during early time steps move smaller distances in later time steps. The second term h of equation 6 is a little more complicated and calls into action the neuron topology W. Here, h is called the neighborhood function because it adjusts not only the winning neuron wj but also other neurons in the neighborhood of wj. Now, the neuron topology W is 2D, and the neighborhood function is given by
$$h_{j,k}(n) = \exp\!\left(-\frac{d_{j,k}^2}{2\,\sigma^2(n)}\right), \tag{11}$$
where d²(j,k) = ‖rj − rk‖² for a neuron at rj and the winning neuron at rk in the neuron topology. Distance d in equation 11 represents the distance between a winning neuron and another nearby neuron in neuron topology W. The neighborhood function in equation 11 depends on the distance between a neuron and the winning neuron and also on time. The time-varying part of equation 11 is defined as
$$\sigma(n) = \sigma_0 \exp(-n/\tau_1), \tag{12}$$
where σ0 is the initial neighborhood distance and τ1 is the neighborhood decay factor. As σ decreases with time, h decreases with time. We define an edge of the neighborhood as the distance beyond which the neuron weight is negligible and treated as zero. In 2D neuron topologies, the neighborhood edge is defined by a radius. Let this cut-off distance depend on a free constant ζ. In equation 11, we set h = ζ and solve for dmax as
Equation 13: dmax = σ(t) √(2 ln(1/ζ))
The neighborhood edge distance dmax → 0 as σ(t) → 0. As time marches on, the neighborhood edge shrinks to zero, and continued processing steps of SOM are similar to K-means clustering (Bishop, 2007). Additional details of SOM learning are found in Appendix A on SOM analysis operation.
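The decaying learning rate, shrinking Gaussian neighborhood, and winner-take-most update can be wired together in a toy training loop. This is an illustrative sketch, not the authors' implementation: the synthetic data, the 5 × 5 topology, and the parameter values (η0, τ2, σ0, τ1, epoch count) are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-attribute data: I = 500 samples, F = 4 standardized attributes.
X = rng.standard_normal((500, 4))

# 5 x 5 neuron topology W; each neuron carries a weight vector in attribute space.
rows, cols = 5, 5
W = rng.standard_normal((rows, cols, 4))
# Grid positions r_j used for topology distances in the neighborhood function.
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"),
                axis=-1).astype(float)

eta0, tau2 = 0.1, 60.0    # assumed initial learning rate and decay factor (eq. 10)
sigma0, tau1 = 2.5, 40.0  # assumed initial neighborhood distance and decay (eq. 12)

for t in range(60):                      # one epoch per time step
    eta = eta0 * np.exp(-t / tau2)       # eq. 10: learning rate diminishes
    sigma = sigma0 * np.exp(-t / tau1)   # eq. 12: neighborhood shrinks
    for x in X:
        # Winning neuron: minimum Euclidean distance to the sample (eq. 9).
        d2 = np.sum((W - x) ** 2, axis=-1)
        k = np.unravel_index(np.argmin(d2), d2.shape)
        # eq. 11: Gaussian neighborhood around the winner in the 2D topology.
        topo2 = np.sum((grid - grid[k]) ** 2, axis=-1)
        h = np.exp(-topo2 / (2.0 * sigma ** 2))
        # eq. 6 update: move the winner and its neighbors toward the sample.
        W += eta * h[..., None] * (x - W)
```

After the loop, the neuron weights approximate the natural clusters of the samples; in practice a cut-off radius dmax (equation 13) would skip negligible neighborhood weights rather than update every neuron.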

Harvesting

Rather than apply the SOM learning process to a large time or depth window spanning an entire 3D survey, we sample a subset of the full complement of multi-attribute samples in a process called harvesting. Harvesting was first introduced by Taner et al. (2009) and is described in more detail by Smith and Taner (2010).

First, a representative set of harvest patches from the 3D survey is selected, and then on each of these patches, we conduct independent SOM training. Each harvest patch is one or more lines, and each SOM analysis yields a set of winning neurons. We then apply a harvest rule to select the set of winning neurons that best represent the full set of data samples of interest.

A variety of harvest rules has been investigated. We often choose a harvest rule based on best learning. Best learning selects the winning neuron set for the SOM training, in which there has been the largest proportional reduction of error between initial and final epochs on the data that were presented. The error is measured by summing distances as a measure of how near the winning neurons are to their respective data samples. The largest reduction in error is the indicator of best learning. Additional details on harvest sampling and harvest rules are found in Appendix A on SOM analysis operation.
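The best-learning harvest rule can be sketched as a comparison of proportional error reductions across patches. This is a hedged illustration, not the paper's code: the patch arrays, the `train_som` callable interface, and the use of summed winner distances as the error measure are assumptions standing in for the details given in Appendix A.

```python
import numpy as np

def total_error(W, X):
    """Sum over samples of the Euclidean distance to the nearest neuron.

    W: (J, F) candidate winning-neuron set; X: (I, F) data samples.
    """
    d = np.sqrt(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1))
    return d.min(axis=1).sum()

def best_learning(patches, train_som):
    """Pick the patch whose SOM shows the largest proportional error drop.

    patches: list of (I, F) sample arrays, one per harvest patch.
    train_som: callable returning (W_initial, W_final) for a patch; it is
               a stand-in for any SOM trainer.
    """
    best = None
    for X in patches:
        W0, W1 = train_som(X)
        e0, e1 = total_error(W0, X), total_error(W1, X)
        reduction = (e0 - e1) / e0          # proportional error reduction
        if best is None or reduction > best[0]:
            best = (reduction, W1)
    return best  # (largest reduction, its winning-neuron set)
```

The winning-neuron set returned here is the one carried forward to classify the full survey.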

Self-organizing-map classification and probabilities

Once natural clusters have been identified, it is a simple task to classify samples in survey space as members of a particular cluster. That is, once the learning process has completed, the winning neuron set is used to classify each selected multi-attribute sample in the survey. Each neuron in the winning neuron set (j = 1 to J) is tested with equation 9 against each selected sample in the survey (i = 1 to I). Each selected sample then has assigned to it a neuron that is nearest to that sample in Euclidean distance. The winning neuron index is assigned to that sample in the survey.

Every sample in the survey has associated with it a winning neuron separated by a Euclidean distance that is the square root of equation 7. After classification, we study the Euclidean distances to see how well the neurons fit. Although there are perhaps many millions of survey samples, there are far fewer neurons, so for each neuron, we collect its distribution of survey sample distances. Some samples near the neuron are a good fit, and some samples far from the neuron are a poor fit. We quantify the goodness-of-fit by distance variance as described in Appendix A. Certainly, the probability of a correct classification of a neuron to a data sample is higher when the distance is smaller than when it is larger. So, in addition to assigning a winning neuron index to a sample, we also assign a classification probability. The classification probability ranges from one to zero corresponding to distance separations of zero to infinity. Those areas in the survey where the classification probability is low correspond to areas where no neuron fits the data very well. In other words, anomalous regions in the survey are noted by low probability. Additional comments are found in Appendix A.
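Classification and the distance-based probability can be sketched as below. The nearest-neighbor assignment follows the description above; the exact probability formula is in the paper's Appendix A, so the Gaussian-on-distance mapping with a per-neuron distance variance used here is an assumed, illustrative choice that satisfies the stated behavior (probability one at zero separation, falling toward zero with distance).

```python
import numpy as np

def classify(W, X):
    """Assign each sample the index of its nearest winning neuron (eq. 9)."""
    d = np.sqrt(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1))  # (I, J)
    idx = d.argmin(axis=1)
    return idx, d[np.arange(len(X)), idx]

def classification_probability(W, X):
    """Map each sample's winner distance to a probability in (0, 1].

    Illustrative stand-in: a Gaussian on distance scaled by each neuron's
    own distance variance, so tight fits score near one and anomalous
    samples score near zero.
    """
    idx, dist = classify(W, X)
    prob = np.empty_like(dist)
    for j in range(len(W)):
        mask = idx == j
        if mask.any():
            var = np.mean(dist[mask] ** 2) + 1e-12  # per-neuron distance variance
            prob[mask] = np.exp(-dist[mask] ** 2 / (2.0 * var))
    return idx, prob
```

Low-probability regions flagged this way are the anomalous zones discussed in the case studies below.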

Case Studies – Offshore Gulf of Mexico

This case study is located offshore Louisiana in the Gulf of Mexico in a water depth of 143 m (470 ft). This shallow field (approximately 1188 m [3900 ft]) has two producing wells that were drilled on the upthrown side of an east–west-trending normal fault and into an amplitude anomaly identified on the available 3D seismic data. The normally pressured reservoir is approximately 30 m (100 ft) thick and located in a typical “bright-spot” setting, i.e., a Class 3 AVO geologic setting (Rutherford and Williams, 1989). The goal of the multi-attribute analysis is to more clearly identify possible DHI characteristics such as flat spots (hydrocarbon contacts) and attenuation effects to better understand the existing reservoir and provide important approaches to decrease risk for future exploration in the area.

Table 2. Instantaneous seismic attributes used in the PCA evaluation for the Gulf of Mexico case study.

Initially, 18 instantaneous seismic attributes were generated from the 3D data in this area (see Table 2). These seismic attributes were put into a PCA evaluation to determine which produced the largest variation in the data and the most meaningful attributes for SOM analysis. The PCA was computed in a window 20 ms above and 150 ms below the mapped top of the reservoir over the entire survey, which encompassed approximately 26 km2 (10 mi2). Figure 3a displays a chart with each bar representing the highest eigenvalue on its associated inline over the displayed portion of the survey. The bars in red in Figure 3a specifically denote the inlines that cover the areal extent of the amplitude feature, and the averages of their eigenvalue results are displayed in Figure 3b and 3c. Figure 3b displays the principal components from the selected inlines over the anomalous feature with the highest eigenvalue (first principal component), indicating the percentage contribution of each seismic attribute to this largest variation in the data. In this first principal component, the top seismic attributes include the envelope, envelope modulated phase, envelope second derivative, sweetness, and average energy, all of which account for more than 63% of the variance of all the instantaneous attributes in this PCA evaluation. Figure 3c displays the PCA results for the second highest eigenvalue, which produced a different set of seismic attributes. The top seismic attributes from the second principal component include instantaneous frequency, thin bed indicator, acceleration of phase, and dominant frequency, which total almost 70% of the variance of the 18 instantaneous seismic attributes analyzed. These results suggest that, when applied to an SOM analysis, the two sets of seismic attributes for the first and second principal components may help to define two different types of anomalous features or different characteristics of the same feature.
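The eigenvalue and attribute-contribution computation described above can be sketched as a standardized-covariance PCA. This is a generic illustration, not the software used in the study; the synthetic input in the usage below and the squared-loading definition of "contribution" are assumptions.

```python
import numpy as np

def pca_attribute_contributions(A):
    """PCA over a matrix of seismic-attribute samples.

    A: (n_samples, n_attributes). Returns the eigenvalues (variance
    explained, descending) and a matrix whose column p holds the percentage
    contribution of each attribute to principal component p, taken here as
    the squared loading (an assumed, common convention).
    """
    Z = (A - A.mean(axis=0)) / A.std(axis=0)  # standardize each attribute
    C = np.cov(Z, rowvar=False)               # attribute covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)      # eigh returns ascending order
    order = eigvals.argsort()[::-1]           # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    contrib = 100.0 * eigvecs ** 2            # columns are unit vectors
    return eigvals, contrib
```

Attributes with the largest entries in the first column of `contrib` are the natural candidates to carry into SOM training, mirroring the selection of the top contributing attributes from the first and second principal components above.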

Figure 3. Results from PCA using 18 instantaneous seismic attributes: (a) bar chart with each bar denoting the highest eigenvalue for its associated inline over the displayed portion of the seismic 3D volume; the red bars specifically represent the highest eigenvalues on the inlines over the field, (b) average of eigenvalues over the field (red) with the first principal component in orange and associated seismic attribute contributions to the right, and (c) second principal component over the field with the seismic attribute contributions to the right. The top five attributes in panel (b) were run in SOM A, and the top four attributes in panel (c) were run in SOM B.

The first SOM analysis (SOM A) incorporates the seismic attributes defined by the PCA with the highest variation in the data, i.e., the five highest percentage contributing attributes in Figure 3b. Several neuron counts for SOM analyses were run on the data with lower count matrices revealing broad, discrete features and the higher counts displaying more detail and less variation. The SOM results from a 5 × 5 matrix of neurons (25) were selected for this paper. The north–south line through the field in Figures 4 and 5 shows the original stacked amplitude data and classification results from the SOM analyses. In Figure 4b, the color map associated with the SOM classification results indicates all 25 neurons are displayed, and Figure 4c shows results with four interpreted neurons highlighted. Based on the location of the hydrocarbons determined from well control, it is interpreted from the SOM results that attenuation in the reservoir is very pronounced with this evaluation. As Figure 4b and 4c reveal, there is apparent absorption banding in the reservoir above the known hydrocarbon contacts defined by the wells in the field. This makes sense because the seismic attributes used are sensitive to relatively low-frequency broad variations in the seismic signal often associated with attenuation effects. This combination of seismic attributes used in the SOM analysis generates a more pronounced and clearer picture of attenuation in the reservoir than any one of the seismic attributes or the original amplitude volume individually. Downdip of the field is another undrilled anomaly that also reveals apparent attenuation effects.

Figure 4. SOM A results on the north–south inline through the field: (a) original stacked amplitude, (b) SOM results with associated 5 × 5 color map displaying all 25 neurons, and (c) SOM results with four neurons selected that isolate interpreted attenuation effects.

Figure 5. SOM B results on the same inline as Figure 4: (a) original stacked amplitude, (b) SOM results with associated 5 × 5 color map, and (c) SOM results with color map showing two neurons that highlight flat spots in the data. The hydrocarbon contacts (flat spots) in the field were confirmed by well control.

The second SOM evaluation (SOM B) includes the seismic attributes with the highest percentages from the second principal component based on the PCA (see Figure 3). It is important to note that these attributes are different from the attributes determined from the first principal component. With a 5 × 5 neuron matrix, Figure 5 shows the classification results from this SOM evaluation on the same north–south line as Figure 4, and it clearly identifies several hydrocarbon contacts in the form of flat spots. These hydrocarbon contacts in the field are confirmed by the well control. Figure 5b defines three apparent flat spots, which are further isolated in Figure 5c, which displays these features with two neurons. The gas/oil contact in the field was very difficult to see on the original seismic data, but it is well defined and mappable from this SOM analysis. The oil/water contact in the field is represented by a flat spot that defines the overall base of the hydrocarbon reservoir. Hints of this oil/water contact were interpreted from the original amplitude data, but the second SOM classification provides important information to clearly define the areal extent of the reservoir. Downdip of the field is another apparent flat spot event that is undrilled and is similar to the flat spots identified in the field. Based on SOM evaluations A and B in the field, which reveal similar known attenuation and flat spot results, respectively, there is a high probability this undrilled feature contains hydrocarbons.

Shallow Yegua trend in Gulf Coast of Texas

This case study is located in Lavaca County, Texas, and targets the Yegua Formation at approximately 1828 m (6000 ft). The initial well was drilled just downthrown on a small southwest–northeast regional fault, with a subsequent well drilled on the upthrown side. There were small stacked data amplitude anomalies on the available 3D seismic data at both well locations. The Yegua in the wells is approximately 5 m (18 ft) thick and is composed of thinly laminated sands. Porosities range from 24% to 30%, and the sands are normally pressured. Because of the thin laminations and often lower porosities, these anomalies exhibit a class 2 AVO response, with near-zero amplitudes on the near offsets and an increase in negative amplitude with offset (Rutherford and Williams, 1989). The goal of the multi-attribute analysis was to determine the full extent of the reservoir because both wells were performing much better than the size of the amplitude anomaly indicated from the stacked seismic data (Figure 6a and 6b). The first well drilled downthrown had a stacked data amplitude anomaly of approximately 70 acres, whereas the second well upthrown had an anomaly of about 34 acres.

Figure 6. Stacked data amplitude maps at the Yegua level denote: (a) interpreted outline of hydrocarbon distribution based on upthrown amplitude anomaly and (b) interpreted outline of hydrocarbons based on downthrown amplitude anomaly.

Table 3. The AVO seismic attributes computed and used for the SOM evaluation in the Yegua case study.

Figure 7. The SOM classification at the Yegua level denoting a larger area around the wells associated with gas drainage than indicated from the stacked amplitude response as seen in Figure 6. Also shown is the location of the arbitrary line displayed in Figure 8. The 1D color bar has been designed to highlight neurons 1 through 9 interpreted to indicate those neuron patterns which represent sand/reservoir extents.

The gathers that came with the seismic data had been conditioned and were used in creating very specific AVO volumes conducive to the identification of class 2 AVO anomalies in this geologic setting. In this case, the AVO attributes selected were based on the interpreter’s experience in this geologic setting.

Table 3 lists the AVO attributes and the attributes generated from the AVO attributes used in this SOM evaluation. The intercept and gradient volumes were created using the Shuey three-term approximation of the Zoeppritz equations (Shuey, 1985). The near-offset volume was produced from the 0°–15° offsets and the far-offset volume from the 31°–45° offsets. The attenuation, envelope bands on the envelope breaks, and envelope bands on the phase breaks seismic attributes were all calculated from the far-offset volume. For this SOM evaluation, an 8 × 8 matrix (64 neurons) was used.
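The intercept and gradient computation can be sketched as a per-sample least-squares fit of Shuey's three-term approximation. This is a schematic stand-in for the volume-scale processing actually used; the angle sampling and synthetic amplitudes in the usage are assumptions.

```python
import numpy as np

def shuey_intercept_gradient(theta_deg, amplitudes):
    """Least-squares fit of Shuey's three-term approximation (Shuey, 1985):

        R(theta) ~ A + B sin^2(theta) + C (tan^2(theta) - sin^2(theta)),

    returning intercept A, gradient B, and the far-angle term C for one
    time sample of an angle gather.
    """
    th = np.radians(np.asarray(theta_deg, dtype=float))
    s2 = np.sin(th) ** 2
    # Design matrix with one column per Shuey term.
    G = np.column_stack([np.ones_like(th), s2, np.tan(th) ** 2 - s2])
    coeffs, *_ = np.linalg.lstsq(G, np.asarray(amplitudes, dtype=float),
                                 rcond=None)
    return coeffs  # [A, B, C]
```

For a class 2 response like these Yegua sands, A is near zero and B is strongly negative, which is why intercept/gradient combinations rather than stacked amplitude expose the anomaly.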

Figure 7 displays the areal distribution of the SOM classification at the Yegua interval. The interpretation of this SOM classification is that the two areas outlined represent the hydrocarbon-producing portion of the reservoir and the connectivity of the sand feeding into the well bores. On the downthrown side of the fault, the drainage area has increased to approximately 280 acres, which supports the engineering and pressure data. The areal extent of the drainage area on the upthrown reservoir has increased to approximately 95 acres, again in agreement with the production data. It is apparent that the upthrown well is in the feeder channel, which deposited sand across the then-active fault and splays along the fault scarp.

In addition to the SOM classification, the anomalous character of these sands can be easily seen in the probability results from the SOM analysis (Figure 8). The probability is a measure of how far a data sample is from its identified cluster (see Appendix A). The low-probability zones denote the most anomalous areas determined from the SOM evaluation. The most anomalous areas typically will have the lowest probability, whereas the events that are present over most of the data, such as horizons, interfaces, etc., will have higher probabilities. Because the seismic attributes that went into this SOM analysis are AVO-related attributes that enhance DHI features, these low-probability zones are interpreted to be associated with the Yegua hydrocarbon-bearing sands.

Figure 8. Arbitrary line (location in Figure 7) showing low probability in the Yegua at each well location, indicative of anomalous results from the SOM evaluation. The colors represent probabilities with the wiggle trace in the background from the original stacked amplitude data.

Figure 9. The SOM classification results at a time slice show the base of the upthrown reservoir and the upper portion of the downthrown reservoir and denote: (a) full classification results defined by the associated 2D color map and (b) isolation of upthrown and downthrown reservoirs by specific neurons represented by the associated 2D color map.

Figure 9 displays the SOM classification results with a time slice located at the base of the upthrown reservoir and the upper portion of the downthrown reservoir. There is a slight dip component in the figure. Figure 9a reveals the total SOM classification results with all 64 neurons as indicated by the associated 2D color map. Figure 9b is the same time slice slightly rotated to the west with very specific neurons highlighted in the 2D color map defining the upthrown and downthrown fields. The advantage of SOM classification analyses is the ability to isolate specific neurons that highlight desired geologic features. In this case, the SOM classification of the AVO-related attributes was able to define the reservoirs drilled by each of the wells and provide a more accurate picture of their areal distributions than the stacked data amplitude information.

Eagle Ford Shale

This study is conducted using 3D seismic data from the Eagle Ford Shale resource play of south Texas. Understanding the existing fault and fracture patterns in the Eagle Ford Shale is critical to optimizing well locations, well plans, and fracture treatment design. To identify fracture trends, the industry routinely uses various seismic techniques, such as processing of seismic attributes, especially geometric attributes, to derive the maximum structural information from the data.

Geometric seismic attributes describe the spatial and temporal relationships of all other attributes (Taner, 2003). The two main categories of these multitrace attributes are coherency/similarity and curvature. The objective of coherency/similarity attributes is to enhance the visibility of the geometric characteristics of seismic data such as dip, azimuth, and continuity. Curvature is a measure of how bent or deformed a surface is at a particular point: the more deformed the surface, the greater the curvature. These characteristics measure the lateral relationships in the data and emphasize the continuity of events such as faults, fractures, and folds.
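As a sketch of the curvature idea, most-positive and most-negative curvature of a gridded horizon can be computed from a local quadratic fit in the manner of Roberts (2001). This is an illustrative stand-in, not the volumetric curvature engine used in the study; the 3 x 3 stencil, the grid spacing, and the test surface are assumptions.

```python
import numpy as np

def most_positive_negative_curvature(z, dx=1.0):
    """Most-positive/most-negative curvature of a gridded surface z(y, x).

    Fits z = a x^2 + b y^2 + c xy + d x + e y + f to each 3x3 neighborhood
    and evaluates k_pos = (a + b) + sqrt((a - b)^2 + c^2) and
    k_neg = (a + b) - sqrt((a - b)^2 + c^2).
    """
    # Design matrix for the 9 points of a 3x3 stencil centered at (0, 0).
    xs, ys = np.meshgrid([-dx, 0.0, dx], [-dx, 0.0, dx], indexing="xy")
    X = np.column_stack([xs.ravel() ** 2, ys.ravel() ** 2, (xs * ys).ravel(),
                         xs.ravel(), ys.ravel(), np.ones(9)])
    pinv = np.linalg.pinv(X)             # factor once, reuse at every node
    ny, nx = z.shape
    kpos = np.zeros((ny - 2, nx - 2))
    kneg = np.zeros_like(kpos)
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            a, b, c, *_ = pinv @ z[i - 1:i + 2, j - 1:j + 2].ravel()
            root = np.sqrt((a - b) ** 2 + c ** 2)
            kpos[i - 1, j - 1] = (a + b) + root  # most-positive curvature
            kneg[i - 1, j - 1] = (a + b) - root  # most-negative curvature
    return kpos, kneg
```

High positive curvature tracks anticlinal flexure over faults and folds; high negative curvature tracks synclinal flexure, which is why the curvature-most-positive and curvature-minimum attributes in Figure 10 light up the fault/fracture trends.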

Figure 10. Three geometric attributes at the top of the Eagle Ford Shale computed from (left) the full-frequency data and (right) the 24.2-Hz spectral decomposition volume with results from the (a) dip of maximum similarity, (b) curvature most positive, and (c) curvature minimum. The 1D color bar is common for each pair of outputs.

Table 4. Geometric attributes used in the Eagle Ford Shale SOM analysis.

Figure 11. SOM results from the top of the Eagle Ford Shale with associated 2D color map.

The goal of this case study is to define the fault and fracture patterns (regional stress fields) more accurately than had already been revealed by running geometric attributes over the existing stacked seismic data. Initially, 18 instantaneous attributes, 14 coherency/similarity attributes, 10 curvature attributes, and 40 frequency subband volumes of spectral decomposition were generated. In the evaluation of these seismic attributes, it was determined that in the Eagle Ford interval the highest energy resided in the 22- to 26-Hz range. Therefore, a comparison was made between geometric attributes computed from a spectral decomposition volume with a center frequency of 24.2 Hz and the same geometric attributes computed from the original full-frequency volume. At the Eagle Ford interval, Figure 10 compares three geometric attributes generated from the original seismic volume with the same geometric attributes generated from the 24.2-Hz spectral decomposition volume. It is evident from each of these geometric attributes that there is an improvement in the image delineation of fault/fracture trends with the spectral decomposition volumes. Based on the results of the geometric attributes produced from the 24.2-Hz volume and trends in the associated PCA interpretation, Table 4 lists the attributes used in the SOM analysis over the Eagle Ford interval. This SOM analysis incorporated an 8 × 8 matrix (64 neurons). Figure 11 displays the results at the top of the Eagle Ford Shale of the SOM analysis using the nine geometric attributes computed from the 24.2-Hz spectral decomposition volume. The associated 2D color map in Figure 11 provides the correlation of colors to neurons. There are very clear northeast–southwest trends of relatively large fault and fracture systems, which are typical for the Eagle Ford Shale (primarily in dark blue). Also evident is an orthogonal set of events running southeast–northwest and east–west (red).

Figure 12. SOM results from the top of the Eagle Ford Shale with (a) only neuron 31 highlighted denoting northeast–southwest trends, (b) neuron 14 highlighted denoting east–west trends, and (c) three neurons highlighted with neuron 24 displaying smoothed nonfaulted background trend.

To further evaluate the SOM results, individual clusters or patterns in the data are isolated by highlighting specific neurons in the 2D color map in Figure 12. Figure 12a indicates that neuron 31 (blue) is defining the larger northeast–southwest fault/fracture trends in the Eagle Ford Shale. Figure 12b with neuron 14 (red) indicates orthogonal sets of events. Because the survey was acquired southeast–northwest, it could be interpreted that the similarly trending events in Figure 12b are possible acquisition footprint effects, but there are also very clear indications of east–west lineations. These east–west lineations probably represent fault/fracture trends orthogonal to the major northeast–southwest trends in the region. Figure 12c displays the neurons from Figure 12a and 12b, as well as neuron 24 (dark gray). With these three neurons highlighted, it is easy to see the fault and fracture trends against a background in which neuron 24 displays a relatively smooth and nonfaulted region. The key point of this evaluation is that the SOM analysis allows the breakout of the fault/fracture trends and allows geoscientists to make better-informed decisions in their interpretation.

Conclusions

Seismic attributes, which are any measurable properties of seismic data, aid interpreters in identifying geologic features that are not clearly understood in the original data. However, the enormous amount of information generated from seismic attributes, and the difficulty of understanding how these attributes, when combined, define geology, require another approach in the interpretation workflow. The application of PCA can help interpreters to identify seismic attributes that show the most variance in the data for a given geologic setting. The PCA works very well in geologic settings where anomalous features stand out from the background data, such as class 3 AVO settings that exhibit DHI characteristics. The PCA helps to determine which attributes to use in a multi-attribute analysis using SOMs.

Applying current computing technology, visualization techniques, and an understanding of appropriate SOM parameters enables interpreters to take multiple seismic attributes and identify the natural organizational patterns in the data. Multiple-attribute analyses are beneficial when single attributes are indistinct. These natural patterns or clusters represent geologic information embedded in the data and can help to identify geologic features, geobodies, and aspects of geology that often cannot be interpreted by any other means. SOM evaluations have proven beneficial in essentially all geologic settings, including unconventional resource plays, moderately compacted onshore regions, and offshore unconsolidated sediments. An important observation in the three case studies is that the seismic attributes used in each SOM analysis were different. This indicates that the appropriate seismic attributes for any SOM evaluation should be based on the interpretation problem to be solved and the associated geologic setting. The application of PCA and SOM can not only identify geologic patterns not seen previously in the seismic data but can also increase or decrease confidence in already interpreted features. In other words, this multi-attribute approach provides a methodology to produce a more accurate risk assessment of a geoscientist's interpretation and may represent the next generation of advanced interpretation.

References

Abele, S., and R. Roden, 2012, Fracture detection interpretation beyond conventional seismic approaches: Presented at AAPG Eastern Section Meeting, Poster AAPG-ICE.

Balch, A. H., 1971, Color sonograms: A new dimension in seismic data interpretation: Geophysics, 36, 1074–1098.

Barnes, A., 2006, Too many seismic attributes?: CSEG Recorder, 31, 41–45.

Bishop, C. M., 2006, Pattern recognition and machine learning: Springer, 561–565.

Bishop, C. M., 2007, Neural networks for pattern recognition: Oxford, 240–245.

Bishop, C. M., M. Svensen, and C. K. I. Williams, 1998, GTM: The generative topographic mapping: Neural Computation, 10, 215–234.

Brown, A. R., 1996, Interpretation of three-dimensional seismic data, 3rd ed.: AAPG Memoir 42.

Chen, Q., and S. Sidney, 1997, Seismic attribute technology for reservoir forecasting and monitoring: The Leading Edge, 16, 445–448.

Chopra, S., and K. Marfurt, 2007, Seismic attributes for prospect identification and reservoir characterization, SEG, Geophysical Development Series.

Coleou, T., M. Poupon, and A. Kostia, 2003, Unsupervised seismic facies classification: A review and comparison of techniques and implementation: The Leading Edge, 22, 942–953.

Erwin, E., K. Obermayer, and K. Schulten, 1992, Self-organizing maps: Ordering, convergence properties and energy functions: Biological Cybernetics, 67, 47–55.

Fraser, S. H., and B. L. Dickson, 2007, A new method of data integration and integrated data interpretation: Self-organizing maps, in B. Milkereit, ed., Proceedings of Exploration 07: Fifth Decennial International Conference on Mineral Exploration, 907–910.

Guo, H., K. J. Marfurt, and J. Liu, 2009, Principal component spectral analysis: Geophysics, 74, no. 4, P35–P43.

Haykin, S., 2009, Neural networks and learning machines, 3rd ed.: Pearson.

Kalkomey, C. T., 1997, Potential risks when using seismic attributes as predictors of reservoir properties: The Leading Edge, 16, 247–251.

Kohonen, T., 2001, Self-organizing maps, 3rd extended ed.: Springer Series in Information Sciences.

Liner, C., 1999, Elements of 3-D seismology: PennWell.

Roy, A., B. L. Dowdell, and K. J. Marfurt, 2013, Characterizing a Mississippian tripolitic chert reservoir using 3D unsupervised and supervised multi-attribute seismic facies analysis: An example from Osage County, Oklahoma: Interpretation, 1, no. 2, SB109–SB124.

Rutherford, S. R., and R. H. Williams, 1989, Amplitude-versus-offset variations in gas sands: Geophysics, 54, 680–688.

Schlumberger Oilfield Glossary, 2015, online reference, http://www.glossary.oilfield.slb.com.

Shuey, R. T., 1985, A simplification of the Zoeppritz equations: Geophysics, 50, 609–614.

Smith, T., and M. T. Taner, 2010, Natural clusters in multiattribute seismics found with self-organizing maps: Source and signal processing section paper 5: Presented at Robinson-Treitel Spring Symposium by GSH/SEG, Extended Abstracts.

Taner, M. T., 2003, Attributes revisited, https://www.rocksolidimages.com/attributes-revisited, accessed 13 August 2013.

Taner, M. T., F. Koehler, and R. E. Sheriff, 1979, Complex seismic trace analysis: Geophysics, 44, 1041–1063.

Taner, M. T., and R. E. Sheriff, 1977, Application of amplitude, frequency, and other attributes, to stratigraphic and hydrocarbon determination, in C. E. Payton, ed., Applications to hydrocarbon exploration: AAPG Memoir 26, 301–327.

Taner, M. T., S. Treitel, and T. Smith, 2009, Self-organizing maps of multi-attribute 3D seismic reflection surveys, Presented at the 79th International SEG Convention, SEG 2009 Workshop on “What’s New in Seismic Interpretation,” Paper no. 6.

 

You must log in to access the PDF file. Please log in or register as a user.

 

Welcome Back!

Download PDF here

OR

Request access by filling the form below to download full PDF.

Most Popular Papers
systematic workflow for reservoir thumbnail
Systematic Workflow for Reservoir Characterization in Northwestern Colombia using Multi-attribute Classification
A workflow is presented which includes data conditioning, finding the best combination of attributes for ML classification aided by Principal ...
Read More
identify reservoir rock
Net Reservoir Discrimination through Multi-Attribute Analysis at Single Sample Scale
Published in the special Machine Learning edition of First Break, this paper lays out results from multi-attribute analysis using Paradise, ...
Read More
CNN_facies
Seismic Facies Classification Using Deep Convolutional Neural Networks
Using a new supervised learning technique, convolutional neural networks (CNN), interpreters are approaching seismic facies classification in a revolutionary way ...
Read More
  • Registration confirmation will be emailed to you.

  • We're committed to your privacy. Geophysical Insights uses the information you provide to us to contact you about our relevant content, events, and products. You may unsubscribe from these communications at any time. For more information, check out our Privacy Policy

    Bob A. Hardage

    Investigating the Internal Fabric of VSP data with Attribute Analysis and Unsupervised Machine Learning

    Examination of vertical seismic profile (VSP) data with unsupervised machine learning technology is a rigorous way to compare the fabric of down-going, illuminating, P and S wavefields with the fabric of up-going reflections and interbed multiples created by these wavefields. This concept is introduced in this paper by applying unsupervised learning to VSP data to better understand the physics of P and S reflection seismology. The zero-offset VSP data used in this investigation were acquired in a hard-rock, fast-velocity, environment that caused the shallowest 2 or 3 geophones to be inside the near-field radiation zone of a vertical-vibrator baseplate. This study shows how to use instantaneous attributes to backtrack down-going direct-P and direct-S illuminating wavelets to the vibrator baseplate inside the near-field zone. This backtracking confirms that the points-of-origin of direct-P and direct-S are identical. The investigation then applies principal component (PCA) analysis to VSP data and shows that direct-S and direct-P wavefields that are created simultaneously at a vertical-vibrator baseplate have the same dominant principal components. A self-organizing map (SOM) approach is then taken to illustrate how unsupervised machine learning describes the fabric of down-going and up-going events embedded in vertical-geophone VSP data. These SOM results show that a small number of specific neurons build the down-going direct-P illuminating wavefield, and another small group of neurons build up-going P primary reflections and early-arriving down-going P multiples. The internal attribute fabric of these key down-going and up-going neurons are then compared to expose their similarities and differences. This initial study indicates that unsupervised machine learning, when applied to VSP data, is a powerful tool for understanding the physics of seismic reflectivity at a prospect. 
This research strategy of analyzing VSP data with unsupervised machine learning will now expand to horizontal-geophone VSP data.

    Bob A. Hardage

    Bob A. Hardage received a PhD in physics from Oklahoma State University. His thesis work focused on high-velocity micro-meteoroid impact on space vehicles, which required trips to Goddard Space Flight Center to do finite-difference modeling on dedicated computers. Upon completing his university studies, he worked at Phillips Petroleum Company for 23 years and was Exploration Manager for Asia and Latin America when he left Phillips. He moved to WesternAtlas and worked 3 years as Vice President of Geophysical Development and Marketing. He then established a multicomponent seismic research laboratory at the Bureau of Economic Geology and served The University of Texas at Austin as a Senior Research Scientist for 28 years. He has published books on VSP, cross-well profiling, seismic stratigraphy, and multicomponent seismic technology. He was the first person to serve 6 years on the Board of Directors of the Society of Exploration Geophysicists (SEG). His Board service was as SEG Editor (2 years), followed by 1-year terms as First VP, President Elect, President, and Past President. SEG has awarded him a Special Commendation, Life Membership, and Honorary Membership. He wrote the AAPG Explorer column on geophysics for 6 years. AAPG honored him with a Distinguished Service award for promoting geophysics among the geological community.

    Carrie Laudon
    Senior Geophysical Consultant

    Calibrating SOM Results to Wells – Improving Stratigraphic Resolution in the Niobrara

    Over the last few years, because of the increased availability of low-cost computing power, individuals and companies have stepped up investigations into the use of machine learning in many areas of E&P. For the geosciences, the emphasis has been on reservoir characterization, seismic data processing, and, most recently, interpretation.
    By using statistical tools such as Attribute Selection, which uses Principal Component Analysis (PCA), and Multi-Attribute Classification using Self-Organizing Maps (SOM), a multi-attribute 3D seismic volume can be “classified.” PCA reduces a large set of seismic attributes to those that are the most meaningful. The output of the PCA serves as the input to the SOM, a form of unsupervised neural network, which, when combined with a 2D color map, facilitates the identification of clustering within the data volume.
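    The PCA-to-SOM workflow described above can be sketched in a few lines of Python. This is a minimal illustration on synthetic attribute data, not the Paradise implementation; the variance cutoff, grid size, and training schedule below are arbitrary assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multi-attribute samples: rows = seismic samples, cols = attributes.
X = rng.normal(size=(500, 8))
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize each attribute

# --- PCA via SVD: keep the components explaining ~90% of the variance ---
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
n_keep = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1
Z = X @ Vt[:n_keep].T                          # reduced-attribute samples

# --- Minimal SOM: a small 2D grid of neurons trained competitively ---
rows, cols = 4, 4                              # e.g. a 4x4 topology
W = rng.normal(size=(rows * cols, n_keep))     # neuron weight vectors
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

for t in range(2000):
    x = Z[rng.integers(len(Z))]
    bmu = np.argmin(np.sum((W - x) ** 2, axis=1))    # winning neuron
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)     # distance on the grid
    lr = 0.5 * np.exp(-t / 1000)                     # decaying learning rate
    sigma = 2.0 * np.exp(-t / 1000)                  # decaying neighborhood
    h = np.exp(-d2 / (2 * sigma**2))
    W += lr * h[:, None] * (x - W)

# Classify every sample by its winning neuron; this is the volume an
# interpreter would then color with a 2D color map.
labels = np.argmin(((Z[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
```

    In a real workflow the rows of `X` would be multi-attribute seismic samples rather than random numbers, and the winning-neuron labels would be written back into the 3D survey geometry.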
    The application of SOM and PCA in Paradise will be highlighted through a case study of the Niobrara unconventional reservoir. A 100-square-mile area from Phase 5 of the Geophysical Pursuit, Inc. and Fairfield Geotechnologies multiclient library was analyzed for stratigraphic resolution of the Niobrara chalk reservoirs within a 60-millisecond two-way time window. Thirty wells from the COGCC public database were available to correlate log data with the SOM results. Several SOM topologies were generated and extracted within Paradise at well locations. These were exported and run through a statistical analysis program to visualize the neuron-to-reservoir correlations via histograms. Chi-squared independence tests also validated a relationship between SOM neuron numbers and the presence of reservoir for all chalk benches within the Niobrara.

    Dr. Carrie Laudon
    Senior Geophysical Consultant

    Carolan (Carrie) Laudon holds a PhD in geophysics from the University of Minnesota and a BS in geology from the University of Wisconsin Eau Claire. She has been Senior Geophysical Consultant with Geophysical Insights since 2017 working with Paradise®, their machine learning platform. Prior roles include Vice President of Consulting Services and Microseismic Technology for Global Geophysical Services and 17 years with Schlumberger in technical, management and sales, starting in Alaska and including Aberdeen, Scotland, Houston, TX, Denver, CO and Reading, England. She spent five years early in her career with ARCO Alaska as a seismic interpreter for the Central North Slope exploration team.

    Deborah Sacrey
    Owner, Auburn Energy

    Finding Hydrocarbons using SOM Classification

    In the past, unsupervised neural analysis has been applied to only one seismic attribute at a time, using a seismic wavelet to find the natural clusters in the data. A new approach, using multiple seismic attributes and examining the statistical clustering in the data on a sample-interval basis, can significantly help in discerning thin beds and subtle stratigraphic changes in the subsurface.

    Advances in computing power and the creation over the last 30 years of many new seismic attribute families, such as geometric, AVO, inversion, and spectral decomposition attributes, have made multiple-attribute analysis extremely powerful.

    The key to this presentation is showing examples of how the SOM classification process has led to hydrocarbon discoveries in different types of depositional environments. Examples of cases in which the decision was made not to drill a well, thus avoiding a potential dry hole, will also be shown.

    Deborah Sacrey
    Owner, Auburn Energy

    Deborah is a geologist/geophysicist with 44 years of oil and gas exploration experience in Texas, Louisiana Gulf Coast and Mid-Continent areas of the US. She received her degree in Geology from the University of Oklahoma in 1976 and immediately started working for Gulf Oil in their Oklahoma City offices.

    She started her own company, Auburn Energy, in 1990 and built her first geophysical workstation using Kingdom software in 1996. She assisted SMT/IHS for 18 years in developing and testing the Kingdom software. She specializes in 2D and 3D interpretation for clients in the US and internationally. For the past nine years she has been part of a team working to study and bring the power of multi-attribute neural analysis of seismic data to the geoscience public, guided by Dr. Tom Smith, founder of SMT. She has become an expert in the use of the Paradise software and has made seven discoveries for clients using multi-attribute neural analysis.

    Deborah has been very active in the geological community. She is past national President of SIPES (Society of Independent Professional Earth Scientists), past President of the Division of Professional Affairs of AAPG (American Association of Petroleum Geologists), Past Treasurer of AAPG and Past President of the Houston Geological Society. She is also Past President of the Gulf Coast Association of Geological Societies and just ended a term as one of the GCAGS representatives on the AAPG Advisory Council. Deborah is also a DPA Certified Petroleum Geologist #4014 and DPA Certified Petroleum Geophysicist #2. She belongs to AAPG, SIPES, Houston Geological Society, South Texas Geological Society and the Oklahoma City Geological Society (OCGS).

    Dr. Tom Smith
    President & CEO

    Dr. Tom Smith received a BS and MS degree in Geology from Iowa State University. His graduate research focused on a shallow refraction investigation of the Manson astrobleme. In 1971, he joined Chevron Geophysical as a processing geophysicist but resigned in 1980 to complete his doctoral studies in 3D modeling and migration at the Seismic Acoustics Lab at the University of Houston. Upon graduation with the Ph.D. in Geophysics in 1981, he started a geophysical consulting practice and taught seminars in seismic interpretation, seismic acquisition and seismic processing. Dr. Smith founded Seismic Micro-Technology in 1984 to develop PC software to support training workshops which subsequently led to development of the KINGDOM Software Suite for integrated geoscience interpretation with world-wide success.

    The Society of Exploration Geophysicists (SEG) recognized Dr. Smith’s work with the SEG Enterprise Award in 2000, and in 2010, the Geophysical Society of Houston (GSH) awarded him an Honorary Membership. Iowa State University (ISU) has recognized Dr. Smith throughout his career with the Distinguished Alumnus Lecturer Award in 1996, the Citation of Merit for National and International Recognition in 2002, and the highest alumni honor in 2015, the Distinguished Alumni Award. The University of Houston College of Natural Sciences and Mathematics recognized Dr. Smith with the 2017 Distinguished Alumni Award.

    In 2009, Dr. Smith founded Geophysical Insights, where he leads a team of geophysicists, geologists and computer scientists in developing advanced technologies for fundamental geophysical problems. The company launched the Paradise® multi-attribute analysis software in 2013, which uses Machine Learning and pattern recognition to extract greater information from seismic data.

    Dr. Smith has been a member of the SEG since 1967 and is a professional member of SEG, GSH, HGS, EAGE, SIPES, AAPG, Sigma XI, SSA and AGU. Dr. Smith served as Chairman of the SEG Foundation from 2010 to 2013. On January 25, 2016, he was recognized by the Houston Geological Society (HGS) as a geophysicist who has made significant contributions to the field of geology. He currently serves on the SEG President-Elect’s Strategy and Planning Committee and the ISU Foundation Campaign Committee for Forever True, For Iowa State.

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Statistical Calibration of SOM results with Well Log Data (Case Study)

    The first stage of the proposed statistical method has proven very useful, in the fields of health and social sciences, for testing whether there is a relationship between two qualitative variables (nominal or ordinal) or categorical quantitative variables. Its application in the oil industry allows geoscientists not only to test dependence between discrete variables, but also to measure their degree of correlation (weak, moderate, or strong). This article shows its application to reveal the relationship between a SOM classification volume of a set of nine seismic attributes (whose vertical sampling interval is three meters) and different well data (sedimentary facies, Net Reservoir, and effective porosity grouped by ranges). The data were prepared to construct the contingency tables: the dependent (response) and independent (explanatory) variables were defined, the observed frequencies were obtained, the frequencies that would be expected if the variables were independent were calculated, and the difference between the two magnitudes was then studied using the contrast statistic called Chi-Square. The second stage involves the calibration of the SOM volume extracted along the wellbore path through statistical analysis of the petrophysical properties VCL, PHIE, and SW for each neuron, which allowed identification of the neurons with the best petrophysical values in a carbonate reservoir.
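    The first-stage independence test described above reduces to a classical chi-square computation on a contingency table. The sketch below uses invented counts and a hypothetical three-neuron, two-class table purely for illustration; it is not the authors' data.

```python
import numpy as np

# Hypothetical contingency table: rows = SOM winning neurons,
# columns = well-log classes (e.g. Net Reservoir vs. Non-Reservoir).
observed = np.array([[30.,  5.],
                     [ 8., 22.],
                     [12., 13.]])

row_tot = observed.sum(axis=1, keepdims=True)   # per-neuron totals
col_tot = observed.sum(axis=0, keepdims=True)   # per-class totals
total = observed.sum()

# Expected counts under the independence hypothesis.
expected = row_tot @ col_tot / total

# Chi-square contrast statistic and its degrees of freedom.
chi2 = np.sum((observed - expected) ** 2 / expected)
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)

# chi2 is compared against the critical value for `dof` at the chosen
# significance level; a large chi2 rejects independence, i.e. the SOM
# neurons carry information about the well classification.
```

    In practice `scipy.stats.chi2_contingency` performs the same computation and also returns the p-value directly.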

    Fabian Rada
    Sr. Geophysicist, Petroleum Oil & Gas Services

    Fabian Rada joined Petroleum Oil and Gas Services, Inc (POGS) in January 2015 as Business Development Manager and Consultant to PEMEX. In Mexico, he has participated in several integrated oil and gas reservoir studies. He has consulted with PEMEX Activos and the G&G Technology group to apply the Paradise AI workbench and other tools. Since January 2015, he has been working with Geophysical Insights staff to provide and implement the multi-attribute analysis software Paradise in Petróleos Mexicanos (PEMEX), running a successful pilot test in the Litoral Tabasco Tsimin Xux Asset. Mr. Rada began his career at the Venezuelan National Foundation for Seismological Research, where he participated in several geophysical projects, including seismic and gravity data for micro zonation surveys. He then joined China National Petroleum Corporation (CNPC) as a QC Geophysicist until he became the Chief Geophysicist in the QA/QC Department. He then transitioned to a subsidiary of Petróleos de Venezuela (PDVSA) as a member of the QA/QC group and Chief of the Potential Field Methods section. Mr. Rada has also participated in processing land seismic data and in marine seismic/gravity acquisition surveys. Mr. Rada earned a B.S. in Geophysics from the Central University of Venezuela.

    Hal Green
    Director – Marketing & Business Development

    Introduction to the Paradise AI Workbench

    Companies worldwide are seeking solutions for their digital transformation initiatives and face a make-vs-buy decision when it comes to their E&P software tools. This talk will show how the commercial, off-the-shelf Paradise AI workbench can be a robust and cost-effective component of the new digital infrastructure. Using a combination of machine learning (ML) and deep learning applications, geoscientists apply Paradise to extract greater insights from seismic and well data for these and other objectives:

    • Identify and calibrate detailed stratigraphy
    • Distinguish thin beds below conventional tuning
    • Classify seismic facies
    • Detect faults automatically
    • Interpret Direct Hydrocarbon Indicators
    • Reveal fracture trends in shale plays
    • Estimate reserves/resources

    The brief introduction includes single-slide use cases in different geologic settings to illustrate the general-purpose application of ‘AI’ technology. The summary will also provide some context for the other presentations available at the Geophysical Insights virtual booth.

    Hal Green
    Director of Marketing & Business Development

    Hal H. Green is a marketing executive and entrepreneur in the energy industry with more than 25 years of experience in starting and managing technology companies. He holds a B.S. in Electrical Engineering from Texas A&M University and an MBA from the University of Houston. He has invested his career at the intersection of marketing and technology, with a focus on business strategy, marketing, and effective selling practices. Mr. Green has a diverse portfolio of experience in marketing technology to the hydrocarbon supply chain – from upstream exploration through downstream refining & petrochemical. Throughout his career, Mr. Green has been a proven thought-leader and entrepreneur, while supporting several tech start-ups.

    He started his career as a process engineer in the semiconductor manufacturing industry in Dallas, Texas and later launched an engineering consulting and systems integration business. Following the sale of that business in the late 80’s, he joined Setpoint in Houston, Texas where he eventually led that company’s Manufacturing Systems business. Aspen Technology acquired Setpoint in January 1996 and Mr. Green continued as Director of Business Development for the Information Management and Polymer Business Units.

    In 2004, Mr. Green founded Advertas, a full-service marketing and public relations firm serving clients in energy and technology. In 2010, Geophysical Insights retained Advertas as their marketing firm. Dr. Tom Smith, President/CEO of Geophysical Insights, soon appointed Mr. Green as Director of Marketing and Business Development for Geophysical Insights, in which capacity he still serves today.

    Hana Kabazi
    Product Manager

    Hana Kabazi joined Geophysical Insights in October of 201, and is now one of our Product Managers for Paradise. Mrs. Kabazi has over 7 years of oil and gas experience, including 5 years at Halliburton – Landmark. During her time at Landmark she held positions as a consultant to many E&P companies, technical advisor to the QA organization, and product manager of Subsurface Mapping in DecisionSpace. Mrs. Kabazi has a B.S. in Geology from the University of Texas at Austin and an M.S. in Geology from the University of Houston.

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Gas Hydrates, Reefs, Channel Architecture, and Fizz Gas: SOM Applications in a Variety of Geologic Settings

    Students at the University of Oklahoma have been exploring the uses of SOM techniques for the last year. This presentation will review learnings and results from a few of these research projects. Two projects have investigated the ability of SOMs to aid in the identification of pore-space materials, attempting to qualitatively identify gas hydrates and under-saturated gas reservoirs. A third study investigated individual attributes and SOMs in recognizing various carbonate facies in a pinnacle reef in the Michigan Basin. The fourth study took a deep dive into various machine learning algorithms, of which SOMs will be discussed, to understand how much machine learning can aid in the identification of deepwater channel architectures.

    Heather Bedle
    Assistant Professor, University of Oklahoma

    Heather Bedle received a B.S. (1999) in physics from Wake Forest University, and then worked as a systems engineer in the defense industry. She later received a M.S. (2005) and a Ph. D. (2008) degree from Northwestern University. After graduate school, she joined Chevron and worked as both a development geologist and geophysicist in the Gulf of Mexico before joining Chevron’s Energy Technology Company Unit in Houston, TX. In this position, she worked with the Rock Physics from Seismic team analyzing global assets in Chevron’s portfolio. Dr. Bedle is currently an assistant professor of applied geophysics at the University of Oklahoma’s School of Geosciences. She joined OU in 2018, after instructing at the University of Houston for two years. Dr. Bedle and her student research team at OU primarily work with seismic reflection data, using advanced techniques such as machine learning, attribute analysis, and rock physics to reveal additional structural, stratigraphic and tectonic insights of the subsurface.

    Ivan Marroquin
    Senior Research Geophysicist

    Connecting Multi-attribute Classification to Reservoir Properties

    Interpreters rely on seismic pattern changes to identify and map geologic features of importance. The ability to recognize such features depends on the seismic resolution and characteristics of seismic waveforms. With the advancement of machine learning algorithms, new methods for interpreting seismic data are being developed. Among these algorithms, the self-organizing map (SOM) provides a different approach to extracting geological information from a set of seismic attributes.

    SOM approximates the input patterns by a finite set of processing neurons arranged in a regular 2D grid of map nodes. In doing so, it classifies multi-attribute seismic samples into natural clusters following an unsupervised approach. Because machine learning is unbiased, the classifications can contain both geological information and coherent noise. Thus, seismic interpretation evolves into broader geologic perspectives. Additionally, SOM partitions multi-attribute samples without a priori information to guide the process (e.g., well data).

    The SOM output is a new seismic attribute volume, in which geologic information is captured from the classification into winning neurons. Implicit and useful geological information is uncovered through an interactive visual inspection of winning neuron classifications. By doing so, interpreters build a classification model that helps them gain insight into complex relationships between attribute patterns and geological features.

    Despite all these benefits, there are interpretation challenges regarding whether there is an association between winning neurons and geological features. To address these issues, a bivariate statistical approach is proposed. To evaluate this analysis, three case scenarios are presented. In each case, the association between winning neurons and net reservoir (determined from petrophysical or well log properties) at well locations is analyzed. The results show that the statistical analysis not only aids in the identification of classification patterns but, more importantly, that reservoir/non-reservoir classification by classical petrophysical analysis strongly correlates with selected SOM winning neurons. Confidence in interpreted classification features is gained at the borehole, and interpretation is readily extended as geobodies away from the well.

    Ivan Marroquin
    Senior Research Geophysicist

    Iván Dimitri Marroquín is a 20-year veteran of data science research, consistently publishing in peer-reviewed journals and speaking at international conference meetings. Dr. Marroquín received a Ph.D. in geophysics from McGill University, where he conducted and participated in 3D seismic research projects. These projects focused on the development of interpretation techniques based on seismic attributes and seismic trace shape information to identify significant geological features or reservoir physical properties. Examples of his research work are attribute-based modeling to predict coalbed thickness and permeability zones, combining spectral analysis with coherency imagery technique to enhance interpretation of subtle geologic features, and implementing a visual-based data mining technique on clustering to match seismic trace shape variability to changes in reservoir properties.

    Dr. Marroquín has also conducted some ground-breaking research on seismic facies classification and volume visualization. This led to his development of a visual-based framework that determines the optimal number of seismic facies to best reveal meaningful geologic trends in the seismic data. He proposed seismic facies classification as an alternative to data integration analysis to capture geologic information in the form of seismic facies groups. He has investigated the usefulness of mobile devices to locate, isolate, and understand the spatial relationships of important geologic features in a context-rich 3D environment. In this work, he demonstrated that mobile devices are capable of performing seismic volume visualization, facilitating the interpretation of imaged geologic features. He has shown that mobile devices will eventually allow the visual examination of seismic data anywhere and at any time.

    In 2016, Dr. Marroquín joined Geophysical Insights as a senior researcher, where his efforts have been focused on developing machine learning solutions for the oil and gas industry. For his first project, he developed a novel procedure for lithofacies classification that combines a neural network with automated machine learning methods. In parallel, he implemented a machine learning pipeline to derive cluster centers from a trained neural network. The next step in the project is to correlate lithofacies classification to the outcome of seismic facies analysis. Other research interests include applying diverse machine learning technologies to analyze and discern trends and patterns in data related to the oil and gas industry.

    Jie Qi
    Research Geophysicist

    Applications of Deep Learning-based Seismic Fault Detection

    The traditional fault detection method is based on geophysicists’ hand-picking, which is very time-consuming on large seismic datasets. Convolutional Neural Network (CNN)-based fault detection is an emerging technology that shows great promise for the seismic interpreter. One of the more successful deep learning CNN methods uses synthetic data to train a CNN model. In CNN-based fault detection, faults are labeled as one class and other background geologic features as another. The labeled faults with associated seismic amplitude data are used to train a CNN model, which then predicts the corresponding fault classification in a large seismic dataset. A key advantage of CNN-based methods is that the computational cost of applying a pre-trained CNN model to seismic fault classification is extremely low. This study shows applications of CNN models to predict faults from 3D seismic data. First, the CNN model is trained with multiple 3D synthetic seismic amplitude volumes and their associated fault label data. The training data span different data qualities, frequency bandwidths, noise levels, and structural features. The well-trained CNN model is then applied to detect faults on datasets that exhibit different noise levels and geologic features. The results from the CNN are then compared to those obtained using traditional seismic attributes and manual interpretation. The comparison indicates that the CNN method performs more accurately and has high potential to do more in seismic fault detection.

    Jie Qi
    Research Geophysicist

    Jie Qi is a Research Geophysicist at Geophysical Insights, where he works closely with product development and geoscience consultants. His research interests include machine learning-based fault detection, seismic interpretation, pattern recognition, image processing, seismic attribute development and interpretation, and seismic facies analysis. Dr. Qi received a BS (2011) in Geoscience from the China University of Petroleum in Beijing, and an MS (2013) in Geophysics from the University of Houston. He earned a Ph.D. (2017) in Geophysics from the University of Oklahoma, Norman. His industry experience includes work as a Research Assistant (2011-2013) at the University of Houston and at the University of Oklahoma (2013-2017). Dr. Qi was with Petroleum Geo-Services (PGS), Inc. in 2014 as a summer intern, where he worked on a semi-supervised seismic facies analysis. He then served as a postdoctoral Research Associate in the Attribute-Assisted Seismic Processing and Interpretation (AASPI) consortium at the University of Oklahoma from 2017 to 2020.

    Jie Qi
    Research Geophysicist

    An Integrated Fault Detection Workflow

    Seismic fault detection is one of the most critical procedures in seismic interpretation. Identifying faults is significant for characterizing and finding potential oil and gas reservoirs. Seismic amplitude data exhibiting good resolution and a high signal-to-noise ratio are key to identifying structural discontinuities using seismic attributes or machine learning techniques, which in turn serve as input for automatic fault extraction. Deep learning Convolutional Neural Networks (CNN) perform well on fault detection without any human-computer interactive work. This study shows an integrated CNN-based fault detection workflow to construct fault images that are sufficiently smooth for subsequent automatic fault extraction. The objectives were to suppress noise or stratigraphic anomalies subparallel to reflector dip, and to sharpen faults and other discontinuities that cut reflectors, preconditioning the fault images for subsequent automatic extraction. A 2D continuous wavelet transform-based acquisition footprint suppression method was applied time slice by time slice to suppress wavenumber components, to prevent the CNN fault detection method from interpreting the acquisition footprint as faults. To further suppress cross-cutting noise and sharpen fault edges, a principal component edge-preserving structure-oriented filter is also applied. The conditioned amplitude volume is then fed to a pre-trained CNN model to compute fault probability. Finally, a Laplacian of Gaussian filter is applied to the original CNN fault probability to enhance the fault images. The resulting fault probability volume compares favorably with traditional human-interpreted fault picks generated on vertical slices through the seismic amplitude volume.
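    The final enhancement step above, a Laplacian of Gaussian filter applied to the CNN fault probability, is available directly in SciPy. A minimal sketch on a synthetic 2D probability slice follows; the sigma values and the sign convention for ridge enhancement are assumptions for illustration, not the authors' parameters.

```python
import numpy as np
from scipy import ndimage

# Synthetic fault-probability slice: one sharp quasi-linear "fault"
# of high probability on a zero background, then lightly smoothed to
# mimic the smooth output of a CNN.
prob = np.zeros((64, 64))
for i in range(64):
    prob[i, min(63, 20 + i // 4)] = 1.0
prob = ndimage.gaussian_filter(prob, sigma=1.0)

# The Laplacian of Gaussian is strongly negative along a ridge, so its
# negation turns the fault trace into a positive, sharpened response.
log_response = -ndimage.gaussian_laplace(prob, sigma=1.5)
enhanced = np.clip(prob + log_response, 0.0, 1.0)
```

    In the 3D workflow the same filter would be run on the full fault-probability volume (or slice by slice) before automatic fault extraction.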

    Laura Cuttill
    Practice Lead, Advertas

    Young Professionals – Managing Your Personal Brand to Level-up Your Career

    No matter where you are in your career, your online “personal brand” has a huge impact on providing opportunity for prospective jobs and garnering the respect and visibility needed for advancement. While geoscientists tackle ambitious projects, publish technical papers, and work hard to advance their careers, the value of these efforts often isn’t realized beyond their immediate professional circle. Learn how to…

    • Communicate who you are to high-level executives in exploration and development
    • Avoid common social media pitfalls
    • Optimize your online presence to best garner attention from recruiters
    • Stay relevant
    • Create content of interest
    • Establish yourself as a thought leader in your given area of specialization

    Laura Cuttill
    Practice Lead, Advertas

    As a 20-year marketing veteran in oil and gas and a serial entrepreneur, Laura has deep experience in bringing technology products to market and growing sales pipelines. Armed with a marketing degree from Texas A&M, she began her career doing technical writing for Schlumberger and ExxonMobil in 2001. She co-founded Advertas in 2004 and began to leverage her upstream experience in marketing. In 2006, she co-founded the cyber-security software company 2FA Technology. After growing 2FA from a startup to 75% market share in target industries, and the subsequent sale of the company, she returned to Advertas to continue working toward the success of her clients, such as Geophysical Insights. Today, she guides strategy for large-scale marketing programs, manages project execution, cultivates relationships with industry media, and advocates for data-driven, account-based marketing practices.

    Mike Dunn
    Sr. Vice President of Business Development

    New Capabilities of 3.4

    Paradise has given interpreters the ability to detect more detail within seismic data. A natural extension of the current software is therefore the ability to easily compare SOM and geobody results to borehole logs and lithofacies. As a result of this exciting capability, Paradise is now able to display digital well logs, TD charts, formation tops, and cross-sections in a simple and straightforward manner. In this What’s New in Paradise 3.4 presentation, we will discuss the new Well Log Cross Section functionality, GPU support for 3 AASPI algorithms (demonstrating significant speedup), and the latest Petrel 2020 connector. Examples of the new well functionality will use the offshore New Zealand Maui Field data set. In addition, a live demonstration will walk users through a well cross-section workflow.

    Mike Dunn
    Senior Vice President Business Development

    Michael A. Dunn is an exploration executive with extensive global experience including the Gulf of Mexico, Central America, Australia, China and North Africa. Mr. Dunn has a proven track record of successfully executing exploration strategies built on a foundation of new and innovative technologies. Currently, Michael serves as Senior Vice President of Business Development for Geophysical Insights.

    He joined Shell in 1979 as an exploration geophysicist and party chief and held increasing levels of responsibility, including Manager of Interpretation Research. In 1997, he participated in the launch of Geokinetics, which completed an IPO on the AMEX in 2007. His extensive experience with oil companies (Shell and Woodside) and the service sector (Geokinetics and Halliburton) has provided him with a unique perspective on technology and applications in oil and gas. Michael received a B.S. in Geology from Rutgers University and an M.S. in Geophysics from the University of Chicago.

    Rocky R. Roden
    Senior Consulting Geophysicist

    What Interpreters Should Know about Machine Learning

    Our lives are intertwined with applications, services, orders, products, research, and objects that are incorporated, produced, or affected in some way by Artificial Intelligence and Machine Learning. Buzz words like Deep Learning, Big Data, and Supervised and Unsupervised Learning are employed routinely to describe Machine Learning, but how do these applications relate to geoscience interpretation and finding oil and gas? More importantly, do these Machine Learning methods produce better results than conventional interpretation approaches? This webinar will first wade through the vernacular of Machine Learning and Data Science as it relates to the geoscientist. The presentation will review how these methods are employed, along with interpretation case studies of different machine learning applications. An overview of computer power and machine learning will be described. Machine Learning is a disruptive technology that holds great promise, and this webinar offers an interpreter’s perspective, not a data scientist’s. This course will provide an understanding of how Machine Learning for interpretation is being utilized today and provide insights on future directions and trends.

    Rocky R. Roden
    Senior Consulting Geophysicist

    Mr. Roden has over 45 years in industry as a Geophysicist, Exploration/Development Manager, Director of Applied Technology, and Chief Geophysicist. He was previously with Texaco, Pogo Producing, Maxus Energy, YPF Maxus, and Repsol (retired as Chief Geophysicist in 2001). Mr. Roden has authored or co-authored over 30 technical publications on various aspects of seismic interpretation, AVO analysis, amplitude risk assessment, and geoscience machine learning. He is an ex-Chairman of The Leading Edge editorial board. He is currently a consultant with Geophysical Insights, developing machine learning advances for oil and gas exploration and development, and is a principal in the Rose and Associates DHI Risk Analysis Consortium, which has involved 85 oil companies since 2001 in developing a seismic amplitude risk analysis program and worldwide prospect database. He holds a B.S. in Oceanographic Technology-Geology from Lamar University and an M.S. in Geological and Geophysical Oceanography from Texas A&M University.

    Sarah Stanley
    Senior Geoscientist

    New Capabilities of 3.4

    Paradise has given interpreters the ability to detect more detail within the seismic data. A natural extension of the current software, therefore, is the ability to easily compare SOM and Geobody results to borehole logs and lithofacies. As a result of this exciting capability, Paradise is now able to display digital well logs, TD charts, formation tops, and cross-sections in a simple and straightforward manner. In this What’s New in Paradise 3.4 presentation, we will discuss the new Well Log Cross Section functionality, GPU support for three AASPI algorithms that demonstrates significant speedup, and the latest Petrel 2020 connector. Examples of the new well functionality will use the offshore New Zealand Maui Field data set. In addition, a live demonstration will walk users through a well cross-section workflow.

    Sarah Stanley
    Senior Geoscientist and Lead Trainer

    Sarah Stanley joined Geophysical Insights in October 2017 as a geoscience consultant and became a full-time employee in July 2018. Prior to Geophysical Insights, Sarah was employed by IHS Markit in various leadership positions from 2011 until her retirement in August 2017, including Director of US Operations Training and Certification, the Operational Governance Team, and, prior to February 2013, Director of IHS Kingdom Training. Sarah joined SMT in May 2002 and was the Director of Training for SMT until IHS Markit’s acquisition in 2011.

    Prior to joining SMT, Sarah was employed by GeoQuest, a division of Schlumberger, from 1998 to 2002. Sarah was also Director of the Geoscience Technology Training Center at North Harris College from 1995 to 1998 and served as a voluntary advisor on geoscience training centers to various geological societies. Sarah has over 37 years of industry experience and has worked as a petroleum geoscientist in various domestic and international plays since August 1981. Her interpretation experience includes tight gas sands, coalbed methane, international exploration, and unconventional resources.

    Sarah holds a Bachelor of Science degree with majors in Biology and General Science and a minor in Earth Science, a Master of Arts in Education, and a Master of Science in Geology from Ball State University, Muncie, Indiana. Sarah is both a Certified Petroleum Geologist and a Registered Geologist with the State of Texas. Sarah holds teaching credentials in both Indiana and Texas.

    Sarah is a member of the Houston Geological Society and the American Association of Petroleum Geologists, where she currently serves in the AAPG House of Delegates. Sarah is a recipient of the AAPG Special Award, the AAPG House of Delegates Long Service Award, and the HGS President’s Award for her work in advancing training for petroleum geoscientists. She has served on the AAPG Continuing Education Committee and was Chairman of the AAPG Technical Training Center Committee. Sarah has also served as Secretary of the HGS and served two years as Editor for the AAPG Division of Professional Affairs Correlator.

    Tom Smith
    President and CEO, Geophysical Insights

    Machine Learning for Incomplete Geoscientists

    This presentation covers big-picture machine learning buzzwords with humor and unassailable frankness. The goal of the material is for every geoscientist to gain confidence in these important concepts and how they add to our well-established practices, particularly seismic interpretation. Presentation topics include a machine learning historical perspective, what makes it different, a fish factory, Shazam, a comparison of supervised and unsupervised machine learning methods with examples, tuning thickness, deep learning, hard/soft attribute spaces, multi-attribute samples, and several interpretation examples. After the presentation, you may not know how to run machine learning algorithms, but you should be able to appreciate their value and avoid some of their limitations.
