The Oil Industry’s Cyber–Transformation Is Closer Than You Think

By David Brown, Explorer Correspondent
Published with permission: AAPG Explorer
June 2019

The concept of digital transformation in the oil and gas industry gets talked about a lot these days, even though the phrase seems to have little specific meaning.

So, will there really be some kind of extensive cyber-transformation of the industry over the next decade?

“No,” said Tom Smith, president and CEO of Geophysical Insights in Houston.

Instead, it will happen “over the next three years,” he predicted.

Machine Learning

Much of the industry’s transformation will come from advances in machine learning, as well as continuing developments in computing and data analysis going on outside of oil and gas, Smith said.

Through machine learning, computers can develop, modify and apply algorithms and statistical models to perform tasks without explicit instructions.

“There’s basically been two types of machine learning. There’s ‘machine learning’ where you are training the machine to learn and adapt. After that’s done, you can take that little nugget (of adapted programming) and use it on other data. That’s supervised machine learning,” Smith explained.

“What makes machine learning so profoundly different is this concept that the program itself will be modified by the data. That’s profound,” he said.

Smith earned his master’s degree in geology from Iowa State University, then joined Chevron Geophysical as a processing geophysicist. He later left to complete his doctoral studies in 3-D modeling and migration at the University of Houston.

In 1984, he founded the company Seismic Micro-Technology, which led to development of the KINGDOM software suite for integrated geoscience interpretation. Smith launched Geophysical Insights in 2009 and introduced the Paradise analysis software, which uses machine learning and pattern recognition to extract information from seismic data.

He’s been named a distinguished alumnus of both Iowa State and the University of Houston College of Natural Sciences and Mathematics, and received the Society of Exploration Geophysicists Enterprise Award in 2000.

Smith sees two primary objectives for machine learning: replacing repetitive tasks with machines – essentially, doing things faster – and discovery, or identifying something new.

“Doing things faster, that’s the low-hanging fruit. We see that happening now,” Smith said.

Machine learning is “very susceptible to nuances of the data that may not be apparent to you and I. That’s part of the ‘discovery’ aspect of it,” he noted. “It isn’t replacing anybody, but it’s the whole process of the data changing the program.”

Most machine learning now uses supervised learning, which employs an algorithm and a training dataset to “teach” improvement. Through repeated processing, prediction and correction, the machine learns to achieve correct outcomes.
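
As a rough sketch of that training loop (the attribute values and facies labels below are synthetic placeholders, and scikit-learn's random forest merely stands in for whatever classifier a vendor might use), supervised learning on seismic-attribute samples could look like this:

```python
# Minimal sketch of supervised learning on labeled samples (synthetic placeholders).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                       # 1,000 samples x 4 seismic attributes
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # stand-in facies labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)                            # the data modify the model's parameters

# The trained "nugget" can now be applied to other data.
print("held-out accuracy:", clf.score(X_test, y_test))
```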

“Another aspect is that the first, fundamental application of supervised machine learning is in classification,” Smith said.

But, “in the geosciences, we’re not looking for more of the same thing. We’re looking for anomalies,” he observed.

Multidimensional Analysis

The next step in machine learning is unsupervised learning. Its primary goal is to learn more about datasets by modeling the structure or distribution of the data – “to self-discover the characteristics of the data,” Smith said.

“If there are concentrations of information in the data, the unsupervised machine learning will gravitate toward those concentrations,” he explained.

As a result of changes in geology and stratigraphy, patterns are created in the amplitude and attributes generated from the seismic response. Those patterns correspond to subsurface conditions and can be understood using machine-learning and deep-learning techniques, Smith said.

Human seismic interpreters can see only in three dimensions, he noted, but the patterns resulting from multiple seismic attributes are multidimensional. He used the term “attribute space” to distinguish from three-dimensional seismic volumes.
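
A minimal sketch of that idea, with random volumes standing in for real attributes, stacks several attribute volumes into a single sample-by-attribute matrix (the “attribute space”) and lets an off-the-shelf clustering routine gravitate toward the natural concentrations:

```python
# Minimal sketch of "attribute space": random volumes stand in for real attributes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_il, n_xl, n_t = 40, 50, 100                        # inlines, crosslines, time samples
attrs = [rng.normal(size=(n_il, n_xl, n_t)) for _ in range(5)]   # 5 attribute volumes

# Each seismic sample becomes one point in 5-dimensional attribute space.
X = np.stack([a.ravel() for a in attrs], axis=1)
X = StandardScaler().fit_transform(X)

# An unsupervised method gravitates toward the natural concentrations of samples.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
classified = labels.reshape(n_il, n_xl, n_t)         # back to a classification volume
```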

In geophysics, unsupervised machine learning was first used to analyze multiple seismic attributes and classify these patterns, which appear as concentrations of neurons.

“We see the effectiveness of (using multiple) attributes to resolve thin beds in unconventional plays and to expose direct hydrocarbon indicators in conventional settings. Existing computing hardware and software now routinely handle multiple-attribute analysis, with 5 to 10 being typical numbers,” he said.

Machine-learning and deep-learning technology, such as the use of convolutional neural networks (CNN), has important practical applications in oil and gas, Smith noted. For instance, the “subtleties of shale-sand fan sequences are highly suited” to analysis by machine learning-enhanced neural networks, he said.

“Seismic facies classification and fault detection are just two of the important applications of CNN technology that we are putting into our Paradise machine-learning workbench this year,” he said.

A New Commodity

Just as a seismic shoot or a seismic imaging program has monetary value, algorithms enhanced by machine-learning systems also are valuable for the industry, explained Smith.

In the future, “people will be able to buy, sell and exchange machine-learning changes in algorithms. There will be industry standards for exchanging these ‘machine-learning engines,’ if you will,” he said.

As information technology continues to advance, those developments will affect computing and data analysis in oil and gas. Smith said he’s been pleased to see the industry “embracing the cloud” as a shared computing-and-data-storage space.

“An important aspect of this is, the way our industry does business and the way the world does business are very different,” Smith noted.

“When you look at any analysis of Web data, you are looking at many, many terabytes of information that’s constantly changing,” he said.

In a way, the oil and gas industry went to school on very large sets of seismic data when huge datasets were not all that common. Now the industry has some catching up to do with today’s dynamic data-and-processing approach.

For an industry accustomed to thinking in terms of static, captured datasets and proprietary algorithms, that kind of mind-shift could be a challenge.

“There are two things we’re going to have to give up. The first thing is giving up the concept of being able to ‘freeze’ all the input data,” Smith noted.

“The second thing we have to give up is, there’s been quite a shift to using public algorithms. They’re cheap, but they are constantly changing,” he said.

Moving the Industry Forward

Smith will serve as moderator of the opening plenary session, “Business Breakthroughs with Digital Transformation Crossing Disciplines,” at the upcoming Energy in Data conference in Austin, Texas.

Presentations at the Energy in Data conference will provide information and insights for geologists, geophysicists and petroleum engineers, but its real importance will be in moving the industry forward toward an integrated digital transformation, Smith said.

“We have to focus on the aspects of machine-learning impact not just on these three, major disciplines, but on the broader perspective,” Smith explained. “The real value of this event, in my mind, has to be the integration, the symbiosis of these disciplines.”

While the conference should appeal to everyone from a company’s chief information officer on down, recent graduates will probably find the concepts most accessible, Smith said.

“Early-career professionals will get it. Mid-managers will find it valuable if they dig a little deeper into things,” he said.

And whether it’s a transformation or simply part of a larger transition, the coming change in computing and data in oil and gas will be one of many steps forward, Smith said.

“Three years from now we’re going to say, ‘Gosh, we were in the Dark Ages three years ago,’” he said. “And it’s not going to be over.”

Machine Learning Revolutionizing Seismic Interpretation

By Thomas A. Smith and Kurt J. Marfurt
Published with permission: The American Oil & Gas Reporter
July 2017

The science of petroleum geophysics is changing, driven by the nature of the technical and business demands facing geoscientists as oil and gas activity pivots toward a new phase of unconventional reservoir development in an economic environment that rewards efficiency and risk mitigation. At the same time, fast-evolving technologies such as machine learning and multiattribute data analysis are introducing powerful new capabilities in investigating and interpreting the seismic record.

Through it all, however, the core mission of the interpreter remains the same as ever: extracting insights from seismic data to describe the subsurface and predict geology between existing well locations, whether those locations are separated by tens of feet on the same horizontal well pad or by tens of miles in adjacent deepwater blocks. Distilled to its fundamental level, the job of the data interpreter is to determine where (and where not) to drill and complete a well. Getting that million-dollar question right gives oil and gas companies a competitive edge.

The ability to arrive at the right answers in the timeliest manner possible is invariably the force that pushes technological boundaries in seismic imaging and interpretation. The state of the art in seismic interpretation is being redefined partly by the volume and richness of high-density, full-azimuth 3-D surveying methods and processing techniques such as reverse time migration and anisotropic tomography. Combined, these solutions bring a resolution and clarity to processed subsurface images that simply are unachievable using conventional imaging methods. In data interpretation, analytical tools such as machine learning, pattern recognition, multiattribute analysis and self-organizing maps are enhancing the interpreter’s ability to classify, model and manipulate data in multidimensional space.

As crucial as the technological advancements are, however, it is clear that the future of petroleum geophysics is being shaped largely by the demands of North American unconventional resource plays. Optimizing the economic performance of tight oil and shale gas projects is not only impacting the development of geophysical technology, but also dictating the skill sets that the next generation of successful interpreters must possess.

Resource plays shift the focus of geophysics to reservoir development, challenging the relevance of seismic-based methods in an engineering-dominated business environment. Engineering holds the purse strings in resource plays, and the problems geoscientists are asked to solve with 3-D seismic are very different from those in conventional exploration geophysics. Identifying shallow drilling hazards overlying a targeted source rock, mapping the orientation of natural fractures or faults, and characterizing changes in stress profiles or rock properties are related as much to engineering as to geophysics.

Given the requirements in unconventional plays, there are four practical steps to creating value with seismic analysis methods.

The first and most obvious step is for oil and gas companies to acquire 3-D seismic and incorporate the data into their digital databases. Some operators active in unconventional plays fully embrace 3-D technology, while others apply it only selectively. If interpreters do not have access to high-quality data and the tools to evaluate that information, they cannot possibly add value to the company’s bottom line.

The second step is to break the conventional resolution barrier on the seismic reflection wavelet, the so-called quarter-wavelength limit. This barrier arises from the overlapping reflections of seismic energy from the top and bottom of a layer, and depends on layer velocity, thickness and wavelet frequencies. Below the quarter-wavelength limit, the wavelets begin to overlap in time and interfere with one another, making it impossible by conventional means to resolve separate events.

The third step is correlating seismic reflection data–including compressional wave energy, shear wave energy and density–to quantitative rock property and geomechanical information from geology and petrophysics. Connecting seismic data to the variety of very detailed information available at the borehole lowers risk and provides a clearer picture of the subsurface between wells, which is fundamentally the purpose of acquiring a 3-D survey.

The final step is conducting a broad, multiscaled analysis that fully integrates all available data into a single rock volume encompassing geophysical, geologic and petrophysical features. Whether in an unconventional shale or a conventional carbonate, bringing all the data together in a unified rock volume resolves issues in subsurface modeling and enables more realistic interpretations of geological characteristics.
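
As a back-of-the-envelope illustration of the quarter-wavelength limit in the second step (the velocity and frequency below are example values, not taken from the article):

```python
# Back-of-the-envelope tuning-thickness check for the quarter-wavelength limit.
# Velocity and frequency are example values, not taken from the article.
velocity_ft_s = 10000.0                      # interval velocity of the layer, ft/s
dominant_freq_hz = 40.0                      # dominant wavelet frequency, Hz

wavelength_ft = velocity_ft_s / dominant_freq_hz
tuning_thickness_ft = wavelength_ft / 4.0    # beds thinner than this begin to interfere
print(f"tuning thickness ~ {tuning_thickness_ft:.0f} ft")   # ~62 ft for these values
```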

The Role of Technology

Every company faces pressures to economize, and the pressures to run an efficient business only ratchet up at lower commodity prices. The business challenges also relate to the personnel side of the equation, and that should never be dismissed. Companies are trying to bridge the gap between older geoscientists who seemingly know everything and the ones entering the business who have little experience but benefit from mentoring, education and training.

One potential solution is using information technology to capture best practices across a business unit, and then keeping a scorecard of those practices in a database that can offer expert recommendations based on past experience. Keylogger applications can help by tracking how experienced geoscientists use data and tools in their day-to-day workflows. However, there is no good substitute for a seasoned interpreter.

Technologies such as machine learning and pattern recognition have game-changing possibilities in statistical analysis, but as petroleum geologist Wallace Pratt pointed out in the 1950s, oil is first found in the human mind. The role of computing technology is to augment, not replace, the interpreter’s creativity and intuitive reasoning (i.e., the “geopsychology” of interpretation).

Delivering Value

A self-organizing map (SOM) is a neural network-based machine learning process that is applied simultaneously to multiple seismic attribute volumes. The example shown here is a class II amplitude-variation-with-offset response from the top of gas sands, representing the specific conventional geological settings where most direct hydrocarbon indicator characteristics are found. From the top of the producing reservoir, the top image shows a contoured time structure map overlain by amplitudes in color. The bottom image is a SOM classification with low probability (less than 1 percent) denoted by white areas. The yellow line is the downdip edge of the high-amplitude zone designated in the top image.

Consequently, seismic data interpreters need to make the estimates they derive from geophysical data more quantitative and more relatable for the petroleum engineer. Whether it is impedance inversion or anisotropic velocity modeling, the predicted results must add some measure of accuracy and risk estimation. It is not enough to simply predict a higher porosity at a certain reservoir depth. To be of consequence to engineering workflows, porosity predictions must be reliably delivered within a range of a few percentage points, at depths estimated to within plus or minus a specific number of feet.

Class II amplitude-variation-with-offset response from the top of gas sand.

Machine learning techniques apply statistics-based algorithms that learn iteratively from the data and adapt independently to produce repeatable results. The goal is to address the big data problem of interpreting massive volumes of data while helping the interpreter better understand the interrelationships among the different types of attributes contained within 3-D data. The technology classifies attributes by breaking data into what computer scientists call “objects” to accelerate the evaluation of large datasets and allow the interpreter to reach conclusions much faster.

Some computer scientists believe “deep learning” concepts can be applied directly to 3-D prestack seismic data volumes, with an algorithm figuring out the relations between seismic amplitude data patterns and the desired property of interest. While Amazon, Alphabet and others are successfully using deep learning in marketing and other functions, those applications have access to millions of data interactions a day. Given the significantly smaller number of seismic interpreters in the world, and the much greater sensitivity of 3-D data volumes, there may never be sufficient training data to develop deep learning algorithms for 3-D interpretation. The concept of “shallow learning” mitigates this problem.
 
Stratigraphy above the Buda: conventional amplitude seismic display from a northwest-to-southeast seismic section across a well location, contrasted with SOM results using multiple instantaneous attributes.

First, 3-D seismic data volumes are converted to well-established measures of waveform shape, continuity, orientation, and response with offset and azimuth that have proven relations (“attributes”) to porosity, thickness, brittleness, fractures and/or the presence of hydrocarbons. This greatly simplifies the problem, with the machine learning algorithms only needing to find simpler (i.e., shallower) relations between the attributes and the properties of interest.

In resource plays, seismic data interpretations increasingly are based on statistical rather than deterministic predictions. In development projects with hundreds of wells within a 3-D seismic survey area, operators rely on the interpreter to identify where to drill and to predict how a well will complete and produce. Given the many known and unknown variables that can impact drilling, completion and production performance, the challenge lies in figuring out how to use statistical tools to apply data measurements from previous wells to estimate the performance of the next well drilled within the 3-D survey area. Therein lies the value proposition of any kind of science, geophysics included.

The value of applying machine learning-based interpretation boils down to one word: prediction. The goal is not to score 100 percent accuracy, but to enhance the predictions made from seismic analysis to avoid drilling uneconomic or underproductive wells. Avoiding investments in even a couple of bad wells can pay for all the geophysics needed to make those predictions. And because the statistical models are updated with new data as each well is drilled and completed, the results continually become more quantitative, improving prediction accuracy over time.
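
A hedged sketch of that shallow-learning idea, using synthetic traces and a stand-in porosity relationship rather than real well data, might compute a pair of standard instantaneous attributes and relate them to the property of interest with a simple regressor:

```python
# Hedged sketch of "shallow learning": pre-computed attributes, not raw amplitudes,
# are related to a property of interest. Traces, horizon index and the porosity
# relationship are synthetic placeholders, not the authors' workflow.
import numpy as np
from scipy.signal import hilbert
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
traces = rng.normal(size=(500, 251))                  # 500 traces x 251 time samples
dt = 0.002                                            # 2 ms sample interval

analytic = hilbert(traces, axis=1)
envelope = np.abs(analytic)                           # reflection strength attribute
phase = np.unwrap(np.angle(analytic), axis=1)
inst_freq = np.gradient(phase, dt, axis=1) / (2 * np.pi)   # instantaneous frequency

k = 120                                               # hypothetical mapped-horizon sample
X = np.column_stack([envelope[:, k], inst_freq[:, k]])
porosity = 0.08 + 0.02 * envelope[:, k] + rng.normal(scale=0.01, size=500)  # stand-in

model = GradientBoostingRegressor().fit(X, porosity)  # shallow relation: attributes -> property
print("R^2 on the training attributes:", round(model.score(X, porosity), 3))
```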

New Functionalities

In terms of particular interpretation functionalities, three specific concepts are being developed around machine learning capabilities:

  • Evaluating multiple seismic attributes simultaneously using self-organizing maps (multiattribute analysis);
  • Relating in multidimensional space natural clusters or groupings of attributes that represent geologic information embedded in the data; and
  • Graphically representing the clustered information as geobodies to quantify the relative contributions of each attribute in a given seismic volume in a form that is intrinsic to geoscientific workflows.

A 3-D seismic volume contains numerous attributes, each expressed as a mathematical construct representing a class of data from simultaneous analysis. An individual class of data can be any measurable property that is used to identify geologic features, such as rock brittleness, total organic carbon or formation layering. Supported by machine learning and neural networks, multiattribute technology enhances the geoscientist’s ability to quickly investigate large data volumes and delineate anomalies for further analysis, locate fracture trends and sweet spots in shale plays, identify geologic and stratigraphic features, map subtle changes in facies at or even below conventional seismic resolution, and more. The key breakthrough is that the new technology works on machine learning analysis of multiattribute seismic samples.

While applied exclusively to seismic data at present, there are many types of attributes contained within geologic, petrophysical and engineering datasets. In fact, literally any type of data that can be put into rows and columns on a spreadsheet is applicable to multiattribute analysis. Eventually, multiattribute analysis will incorporate information from different disciplines and allow all of it to be investigated within the same multidimensional space.

That leads to the second concept: using machine learning to organize and evaluate natural clusters of attribute classes. If an interpreter is analyzing eight attributes in an eight-dimensional space, the attributes can be grouped into natural clusters that populate that space.

The third component is delivering the information found in the clusters in high-dimensionality space in a form that quantifies the relative contribution of the attributes to each class of data, such as simple geobodies displayed with a 2-D color index map. This approach allows multiple attributes to be mapped over large areas to obtain a much more complete picture of the subsurface, and it has demonstrated the ability to achieve resolution below conventional seismic tuning thickness.

For example, in an application in the Eagle Ford Shale in South Texas, multiattribute analysis was able to match 24 classes of attributes within a 150-foot vertical section across 200 square miles of a 3-D survey. Using these results, a stratigraphic diagram of the seismic facies has been developed over the entire survey area to improve geologic predictions between boreholes and, ultimately, correlate seismic facies with rock properties measured at the boreholes. Importantly, the mathematical foundation now exists to demonstrate the relationships among the different attributes and how they tie with pixel components in geobody form using machine learning. Understanding how the attribute data mathematically relate to one another and to geological properties gives geoscientists confidence in the interpretation results.

Leveraging Integration

The term “exploration geophysics” is becoming almost a misnomer in North America, given the focus on unconventional reservoirs and how seismic methods are being used in these plays to develop rather than find reservoirs. With seismic reflection data being applied across the board in a variety of ways and at different resolutions in unconventional development programs, operators are combining 3-D seismic with data from other disciplines into a single integrated subsurface model. Fully leveraging the new sets of statistical and analytical tools to make better predictions from integrated multidisciplinary datasets is crucial to reducing drilling and completion risk and improving operational decision making.

Multidimensional classifiers and attribute selection lists using principal component analysis and independent component analysis can be used with geophysical, geological, engineering, petrophysical and other attributes to create general-purpose multidisciplinary tools of benefit to all oil and gas company departments and disciplines. As noted, the integrated models used in resource plays increasingly are based on statistics, so any evaluation to develop the models also needs to be statistical. In the future, a basic part of conducting a successful analysis will be the ability to understand statistical data and how the data can be organized to build more tightly integrated models.

And if oil and gas companies require more integrated interpretations, it follows that interpreters will have to possess more integrated skills and knowledge. The geoscientist of tomorrow may need to be more of a multidisciplinary professional with the blended capabilities of a geologist, geophysicist, engineer and applied statistician. But whether a geoscientist is exploring, appraising or developing reservoirs, he or she can only be as good as the prediction of the final model. By applying technologies such as machine learning and multiattribute analysis during the workup, interpreters can use their creative energies to extract more knowledge from their data and make more knowledgeable predictions about undrilled locations.

THOMAS A. SMITH is president and chief executive officer of Geophysical Insights, which he founded in 2008 to develop machine learning processes for multiattribute seismic analysis. Smith founded Seismic Micro-Technology in 1984, focused on personal computer-based seismic interpretation. He began his career in 1971 as a processing geophysicist at Chevron Geophysical. Smith is a recipient of the Society of Exploration Geophysicists’ Enterprise Award, Iowa State University’s Distinguished Alumni Award and the University of Houston’s Distinguished Alumni Award for Natural Sciences and Mathematics. He holds a B.S. and an M.S. in geology from Iowa State, and a Ph.D. in geophysics from the University of Houston.
KURT J. MARFURT is the Frank and Henrietta Schultz Chair and Professor of Geophysics in the ConocoPhillips School of Geology & Geophysics at the University of Oklahoma. He has devoted his career to seismic processing, seismic interpretation and reservoir characterization, including attribute analysis, multicomponent 3-D, coherence and spectral decomposition. Marfurt began his career at Amoco in 1981. After 18 years of service in geophysical research, he became director of the University of Houston’s Center for Applied Geosciences & Energy. He joined the University of Oklahoma in 2007. Marfurt holds an M.S. and a Ph.D. in applied geophysics from Columbia University.

Seismic Interpretation with Machine Learning

By: Rocky Roden, Geophysical Insights, and Deborah Sacrey, Auburn Energy
Published with permission: GeoExPro Magazine
December 2016

Today’s seismic interpreters must deal with enormous amounts of information, or ‘Big Data’, including seismic gathers, regional 3D surveys with numerous processing versions, large populations of wells and associated data, and dozens if not hundreds of seismic attributes that routinely produce terabytes of data. Machine learning has evolved to handle Big Data, incorporating computer algorithms that iteratively learn from the data and independently adapt to produce reliable, repeatable results.

Multi-attribute analyses employing principal component analysis (PCA) and self-organizing maps are components of a machine-learning interpretation workflow (Figure 1) that involves the selection of appropriate seismic attributes and the application of these attributes in an unsupervised neural network analysis, also known as a self-organizing map, or SOM. This identifies the natural clustering and patterns in the data and has been beneficial in defining stratigraphy, seismic facies, DHI features, sweet spots for shale plays, and thin beds, to name just a few successes. Employing these approaches and visualizing SOM results with 2D color maps reveal geologic features not previously identified or easily interpreted from conventional seismic data.

Steps 1 and 2: Defining Geologic Problems and Multiple Attributes

Seismic attributes are measurable properties of seismic data, produced to help enhance or quantify features of interpretation interest. There are hundreds of types of seismic attributes, and interpreters routinely wrestle with evaluating these volumes efficiently while striving to understand how they relate to each other.

The first step in a multi-attribute machine-learning interpretation workflow is for the geoscientist to identify the problem to resolve. This is important because the appropriate set of attributes must be chosen for the interpretation objective (facies, stratigraphy, bed thickness, DHIs, etc.). If it is unclear which attributes to select, a principal component analysis (PCA) may be beneficial. This is a linear mathematical technique that reduces a large set of variables (seismic attributes) to a smaller set that still contains most of the variation of independent information in the larger dataset. In other words, PCA helps determine the most meaningful seismic attributes.

Figure 1: Multi-attribute machine learning interpretation workflow with principal component analysis (PCA) and self-organizing maps (SOM).

Figure 2 is a PCA analysis from Paradise® software by Geophysical Insights, where 12 instantaneous attributes were input over a window encompassing a reservoir of interest. The following figures also include images of results from Paradise. Each bar in Figure 2a denotes the highest eigenvalue for one of the inlines in this survey. An eigenvalue shows how much variance there is in its associated eigenvector, and an eigenvector is a direction showing a principal spread of attribute variance in the data. The PCA results from the selected red bar in Figure 2a are denoted in Figures 2b and 2c. Figure 2b shows the principal components from the selected inline over the zone of interest, with the highest eigenvalue (first principal component) indicating the seismic attributes contributing to the largest variation in the data. The percentage contribution of each attribute to the first principal component is designated. In this case the top four seismic attributes represent over 94% of the variance of all the attributes employed. These four attributes are good candidates to be employed in a SOM analysis. Figure 2c displays the percentage contribution of the attributes for the second principal component. The top three attributes contribute over 68% to the second principal component. PCA is a measure of the variance of the data, but it is up to the interpreter to determine and evaluate how the results and associated contributing attributes relate to the geology and the problem to be resolved.

Figure 2: Principal Component Analysis (PCA) results from 12 seismic attributes: (a) bar chart with each bar denoting the highest eigenvalue for its associated inline over the displayed portion of the seismic 3D volume, with the red bar designating the inline whose results are shown in 2b and 2c; (b) first principal component designated in orange, with the associated seismic attribute contributions to the right; and (c) second principal component in orange, with the seismic attribute contributions to the right. The highest-contributing attributes for each principal component are possible candidates for a SOM analysis, depending on the interpretation goal.
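
To make the PCA mechanics concrete, the short sketch below uses scikit-learn on correlated random numbers standing in for 12 real attribute volumes; it is illustrative only and not the Paradise implementation:

```python
# Hedged PCA sketch: correlated random numbers stand in for 12 attribute volumes.
# Illustrative only; not the Paradise implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
latent = rng.normal(size=(20000, 3))                   # a few underlying "geologic factors"
X = latent @ rng.normal(size=(3, 12)) + 0.2 * rng.normal(size=(20000, 12))
X = StandardScaler().fit_transform(X)                  # 20,000 samples x 12 attributes

pca = PCA().fit(X)
print("variance explained by each component:", np.round(pca.explained_variance_ratio_, 3))

# Relative contribution of each attribute to the first principal component.
loadings = np.abs(pca.components_[0])
print("PC1 attribute contributions (%):", np.round(100 * loadings / loadings.sum(), 1))
```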

Steps 3 and 4: SOM Analysis and Interpretation

The next step in the multi-attribute interpretation process requires pattern recognition and classification of the often subtle information embedded in the seismic attributes. Taking advantage of today’s computing technology, visualization techniques and an understanding of the appropriate parameters, self-organizing maps, developed by Teuvo Kohonen in 1982, efficiently distill multiple seismic attributes into classification and probability volumes. SOM is a powerful non-linear cluster analysis and pattern recognition approach that helps interpreters identify patterns in their data, some of which can relate to desired geologic characteristics. The tremendous number of samples from numerous seismic attributes exhibits significant organizational structure. SOM analysis identifies these natural organizational structures in the form of natural attribute clusters. These clusters reveal significant information about the classification structure of natural groups that is difficult to view any other way.

Figure 3 describes the SOM process used to identify geologic features in a multi-attribute machine-learning methodology. In this case, 10 attributes were selected to run in a SOM analysis over a specific 3D survey, which means that 10 volumes of different attributes are input into the process. All the values from every sample in the survey are input into attribute space, where they are normalized or standardized to the same scale. The interpreter selects the number of patterns or clusters to be delineated. In the example in Figure 3, 64 patterns are to be determined and are designated by 64 neurons. After the SOM analysis, the results are mapped nonlinearly to a 2D color map showing the 64 neurons.

Figure 3: How SOM works (10 seismic attributes)

At this point, the interpreter evaluates which neurons and associated patterns in 3D space define features of interest. Figure 4 displays the SOM results, where four neurons have highlighted not only a channel system but details within that channel. The next step is to refine the interpretation, perhaps using different combinations of attributes and/or different neuron counts. For example, better defining the details in the channel system of Figure 4 may require increasing the neuron count to 100 or more. The scale of the geologic feature of interest is related to the number of neurons employed; low neuron counts reveal larger-scale features, whereas a high neuron count defines much more detail.

Figure 4: SOM analysis interpretation of channel feature with 2D color map
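
For readers curious about the mechanics behind Figures 3 and 4, the compact sketch below implements a small SOM directly in NumPy. It is illustrative only, not the Paradise implementation, and the attribute samples are random placeholders:

```python
# Compact, hand-rolled SOM sketch (illustrative only, not the Paradise implementation).
# 'X' stands in for standardized samples drawn from 10 seismic attribute volumes.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(10000, 10))                     # samples x 10 attributes
rows, cols, n_iter = 8, 8, 30000                     # 8 x 8 grid = 64 neurons
W = rng.normal(size=(rows * cols, X.shape[1]))       # one weight vector per neuron
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

for t in range(n_iter):
    lr = 0.5 * (1.0 - t / n_iter)                    # decaying learning rate
    sigma = 0.5 + 3.0 * (1.0 - t / n_iter)           # shrinking neighborhood radius
    x = X[rng.integers(len(X))]                      # one random attribute sample
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))      # best-matching unit (winning neuron)
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)       # grid distance to the winner
    h = np.exp(-d2 / (2.0 * sigma ** 2))             # Gaussian neighborhood function
    W += lr * h[:, None] * (x - W)                   # pull the neighborhood toward the sample

# Classify every sample by its winning neuron; the 64 labels index a 2-D color map.
labels = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
```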

Workflow Examples

Figure 5 shows the SOM classification from an offshore Class 3 AVO setting where direct hydrocarbon indicators (DHIs) should be prevalent. The four attributes listed for this SOM run were selected from the second principal component in a PCA analysis. This SOM analysis clearly identified flat spots associated with a gas/oil and an oil/water contact. Figure 5 displays a line through the middle of a field where the SOM classification identified these contacts, which were verified by well control. The upper profile indicates that 25 neurons were employed to identify 25 patterns in the data. The lower profile indicates that only two neurons are identifying the patterns associated with the hydrocarbon contacts (flat spots). These hydrocarbon contacts were difficult to interpret with conventional amplitude data.

Figure 5: SOM results defining hydrocarbon contacts on a seismic line through a field. Attributes chosen for the identification of flat spots were 1. instantaneous frequency; 2. thin bed indicator; 3. acceleration of phase; 4. dominant frequency

The profile in Figure 6 displays a SOM classification where the colors represent individual neurons, with a wiggle-trace variable area overlay of the conventional amplitude data. This play relates to a series of thin strandline sand deposits. These sands are located in a very weak trough on the conventional amplitude data and essentially have no amplitude expression. The SOM classification employed seven seismic attributes, which were determined from the PCA analysis. A 10x10 matrix of neurons, or 100 neurons, was employed for this SOM classification. The downdip well produced gas from a 6’ thick sand that confirmed the anomaly associated with a dark brown neuron in the SOM analysis. The inset for this sand indicates that the SOM analysis has identified this thin sand down to a single sample, which is 1 ms (5’) for these data. The updip well on the profile in Figure 6 shows a thin oil sand (~6’ thick) associated with a lighter brown neuron, with another possible strandline sand slightly downdip. This SOM classification defines very thin beds and employs several instantaneous seismic attributes that measure energy in time and space outside the realm of conventional amplitude data.

Figure 6: SOM results showing thin beds in a strandline setting

Geology Defined

The implementation of a multi-attribute machine-learning analysis is not restricted to any geologic environment or setting. SOM classifications have been employed successfully both onshore and offshore, in hard rocks and soft rocks, in shales, sands and carbonates, and, as demonstrated above, for DHIs and thin beds. The major limitations are the seismic attributes selected and their inherent data quality. SOM is a non-linear classifier that takes advantage of finely sampled data and is not burdened by typical amplitude resolution limitations. This machine learning approach to seismic interpretation has been very successful in distilling numerous attributes to identify geologic objectives and has provided the interpreter with a methodology for dealing with Big Data.

ROCKY RODEN owns his own consulting company, Rocky Ridge Resources Inc., and works with several oil companies on technical and prospect evaluation issues. He also is a principal in the Rose and Associates DHI Risk Analysis Consortium and was Chief Consulting Geophysicist with Seismic Micro-Technology. He is a proven oil finder (36 years in the industry) with extensive knowledge of modern geoscience technical approaches (past Chairman – The Leading Edge Editorial Board). As Chief Geophysicist and Director of Applied Technology for Repsol-YPF, his role comprised advising corporate officers, geoscientists, and managers on interpretation, strategy and technical analysis for exploration and development in offices in the U.S., Argentina, Spain, Egypt, Bolivia, Ecuador, Peru, Brazil, Venezuela, Malaysia, and Indonesia. He has been involved in the technical and economic evaluation of Gulf of Mexico lease sales, farmouts worldwide, and bid rounds in South America, Europe, and the Far East. Previous work experience includes exploration and development at Maxus Energy, Pogo Producing, Decca Survey, and Texaco. He holds a BS in Oceanographic Technology-Geology from Lamar University and an MS in Geological and Geophysical Oceanography from Texas A&M University. Rocky is a member of SEG, AAPG, HGS, GSH, EAGE, and SIPES.
DEBORAH SACREY is a geologist/geophysicist with 39 years of oil and gas exploration experience in the Texas and Louisiana Gulf Coast and Mid-Continent areas. For the past three years, she has been part of a Geophysical Insights team working to bring the power of multiattribute neural analysis of seismic data to the geoscience public. Sacrey received a degree in geology from the University of Oklahoma in 1976 and immediately started working for Gulf Oil. She started her own company, Auburn Energy, in 1990, and built her first geophysical workstation using Kingdom software in 1995. She specializes in 2-D and 3-D interpretation for clients in the United States and internationally. Sacrey is a DPA-certified petroleum geologist and DPA-certified petroleum geophysicist.

 

Seismic Computing – Advances Opening New Possibilities

By: Kari Johnson, Special Correspondent
Published with permission: The American Oil & Gas Reporter
November 2015

Permanent sensors both on land and on the seafloor are collecting a new stream of seismic data that can be used for repeated active seismic, microseismic analysis, and continuous passive monitoring. Distributed acoustic sensors (DAS) record continuous seismic data very cheaply, taking another quantum step in the amount of data coming from the reservoir during exploration, development and production.

These are just two examples of how dramatically the volume of technical data is rising, says Biondo Biondi, professor of geophysics at Stanford University. “The big change taking place is in the breadth of data we can get with different kinds of sensors,” he states. “Beyond seismic, there are streams of data from sensors measuring temperature, pressure, flow, and other physical information. This is putting a strain on computational capability, but it does open the possibility of a lot of integration of geophysical and other data.”

Data sources are evolving rapidly, becoming less expensive and providing denser data. “One Stanford student is experimenting with rotational sensors that record six or seven components,” says Biondi. “Others are working with both active and passive DAS data.”

When and how to process those data are also subjects of study. “A simple DAS fiber creates terabytes of data every day,” he explains. “It is unlikely that all of the data can move in bulk across the network. Instead, some amount of real-time processing will be needed near the source.”

In addition, while DAS arrays offer a low-cost way to collect dense acoustic data passively or actively, data quality is lower than from conventional geophones, Biondi says. “The challenge in this case is to get high-quality insight from low-quality data.”

While cloud computing certainly is proving useful in meeting some industry needs, Biondi says it may be more appropriate to keep the data “closer to the ground” because of its volume and proprietary nature. “Fog computing is the term for this mixed model,” he relates.

Data collected from DAS acquisition may be preprocessed locally, near the acquisition center, then sent to the cloud for analysis, and then into the hands of the interpreter, he speculates. “The more channels and better data collected, the more accurate the wave field capture,” he comments. “This will speed the transition from seismic processing to waveform imaging. The interpreter and processor can interact in a feedback loop. Many types of geological and geophysical information could be part of the fog computing process.”

Reservoir-Centered Geology

Another ongoing trend in geophysics is a push to place more emphasis on reservoir-centered geology, according to Biondi. “The goal is tighter integration of reservoir properties, geomechanics, seismic, petrophysics, etc. One student constrained anisotropic parameter estimation using petrophysical data and well logs and models. Some students are constraining attenuation and connecting seismic with geomechanics, including reservoir compaction and overburden stretching. Others are working with reservoir engineers to model fluid flows that include geomechanical effects,” he notes.

“As we move toward waveform inversion, we no longer are dealing with ‘magic’ processing parameters, but with more description of the geology,” says Biondi. “That allows us to bring quantitative information into seismic imaging.”

That includes unconventional plays, where Biondi says integrated reservoir analysis soon could be performed in real time to guide well planning, drilling, completion and fracturing design decisions.

An important step in data analysis is merging statistical data analytics with physics-based analysis. “Traditional seismic imaging is based on the physics of waveform propagation, fluid flow modeling is based on physics, and geomechanical analysis is based on mechanical modeling,” Biondi remarks. “By adding details about the physics and geology, we can point researchers in the direction of physical phenomena or geological settings where a different understanding of the geology and physics is needed.”

Integrating Data And Processes

The industry is finding tremendous value in integrating data and multidisciplinary processes, says Kamal Al-Yahya, senior vice president at CGG GeoSoftware. Traditional tools for reservoir characterization and petrophysical analysis were essentially siloed by data type and discipline. Geophysicists worked with seismic data, geologists worked with petrophysical data, and drilling and reservoir departments worked with engineering data.

“The associated applications for each domain can be best in class, but workflows still can suffer from addressing only part of the data spectrum and serving only a segment of the different disciplines involved,” he observes. “Industry professionals would like to work together more to improve efficiency and build on one another’s ideas. That requires integration.”

Integration at the workflow level lets users access several applications in interpretation and design workflows without having to move data, he explains, referencing the example of a smart phone where contact data are used by many applications from a single source. “Users in various disciplines can begin to collaborate. Normally they have different perspectives,” Al-Yahya says. “Everybody can be looking at the same data, but users in each discipline will see them differently based on their areas of expertise.”

While upstream software applications tend to be highly scientific and complex, Al-Yahya says new computing technologies are making applications easier to use. “A complex application does not have to have a complex interface,” he holds. “Simpler interfaces support collaboration between geographically dispersed experts and across disciplines.”

Automation is an important step toward reducing interface complexities. Al-Yahya points out that processing algorithms at the front end of seismic analysis have automated the removal of survey footprints and the tracking of geologic features. “Artifacts introduced by sources and receivers during acquisition are automatically removed, substantially relieving the burden on interpreters who used to spend hours meticulously correcting the data,” he notes. “Geologic features are identified automatically, allowing interpreters to navigate through dips, staying on a specific feature even through complex geology.”

These and other automated capabilities save time and help interpreters avoid mental fatigue. “If you spend all your time picking features, there is no time or energy left for analysis,” Al-Yahya observes.

In geostatistical applications, generating and evaluating multiple realizations used to be a processing bottleneck. But processing time has been shortened dramatically by harnessing multiple central processing units, and ranking tools help interpreters sift through hundreds of plausible realizations looking for the most probable, Al-Yahya continues.

“Interpreters focus their energies on adding insight to the process and make adjustments to the initial automated ranking. In this way, technology and interpreter skills are both optimized, leading to improved reservoir characterization,” he concludes.

Software As A Service

Lower computing infrastructure costs enable operators to measure well performance and manage facilities more efficiently, says Oscar Teoh, vice president of operations at iStore. At the same time, easy-to-adopt-and-use devices have become ubiquitous, and users are accustomed to accessing applications with them. This combination of new measures and new access technologies has led to the desire for software as a service (SaaS) applications, he adds. SaaS apps are available over the Internet and simplify the process of distributing access to data.

This new generation of applications fosters collaboration, putting people literally on the same page for tasks ranging from monitoring well performance to forecasting and economics. “Every aspect of operations can be improved with greater access by people in the field and head office,” Teoh says. “Another side of this is the crew change we have been going through,” he adds. “We need to build a wider network of collaboration to keep the expertise available.”

One of the key concepts of SaaS is that it brings the work to people, not the people to work. “When you have this efficiency, the return on investment is high because you do not need a full-time expert. Instead, you have people that you can federate as needed,” explains Teoh.

SaaS also enables users to choose the tool appropriate to them. “Tablets, desktops and collaborative spaces are simply tools that can be used for the right occasion,” he says. “What used to be available on specialized systems is now available on common devices such as smart phones. For example, 3-D images that used to cost millions and require immersive visualization rooms now are available easily through the Internet on affordable platforms that enable users to easily interact with and manipulate subsurface views, such as producing formations and wellbore locations.”

Using software as a service applications, even complex 3-D images are available through the Internet on affordable platforms that let users easily interact with and manipulate subsurface views, such as producing formations and wellbore locations. Shown here is a Web-based 3-D visualization of multilateral wellbores on a seismic horizon structure map.

The best collaborative tools foster and support two-way interaction, where users can touch and move, poke and point, and change data, says Teoh. Optimization in the application enables this interactivity by smartly caching data on the device and selectively transmitting data. Individual workspaces allow users to create and share their own views and edits without affecting the master version.

Standardization and data governance are the underpinnings of effective collaboration. Enforcing rules of ownership and validating data sources are essential to ensuring that the right information is accessed by the right users. Data management is a journey, not a destination, says Teoh. SaaS applications harness the power of the Internet using Web and data services to connect distinct and different databases collected for specific purposes.

“Using Web technology, supervisory control and data acquisition data, production data, regulatory reporting data and other data sources can be brought together in a collaborative space for strategic and tactical decision making,” Teoh remarks. “SaaS applications tend to focus on the essentials, avoiding feature overload and providing a more efficient and reliable solution.”

Massive Parallelization

There are two critical factors for efficient high-performance computing (HPC) in seismic processing, according to Charles Sicking, Global Geophysical’s vice president of research and development. The first is turnaround time. In a business where time is literally money, he says operators place a premium on the speed as well as the accuracy of processed results. And that leads to the second factor: quality.

“Quality increases dramatically when clients participate earlier and more often in the processing,” says Sicking. “With faster turnaround times, it becomes reasonable to increase the number of quality reviews. Quality goes sky high when clients get to look at the data in different ways and do more tests over the course of a project.”

Massive parallelization has significantly improved both of these factors, according to Sicking. Parallelization enables simultaneous multinode computations and data access to make processes extremely efficient and save weeks in turnaround time. He says that highly parallelized disk systems enable two kinds of parallelism schemes for seismic processing.

The simplest is coarse-grained parallelization, whereby each CPU on each node runs the same software application against different parts of the data. In this method, there is no intercommunication between the CPUs, and they do not share memory or compute power. A dataset split across 1,000 CPUs can be processed 1,000 times faster, calculates Sicking.

The second kind is fine-grained parallelism, in which one application runs on a node with multiple CPUs. The application processes one piece of the data using all the CPUs on one node simultaneously. This capability is used extensively for computationally-intensive processes such as reverse-time migration, he notes.

Both kinds of parallelization can be combined by putting a coarse-grained wrapper around a fine-grained application, Sicking says. Then, for example, a seismic volume containing 50,000 shots can run on 100 nodes, with each node processing 500 shots in parallel.
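
A schematic sketch of that coarse-around-fine arrangement, scaled down for illustration and using placeholder shot gathers rather than any particular vendor's system, might look like this:

```python
# Schematic of a coarse-grained wrapper around fine-grained work, scaled down for
# illustration. Shot counts, block sizes and the gather "processing" are placeholders.
import numpy as np
from multiprocessing import Pool

def process_shot_block(shot_ids):
    """Fine-grained part: vectorized NumPy work on one block of shot gathers."""
    rng = np.random.default_rng(int(shot_ids[0]))
    gathers = rng.normal(size=(len(shot_ids), 48, 500))   # shots x traces x time samples
    # Stand-in compute kernel (a real job would apply filtering, migration, etc.).
    return float(np.abs(np.fft.rfft(gathers, axis=-1)).mean())

if __name__ == "__main__":
    all_shots = np.arange(1000)                 # e.g., 50,000 shots in a production run
    blocks = np.array_split(all_shots, 10)      # coarse grain: one block per node/worker
    with Pool(processes=4) as pool:
        results = pool.map(process_shot_block, blocks)
    print("processed", len(results), "blocks")
```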

Highly parallelized disk systems are key to effective parallelization, according to Sicking. Disk storage systems have inherent physical limitations on the speed of data access. “To bypass this limitation, highly parallelized disk systems have many blades with trays holding disks,” he explains. “Each blade has a computer, and all blades communicate and interface with the dataset, which is distributed across hundreds of hard drives. Requests for data are executed in a way that increases disk input/output up to 1,000 times compared with the serial access on single hard drives.”

Data access is fast enough that even datasets with many terabytes can be accessed efficiently, he notes. “When we changed the parallelization of our ambient seismic processing algorithm, the run time went from 2,100 down to 40 equivalent node days on the first large dataset,” Sicking reports. “That huge improvement dramatically shortened turnaround time.”

As another example, Global Geophysical’s seismic imaging application for horizontal transverse isotropy scanning requires very large compute resources, says Sicking. “Our system application uses parallelization to break the computation into small pieces, allowing hundreds of segments to run in parallel. Using this method, many parallel jobs can run simultaneously on hundreds of nodes, allowing for the timely delivery of advanced processing products such as inversion-ready gathers,” he says.

A third form of parallelization is to load the entire dataset into memory on many nodes and use all of the CPUs of all nodes to process that dataset. “This method is very useful for transposing multidimensional datasets to change the framework of the data structure. To run effectively, the entire dataset must be accessible simultaneously,” says Sicking. “In a parallelized system, the algorithm shuffles the data until they are completely transposed in memory, and then outputs to the disk system with the new data structure,” he concludes.

Big Data Analytics

“The oil and gas industry is working hard to catch up to the advances in information technology,” says Scott Oelfke, product manager at LMKR, who notes that big data analytics already are being used successfully in finance, manufacturing and retail. One area where Oelfke says he sees some early experimentation with big data technology is in production optimization in unconventional reservoirs.

“With tools such as the open-source Hadoop and SAP’s in-memory HANA platform, the technology exists to leverage big data analytics. If upstream operators can figure out the right questions to ask and what datasets to use, they can get more value from their geological and geophysical data.”

Another area where Oelfke sees advancement is in managing large data volumes on corporate networks. That is where advanced seismic attribute tools come in, generating high-quality attributes from huge 3-D volumes, he says.

“In the past, this process was very time consuming. Today, attributes can be generated using the graphics processing unit and previewed in real time to let interpreters key in on exactly the attribute of interest. The volume can be generated immediately,” he elaborates. “Instead of taking two or three days to generate 12-15 volumes for review, only one volume is created and the process completes in an hour or sooner.”

The processing power in this scenario comes from gaming technology. High-end visualization is cheaper than ever, commoditized by the gaming industry. “Thanks to the power of the GPU, processing and visualizing complex subsurface geology is very fast,” Oelfke states.

To illustrate the sheer volume of data that interpreters must contend with, consider the typical number of wells in a project. “Twenty years ago, 500 wells in a project was a lot of wells, but 500,000 wells are not uncommon today,” says Oelfke. “The scale of these plays is creating huge volumes of data.”

Geosteering is another area benefiting from emerging Web technologies such as HTML5 (the fifth revision of the hypertext markup language standard), and the open-source Angular Web application framework, Oelfke points out. “Moving geosteering to the Web lets operators steer wells anywhere, anytime, 24 hours a day, seven days a week,” he says. “A Web-based tool gives geoscientists the flexibility to get their work done in the office, at home or on the road. It gives these folks their lives back.”

Internet Of Things

Various technologies are converging in ways that result in massive quantities of data being generated in most industries today, but the oil and gas industry has a unique challenge with the types of data being collected as well as the quantity of data, says Felix Balderas, director of technology and product development at Geophysical Insights.

“We need to have the tools to analyze multivariate data because traditional tools were not designed for what is happening with data today,” he remarks. “From upstream to downstream, we are seeing an increased use of data-generating sensors and other devices.”

These devices often are equipped with flash storage, which makes them more rugged, expands their storage capacity, and speeds acquisition and transmission, and they are interconnected, Balderas points out.

“This and other increased capacities have produced larger data volumes than we have seen in the past,” he says, adding that the emerging Internet of Things (IoT) opens the possibility of tracking data from all aspects of an operation in real time. “This could provide valuable insights, if the proper tools are available to exploit this information.”

In the seismic acquisition sector, massive volumes of data are generated to create datasets with sizes in terms of terabytes and petabytes, Balderas notes. “These must be analyzed by interpreters, but many of the tools interpreters use were developed when a dataset measured in gigabytes was considered big,” he says. “Fortunately, desktop workstations are keeping pace with performance requirements in most cases, but the challenge continues of how to extract knowledge in a manner that is efficient and effective, given the quantity of data now available.”

Geophysical Insights’ Paradise multiattribute analysis software uses learning machine technology to extract more information from seismic data than is possible using traditional interpretation tools because it learns the data at full seismic resolution

Among the potential solutions are analytical and statistical techniques that cross-correlate apparently disparate data types to find previously unseen relationships. These relationships can help optimize the selection of datasets, such as seismic attributes, and reveal patterns that reduce the time needed to identify strategically important geological areas of interest.

“Traditionally, interpreters looked for geological patterns as much visually as numerically, manually picking points to identify geological features. This was a slow and error-prone technique that introduced human bias. The solutions we are developing are based on learning machine (LM) technology,” Balderas says. “Paradise®, the multiattribute analysis software that applies LM technology, extracts more information from seismic data than is possible using traditional interpretation tools because it learns the data at full seismic resolution. And, unlike human interpreters, Paradise is not limited to viewing only two or three attributes at a time.”

What makes LM algorithms different from imperative programming algorithms is that LM can learn from the data, rather than following a set of predefined instructions. Driverless cars, for example, must be able to recognize any stoplight encountered on a route. “There is no way to describe, using instructions, every possible intersection and stoplight configuration,” Balderas explains. “Sooner or later, the car will encounter a stoplight it has not seen before. With LM algorithms, the car will recognize a pattern and adjust what it knows about stoplights for future reference.”

A similar process of pattern recognition and machine learning techniques can shorten the time for extracting knowledge from geophysical data, he contends. “Applied to a volume of geophysical data, the algorithm looks for patterns that reveal geological features, which is essentially what interpreters do,” notes Balderas.

He adds that the speed of pattern recognition is crucial to generating value. “Learning machines can quickly locate faults, horizons and other geological features for the interpreter to review,” Balderas states. “There is no technological substitute for an experienced interpreter, but this ‘candidate feature’ finding approach helps the interpreter focus his work on areas with the greatest potential.”
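The “candidate feature” idea can be sketched with generic, off-the-shelf pattern recognition. The example below is not Paradise or Geophysical Insights’ algorithm; it simply clusters per-sample attribute vectors with scikit-learn’s KMeans on synthetic data, so that each cluster label becomes a candidate class for an interpreter to review.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative stack of attribute volumes, shape (n_attributes, inlines, xlines, samples).
rng = np.random.default_rng(0)
attributes = rng.standard_normal((4, 16, 16, 200)).astype(np.float32)

# Flatten to one row per seismic sample and one column per attribute,
# then let an unsupervised algorithm find natural groupings in the data.
n_attr = attributes.shape[0]
X = attributes.reshape(n_attr, -1).T

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

# Map the cluster labels back onto the volume geometry; each label class is a
# "candidate feature" for the interpreter to accept, reject or refine.
label_volume = labels.reshape(attributes.shape[1:])
print(label_volume.shape, np.bincount(labels))
```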

Delighting in Geophysics

Q&A with Dr. Thomas A. Smith
Published with permission: GeoExPro September 2014, Volume 11, No 4

Why did you launch Geophysical Insights after the sale of SMT? Wasn’t it time for a break?

The work at SMT was thoroughly enjoyable, particularly generating new ideas and developing new technology, so after the sale of SMT it seemed quite natural to continue. I jumped into geophysical research with delight. Geophysical Insights was launched to develop the next generation of interpretation technologies, a result of some of that research. We recognized that there was an opportunity to make a contribution to the industry. Response has been good, with a substantial number of people expressing great interest in these new ways to conduct seismic interpretation. Here’s why: today we have more data, a greater variety of play concepts, and often less time for interpreters to analyze prospects. In particular, the number of seismic attributes available is now in the hundreds.

Witnessing this growing body of information, several years ago M. Turhan Taner (see GEO ExPro Vol. 3, No. 4), Sven Treitel and I began collaborating on the premise that greater insight may be extracted from the seismic response by analyzing multiple attributes simultaneously. We recognized that advanced pattern recognition methods were being used in many applications outside geoscience that could be adopted to address what we saw as an opportunity to advance the geoscience of exploration and production. Our thoughts on the opportunity were put forward at a 2009 SEG workshop entitled ‘What’s New in Seismic Interpretation’ in a presentation called ‘Self Organizing Maps of Multiple Attribute 3D Seismic Reflections’.

Tell us about the advanced geoscience analysis software platform, Paradise.

Paradise is an off-the-shelf analysis platform that enables interpreters to use advanced pattern recognition methods like Self-Organizing Maps and Principal Component Analysis through guided workflows. In 2009 we organized a team of interpretation software specialists, geoscientists, and marketing professionals to develop an advanced geoscience platform that would take full advantage of modern computing architecture, including large-scale parallel processing. Today, Paradise distills a variety of information from many attributes simultaneously at full seismic resolution, i.e. operating on every piece of data in a volume. This is one of the many distinguishing features of the machine learning and pattern recognition methods available in Paradise.

What is your perspective on the interpretation needs of unconventional compared to conventional resources? 

Both types of plays have their respective challenges, of course. Our work at Geophysical Insights is evenly divided between conventional and unconventional resources; however, there is growth in the use of seismic among E&P companies in unconventional plays. Systematic drilling programs are now being augmented more often by seismic interpretation, which is reducing field development costs by optimizing drilling and development. There is also growing recognition of what is termed ‘complex conventionals’, like carbonates – a geologic setting that requires advanced analysis for the characterization of carbonate reservoir rocks.

Where do you see the next big advances in geophysics? 

While traditional interpretation tools have made extensive use of improvements in interpretation imagery, their analysis has been largely qualitative – an interpretation of visual imagery on a screen. Certainly, qualitative interpretation is important and will always have a place in the interpretation process. We see the next generation of technologies producing quantitative results that will guide and inform an interpretation, thereby complementing qualitative analysis. Improvements in quantitative analysis will help interpretation add forecasting to prediction.

Do you think these advances will come from industry or academia?

Bright people and ideas are everywhere, and we must be open to solutions from a variety of sources. Technology breakthroughs are often an application of existing concepts from multiple disciplines applied in a whole new way. I believe that fluid, inter-disciplinary teams, enabled by advanced technology, offer an excellent organizational model for addressing the complex challenges of monetizing hydrocarbons in challenging geologic settings.

Where will these advances originate?

While the U.S. has emerged as a leader in O&G production due in large part to the development of unconventional resources and the application of new technologies, regions outside of the U.S. are beginning to develop these too. It is reasonable to expect that universities and companies in these regions will generate many new technologies, which will be essential to supply the growing demand for hydrocarbons worldwide. I applaud the next generation of geoscientists and hope that they enjoy the work of our industry as much as we do.

 

Approach Aids Multiattribute Analysis

By: Rocky Roden, Geophysical Insights, and Deborah Sacrey, Auburn Energy
Published with permission: American Oil and Gas Reporter
September 2015

Seismic attributes, which are any measurable properties of seismic data, aid interpreters in identifying geologic features that are not understood clearly in the original data. However, the enormous amount of information generated from seismic attributes, and the difficulty of understanding how these attributes, when combined, define the geology, require another approach in the interpretation workflow.

To address these issues, “machine learning” to evaluate seismic attributes has evolved over the last few years. Machine learning uses computer algorithms that learn iteratively from the data and adapt independently to produce reliable, repeatable results. Applying current computing technology and visualization techniques, machine learning addresses two significant issues in seismic interpretation:

• The big data problem of trying to interpret dozens, if not hundreds, of volumes of data; and

• The fact that humans cannot understand the relationship of several types of data all at once.

Principal component analysis (PCA) and self-organizing maps (SOMs) are machine learning approaches that, when applied to seismic multiattribute analysis, are producing results that reveal geologic features not previously identified or easily interpreted. Applying principal component analysis can help interpreters identify seismic attributes that show the most variance in the data for a given geologic setting, which helps determine which attributes to use in a multiattribute analysis using self-organizing maps. SOM analysis enables interpreters to identify the natural organizational patterns in the data from multiple seismic attributes.

Multiple-attribute analyses are beneficial when single attributes are indistinct. These natural patterns or clusters represent geologic information embedded in the data and can help identify geologic features, geobodies, and aspects of geology that often cannot be interpreted by any other means. SOM evaluations have proven to be beneficial in essentially all geologic settings, including unconventional resource plays, moderately compacted onshore regions, and offshore unconsolidated sediments.

Accordingly, the appropriate seismic attributes to employ in any SOM evaluation should be based on the interpretation problem to be solved and the associated geologic setting. Applying PCA and SOM not only can identify geologic patterns not seen previously in the seismic data, it also can increase or decrease confidence in features already interpreted. In other words, this multiattribute approach provides a methodology for producing a more accurate risk assessment of a geoscientist’s interpretation, and may represent the next generation of advanced interpretation.

Seismic Attributes

A seismic attribute can be defined as any measure of the data that helps to visually enhance or quantify features of interpretation interest. There are hundreds of types of attributes, but Table 1 shows a composite list of seismic attributes and associated categories routinely employed in seismic interpretation. Interpreters wrestle continuously with evaluating the numerous seismic attribute volumes, including visually co-blending two or three attributes and even generating attributes from other attributes in an effort to better interpret their data.

This is where machine learning approaches such as PCA and SOM can help interpreters evaluate their data more efficiently, and help them understand the relationships between numerous seismic attributes to produce more accurate results.

Principal Component Analysis

Principal component analysis is a linear mathematical technique for reducing a large set of seismic attributes to a small set that still contains most of the variation in the large set. In other words, PCA is a good approach for identifying the combination of the most meaningful seismic attributes generated from an original volume.

Results from Principal Component Analysis in Paradise® utilizing 18 instantaneous seismic attributes are shown here. 1A shows histograms of the highest eigenvalues for in-lines in the seismic 3-D volume, with red histograms representing eigenvalues over the field. 1B shows the average of eigenvalues over the field (red), with the first principal component in orange and associated seismic attribute contributions to the right. 1C shows the second principal component over the field with the seismic attribute contributions to the right. The top five attributes in 1B were run in SOM A and the top four attributes in 1C were run in SOM B.

The first principal component accounts for as much of the variability in the data as possible, and each succeeding component (orthogonal to each preceding component) accounts for as much of the remaining variability. Given a set of seismic attributes generated from the same original volume, PCA can identify the attributes producing the largest variability in the data, suggesting these combinations of attributes will better identify specific geologic features of interest.

Even though the first principal component represents the largest linear combination of attributes describing the variability of the bulk of the data, it may not identify specific features of interest. The interpreter should also evaluate succeeding principal components, because they may be associated with other important aspects of the data and geologic features not identified with the first principal component.

In other words, PCA is a tool that, when employed in an interpretation workflow, can give direction to meaningful seismic attributes and improve interpretation results. It is logical, therefore, that a PCA evaluation may provide important information on appropriate seismic attributes to take into generating a self-organizing map.
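A minimal sketch of the PCA step, assuming scikit-learn and a synthetic samples-by-attributes matrix: the explained variance ratios play the role of the eigenvalues discussed above, and the component loadings show each attribute’s contribution to a principal component. The attribute names in the code are illustrative, not the set used in the case study below.

```python
import numpy as np
from sklearn.decomposition import PCA

attribute_names = ["envelope", "sweetness", "avg_energy",
                   "inst_frequency", "thin_bed", "dominant_freq"]  # illustrative

# Synthetic stand-in for samples drawn from a PCA window: rows are seismic
# samples, columns are attribute values, standardized before the analysis.
rng = np.random.default_rng(1)
X = rng.standard_normal((5000, len(attribute_names)))
X = (X - X.mean(axis=0)) / X.std(axis=0)

pca = PCA()
pca.fit(X)

# Eigenvalue analogue: fraction of total variance captured by each component.
print("explained variance:", np.round(pca.explained_variance_ratio_, 3))

# Attribute contributions (loadings) for the first two principal components.
for i in range(2):
    loadings = dict(zip(attribute_names, np.round(pca.components_[i], 3)))
    print(f"PC{i + 1} loadings:", loadings)
```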

Self-Organizing Maps

The next level of interpretation requires pattern recognition and classification of the often subtle information embedded in the seismic attributes. Taking advantage of today’s computing technology, visualization techniques and understanding of appropriate parameters, self-organizing maps distill multiple seismic attributes efficiently into classification and probability volumes. SOM is a powerful nonlinear cluster analysis and pattern recognition approach that helps interpreters identify patterns in their data that can relate to desired geologic characteristics such as those listed in Table 1.

Seismic data contain huge numbers of samples and are highly continuous, greatly redundant and significantly noisy. Within that noise, the samples from numerous seismic attributes exhibit significant organizational structure. SOM analysis identifies these natural organizational structures in the form of clusters, which reveal information about the classification structure of natural groups that is difficult to view any other way. These natural groups and patterns reveal the geology and aspects of the data that are difficult to interpret otherwise.
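For readers who want to see the mechanics, here is a compact, self-contained SOM sketch in NumPy trained on a synthetic samples-by-attributes matrix. The grid size, learning schedule and data are illustrative assumptions, and this is not the Paradise implementation; it only shows how winning neurons (best-matching units) and a shrinking neighborhood organize samples into natural clusters.

```python
import numpy as np

def train_som(X, rows=5, cols=5, iters=5000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small 2-D self-organizing map on samples x attributes data."""
    rng = np.random.default_rng(seed)
    n_attr = X.shape[1]
    weights = rng.standard_normal((rows, cols, n_attr))
    # Grid coordinates used by the neighborhood function.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    for t in range(iters):
        x = X[rng.integers(len(X))]
        # Best-matching unit: the neuron whose weights are closest to the sample.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Decaying learning rate and neighborhood radius.
        frac = t / iters
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 1e-3
        # Gaussian neighborhood around the BMU pulls nearby neurons toward x.
        d2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-d2 / (2.0 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

def classify(X, weights):
    """Assign each sample to its winning neuron (its natural cluster)."""
    d = np.linalg.norm(X[:, None, None, :] - weights[None], axis=-1)
    flat = d.reshape(len(X), -1).argmin(axis=1)
    return np.stack(np.unravel_index(flat, weights.shape[:2]), axis=1)

# Usage with synthetic attribute samples (rows = samples, columns = attributes).
X = np.random.default_rng(2).standard_normal((2000, 5)).astype(np.float32)
w = train_som(X)
neurons = classify(X, w)
print(neurons[:5])
```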

Offshore Case Study

This shows SOM A results from Paradise on a north-south inline through the field. 2A shows the original stacked amplitude. 2B shows SOM results with the associated five-by-five color map displaying all 25 neurons. 2C shows SOM results with four neurons selected that isolate attenuation effects.

SOM B results from Paradise are shown on the same in-line as Figure 2. 3A is the original stacked amplitude. 3B shows SOM results with the associated five-by-five color map. 3C is the SOM results with a color map showing two neurons that highlight flat spots in the data.

 

A case study is provided by a lease located in the Gulf of Mexico offshore Louisiana in 470 feet of water. This shallow field (approximately 3,900 feet) has two producing wells that were drilled on the upthrown side of an east-west trending normal fault and into an amplitude anomaly identified on the available 3-D seismic data. The normally pressured reservoir is approximately 100 feet thick and is located in a typical “bright spot” setting, i.e. a Class 3 AVO geologic setting (Rutherford and Williams, 1989).

The goal of this multiattribute analysis is to more clearly identify possible direct hydrocarbon indicators, such as flat spots (hydrocarbon contacts) and attenuation effects, to better understand the reservoir, and to provide approaches for decreasing the risk of future exploration in the area.

Initially, 18 instantaneous seismic attributes were generated from the 3-D data in the area. These were put into a PCA evaluation to determine which produced the largest variation in the data and the most meaningful attributes for SOM analysis.

The PCA was computed in a window 20 milliseconds above and 150 milliseconds below the mapped top of the reservoir over the entire survey, which encompassed approximately 10 square miles. Each bar in Figure 1A represents the highest eigenvalue on its associated in-line over the portion of the survey displayed.

An eigenvector is a direction showing the spread in the data, and its eigenvalue shows how much variance there is along that direction. The red bars in Figure 1A specifically denote the in-lines that cover the areal extent of the amplitude feature, and the average of their eigenvalue results is displayed in Figures 1B and 1C.

Figure 1B displays the principal components from the selected in-lines over the anomalous feature with the highest eigenvalue (first principal component), indicating the percentage of seismic attributes contributing to this largest variation in the data. In this first principal component, the top seismic attributes include trace envelope, envelope modulated phase, envelope second derivative, sweetness and average energy, all of which account for more than 63 percent of the variance of all the instantaneous attributes in this PCA evaluation.

Figure 1C displays the PCA results, but this time the second highest eigenvalue was selected and produced a different set of seismic attributes. The top seismic attributes from the second principal component include instantaneous frequency, thin bed indicator, acceleration of phase, and dominant frequency, which total almost 70 percent of the variance of the 18 instantaneous seismic attributes analyzed. These results suggest that when applied to a SOM analysis, perhaps the two sets of seismic attributes for the first and second principal components will help define different types of anomalous features or different characteristics of the same feature.

The first SOM analysis (SOM A) incorporates the seismic attributes defined by the PCA with the highest variation in the data, i.e., the five highest percentage contributing attributes in Figure 1B.

Several neuron counts for SOM analyses were run on the data, and lower count matrices revealed broad, discrete features, while the higher counts displayed more detail and less variation. The SOM results from a five-by-five matrix of neurons (25) were selected for this article.

 

Detecting Attenuation

The north-south line through the field in Figures 2 and 3 shows the original stacked amplitude data and classification results from the SOM analyses. In Figure 2B, the color map associated with the SOM classification results indicates all 25 neurons are displayed. Figure 2C shows results with four interpreted neurons highlighted.

Based on the location of the hydrocarbons determined from well control, it is interpreted from the SOM results that attenuation in the reservoir is very pronounced. As Figures 2B and 2C reveal, there is apparent absorption banding in the reservoir above the known hydrocarbon contacts defined by the wells in the field. This makes sense because the seismic attributes employed are sensitive to relatively low-frequency, broad variations in the seismic signal often associated with attenuation effects.

This combination of seismic attributes employed in the SOM analysis generates a more pronounced and clearer picture of attenuation in the reservoir than any of the seismic attributes or the original amplitude volume individually. Downdip of the field is another undrilled anomaly that also reveals apparent attenuation effects.

The second SOM evaluation (SOM B) includes the seismic attributes with the highest percentages from the second principal component, based on the PCA (see Figure 1). It is important to note that these attributes are different from the attributes determined from the first principal component. With a five-by-five neuron matrix, Figure 3 shows the classification results from this SOM evaluation on the same north-south line as Figure 2, and it identifies clearly several hydrocarbon contacts in the form of flat spots. These hydrocarbon contacts are confirmed by the well control.

Figure 3B defines three apparent flat spots that are further isolated in Figure 3C, which displays these features with two neurons. The gas/oil contact in the field was very difficult to see in the original seismic data, but is well defined and can be mapped from this SOM analysis.

The oil/water contact in the field is represented by a flat spot that defines the overall base of the hydrocarbon reservoir. Hints of this oil/water contact were interpreted from the original amplitude data, but the second SOM classification provides important information to clearly define the areal extent of reservoir.

Downdip of the field is another apparent flat spot event that is undrilled and is similar to the flat spots identified in the field. Based on SOM evaluations A and B in the field, which reveal similar known attenuation and flat spot results, respectively, there is a high probability this undrilled feature contains hydrocarbons.

West Texas Case Study

Unlike the Gulf of Mexico case study, attribute analyses on the Fasken Ranch in the Permian Basin involved using a “recipe” of seismic attributes, based on their ability to sort out fluid properties, porosity trends and hydrocarbon sensitivities. Rather than use principal component analysis to see which attributes had the greatest variation in the data, targeted use of specific attributes helped solve an issue regarding conventional porosity zones within an unconventional depositional environment in the Spraberry and Wolfcamp formations.

The Fasken Ranch is located in portions of Andrews, Ector, Martin and Midland counties, Texas. The approximately 165,000-acre property, which consists of surface and mineral rights, is held privately. This case study shows the SOM analysis results for one well, the Fasken Oil and Ranch No. 303 FEE BI, which was drilled as a straight hole to a depth of 11,195 feet. The well was drilled through the Spraberry and Wolfcamp formations and encountered a porosity zone from 8,245 to 8,270 feet measured depth.

This enabled the well to produce more than four times the normal cumulative production found in a typical vertical Spraberry well. The problem was being able to find that zone using conventional attribute analysis in the seismic data. Figure 4A depicts cross-line 516, which trends north-south and shows the intersection with well 303. The porosity zone is highlighted with a red circle.

4A is bandwidth extension amplitude volume, highlighting the No. 303 well and porosity zone. Wiggle trace overlay is from amplitude volume. 4B is SOM classification volume, highlighting the No. 303 well and porosity zone. Topology was 10-by-10 neurons with a 30-millisecond window above and below the zone of interest. Wiggle trace overlay is from amplitude volume.

Seven attributes were used in the neural analysis: attenuation, BE14-100 (amplitude volume), average energy, envelope time derivative, density (derived through prestack inversion), spectral decomposition envelope sub-band at 67.3 hertz, and sweetness.

Figure 4B is the same cross-line 516, showing the results of classifying the seven attributes referenced. The red ellipse shows the pattern in the data that best represents the actual porosity zone encountered in the well, but could not be identified readily by conventional attribute analysis.

Figure 5 is a 3-D view of the cluster of neurons that best represent porosity. The ability to isolate specific neurons enables one to more easily visualize specific stratigraphic events in the data.

This SOM classification volume in 3-D view shows the combination of a neural “cluster” that represents the porosity zone seen in the No. 303 well, but not seen in surrounding wells.

 

 

Conclusions

Seismic attributes help identify numerous geologic features in conventional seismic data. Applying principal component analysis can help interpreters identify seismic attributes that show the most variance in the data for a given geologic setting, and help them determine which attributes to use in a multiattribute analysis using self-organizing maps. Applying current computing technology, visualization techniques, and understanding of appropriate parameters for SOM enables interpreters to take multiple seismic attributes and identify the natural organizational patterns in the data.

Multiple-attribute analyses are beneficial when single attributes are indistinct. These natural patterns or clusters represent geologic information embedded in the data and can help identify geologic features that often cannot be interpreted by any other means. Applying SOM to bring out geologic features and anomalies of significance may indicate this approach represents the next generation of advanced interpretation.

 

Editor’s Note

The authors wish to thank the staff of Geophysical Insights for researching and developing the applications used in this article. The seismic data for the Gulf of Mexico case study is courtesy of Petroleum Geo-Services. Thanks to T. Englehart for insight into the Gulf of Mexico case study. The authors also would like to acknowledge Glenn Winters and Dexter Harmon of Fasken Ranch for the use of the Midland Merge 3-D seismic survey in the West Texas case study.

ROCKY RODEN runs his own consulting company, Rocky Ridge Resources Inc., and works with oil companies around the world on interpretation technical issues, prospect generation, risk analysis evaluations, and reserve/resource calculations. He is a senior consulting geophysicist with Houston-based Geophysical Insights, helping develop advanced geophysical technology for interpretation. He also is a principal in the Rose and Associates DHI Risk Analysis Consortium, which is developing a seismic amplitude risk analysis program and worldwide prospect database. Roden also has worked with Seismic Micro-Technology and Rock Solid Images on integrating advanced geophysical software applications. He holds a B.S. in oceanographic technology-geology from Lamar University and an M.S. in geological and geophysical oceanography from Texas A&M University.
DEBORAH KING SACREY is a geologist/geophysicist with 39 years of oil and gas exploration experience in the Texas and Louisiana Gulf Coast and Mid-Continent areas. For the past three years, she has been part of a Geophysical Insights team working to bring the power of multiattribute neural analysis of seismic data to the geoscience public. Sacrey received a degree in geology from the University of Oklahoma in 1976 and immediately started working for Gulf Oil. She started her own company, Auburn Energy, in 1990, and built her first geophysical workstation using Kingdom software in 1995. She specializes in 2-D and 3-D interpretation for clients in the United States and internationally. Sacrey is a DPA-certified petroleum geologist and DPA-certified petroleum geophysicist.