The process of statistically analyzing multiple seismic attributes using a SOM (Self-Organizing Map) algorithm has been around for several decades. However, advances in computing power, coupled with the many new attributes developed in the last 30 years, have made this type of analysis extremely powerful.
In the past, SOM has been used on only one attribute at a time, using the seismic wavelet as the basis for the neural analysis. The approach in this presentation uses SOM on multiple seismic attributes at one time, in a sample-based, rather than wavelet-based, format.
Studies done in the Meramec Formation in Central Oklahoma and the Woodbine Formation of East Texas will be highlighted for the SOM process’s ability to find the best reservoir through the statistical analysis of seismic attributes. Then, by converting the neural clusters into geobodies, calculations can be made to determine reservoir size and reserve estimates. A statistical tool is also included to show how the neural patterns can be compared to distinct petrophysical rock properties to confirm the presence of the reservoir.
Owner – Auburn Energy
Deborah is a geologist/geophysicist with 44 years of oil and gas exploration experience in Texas, Louisiana Gulf Coast and Mid-Continent areas of the US. She received her degree in Geology from the University of Oklahoma in 1976 and immediately started working for Gulf Oil in their Oklahoma City offices.
She started her own company, Auburn Energy, in 1990 and built her first geophysical workstation using Kingdom software in 1996. She worked with SMT/IHS for 18 years, helping develop and test the Kingdom software. She specializes in 2D and 3D interpretation for clients in the US and internationally. For the past nine years she has been part of a team working to bring the power of multi-attribute neural analysis of seismic data to the geoscience public, guided by Dr. Tom Smith, founder of SMT. She has become an expert in the use of Paradise software and has seven discoveries for clients using multi-attribute neural analysis.
Deborah has been very active in the geological community. She is past national President of SIPES (Society of Independent Professional Earth Scientists), past President of the Division of Professional Affairs of AAPG (American Association of Petroleum Geologists), Past Treasurer of AAPG and Past President of the Houston Geological Society. She is also Past President of the Gulf Coast Association of Geological Societies and just ended a term as one of the GCAGS representatives on the AAPG Advisory Council. Deborah is also a DPA Certified Petroleum Geologist #4014 and DPA Certified Petroleum Geophysicist #2. She belongs to AAPG, SIPES, Houston Geological Society, South Texas Geological Society and the Oklahoma City Geological Society (OCGS).
Hello, glad to have you in the last breakout session of the day. This is the exploration and development track, and we will be hearing shortly from Deborah Sacrey of Auburn Energy. Deborah Sacrey is a geologist/geophysicist with 44 years of oil and gas exploration experience in the Texas, Louisiana Gulf Coast, and Mid-Continent areas of the US. Deborah specializes in 2D and 3D interpretation for clients in the US and internationally. She received her degree in Geology from the University of Oklahoma in 1976 and began her career with Gulf Oil in Oklahoma City. She started Auburn Energy in 1990 and built her first geophysical workstation using the Kingdom software in 1996. Deborah then worked closely with SMT, now part of IHS, for 18 years, developing and testing Kingdom. For the past eight years, she has been part of the team working to bring the power of multi-attribute neural analysis of seismic data to the geoscience community, guided by Dr. Tom Smith, founder of SMT. Deborah has become an expert in the use of Paradise software and has over five discoveries for clients using this technology. She is very active in the geoscience community, including positions as past President of GCAGS, past President of SIPES, and past Treasurer of AAPG. She will be presenting today on "A Tale of Two Reservoirs: How Machine Learning Can Help Define Sweet Spots in Conventional and Unconventional Reservoirs." Please help me welcome Deborah Sacrey.
Good afternoon, everyone. This is Deborah Sacrey. As you know, I am a consultant in the Houston, Texas area. Today, I would like to go through and show you two case studies I’ve done for finding sweet spots in a conventional and an unconventional reservoir. The first thing I want to cover is some of the technology behind what went into the case studies. In this particular software I’m using, I’m looking at multiple seismic attributes at one time, and I’m looking at them on a sample-by-sample basis. This gives me a lot more statistical information for understanding what’s going on in the subsurface. The data is generally normalized by its standard deviation to unit variance, so that no one attribute contributes more than any other; thus the pattern recognition, or cluster analysis, is totally unbiased. The importance of using an unsupervised neural analysis, as opposed to a supervised one, is that when we go in and look at geology, we really don’t know what’s down there, so we let the unsupervised approach, the computer, figure it out and show us what’s going on instead of us trying to direct it.
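The normalization step described above can be sketched as a simple z-score per attribute. This is an illustrative sketch with made-up attribute values, not the actual software's implementation; the attribute names and distributions are assumptions for demonstration.

```python
import numpy as np

# Hypothetical stand-ins for three seismic attribute volumes, flattened to one
# value per sample (real volumes would be 3D arrays of samples).
rng = np.random.default_rng(0)
attributes = {
    "envelope": rng.normal(50.0, 20.0, 1000),   # large dynamic range
    "inst_freq": rng.normal(30.0, 5.0, 1000),   # moderate range
    "sweetness": rng.normal(0.5, 0.1, 1000),    # small range
}

# Z-score each attribute so no single attribute dominates the clustering.
normalized = {
    name: (vals - vals.mean()) / vals.std()
    for name, vals in attributes.items()
}

for name, z in normalized.items():
    print(f"{name}: mean={z.mean():.3f}, std={z.std():.3f}")
```

After this step every attribute contributes on the same scale, which is what makes the cluster analysis unbiased.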
Now, the sample statistics are really important. If you’re working in a wavelet world and you have a 40 millisecond wavelet, you’re going to be mapping a peak or a trough or possibly zero crossings, so you might be sampling that 40 millisecond wavelet four times. However, if you’re working with sample statistics and you’re looking at two millisecond time samples, then you’re actually looking at that same 40 millisecond wavelet 20 times, five times as densely as you would if you were just picking a peak or a trough or a zero crossing. This gives us a lot more information about the subsurface.
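The sampling-density arithmetic above works out as follows (a trivial check, using the numbers from the talk):

```python
wavelet_ms = 40   # dominant wavelet period, as in the talk
sample_ms = 2     # seismic sample interval

samples_per_wavelet = wavelet_ms // sample_ms           # 20 samples per wavelet
picks_per_wavelet = 4                                   # peak, trough, two zero crossings
density_gain = samples_per_wavelet / picks_per_wavelet  # 5x denser sampling

print(samples_per_wavelet, density_gain)  # 20 5.0
```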
One reason why multiple seismic attributes at one time are important: here’s an example of eight different attributes, basically wavelets, and the red dots represent each sample. You can see, as you go from attribute to attribute, that the information you get from those wavelets at that sample is very, very different. This helps us understand what’s going on much better. And finally, when we’re looking at the data, I call these chiclets, but they’re really voxels; we’re looking at information that comes back in the bin size of the 3D. So if it’s a 110-by-110-foot bin, each voxel is 110 feet by 110 feet by the sample interval. Most of the time, if I have four millisecond data, I can upsample once to two, or if I have two millisecond data, I can go to one millisecond, without introducing noise or artifacts. I try to get the sample interval as small as I can to get more vertical resolution in the rock.
Whereas tuning thickness in the wavelet world may be 40 or 50 feet at 10,000 feet, there is no real limit to what you can do in the neural classification world. The vertical resolution in the sample statistics is based on the interval velocity of the rock from which the sample is taken, and that has no real depth limitation. I’ve been able to find seven-foot-thick sands at 11,000 feet. But people ask me all the time, how do I determine my recipe of attributes? And I tell them, well, a lot of it has to do with general knowledge and the fact that I’ve been using machine learning for about eight years, so I have a really good understanding, in certain areas of the world, of which attributes are more important. But generally I go through a principal component analysis.
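The vertical resolution of one sample can be estimated from the standard two-way-time relation (thickness = velocity × sample interval / 2). A minimal sketch, using an assumed 14,000 ft/s interval velocity:

```python
def sample_thickness_ft(interval_velocity_ftps: float, sample_interval_s: float) -> float:
    """Rock thickness represented by one two-way-time sample."""
    return interval_velocity_ftps * sample_interval_s / 2.0

# At an assumed 14,000 ft/s interval velocity, a 1 ms sample spans about
# 7 ft of rock -- the scale of the thin sands mentioned in the talk.
print(sample_thickness_ft(14_000, 0.001))  # 7.0
```

This is why upsampling from 2 ms to 1 ms, as described above, roughly halves the rock thickness each classified sample represents.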
This allows us to reduce a large set of variables, the seismic attributes, to a small set that still contains most of the variation in the large set. Here’s an example of two attributes, attribute A and attribute B. Look at all the data points from those two attributes within your zone of interest; I don’t do the whole darn seismic volume, I just try to keep the principal component analysis to the zone of interest that I’m working on. The first principal component shows you the largest variation, so that’s going to be the longest line right here, and the eigenvector shows the direction of the line and how much variance or spread of data there is. The second principal component, which is the next largest variation, is orthogonal, or at 90 degrees, to the first. Well, that’s fine, and our brains can understand it if we’re dealing with one or two or maybe even three attributes, but if we’re working in a world where we have 12 or 13 or 14 attributes, we can’t see in 14 different dimensions, but the computer can figure it out.
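The PCA procedure described above can be sketched in a few lines of numpy: build the covariance matrix of the z-scored attributes, take its eigendecomposition, and read attribute importance off the eigenvector loadings. This is a minimal sketch on synthetic data, not the Paradise workflow; the number of attributes and the correlation structure are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_attrs = 5000, 12

# Hypothetical matrix of 12 z-scored attributes sampled in the zone of interest.
# Built with a few shared latent factors so some components dominate.
latent = rng.normal(size=(n_samples, 3))
mixing = rng.normal(size=(3, n_attrs))
X = latent @ mixing + 0.1 * rng.normal(size=(n_samples, n_attrs))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Eigendecomposition of the covariance matrix gives the principal components.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print("variance explained by first two PCs:", round(explained[:2].sum(), 3))

# Attributes with the largest absolute loadings on the first eigenvector are
# the ones contributing most -- the basis for picking a SOM attribute recipe.
top = np.argsort(np.abs(eigvecs[:, 0]))[::-1][:4]
print("top-4 attribute indices on PC1:", top)
```

Reading the top loadings off the first two eigenvectors is the step that reduces 12 attributes down to the handful that carry most of the information.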
And how that translates to reducing a lot of information down to something really usable is… Here’s an example with 12 attributes. I picked a line through a particularly good well, because I’m interested in seeing which attributes are most important at my best production. In this case, the first principal component is showing me four attributes, and those four attributes, in their relative percentages to all 12, account for more than 93% of the data from all 12. In the second principal component, I have three attributes that end up being 81%, and they’re 81% of the information remaining after the first 93%, so right here, in two principal components, I’ve taken my 12 attributes and reduced them down to the seven most important. Let’s get on with the case histories.
This first one is finding a sweet spot in an Oklahoma unconventional reservoir. This is some work I did about a year and a half ago, courtesy of TGS. TGS has great data in the Oklahoma area, and they wanted to know if machine learning could discriminate production in the Meramec formation, which is an unconventional reservoir in this area, and to understand the accuracy of the neural classification results. Okay. The assumptions and challenges were that production is not necessarily related to geological changes, and porosity and permeability could not be calculated from the log curves they gave us, because most of these are older wells. They were anticipating difficulty in isolating specific production in all the wells through the perforated zone. A decision was made to use only straight holes, because of all the variables in completions with lateral wells, fracking, how well they stayed in zone, and so forth.
They gave me 35 wells. Now, this is going to be kind of hard to see, but I’ve taken the cumulative production at the wells, and those wells in green have produced less than 35,000 barrels of oil. There’s a little cluster of wells over here in pink and a couple of wells over here, and those wells have produced above 50,000 barrels, 50 to 100,000 barrels. Of all 35 wells in this study, one well stood out beautifully: the Effie Casady number one, which was drilled in 1980 and has produced 240,000 barrels of oil and 2.23 Bcfg. The study area was approximately 195.5 square miles, certainly not all of the 3D that they have shot in this area, but a good portion of it.
I’m going to focus on that south to north arbitrary line, but in the study I looked at arbitrary lines both west to east and south to north, trying to hook up the better wells and the poor wells, too. Now, this is a time map on the Meramec formation, and the contour interval here is about 75 feet, or about 10 milliseconds. This is a marly kind of limestone, and it’s pretty fast. The data quality is beautiful. TGS did a wonderful job with this, and I have to say, courtesy of TGS. The Meramec formation that we’re studying is between this pink line right here at the top of the formation down to this yellow line, which includes the Woodford shale. The Woodford in this area is actually the source rock for the hydrocarbons in the Meramec. Here’s kind of a blowup of one well in particular, where you can see the top of the Meramec down to the Woodford, which is just a trough in this area.
What’s also important to note is that as you go from north to south, the section gradually thickens, because you’re going into the Anadarko basin. Now, that’s significant because it means the source rock is also thickening, and that might explain why the wells on the north side of the survey were poorer producers than those on the south side. People would traditionally go in, drill the Meramec, and just perforate the whole interval, with the really interesting exception of the Effie Casady well. Now, here are the results of the principal component analysis for the area. The green line represents kind of the median information for all of the first principal components, but you can pick any one line in the 3D; in this case, I picked line 754, which happens to be the line running through the Effie Casady, to see which attributes were important at that well. In this case, the top five out of, I believe, 16 attributes were sweetness, envelope, instantaneous frequency, thin bed, and smoothed frequency. With the exception of envelope, all of these are kind of frequency-based attributes. Thin bed and instantaneous frequency, especially, are telling me that I’m looking for thinner beds in here, which we would suspect in an unconventional reservoir.
I ran a lot of different SOMs, and it’s kind of an eyeball thing, but the one I settled on was a 10 by 10. It broke the Meramec down into a lot of different patterns so that I could kind of see how it related to the logs. What was interesting about the Effie Casady is that you can see the perforations here in blue, right here and right here; they segregated out specific spots instead of just perforating the whole darn interval. The lower perforations are interesting because they have this yellow and brown pattern in them, okay? Those are neuron number 71 and neuron number 72 out of a hundred different patterns. Again, I’m working with a one millisecond sample rate here. The recipe I ended up with was attenuation, envelope, Hilbert, instantaneous frequency, instantaneous phase, normalized amplitude, relative acoustic impedance, sweetness, and thin bed. But it’s interesting that these lower perforations are specifically targeting this zone right here.
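The 10-by-10 SOM classification described above can be illustrated with a minimal numpy sketch: each sample's attribute vector is assigned to the best-matching neuron on a 10×10 grid, and that neuron and its grid neighbors are nudged toward the sample. This is a toy implementation on random data, assuming z-scored inputs; the real software's training schedule and parameters will differ.

```python
import numpy as np

def train_som(X, rows=10, cols=10, epochs=5, seed=0):
    """Minimal SOM: X is (n_samples, n_attrs) of z-scored attribute vectors."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(rows * cols, X.shape[1]))
    # Grid coordinates of each neuron, used by the neighborhood function.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs) + 0.01                 # decaying learning rate
        sigma = max(rows, cols) / 2 * (1 - epoch / epochs) + 0.5
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))                 # Gaussian neighborhood
            weights += lr * h[:, None] * (x - weights)
    return weights

def classify(X, weights):
    """Assign each sample to its winning neuron, numbered 1..n like the talk's 1..100."""
    d = ((X[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1) + 1

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))      # 500 samples x 8 attributes (hypothetical)
w = train_som(X)
labels = classify(X, w)
print(labels.min(), labels.max())  # neuron numbers fall within 1..100
```

Every classified sample ends up tagged with a pattern number, which is what lets specific neurons (like 71 and 72 here) be tied back to perforations and production.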
When we look at it close up, we’re looking at the resistivity curve; this is my density curve, this is my resistivity curve, and this is a [RHOB] curve right here. The better resistivity for the whole interval is in these lower two sets of perforations, with the best resistivity being in this lower set, again associated with neurons number 71 and 72. I think this is indicating a little bit sandier environment. These zones probably have a little bit higher porosity than the rest of the interval, and the resistivity is reflecting that. Here are the two neurons, 71 and 72, which I colored in the package to be pretty similar colors to the Keenan display earlier. What I’ve done is kind of squeezed down the neural analysis to just include this lower zone, so that I’m not picking up spurious information from scattered places around the rest of the wellbore, and I’m focusing on this lower couple of sets of perfs, primarily neurons 71 and 72. You can see that the Casady well right here has a combination of both 71 and 72; the three better wells up here that are in the median category did not have any 71, but they did have 72, the brown.
What we can do, since we understand the bin spacing and we know our sample thickness, is go through a process and turn these chiclets, these voxels, into volumes, so they become like little platelets that you can see, and I can start associating volumetric information with the geobodies around the wells. In this case, I’ve created geobodies out of neurons number 71 and 72, and I’ve highlighted the geobodies that go through the Effie Casady well. Now, with a little bit of knowledge about the interval velocity, net to gross ratio, porosity, and water saturation, you can actually get pore volume information that tells you about the reservoir. I had a buddy working for a company whose group was involved in the Meramec, so he gave me some of the numbers to use for this analysis. The end result for this dendritic-looking pattern right here, which I believe is key to the production of the well, is that the well had a little over 118 million cubic feet of gross rock volume.
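Turning classified voxels into geobodies amounts to connected-component labeling: group touching voxels that share the target neurons, then multiply voxel counts by the voxel's physical size. A minimal sketch on a hypothetical classification volume, not the software's algorithm; the bin size, sample thickness, and neuron numbers follow the talk, but the random volume is an assumption.

```python
import numpy as np
from collections import deque

def label_geobodies(mask):
    """Group face-connected True voxels of a 3D boolean mask into geobodies."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1                      # start a new geobody
        queue = deque([start])
        labels[start] = current
        while queue:                      # breadth-first flood fill
            i, j, k = queue.popleft()
            for di, dj, dk in offsets:
                n = (i + di, j + dj, k + dk)
                if all(0 <= n[a] < mask.shape[a] for a in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    queue.append(n)
    return labels, current

# Hypothetical classification volume: keep only voxels hitting neurons 71 or 72.
rng = np.random.default_rng(3)
classes = rng.integers(1, 101, size=(20, 20, 30))
mask = np.isin(classes, (71, 72))
labels, n_bodies = label_geobodies(mask)

# Each voxel is bin_x * bin_y * (thickness per sample) feet of rock:
# 110 ft bins, ~7 ft per 1 ms sample at an assumed 14,000 ft/s.
voxel_ft3 = 110 * 110 * 7.0
biggest = max(np.sum(labels == b) for b in range(1, n_bodies + 1))
print(f"{n_bodies} geobodies; largest ~ {biggest * voxel_ft3:,.0f} ft3")
```

Summing voxel volumes per geobody is what produces the gross-rock-volume numbers used in the reserve calculations below.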
If you put in 14,000 feet per second, a net to gross of 60%, a porosity of 6%, because this is an unconventional reservoir, and a water saturation of 40%, the recovery factor is 225 barrels of oil equivalent per acre-foot, and that calculates to 613,125 BOE for this geobody. The actual for the well, when you convert the gas back to oil, is 611,685, so in this case we are within a 1% margin of error in understanding where the production for the Effie Casady is coming from. After I showed this to TGS, they said, “Okay. What about the other three wells, which were in the median category?” Those three wells together had a combined total of 205,800 barrels of oil and 1.38 BCF of gas. Here’s an arbitrary line that goes from the Effie Casady up to those three wells, and here they are, so you can see that there are actually four wells in here that were in the median category, but the Magic Circle number 2 Barnard didn’t actually perforate neurons number 71 and 72, as we’ll see in just a second. Here’s the production from the three, and then here’s the fourth well, but it didn’t produce from the main Meramec zone. It’s producing from perforations that are above the top of the Meramec, in a lower zone associated with the Big Lime.
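The back-calculation above reduces to converting rock volume to acre-feet and applying the recovery factor. A sketch using the talk's numbers; the exact rock volume is stated only as "a little over 118 million cubic feet," so the 118.7 million ft³ figure here is an assumption chosen to be consistent with the quoted 613,125 BOE.

```python
ACRE_FT3 = 43_560  # cubic feet per acre-foot

def geobody_boe(rock_volume_ft3: float, recovery_boe_per_acre_ft: float) -> float:
    """Convert a geobody's gross rock volume into a recoverable-BOE estimate."""
    acre_ft = rock_volume_ft3 / ACRE_FT3
    return acre_ft * recovery_boe_per_acre_ft

# Talk's numbers: ~118.7 million ft3 of rock (assumed) and a recovery factor
# of 225 BOE per acre-foot (derived from 60% net-to-gross, 6% porosity,
# 40% water saturation).
estimate = geobody_boe(118.7e6, 225)    # ~613,000 BOE
actual = 611_685                        # well's cumulative, gas converted to oil
print(f"estimate {estimate:,.0f} BOE vs actual {actual:,} BOE")
```

The estimate lands within 1% of the well's actual cumulative, which is the margin quoted in the talk.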
When you look at the neural information, you can see the brown number 72 again, as well as the Effie Casady down here in 71 and 72, in the brown and yellow. Now, there are scant occurrences of 71 and 72 between these two wells, and these scant occurrences have not been penetrated by any other wells. We’re going to look at this area right here, and when we blow it up, we can see that these three median wells are in the 72 neuron pattern. The fourth well did not produce from the 72 that it encountered; its production came from another group of 71 and 72 above the Meramec, which is kind of interesting. So obviously those two neurons are showing production information.
I kind of looked at the dendritic pattern that made up the geobody for number 72 at this point. I took the cumulative information from the three wells that produced out of the Meramec section. For those three wells, the total was about 423,080 BOE. Now, this whole geobody calculates to 481,681 BOE, so there’s a big gap between what the geobody is saying and what the three wells produced. However, there was a fourth well up here, the MacKellar Clydena, which actually did perforate in that neuron number 72 and produced 58,601 barrels of oil equivalent. So I added that well, because it did perforate this geobody, to the three down here; the actual from the four wells was about 481,000, and what I calculated for the geobody was about 475,000.
Here I am, about 2% error from what was actually produced from the four wells in neurons number 71 and 72, so this is a very accurate way of looking at your production and understanding what’s going on. Then the last thing I did was look at the geometric attributes to see what might be causing that 71/72 zone. This is a time slice at approximately the level of the perforations in the Casady well. You can distinctly see a series of channels coming down in the data. Okay? This is about 17 milliseconds above the top of the Woodford. My suspicion is that there are probably little sandy channels coming through this area that eroded down into the marlstone when it was exposed. It’s those sandy channels that give you your better production and resistivity when you can find wells that have hit them.
Then I looked at the curvature volumes, though I think there was a little glitch in their processing for some reason, to look at the fracture patterns in the area, because that also gives you some indication of why some production is better than others. I ran a SOM with two curvature volumes and two similarity volumes, and here are just four neurons, four patterns, which give you the lineaments and the fracture system in this 3D at 17 milliseconds above the top of the Woodford, which is about the same zone as where the better perforations were in the good well.
The conclusions from that study are that SOM can be very effective in finding sweet spots in unconventional reservoirs, especially where there are changes in deposition (siltstones, calcareous sands, et cetera) that can be key to porosity and hydrocarbons. Geobodies can be and are related to porosity streaks and can be back-calculated to production for use in reserve estimates. It’s also a good way, going forward, to estimate new sweet spots which have not been drilled. The key is understanding depositional environments, tying to wells with synthetics, and understanding the function of the attributes you used in the analysis. Here’s the conventional side of things. This is a study I did earlier this year on a conventional play in East Texas. My client asked me to sanitize it as much as I can, so I can’t tell you counties, I can’t tell you wells or anything like that, but it’s a significant field that I studied.
For the client, it had to do with a proof of concept. They didn’t understand machine learning. They wanted to see how it worked on information they had and what they’d been drilling. They wanted to see if, on a sample scale, it could identify reservoir rock in Cretaceous sands below 13,500 feet, and to see if there were any remaining locations to be drilled in this field. Now, what they gave me was the 3D survey, the PSTM only; I had no AVO attributes, no offset stacks or gathers or anything like that. They gave me the wells, production data, tops, and about 55 digital logs for the field. Here’s a map on a key horizon, just above the Cretaceous sand, and again, I built a couple of key cross sections, a west to east cross section B and a south to north cross section A, going through the heart of the field. The red boxes represent those wells on which I have created synthetics. When you’re working in a sample world, a good synthetic tie to your data is critical, because if you’re sloppy, then you’ll be tying to a neuron that has nothing to do with your reservoir and you’ll make some mistakes. I had all the shallow wells turned off, so the only wells that you’re seeing here are those deeper than 12,000 feet.
Here’s the south to north arbitrary line in the seismic data. Here you see the logs, and you can see the perforations in green, where the Cretaceous sand has been perforated. You’re coming up against an unconformable surface, so these sands come up and terminate at an angular unconformity, and then come up and terminate, and come up and terminate. There’s a series of different sands in here being cut off by this unconformable surface, and the key horizon has been mapped on the unconformity. Here is the west to east line, and since you’re going quasi-strike to the deposition in the sands, it’s not quite as clear and apparent that there’s an angular unconformity here, but you can see some disturbance and see that there is kind of an unconformable surface. I’ve also put on the cumulative production for the wells. We start very poor on the west side, then we get to some really good wells, at 25 BCF and 24 BCF. The best well in the field is this well right here, with a cumulative production of 33.1 BCF and 1.77 million barrels of oil, and then you taper off again. This well across the fault is obviously downdip to the better wells, which seem to have some kind of structural advantage here.
I went through principal component analysis, and I used a zone from 30 milliseconds above the key horizon to a hundred milliseconds below it. Again, the green line represents kind of the median information for the field itself. Each one of these blue lines represents a principal component analysis done along an inline. This line right here represents the inline that went through the very best well, because I wanted to see which attributes were important at that well. In this case, in the first eigenvector, I have relative acoustic impedance, which is going to be important because that kind of tells you where you have porosity. I have envelope, Hilbert, envelope second derivative, and I have sweetness. Sweetness in here could be a hydrocarbon indicator, and it can be a sand finder.
I ran many SOMs and played with the recipe somewhat, and ended up with this recipe right here, where I’m using attenuation, relative acoustic impedance, and sweetness as my hydrocarbon indicators, and then envelope, Hilbert, instantaneous frequency, instantaneous phase, and thin bed as kind of my stratigraphy attributes, if you would say that. Along the south to north line, starting at the poor wells, and I, again, put the production on here so you can see what’s going on, I have neural information at the better wells. Here 47, 48, and 37 seem to be key. Numbers 37, which is blue, and 73, which is yellow, seem to be key in this well, which made 11.3 BCF; 47 and 48 here; and in this 6 BCF well, 37 and 38.
You can kind of see what’s going on here in terms of the changes in the rock properties from poor to better in this area. Now, on the west to east line, where I’m crossing through the better part of the field, I start with the poor wells and get better and better production. Here’s 25 BCF right here, and I’m in 37, 47, and 48. Here’s 24 BCF, and I’m in 37, 48, and 73, which is this little piece of yellow right there, and in my very best well, I’m in 55 and 73 with a little bit of 37 here in blue. Now, I talked to the operator in the field, and even though this well only produced 3.5 BCF, it is significantly structurally hampered. He said that the rock in it was absolutely beautiful; it just has the misfortune to be structurally low and on water. He said the permeability and porosity were as good as what was in the best well in the field, which is probably why it’s getting this yellow neuron.
Then I went through the process of extracting the neural information right at the wells, not just looking visually at the cross section. This is a cross section showing the wells in the south to north line, and then I’ll head west to east. I’ve used a designation to kind of break the wells down by their production: poor being anything less than 5 BCF, fair being in the 5 to 10 BCF range, good being in the 10 to 20 BCF range, great being 20 to 25, and excellent, the one well I have as excellent, being the well over 25 BCF of production. I start out poor, poor, good, fair, good, fair, great, good, fair. Again, I’m pointing out the neural numbers, the pattern numbers, at the perforations. This red zone right here indicates the zone of perforations.
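The production designations above are just bucketed cumulative gas figures, which can be written down directly (a trivial sketch using the thresholds stated in the talk):

```python
def production_category(cum_bcf: float) -> str:
    """Bucket a well by cumulative gas production, per the talk's scheme."""
    if cum_bcf < 5:
        return "poor"
    if cum_bcf < 10:
        return "fair"
    if cum_bcf < 20:
        return "good"
    if cum_bcf <= 25:
        return "great"
    return "excellent"

# A few cumulative-production values mentioned in the talk:
for cum in (3.5, 6.0, 11.3, 24.0, 33.1):
    print(cum, "->", production_category(cum))
```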
I’ve put the perfs down here, and I’ve put what the initial IP was and then what the cumulative production was, so that you can kind of see which neurons are being repeated over and over again in these wells. Here’s the west to east; here’s the well that’s across the fault, but look at the 73, and here’s the 33 BCF well at 73. It had excellent rock properties and qualities; it just was structurally disadvantaged. But here’s the west to east, and you’ve got number 50 and number 30 in the poor wells, 36 and 5, and then in the fair wells you’re starting to pick up some of the 48. The 47s and 48s are kind of secondary neurons to the better ones, 55, 73, and 37. Here are all of the neurons that were key to the production.
There was a cluster of neurons in the poor wells down here, and then the better wells, the good, great, and excellent wells, were all in this range right here: 37, 47, 48, 55, and 73. You can kind of see the distribution right here. Now I’ve sculpted down from the key horizon, which is the unconformity, down 20 milliseconds, which catches 95% of all perforated intervals in the field. There were a couple of wells that went deep to catch a sand, but they were poor producers, so I didn’t include those. But you can kind of see it. I put the poor, fair, good, great, and excellent designations along the cross section here, so you can kind of see the distribution of those neurons. Then I’m breaking it down for the top three neurons, which in my mind are 73, 55, and 37, and looking at what actually goes into making up each neuron.
So for 73, what’s causing it to pick up some of the better production is you’ve got Hilbert, you’ve got relative acoustic impedance, again, which is my porosity indicator, you’ve got envelope, and you’ve got sweetness, which is my sand and hydrocarbon indicator, so that’s really important. For number 55, you still have relative acoustic impedance and sweetness; they’re just not as high up in the list, and I think that’s because you’re probably dealing with a rock property that’s not quite as porous, or maybe not quite as permeable, as number 73. You can see the distribution of that pattern in the wells in here; again, this is my best well in the field, and these are my cross sections right here. Neuron number 37, while it still has relative acoustic impedance, no longer has sweetness rating very high, but it does have attenuation. So I would say that I’m still getting a porosity indicator, but I may be looking at more of a stratigraphic component now, instead of just pure, outright hydrocarbon indicators. Here’s the distribution for number 37 throughout this portion of the 3D.
Now, one of the things I look at when I’m going through and evaluating an area is the classification in the patterns, obviously, but there’s another volume I can look at called probability, or low probability. When you’re looking at the clusters that make up these patterns, here’s kind of a spread of your data points. The center of the cluster is right here. You can see that there are a lot of data points around the center of the cluster, but there are still data points on the very fringe that are part of that cluster but are not at its epicenter. These are the most anomalous data points; they are the low probability data points. Why that’s significant is that, because they’re anomalous to the cluster, if you’re using hydrocarbon-indicating attributes or AVO attributes, these tend to be the places where you have your AVO anomalies or your hydrocarbons, okay?
I want to use this slide to preface what you’re going to see next, which is that I’ve taken the same patterns and turned everything off except the lowest 10% probability data points. Why is that significant? This field was discovered in the ’90s, well before they ever shot the 3D. I think they probably over-drilled it, because you can see the density of wells. By the time they shot the 3D, a large part of the field had been depleted. I think in this case, the low probability volume is actually showing me my depletion. I’m looking for zones that are left that are possibly not depleted. This well actually did not go deep enough to hit the zone right here. I’m a little afraid of it, so I’m circling this zone, this area. The well symbols are posting at the wrong end of the deviated wellbores, so that well should be posting over here, plus this area down here and this area down here. So I have four areas that I have a lot of confidence in as zones that may not be depleted by the rest of the wells.
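Isolating the lowest 10% probability samples is, in essence, a percentile cut on a per-sample probability. The sketch below approximates probability as falling off with a sample's distance from its cluster center; the distance distribution and the Gaussian-style likelihood are assumptions for illustration, not the software's actual formula.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical per-sample distance to the center of the winning cluster.
distances = rng.chisquare(df=8, size=10_000) ** 0.5
probability = np.exp(-0.5 * distances ** 2)   # Gaussian-style likelihood (assumed)

# Keep only the lowest 10% probability samples -- the anomalous fringe of
# each cluster, where DHI-style attributes tend to light up.
cutoff = np.percentile(probability, 10)
anomalous = probability <= cutoff
print(f"{anomalous.mean():.1%} of samples flagged as low-probability")
```

The same cut, applied to every classified sample in the volume, is what produces the low-probability display used to map depletion here.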
Just like in the unconventional case study, I went through and created geobodies. Now, the geobodies are all full-data-point geobodies. There are no geobodies created on just the lower 10%, because we can’t parse out probability in the geobodies. So I’m having to take a percentage of a geobody that may cover a larger area to calculate my number of acre-feet for these areas. I’m just looking at 73, 55, and 37; I didn’t include 47 and 48 in this case. The first area to the south looks like this. There are no wells penetrating it, which is a good sign. It ended up with about 10,252 acre-feet from these two geobodies. I’ve got neuron number 37, which is blue, and neuron number 73, which is yellow. Now, because we’re deep and this is Cretaceous-age rock, I’m using an interval velocity of 14,000 feet per second. I’m using a net to gross ratio of 80%. My high-side case would be 25% porosity and a water saturation of 30%. I come out with 10,252 acre-feet. Now, the field average for production is about 2,000 mcfg per acre-foot, at 54 barrels of oil per million cubic feet.
If you apply those numbers to the acre-feet we have, this area has the potential of producing 20.5 Bcf and over 1 million barrels of oil. That would put this estimate in the great category, if you remember the poor, fair, good, great, excellent scale, so that would be a great well to drill. The other area to the south, and keep in mind these well symbols are posting at the wrong end, is this area right here. Because this geobody is so big, I chose to use only about 15% of it for my acre-feet calculation. Going through with the same numbers, I come up with something around 13.5 Bcf and 727,000 barrels of oil, still not shabby, but that would be the good category. Now going up to the north, it’s not quite as big or quite as good. For the first north area, I used only a 10% portion of it because it’s a much bigger geobody than what would be implied, but I’ve got a good stack of all three key neurons right here, which is important.
It would be somewhere on the order of 8.6 Bcf and 466,000 barrels of oil, which would put it in the good category. Of course, changing any of these parameters, increasing or decreasing the porosity, for example, will change the calculation of the potential reserves. You can play around with the numbers, and if you have an economic limit you want to apply, you can put in a 15% porosity and see if it meets your needs; but I’m using the porosities and information from my very best well, so this is the most optimistic view. Finally, for the last area to the north, because the geobody is so big up here, I again used only 10% of it. It would be only in the fair category, producing 5.5 Bcf and 298,000 barrels of oil. So again, you can play with the numbers, but this was based on information from the very best well in the field.
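The volumetric arithmetic behind these estimates can be sketched in a few lines. This is a minimal illustration, assuming the field-average yields quoted in the talk (about 2,000 mcf of gas per acre-foot and 54 barrels of oil per MMcf of gas); the function name and defaults here are hypothetical, not part of any software shown in the presentation.

```python
def reserves_from_acre_feet(acre_feet,
                            gas_yield_mcf_per_acft=2000.0,
                            oil_yield_bbl_per_mmcf=54.0):
    """Scale field-average recovery yields by geobody reservoir volume.

    gas_yield_mcf_per_acft: field average, mcf of gas per acre-foot.
    oil_yield_bbl_per_mmcf: field average, barrels of oil per MMcf of gas.
    Returns (gas in Bcf, oil in barrels).
    """
    gas_mcf = acre_feet * gas_yield_mcf_per_acft
    gas_bcf = gas_mcf / 1.0e6                  # 1 Bcf = 1,000,000 mcf
    oil_bbl = (gas_mcf / 1000.0) * oil_yield_bbl_per_mmcf  # mcf -> MMcf
    return gas_bcf, oil_bbl

# Southern geobody pair (neurons 37 and 73): 10,252 acre-feet
gas_bcf, oil_bbl = reserves_from_acre_feet(10252)
print(f"{gas_bcf:.1f} Bcf, {oil_bbl:,.0f} bbl")  # prints: 20.5 Bcf, 1,107,216 bbl
```

Plugging in the 10,252 acre-feet quoted for the southern area reproduces the 20.5 Bcf and roughly 1.1 million barrels discussed above; swapping in a lower porosity-driven volume shows how quickly an estimate drops between the good, great, and excellent categories.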
We have four areas with drilling potential, but there’s still an economic risk here, a drilling risk. These are not cheap wells to drill: they take three strings of pipe and run somewhere around $6 million drilled and completed, so you have to be very careful before you go poke a hole in the ground with these. What is another way I can reduce the risk even further? We had an intern this year who put together a wonderful spreadsheet program for statistical analysis. This is where we really start to tie petrophysical information to the neural patterns we’re seeing at the wells. Here again, time-depth relationships are extremely important, because you don’t want to be sloppy with your synthetics; you want to make sure you have good ties. In this case, we can export the well logs from the software, put them into the program, and give them petrophysical cutoffs for what is reservoir and what is not reservoir. Then we get an evaluation.
We also have a theoretical Chi-square table, which can give us a degree of relationship, or non-relationship, between the neurons and the well log information, so there’s a lot we can do with this. If you have multiple topologies, the Chi-square test can also look at a five by five, six by six, seven by seven, and eight by eight, and tell you which number of classes theoretically fits the well information best. In this case, I used only the nine by nine, so I didn’t go into that aspect of it. I’m looking at three wells here just to show you a sampling of what this program does. Here’s a good well, well number three on the south-to-north cross section. The production was 12.6 Bcf and 176,000 barrels of oil. At the well itself, I have neurons number 38 and 41 as part of the perforated zone here. I use cutoffs of gamma ray less than 60 units, porosity greater than 8%, and resistivity greater than 8 ohms as the limits for what’s reservoir and what’s not reservoir.
In the analysis, I do get neurons number 38 and 41 flagged as net reservoir. The brownish area is non-reservoir; the yellow in this case is reservoir. Even though I’m not using multiple topologies, the Chi-square table is still testing the null hypothesis that the lithological-contrast SOM neuron occurrences are independent of the presence of net reservoir. That null hypothesis is rejected, so the alternative hypothesis holds: there is a relationship between net reservoir and the lithological-contrast SOM variables. Not only am I seeing net reservoir, based on my cutoffs, at the key neurons at the perforations, but I’m getting a Chi-square confirmation of that relationship. So that’s well number three. Let’s go to well number one on the south-to-north section, which is a poor well. It produced only 165 million cubic feet of gas and only 28,073 barrels of oil, but the key neurons at the perforations are number 74 and number 64.
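To illustrate the kind of test being described, here is a minimal sketch of a chi-square independence test between SOM neuron occurrence and a net-reservoir flag derived from log cutoffs. The counts in the contingency table are invented for illustration; only the test procedure reflects the workflow described above, not the actual spreadsheet program or its data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: sample counts per SOM neuron, split by
# whether the log cutoffs (GR < 60 API, porosity > 8%, Rt > 8 ohm-m)
# flag the sample as net reservoir. All counts are invented.
#                 reservoir  non-reservoir
table = np.array([[42,  8],    # neuron 38
                  [37, 11],    # neuron 41
                  [ 5, 55]])   # background neuron

chi2, p, dof, expected = chi2_contingency(table)
if p < 0.05:
    print(f"chi2={chi2:.1f}, p={p:.2g}: reject independence -> "
          "neuron occurrence is related to net reservoir")
```

Rejecting the null hypothesis (independence) is exactly the confirmation described in the talk: the neurons flagged at the perforations are statistically associated with net-reservoir rock rather than occurring at random.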
When I put in the same variables, gamma ray less than 60 units, porosity greater than 8%, and resistivity greater than 8 ohms, what I’m seeing here is very little reservoir and mainly non-reservoir at those two neurons. This is telling me that’s probably why the production was so poor: poor rock quality at those perforations. Even so, the null hypothesis is still rejected, saying there is a relationship between net reservoir and the lithological-contrast SOM variables, so I do know what little reservoir I have is right here at those two neurons. And then finally, looking at the very best well in the field, neurons number 73 and number 55 are pertinent to the production here. Lo and behold, number 55 and number 73 are outstanding when it comes to net reservoir. Now, there is some non-reservoir in number 73, and I think that’s because of this little shale section right here that you can see on the resistivity curve.
So it’s picking up a little bit of non-reservoir information, but overall most of it is reservoir, and again the null hypothesis is rejected: I am seeing a relationship between net reservoir and the lithological-contrast SOM variables. This is a really excellent way of taking the patterns you’re seeing and putting petrophysical limits on them, to see if the patterns really have meaning for reservoir. So, back to my four zones left behind from depletion: now I have more confidence in what I’m seeing in the field itself to go and possibly spend the $6 million it’s going to take to drill one or two of these locations, and that’s exactly what my client was hoping to see. They didn’t want to spend $6 million without some kind of risk-lowering verification that they were going to have a high rate of success, and they’re looking for partners in this area. So what does this tell you?
In summary and conclusion: unsupervised neural analysis using multiple seismic attributes can show patterns in the earth that one can relate to reservoirs when calibrated to wells. The use of geobodies can predict accurate volumes of potential hydrocarbons when one knows the correct input data. The use of low-probability data points can help identify stranded reserves in a field, and the application of statistical petrophysical methods can verify and reduce risk in identifying reservoir-grade rock in the potentially stranded areas. This workflow is not one-and-done. It took me several different recipes and many different topologies before I concluded that the nine by nine and the recipe I used were actually the best tie to the well information. Careful calibration to well information is what it’s all about. Having good time-depth relationships, good and consistent log curves, and an understanding of seismic attributes is necessary for successful results. Finally, I would like to acknowledge TGS for allowing the presentation of the Meramec study that we did. I want to thank my new East Texas client for the use of their data, and a big shout-out and thank-you to Carrie Laudon of Geophysical Insights for guiding me through the statistical analysis of the wells in East Texas; I could not have put that part together without her. If there are any questions at this point, I’ll be glad to answer them.
Okay. Thank you very much, Deb. That was a fantastic presentation. Just a reminder for everyone: it is now open for questions. You can access the chat room from the big blue button right below the presentation that says “ask questions.” There was one question, Deb, that came in during your presentation: how differently does the sweetness attribute behave between oil and gas? What are the different kinds of responses?
Deborah Sacrey: Can you hear me okay?
Hana Kabazi: Yes.
Okay. I have been using sweetness as a hydrocarbon indicator in the Gulf Coast, primarily clastic environments, for probably about 15 years. It’s something that needs to be calibrated to the wells in the 3D. It’s an excellent sand finder, but many times it’s also a hydrocarbon indicator. If you’re looking, say, at the AOA in the middle of a big shale package, anywhere you have a sweetness anomaly you can pretty much be sure you’re going to have sand, but you need to look at other productive wells to determine whether that sweetness anomaly is really a hydrocarbon indicator or just a sand indicator.
Sure. There was another question asking whether you’ve ever tried a random forest classification and whether Paradise does that. There has been a response that that is currently in R&D, but have you tried it anywhere else?
No. I’m perfectly happy with SOM classification and I’ve used it probably in 300 different surveys at this point in the last eight years and have had excellent results. I haven’t tried any other type of supervised learning process.
Fantastic. As of right now, there are no other questions. Does anyone else have any questions? Maybe we can give it a minute or two. Here’s one: can supervised learning be used to directly predict porosity, net-to-gross, et cetera?
Well, I’m not using supervised learning in this case. I have found that when it comes to the subsurface, unsupervised machine learning is actually superior. We don’t know, going into any one 3D, what the natural patterns in the data really are. If you try to restrict it or control it, then you’re forcing patterns that may or may not be there. The unsupervised learning methodology, to me, is the best for discovering what’s in the subsurface.
Great. Has there been any case of predictions of phase of hydrocarbons?
Deborah Sacrey: Predictions of what?
Have there been any cases of predictions of phase of hydrocarbon?
Deborah Sacrey: You’re saying phase of hydrocarbons?
Hana Kabazi: Phase, yeah. The phases of hydrocarbon.
No. A lot of people ask me how easy it is to find oil versus gas, and it’s not easy at all, because oil is so close to water in the subsurface. But believe it or not, subtle pattern changes that occur using the unsupervised methodology can differentiate between oil, gas, and water. I’ve used that in reef systems a couple of times with good success.
Fantastic. Thank you. If there aren’t any more questions, I think that is the last one, I want to thank you for the wonderful presentation. It was very informative. I want to thank everyone for attending our last breakout session. There will be a final presentation back in the auditorium at three o’clock and the subject is the impact of machine learning on the oil and gas service industry. Thank you so much for participating. Have a wonderful day. Thank you Deb.
Deborah Sacrey: Thanks.