OPTIMAL LEARNING FROM PRE-POST DRILL EVALUATIONS. THE CASE OF MULTIPLE TARGET PROSPECTS
Exploration Hot Spots 
GeoKnowledge AS

Introduction 
Systematic tracking of exploration results relative to predrill predictions is critical for improving both assessment performance and exploration decisions. There are significant risks and uncertainties in exploration, and all efforts that can assist in delivering accurate, unbiased and consistent assessments will ultimately enhance company exploration performance.
Producing accurate and unbiased assessments is challenging. So is measuring assessment performance. The choice of appropriate metrics is not obvious when assessments produce range estimates and when we often drill targets assessed to carry a significant dry hole risk (COS << 0.5). A dry hole is not necessarily an indicator of a substandard assessment; failure rates over a multiyear exploration campaign that are much higher (or lower) than predicted, on the other hand, can signal biases in the estimation of chances of success. Similarly, finding reserves that are much higher or lower than expected does not mean that the assessment of the target was incorrect or biased. Recurring results where discovered reserves are consistently lower (or higher) than expected, on the other hand, would signal potential for improvement.
This paper deals with an increasingly important dimension of assessment performance: the case where a single well targets multiple zones, compartments or reservoirs in a prospect. Most companies handle the situation as a case of multiple distinct targets. However, while simple, this approach ignores how well the assessment has handled the estimation of risk dependencies and volume correlations between targets. Ignoring dependencies can also potentially bias assessment performance tracking, in the sense that the tracking has fewer degrees of freedom than implied by the number of targets in the sample, particularly when there are systematic patterns in the relationship between risks and the number of segments targeted by the well. Using examples, we will review the issue, its potential impact and a simple approach to separating out the component related to the assessment of independent target attributes and the component related to the assessment of dependencies. We propose a consistent two-level tracking of risk assessment performance at the target and at the well level. We also propose an index that can measure the degree of observed dependency and can therefore also be used to track the quality of dependency estimates. As experienced with the current best practice of systematic tracking of assessment performance for individual targets, implementation of multiple target prospect assessment performance tracking should both improve assessments of this class of exploration ventures and stimulate more robust, accurate and transparent estimation of prospect-scale dependencies.
TRACKING RISK & UNCERTAINTY ASSESSMENT 
Tracking assessment performance means comparing postdrill results with predrill predictions. As such, tracking assessment performance is a key element and an important byproduct of the postdrill review of the results of an exploration well. The postdrill review provides a basis for deciding what to do as a follow-up to drilling the well. Most of the effort centers on follow-up when the result is a discovery: reassessment of the target in light of the data from the well, assessment of economic potential, decisions about the need for appraisal well(s), or decisions about preparing a preliminary development plan.
Postdrill review is also an important source of data that can be used to improve and calibrate our assessment methods, guidelines, standards and procedures. It can identify biases and inconsistencies both within and across teams. However, single-well comparisons between predrill predictions and postdrill results can only provide limited data for systematic learning concerning the performance, consistency and calibration of assessment procedures and methods. Meaningful measurement with stochastic inputs requires considering an exploration program as opposed to individual wells.

SIMPLE ASSESSMENT PERFORMANCE METRICS 
A simple measure of prediction quality is "has the program found what was expected?" This can be determined by computing the predrill expected resources for the program and comparing these with the total expected reserves discovered. For the predrill expectation, we multiply the COS for each well by the mean resource estimate for the well. This gives the risked estimated resources for the well. The sum over all exploration wells is the predrill expectation of the program. For the postdrill results, we add up the expected reserves for each discovery. The difference between predrill expectation and postdrill results is a simple measure of both program performance and the quality of the predrill assessments.
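The program-level calculation above can be sketched in a few lines. Note that the well names, COS values and resource figures below are illustrative assumptions, not data from the paper:

```python
# Sketch of the program-level expected-resource metric. All inputs are
# hypothetical: COS and mean recoverable resources (MMboe) per well.
predrill = {
    "well-1": {"cos": 0.30, "mean_resources": 100.0},
    "well-2": {"cos": 0.50, "mean_resources": 40.0},
    "well-3": {"cos": 0.20, "mean_resources": 250.0},
}

# Postdrill results: expected reserves booked for each discovery
# (dry holes contribute nothing).
postdrill_discoveries = {"well-2": 55.0}

# Risked predrill expectation: sum of COS * mean resources over all wells.
program_expectation = sum(w["cos"] * w["mean_resources"] for w in predrill.values())

# Postdrill result: total expected reserves across discoveries.
program_result = sum(postdrill_discoveries.values())

# The difference is the simple program-level performance measure.
expectation_gap = program_result - program_expectation

print(program_expectation, program_result, expectation_gap)
```

With these made-up numbers the risked predrill expectation is 100 MMboe against 55 MMboe discovered, illustrating how a single shortfall figure summarizes the program.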
Another simple measure of assessment performance is the predrill versus postdrill (pre-post) resource crossplot. The X-axis shows the predrill expected size while the Y-axis shows the postdrill discoveries recorded. The central diagonal line represents perfect calibration in terms of expectations. We often use a log-log format for the crossplot in order to accommodate programs with significant differences in predrill expected recoverable reserves.
Both the program expected resource metric and the pre-post resource crossplot provide information solely on expected values. Neither captures explicitly the distributional quality of the stochastic assessments.
CHANCE HISTOGRAM 
Consider the quality of chance of success (COS) estimates. In order to get a reasonable review of forecast COS, we need to relate discovery rates to relative predrill forecasts. The basic idea for such a performance metric is that we would expect more discoveries for wells that target prospects with a relatively high COS than wells that target prospects with a relatively low COS.
The chance histogram (Otis and Schneidermann, 1997) provides such a relative performance metric. In the chance histogram we define a set of chance categories. There is one chance category for each 10% increment of COS, with intervals [0.00, 0.09], [0.10, 0.19], [0.20, 0.29], …, [0.90, 0.99] and the midpoint of each interval as the representative value for the category. We then count the number of dry wells and discoveries in each chance category.
Fig 1. Example of Chance Histogram
For an unbiased and calibrated risk assessment we would expect the relative number of discoveries in each chance category to be identical to the chance category value. We can get a more direct indicator by redrawing the chance histogram as the % chance histogram, where each bar shows the relative percentages of dry and discovery wells. The diagonal line on the % chance histogram shows the proportion of discoveries expected with fully calibrated predrill chance of adequacy estimates (for more details and metrics on volume assessment performance see Stabell, 2006).
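A minimal sketch of the chance histogram bookkeeping, assuming hypothetical (COS, outcome) pairs for a small program:

```python
# Bin wells by predrill COS category and count dry holes vs discoveries.
# The well data below are illustrative, not from the paper.

def chance_category(cos):
    """Return the 10%-wide chance category index (0..9) for a COS value."""
    return min(int(cos * 10), 9)

# (predrill COS, discovery?) pairs for a hypothetical program.
wells = [(0.15, False), (0.18, True), (0.32, False),
         (0.35, True), (0.55, True), (0.58, False)]

counts = {}  # category midpoint -> [dry count, discovery count]
for cos, discovery in wells:
    midpoint = round(chance_category(cos) / 10 + 0.05, 2)
    dry_disc = counts.setdefault(midpoint, [0, 0])
    dry_disc[1 if discovery else 0] += 1

# Relative discovery percentage per category (the "% chance histogram").
pct = {m: d / (dry + d) for m, (dry, d) in counts.items()}
print(counts, pct)
```

For a calibrated assessment, the discovery fraction `pct[m]` in each category should approach the midpoint `m` as the number of wells grows.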
MULTISEGMENT PROSPECT BASICS 
Let us now consider the case of multisegment (multiple target) prospects. Figure 2 is an example of a prospect (Alpha) with 6 segments (A, B, C, D, E and F).
Fig 2. Prospect Alpha with 6 segments and well that targets 2 segments
The chance of success of a prospect is defined as the chance that at least one segment in the prospect is a success. Similarly, the success case volumes are the volumes obtained when 1, 2, …, n segments succeed.
To keep things simple, let us assume that all segments in Alpha have the same COS (chance of success) = P(discovery) = 0.3. The chance that at least one segment in Alpha is a success, if we drill all segments, is 1 minus the chance that all segments fail.
P(Alpha discovery) = 1 − (1−COS(A))·(1−COS(B))·(1−COS(C))·(1−COS(D))·(1−COS(E))·(1−COS(F)) = 1 − 0.7^6 = 0.88
For assessment performance tracking on a wellbywell basis, we need to consider the segments actually targeted by the well. For the case shown in Figure 2, we have a well that targets segments A and B. The P (Alpha well) is then 1 minus the chance that both A and B fail:
P(Alpha well discovery) = 1 − (1−COS(A))·(1−COS(B)) = 1 − (1−0.3)·(1−0.3) = 0.51
The above results hold if the COS of each segment is independent. If the segments are all fully dependent, on the other hand, then P(Alpha discovery) = 0.3, as is P(Alpha well discovery) – see Figure 3.
Fig 3. Independent vs. Dependent Alpha COS 
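The two limiting cases above can be sketched directly from the segment COS values (all six segments assumed at COS = 0.3, as in the example):

```python
# Prospect- and well-level COS for the Alpha example.

def prospect_cos_independent(segment_cos):
    """Chance that at least one segment succeeds, assuming risk independence:
    1 minus the chance that all segments fail."""
    p_all_fail = 1.0
    for cos in segment_cos:
        p_all_fail *= 1.0 - cos
    return 1.0 - p_all_fail

alpha = [0.3] * 6    # six segments A..F, each with COS = 0.3
well = alpha[:2]     # the well targets segments A and B only

p_alpha = prospect_cos_independent(alpha)   # 1 - 0.7**6, about 0.88
p_well = prospect_cos_independent(well)     # 1 - 0.7*0.7 = 0.51

# Full risk dependency: the segments succeed or fail together, so the
# prospect (and well) COS collapses to the single-segment COS.
p_alpha_dependent = 0.3

print(round(p_alpha, 2), round(p_well, 2), p_alpha_dependent)
```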
TRACKING MULTISEGMENT PROSPECTS 
With the above background on COS and the effects of risk dependency, we can now discuss issues of tracking performance at the prospect level.
As we see, we need to track wellbore chance of success (as opposed to prospect chance of success). That should be relatively straightforward.
It should also be apparent that if all segments are risk independent, then there is no direct need for tracking prospect level results. The prospect level results are the sum of the results for each segment. The results for each segment can be tracked independently. It is, however, still useful to report separately prospect level performance as otherwise it is easy to gloss over the well level results.
With risk dependency, on the other hand, it is important to recognize, and be able to disentangle, the assessment of COS for each segment and the assessment of risk dependency between segments in a prospect.
First, however, note that risk dependency affects how we can use and interpret segment level results. The example in Figure 3 provides a clear illustration of the issue.
Drilling out the six segments in Alpha under full risk dependency, either all the segments will be dry or all will be successful. In terms of performance tracking statistics, the result will therefore tend to either overstate or understate the actual segment-level results. Obviously, if we drill many Alpha analogs, the results will average out. However, for cases with a large number of segments with low COS, we will need to drill many Alpha analogs before we start seeing correct average estimates of segment COS. In short, risk dependency can bias performance tracking estimates at the segment level.
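This slow-convergence mechanism can be illustrated with a small Monte Carlo sketch (segment count, COS and random seed are illustrative assumptions):

```python
# With full dependency, one random draw decides all six segments of an
# Alpha analog, so small samples scatter widely around the true COS.
import random

def drill_alpha(n_segments, cos, rng):
    """Drill one fully dependent Alpha analog: all segments share one outcome."""
    success = rng.random() < cos
    return [success] * n_segments

rng = random.Random(42)
cos, n_segments = 0.3, 6

for n_analogs in (5, 500):
    outcomes = []
    for _ in range(n_analogs):
        outcomes.extend(drill_alpha(n_segments, cos, rng))
    observed = sum(outcomes) / len(outcomes)
    print(n_analogs, round(observed, 3))
```

With only a handful of analogs the observed segment success rate can only take a few coarse values (multiples of 1/5 here), whereas a large sample converges toward the true COS of 0.3.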
In order to both expose the degree of segment-level bias and track performance in terms of assessment of risk dependency, we need to track prospect level assessment performance.

PROSPECT LEVEL ASSESSMENT PERFORMANCE METRICS 
We propose three indices for the tracking of risk estimates at the prospect level (where we in all cases will consider wellbore COS estimates).
The first measure is the estimated chance of success of the prospect. This assessment will include all the risk dependency assumptions.
The second measure is the ratio of the estimated chance of success of the prospect relative to the estimated chance of success assuming full risk independence of all the segments in the prospect. We call this the INDEP (independency) index.
INDEP = Actual COS estimate / COS with fully independent segments
The INDEP index will of course be 1.0 if there is no assumed risk dependency. The INDEP index will be relatively low if (A) there are a large number of segments in the prospect and (B) the segments all have a relatively low COS.
In our example above, if we assume full dependency, then INDEP = 0.3/0.88 = 0.34. If we assume no dependency, then INDEP = 0.88/0.88 = 1.0.
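The INDEP calculation for the Alpha example can be sketched as follows (segment COS values as in the example above):

```python
# INDEP index: assessed prospect COS divided by the COS implied by
# fully independent segments.

def indep_index(assessed_prospect_cos, segment_cos):
    """Ratio of the assessed prospect COS to the fully independent COS."""
    p_all_fail = 1.0
    for cos in segment_cos:
        p_all_fail *= 1.0 - cos
    p_independent = 1.0 - p_all_fail
    return assessed_prospect_cos / p_independent

segments = [0.3] * 6  # the six Alpha segments

# Full dependency: the assessed prospect COS equals the single-segment COS.
indep_full_dep = indep_index(0.3, segments)        # about 0.34

# No dependency: the assessed COS equals the independent value, so INDEP = 1.
indep_no_dep = indep_index(1 - 0.7**6, segments)

print(round(indep_full_dep, 2), round(indep_no_dep, 2))
```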
We will never be able to measure INDEP postdrill. However, it gives us an indicator for categorizing our prospect level COS estimates. In other words, we can potentially make a separate prospect level chance histogram for prospects where INDEP > 0.75 and for prospects where INDEP ≤ 0.75.
The third measure of prospect level assessment performance tracking, COS2, is a derisked measure of the chance of obtaining success in two or more segments given that the prospect is a success (i.e., given that at least one segment in the prospect is a success).
For the case of a three-segment (A, B and C) prospect where each segment has the same COS = 0.3, the chance that at least two segments succeed is 1 minus the chance that at least two fail. There are three ways that exactly two segments can fail: A and B fail, A and C fail, or B and C fail. In addition, there is the chance that all three fail.
For the full dependency case, COS2 is equal to 1.0. For the fully independent case, COS2 is calculated as follows:
COS2 (three segments with COS = 0.3) = P( 2 or more successes) / P( at least one success)
COS2 (three segments with COS = 0.3) = (1 − 3·(1−0.3)²·0.3 − (1−0.3)³) / (1 − (1−0.3)³) = 0.22/0.66 = 0.33
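The same arithmetic generalizes to n independent segments with equal COS; a minimal sketch:

```python
# COS2 = P(2 or more successes | at least one success) for n independent
# segments sharing the same COS, following the three-segment example.

def cos2_independent(n, cos):
    """Derisked chance of success in two or more segments."""
    p_zero = (1 - cos) ** n                     # all n segments fail
    p_one = n * cos * (1 - cos) ** (n - 1)      # exactly one success
    p_at_least_one = 1 - p_zero
    p_at_least_two = 1 - p_zero - p_one
    return p_at_least_two / p_at_least_one

value = cos2_independent(3, 0.3)
print(round(value, 2))  # reproduces the 0.33 worked out above
```

For the full dependency case no calculation is needed: given that one segment succeeds, all do, so COS2 = 1.0.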

ILLUSTRATIVE APPLICATION 
We have considered a synthetic exploration program with 30 prospects and 30 wells, where each well targets all the segments in its prospect. The segments within each prospect all have the same COS. 15 of the prospects have no dependencies and 15 have full dependency. The COS of the segments varies from 0.2 to 0.6.
The following two figures show the prospect level chance (COS and COS2) histogram for the exploration program.
Figure 4 Prospectlevel COS chance histogram (N = 30 prospects)
Figure 5 Prospectlevel COS2 chance histogram (N = 12 successes) 
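A program of this kind can be simulated as a sketch. Note that the segment count per prospect, the random seed and the resulting success counts below are assumptions for illustration; they do not reproduce the figures above:

```python
# Simulate a synthetic program: 30 single-well prospects, 15 fully
# dependent and 15 fully independent, segment COS drawn from 0.2-0.6.
import random

rng = random.Random(7)
n_segments = 4  # assumed segments per prospect (illustrative)

results = []  # (predrill wellbore COS, number of successful segments)
for i in range(30):
    seg_cos = rng.choice([0.2, 0.3, 0.4, 0.5, 0.6])
    if i < 15:
        # Full dependency: one draw decides every segment in the prospect.
        well_cos = seg_cos
        successes = n_segments if rng.random() < seg_cos else 0
    else:
        # Full independence: each segment succeeds or fails on its own.
        well_cos = 1 - (1 - seg_cos) ** n_segments
        successes = sum(rng.random() < seg_cos for _ in range(n_segments))
    results.append((well_cos, successes))

# Inputs for the prospect-level COS and COS2 chance histograms.
discoveries = [r for r in results if r[1] >= 1]
multi_segment = [r for r in discoveries if r[1] >= 2]
print(len(discoveries), len(multi_segment))
```

The `results` list feeds the prospect-level COS histogram directly, while the discovery subset with two or more successful segments feeds the COS2 histogram.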
References 
Otis, R.M. and Schneidermann, N., 1997, A process for evaluating exploration prospects: AAPG Bulletin, v. 81, no. 7, p. 1087–1109.
Stabell, C., 2006, A New Metric for Tracking Assessment Performance: Petex 2006.
