Geostatistics: Modeling Spatial Uncertainty, Second Edition


Jean-Paul Chilès and Pierre Delfiner



Praise for the First Edition: ". . . a readable, comprehensive volume that . . . belongs on the desk, close at hand, of any serious researcher or practitioner." --Mathematical Geosciences

The state of the art in geostatistics. Geostatistical models and techniques such as kriging and stochastic multi-realizations exploit spatial correlations to evaluate natural resources, help optimize their development, and address environmental issues related to air and water quality, soil pollution, and forestry. Geostatistics: Modeling Spatial Uncertainty, Second Edition presents a comprehensive, up-to-date reference on the topic, now featuring the latest developments in the field. The authors explain both the theory and applications of geostatistics through a unified treatment that emphasizes methodology. Key topics that are the foundation of geostatistics are explored in depth, including stationary and nonstationary models; linear and nonlinear methods; change of support; multivariate approaches; and conditional simulations. The Second Edition highlights the growing number of applications of geostatistical methods and discusses three key areas of growth in the field:

* New results and methods, including kriging very large datasets; kriging with outliers; nonseparable space-time covariances; multipoint simulations; pluri-Gaussian simulations; gradual deformation; and extreme-value geostatistics
* Newly formed connections between geostatistics and other approaches such as radial basis functions, Gaussian Markov random fields, and data assimilation
* New perspectives on topics such as collocated cokriging, kriging with an external drift, discrete Gaussian change-of-support models, and simulation algorithms

Geostatistics, Second Edition is an excellent book for courses on the topic at the graduate level. It also serves as an invaluable reference for earth scientists, mining and petroleum engineers, geophysicists, and environmental statisticians who collect and analyze data in their everyday work.


Number of pages: 1337


Preface to the Second Edition

Preface to the First Edition



Types of Problems Considered

Description or Interpretation?

Chapter 1. Preliminaries

1.1 Random Functions

1.2 On the Objectivity of Probabilistic Statements

1.3 Transitive Theory

Chapter 2. Structural Analysis

2.1 General Principles

2.2 Variogram Cloud and Sample Variogram

2.3 Mathematical Properties of the Variogram

2.4 Regularization and Nugget Effect

2.5 Variogram Models

2.6 Fitting a Variogram Model

2.7 Variography in the Presence of a Drift

2.8 Simple Applications of the Variogram

2.9 Complements: Theory of Variogram Estimation and Fluctuation

Chapter 3. Kriging

3.1 Introduction

3.2 Notations and Assumptions

3.3 Kriging With a Known Mean

3.4 Kriging With an Unknown Mean

3.5 Estimation of a Spatial Average

3.6 Selection of a Kriging Neighborhood

3.7 Measurement Errors and Outliers

3.8 Case Study: The Channel Tunnel

3.9 Kriging Under Inequality Constraints

Chapter 4. Intrinsic Model of Order k

4.1 Introduction

4.2 A Second Look At the Model of Universal Kriging

4.3 Allowable Linear Combinations of Order k

4.4 Intrinsic Random Functions of Order k

4.5 Generalized Covariance Functions

4.6 Estimation in the IRF Model

4.7 Generalized Variogram

4.8 Automatic Structure Identification

4.9 Stochastic Differential Equations

Chapter 5. Multivariate Methods

5.1 Introduction

5.2 Notations and Assumptions

5.3 Simple Cokriging

5.4 Universal Cokriging

5.5 Derivative Information

5.6 Multivariate Random Functions

5.7 Shortcuts

5.8 Space–Time Models

Chapter 6. Nonlinear Methods

6.1 Introduction

6.2 Global Point Distribution

6.3 Local Point Distribution: Simple Methods

6.4 Local Estimation By Disjunctive Kriging

6.5 Selectivity and Support Effect

6.6 Multi-Gaussian Change-of-Support Model

6.7 Affine Correction

6.8 Discrete Gaussian Model

6.9 Non-Gaussian Isofactorial Change-of-Support Models

6.10 Applications and Discussion

6.11 Change of Support By the Maximum (C. Lantuéjoul)

Chapter 7. Conditional Simulations

7.1 Introduction and Definitions

7.2 Direct Conditional Simulation of a Continuous Variable

7.3 Conditioning By Kriging

7.4 Turning Bands

7.5 Nonconditional Simulation of a Continuous Variable

7.6 Simulation of a Categorical Variable

7.7 Object-Based Simulations: Boolean Models

7.8 Beyond Standard Conditioning

7.9 Additional Topics

7.10 Case Studies




In memory of Georges MATHERON (1930–2000)



Editors: David J. Balding, Noel A. C. Cressie, Garrett M. Fitzmaurice, Harvey Goldstein, Iain M. Johnstone, Geert Molenberghs, David W. Scott, Adrian F. M. Smith, Ruey S. Tsay, Sanford Weisberg

Editors Emeriti: Vic Barnett, J. Stuart Hunter, Joseph B. Kadane, Jozef L. Teugels

A complete list of the titles in this series appears at the end of this volume.

Copyright © 2012 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey

Published simultaneously in Canada

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at

ISBN: 978-0-470-18315-1

Library of Congress Cataloging-in-Publication Data is available.

Preface to the Second Edition

Twelve years after publication of the first edition in 1999, ideas have matured and new perspectives have emerged. It has become possible to sort out material that has lost relevance from the core methods that are here to stay. Many new developments have been made in the field, a number of pending problems have been solved, and bridges with other approaches have been established. At the same time there has been an explosion in the applications of geostatistical methods, including in new territories unrelated to geosciences—who would have thought that one day engineers would krige aircraft wings? All these factors called for a thoroughly revised and updated second edition.

Our intent was to integrate the new material without increasing the size of the book. To this end we removed Chapter 8 (Scale effects and inverse problems) which covered stochastic hydrogeology but was too detailed for the casual reader and too incomplete for the specialist. We decided to keep only the specific contributions of geostatistics to hydrogeology and to distribute the material throughout the relevant chapters. The following is an overview of the main changes from the first edition and their justification.

Chapter 2 (Structural analysis) provides complements on practical questions such as spatial declustering and declustered statistics, variogram map calculation for data on a regular grid, and variograms in a non-Euclidean coordinate system (transformation to a geochronological coordinate system). The Cauchy model is extended to the Cauchy class, whose two shape parameters can account for a variety of behaviors at short as well as at large distances. The Matérn model and the logarithmic (de Wijsian) model are related to Gaussian Markov random fields (GMRFs). New references are given on variogram fitting and sampling design. New sections propose covariance models on the sphere or on a river network. The chapter also includes new points on random function theory, such as a reference to the recent proof of a conjecture of Matheron on the characterization of an indicator function by its covariogram. The introductory example of variography in the presence of a drift was removed to save space.

The external drift model which was presented with multivariate methods is now introduced in Chapter 3 (Kriging) as a variant of the universal kriging model with polynomial drift. The special case of a constant unknown mean (ordinary kriging) is treated explicitly and in detail as it is the most common in applications. Dual kriging receives more attention because of its kinship with radial basis function interpolation (RBF), and its wide use in the design and analysis of computer experiments (DACE) to solve engineering problems. Three solutions are proposed to address the longstanding problem of the spurious discontinuities created by the use of moving neighborhoods in the case of a large dataset, namely covariance tapering, Gaussian Markov random field approximation, and continuous moving neighborhoods. Another important kriging issue, how to deal with outliers, is discussed and a new, relatively simple, truncation model developed for gold and uranium mines is presented. Finally a new form of kriging, Poisson kriging, in which observations derive from a Poisson time process, is introduced.

Few changes were made to Chapter 4 (Intrinsic model of order k). The main one is the addition of Micchelli’s theorem, which provides a simple characterization of isotropic generalized covariances of order k. Another addition is an analysis of the structure of the inverse of the intrinsic kriging matrix. The Poisson differential equation ΔZ = Y, previously in the deleted Chapter 8, survives in this chapter.

Chapter 5 (Multivariate methods) was largely rewritten and augmented. The main changes concern collocated cokriging and space–time models. The chapter now includes a thorough review of different forms of collocated cokriging, with a clear picture of which underlying models support the approach without loss of information and which use it just as a convenient simplification. Collocated cokriging is also systematically compared with its common alternative, kriging with an external drift. As for space–time models, they were a real threat to the size of the book because of the surge of activity in the subject. To deal with situations where a physical model is available to describe the time evolution of the system, we chose to present sequential data assimilation and ensemble Kalman filtering (EnKF) in some detail, highlighting their links with geostatistics. For the alternative case where no dynamic model is available, the focus is on new classes of nonseparable space–time covariances that enable kriging in a space–time domain. The chapter contains numerous other additions, such as potential field interpolation of orientation data, extraction of the common part of two surveys using automatic factorial cokriging, maximum autocorrelation factors, the multivariate Matérn cross-covariance model, layer-cake estimation including seismic information, and compositional data with geometry on the simplex.

Nonlinear methods and conditional simulations generally require a preliminary transformation of the variable of interest into a variable with a specified marginal distribution, usually a normal one. As this step is critical for the quality of the results, it has been expanded and updated and now forms a specific section of Chapter 6 (Nonlinear methods). More elaborate methods than the simple normal score transform are proposed. The presentation of the change of support has been restructured. We now present each model at the global scale and then immediately continue with the local scale. Conditional expectation is treated in more detail and accounts for a locally variable mean. The most widely used change-of-support model, the discrete Gaussian model, is discussed in depth, including the variant that appeared in the 2000s. Practical implementation questions are examined: locally variable mean, selection on the basis of future information (indirect resources), uniform conditioning. Finally, this chapter features a section on the change of support by the maximum, a topic whose development in a spatial context is still in its infancy but is important for extreme-value studies.

Chapter 7 incorporates the numerous advances made in conditional simulations in the last decade. The simulation of fractional Brownian motion and its extension to IRF–k’s, which was possible only in specific cases (on a regular 1D grid, or at the cost of an approximation), is now possible exactly. A new insight into the Gibbs sampler enables the definition of a Gibbs propagation algorithm that does not require inversion of the covariance matrix. Pluri-Gaussian simulations are explained in detail and their use is illustrated in the Brent cliff case study, which has been completely reworked to reflect current practice (separable covariance models are no longer required). New simulation methods are presented: stochastic process-based simulation, multi-point simulation, gradual deformation. The use of simulated annealing for building conditional simulations has been completely revised. Stochastic seismic inversion and Bayesian approaches have been brought up to date. Upscaling is also discussed in the chapter.


Special acknowledgement is due to Christian Lantuéjoul for his meticulous reading of Chapters 6 and 7, numerous helpful comments and suggestions, and for writing the section on change of support by the maximum. We are also greatly indebted to Jacques Rivoirard for many contributions and insights. Thierry Coléou helped us with seismic applications and Henning Omre with Bayesian methods. Xavier Freulon provided the top-cut gold grades example and Hélène Beucher the revised simulation of the Brent cliff. Didier Renard carried out calculations for new figures and Philippe Le Caër redrew the cover figure. This second edition also benefits from the fine remarks of some readers of the first edition, notably Tilmann Gneiting, and from many informal discussions with our colleagues of the Geostatistics group of MINES ParisTech.

We remain, of course, grateful to the individuals acknowledged in the Preface to the first edition, and especially to Georges Matheron, who left us in 2000, but continues to be a source of inspiration.


October 23, 2011

Jean-Paul Chilès

Pierre Delfiner

Preface to the First Edition

This book covers a relatively specialized subject matter, geostatistics, as it was defined by Georges Matheron in 1962, when he coined this term to designate his own methodology of ore reserve evaluation. Yet it addresses a larger audience, because the applications of geostatistics now extend to many fields in the earth sciences, including not only the subsurface but also the land, the atmosphere, and the oceans.

The reader may wonder why such a narrow subject should occupy so many pages. Our intent was to write a short book. But this would have required us to sacrifice either the theory or the applications. We felt that neither of these options was satisfactory—there is no need for yet another introductory book, and geostatistics is definitely an applied subject. We have attempted to reconcile theory and practice by including application examples, which are discussed with due care, and about 160 figures. This results in a somewhat weighty volume, although hopefully more readable.

This book gathers in a single place a number of results that were either scattered, not easily accessible, or unpublished. Our ambition is to provide the reader with a unified view of geostatistics, with an emphasis on methodology. To this end we detail simple proofs when their understanding is deemed essential for geostatisticians, and we omit complex proofs that are too technical. Although some theoretical arguments may fall beyond the mathematical and statistical background of practitioners, they have been included for the sake of a complete and consistent development that the more theoretically inclined reader will appreciate. These sections, as well as ancillary or advanced topics, are set in smaller type.

Many references in this book point to the works of Matheron and the Center for Geostatistics in Fontainebleau, which he founded at the Paris School of Mines in 1967 and headed until his retirement in 1996. Without overlooking the contribution of Gandin, Matérn, Yaglom, Krige, de Wijs, and many others, it is from Matheron that geostatistics emerged as a discipline in its own right—a body of concepts and methods, a theory, and a practice—for the study of spatial phenomena. Of course this initial group spawned others, notably in Europe and North America, under the impetus of Michel David and André Journel, followed by numerous researchers trained in Fontainebleau first, and then elsewhere. This book pays tribute to all those who participated in the development of geostatistics, and our large list of references attempts to give credit to the various contributions in a complete and fair manner.

This book is the outcome of a long maturing process nourished by experience. We hope that it will communicate to the reader our enthusiasm for this discipline at the intersection between probability theory, physics, and earth sciences.


This book owes more than we can say to Georges Matheron. Much of the theory presented here is his work, and we had the privilege of seeing it in the making during the years that we spent at the Center for Geostatistics. In later years he always generously opened his door to us when we asked for advice on fine points. It was a great comfort to have access to him for insight and support. We are also indebted to the late Geoffrey S. Watson, who showed an early interest in geostatistics and introduced it to the statistical community. He was kind enough to invite one of the authors to Princeton University and, as an advisory editor of the Wiley Interscience Series, made this book possible. We wish he had been with us to see the finished product.

The manuscript of this book greatly benefited from the meticulous reading and quest for perfection of Christian Lantuéjoul, who suggested many valuable improvements. We also owe much to discussions with Paul Switzer, whose views are always enlightening and helped us relate our presentation to mainstream statistics. We have borrowed some original ideas from Jean-Pierre Delhomme, who shared the beginnings of this adventure with us. Bernard Bourgine contributed to the illustrations. This book could not have been completed without the research funds of Bureau de Recherches Géologiques et Minières, whose support is gratefully acknowledged.

We would like to express our thanks to John Wiley & Sons for their encouragement and exceptional patience during a project which has spanned many years, and especially to Bea Shube, the Wiley-Interscience Editor when we started, and her successors Kate Roach and Steve Quigley.

Finally, we owe our families, and especially our dear wives Chantal and Edith, apologies for all the time we stole from them, and we thank them for their understanding and forbearance.

La Villetertre

July 12, 1998

Jean-Paul Chilès

Pierre Delfiner


Abbreviations

ALC–k  allowable linear combination of order k
c.d.f.  cumulative distribution function
CK  cokriging
DFT  discrete Fourier transform
DGM1, DGM2  discrete Gaussian model 1, 2
DK  disjunctive kriging
GC–k  generalized covariance of order k
GLS  generalized least squares
GV  generalized variogram
i.i.d.  independent and identically distributed
IRF  intrinsic random function
IRF–k  intrinsic random function of order k
KED  kriging with external drift
MM1, MM2  Markov model 1, 2
m.s.  mean square
m.s.e.  mean square error
OK  ordinary kriging
PCA  principal component analysis
p.d.f.  probability density function
RF  random function
SK  simple kriging
SRF  stationary random function
UK  universal kriging


Geostatistics aims at providing quantitative descriptions of natural variables distributed in space or in time and space. Examples of such variables are

Ore grades in a mineral deposit
Depth and thickness of a geological layer
Porosity and permeability in a porous medium
Density of trees of a certain species in a forest
Soil properties in a region
Rainfall over a catchment area
Pressure, temperature, and wind velocity in the atmosphere
Concentrations of pollutants in a contaminated site

These variables exhibit an immense complexity of detail that precludes a description by simplistic models such as constant values within polygons, or even by standard well-behaved mathematical functions. Furthermore, for economic reasons, these variables are often sampled very sparsely. In the petroleum industry, for example, the volume of rock sampled typically represents a minute fraction of the total volume of a hydrocarbon reservoir. The following figures, from the Brent field in the North Sea, illustrate the orders of magnitude of the volume fractions investigated by each type of data (“cuttings” are drilling debris, and “logging” data are geophysical measurements in a wellbore):

Cores 0.000 000 001

Cuttings 0.000 000 007

Logging 0.000 001

By comparison, if we used the same proportions for an opinion poll of the 100 million US households (to take a round number), we would interview only between 0.1 and 100 households, while 1500 is standard. Yet the economic implications of sampling for natural resources development projects can be significant. The cost of a deep offshore development is of the order of 10 billion dollars. Similarly, in the mining industry “the decision to invest up to 1–2 billion dollars to bring a major new mineral deposit on line is ultimately based on a very judicious assessment of a set of assays from a hopefully very carefully chosen and prepared group of samples which can weigh in aggregate less than 5 to 10 kilograms” (Parker, 1984).
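The poll comparison above is simple arithmetic and can be checked in a few lines; the fractions and the round household count are those quoted in the text.

```python
# Scaling the Brent-field volume fractions to an opinion poll of
# 100 million US households; figures are those given in the text.
fractions = {"cores": 1e-9, "cuttings": 7e-9, "logging": 1e-6}
households = 100_000_000

poll_sizes = {source: households * f for source, f in fractions.items()}
# cores ~ 0.1 household, logging ~ 100 households, versus a standard poll of ~1500
```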

Naturally, these examples are extreme. Such investment decisions are based on studies involving many disciplines besides geostatistics, but they illustrate the notion of spatial uncertainty and how it affects development decisions. The fact that our descriptions of spatial phenomena are subject to uncertainty is now generally accepted, but for a time it met with much resistance, especially from engineers who are trained to work deterministically. In the oil industry there are anecdotes of managers who did not want to see uncertainty attached to resource estimates because it did not look good—it meant incompetence. For job protection, it was better to systematically underestimate resources. (Ordered by his boss to get rid of uncertainty, an engineer once gave an estimate of proven oil resources equal to the volume of oil contained in the borehole!) Such a conservative attitude led to the abandonment of valuable prospects. In oil exploration, profit comes with risk.

Geostatistics provides the practitioner with a methodology to quantify spatial uncertainty. Statistics come into play because probability distributions are the meaningful way to represent the range of possible values of a parameter of interest. In addition, a statistical model is well-suited to the apparent randomness of spatial variations. The prefix “geo” emphasizes the spatial aspect of the problem. Spatial variables are not completely random but usually exhibit some form of structure, in an average sense, reflecting the fact that points close in space tend to assume close values. G. Matheron (1965) coined the term regionalized variable to designate a numerical function z(x) depending on a continuous space index x and combining high irregularity of detail with spatial correlation. Geostatistics can then be defined as “the application of probabilistic methods to regionalized variables.” This is different from the vague usage of the word in the sense “statistics in the geosciences.” In this book, geostatistics refers to a specific set of models and techniques, largely developed by G. Matheron, in the lineage of the works of L. S. Gandin in meteorology, B. Matérn in forestry, D. G. Krige and H. J. de Wijs in mining, and A. Y. Khinchin, A. N. Kolmogorov, P. Lévy, N. Wiener, A. M. Yaglom, among others, in the theory of stochastic processes and random fields. We will now give an overview of the various geostatistical methods and the types of problems they address and conclude by elaborating on the important difference between description and interpretation.


Types of Problems Considered
The presentation follows the order of the chapters. For specificity, the problems presented refer to the authors’ own background in earth sciences applications, but newcomers with different backgrounds and interests will surely find equivalent formulations of the problems in their own disciplines. Geostatistical terms will be introduced and highlighted by italics.


The Random Function Model
The quantification of spatial uncertainty requires a model specifying the mechanism by which spatial randomness is generated. The simplest approach is to treat the regionalized variable as deterministic and the positions of the samples as random, assuming for example that they are selected uniformly and independently over a reference area, in which case standard statistical rules for independent random variables apply, such as that for the variance of the mean. If the samples are collected on a systematic grid, they are not independent and things become more complicated, but a theory is possible by randomizing the grid origin.
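The i.i.d. rule invoked above (the variance of the mean is the spatial variance divided by n) can be checked numerically. The following sketch, with an assumed sinusoidal "regionalized variable" on [0, 1], is illustrative only and not from the book.

```python
import numpy as np

# Treat the regionalized variable as deterministic and the sample locations as
# random: for n locations drawn uniformly and independently on [0, 1], the
# usual i.i.d. rule Var(mean) = spatial variance / n applies.
rng = np.random.default_rng(0)
z = lambda x: np.sin(2 * np.pi * x)   # a deterministic "regionalized variable"
n, reps = 25, 20000

# Repeat the random survey many times and look at the spread of the sample mean.
means = np.array([z(rng.uniform(0.0, 1.0, n)).mean() for _ in range(reps)])
# The spatial mean of z is 0 and its spatial variance is 0.5,
# so Var(mean) should be close to 0.5 / 25 = 0.02.
```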

Geostatistics takes the bold step of associating randomness with the regionalized variable itself, by using a stochastic model in which the regionalized variable is regarded as one among many possible realizations of a random function. Some practitioners dispute the validity of such a probabilistic approach on the grounds that the objects we deal with—a mineral deposit or a petroleum reservoir—are uniquely deterministic. Probabilities and their experimental foundation in the famous “law of large numbers” require the possibility of repetitions, which are impossible with objects that exist unambiguously in space and time. The objective meaning and relevance of a stochastic model under such circumstances is a fundamental question of epistemology that needs to be resolved. The clue is to carefully distinguish the model from the reality it attempts to capture. Probabilities do not exist in Nature but only in our models. We do not choose to use a stochastic model because we believe Nature to be random (whatever that may mean), but simply because it is analytically useful. The probabilistic content of our models reflects our imperfect knowledge of a deterministic reality. We should also keep in mind that models have their limits and represent reality only up to a certain point. And finally, no matter what we do and how carefully we work, there is always a possibility that our predictions and our assessments of uncertainty turn out to be completely wrong, because for no foreseeable reason the phenomenon at unknown places is radically different from anything observed (what Matheron calls the risk of a “radical error”).

Structural Analysis

Having observed that spatial variability is a source of spatial uncertainty, we have to quantify and model spatial variability. What does an observation at a point tell us about the values at neighboring points? Can we expect continuity in a mathematical sense, or in a statistical sense, or no continuity at all? What is the signal-to-noise ratio? Are variations similar in all directions or is there anisotropy? Do the data exhibit any spatial trend? Are there characteristic scales and what do they represent? Is the histogram symmetric or skewed?

Answering these questions, among others, is known in geostatistics as structural analysis. One key tool is a structure function, the variogram, which describes statistically how the values at two points become different as the separation between these points increases. The variogram is the simplest way to relate uncertainty with distance from an observation. Other two-point structure functions can be defined that, when considered together, provide further clues for modeling. If the phenomenon is spatially homogeneous and densely sampled, it is even possible to go beyond structure functions and determine the complete bivariate distributions of measurements at pairs of points. In applications there is rarely enough data to allow empirical determination of multiple-point statistics beyond two points, a notable exception being when the data are borrowed from training images.
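As a concrete illustration of the structure-function idea, here is a minimal sketch of the classical sample variogram in one dimension; the `sample_variogram` helper and the Brownian-like test profile are assumptions made for illustration, not code from the book.

```python
import numpy as np

def sample_variogram(x, z, lags, tol=0.5):
    """Classical (Matheron) variogram estimator in 1D:
    gamma(h) = (1 / 2N(h)) * sum of (z_i - z_j)^2 over pairs separated by ~h."""
    d = np.abs(x[:, None] - x[None, :])        # pairwise separations
    dz2 = (z[:, None] - z[None, :]) ** 2       # pairwise squared differences
    gamma = np.empty(len(lags))
    for k, h in enumerate(lags):
        pairs = np.triu(np.abs(d - h) <= tol, k=1)   # pairs at lag ~h, counted once
        gamma[k] = 0.5 * dz2[pairs].mean()
    return gamma

# Illustrative data: a Brownian-like profile on a regular transect, for which
# the variogram grows roughly linearly with distance (no sill).
rng = np.random.default_rng(42)
x = np.arange(500.0)
z = np.cumsum(rng.standard_normal(500))
gamma = sample_variogram(x, z, lags=[1.0, 2.0, 5.0, 10.0])
# gamma increases with lag: nearby values resemble each other more than distant ones
```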

Survey Optimization

In resources estimation problems the question arises as to which sampling pattern ensures the best precision. The variogram alone permits a comparison between random, random stratified, and systematic sampling patterns. Optimizing variogram estimation may actually be a goal in itself. In practice the design is often constrained by operational and economic considerations, and the real question is how to optimize the parameters of the survey. Which grid mesh should be used to achieve a required precision? What is the optimal spacing between survey lines? What is the best placement for an additional appraisal well? Does the information expected from acquiring or processing more data justify the extra cost and delay? What makes life interesting is that these questions must be answered, of course, prior to acquiring the data.


We often need to estimate the values of a regionalized variable at places where it has not been measured. Typically, these places are the nodes of a regular grid laid out on the studied domain, in which case the interpolation process is sometimes known as “gridding.” Once grids are established, they are often used as the representation of reality, without reference to the original data. They are the basis for new grids obtained by algebraic or Boolean operations, contour maps, volumetric calculations, and the like. Thus the computation of grids deserves care and cannot rely on simplistic interpolation methods.

The estimated quantity is not necessarily the value at a point; in many cases a grid node is meant to represent the grid cell surrounding it. This is typical for inventory estimation or for numerical modeling. Then we estimate the mean value over a cell, or a block, and more generally some weighted average.

In all cases we wish our estimates to be “accurate.” This means, first, that on the average our estimates are correct; they are not systematically too high or too low. This property is captured statistically by the notion of unbiasedness. It is especially critical for inventory estimation and was the original motivation for the invention of kriging. The other objective is precision, and it is quantified by the notion of error variance, or its square root, the standard error, which is expressed in the same units as the data.

The geostatistical interpolation technique of kriging comes in different flavors qualified by an adjective: simple kriging, ordinary kriging, universal kriging, intrinsic kriging, and so on, depending on the underlying model. The general approach is to consider a class of unbiased estimators, usually linear in the observations, and to find the one with minimum uncertainty, as measured by the error variance. This optimization involves the statistical model established during the structural analysis phase, and there lies the fundamental difference with standard interpolation methods: These focus on modeling the interpolating surface, whereas geostatistics focuses on modeling the phenomenon itself.
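To make the "minimum error variance among unbiased linear estimators" idea concrete, here is a minimal ordinary-kriging sketch in Python with NumPy. The exponential covariance model, the function names (`exp_cov`, `ordinary_kriging`), and the toy data are illustrative assumptions, not the book's notation.

```python
import numpy as np

def exp_cov(h, sill=1.0, scale=10.0):
    """Exponential covariance model C(h) = sill * exp(-h / scale)."""
    return sill * np.exp(-np.asarray(h) / scale)

def ordinary_kriging(xy, z, x0, cov=exp_cov):
    """Ordinary kriging estimate and error variance at target point x0.

    Solves the (n+1) x (n+1) system with a Lagrange multiplier that
    enforces unit-sum weights (the unbiasedness condition).
    """
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = cov(d)                  # data-to-data covariances
    K[:n, n] = K[n, :n] = 1.0           # unbiasedness constraint
    rhs = np.append(cov(np.linalg.norm(xy - x0, axis=1)), 1.0)
    sol = np.linalg.solve(K, rhs)
    w, mu = sol[:n], sol[n]             # kriging weights, Lagrange multiplier
    est = w @ z
    var = cov(0.0) - rhs[:n] @ w - mu   # kriging (error) variance
    return est, var

# Four data points at the corners of a square, target at the center
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([1.0, 2.0, 3.0, 4.0])
est, var = ordinary_kriging(xy, z, np.array([5.0, 5.0]))
```

By symmetry, the four corner data receive equal weights at the central target; and kriging is an exact interpolator, so at a data location the estimate reproduces the datum and the kriging variance vanishes.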

Polynomial Drift

Unexpected difficulties arise when the data exhibit a spatial trend, which in geostatistical theory is modeled as a space-varying mean called drift. The determination of the variogram in the presence of a drift is often problematic due to the unclear separation between global and local scales. The problem disappears by considering a new structural tool, the generalized covariance, which is associated with increments of order k that filter out polynomial drifts, just like ordinary increments filter out a constant mean. When a polynomial drift is present, the generalized covariance is the minimum parametric information required for kriging. An insightful bridge with radial basis function interpolation, including thin plate splines, can be established.
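The standard formulation can be sketched as follows, with notation chosen here for illustration: the drift is a low-degree polynomial, generalized increments filter it out, and their variance is carried by the generalized covariance alone.

```latex
% Model: polynomial drift of degree k plus a residual Y
Z(x) = \sum_{\ell} a_\ell\, f^\ell(x) + Y(x),
  \qquad f^\ell = \text{monomials of degree} \le k .

% A generalized increment of order k uses weights \lambda_i that
% annihilate all such monomials, hence the unknown coefficients a_\ell:
\sum_i \lambda_i f^\ell(x_i) = 0 \ \ \forall \ell
\;\Longrightarrow\;
\mathrm{Var}\Bigl(\sum_i \lambda_i Z(x_i)\Bigr)
  = \sum_{i,j} \lambda_i \lambda_j\, K(x_i - x_j).

% In 2D, the thin plate spline corresponds to k = 1 with the
% generalized covariance
K(h) = \lVert h \rVert^2 \log \lVert h \rVert .
```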

Intrinsic random functions of order k (IRF–k), which are associated with generalized covariances, also provide a class of nonstationary models that are useful for representing the nonstationary solutions of stochastic partial differential equations such as those found in hydrogeology.

Integration of Multiparameter Information

In applications the greatest challenge is often to “integrate” (i.e., combine) information from various sources. To take a specific example, a petroleum geologist must integrate into a coherent geological model information from cores, cuttings, open-hole well logs, dip and azimuth computations, electrical and acoustic images, surface and borehole seismic, and well tests. The rule of the game is: “Don’t offend anything that is already known.” Geostatistics and multivariate statistical techniques provide the framework and the tools to build a consistent model.

The technique of cokriging generalizes kriging to multivariate interpolation. It exploits the relationships between the different variables as well as the spatial structure of the data. An important particular case is the use of slope information in conjunction with the variable itself. When the implementation of cokriging requires a statistical inference beyond reach, shortcuts can be used. The most popular ones are the external drift method and collocated cokriging, which use a densely sampled auxiliary field to compensate for the scarcity of observations of the variable of interest.
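As a sketch of the kind of simplification involved, collocated cokriging is often combined with a Markov-type screening hypothesis (the "MM1" model of Almeida and Journel), under which the cross-covariance is proportional to the covariance of the primary variable; the notation below is illustrative.

```latex
% Markov-type screening hypothesis: the collocated secondary value
% screens more distant secondary data, so that
C_{ZY}(h) \;=\; \frac{C_{ZY}(0)}{C_Z(0)}\, C_Z(h).

% Inference then reduces to the covariance C_Z(h), the two variances,
% and the collocated correlation coefficient
\rho \;=\; \frac{C_{ZY}(0)}{\sqrt{C_Z(0)\, C_Y(0)}} .
```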

Spatiotemporal Problems

Aside from geological processes, which are so slow that time is not a factor, most phenomena have both a space and a time component. Typical examples are meteorological variables or pollutant concentrations, measured at different time points and space locations. We may wish to predict these variables at a new location at a future time.

One possibility is to perform kriging in a space–time domain using spatiotemporal covariance models. New classes of nonseparable stationary covariance functions that allow space–time interaction have been developed in recent years.
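One well-known construction of such nonseparable models is Gneiting's (2002) class; the form below is a sketch with illustrative parameter names, where \(\varphi\) is completely monotone and \(\psi\) is a positive function with completely monotone derivative.

```latex
% General construction in d spatial dimensions:
C(h, u) \;=\; \frac{\sigma^2}{\psi(|u|^2)^{d/2}}\,
              \varphi\!\left(\frac{\lVert h \rVert^2}{\psi(|u|^2)}\right).

% Example: \varphi(t) = e^{-c t^\gamma}, \ \psi(t) = (1 + a t^\alpha)^\beta
% yields the nonseparable space--time covariance
C(h, u) \;=\; \frac{\sigma^2}{(1 + a|u|^{2\alpha})^{\beta d/2}}
  \exp\!\left(-\,\frac{c \lVert h \rVert^{2\gamma}}
                      {(1 + a|u|^{2\alpha})^{\beta\gamma}}\right).
```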

Alternatively, if a physical model is available to describe the time evolution of the system, the techniques of data assimilation can be used—and in particular the ensemble Kalman filter (EnKF), which has received much attention.
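A minimal sketch of the EnKF analysis step with perturbed observations, in Python with NumPy; the function name and the toy one-dimensional setup are assumptions for illustration, not a reference implementation.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One EnKF analysis step with perturbed observations.

    X : (n, N) ensemble of state vectors (one member per column)
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (N - 1)                            # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    # one perturbed observation per member, drawn from N(y, R)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)                       # updated ensemble

# Toy example: a scalar state observed directly
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (1, 500))                   # prior: mean ~0, var ~1
Xa = enkf_update(X, np.array([2.0]), np.eye(1), np.array([[0.5]]), rng)
```

The updated ensemble mean moves toward the observation and the ensemble spread shrinks, as expected from the Kalman update.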

Indicator Estimation

We are interested in the event: “at a given point x the value z(x) exceeds the level z0.” We can think of z0 as a pollution alert threshold, or a cutoff grade in mining. The event can be represented by a binary function, the indicator function, valued 1 if the event is true, and zero if it is false, whose expected value is the probability of the event “z(x) exceeds z0.” Note that the indicator is a nonlinear function of the observation z(x). The mean value of the indicator over a domain V represents the fraction of V where the threshold is exceeded. When we vary the threshold, it appears that indicator estimation amounts to the determination of the histogram or the cumulative distribution function of the values of z(x) within V. The interesting application is to estimate this locally over a subdomain ν to obtain a local distribution function reflecting the values observed in the vicinity of ν. Disjunctive kriging, a nonlinear technique based on a careful modeling of bivariate distributions, provides a solution to this difficult problem.
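The indicator transform itself is a one-liner; the small sketch below (function name and toy values are illustrative) shows how its mean over a set of points recovers the fraction above each threshold, i.e. one minus the empirical distribution function.

```python
import numpy as np

def indicator(z, z0):
    """Indicator transform: 1 where z exceeds the threshold z0, else 0."""
    return (np.asarray(z) > z0).astype(float)

# Toy values of z(x) sampled over a domain V
z = np.array([0.2, 1.5, 3.0, 0.8, 2.2])

# Averaging the indicator over V gives the fraction of V above z0,
# i.e. 1 - F(z0), where F is the empirical distribution function.
frac_above = {z0: indicator(z, z0).mean() for z0 in (1.0, 2.0)}
```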

Selection and Change-of-Support Problems

The support of a regionalized variable is the averaging volume over which the data are measured or defined. Typically, there are point values and block values, or high-resolution and low-resolution measurements. As the size of the support changes, the histogram of the variable is deformed, but there is no straightforward relationship between the distributions of values measured over two different supports, except under very stringent Gaussian assumptions. For example, ore sample grades and block grades cannot both be exactly lognormally distributed, although both may be approximately so. Predicting the change of distribution when passing from one support to another, generally from point to block, is the change-of-support problem. Specific isofactorial models are proposed to solve this problem.
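Two classical building blocks can be sketched as follows (notation illustrative): the variance of a block average follows from the point covariance, and Krige's additivity relation splits the point dispersion variance into within-block and between-block terms.

```latex
% Block average and its variance, from the point covariance C:
Z(v) = \frac{1}{|v|} \int_v Z(x)\, dx,
\qquad
\mathrm{Var}\, Z(v) = \overline{C}(v,v)
  = \frac{1}{|v|^2} \int_v\!\!\int_v C(x - y)\, dx\, dy.

% Krige's additivity relation: the dispersion variance of points in a
% domain V splits into within-block and between-block terms
D^2(0 \mid V) \;=\; D^2(0 \mid v) \;+\; D^2(v \mid V).
```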

Change of support is central in inventory estimation problems in which the resource is subject to selection. Historically, the most important application has been in mining, where the decision to process the ore or send it to waste, depending on its mineral content, is made at the level of a block, say a cube with a 10-m side, rather than at the level of a teaspoon. The recoverable resources then depend on the local distributions of block values. Modeling the effect of selection may be a useful concept in other applications, such as the delineation of producing beds in a petroleum reservoir, the remediation of contaminated areas, or the definition of pollution alert thresholds.


Conditional Simulations

Kriging, like any reasonable interpolation method, has a smoothing effect. It does not reproduce spatial heterogeneity. In the world of images we would say that it is not true to the “texture” of the image. This can cause significant biases when nonlinear effects are involved. To take a simple example, compare the length of an interpolated curve with the length of the true curve: it is much shorter, and the true curve may not even have a finite length! Similarly, for the same average permeability, a porous medium has a very different flow behavior depending on whether it is homogeneous or heterogeneous.

This is where the stochastic nature of the model really comes into play. The formalism of random functions involves a family of alternative realizations similar in their spatial variability to the reality observed but different otherwise. By simulation techniques it is possible to generate some of these “virtual realities” and produce pictures that are true to the fluctuations of the phenomenon. A further step toward realism is to constrain the realizations to pass through the observed data, thus producing conditional simulations. By generating several of these digital models, we are able to materialize spatial uncertainty. Then if we are interested in some quantity that depends on the spatial field in a complex manner, such as modeling fluid flow in a porous medium, we can compute a result for each simulation and study the statistical distribution of the results. A typical application is the determination of scaling laws.
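The classical "conditioning by kriging" construction can be sketched in a few lines of Python with NumPy: simulate an unconditional Gaussian realization, then correct it by the kriged difference between data and simulated values at the data points. The one-dimensional setup, simple kriging with known zero mean, and all names are illustrative assumptions.

```python
import numpy as np

def cov(h, sill=1.0, scale=10.0):
    """Exponential covariance model."""
    return sill * np.exp(-np.abs(h) / scale)

def simple_krige(xd, zd, xt, mean=0.0):
    """Simple kriging (known mean) of data (xd, zd) onto targets xt."""
    C = cov(xd[:, None] - xd[None, :])
    c0 = cov(xt[:, None] - xd[None, :])
    return mean + (c0 @ np.linalg.inv(C)) @ (zd - mean)

def conditional_sim(xd, zd, xt, rng):
    """Conditioning by kriging: Z_cs = Z_k + (Z_s - Z_sk)."""
    x = np.concatenate([xd, xt])
    C = cov(x[:, None] - x[None, :]) + 1e-10 * np.eye(len(x))  # tiny jitter
    zs = np.linalg.cholesky(C) @ rng.standard_normal(len(x))   # unconditional
    zk = simple_krige(xd, zd, xt)               # kriging of the real data
    zsk = simple_krige(xd, zs[:len(xd)], xt)    # kriging of simulated values
    return zk + (zs[len(xd):] - zsk)            # conditional realization

rng = np.random.default_rng(1)
xd = np.array([0.0, 10.0])                      # data locations
zd = np.array([1.0, -1.0])                      # observed values
sim = conditional_sim(xd, zd, np.array([1e-6, 5.0]), rng)
```

Each call produces a different realization, but all of them honor the data: near a data location the simulated value reproduces the observation.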

Iterative methods based on Markov chain Monte Carlo enable conditioning non-Gaussian random functions and constraining simulations on auxiliary information such as seismic data and production data in reservoir engineering. These methods provide an essential contribution to stochastic inverse modeling.

Problems Omitted

A wide class of spatial problems concerns the processing and analysis of images. This is a world by itself, and we will not enter it, even though there will be occasional points of contact. An image analysis approach very much in line with geostatistics, and developed in fact by the same group of researchers, is Mathematical Morphology [see Serra (1982)]. Variables regionalized in time only will also be left out. Even though geostatistical methods apply, the types of problems considered are often of an electrical engineering nature and are better handled by digital signal processing techniques.

Finally, the study of point patterns (e.g., the distribution of trees in a forest) and the modeling of data on a lattice or on a graph are intentionally omitted from this book. The reader is referred to Cressie (1991) for a comprehensive overview of the first two approaches, to Guyon (1995) for a presentation of Markov fields on a lattice, and to Jordan (1998, 2004) for graphical models.


Geostatistical methods are goal-oriented. Their purpose is not to build an explanatory model of the world but to solve specific problems using the minimal prerequisites required, following the principle of parsimony. They are descriptive rather than interpretive models. We illustrate this important point with an example borrowed from contour mapping.

Mathematically inclined people—including the present authors—have long thought that computer mapping was the definitive, clean, and objective replacement of hand contouring. Hand-drawn maps are subjective; they can be biased consciously or unconsciously. Even when drafted honestly, they seem suspect: If two competent and experienced interpreters can produce different maps from the same data, why should one believe any of them? And of course there is always the possibility of a gross clerical error such as overlooking or misreading some data points. By contrast, computer maps have all the attributes of respectability: They don’t make clerical mistakes, they are “objective,” reproducible, and fast. Yet this comparison misses an important point: It neglects the semantic content of a map. For a geologist, or a meteorologist, a map is far more than a set of contours: It represents the state of an interpretation. It reflects the attempt of its author to build a coherent picture of the geological object, or the meteorological situation, of interest.

This is demonstrated in a striking manner by a synthetic sedimentological example constructed by O. Serra, a pioneer in the geological interpretation of well logs. He considered a regular array of wells (the favorable case) and assigned them sand thickness values, without any special design, in fact using only the round numbers 0, 10, 20, 30. From this data set he derived four very different isopach maps. Figure 0.1a pictures the sand body as a meandering channel; Figure 0.1b as an infill channel with an abrupt bank to the east; Figure 0.1c as a transgressive sand filling paleo-valleys; and Figure 0.1d as a barrier bar eroded by a tidal channel. Each of these maps reflects a different depositional environment model, which was on the interpreter’s mind at the time and guided his hand.

FIGURE 0.1 Four interpretations of the same synthetic data (hand-drawn isopach maps): (a) meandering channel; (b) infill channel; (c) transgressive sand filling paleo-valleys; (d) barrier bar eroded by a tidal channel. (From O. Serra, personal communication.)

Geostatistical models have no such explanatory goals. They model mathematical objects, a two-dimensional isopach surface, for example, not geological objects. The complex mental process by which a geologist draws one of the above maps is better described as pattern recognition than as interpolation. Compared with this complexity, interpolation algorithms look pathetically crude, and this is why geological maps are still drawn by hand. To the geostatistician’s comfort, the very fact that widely different interpretations are consistent with the same data makes each of them questionable. For one brilliant interpretation (the correct one), how many “geofantasies” are produced?

Another way to qualify description versus interpretation is to oppose data-driven and model-driven techniques. Traditionally, geostatistics has been data-driven rather than model-driven: It captures the main structural features from the data, and knowledge of the subject matter does not have much impact beyond the selection of a variogram model. Therefore it cannot discriminate between several plausible interpretations. We can, however, be less demanding and simply require geostatistics to take external knowledge into account, and in particular an interpretation proposed by a physicist or a geologist. The current trend in geostatistics is precisely an attempt to include physical equations and model-specific constraints.

Hydrogeologists, who sought ways of introducing spatial randomness in aquifer models, pioneered the research to incorporate physical equations into geostatistical models. Petroleum applications, where data are initially scarce, have motivated the development of object-based models. For example, channel sands are simulated directly as sinusoidal strips with an elliptic or rectangular cross section. This is still crude, but the goal is clear: import geological concepts into the mapping or simulation processes. We can dream of a system that would find “the best” meandering channel consistent with a set of observations. Stochastic process-based models work in this direction.

To summarize, the essence of the geostatistical approach is to (a) recognize the inherent variability of natural spatial phenomena and the fragmentary character of our data and (b) incorporate these notions in a model of a stochastic nature. It identifies the structural relationships in the data and uses them to solve specific problems. It does not attempt any physical or genetic interpretations but uses them as much as possible when they are available.
