  • Publication
    Open Access
    Missing data simulation inside flow rate time-series using multiple-point statistics
    The direct sampling (DS) multiple-point statistical technique is proposed as a non-parametric missing data simulator for hydrological flow rate time-series. The algorithm makes use of the patterns contained inside a training data set to reproduce the complexity of the missing data. The proposed setup is tested on the reconstruction of a flow rate time-series under several missing data scenarios and compared against an ARMAX-type time-series model. The results show that DS generates more realistic simulations than ARMAX and better recovers the statistical content of the missing data. The predictive power of both techniques increases markedly when a correlated flow rate time-series is used, and DS can also use an incomplete auxiliary time-series with comparable predictive power. This makes the technique a handy simulation tool for practitioners dealing with incomplete data sets.
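    A minimal sketch of the gap-filling idea is given below, under simplifying assumptions (a single 1-D series, a fully informed training series, a mean-absolute-difference pattern distance). The function and parameter names (gap_fill_ds, n_neighbors, threshold, scan_fraction) are illustrative and do not correspond to the authors' implementation.

    import numpy as np

    def gap_fill_ds(series, training, n_neighbors=5, threshold=0.05,
                    scan_fraction=0.5, rng=None):
        """Fill NaN gaps in `series` by pasting values from `training`
        wherever a similar local pattern (data event) is found."""
        rng = np.random.default_rng(rng)
        out = series.copy()
        missing = np.flatnonzero(np.isnan(out))
        rng.shuffle(missing)                       # random simulation path
        for t in missing:
            # Data event: the nearest informed values and their lags relative to t.
            informed = np.flatnonzero(~np.isnan(out))
            if informed.size == 0:                 # nothing known yet: draw at random
                out[t] = rng.choice(training)
                continue
            nearest = informed[np.argsort(np.abs(informed - t))[:n_neighbors]]
            lags, values = nearest - t, out[nearest]
            # Scan a random fraction of the training series for a matching pattern.
            candidates = rng.permutation(len(training))
            n_scan = max(1, int(scan_fraction * len(training)))
            best_pos, best_dist = None, np.inf
            for pos in candidates[:n_scan]:
                idx = pos + lags
                if idx.min() < 0 or idx.max() >= len(training):
                    continue                       # pattern falls outside the training series
                dist = np.mean(np.abs(training[idx] - values))
                if dist < best_dist:
                    best_pos, best_dist = pos, dist
                if dist <= threshold:              # acceptance threshold reached: stop scanning
                    break
            out[t] = training[best_pos] if best_pos is not None else rng.choice(training)
        return out

    Accepting the first pattern below the threshold, rather than searching for the global best match, is what keeps the scan cheap while still borrowing realistic local behavior from the training series.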
  • Publication
    Open Access
    Conditioning multiple-point statistics simulations to block data
    Multiple-point statistics (MPS) makes it possible to generate random fields that reproduce spatial statistics derived from a training image. MPS methods consist of borrowing patterns from the training set. Therefore, the simulation domain is assumed to be at the same resolution as the conceptual model, although geometrical deformations can be handled by such techniques. Whereas punctual conditioning data corresponding to the scale of the grid node can be easily integrated, accounting for data available at larger scales is challenging. In this paper, we propose an extension of MPS able to deal with block data, i.e. target mean values over subsets of the simulation domain. Our extension is based on the direct sampling algorithm and consists in adding a criterion for the acceptance of the candidate node scanned in the training image, so as to constrain the simulation to the block data. Likelihood ratios are used to compare the averages of the simulated variable taken on the informed nodes in the blocks with the target mean values. Moreover, the block data may overlap and their support can be of any shape and size. Illustrative examples show the potential of the presented algorithm for practical applications.
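    A simplified sketch of the additional acceptance step for block data is given below. A plain tolerance test on the running block mean stands in for the likelihood ratios used in the paper, and all names (accept_candidate, tol) are illustrative rather than the authors' code.

    import numpy as np

    def accept_candidate(sim, node, candidate, blocks, targets, tol=0.1):
        """Extra acceptance check: keep `candidate` for `node` only if every
        block containing the node stays close to its target mean.
        sim: 1-D array of simulated values, np.nan at uninformed nodes.
        blocks: list of index arrays (possibly overlapping); targets: one mean per block."""
        for block, target in zip(blocks, targets):
            if node not in block:
                continue
            values = sim[block]
            informed = values[~np.isnan(values)]
            # Block mean over informed nodes if the candidate were accepted.
            new_mean = (informed.sum() + candidate) / (informed.size + 1)
            if abs(new_mean - target) > tol:
                return False        # candidate would pull this block off its target
        return True                 # consistent with every block the node belongs to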
  • Publication
    Open Access
    A practical guide to performing multiple-point statistical simulations with the Direct Sampling algorithm
    (2013-3)
    Meerschman, Eef; …; Van Meirvenne, Marc; …
    The Direct Sampling (DS) algorithm is a recently developed multiple-point statistical simulation technique. It directly scans the training image (TI) for a given data event instead of storing the training probability values in a catalogue prior to simulation. By using distances between the given data events and the TI patterns, DS allows the simulation of categorical, continuous and multivariate problems. Benefiting from the wide spectrum of potential applications of DS requires an understanding of the user-defined input parameters. Therefore, we list the most important parameters and assess their impact on the generated simulations. Real-case TIs are used, including an image of ice-wedge polygons, a marble slice and snow crystals, all three as continuous and categorical images. We also use a 3D categorical TI representing a block of concrete to demonstrate the capacity of DS to generate 3D simulations. First, a quantitative sensitivity analysis is conducted on the three parameters balancing simulation quality and CPU time: the acceptance threshold t, the fraction of TI to scan f and the number of neighbors n. In addition to a visual inspection of the generated simulations, the performance is analyzed in terms of speed of calculation and quality of pattern reproduction. Whereas decreasing the CPU time by adjusting t and n comes at the expense of simulation quality, reducing the scanned fraction of the TI allows substantial computational gains without degrading the quality, as long as the TI contains enough reproducible patterns. We also illustrate the quality improvement resulting from post-processing and the potential of DS to simulate bivariate problems and to honor conditioning data. We provide a comprehensive guide to performing multiple-point statistical simulations with the DS algorithm and give recommendations on how to set the input parameters appropriately.
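    The sketch below indicates, under simplifying assumptions, how the three parameters could enter a categorical 2-D scan: n sets the size of the data event, t is the acceptance threshold on the fraction of mismatching nodes, and f is the fraction of TI cells visited. It is an illustration only, not the reference DS code.

    import numpy as np

    def find_value(ti, sim, cell, n=20, t=0.1, f=0.3, rng=None):
        """Return a value for `cell` (row, col) of the partially informed grid
        `sim` (np.nan = unknown) by scanning the categorical training image `ti`."""
        rng = np.random.default_rng(rng)
        rows, cols = np.nonzero(~np.isnan(sim))
        if rows.size == 0:                              # empty grid: draw a random TI cell
            return ti[rng.integers(ti.shape[0]), rng.integers(ti.shape[1])]
        # Data event: the n informed cells closest to `cell`, stored as offsets.
        d2 = (rows - cell[0]) ** 2 + (cols - cell[1]) ** 2
        keep = np.argsort(d2)[:n]
        offsets = np.stack([rows[keep] - cell[0], cols[keep] - cell[1]], axis=1)
        values = sim[rows[keep], cols[keep]]
        # Visit a random fraction f of TI cells; accept the first pattern whose
        # mismatch fraction is <= t, otherwise keep the best candidate seen.
        visited = rng.permutation(ti.size)[: max(1, int(f * ti.size))]
        best_val, best_dist = None, np.inf
        for idx in visited:
            r, c = divmod(int(idx), ti.shape[1])
            rr, cc = r + offsets[:, 0], c + offsets[:, 1]
            if rr.min() < 0 or cc.min() < 0 or rr.max() >= ti.shape[0] or cc.max() >= ti.shape[1]:
                continue
            dist = np.mean(ti[rr, cc] != values)        # categorical mismatch fraction
            if dist < best_dist:
                best_val, best_dist = ti[r, c], dist
            if dist <= t:                               # threshold t reached: accept
                break
        return best_val if best_val is not None else ti[rng.integers(ti.shape[0]), rng.integers(ti.shape[1])]

    Lowering t or raising n tightens the pattern match at the cost of longer scans, while lowering f bounds the scan length directly, which mirrors the trade-offs analyzed in the paper.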
  • Publication
    Open Access
    Conditioning Facies Simulations with Connectivity Data
    When characterizing and simulating underground reservoirs for flow simulations, one of the key characteristics that needs to be reproduced accurately is connectivity. More precisely, field observations frequently allow the identification of specific points in space that are connected. For example, in hydrogeology, tracer tests are frequently conducted that show which springs are connected to which sink-hole. Similarly, well tests often provide connectivity information in a petroleum reservoir. To account for this type of information, we propose a new algorithm to condition stochastic simulations of lithofacies to connectivity information. The algorithm is based on the multiple-point philosophy but does not necessarily imply the use of multiple-point simulation. The challenge lies in generating realizations, for example of a binary medium, such that the connectivity information is honored as well as any prior structural information (e.g. as modeled through a training image). The algorithm consists of using a training image to build a set of replicates of connected paths that are consistent with the prior model. This is done by scanning the training image to find point locations that satisfy the constraints. Any path (a string of connected cells) between these points is therefore consistent with the prior model. For each simulation, one path from this set is sampled to generate hard conditioning data prior to running the simulation algorithm. The paper presents the algorithm in detail, along with examples of two-dimensional and three-dimensional applications with multiple-point simulations.
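    Below is a hedged sketch of the path-extraction step: it searches a binary training image for a pair of cells separated by the observed lag that are connected through facies 1, and returns the connecting path, shifted onto the observation frame, as hard conditioning data. The function names and the 4-connected breadth-first search are illustrative choices, not the authors' code.

    import numpy as np
    from collections import deque

    def connected_path(ti, start, lag):
        """Breadth-first search inside facies 1 from `start`; return the cell path
        to `start + lag`, or None if the two cells are not connected."""
        target = (start[0] + lag[0], start[1] + lag[1])
        if ti[start] != 1 or ti[target] != 1:
            return None
        parent, queue = {start: None}, deque([start])
        while queue:
            cell = queue.popleft()
            if cell == target:                      # rebuild the path from the parents
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            r, c = cell
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nb[0] < ti.shape[0] and 0 <= nb[1] < ti.shape[1]
                        and ti[nb] == 1 and nb not in parent):
                    parent[nb] = cell
                    queue.append(nb)
        return None

    def sample_hard_data(ti, obs_a, obs_b, rng=None):
        """Draw one replicate: scan TI locations with the observed lag in random
        order, keep the first connected pair, and shift its path onto obs_a."""
        rng = np.random.default_rng(rng)
        lag = (obs_b[0] - obs_a[0], obs_b[1] - obs_a[1])
        rows = range(max(0, -lag[0]), ti.shape[0] - max(0, lag[0]))
        cols = range(max(0, -lag[1]), ti.shape[1] - max(0, lag[1]))
        starts = [(r, c) for r in rows for c in cols]
        for i in rng.permutation(len(starts)):
            path = connected_path(ti, starts[i], lag)
            if path is not None:
                # Hard data: facies 1 at every path cell, in the observation frame.
                return [(r - starts[i][0] + obs_a[0], c - starts[i][1] + obs_a[1], 1)
                        for r, c in path]
        return None                                 # no consistent pair found in the TI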
  • Publication
    Open Access
    An Improved Parallel Multiple-Point Algorithm Using a List Approach
    Among the techniques used to simulate categorical variables, multiple-point statistics is becoming very popular because it allows the user to provide an explicit conceptual model via a training image. In classic implementations, the multiple-point statistics are inferred from the training image by storing all the observed patterns of a certain size in a tree structure. This type of algorithm has the advantage of being fast to apply, but it presents some critical limitations. In particular, a tree is extremely RAM-demanding. For three-dimensional problems with numerous facies, large templates cannot be used, and complex structures are then difficult to simulate. In this paper, we propose to replace the tree with a list. This structure requires much less RAM and has three main advantages. First, it allows for the use of larger templates. Second, because the list structure is parsimonious, it can be extended to include additional information; here, we show how this can be used to develop a new approach for dealing with non-stationary training images. Finally, an interesting aspect of the list is that it allows one to parallelize the part of the algorithm in which the conditional probability density function is computed. This is especially important for large problems that can be solved on clusters of PCs with distributed memory or on multicore machines with shared memory.
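    The toy sketch below illustrates the list idea: each entry stores a neighbor pattern for a fixed template together with facies counts for the central node, and the conditional probability density function is obtained by scanning the list and summing the counts of entries compatible with the (possibly incomplete) data event. The template, names and data layout are illustrative assumptions.

    import numpy as np
    from collections import defaultdict

    TEMPLATE = [(-1, 0), (1, 0), (0, -1), (0, 1)]       # toy 4-node template

    def build_list(ti, n_facies):
        """Build [(pattern, counts), ...] from a 2-D training image whose cells
        hold integer facies codes 0 .. n_facies-1."""
        counts = defaultdict(lambda: np.zeros(n_facies, dtype=np.int64))
        for r in range(1, ti.shape[0] - 1):
            for c in range(1, ti.shape[1] - 1):
                pattern = tuple(int(ti[r + dr, c + dc]) for dr, dc in TEMPLATE)
                counts[pattern][ti[r, c]] += 1          # count the central facies
        return list(counts.items())

    def conditional_pdf(pattern_list, data_event, n_facies):
        """data_event: tuple aligned with TEMPLATE, None where uninformed.
        Sum the counts of every entry that matches the informed nodes."""
        total = np.zeros(n_facies)
        for pattern, counts in pattern_list:            # independent entries: parallelizable loop
            if all(d is None or d == p for d, p in zip(data_event, pattern)):
                total += counts
        return total / total.sum() if total.sum() > 0 else np.full(n_facies, 1.0 / n_facies)

    Splitting pattern_list across processes and summing the partial count vectors reproduces, in miniature, the parallel computation of the conditional probability density function mentioned in the abstract.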