An Improved Parallel Multiple-Point Algorithm Using a List Approach
Publication date
2011-4
In
Mathematical Geosciences
Vol.
43
No.
3
From page
305
To page
328
Abstract
Among the techniques used to simulate categorical variables, multiple-point statistics is becoming very popular because it allows the user to provide an explicit conceptual model via a training image. In classic implementations, the multiple-point statistics are inferred from the training image by storing all the observed patterns of a certain size in a tree structure. This type of algorithm has the advantage of being fast to apply, but it presents some critical limitations. In particular, a tree is extremely RAM-demanding. For three-dimensional problems with numerous facies, large templates therefore cannot be used, and complex structures are difficult to simulate.
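The following short Python sketch is not part of the original record and is not the paper's code; it only illustrates the kind of tree-based pattern storage the abstract describes. The training image, the template offsets, and the name build_pattern_tree are assumptions made for the example. Each tree level corresponds to one template node, so in the worst case the number of nodes grows with the number of facies raised to the template size, which is the RAM limitation mentioned above.

import numpy as np

def build_pattern_tree(ti, template):
    """Illustrative sketch only (not the paper's implementation).
    Scan a 2-D training image `ti` (array of facies codes) and store every
    pattern defined by the `template` offsets in a nested-dict "tree": one
    branch level per template node, and at the leaf the counts of the facies
    observed at the pattern centre."""
    nx, ny = ti.shape
    tree = {}
    for i in range(nx):
        for j in range(ny):
            node, inside = tree, True
            for (di, dj) in template:
                pi, pj = i + di, j + dj
                if not (0 <= pi < nx and 0 <= pj < ny):
                    inside = False       # skip patterns that fall outside the image
                    break
                node = node.setdefault(int(ti[pi, pj]), {})  # one tree level per offset
            if inside:
                counts = node.setdefault("counts", {})
                c = int(ti[i, j])        # facies at the centre of the pattern
                counts[c] = counts.get(c, 0) + 1
    return tree

# Tiny illustrative example: a binary training image and a 4-neighbour template.
ti = np.array([[0, 0, 1, 1],
               [0, 1, 1, 0],
               [1, 1, 0, 0],
               [1, 0, 0, 1]])
template = [(-1, 0), (1, 0), (0, -1), (0, 1)]
tree = build_pattern_tree(ti, template)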
In this paper, we propose to replace the tree with a list. This structure requires much less RAM and has three main advantages. First, it allows the use of larger templates. Second, because the list structure is parsimonious, it can be extended to include additional information; here, we show how this can be used to develop a new approach for dealing with non-stationary training images. Finally, an interesting aspect of the list is that it allows one to parallelize the part of the algorithm in which the conditional probability density function is computed. This is especially important for large problems, which can be solved on clusters of PCs with distributed memory or on multicore machines with shared memory.
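Again as an illustration only, and not the authors' implementation (which this abstract does not detail), the Python sketch below shows how a flat list of (pattern, centre-facies counts) pairs can replace the tree, and how the scan needed for the conditional probability density function can be split across workers and reduced. The names build_pattern_list, cpdf and the UNKNOWN marker are hypothetical.

from collections import Counter
from concurrent.futures import ProcessPoolExecutor

UNKNOWN = -1  # hypothetical marker for template nodes not yet informed

def build_pattern_list(ti, template):
    """Store each distinct data event once as (pattern tuple, centre-facies
    counts). Only patterns actually observed in the training image are kept,
    which is why the list needs much less RAM than the tree."""
    nx, ny = ti.shape
    store = {}
    for i in range(nx):
        for j in range(ny):
            pat, inside = [], True
            for (di, dj) in template:
                pi, pj = i + di, j + dj
                if not (0 <= pi < nx and 0 <= pj < ny):
                    inside = False
                    break
                pat.append(int(ti[pi, pj]))
            if inside:
                store.setdefault(tuple(pat), Counter())[int(ti[i, j])] += 1
    return list(store.items())

def _partial_counts(chunk, data_event):
    """Scan one chunk of the list and accumulate centre-facies counts over the
    stored patterns compatible with the partially informed data event."""
    acc = Counter()
    for pat, counts in chunk:
        if all(d == UNKNOWN or d == p for d, p in zip(data_event, pat)):
            acc.update(counts)
    return acc

def cpdf(pattern_list, data_event, n_workers=4):
    """Parallel estimate of the conditional pdf of the centre facies:
    split the list, scan the chunks concurrently, reduce the partial counts."""
    chunks = [pattern_list[k::n_workers] for k in range(n_workers)]
    total = Counter()
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for part in pool.map(_partial_counts, chunks, [data_event] * n_workers):
            total.update(part)
    n = sum(total.values())
    return {f: c / n for f, c in total.items()} if n else {}

Because the chunks are scanned independently and only small count vectors are merged at the end, the same pattern maps naturally to distributed-memory clusters (one chunk per node) or to shared-memory multicore machines. When run as a script on platforms that spawn worker processes, the call to cpdf should be placed under an if __name__ == "__main__": guard.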
Identifiers
Publication type
journal article