Showing items 1 - 3 of 3
  • Publication
    Open access
    Stochastic simulation of rainfall and climate variables using the direct sampling technique
    An accurate statistical representation of hydrological processes is of paramount importance for evaluating the uncertainty of present scenarios and making reliable predictions in a changing climate. A wealth of historical data has become available in recent decades, including a considerable amount of remote sensing imagery describing the spatio-temporal nature of climatic and hydrological processes. The statistics based on such data are quite robust and reliable. However, most stochastic simulation methods used to explore their variability rely on low-order statistics that can only represent heterogeneity up to a limited degree of complexity.
    In recent years, the stochastic hydrogeology group of the University of Neuchâtel has developed a multiple-point simulation method called Direct Sampling (DS). DS is a resampling technique that preserves complex data structures by generating data patterns similar to those found in the historical data set. Unlike other multiple-point methods, DS can simulate categorical or continuous variables, or a combination of both in a multivariate framework.
    In this thesis, the DS algorithm is adapted to the simulation of rainfall and climate variables in both time and space. The developed stochastic weather and climate generators simulate the target variable together with a series of auxiliary variables describing aspects of the complex statistical structure characterizing the simulated process (a minimal sketch of the underlying resampling loop is given after this listing). These methods are tested on real application cases, including the simulation of rainfall time series from different climates, the exploration of the variability of future climate change scenarios, the simulation of missing data in flow-rate time series, and the simulation of spatial rainfall fields at different scales. If a representative training data set is used, the proposed methodologies generate realistic simulations that preserve the statistical properties of the heterogeneity fairly well. Moreover, these techniques prove to be practical simulation tools, since they adapt to different data sets with minimal effort from the user. Although they leave ample room for improvement, the proposed simulation approaches show good potential for exploring the variability of complex hydrological processes without the need for a complex statistical model.
  • Publication
    Open access
    Analog-based meandering channel simulation
    (2014-01-10)
    Comunian, Alessandro; Irarrazaval, Inigo
  • Publication
    Open access
    Geological stochastic imaging for aquifer characterization
    (2009)
    Mariéthoz, Grégoire
    Accurately modeling the connectivity of geological structures is critical for flow and transport problems. Multiple-point simulation is one of the most advanced tools for producing realistic reservoir structures. It proceeds by considering data events (spatial arrangements of values) derived from a training image (TI). The usual approach consists in storing all the data events of the TI in a database, which is then used to compute conditional probabilities for the simulation. Instead, the Direct Sampling (DS) method proposed in this thesis samples the TI directly for a given data event. As soon as a data event in the TI matches the data event at the node to be simulated, the value at its central node is pasted directly into the simulation (a minimal sketch of this loop is given after this listing). Because it accommodates data events of varying geometry, multi-grids are not needed. The method can deal with categorical and continuous variables and can be extended to multivariate cases; it can therefore handle new classes of problems.
    Different adaptations of the DS are proposed. The first is aimed at reconstructing partially informed images or data sets. Instead of inferring data events from a TI, a training data set is used. If the density of measurements is high enough, significant non-parametric spatial statistics can be derived from the data, and the patterns found in those data are mimicked without model inference. Minimal assumptions are therefore made about the spatial structure of the reconstructed fields, and very limited parameterization is needed. The method gives good results for the reconstruction of complex 3D geometries from relatively small data sets. Another adaptation of the DS algorithm performs super-resolution of coarse images: DS is used to stochastically simulate structures at scales smaller than the measurement resolution, inferring them under a hypothesis of scale invariance on the spatial patterns found at the coarse scale. The approach is illustrated with examples from satellite imaging and digital photography.
    Parallelization is another important topic treated in this thesis. The size of the simulation grids used for numerical models has increased by many orders of magnitude in recent years. Efficient pixel-based geostatistical simulation algorithms exist, but for very large grids and complex spatial models the computational burden remains heavy. As cluster computers become widely available, parallel strategies are a natural step for increasing the usable grid size and the complexity of the models. These strategies must take advantage of the possibilities offered by machines with a large number of processors, on which the bottleneck is often the communication time between processors. This thesis presents a strategy that distributes grid nodes among all available processors while minimizing communication and latency times. It consists in centralizing the simulation on a master processor that calls the other, slave, processors as if they were functions, each simulating one node at a time. The key is to decouple the sending and receiving operations to avoid synchronization. Centralization also allows a conflict-management system ensuring that nodes simulated simultaneously do not interfere in terms of their neighborhoods (a sketch of this dispatching scheme is also given after this listing). The strategy is computationally efficient and versatile enough to be applicable to all random-path-based simulation methods.
In addition to the preceding topics, a new cosimulation algorithm is proposed for simulating a primary attribute using one or several secondary attributes known exhaustively over the domain. This problem is frequently encountered in surface and groundwater hydrology when a variable of interest is measured only at a discrete number of locations and a secondary variable is mapped by indirect techniques such as geophysics or remote sensing. In the proposed approach, the correlation between the two variables is modeled by a joint probability distribution function, and a technique to construct such relations using latent variables and physical laws is proposed for cases where field data are insufficient. The simulation algorithm proceeds sequentially. At each node of the grid, two conditional probability distribution functions (cpdfs) are inferred: the first in a classical way, from the neighboring data of the primary attribute and a model of its spatial variability; the second directly from the joint probability distribution function of the two attributes and the value of the secondary attribute at the location to be simulated. The two distribution functions are combined by probability aggregation to obtain the local cpdf from which a value is randomly drawn (a sketch of this aggregation step follows this listing). Various examples using synthetic and remote sensing data demonstrate that the method is more accurate than the classical collocated cosimulation technique when a complex relation links the two attributes.
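
Illustrative sketch for the first abstract (stochastic rainfall and climate generation with Direct Sampling): at each step the historical series is scanned for a window similar to the most recently simulated values, and the value that follows the matched window is pasted into the simulation. This is a minimal single-variable sketch in Python/NumPy; the function name, the parameters (n_neighbors, scan_fraction, threshold) and the acceptance rule are assumptions for illustration, and the auxiliary-variable and multivariate machinery of the thesis is omitted.

```python
# Minimal direct-sampling-style resampling of a time series (illustrative only).
import numpy as np

def ds_timeseries(training, n_steps, n_neighbors=5, scan_fraction=0.5,
                  threshold=0.05, rng=None):
    """Generate n_steps values whose local patterns mimic `training`."""
    rng = np.random.default_rng(rng)
    training = np.asarray(training, dtype=float)
    scale = (training.max() - training.min()) or 1.0

    # Start from a randomly chosen seed pattern taken from the training series.
    start = rng.integers(0, len(training) - n_neighbors)
    sim = list(training[start:start + n_neighbors])

    for _ in range(n_steps):
        pattern = np.array(sim[-n_neighbors:])            # conditioning data event
        candidates = rng.permutation(len(training) - n_neighbors)
        n_scan = max(1, int(scan_fraction * len(candidates)))

        best_idx, best_dist = None, np.inf
        for i in candidates[:n_scan]:
            window = training[i:i + n_neighbors]
            dist = np.mean(np.abs(window - pattern)) / scale
            if dist < best_dist:
                best_idx, best_dist = i, dist
            if dist <= threshold:                         # accept the first good match
                break
        # Paste the value that follows the matched pattern in the training set.
        sim.append(training[best_idx + n_neighbors])

    return np.array(sim[n_neighbors:])
```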
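
Sketch of the core Direct Sampling loop described in the third abstract: nodes are visited along a random path, the data event is built from already simulated neighbors (so its geometry varies), the training image is scanned directly until a compatible event is found, and the central value of that event is pasted. This is a minimal single-grid, categorical version; the distance measure, threshold, neighbor count and scan limit are illustrative assumptions, not the thesis implementation.

```python
# Minimal single-grid Direct Sampling sketch for a 2D categorical variable.
import numpy as np

def direct_sampling(ti, shape, n_neighbors=16, threshold=0.1,
                    max_scan=2000, rng=None):
    rng = np.random.default_rng(rng)
    sim = np.full(shape, np.nan)
    path = rng.permutation(shape[0] * shape[1])           # random simulation path

    for flat in path:
        r, c = divmod(flat, shape[1])
        # Data event: offsets and values of the closest already-informed nodes.
        informed = np.argwhere(~np.isnan(sim))
        if len(informed):
            d = np.hypot(informed[:, 0] - r, informed[:, 1] - c)
            nearest = informed[np.argsort(d)[:n_neighbors]]
            offsets = nearest - [r, c]
            values = sim[nearest[:, 0], nearest[:, 1]]
        else:
            offsets = np.empty((0, 2), dtype=int)
            values = np.empty(0)

        best_val, best_dist = None, np.inf
        for _ in range(max_scan):
            tr = rng.integers(0, ti.shape[0])
            tc = rng.integers(0, ti.shape[1])
            rr, cc = tr + offsets[:, 0], tc + offsets[:, 1]
            inside = (rr >= 0) & (rr < ti.shape[0]) & (cc >= 0) & (cc < ti.shape[1])
            if not inside.all():
                continue
            # Distance = fraction of mismatching categories in the data event.
            dist = np.mean(ti[rr, cc] != values) if len(values) else 0.0
            if dist < best_dist:
                best_val, best_dist = ti[tr, tc], dist
            if dist <= threshold:                         # compatible event: paste it
                break
        if best_val is None:                              # fallback if no event was in bounds
            best_val = ti[rng.integers(0, ti.shape[0]), rng.integers(0, ti.shape[1])]
        sim[r, c] = best_val

    return sim
```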
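
Sketch of the master/slave dispatching scheme described for parallelization: the master walks the random path, hands single nodes to workers, keeps submission and collection decoupled, and refuses to dispatch a node whose search neighborhood overlaps a node currently being simulated. Threads, the square neighborhood test and the `simulate_node` callback are stand-ins chosen for a self-contained example; the thesis targets distributed-memory machines where the slaves are separate processors.

```python
# Illustrative master/slave dispatch with neighborhood conflict management.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
import numpy as np

def parallel_random_path(shape, simulate_node, radius=10, n_workers=4, rng=None):
    rng = np.random.default_rng(rng)
    sim = np.full(shape, np.nan)
    path = [divmod(i, shape[1]) for i in rng.permutation(shape[0] * shape[1])]

    in_flight = {}                                        # future -> node being simulated
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for node in path:
            def conflicts():
                # A node conflicts if a running node lies inside its search neighborhood.
                return any(abs(r - node[0]) <= radius and abs(c - node[1]) <= radius
                           for r, c in in_flight.values())
            while conflicts() or len(in_flight) >= n_workers:
                done, _ = wait(in_flight, return_when=FIRST_COMPLETED)
                for fut in done:                          # collect results as they arrive
                    r, c = in_flight.pop(fut)
                    sim[r, c] = fut.result()
            # Sending is decoupled from receiving: submit and move on immediately.
            in_flight[pool.submit(simulate_node, sim, node)] = node

        for fut, (r, c) in in_flight.items():             # drain the remaining workers
            sim[r, c] = fut.result()
    return sim
```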
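
Sketch of the local step of the cosimulation algorithm in the last paragraph: two conditional distributions of the primary attribute, one inferred from the spatial model and neighboring data, one read from the joint pdf at the collocated secondary value, are merged by a probability-aggregation rule and a value is drawn from the result. The abstract does not specify which aggregation rule is used; log-linear pooling with equal weights is assumed here purely for illustration, on discretized distributions.

```python
# Combine two discretized conditional pdfs by (assumed) log-linear pooling and draw a value.
import numpy as np

def aggregate_and_draw(z_bins, pdf_spatial, pdf_secondary,
                       w_spatial=1.0, w_secondary=1.0, rng=None):
    """z_bins: candidate values of the primary attribute;
    pdf_spatial: cpdf from neighboring primary data and the spatial model;
    pdf_secondary: cpdf read from the joint pdf at the collocated secondary value."""
    rng = np.random.default_rng(rng)
    eps = 1e-12                                           # avoid log(0)

    # Log-linear pooling: weighted geometric mean of the two densities.
    log_mix = (w_spatial * np.log(pdf_spatial + eps)
               + w_secondary * np.log(pdf_secondary + eps))
    pooled = np.exp(log_mix - log_mix.max())              # stabilize before normalizing
    pooled /= pooled.sum()

    return rng.choice(z_bins, p=pooled)

# Example with two synthetic Gaussian-shaped cpdfs on a value grid.
z = np.linspace(0.0, 10.0, 101)
p1 = np.exp(-0.5 * ((z - 4.0) / 1.5) ** 2)                # e.g. kriging-based cpdf
p2 = np.exp(-0.5 * ((z - 6.0) / 1.0) ** 2)                # cpdf given the secondary value
p1 /= p1.sum(); p2 /= p2.sum()
sample = aggregate_and_draw(z, p1, p2)
```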