11

Continuous reservoir model updating using an ensemble Kalman filter with a streamline-based covariance localization

Arroyo Negrete, Elkin Rafael 25 April 2007 (has links)
This work presents a new approach that combines the comprehensive capabilities of the ensemble Kalman filter (EnKF) with flow path information from streamlines to eliminate or reduce some of the problems and limitations of using the EnKF for history matching reservoir models. The recent use of the EnKF for data assimilation and for assessing uncertainties in future forecasts in reservoir engineering appears promising. The EnKF provides an efficient way to incorporate any type of production data or time-lapse seismic information. However, its use in history matching comes with its share of challenges and concerns. Overshooting of parameters leading to loss of geologic realism, a possible increase in the material balance errors of the updated phase(s), and limitations associated with non-Gaussian permeability distributions are some of the most critical problems of the EnKF. Using a larger ensemble may mitigate some of these problems but is prohibitively expensive in practice. We present a streamline-based conditioning technique that can be implemented with the EnKF to eliminate these problems or reduce their magnitude, allowing a smaller ensemble to be used and thereby leading to significant time savings in field-scale implementation. Our approach involves no extra computational cost and is easy to implement. Additionally, the final history-matched model tends to preserve most of the geological features of the initial geologic model. An overview of the procedure is provided that enables this approach to be incorporated into current EnKF implementations. Our procedure uses the streamline path information to condition the covariance matrix in the Kalman update. We demonstrate the power and utility of our approach with synthetic examples and a field case.
Our results show that with the conditioning technique presented in this thesis, the overshooting/undershooting problems disappear and the limitations of working with non-Gaussian distributions are reduced. Finally, an analysis of the scalability of a parallel implementation of our computer code is given.
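The abstract gives no code, but the conditioned Kalman update it describes can be sketched in a few lines. This is a minimal illustration, assuming a linear observation operator `H` and a generic elementwise localization matrix `mask` standing in for the streamline-derived conditioning of the covariance; all names and shapes are hypothetical, not the thesis implementation.

```python
import numpy as np

def localized_enkf_update(X, d_obs, H, R, mask, rng=None):
    """One EnKF analysis step with elementwise covariance localization.

    X      : (n_state, n_ens) ensemble of state vectors
    d_obs  : (n_obs,) observed data
    H      : (n_obs, n_state) linear observation operator
    R      : (n_obs, n_obs) observation-error covariance
    mask   : (n_state, n_obs) localization weights in [0, 1]
             (in the thesis derived from streamline paths; generic here)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_state, n_ens = X.shape
    # Ensemble anomalies about the ensemble mean
    A = X - X.mean(axis=1, keepdims=True)
    HA = H @ A
    # Sample covariances between state and predicted data
    P_xd = (A @ HA.T) / (n_ens - 1)
    P_dd = (HA @ HA.T) / (n_ens - 1)
    # Schur (elementwise) product suppresses spurious long-range correlations
    K = (mask * P_xd) @ np.linalg.inv(P_dd + R)
    # Perturbed-observations update of every ensemble member
    D = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, size=n_ens).T
    return X + K @ (D - H @ X)
```

With `mask` set to all ones this reduces to the standard EnKF update; a streamline-based mask would zero out state-observation pairs not connected by flow paths.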
12

Location and Relocation of Seismic Sources

Li, Ka Lok January 2017 (has links)
This dissertation is a comprehensive summary of four papers on the development and application of new strategies for locating tremor and relocating events in earthquake catalogs. In the first paper, two new strategies for relocating events in a catalog are introduced. The seismicity pattern of an earthquake catalog is often used to delineate seismically active faults, but the delineation is often hindered by the diffuseness of earthquake locations in the catalog. To reduce the diffuseness and simplify the seismicity pattern, a relocation method and a collapsing method are developed and applied. The relocation method uses the catalog event density as an a priori constraint for relocation in a Bayesian inversion; the catalog event density is expressed as the combined probability distribution of all events in the catalog. The collapsing method uses the same catalog density as an attractor for focusing the seismicity in an iterative scheme. These two strategies are applied to an aftershock sequence following a pair of earthquakes that occurred in southwest Iceland in 2008. The seismicity pattern is simplified by application of the methods, and the faults of the mainshocks are delineated by the reworked catalog. In the second paper, the spatial distribution of seismicity in the Hengill region, southwest Iceland, is analyzed. The relocation and collapsing methods developed in the first paper, together with a non-linear relocation strategy using empirical traveltime tables, are used to process a catalog collected by the Icelandic Meteorological Office. The reworked catalog reproduces details of the spatial distribution of seismicity that independently emerge from relative relocations of a small subset of the catalog events. The processed catalog is then used to estimate the depth to the brittle-ductile transition.
The estimates show that in general the northern part of the area, dominated by volcanic processes, has a shallower depth than the southern part, where tectonic deformation predominates. In the third and the fourth papers, two back-projection methods using inter-station cross correlations are proposed for locating tremor sources. For the first method, double correlations, defined as the cross correlations of correlations from two station pairs sharing a common reference station, are back projected. For the second method, the products of correlation envelopes from a group of stations sharing a common reference station are back projected. Back projecting these combinations of correlations, instead of single correlations, suppresses random noise and reduces the strong geometrical signature caused by the station configuration. These two methods are tested with volcanic tremor at Katla volcano, Iceland. The inferred source locations agree with surface observations related to volcanic events which occurred during the tremor period.
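As a rough illustration of the back-projection idea described above (not the exact double-correlation or envelope-product stacking of the papers), the sketch below back-projects cross-correlation envelopes onto a grid of trial sources under a constant-velocity assumption; the geometry, names, and parameters are hypothetical.

```python
import numpy as np

def back_project(envelopes, pairs, stations, grid, v, dt):
    """Back-project correlation envelopes onto a grid of trial sources.

    envelopes : dict mapping (i, j) -> 1-D correlation envelope of
                stations i and j, with zero lag at the centre sample
    pairs     : list of (i, j) station-index pairs
    stations  : (n_sta, 2) station coordinates
    grid      : (n_pts, 2) trial source coordinates
    v         : assumed propagation speed
    dt        : sample interval of the envelopes
    """
    image = np.zeros(len(grid))
    for (i, j) in pairs:
        env = envelopes[(i, j)]
        mid = len(env) // 2                           # zero-lag sample
        # predicted differential travel time for every grid point
        ti = np.linalg.norm(grid - stations[i], axis=1) / v
        tj = np.linalg.norm(grid - stations[j], axis=1) / v
        lag = np.rint((ti - tj) / dt).astype(int)
        idx = np.clip(mid + lag, 0, len(env) - 1)
        image += env[idx]                             # stack envelope values
    return image / len(pairs)
```

Grid points whose predicted differential travel times line up with envelope peaks accumulate energy; the papers' combinations of correlations sharpen exactly this image by suppressing noise and the station-geometry signature.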
13

Reconstruction of Mantle Circulation Using Sequential Data Assimilation

Bocher, Marie 25 November 2016 (has links)
This dissertation focuses on the development of data assimilation methods to reconstruct the circulation of the Earth's mantle and the evolution of its surface tectonics over the last 200 Myr. We use numerical models of mantle convection in which the surface dynamics is similar to the Earth's. By combining these models with plate tectonics reconstructions, it is possible to estimate the structure and evolution of the temperature field of the mantle. So far, the assimilation of plate tectonics reconstructions has been done by imposing specific boundary conditions in the model (force balance, imposed velocities...). These techniques, although useful for testing the plausibility of alternative tectonic scenarios, do not allow the full expression of the dynamical feedback between mantle convection and surface tectonics. We develop sequential data assimilation techniques able to assimilate plate tectonics reconstructions in a numerical model while letting this dynamical feedback develop self-consistently. Moreover, these techniques take into account errors in the plate tectonics reconstructions and compute the error on the final estimation of mantle circulation. First, we develop a suboptimal Kalman filter. This filter estimates the most likely structure and evolution of mantle circulation from a numerical model of mantle convection, a time series of surface observations, and the uncertainties of both. The filter was tested on synthetic experiments, in which the data assimilation algorithm is applied to a set of synthetic observations obtained from a reference run and the resulting estimate of the evolution is compared with the reference evolution. These experiments showed that it is possible, in principle, to reconstruct the structure and evolution of the whole mantle from surface observations of velocity and heat flux. Second, we develop an ensemble Kalman filter. Instead of estimating the most likely evolution, an ensemble of possible evolutions is computed. This technique leads to a better estimation of the geometry of mantle structures and a more complete estimation of the associated uncertainties.
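The twin-experiment workflow described in this abstract can be miniaturized as follows: a toy two-variable linear system stands in for the convection code, only its first component is "observed at the surface", and an ensemble Kalman filter started from a wrong state is checked against the reference run. Everything here is a hypothetical stand-in for illustration, not the thesis model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "mantle": a slowly rotating, slightly damped two-variable system.
M = np.array([[0.99, -0.1], [0.1, 0.99]])
H = np.array([[1.0, 0.0]])   # only the first component is observed
r = 0.05                     # observation-error standard deviation

# Reference (truth) run and synthetic surface observations
x_true = np.array([1.0, 0.0])
truths, obs = [], []
for _ in range(50):
    x_true = M @ x_true
    truths.append(x_true.copy())
    obs.append(H @ x_true + rng.normal(0.0, r, 1))

# Ensemble filter started from a wrong initial state
ens = rng.normal(0.0, 1.0, size=(2, 30))
err_filt = []
for x_t, y in zip(truths, obs):
    ens = M @ ens                                  # forecast step
    A = ens - ens.mean(axis=1, keepdims=True)      # ensemble anomalies
    HA = H @ A
    P_xd = A @ HA.T / (ens.shape[1] - 1)           # state-data covariance
    P_dd = HA @ HA.T / (ens.shape[1] - 1) + r**2   # data-data covariance
    K = P_xd / P_dd                                # gain (scalar observation)
    D = y[:, None] + rng.normal(0.0, r, (1, 30))   # perturbed observations
    ens = ens + K @ (D - H @ ens)                  # analysis step
    err_filt.append(np.linalg.norm(ens.mean(axis=1) - x_t))
print("final analysis error:", err_filt[-1])
```

Even though only the first component is observed, the rotation in `M` makes the second component recoverable over time, mirroring how surface observations can constrain the deep mantle.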
14

Inverse Problems and Self-similarity in Imaging

Ebrahimi Kahrizsangi, Mehran 28 July 2008 (has links)
This thesis examines the concept of image self-similarity and provides solutions to various associated inverse problems, such as resolution enhancement and missing fractal codes. In general, many real-world inverse problems are ill-posed, mainly because they lack a unique solution. The procedure of providing acceptable unique solutions to such problems is known as regularization. The concept of an image prior, which has been of crucial importance in image modelling and processing, has also been important in solving inverse problems, since it translates algebraically into the regularization procedure. Indeed, much recent progress in imaging has been due to advances in the formulation and practice of regularization. This, coupled with progress in optimization and numerical analysis, has yielded much improvement in computational methods for solving inverse imaging problems. Historically, the idea of self-similarity was important in the development of fractal image coding. Here we show that the self-similarity properties of natural images may be used to construct image priors for the purpose of addressing certain inverse problems. Indeed, new trends in the area of non-local image processing have provided a rejuvenated appreciation of image self-similarity and opportunities to explore novel self-similarity-based priors. We first revisit fractal-based methods and address some open theoretical problems in the area. This includes formulating a necessary and sufficient condition for the contractivity of the block fractal transform operator. We also provide more general formulations of fractal-based self-similarity constraints on an image. These formulations can be developed algebraically and also in terms of the set-based method of Projection Onto Convex Sets (POCS). We then revisit the traditional inverse problems of single-frame image zooming and multi-frame resolution enhancement, also known as super-resolution.
Some ideas will be borrowed from newly developed non-local denoising algorithms in order to formulate self-similarity priors. Understanding the role of scale and choice of examples/samples is also important in these proposed models. For this purpose, we perform an extensive series of numerical experiments and analyze the results. These ideas naturally lead to the method of self-examples, which relies on the regularity properties of natural images at different scales, as a means of solving the single-frame image zooming problem. Furthermore, we propose and investigate a multi-frame super-resolution counterpart which does not require explicit motion estimation among video sequences.
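The non-local, self-similarity idea the abstract refers to can be illustrated with a bare-bones non-local means denoiser, which replaces each pixel by a weighted average of pixels whose surrounding patches look alike. This is a generic sketch of the non-local principle, not the specific priors or zooming method developed in the thesis; the parameter names and values are hypothetical.

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.1):
    """Tiny non-local means denoiser exploiting patch self-similarity.

    patch  : side length of the comparison patch
    search : side length of the search window around each pixel
    h      : filtering parameter controlling how fast weights decay
    """
    pr, sr = patch // 2, search // 2
    pad = np.pad(img, pr + sr, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr
            ref = pad[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    qi, qj = ci + di, cj + dj
                    cand = pad[qi - pr:qi + pr + 1, qj - pr:qj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / h**2)   # similar patches weigh more
                    wsum += w
                    acc += w * pad[qi, qj]
            out[i, j] = acc / wsum
    return out
```

The same patch-similarity weights, applied across scales rather than within one image, are the essence of the "self-examples" approach to single-frame zooming mentioned above.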
16

Automated determination of earthquake source parameters

Vackář, Jiří January 2018 (has links)
Title: Automated determination of earthquake source parameters Author: Jiří Vackář Department: Department of Geophysics Supervisor: prof. RNDr. Jiří Zahradník, DrSc., Department of Geophysics Abstract: The thesis deals with methods for automated inversion of seismic source parameters. We study the influence of the structure model used and show an example of how an existing model can be improved. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval from an ArcLink server or local data storage. Step-like disturbances are detected by modeling the disturbance according to instrument parameters, and such components are automatically excluded from further processing. Frequency ranges for filtering and time windows for the inversion are determined automatically according to epicentral distance. Full-waveform inversion is performed in a space-time grid around a provided hypocenter. A data covariance matrix calculated from pre-event noise yields automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequency ranges. The method is tested on synthetic and observed data. It is applied on a dataset from the Swiss seismic...
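The role of the data covariance matrix described above can be seen in a short covariance-weighted least-squares sketch: channels with large pre-event noise variance are down-weighted automatically. This is a generic illustration of covariance weighting, not the thesis code; the matrices and values are hypothetical.

```python
import numpy as np

def weighted_lsq(G, d, Cd):
    """Covariance-weighted least squares: m = (G^T Cd^-1 G)^-1 G^T Cd^-1 d.
    Rows of (G, d) with large variance in Cd contribute little, which is how
    a noise-derived covariance matrix down-weights noisy station recordings."""
    Ci = np.linalg.inv(Cd)
    return np.linalg.solve(G.T @ Ci @ G, G.T @ Ci @ d)

# Two "stations" observe the same source parameter m = 2; the second record
# is badly corrupted but carries a huge noise variance, so it is ignored.
G = np.array([[1.0], [1.0]])
d = np.array([2.0, 100.0])
Cd = np.diag([0.01, 1e6])
m = weighted_lsq(G, d, Cd)   # m[0] ≈ 2.0 despite the corrupted second record
```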
