71

A Comparative Study of the Particle Filter and the Ensemble Kalman Filter

Datta Gupta, Syamantak January 2009 (has links)
Non-linear Bayesian estimation, that is, estimation of the state of a non-linear stochastic system from a set of indirect noisy measurements, is a problem encountered in several fields of science. The particle filter and the ensemble Kalman filter are both used to obtain sub-optimal solutions of Bayesian inference problems, particularly for high-dimensional non-Gaussian and non-linear models. Both are essentially Monte Carlo techniques that compute their results using a set of estimated trajectories of the variable to be monitored. It has been shown that in a linear and Gaussian environment, solutions obtained from both these filters converge to the optimal solution obtained by the Kalman filter. However, given the similarity between the two filters, it is of interest to explore how they compare to each other in basic methodology and construction. In this work, we take up a specific problem of Bayesian inference in a restricted framework and compare analytically the results obtained from the particle filter and the ensemble Kalman filter. We show that for the chosen model, under certain assumptions, the two filters become methodologically analogous as the sample size goes to infinity.
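The convergence claim above can be illustrated numerically. The following is a minimal sketch, not the thesis' actual model: a bootstrap particle filter and the exact Kalman filter run side by side on a scalar linear-Gaussian state-space model, where their estimates should agree closely for a large sample size. All parameter values are illustrative assumptions.

```python
import numpy as np

# Scalar linear-Gaussian model: x_t = a*x_{t-1} + w_t,  y_t = x_t + v_t.
# Illustrative parameters; not taken from the thesis.
rng = np.random.default_rng(0)
a, q, r, N, T = 0.9, 0.5, 0.8, 5000, 50  # dynamics, noise variances, particles, steps

x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t-1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)

# Exact Kalman filter, prior x_0 ~ N(0, 1)
m, P, kf_means = 0.0, 1.0, []
for t in range(T):
    m, P = a * m, a * a * P + q                 # predict
    K = P / (P + r)                             # Kalman gain
    m, P = m + K * (y[t] - m), (1 - K) * P      # update
    kf_means.append(m)

# Bootstrap particle filter with multinomial resampling, same prior
parts, pf_means = rng.normal(0, 1, N), []
for t in range(T):
    parts = a * parts + rng.normal(0, np.sqrt(q), N)  # propagate
    w = np.exp(-0.5 * (y[t] - parts) ** 2 / r)        # likelihood weights
    w /= w.sum()
    parts = rng.choice(parts, N, p=w)                 # resample
    pf_means.append(parts.mean())

print(max(abs(k - p) for k, p in zip(kf_means, pf_means)))  # small for large N
```

With N = 5000 particles, the two posterior-mean trajectories differ only by Monte Carlo noise, consistent with the linear-Gaussian convergence result cited in the abstract.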
72

Numerical analysis of highly oscillatory Stochastic PDEs

Bréhier, Charles-Edouard 27 November 2012 (has links) (PDF)
In the first part, we are interested in the behavior of a system of stochastic PDEs with two time scales; more precisely, we focus on the approximation of the slow component by means of an efficient numerical scheme. We first prove an averaging principle, which states that the slow component converges to the solution of the so-called averaged equation. We then show that a numerical scheme of Euler type provides a good approximation of an unknown coefficient appearing in the averaged equation. Finally, we build and analyze a discretization scheme based on the previous results, following the HMM (Heterogeneous Multiscale Method) methodology. We make precise the orders of convergence with respect to the time-scale parameter and to the parameters of the numerical discretization, and we study the convergence both in a strong sense (approximation of the trajectories) and in a weak sense (approximation of the laws). In the second part, we study a method for approximating solutions of parabolic PDEs which combines a semi-Lagrangian approach and a Monte Carlo discretization. We first show, in a simplified situation, that the variance depends on the discretization steps. We then provide numerical simulations of solutions in order to illustrate some possible applications of such a method.
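The averaging principle described above can be illustrated on a toy slow-fast system (an assumed example for illustration, not the system studied in the thesis): the fast component is an Ornstein-Uhlenbeck process with invariant mean 0, so the slow component should track the solution of the averaged equation.

```python
import numpy as np

# Toy slow-fast pair:
#   dX = (-X + Y) dt,   dY = -(Y/eps) dt + sqrt(2/eps) dB.
# The fast OU process Y has invariant mean 0, so the averaged equation is
# dX = -X dt, with solution X0 * exp(-t). Parameters are illustrative.
rng = np.random.default_rng(1)
eps, dt, T, X0 = 1e-3, 1e-4, 1.0, 1.0

X, Y = X0, 0.0
for _ in range(int(T / dt)):
    X += (-X + Y) * dt                                        # slow Euler step
    Y += -(Y / eps) * dt + np.sqrt(2 / eps) * rng.normal(0, np.sqrt(dt))

print(abs(X - X0 * np.exp(-T)))  # small: slow component follows the averaged flow
```

Note that the explicit Euler step for the fast component requires dt well below eps for stability; HMM-type schemes exist precisely to avoid resolving the fast scale over the whole time interval.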
73

Modelling microstructural evolution in binary alloys

Rautiainen, Terhi January 1998 (has links)
In this thesis, morphologies, coarsening mechanisms and kinetics are examined in a systematic way when phase separation and subsequent microstructural coarsening are modelled using deterministic mean field and stochastic Monte Carlo methods. For the mean field approach, a microscopic diffusion equation due to Khachaturyan is employed, together with a variation of it that has an environment-dependent mobility. Monte Carlo simulations are carried out with vacancy and Kawasaki dynamics, and a residence time algorithm is applied in the vacancy case. In mean field models, microstructural evolution results from a direct minimization of a free energy functional, and the mechanism of atomic diffusion does not appear explicitly. In Monte Carlo models, changes in site occupancies are effected by direct exchanges of neighbouring atoms (Kawasaki dynamics) or through vacancy motion. In this thesis the correspondence between mean field and Monte Carlo models in describing phase transformations in binary alloys is examined. Several examples are given of cases in which the differences between the deterministic and stochastic models affect the phase transformation, and the underlying differences are analyzed. It is also investigated how the choice of diffusion mechanism in the Monte Carlo model affects the microstructural evolution. Most Monte Carlo studies have been carried out with Kawasaki dynamics, although in real metals such direct exchanges are very unlikely to occur. It will be shown how the vacancy diffusion mechanism produces a variety of coarsening mechanisms over a range of temperatures which the Kawasaki dynamics fails to capture. Consequently, kinetics and resulting morphologies, especially at low temperatures, are affected. Finally, the question of the physicality of time scales in mean field and Monte Carlo models is addressed. Often a linear dependence between Monte Carlo time and real physical time is assumed, although there is no rigorous justification for this. In mean field models, time is defined through the atomic mobility. By examining the effect of a realistic diffusion mechanism in systems undergoing phase transformation, a critical discussion of time scales in microscopic mean field models and a Monte Carlo model with Kawasaki dynamics is presented.
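The Kawasaki exchange dynamics mentioned above can be sketched in a few lines; the lattice size, interaction strength and temperature below are illustrative choices, not parameters from the thesis.

```python
import numpy as np

# Minimal sketch of Kawasaki (direct-exchange) dynamics for an A-B alloy on a
# small periodic square lattice. Illustrative parameters only.
rng = np.random.default_rng(2)
L, J, T = 16, 1.0, 1.5
spins = rng.choice(np.array([-1, 1]), size=(L, L))  # +1 = A atom, -1 = B atom
n_A = int((spins == 1).sum())                       # composition, conserved below

def local_energy(s, i, j):
    # bond energy of site (i, j) with its four periodic neighbours
    return -J * s[i, j] * (s[(i+1) % L, j] + s[(i-1) % L, j] +
                           s[i, (j+1) % L] + s[i, (j-1) % L])

for _ in range(20000):
    i, j = rng.integers(0, L, size=2)
    di, dj = [(0, 1), (1, 0)][rng.integers(2)]      # pick a random neighbour bond
    k, l = (i + di) % L, (j + dj) % L
    if spins[i, j] == spins[k, l]:
        continue                                    # exchanging like atoms is a no-op
    # The shared bond appears in both local energies but is exchange-invariant,
    # so it cancels in the energy difference.
    e_old = local_energy(spins, i, j) + local_energy(spins, k, l)
    spins[i, j], spins[k, l] = spins[k, l], spins[i, j]
    e_new = local_energy(spins, i, j) + local_energy(spins, k, l)
    if e_new - e_old > 0 and rng.random() >= np.exp(-(e_new - e_old) / T):
        spins[i, j], spins[k, l] = spins[k, l], spins[i, j]  # Metropolis reject

print((spins == 1).sum())  # composition is unchanged by exchange dynamics
```

Unlike single-site flips, every accepted move swaps an A-B pair, so the overall composition is exactly conserved, which is the defining property of Kawasaki dynamics; a vacancy-mediated variant would instead move a single vacancy through the lattice.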
74

Analyse und Simulation von Unsicherheiten in der flächendifferenzierten Niederschlags-Abfluss-Modellierung / Analysis and simulation of uncertainties in spatial distributed rainfall-runoff modelling

Grundmann, Jens 10 June 2010 (has links) (PDF)
Modelling rainfall-runoff (R-R) processes using deterministic, spatially distributed, process-based models is affected by numerous uncertainties. One major source of these uncertainties originates from measurement errors, together with the errors occurring when the data are processed for spatially distributed modelling. Inadequate representation of the governing processes in the model with respect to a given application is another source of uncertainty. Considering that R-R models are commonly used in hydrologic practice, a quantification of the uncertainties is essential for a realistic interpretation of the model results. The presented new framework allows for a comprehensive, total as well as component-based estimation of the uncertainties of model results from spatially distributed, process-based R-R modelling. The capability of the new framework to estimate the influence of the main sources of uncertainty, as well as their combination into a total uncertainty, is demonstrated and analysed for the mesoscale catchment of the Schwarze Pockau (gauge Zöblitz) in the Ore Mountains. The approach employs the following methods to quantify the uncertainties: (i) Monte Carlo simulations using spatially distributed stochastic soil parameters, which allow for the analysis of the impact of uncertain soil data; (ii) Bayesian inference and Markov chain Monte Carlo simulations, which yield an estimate of the uncertainty of the conceptual model parameters governing the runoff formation and concentration processes; and (iii) Monte Carlo simulations using stochastically generated rainfall patterns describing the spatio-temporal variability of interpolated rainfall data. Monte Carlo methods are also employed to combine the single sources of uncertainty into a hydrologic uncertainty and a total uncertainty. This approach accounts for the correlations between the random variables and provides an empirical description of their multidimensional dependence structure. The example application shows, with respect to runoff, a dominance of the uncertainty resulting from the spatio-temporal rainfall distribution, followed by the uncertainties from the soil data and the conceptual model parameters. This dominance is also reflected in the total uncertainty. The uncertainties derived from the data show a heteroscedasticity which is shaped by the process dynamics. Furthermore, the degree of uncertainty appears to depend on the rainfall intensity, and the analysis also indicates structural deficits of the R-R model. The developed framework can in principle be transferred to other catchments as well as to other R-R models.
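The combination of uncertainty components by Monte Carlo methods, as described above, can be illustrated schematically; the Gaussian components, their magnitudes, and the simple additive combination below are invented for illustration only and are not the thesis' model.

```python
import numpy as np

# Schematic Monte Carlo combination of independent uncertainty components
# affecting a simulated discharge. All distributions and magnitudes are
# illustrative assumptions.
rng = np.random.default_rng(4)
n = 100_000
base_discharge = 10.0                       # nominal discharge (arbitrary units)
rainfall_err = rng.normal(0, 2.0, n)        # dominant component: rainfall distribution
soil_err = rng.normal(0, 1.0, n)            # uncertain soil information
param_err = rng.normal(0, 0.5, n)           # conceptual model parameters

total = base_discharge + rainfall_err + soil_err + param_err
print(total.std())  # ~ sqrt(2^2 + 1^2 + 0.5^2) ~ 2.29: dominated by the rainfall term
```

Even in this toy setting, the total spread is governed almost entirely by the largest component, mirroring the dominance of the rainfall uncertainty reported in the abstract; the actual framework additionally captures correlations and the empirical dependence structure between components.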
75

Stochastic routing models in sensor networks

Keeler, Holger Paul January 2010 (has links)
Sensor networks are an evolving technology that promises numerous applications. The random and dynamic structure of sensor networks has motivated the suggestion of greedy data-routing algorithms.

In this thesis stochastic models are developed to study the advancement of messages under greedy routing in sensor networks. A model framework that is based on homogeneous spatial Poisson processes is formulated and examined to give a better understanding of the stochastic dependencies arising in the system. The effects of the model assumptions and the inherent dependencies are discussed and analyzed. A simple power-saving sleep scheme is included, and its effects on the local node density are addressed to reveal that it reduces one of the dependencies in the model.

Single-hop expressions describing the advancement of messages are derived, and asymptotic expressions for the hop length moments are obtained. Expressions for the distribution of the multihop advancement of messages are derived. These expressions involve high-dimensional integrals, which are evaluated with quasi-Monte Carlo integration methods. An importance sampling function is derived to speed up the quasi-Monte Carlo methods. The subsequent results agree extremely well with those obtained via routing simulations. A renewal process model is proposed to model multihop advancements, and is justified under certain assumptions.

The model framework is extended by incorporating a spatially dependent density, which is inversely proportional to the sink distance. The aim of this extension is to demonstrate that an inhomogeneous Poisson process can be used to model a sensor network with spatially dependent node density. Elliptic integrals and asymptotic approximations are used to describe the random behaviour of hops. The final model extension entails including random transmission radii, the effects of which are discussed and analyzed. The thesis is concluded by giving future research tasks and directions.
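The quasi-Monte Carlo integration used above for the multihop expressions can be illustrated with a toy example; the hand-rolled Halton construction and the smooth integrand below are illustrative stand-ins for the thesis' high-dimensional advancement integrals.

```python
import numpy as np

# Toy quasi-Monte Carlo integration over [0,1]^3 with a Halton sequence
# (bases 2, 3, 5). The integrand is an illustrative stand-in, chosen so its
# exact integral is known.

def van_der_corput(n, base):
    # radical-inverse (van der Corput) sequence in the given base
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

n = 4096
pts = np.column_stack([van_der_corput(n, b) for b in (2, 3, 5)])  # low-discrepancy points

vals = np.prod(2 * pts, axis=1)   # integrand with exact integral 1 over [0,1]^3
print(vals.mean())                # close to 1
```

Low-discrepancy point sets like this converge roughly like O((log n)^d / n) for smooth integrands, faster than the O(n^-1/2) of plain Monte Carlo, which is why they suit the repeated evaluation of the multihop distribution integrals; importance sampling, as in the thesis, further reduces the effective variation of the integrand.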
77

Hidden states, hidden structures : Bayesian learning in time series models

Murphy, James Kevin January 2014 (has links)
This thesis presents methods for the inference of system state and the learning of model structure for a number of hidden-state time series models, within a Bayesian probabilistic framework. Motivating examples are taken from application areas including finance, physical object tracking and audio restoration. The work in this thesis can be broadly divided into three themes: system and parameter estimation in linear jump-diffusion systems, non-parametric model (system) estimation and batch audio restoration. For linear jump-diffusion systems, efficient state estimation methods based on the variable rate particle filter are presented for the general linear case (chapter 3) and a new method of parameter estimation based on Particle MCMC methods is introduced and tested against an alternative method using reversible-jump MCMC (chapter 4). Non-parametric model estimation is examined in two settings: the estimation of non-parametric environment models in a SLAM-style problem, and the estimation of the network structure and forms of linkage between multiple objects. In the former case, a non-parametric Gaussian process prior model is used to learn a potential field model of the environment in which a target moves. Efficient solution methods based on Rao-Blackwellized particle filters are given (chapter 5). In the latter case, a new way of learning non-linear inter-object relationships in multi-object systems is developed, allowing complicated inter-object dynamics to be learnt and causality between objects to be inferred. Again based on Gaussian process prior assumptions, the method allows the identification of a wide range of relationships between objects with minimal assumptions and admits efficient solution, albeit in batch form at present (chapter 6). Finally, the thesis presents some new results in the restoration of audio signals, in particular the removal of impulse noise (pops and clicks) from audio recordings (chapter 7).
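The Gaussian process prior modelling underlying both non-parametric applications above can be sketched in its simplest form; the squared-exponential kernel, its hyperparameters, and the target function below are illustrative assumptions, not the thesis' environment or inter-object models.

```python
import numpy as np

# Minimal GP regression sketch: a squared-exponential prior fitted to noisy
# samples of a 1-D function. Kernel and noise parameters are illustrative.

def rbf(a, b, ell=0.15):
    # squared-exponential (RBF) kernel matrix between point sets a and b
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(3)
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 20)

noise_var = 0.05 ** 2
K = rbf(x_train, x_train) + noise_var * np.eye(20)   # noisy prior covariance
alpha = np.linalg.solve(K, y_train)

x_test = np.array([0.25, 0.75])
post_mean = rbf(x_test, x_train) @ alpha             # GP posterior mean
print(post_mean)  # near sin(2*pi*x_test) = [1, -1]
```

In the thesis' settings the latent function is a potential field (or an inter-object link function) rather than a known sinusoid, and inference is embedded in a Rao-Blackwellized particle filter, but the same prior-plus-linear-algebra structure is the core computation.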
78

Bayesian Model Selections for Log-binomial Regression

Zhou, Wei January 2018 (has links)
No description available.
79

Validation of the Westinghouse BWR nodal core simulator POLCA8 against Serpent2 reference results / Validering av Westinghouse BWR nodal core simulator POLCA8 mot Serpent2 referensresultat

Gaillard, Mathilde January 2021 (has links)
When a new nodal core simulator is developed, like any other simulator, it must go through an extensive verification and validation effort in which, in the first stage, it is tested against appropriate reference tools on various theoretical benchmark problems. The series of tests consists of comparing several geometries, from the simplest to the most complex, by simulating them with the newly developed nodal core simulator and with a higher-order solver representing the reference solution, in this case the Serpent2 Monte Carlo transport code. The aim of this master's thesis is to carry out one part of these tests. It consisted of simulating a three-dimensional (3D) 2x2 mini boiling water reactor (BWR) core with the latest version of the Westinghouse BWR nodal core simulator POLCA8, and of comparing the outcome of these simulations against Serpent2 reference results. Prior to this work, POLCA8 was successfully tested on a 3D single-channel benchmark problem using the same Serpent2/POLCA8 methodology. However, the benchmark problem considered in this work is challenging in several respects. Indeed, the nodal core simulator should accurately predict the eigenvalues and power distributions against the reference results, while taking into account axial leakage, resulting from the passage from two-dimensional (2D) infinite-lattice physics calculations to 3D simulations, as well as strong axial flux gradients due to the insertion or withdrawal of the control rods after a certain depletion. This last effect is known as the Control Blade History (CBH) effect and is the main focus of this study. In addition to the development of a new version of the nodal core simulator, a new version of the Westinghouse deterministic transport code PHOENIX5 is also under development. The accuracy of PHOENIX5 was indirectly tested through this benchmark, since it provided the cross sections for the POLCA8 simulations. In addition, Serpent2-based nodal cross sections were generated for POLCA8 to provide a means of comparing these two sets of nodal cross section data. The results obtained lead to the conclusion that the CBH model gives very good results, especially with regard to the power distributions, in particular those after the withdrawal of the control rods, where accuracy is needed most. Keywords: Nodal Core Analysis, Monte Carlo Methods, CBH Effects
80

Neutron Radiographic Imaging Analysis

Butler, Michael Paul January 1980 (has links)
In analyzing the processes involved in neutron radiography, there is a need for a well-defined mathematical structure which can simultaneously be used in practical situations without great difficulty. In this report, the edge-spread function (ESF) method of analysis is considered in some detail. The basic theory is developed, and both the general and the specific viewpoint are considered in terms of the mathematical functions used. The usefulness of ESF theory in predicting optical density patterns is illustrated. Specific applications of the theory are developed; in particular, studies of image resolution and unsharpness are undertaken.

To determine whether or not ESF methods are a good representation of the physical situation, some alternate methods which consider radiography from a more basic viewpoint are developed. The first of these is a strictly numerical approach, where experimental data are examined without specifying a model for the image formation process; a matrix formulation suitable for characterizing an image is developed.

The second alternate method involves the use of Monte Carlo methods; this allows the incorporation of more realistic parameters into the analysis. For example, screen-film separation and object scattering of neutrons, and their effects on the image, are evaluated. Finally, a two-dimensional analysis of a simple problem is considered, with the end result being a confirmation of the usefulness of ESF theory. / Thesis / Master of Engineering (MEngr)
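The ESF idea can be illustrated numerically: model the measured edge as a smooth step and differentiate it to recover the line-spread function, from which an unsharpness measure follows. The Gaussian edge model and its width below are illustrative assumptions, not the report's measured data.

```python
import numpy as np
from math import erf, sqrt, log

# Model edge-spread function (ESF) as an error-function step of illustrative
# width sigma, then recover the line-spread function (LSF) by differentiation.
sigma = 0.5
x = np.linspace(-3, 3, 601)
esf = np.array([0.5 * (1 + erf(t / (sigma * sqrt(2)))) for t in x])

lsf = np.gradient(esf, x)                 # LSF is the derivative of the ESF
half_max = lsf >= lsf.max() / 2
fwhm = x[half_max][-1] - x[half_max][0]   # unsharpness measure from the LSF

print(fwhm)  # ~ 2.355 * sigma for a Gaussian LSF
```

In practice the measured ESF is noisy, so the differentiation step is usually regularized by fitting a smooth model first; the Monte Carlo comparisons in the report serve exactly to check how well such smooth models represent the physical image formation.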
