61. An empirical analysis of scenario generation methods for stochastic optimization (Nils Löhndorf, 17 May 2016)
This work presents an empirical analysis of popular scenario generation methods for stochastic optimization, including quasi-Monte Carlo, moment matching, and methods based on probability metrics, as well as a new method referred to as Voronoi cell sampling. Solution quality is assessed by measuring the error that arises from using scenarios to solve a multi-dimensional newsvendor problem, for which analytical solutions are available. In addition to the expected value, the work also studies scenario quality when minimizing expected shortfall using the conditional value-at-risk. To solve problems with millions of random parameters quickly, a reformulation of the risk-averse newsvendor problem is proposed that can be solved via Benders decomposition. The empirical analysis identifies Voronoi cell sampling as the method with the lowest errors, with particularly good results for heavy-tailed distributions. A controversial finding is the evidence that widely used methods based on minimizing probability metrics are ineffective under high-dimensional randomness.
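The abstract does not spell out the Voronoi cell sampling algorithm. As a rough, hedged illustration of the general idea, the sketch below builds scenarios by quantizing a large Monte Carlo sample into Voronoi cells (a Lloyd/k-means-style reduction) and assigning each cell's empirical mass as the scenario probability; the lognormal demand model, cell count and iteration budget are illustrative assumptions, not details from the thesis.

```python
import numpy as np

def voronoi_scenarios(sample, n_scenarios, n_iter=20, seed=0):
    """Reduce a Monte Carlo sample to n_scenarios representative points by
    Lloyd-style iteration; each scenario's probability is its cell's mass."""
    rng = np.random.default_rng(seed)
    # initialize centers with a random subset of the sample
    centers = sample[rng.choice(len(sample), n_scenarios, replace=False)]
    for _ in range(n_iter):
        # assign every sample point to its nearest center (its Voronoi cell)
        d = np.linalg.norm(sample[:, None, :] - centers[None, :, :], axis=2)
        cell = d.argmin(axis=1)
        for k in range(n_scenarios):
            pts = sample[cell == k]
            if len(pts):
                centers[k] = pts.mean(axis=0)   # move center to its cell mean
    probs = np.bincount(cell, minlength=n_scenarios) / len(sample)
    return centers, probs

# toy use: demand scenarios for a 3-product newsvendor, lognormal demand
rng = np.random.default_rng(1)
demand = rng.lognormal(mean=3.0, sigma=0.8, size=(20_000, 3))
scenarios, probs = voronoi_scenarios(demand, n_scenarios=100)
print(scenarios.shape, probs.sum())   # (100, 3) 1.0
```

The resulting (scenario, probability) pairs would then be plugged into the scenario-based newsvendor problem in place of the continuous distribution.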

62. Kinetic Monte Carlo Methods for Computing First Capture Time Distributions in Models of Diffusive Absorption (Daniel Schmidt, 1 January 2017)
In this paper, we consider the capture dynamics of a particle undergoing a random walk above a sheet of absorbing traps. In particular, we seek to characterize the distribution of the time from when the particle is released to when it is absorbed. This problem is motivated by the study of lymphocytes in the human bloodstream: for a particle near the surface of a lymphocyte, how long will it take for the particle to be captured? We model this problem as a diffusive process with a mixture of reflecting and absorbing boundary conditions. The model is analyzed from two approaches. The first is a numerical simulation using a Kinetic Monte Carlo (KMC) method that exploits exact solutions to accelerate a particle-based simulation of the capture time. A notable advantage of KMC is that its run time is independent of how far from the traps the particle begins. We compare our results to the second approach, an asymptotic approximation of the first passage time (FPT) distribution for particles that start far from the traps. Our goal is to validate the efficacy of homogenizing the surface boundary conditions, replacing the mixture of reflecting (Neumann) and absorbing (Dirichlet) boundary conditions with a single mixed (Robin) boundary condition.
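As a hedged toy illustration of the capture-time question (a naive lattice random walk, not the exact-solution-accelerated KMC of the thesis), the sketch below releases a walker above a plane whose sites are absorbing traps with some probability and reflecting otherwise; the trap fraction, starting height and lattice size are assumptions chosen for speed.

```python
import numpy as np

def capture_time(trap_frac=0.2, z0=4, L=50, max_steps=50_000, seed=0):
    """Random walk on a 3D lattice above the plane z = 0 (periodic in x, y).
    Sites on z = 0 absorb with probability trap_frac and reflect otherwise.
    Returns the number of steps until capture, or None if not captured."""
    rng = np.random.default_rng(seed)
    traps = rng.random((L, L)) < trap_frac      # quenched trap arrangement
    x = y = 0
    z = z0
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for t in range(1, max_steps + 1):
        dx, dy, dz = moves[rng.integers(6)]
        x, y = (x + dx) % L, (y + dy) % L
        z += dz
        if z == 0:                  # hit the boundary plane
            if traps[x, y]:
                return t            # absorbed by a trap
            z = 1                   # reflected off a non-trap site
    return None

times = [capture_time(seed=s) for s in range(100)]
times = [t for t in times if t is not None]
print(f"captured {len(times)}/100, median capture time: {np.median(times):.0f} steps")
```

The heavy tail of the simulated capture-time distribution (returns to the plane are recurrent but slow) is exactly the kind of behaviour the thesis's homogenized Robin condition is meant to summarize.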

63. Particle Filtering for Track Before Detect Applications (Johan Torstensson and Mikael Trieb, January 2005)
Integrated tracking and detection based on unthresholded measurements, also referred to as track before detect (TBD), is a hard nonlinear and non-Gaussian dynamical estimation and detection problem. However, it is a technique that enables the user to track and detect targets that would be extremely hard, if at all possible, to track and detect with 'classical' methods. TBD makes it possible to detect and track weak, stealthy or dim targets in noise and clutter, and particle filters have proven very useful in the implementation of TBD algorithms.

This Master's thesis investigates the use of particle filters on radar measurements in a TBD approach.

The work is divided into two major problems: a time-efficient implementation, and new functional features such as estimating the radar cross section (RCS) and the spatial extent of the target. The latter is of great importance when the radar resolution is fine enough that specific features of the target can be distinguished. Results are illustrated by means of realistic examples.
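As a hedged sketch of the TBD idea on unthresholded data (a generic bootstrap particle filter on a toy 1-D intensity map, not the thesis's radar model), each particle below carries a position and velocity and is weighted by the Gaussian likelihood ratio of the full intensity measurement; the point-spread shape, amplitude and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 1-D track-before-detect setup: unthresholded intensity per range cell
n_cells, n_steps, amp = 50, 60, 1.5
true_x, true_v = 5.0, 0.7
N = 2000                                         # number of particles

def measure(x):
    """Noisy intensity map: Gaussian point-spread around the target + noise."""
    cells = np.arange(n_cells)
    return amp * np.exp(-0.5 * (cells - x) ** 2) + rng.normal(0, 1, n_cells)

px = rng.uniform(0, n_cells, N)                  # particle positions
pv = rng.normal(0, 1, N)                         # particle velocities

for t in range(n_steps):
    true_x += true_v
    z = measure(true_x)
    # propagate particles with process noise
    px += pv + rng.normal(0, 0.1, N)
    pv += rng.normal(0, 0.05, N)
    # weight by the likelihood of the full (unthresholded) intensity map;
    # the noise-only term is common to all particles and cancels
    cells = np.arange(n_cells)
    psf = amp * np.exp(-0.5 * (cells[None, :] - px[:, None]) ** 2)
    logw = np.sum(z[None, :] * psf - 0.5 * psf ** 2, axis=1)  # Gaussian LLR
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # systematic resampling
    idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(N)) / N)
    idx = np.minimum(idx, N - 1)
    px, pv = px[idx], pv[idx]

print(f"true position {true_x:.1f}, particle-filter estimate {px.mean():.1f}")
```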

64. A Comparative Study of the Particle Filter and the Ensemble Kalman Filter (Syamantak Datta Gupta, January 2009)
Non-linear Bayesian estimation, that is, estimation of the state of a non-linear stochastic system from a set of indirect noisy measurements, is a problem encountered in several fields of science. The particle filter and the ensemble Kalman filter are both used to obtain sub-optimal solutions of Bayesian inference problems, particularly for high-dimensional non-Gaussian and non-linear models. Both are essentially Monte Carlo techniques that compute their results using a set of estimated trajectories of the variable to be monitored. It has been shown that in a linear and Gaussian environment, the solutions obtained from both filters converge to the optimal solution given by the Kalman filter. However, given the similarity between the two filters, it is of interest to explore how they compare in basic methodology and construction. In this work, we take up a specific problem of Bayesian inference in a restricted framework and analytically compare the results obtained from the particle filter and the ensemble Kalman filter. We show that for the chosen model, under certain assumptions, the two filters become methodologically analogous as the sample size goes to infinity.
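The linear-Gaussian convergence statement can be checked numerically. The sketch below (a minimal scalar state-space example with arbitrarily chosen model settings, not the thesis's framework) runs a bootstrap particle filter and a perturbed-observation ensemble Kalman filter side by side and measures how far each strays from the exact Kalman filter.

```python
import numpy as np

rng = np.random.default_rng(0)
a, q, h, r = 0.9, 0.5, 1.0, 1.0     # x' = a x + N(0, q);  z = h x + N(0, r)
T, N = 100, 5000                    # time steps, ensemble / particle size

# simulate the truth and the measurements
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(0, np.sqrt(q))
z = h * x_true + rng.normal(0, np.sqrt(r), T)

# exact Kalman filter
m, P, kf = 0.0, 1.0, []
for t in range(T):
    m, P = a * m, a * a * P + q                       # predict
    K = P * h / (h * h * P + r)                       # gain
    m, P = m + K * (z[t] - h * m), (1 - K * h) * P    # update
    kf.append(m)
kf = np.array(kf)

# bootstrap particle filter
xp, pf = rng.normal(0, 1, N), []
for t in range(T):
    xp = a * xp + rng.normal(0, np.sqrt(q), N)
    w = np.exp(-0.5 * (z[t] - h * xp) ** 2 / r)
    w /= w.sum()
    xp = xp[rng.choice(N, N, p=w)]                    # multinomial resampling
    pf.append(xp.mean())

# ensemble Kalman filter with perturbed observations
xe, enkf = rng.normal(0, 1, N), []
for t in range(T):
    xe = a * xe + rng.normal(0, np.sqrt(q), N)
    Pe = xe.var()                                     # ensemble covariance
    K = Pe * h / (h * h * Pe + r)
    zp = z[t] + rng.normal(0, np.sqrt(r), N)          # perturbed observations
    xe = xe + K * (zp - h * xe)
    enkf.append(xe.mean())

print("RMS deviation from the exact KF:",
      f"PF {np.sqrt(np.mean((np.array(pf) - kf) ** 2)):.3f},",
      f"EnKF {np.sqrt(np.mean((np.array(enkf) - kf) ** 2)):.3f}")
```

Both deviations shrink as N grows, consistent with the convergence result quoted in the abstract.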
67 |
Numerical analysis of highly oscillatory Stochastic PDEsBréhier, Charles-Edouard 27 November 2012 (has links) (PDF)
In the first part, we are interested in the behavior of a system of stochastic PDEs with two time scales; more precisely, we focus on approximating the slow component by an efficient numerical scheme. We first prove an averaging principle, which states that the slow component converges to the solution of the so-called averaged equation. We then show that a numerical scheme of Euler type provides a good approximation of an unknown coefficient appearing in the averaged equation. Finally, we build and analyze a discretization scheme based on the previous results, following the HMM (Heterogeneous Multiscale Method) methodology. We specify the orders of convergence with respect to the time-scale parameter and to the parameters of the numerical discretization, and we study convergence both in the strong sense (approximation of trajectories) and in the weak sense (approximation of laws). In the second part, we study a method for approximating solutions of parabolic PDEs which combines a semi-Lagrangian approach with a Monte Carlo discretization. We first show, in a simplified situation, how the variance depends on the discretization steps. We then provide numerical simulations of solutions to illustrate possible applications of the method.
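As a hedged finite-dimensional caricature of the HMM construction (a slow ODE coupled to a fast Ornstein-Uhlenbeck process, not the SPDE setting of the thesis), the macro Euler step below estimates the averaged drift by running a micro-solver for the fast variable with the slow one frozen; all coefficients and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-4                       # time-scale separation parameter
f = lambda x, y: y - x ** 3      # slow drift; here the averaged drift is F(x) = x - x**3

def hmm_step(x, dt, micro_steps=400, burn=100):
    """One macro Euler step: estimate the averaged drift F(x) by simulating
    the fast OU process dY = -(Y - x)/eps dt + sqrt(2/eps) dW with x frozen."""
    dtm = eps / 20                              # micro time step
    y = x                                       # start the fast process near its mean
    acc, n = 0.0, 0
    for k in range(micro_steps):
        y += -(y - x) / eps * dtm + np.sqrt(2 / eps * dtm) * rng.normal()
        if k >= burn:                           # discard burn-in, then time-average f
            acc += f(x, y)
            n += 1
    return x + dt * acc / n                     # macro Euler step with averaged drift

x, dt = 0.5, 0.05
for _ in range(200):                            # integrate the slow variable to t = 10
    x = hmm_step(x, dt)
# the averaged equation dX = (X - X^3) dt drives X toward 1 from x0 = 0.5;
# the HMM output is random, so it fluctuates around that equilibrium
print(f"HMM slow component at t = 10: {x:.3f} (averaged equilibrium: 1.0)")
```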

68. Modelling microstructural evolution in binary alloys (Terhi Rautiainen, January 1998)
In this thesis, morphologies, coarsening mechanisms and kinetics are examined in a systematic way when phase separation and subsequent microstructural coarsening are modelled using deterministic mean field and stochastic Monte Carlo methods. For the mean field approach, a microscopic diffusion equation due to Khachaturyan is employed, together with a variant of it with an environment-dependent mobility. Monte Carlo simulations are carried out with vacancy and Kawasaki dynamics, and a residence time algorithm is applied in the vacancy case.

In mean field models, microstructural evolution results from a direct minimization of a free energy functional, and the mechanism of atomic diffusion does not appear explicitly. In Monte Carlo models, changes in site occupancies are effected by direct exchanges of neighbouring atoms (Kawasaki dynamics) or through vacancy motion. This thesis examines the correspondence between mean field and Monte Carlo models in describing phase transformations in binary alloys. Several examples are given of cases in which the differences between deterministic and stochastic models affect the phase transformation, and the underlying differences are analyzed.

It is also investigated how the choice of diffusion mechanism in the Monte Carlo model affects the microstructural evolution. Most Monte Carlo studies have been carried out with Kawasaki dynamics, although in real metals such direct exchanges are very unlikely to occur. It is shown how the vacancy diffusion mechanism produces a variety of coarsening mechanisms over a range of temperatures which Kawasaki dynamics fails to capture. Consequently, kinetics and resulting morphologies, especially at low temperatures, are affected.

Finally, the question of the physicality of time scales in mean field and Monte Carlo models is addressed. A linear dependence between Monte Carlo time and real physical time is often assumed, although there is no rigorous justification for this. In mean field models, time is defined through the atomic mobility. By examining the effect of a realistic diffusion mechanism in systems undergoing phase transformation, a critical discussion of time scales in microscopic mean field models and in a Monte Carlo model with Kawasaki dynamics is presented.
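A minimal sketch of Kawasaki (direct-exchange) dynamics on a 2-D binary lattice is given below, assuming a nearest-neighbour Ising-type bond energy and a Metropolis acceptance rule; a vacancy-dynamics variant would instead move a single vacancy. The lattice size, temperature and step count are arbitrary choices, not parameters from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
L, kT, n_swaps = 32, 0.5, 200_000     # lattice size, temperature kT/J, attempted swaps

# 50/50 binary alloy: +1 = A atom, -1 = B atom; composition is conserved
s = np.ones(L * L, dtype=int)
s[: L * L // 2] = -1
rng.shuffle(s)
s = s.reshape(L, L)

def site_energy(i, j):
    """Ising bond energy of site (i, j) with its four neighbours (J = 1, periodic)."""
    return -s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                       + s[i, (j + 1) % L] + s[i, (j - 1) % L])

nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
for _ in range(n_swaps):
    i, j = rng.integers(L, size=2)
    di, dj = nbrs[rng.integers(4)]                # random nearest-neighbour pair
    k, l = (i + di) % L, (j + dj) % L
    if s[i, j] == s[k, l]:
        continue                                  # like atoms: the swap is a no-op
    e0 = site_energy(i, j) + site_energy(k, l)
    s[i, j], s[k, l] = s[k, l], s[i, j]           # trial exchange
    dE = site_energy(i, j) + site_energy(k, l) - e0
    if dE > 0 and rng.random() >= np.exp(-dE / kT):
        s[i, j], s[k, l] = s[k, l], s[i, j]       # reject: swap back

# coarsening shows up as a growing fraction of like nearest-neighbour bonds
like = np.mean(s == np.roll(s, 1, axis=0))
print(f"fraction of like vertical bonds after the quench: {like:.3f}")
```

Note that each attempted swap is one tick of "Monte Carlo time"; as the abstract stresses, relating such ticks linearly to physical time is an assumption, not a derived fact.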

69. Analyse und Simulation von Unsicherheiten in der flächendifferenzierten Niederschlags-Abfluss-Modellierung / Analysis and simulation of uncertainties in spatial distributed rainfall-runoff modelling (Jens Grundmann, 10 June 2010)
Modelling rainfall-runoff (R-R) processes with deterministic, spatially distributed, process-based models is affected by numerous uncertainties. One major source of uncertainty is measurement error, together with the errors that occur when the data are processed for spatially distributed modelling. Inadequate representation of the governing processes in the model with respect to a given application is another source. Since R-R models are widely used in hydrological practice, a quantification of the uncertainties is essential for a realistic interpretation of, and confidence in, the model results.
The newly developed framework allows a comprehensive, global as well as component-based estimation of the uncertainties of model results from spatially distributed, process-based R-R modelling. Its ability to quantify the influence of the main sources of uncertainty, as well as their combination into a total uncertainty of the simulated runoff, is demonstrated for the mesoscale catchment of the Schwarze Pockau (gauge Zöblitz) in the central Ore Mountains.
The approach employs the following methods to quantify the uncertainties:
(i) Monte Carlo simulations with spatially distributed stochastic soil parameters, to analyse the impact of uncertain soil information;
(ii) Bayesian inference and Markov chain Monte Carlo (MCMC) simulations, to estimate the uncertainty of the conceptual model parameters governing runoff formation and concentration (see the sketch after this entry); and
(iii) Monte Carlo simulations with stochastically generated rainfall fields, which describe the spatio-temporal variability of interpolated rainfall data.
Monte Carlo methods are also employed to combine the individual sources of uncertainty into a hydrological uncertainty and a total uncertainty. This approach captures the correlations between the random variables and describes their multidimensional dependence structure empirically.

The example application shows that, with respect to runoff, the uncertainty resulting from the spatio-temporal rainfall distribution dominates, followed by the uncertainties from the soil information and the conceptual model parameters. This dominance is also reflected in the total uncertainty. The uncertainties derived from measured data exhibit a heteroscedasticity that is shaped by the course of the process. Furthermore, there are indications that the uncertainty depends on the rainfall intensity, and structural deficits of the R-R model become visible.

In principle, the developed framework can be transferred to other catchments and to other R-R models.
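Component (ii) can be illustrated with a generic random-walk Metropolis sampler. The sketch below calibrates the recession constant of a toy linear-reservoir model against synthetic discharge data; the model, prior bounds and noise level are stand-in assumptions, not the thesis's R-R model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def runoff(k, rain):
    """Toy linear-reservoir rainfall-runoff model: storage S drains as Q = S / k."""
    S, q = 0.0, []
    for p in rain:
        S += p
        out = S / k
        S -= out
        q.append(out)
    return np.array(q)

# synthetic "observations" generated with k_true = 8 plus measurement noise
rain = rng.gamma(0.3, 5.0, 200)
q_obs = runoff(8.0, rain) + rng.normal(0, 0.05, 200)

def log_post(k):
    """Log-posterior: uniform prior on (1, 50), Gaussian error model."""
    if not 1.0 < k < 50.0:
        return -np.inf
    resid = q_obs - runoff(k, rain)
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2

# random-walk Metropolis
k, lp, chain = 5.0, log_post(5.0), []
for _ in range(5000):
    k_new = k + rng.normal(0, 0.3)
    lp_new = log_post(k_new)
    if np.log(rng.random()) < lp_new - lp:        # Metropolis acceptance
        k, lp = k_new, lp_new
    chain.append(k)

post = np.array(chain[1000:])                     # discard burn-in
print(f"posterior k: {post.mean():.2f} +/- {post.std():.2f} (true value 8)")
```

The retained chain approximates the posterior of the parameter; its spread is the parameter uncertainty that the framework propagates into the runoff simulation.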

70. Stochastic routing models in sensor networks (Holger Paul Keeler, January 2010)
Sensor networks are an evolving technology that promises numerous applications. The random and dynamic structure of sensor networks has motivated the suggestion of greedy data-routing algorithms.

In this thesis stochastic models are developed to study the advancement of messages under greedy routing in sensor networks. A model framework based on homogeneous spatial Poisson processes is formulated and examined to give a better understanding of the stochastic dependencies arising in the system. The effects of the model assumptions and the inherent dependencies are discussed and analyzed. A simple power-saving sleep scheme is included, and its effects on the local node density are addressed to reveal that it reduces one of the dependencies in the model.

Single-hop expressions describing the advancement of messages are derived, and asymptotic expressions for the hop length moments are obtained. Expressions for the distribution of the multihop advancement of messages are derived. These expressions involve high-dimensional integrals, which are evaluated with quasi-Monte Carlo integration methods. An importance sampling function is derived to speed up the quasi-Monte Carlo methods. The subsequent results agree extremely well with those obtained via routing simulations. A renewal process model is proposed to model multihop advancements, and is justified under certain assumptions.

The model framework is extended by incorporating a spatially dependent density, which is inversely proportional to the sink distance. The aim of this extension is to demonstrate that an inhomogeneous Poisson process can be used to model a sensor network with spatially dependent node density. Elliptic integrals and asymptotic approximations are used to describe the random behaviour of hops. The final model extension entails including random transmission radii, the effects of which are discussed and analyzed. The thesis concludes with future research tasks and directions.
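A hedged sketch of the basic greedy-routing experiment (plain Monte Carlo over a homogeneous Poisson node field, not the quasi-Monte Carlo machinery of the thesis) is given below: a message is relayed to the reachable neighbour closest to the sink until it arrives or hits a dead end. The density, transmission radius and source distance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, R = 2.0, 1.0            # node density (per unit area), transmission radius

def hops_to_sink(r0=15.0):
    """Greedy routing: relay to the in-range neighbour closest to the sink
    at the origin. Returns the hop count, or None on a dead end."""
    # nodes as a homogeneous Poisson process on a box covering the route
    half = r0 + 2 * R
    n = rng.poisson(lam * (2 * half) ** 2)
    nodes = rng.uniform(-half, half, (n, 2))
    to_sink = np.linalg.norm(nodes, axis=1)     # each node's distance to sink
    pos = np.array([r0, 0.0])
    for hop in range(1, 10_000):
        my_dist = np.linalg.norm(pos)
        if my_dist < R:
            return hop                          # sink is within direct reach
        in_range = np.linalg.norm(nodes - pos, axis=1) < R
        cand = in_range & (to_sink < my_dist)   # reachable AND strictly closer
        if not cand.any():
            return None                         # dead end: no positive advance
        pos = nodes[np.argmin(np.where(cand, to_sink, np.inf))]
    return None

results = [hops_to_sink() for _ in range(200)]
delivered = [h for h in results if h is not None]
print(f"delivered {len(delivered)}/200, mean hop count: {np.mean(delivered):.1f}")
```

The empirical hop-advancement statistics collected this way are the quantities the thesis's single-hop and multihop expressions are checked against.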