31

Resampling in particle filters

Hol, Jeroen D. January 2004
In this report a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced in order to understand and explain the differences between the resampling algorithms, which facilitates a comparison based on resampling quality and on computational complexity. The theoretical results are verified using extensive Monte Carlo simulations. It is found that systematic resampling is favourable in both resampling quality and computational complexity.
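The systematic scheme the abstract singles out is straightforward to implement. The sketch below is an illustrative Python version (not code from the report): a single uniform draw generates N evenly spaced pointers into the cumulative weight distribution, which is what makes the scheme O(N) and low-variance.

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: one uniform draw, N evenly spaced pointers.

    Returns the indices of the particles to keep (with repetition).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)
    # N evenly spaced positions sharing a single random offset in [0, 1/N)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

# example: weights concentrated on two particles
w = np.array([0.05, 0.05, 0.4, 0.4, 0.1])
idx = systematic_resample(w, rng=np.random.default_rng(0))
print(idx)  # the high-weight particles (2 and 3) are duplicated
```

Because the pointers are evenly spaced, each particle's offspring count deviates from N times its weight by less than one, which is the low-variance property the report's quality comparison measures.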
32

General Adaptive Monte Carlo Bayesian Image Denoising

Zhang, Wen January 2010
Image noise reduction, or denoising, is an active area of research, although many of the techniques cited in the literature mainly target additive white noise. With an emphasis on signal-dependent noise, this thesis presents the General Adaptive Monte Carlo Bayesian Image Denoising (GAMBID) algorithm, a model-free approach based on random sampling. Testing is conducted on synthetic images with two different signal-dependent noise types as well as on real synthetic aperture radar and ultrasound images. Results show that GAMBID can achieve state-of-the-art performance but suffers from some limitations in dealing with textures and fine low-contrast features. These aspects can be addressed in future iterations when GAMBID is expanded into a versatile denoising framework.
33

Geometric Optimization of Solar Concentrating Collectors using Quasi-Monte Carlo Simulation

Marston, Andrew James January 2010
This thesis is a study of the geometric design of solar concentrating collectors. In this work, a numerical optimization methodology was developed and applied to various problems in linear solar concentrator design, in order to examine overall optimization success as well as the effect of various strategies for improving computational efficiency. Optimization is performed with the goal of identifying the concentrator geometry that results in the greatest fraction of incoming solar radiation absorbed at the receiver surface, for a given collector configuration. Surfaces are represented parametrically in two dimensions, and objective function evaluations are performed using various Monte Carlo ray-tracing techniques. Design optimization is performed using a gradient-based search scheme, with the gradient approximated through finite-difference estimation and updates based on the direction of steepest descent. The developed geometric optimization methodology performed with mixed success on the given test problems. In every case a significant improvement in performance was achieved over the initial design guess; however, in certain cases the quality of the identified optimal geometry depended on the quality of the initial guess. It was found that using randomized quasi-Monte Carlo instead of traditional Monte Carlo reduces the overall time to converge significantly, typically by a factor of four to six for problems assuming perfect optics and by a factor of about 2.5 for problems assuming realistic optical properties. It was concluded that the application of numerical optimization to the design of solar concentrating collectors merits additional research, especially given the improvements possible through quasi-Monte Carlo techniques.
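The quasi-Monte Carlo speed-up described above comes from replacing pseudorandom samples with a low-discrepancy point set. The following sketch, unrelated to the thesis code, compares a Halton point set (van der Corput sequences in coprime bases 2 and 3) against pseudorandom sampling on a simple integration task; the target function and sample size are assumptions chosen purely for demonstration.

```python
import numpy as np

def van_der_corput(n, base):
    """First n terms of the van der Corput low-discrepancy sequence in `base`."""
    seq = np.zeros(n)
    for i in range(n):
        f, k = 1.0, i + 1
        while k > 0:
            f /= base
            seq[i] += f * (k % base)
            k //= base
    return seq

def estimate_pi(xs, ys):
    """Estimate pi from points in the unit square via the quarter-circle area."""
    inside = xs ** 2 + ys ** 2 <= 1.0
    return 4.0 * inside.mean()

n = 4096
# Halton points: van der Corput sequences in coprime bases 2 and 3
qmc_est = estimate_pi(van_der_corput(n, 2), van_der_corput(n, 3))
rng = np.random.default_rng(1)
mc_est = estimate_pi(rng.random(n), rng.random(n))
print(abs(qmc_est - np.pi), abs(mc_est - np.pi))
```

Plain Monte Carlo error shrinks like n^(-1/2), while low-discrepancy sets approach n^(-1) (up to logarithmic factors) for smooth enough integrands, which is consistent with the several-fold convergence speed-ups the abstract reports.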
34

Online Learning of Non-Stationary Networks, with Application to Financial Data

Hongo, Yasunori January 2012
In this paper, a new learning algorithm for non-stationary Dynamic Bayesian Networks (DBNs) is proposed. Although a number of effective learning algorithms for non-stationary DBNs have previously been proposed and applied in signal processing and computational biology, those algorithms are batch learning algorithms that cannot be applied to online time-series data. We therefore propose a learning algorithm based on a particle filtering approach that can be applied to online time-series data. To evaluate our algorithm, we apply it to a simulated data set and a real-world financial data set. The results on the simulated data set show that our algorithm makes accurate estimates and detects changes. The results on the real-world financial data set exhibit several features suggested in previous research, which also implies the effectiveness of our algorithm.
35

A Comparative Evaluation Of Conventional And Particle Filter Based Radar Target Tracking

Yildirim, Berkin 01 November 2007
In this thesis the radar target tracking problem is studied in a Bayesian estimation framework. Traditionally, linear or linearized models, where the uncertainty in the system and measurement models is typically represented by Gaussian densities, are used in this area, so classical sub-optimal Bayesian methods based on linearized Kalman filters can be applied. Sequential Monte Carlo methods, i.e. particle filters, make it possible to utilize the inherent non-linear state relations and non-Gaussian noise models. Given sufficient computational power, the particle filter can provide better results than Kalman-filter-based methods in many cases. A survey of the relevant radar tracking literature is presented, covering aspects such as estimation and target modeling, and particle filtering algorithms are presented for various estimation applications related to target tracking.
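A minimal sketch of the bootstrap particle filter the abstract refers to, applied here to a standard scalar benchmark model rather than the radar models studied in the thesis; the model, noise levels, and particle count are illustrative assumptions.

```python
import numpy as np

def bootstrap_pf(ys, n_particles=500, seed=0):
    """Bootstrap particle filter for the classic scalar benchmark model
        x_t = 0.5 x_{t-1} + 25 x_{t-1} / (1 + x_{t-1}^2) + v_t,  v_t ~ N(0, 10)
        y_t = x_t^2 / 20 + e_t,                                  e_t ~ N(0, 1)
    Returns the posterior-mean state estimate at each time step."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 2.0, n_particles)  # initial particle cloud
    estimates = []
    for y in ys:
        # propagate particles through the nonlinear state transition
        x = 0.5 * x + 25.0 * x / (1.0 + x ** 2) + rng.normal(0.0, np.sqrt(10.0), n_particles)
        # weight by the Gaussian measurement likelihood
        w = np.exp(-0.5 * (y - x ** 2 / 20.0) ** 2) + 1e-300  # guard against underflow
        w /= w.sum()
        estimates.append(np.sum(w * x))
        # systematic resampling
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        cs = np.cumsum(w)
        cs[-1] = 1.0  # guard against round-off
        x = x[np.searchsorted(cs, positions)]
    return np.array(estimates)

# simulate the model, then filter the noisy observations
rng = np.random.default_rng(42)
x_true, ys = 0.1, []
for _ in range(50):
    x_true = 0.5 * x_true + 25.0 * x_true / (1.0 + x_true ** 2) + rng.normal(0.0, np.sqrt(10.0))
    ys.append(x_true ** 2 / 20.0 + rng.normal(0.0, 1.0))
est = bootstrap_pf(np.array(ys))
print(est[:5])
```

The squared measurement makes the posterior multimodal, which is exactly the regime where the abstract notes the particle filter can outperform linearized Kalman approaches.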
36

Anomalous diffusion and random walks on random fractals

Ngoc Anh, Do Hoang 08 March 2010
The purpose of this research is to investigate properties of diffusion processes in porous media. Porous media are modelled by random Sierpinski carpets; each carpet is constructed by mixing two different generators with the same linear size. Diffusion on porous media is studied by performing random walks on random Sierpinski carpets and is characterized by the random walk dimension $d_w$. In the first part of this work we study $d_w$ as a function of the ratio of constituents in a mixture. The simulation results show that the resulting $d_w$ can be the same as, higher than, or lower than the $d_w$ of carpets made from a single constituent generator. In the second part, we discuss the influence of static external fields on the behavior of diffusion. The biased random walk is used to model these phenomena and we report on many simulations with different field strengths and field directions. The results show that one structural feature of Sierpinski carpets, called traps, can have a strong influence on the observed diffusion properties. In the third part, we investigate diffusion under the influence of external fields which change direction back and forth after a certain duration. The results show a strong dependence on the period of oscillation, the field strength and structural properties of the carpet.
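The random walk dimension $d_w$ mentioned above can be estimated numerically from the mean-square displacement, which scales as $t^{2/d_w}$. The sketch below uses a deterministic Sierpinski carpet built from the classical 3x3 generator (centre removed) rather than the random mixed-generator carpets of the thesis; the walker count, step count, and fit window are illustrative assumptions.

```python
import numpy as np

def carpet_mask(level):
    """Occupancy grid of the classical Sierpinski carpet (3x3 generator, centre removed)."""
    mask = np.ones((1, 1), dtype=bool)
    for _ in range(level):
        block = np.ones((3, 3), dtype=bool)
        block[1, 1] = False
        mask = np.kron(mask, block)
    return mask

def walk_msd(mask, n_walkers=200, n_steps=2000, seed=0):
    """Mean-square displacement of random walkers on the carpet.
    A step onto a removed site (or off the grid) is rejected; the walker stays put."""
    rng = np.random.default_rng(seed)
    size = mask.shape[0]
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    pos = np.zeros((n_walkers, 2), dtype=int)  # start at an occupied corner site
    start = pos.copy()
    msd = np.zeros(n_steps)
    for t in range(n_steps):
        new = pos + moves[rng.integers(0, 4, n_walkers)]
        ok = (new >= 0).all(axis=1) & (new < size).all(axis=1)
        ok &= mask[new[:, 0] % size, new[:, 1] % size]  # modulo only to keep indexing safe
        pos = np.where(ok[:, None], new, pos)
        msd[t] = ((pos - start) ** 2).sum(axis=1).mean()
    return msd

mask = carpet_mask(4)  # 81 x 81 grid
msd = walk_msd(mask)
# MSD grows like t^(2/d_w); fit the exponent on the tail of the trajectory
ts = np.arange(200, 2000)
slope = np.polyfit(np.log(ts), np.log(msd[ts]), 1)[0]
print("estimated d_w:", 2.0 / slope)
```

On an unobstructed lattice the fitted exponent would give $d_w = 2$ (normal diffusion); the holes of the carpet slow the walkers down and push $d_w$ above 2, which is the anomalous diffusion the thesis quantifies.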
37

Simulating the performance of dual layer LSO-LuYAP phoswich PET detectors using GATE Monte Carlo simulation platform

Μπερτσέκας, Νίκος 22 December 2008
39

Monte Carlo EM methods and particle approximations: application to the calibration of a stochastic volatility model.

09 December 2013
This thesis pursues a twofold aim in the joint use of sequential Monte Carlo (SMC) methods and the Expectation-Maximization (EM) algorithm for hidden Markov models whose unobserved component has a Markov dependence structure of order greater than 1. We begin with a concise exposition of the theoretical foundations of the two statistical concepts in Chapters 1 and 2, which are devoted to them. We then turn, in Chapter 3, to the simultaneous use of the two concepts in the usual setting where the dependence structure is of order 1. The contribution of SMC methods in this work lies in their ability to efficiently approximate bounded conditional functionals, in particular filtering and smoothing quantities, in a non-linear, non-Gaussian setting. The EM algorithm, for its part, is motivated by the presence of both observable and unobservable (or partially observed) variables in hidden Markov models, and in particular in the stochastic volatility models studied. After presenting the EM algorithm and SMC methods, together with some of their properties, in Chapters 1 and 2 respectively, we illustrate these two statistical tools through the calibration of a stochastic volatility model. This application is carried out for exchange rates as well as for several stock indices in Chapter 3. We conclude that chapter with a slight departure from the canonical stochastic volatility model and with Monte Carlo simulations of the resulting model. Finally, in Chapters 4 and 5, we provide the theoretical and practical groundwork for extending sequential Monte Carlo methods, in particular particle filtering and smoothing, to models with higher-order Markov structure. As an illustration, we give the example of a degenerate stochastic volatility model, an approximation of which exhibits such a dependence property.
40

MCMC Estimation of Classical and Dynamic Switching and Mixture Models

Frühwirth-Schnatter, Sylvia January 1998
In the present paper we discuss Bayesian estimation of a very general model class where the distribution of the observations is assumed to depend on a latent mixture or switching variable taking values in a discrete state space. This model class covers e.g. finite mixture modelling, Markov switching autoregressive modelling and dynamic linear models with switching. Joint Bayesian estimation of all latent variables, model parameters and parameters determining the probability law of the switching variable is carried out by a new Markov chain Monte Carlo method called permutation sampling. Estimation of switching and mixture models is known to face identifiability problems, as switching and mixture models are identifiable only up to permutations of the indices of the states. For a Bayesian analysis the posterior has to be constrained in such a way that the identifiability constraints are fulfilled. The permutation sampler is designed to sample efficiently from the constrained posterior, by first sampling from the unconstrained posterior - which often can be done in a convenient multimove manner - and then applying a suitable permutation if the identifiability constraint is violated. We present simple conditions on the prior which ensure that this method is a valid Markov chain Monte Carlo method (that is, invariance, irreducibility and aperiodicity hold). Three case studies are presented: finite mixture modelling of fetal lamb data, Markov switching autoregressive modelling of U.S. quarterly real GDP data, and modelling the U.S./U.K. real exchange rate by a dynamic linear model with Markov switching heteroscedasticity. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
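The permutation sampler's key step - sampling from the unconstrained posterior, then permuting labels whenever the identifiability constraint is violated - can be illustrated on a deliberately simplified two-component Gaussian mixture. The known unit variances, fixed equal weights, and flat prior on the means are simplifying assumptions for this sketch, not the paper's general setup.

```python
import numpy as np

def gibbs_mixture(y, n_iter=2000, seed=0):
    """Toy Gibbs sampler for a two-component Gaussian mixture with known unit
    variances, fixed weights 0.5/0.5 and a flat prior on the two means.
    After each sweep, a permutation step enforces the constraint mu[0] < mu[1]."""
    rng = np.random.default_rng(seed)
    mu = np.array([y.min(), y.max()], dtype=float)
    draws = np.empty((n_iter, 2))
    for it in range(n_iter):
        # sample allocations z given the means (equal weights cancel out)
        logp = -0.5 * (y[:, None] - mu[None, :]) ** 2
        d = np.clip(logp[:, 0] - logp[:, 1], -500.0, 500.0)
        p1 = 1.0 / (1.0 + np.exp(d))  # P(z = 1 | y, mu)
        z = (rng.random(len(y)) < p1).astype(int)
        # sample the means given the allocations (conjugate update, flat prior)
        for k in (0, 1):
            yk = y[z == k]
            if len(yk):
                mu[k] = rng.normal(yk.mean(), 1.0 / np.sqrt(len(yk)))
        # permutation step: relabel if the identifiability constraint is violated
        if mu[0] > mu[1]:
            mu = mu[::-1].copy()
        draws[it] = mu
    return draws

# two well-separated components
rng = np.random.default_rng(7)
y = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])
draws = gibbs_mixture(y)
print(draws[500:].mean(axis=0))  # close to the true means (-2, 3)
```

Without the permutation step the chain could wander between the two symmetric posterior modes (label switching), making posterior means of the individual components meaningless; the swap projects every draw onto the constrained posterior, as the paper describes.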
