About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world.
21

Bulk electric system reliability simulation and application

Wangdee, Wijarn 19 December 2005 (has links)
Bulk electric system reliability analysis is an important activity in both vertically integrated and unbundled electric power utilities. Competition and uncertainty in the new deregulated electric utility industry are serious concerns. New planning criteria with broader engineering consideration of transmission access and consistent risk assessment must be explicitly addressed. Modern developments in high-speed computing facilities now permit the realistic utilization of the sequential Monte Carlo simulation technique in practical bulk electric system reliability assessment, resulting in a more complete understanding of bulk electric system risks and associated uncertainties. Two significant advantages of sequential simulation are the ability to obtain accurate frequency and duration indices, and the opportunity to synthesize reliability index probability distributions which describe the annual index variability.

This research work introduces the concept of applying reliability index probability distributions to assess bulk electric system risk. Bulk electric system reliability performance index probability distributions are used as integral elements in a performance based regulation (PBR) mechanism. An appreciation of the annual variability of the reliability performance indices can assist power engineers and risk managers to manage and control future potential risks under a PBR reward/penalty structure. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the system well-being of bulk electric systems and to evaluate the likelihood, not only of entering a complete failure state, but also of being very close to trouble. The system well-being concept presented in this thesis is a probabilistic framework that incorporates the accepted deterministic N-1 security criterion and provides valuable information on the degree of system vulnerability under a particular system condition, using a quantitative interpretation of the degree of system security and insecurity. An overall reliability analysis framework considering both adequacy and security perspectives is proposed using system well-being analysis and traditional adequacy assessment. The system planning process using combined adequacy and security considerations offers an additional reliability-based dimension. Sequential Monte Carlo simulation is also ideally suited to the analysis of intermittent generating resources such as wind energy conversion systems (WECS), as its framework can incorporate the chronological characteristics of wind. The reliability impacts of wind power in a bulk electric system are examined in this thesis. Transmission reinforcement planning associated with large-scale WECS and the utilization of reliability cost/worth analysis in the examination of reinforcement alternatives are also illustrated.
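The chronological simulation described in this abstract can be illustrated with a minimal sketch. All system data below (unit capacities, failure and repair rates, the constant load) are invented for illustration and are not taken from the thesis; the point is how sequential Monte Carlo yields a probability distribution of an annual reliability index rather than a single average value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-unit system: (capacity MW, failure rate /yr, repair rate /yr).
units = [(50, 4.0, 50.0), (50, 4.0, 50.0), (60, 5.0, 40.0)]
load = 100.0                 # constant demand (MW), a deliberate simplification
years, hours = 500, 8760

annual_lold = []             # loss-of-load duration (h) in each simulated year
for _ in range(years):
    avail = np.ones((len(units), hours), dtype=bool)
    for i, (_, lam, mu) in enumerate(units):
        t, up = 0.0, True    # chronological up/down cycles, exponential durations
        while t < hours:
            dur = rng.exponential(8760.0 / (lam if up else mu))
            if not up:
                avail[i, int(t):min(int(t + dur), hours)] = False
            t, up = t + dur, not up
    cap = sum(c * avail[i] for i, (c, _, _) in enumerate(units))
    annual_lold.append(int(np.sum(cap < load)))

print(f"mean LOLD: {np.mean(annual_lold):.1f} h/yr")
print(f"95th percentile: {np.percentile(annual_lold, 95):.1f} h/yr")
```

The list `annual_lold` is exactly the kind of synthesized index distribution the abstract refers to: its spread, not just its mean, is what a PBR reward/penalty analysis would examine.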
22

Structural Estimation Using Sequential Monte Carlo Methods

Chen, Hao January 2011 (has links)
This dissertation aims to introduce a new sequential Monte Carlo (SMC) based estimation framework for structural models used in macroeconomics and industrial organization. Current Markov chain Monte Carlo (MCMC) estimation methods for structural models suffer from slow Markov chain convergence, which means that the parameter and state spaces of interest might not be properly explored unless huge numbers of samples are simulated. This can lead to insurmountable computational burdens for the estimation of structural models that are expensive to solve. In contrast, SMC methods rely on the principle of sequential importance sampling to jointly evolve simulated particles, thus bypassing the dependence on Markov chain convergence altogether. This dissertation explores the feasibility of, and the potential benefits to, estimating structural models using SMC based methods.

Chapter 1 casts the structural estimation problem in the form of inference for hidden Markov models and demonstrates it with a simple growth model.

Chapter 2 presents the key ingredients, both conceptual and theoretical, of successful SMC parameter estimation strategies in the context of structural economic models.

Chapter 3, based on Chen, Petralia and Lopes (2010), develops SMC estimation methods for dynamic stochastic general equilibrium (DSGE) models. SMC algorithms allow simultaneous filtering of time-varying state vectors and estimation of fixed parameters. We first establish the empirical feasibility of the full SMC approach by comparing estimation results from MCMC batch estimation and SMC on-line estimation on a simple neoclassical growth model. We then estimate a large-scale DSGE model for the Euro area developed in Smets and Wouters (2003) with a full SMC approach, and revisit the ongoing debate on the merits of reduced-form versus structural models in macroeconomics by performing sequential model assessment between the DSGE model and various VAR/BVAR models.

Chapter 4 proposes an SMC estimation procedure and shows that it readily applies to the estimation of dynamic discrete games with serially correlated endogenous state variables. I apply this estimation procedure to a dynamic oligopolistic game of entry using data from the generic pharmaceutical industry, and demonstrate that the proposed SMC method can potentially better explore the parameter posterior space while being more computationally efficient than MCMC estimation. In addition, I show how the unobserved endogenous cost paths can be recovered using particle smoothing, both with and without parameter uncertainty. Parameter estimates obtained using this SMC based method largely concur with earlier findings that the spillover effect from market entry is significant and plays an important role in the generic drug industry, but suggest it might not be as high as previously thought when full model uncertainty is taken into account during estimation. / Dissertation
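The simultaneous filtering of states and estimation of fixed parameters mentioned in this abstract can be sketched, very roughly, with a particle filter that gives the parameter particles small artificial dynamics ("jitter") — a crude stand-in for the more careful SMC schemes the dissertation develops. The toy AR(1) model and every constant below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: x_t = 0.8 x_{t-1} + w_t, y_t = x_t + v_t, w ~ N(0,1), v ~ N(0,0.25).
T, phi_true, sv = 300, 0.8, 0.5
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.normal()
y = x + sv * rng.normal(size=T)

# Particle filter over the joint (state, parameter) space; parameter
# particles receive a small jitter to avoid degeneracy under resampling.
N = 5000
phi = rng.uniform(-1.0, 1.0, N)      # parameter particles
xs = rng.normal(0.0, 1.0, N)         # state particles
for t in range(T):
    phi += rng.normal(0.0, 0.005, N)
    xs = phi * xs + rng.normal(0.0, 1.0, N)
    logw = -0.5 * ((y[t] - xs) / sv) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, N, p=w)      # multinomial resampling each step
    phi, xs = phi[idx], xs[idx]

print(f"posterior mean of phi: {phi.mean():.3f}  (true value {phi_true})")
```

The cloud `phi` approximates the parameter posterior on-line, without any Markov chain burn-in — the property the dissertation exploits for expensive-to-solve structural models.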
23

A Comparative Evaluation Of Conventional And Particle Filter Based Radar Target Tracking

Yildirim, Berkin 01 November 2007 (has links) (PDF)
In this thesis, the radar target tracking problem is studied in a Bayesian estimation framework. Traditionally, linear or linearized models, in which the uncertainty in the system and measurement models is typically represented by Gaussian densities, are used in this area; classical sub-optimal Bayesian methods based on linearized Kalman filters can therefore be applied. Sequential Monte Carlo methods, i.e. particle filters, make it possible to utilize inherently non-linear state relations and non-Gaussian noise models. Given sufficient computational power, the particle filter can provide better results than Kalman filter based methods in many cases. A survey of the relevant radar tracking literature is presented, covering aspects such as estimation and target modeling, and particle filtering algorithms are demonstrated in various target-tracking-related estimation applications.
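A bootstrap particle filter of the kind this thesis compares against Kalman-based methods can be sketched on a toy tracking problem: a 2-D constant-velocity target observed through nonlinear range/bearing measurements. All model parameters here are invented for illustration, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

# 2-D constant-velocity target, range/bearing measurements (invented parameters).
dt, T, N = 1.0, 50, 2000
F = np.array([[1., 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
q, sr, sb = 0.05, 1.0, 0.02          # process and measurement noise std devs

# Ground-truth trajectory and noisy range/bearing measurements
x, truths = np.array([50.0, 50.0, 1.0, 0.5]), []
for _ in range(T):
    x = F @ x + np.r_[0.0, 0.0, q * rng.normal(size=2)]
    truths.append(x)
meas = [(np.hypot(s[0], s[1]) + sr * rng.normal(),
         np.arctan2(s[1], s[0]) + sb * rng.normal()) for s in truths]

# Bootstrap particle filter: propagate, weight by likelihood, resample
p = truths[0] + rng.normal(0, [5.0, 5.0, 0.5, 0.5], (N, 4))
for z_r, z_b in meas:
    p = p @ F.T
    p[:, 2:] += q * rng.normal(size=(N, 2))
    logw = (-0.5 * ((z_r - np.hypot(p[:, 0], p[:, 1])) / sr) ** 2
            - 0.5 * ((z_b - np.arctan2(p[:, 1], p[:, 0])) / sb) ** 2)
    w = np.exp(logw - logw.max())
    p = p[rng.choice(N, N, p=w / w.sum())]

est = p[:, :2].mean(axis=0)
print("estimated position:", np.round(est, 1), " truth:", np.round(truths[-1][:2], 1))
```

Because the range/bearing likelihood is used directly, no linearization of the measurement model is needed — the advantage over the extended Kalman filter that the thesis evaluates.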
24

Acoustic Sound Source Localisation and Tracking: in Indoor Environments

Johansson, Anders January 2008 (has links)
With advances in micro-electronic complexity and fabrication, sophisticated algorithms for source localisation and tracking can now be deployed in cost-sensitive appliances for both consumer and commercial markets. As a result, such algorithms are becoming ubiquitous elements of contemporary communication, robotics and surveillance systems. Two of the main requirements of acoustic localisation and tracking algorithms are robustness to acoustic disturbances (to maximise localisation accuracy) and low computational complexity (to minimise power dissipation and hardware cost). The research presented in this thesis covers advances in both robustness and computational complexity for acoustic source localisation and tracking algorithms. This thesis also presents advances in the modelling of sound propagation in indoor environments, a key to the development and evaluation of acoustic localisation and tracking algorithms.

As an advance in the field of tracking, this thesis presents a new method for tracking human speakers in which the problem of the discontinuous nature of human speech is addressed using a new state-space filter based algorithm that incorporates a voice activity detector. The algorithm is shown to achieve superior tracking performance compared to traditional approaches. Furthermore, the algorithm is implemented in a real-time system using a method which yields a low computational complexity. Additionally, a new method is presented for optimising the parameters of the dynamics model used in a state-space filter. The method features an evolution strategy optimisation algorithm to identify the optimum dynamics model parameters. Results show that the algorithm is capable of real-time online identification of optimum parameters for different types of dynamics models without access to ground-truth data.

Finally, two new localisation algorithms are developed and compared to older, well-established methods. In this context an analytical study of noise and room reverberation is conducted, considering their influence on the performance of localisation algorithms. The algorithms are implemented in a real-time system and evaluated with respect to robustness and computational complexity. Results show that the new algorithms outperform their older counterparts, both with regard to computational complexity and robustness to reverberation and background noise. The field of acoustic modelling is advanced by a new method for predicting the energy decay in impulse responses simulated using the image source method. The new method is applied to the problem of designing synthetic rooms with a defined reverberation time, and is compared to several well-established methods for reverberation time prediction. This comparison reveals that the new method is the most accurate.
25

A review on computation methods for Bayesian state-space model with case studies

Yang, Mengta, 1979- 24 November 2010 (has links)
Sequential Monte Carlo (SMC) and Forward Filtering Backward Sampling (FFBS) are the two most frequently used algorithms for Bayesian state-space model analysis. Various results regarding their applicability have been either claimed or shown. It is said that SMC excels under nonlinear, non-Gaussian conditions and is less computationally expensive. On the other hand, it has been shown that, with techniques such as grid approximation (Hore et al. 2010), FFBS-based methods do no worse and, though they can still be computationally expensive, provide more exact information. The purpose of this report is to compare the two methods on simulated data sets, and further to explore whether there exist clear criteria that may be used to determine a priori which method would suit a given study better.
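One illustrative way to run such a comparison is on a linear-Gaussian model, where the Kalman filter gives the exact filtering distribution and therefore a ground truth against which an SMC (particle filter) approximation can be checked. This is a sketch with invented parameters, not the report's actual experiment:

```python
import numpy as np

rng = np.random.default_rng(3)

# x_t = 0.9 x_{t-1} + w_t,  y_t = x_t + v_t,  w, v ~ N(0, 1)   (illustrative)
a, sw, sv, T, N = 0.9, 1.0, 1.0, 100, 5000
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + sw * rng.normal()
y = x + sv * rng.normal(size=T)

# Kalman filter: exact filtering means for this linear-Gaussian model
m, P, kf = 0.0, 1.0, []
for t in range(T):
    m, P = a * m, a * a * P + sw ** 2           # predict
    K = P / (P + sv ** 2)                       # update with y_t
    m, P = m + K * (y[t] - m), (1 - K) * P
    kf.append(m)

# Bootstrap particle filter: should closely agree with the Kalman answers
p, pf = rng.normal(0.0, 1.0, N), []
for t in range(T):
    p = a * p + sw * rng.normal(size=N)
    logw = -0.5 * ((y[t] - p) / sv) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    pf.append(float(np.sum(w * p)))
    p = p[rng.choice(N, N, p=w)]

err = np.max(np.abs(np.array(kf) - np.array(pf)))
print(f"max discrepancy between KF and PF filtering means: {err:.3f}")
```

In nonlinear or non-Gaussian settings no exact reference exists, which is precisely why the a priori criteria the report searches for would be valuable.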
26

Monte Carlo EM methods and particle approximations: application to the calibration of a stochastic volatility model

09 December 2013 (has links) (PDF)
This thesis pursues a dual perspective on the joint use of sequential Monte Carlo (SMC) methods and the Expectation-Maximisation (EM) algorithm in hidden Markov models whose unobserved component exhibits a Markov dependence structure of order greater than 1. We begin with a concise account of the theoretical foundations of the two statistical concepts in Chapters 1 and 2, which are devoted to them. We then turn, in Chapter 3, to their simultaneous application in the usual setting where the dependence structure is of order 1. The contribution of SMC methods in this work lies in their ability to efficiently approximate bounded conditional functionals, in particular filtering and smoothing quantities, in a nonlinear and non-Gaussian setting. The EM algorithm, for its part, is motivated by the joint presence of observable and unobservable (or partially observed) variables in hidden Markov models, and in particular in the stochastic volatility models studied. After presenting the EM algorithm and SMC methods, together with some of their properties, in Chapters 1 and 2 respectively, we illustrate these two statistical tools through the calibration of a stochastic volatility model. This application is carried out for exchange rates as well as for several stock indices in Chapter 3. We close that chapter with a slight departure from the canonical stochastic volatility model and Monte Carlo simulations of the resulting model. Finally, in Chapters 4 and 5, we provide the theoretical and practical groundwork for extending sequential Monte Carlo methods, in particular particle filtering and smoothing, to cases where the Markov structure is more pronounced. As an illustration, we give the example of a degenerate stochastic volatility model for which an approximation exhibits such a dependence property.
27

Reliability and risk analysis of post fault capacity services in smart distribution networks

Syrri, Angeliki Lydia Antonia January 2017 (has links)
Recent technological developments are bringing about substantial changes that are converting traditional distribution networks into "smart" distribution networks. In particular, it is possible to observe seamless integration of Information and Communication Technologies (ICTs), including the widespread installation of automatic equipment, smart meters, etc. The increased automation facilitates active network management, interaction between market actors and demand side participation. If we also consider the increasing penetration of distributed generation, renewables and various emerging technologies such as storage and dynamic rating, it can be argued that the capacity of distribution networks should not depend only on conventional assets. In this context, taking into account uncertain load growth and ageing infrastructure, which trigger network investments, the above-mentioned advancements could be used to improve the network design philosophy adopted so far. Hitherto, in fact, networks have been planned according to deterministic and conservative standards, being typically underutilised in order for capacity to be available during emergencies. This practice could be replaced by a corrective philosophy, in which existing infrastructure is fully unlocked for normal conditions and distributed energy resources are used for post fault capacity services. Nonetheless, to thoroughly evaluate the contribution of the resources and to properly model emergency conditions, a probabilistic analysis should be carried out which captures the stochasticity of some technologies, the randomness of faults and, thus, the risk profile of smart distribution networks.

The research work in this thesis proposes a variety of post fault capacity services to increase distribution network utilisation and to provide reliability support during emergency conditions. In particular, a demand response (DR) scheme is proposed in which DR customers are optimally disconnected by the operator during contingencies depending on their cost of interruption. Additionally, time-limited thermal ratings are used to increase network utilisation and support higher loading levels. Besides that, a collaborative operation of wind farms and electrical energy storage is proposed and evaluated, and their capacity contribution is calculated through the effective load carrying capability. Furthermore, the microgrid concept is examined, where multi-generation technologies collaborate to provide capacity services to internal customers as well as to the remaining network. Finally, a distributed software infrastructure is examined which could be effectively used to support services in smart grids. The underlying framework for the reliability analysis is based on sequential Monte Carlo simulation, capturing inter-temporal constraints of the resources (payback effects, dynamic rating, DR profile, storage remaining available capacity) and the stochasticity of electrical and ICT equipment. The comprehensive distribution network reliability analysis includes network reconfiguration, the restoration process and AC power flow calculations, supporting a full risk analysis and building the risk profile of the arising smart distribution networks. Real case studies from an ongoing project in North West England demonstrate the concepts and tools developed and provide noteworthy conclusions for network planners, including guidance for the design of DR contracts.
28

Target Discrimination Against Clutter Based on Unsupervised Clustering and Sequential Monte Carlo Tracking

January 2016 (has links)
The radar performance in detecting a target and estimating its parameters can deteriorate rapidly in the presence of high clutter, because radar measurements due to clutter returns can be falsely detected as if originating from the actual target. Various data association methods and multiple-hypothesis filtering approaches have been considered to solve this problem. Such methods, however, can be too computationally intensive for real-time radar processing. This work proposes a new approach based on unsupervised clustering of target and clutter detections before target tracking using particle filtering. In particular, Gaussian mixture modeling is first used to separate detections into two distinct Gaussian mixtures. Using eigenvector analysis, the eccentricities of the covariance matrices of the Gaussian mixtures are computed and compared to threshold values obtained a priori. The thresholding allows only target detections to be used for target tracking. Simulations demonstrate the performance of the new algorithm and compare it with using k-means for clustering instead of Gaussian mixture modeling. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2016
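The clustering-then-thresholding idea can be sketched with synthetic detections. This minimal example uses plain k-means (the baseline the thesis compares against) as a stand-in for the Gaussian mixture fit, and all geometry and threshold values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic detections: compact target cluster + elongated clutter (invented).
target = rng.multivariate_normal([10, 10], [[0.5, 0], [0, 0.5]], 80)
clutter = rng.multivariate_normal([0, 0], [[9.0, 0], [0, 0.5]], 200)
X = np.vstack([target, clutter])

# Two-cluster k-means, seeded with one point from each region for determinism
centers = X[[0, 100]].copy()
for _ in range(50):
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    lab = d.argmin(axis=1)
    centers = np.array([X[lab == k].mean(axis=0) for k in range(2)])

# Eccentricity of each cluster's covariance from its eigenvalues; a compact,
# near-circular spread (eccentricity near 1) is treated as target-like.
for k in range(2):
    lam = np.sort(np.linalg.eigvalsh(np.cov(X[lab == k].T)))
    ecc = float(np.sqrt(lam[-1] / lam[0]))
    kind = "target-like" if ecc < 2.0 else "clutter-like"  # a priori threshold
    print(f"cluster {k}: eccentricity {ecc:.2f} -> {kind}")
```

Only detections from the target-like cluster would then be passed on to the particle filter, which is the computational saving the abstract describes.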
29

Bayesian inference in plant growth models for prediction and uncertainty assessment

Chen, Yuting 27 June 2014 (has links)
Plant growth models aim to describe plant development and functional processes in interaction with the environment. They offer promising perspectives for many applications, such as yield prediction for decision support or virtual experimentation in the context of breeding. This PhD focuses on solutions to enhance plant growth model predictive capacity, with an emphasis on advanced statistical methods. Our contributions can be summarized in four parts. Firstly, from a model design perspective, the Log-Normal Allocation and Senescence (LNAS) crop model is proposed. It describes only the essential ecophysiological processes for the biomass budget in a probabilistic framework, so as to avoid identification problems and to accentuate uncertainty assessment in model prediction. Secondly, thorough research is conducted regarding model parameterization. In a Bayesian framework, both Sequential Monte Carlo (SMC) methods and Markov chain Monte Carlo (MCMC) based methods are investigated to address the parameterization issues in the context of plant growth models, which are frequently characterized by nonlinear dynamics, scarce data and a large number of parameters. In particular, when the prior distribution is non-informative, an iterative version of the SMC and MCMC methods is introduced, with the objective of putting more emphasis on the observation data while preserving the robustness of Bayesian methods; it can be regarded as a stochastic variant of an EM-type algorithm. Thirdly, a three-step data assimilation approach is proposed to address model prediction issues. The most influential parameters are first identified by global sensitivity analysis and chosen by model selection. Subsequently, the model calibration is performed with special attention paid to uncertainty assessment. The posterior distribution obtained from this estimation step is then considered as prior information for the prediction step, in which an SMC-based on-line estimation method such as Convolution Particle Filtering (CPF) is employed to perform data assimilation. Both state and parameter estimates are updated with the purpose of improving the prediction accuracy and reducing the associated uncertainty. Finally, from an application point of view, the proposed methodology is implemented and evaluated with two crop models, the LNAS model for sugar beet and the STICS model for winter wheat. Some indications are also given on the experimental design to optimize the quality of predictions. The applications to real case scenarios show encouraging predictive performances and open the way to potential tools for yield prediction in agriculture.
30

Advances in computational Bayesian statistics and the approximation of Gibbs measures

Ridgway, James 17 September 2015 (has links)
This thesis brings together several methods for computing estimators in Bayesian statistics, and several estimation approaches are considered. First, we consider a standard approach within the Bayesian paradigm, using estimators that take the form of integrals with respect to posterior distributions. We then relax the assumptions made at the modelling stage and study estimators that replicate the statistical properties of the minimizer of the theoretical classification or ranking risk, without modelling the data-generating process; this leads us to consider a Gibbs posterior. In both approaches, despite their dissimilarity, the numerical computation of the estimators requires the evaluation of high-dimensional integrals. The greater part of this thesis is devoted to the development of such computational methods in several specific contexts.
