  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Information Geometry and Model Reduction in Oscillatory and Networked Systems

Francis, Benjamin Lane 18 June 2020 (has links)
In this dissertation, I consider the problem of model reduction in both oscillatory and networked systems. Previously, the Manifold Boundary Approximation Method (MBAM) has been demonstrated as a data-driven tool for reducing the parametric complexity of so-called sloppy models. To be effective, MBAM requires the model manifold to have low curvature. I show that oscillatory models are characterized by model manifolds with high curvature in one or more directions. I propose methods for transforming the model manifolds of these models into ones with low curvature and demonstrate these methods on two test systems. I demonstrate MBAM as a tool for data-driven network reduction on a small model from power systems. I derive multiple effective networks for the model, each tailored to a specific choice of system observations. I find several important types of parameter reductions, including network reductions, which can be used in large power-systems models. Finally, I consider the problem of piecemeal reduction of large systems. When a large system is split into pieces that are to be reduced separately using MBAM, there is no guarantee that the reduced pieces will be compatible for reassembly. I propose a strategy for reducing a system piecemeal while guaranteeing that the reduced pieces remain compatible, and I demonstrate the reduction strategy on a small resistor network.
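MBAM itself cannot be reconstructed from an abstract, but the "sloppiness" it exploits is easy to illustrate: the metric on the model manifold is the Fisher information matrix (FIM), and sloppy models have FIM eigenvalues spread over many orders of magnitude. A minimal sketch, assuming unit-variance Gaussian noise and a toy two-exponential model that is not one of the systems studied in the dissertation:

```python
import numpy as np

def model(theta, t):
    # Toy sum-of-two-exponentials model, a textbook example of sloppiness.
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

def fim_eigenvalues(theta, t, h=1e-6):
    # Central-difference Jacobian of the model outputs w.r.t. the parameters;
    # for unit-variance Gaussian noise the FIM is J^T J.
    J = np.empty((len(t), len(theta)))
    for j in range(len(theta)):
        tp = np.array(theta, dtype=float); tp[j] += h
        tm = np.array(theta, dtype=float); tm[j] -= h
        J[:, j] = (model(tp, t) - model(tm, t)) / (2 * h)
    return np.linalg.eigvalsh(J.T @ J)  # ascending order

t = np.linspace(0.0, 5.0, 50)
eigs = fim_eigenvalues([1.0, 1.1], t)
# Nearly degenerate decay rates give one stiff and one very sloppy
# direction, i.e. widely separated eigenvalues.
```

The wide eigenvalue spread makes the model manifold effectively thin in some directions, which is the low-dimensional boundary structure that MBAM exploits.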
2

Investigating the estimation of the infection rate and the fraction of infections leading to death in epidemiological simulation

Gölén, Jakob January 2023 (has links)
The main goal of this project is to investigate the behavior of parameters used when modeling an epidemic. A stochastic SIHDRe (Susceptible, Infected, Hospitalized, Dead, Recovered) model is used to simulate how an epidemic evolves over time. The SIHDRe model has nine parameters, and this project focuses on the infection rate (β) and the fraction of infections leading to death (FID), with all other parameters considered known. Both parameters are time dependent. To estimate the two chosen parameters, the project uses synthetic data so that the estimates can be compared with the true parameter values. A dynamic optimization procedure inspired by Model Predictive Control is used for the estimation: given synthetic data on hospitalizations and deaths, a cost function is minimized to obtain estimates of the parameters. Only a subset of the time span, called a window, is considered for each optimization. The parameters within the window are optimized, and the window then moves forward in time by a fixed time step until the parameters have been optimized over the whole time span. To obtain error estimates for the parameters, synthetic bootstrapping is used: the optimized parameters are fed back into the model to simulate new epidemics, whose parameters are then re-estimated. The squared differences between the new estimates and the original ones yield the standard deviation of the estimated parameters. The project also discusses how the regularization parameters in the cost function, which control how much the estimated values may change from one time point to the next, are chosen so that the estimated parameters best match the true values, and it discusses end-of-data effects, i.e. the increased uncertainty towards the end of a window.
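The SIHDRe model and the thesis's exact cost function are not given in the abstract; purely as an illustration of the sliding-window idea, the following sketch uses a deterministic SI-type toy model and a grid search in place of the stochastic model and the MPC-inspired optimizer (all values and names are invented for the example):

```python
import numpy as np

def simulate(beta_seq, i0=0.01, gamma=0.1):
    # Minimal deterministic SI-with-recovery recursion; a stand-in for the
    # stochastic SIHDRe model, which is not reproduced here.
    s, i, out = 1.0 - i0, i0, []
    for beta in beta_seq:
        new_inf = beta * s * i
        s -= new_inf
        i += new_inf - gamma * i   # fixed recovery rate, assumed known
        out.append(i)
    return np.array(out)

def sliding_window_estimate(observed, window=10, step=5):
    # Moving-horizon estimation: keep already-estimated values before the
    # window fixed, grid-search a constant beta inside it, then advance.
    T = len(observed)
    grid = np.linspace(0.05, 0.6, 56)
    beta_est = np.full(T, 0.3)
    start = 0
    while start < T:
        end = min(start + window, T)
        best, best_cost = beta_est[start], np.inf
        for b in grid:
            trial = beta_est.copy()
            trial[start:end] = b
            cost = np.sum((simulate(trial)[:end] - observed[:end]) ** 2)
            if cost < best_cost:
                best, best_cost = b, cost
        beta_est[start:end] = best
        start += step
    return beta_est

true_beta = np.where(np.arange(60) < 30, 0.4, 0.15)  # step change mid-epidemic
data = simulate(true_beta)
est = sliding_window_estimate(data)
```

Each window re-fits only its own span while earlier estimates stay fixed, which is the moving-horizon structure described above.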
3

Neutral Community Ecology: inferring model parameters from species composition data with reference to tropical forests

Beeravolu Reddy, Champak 09 December 2010 (has links)
Understanding the dynamics of highly diverse communities such as tropical forests has always been a challenging task in ecology. Historically, simplified logistic models and complex niche theories have had limited success in explaining species diversity and composition in a tropical context. With the advent of neutral models, we have an original quantitative framework in terms of a sampling theory, which opens new perspectives in the field of tropical community ecology. These parsimonious models, originally developed from existing theories in population genetics, rest on a highly selective interpretation of niche theory defined as the functional equivalence of species, which has been insufficiently explored. To begin with, we review recent advances in this extremely active field and provide insights into future developments of the theory. Further on, we give a detailed account of parameter inference, which is the crucial link between theoretical models and field data. In addition, we improve on existing approaches by introducing a novel estimator for the parameter explaining the species richness found in these forests. These results are put into perspective using field data from the wet evergreen forests of the Western Ghats region of India and the tropical rain forests around the Panama Canal watershed. Our results are also rigorously tested using simulations of neutral community composition. Lastly, we provide insights into whether the inferred immigration parameters correspond to the seed dispersal distances typically found in tropical forests.
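The thesis's novel richness estimator is not given in the abstract. As background, the classic estimator inherited from population genetics ties the biodiversity parameter θ to the expected number of distinct species K in a sample of J individuals through Ewens' sampling formula, E[K] = Σ_{i=0}^{J-1} θ/(θ+i), which is monotone in θ and can therefore be inverted numerically:

```python
from math import fsum

def expected_species(theta, J):
    # Ewens' sampling formula: expected number of distinct species in a
    # sample of J individuals under the neutral model with parameter theta.
    return fsum(theta / (theta + i) for i in range(J))

def estimate_theta(K, J, lo=1e-6, hi=1e6, tol=1e-9):
    # E[K] increases monotonically in theta, so the estimate solving
    # K = E[K] can be found by simple bisection.
    while hi - lo > tol * max(1.0, lo):
        mid = (lo + hi) / 2
        if expected_species(mid, J) < K:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

A quick round trip (compute K for a known θ, then invert) recovers θ to high precision, which is a useful sanity check before applying any estimator to field data.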
4

Efficient Parameter Inference for Stochastic Chemical Kinetics

Paul, Debdas January 2014 (has links)
Parameter inference for stochastic systems is one of the fundamental classical problems in computational systems biology. The problem becomes challenging, and often analytically intractable, when the number of uncertain parameters is large. In this scenario, Markov chain Monte Carlo (MCMC) algorithms have proved highly effective. For a stochastic system, the most accurate description of the kinetics is given by the Chemical Master Equation (CME). Unfortunately, the analytical solution of the CME is often intractable even for a modest number of chemically reacting species, due to its super-exponential state-space complexity. As a remedy, the Stochastic Simulation Algorithm (SSA), a Monte Carlo approach, was introduced to simulate the chemical process defined by the CME. The SSA is an exact stochastic method for simulating the CME, but it also suffers from high time complexity because every reaction event is simulated. The computation of the likelihood function (based on the exact CME) within MCMC therefore becomes expensive, which in turn makes the rejection step expensive. In this work, we introduce different approximations of the CME as a preconditioning step for the full MCMC to make rejection cheaper. The goal is to avoid expensive computations of the exact CME as far as possible. We show that, with an effective preconditioning scheme, one can save a considerable number of exact CME computations while maintaining similar convergence characteristics. Additionally, we investigate three different sampling schemes (dense sampling, longer sampling, and i.i.d. sampling) under which the convergence of MCMC using the exact CME for parameter estimation can be analyzed. We find that the i.i.d. sampling scheme achieves better convergence than dense sampling of the same process or sampling the same process for a longer time. We verify our theoretical findings on two different processes: linear birth-death and dimerization. Apart from providing a framework for parameter inference using the CME, this work also suggests reasons why the CME was, for so many years after its formulation, generally avoided as a basis for parameter estimation.
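The preconditioning schemes cannot be reconstructed from the abstract, but the SSA it builds on is standard (Gillespie's direct method); a sketch for the linear birth-death test case, with illustrative rate values:

```python
import random

def ssa_birth_death(k_birth, k_death, x0, t_end, rng):
    # Gillespie's direct method for the linear birth-death process:
    # X -> X+1 at propensity k_birth*X, X -> X-1 at propensity k_death*X.
    t, x, path = 0.0, x0, [(0.0, x0)]
    while x > 0:
        a1, a2 = k_birth * x, k_death * x
        a0 = a1 + a2
        t += rng.expovariate(a0)   # exponential waiting time to next event
        if t >= t_end:
            break                  # the state at t_end is the current x
        x += 1 if rng.random() * a0 < a1 else -1
        path.append((t, x))
    return path
```

Averaging many trajectories recovers CME moments; for this process the mean is x0·exp((k_birth − k_death)·t), which gives a quick sanity check on the simulator.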
5

A Likelihood Method to Estimate/Detect Gene Flow and a Distance Method to Estimate Species Trees in the Presence of Gene Flow

Cui, Lingfei January 2014 (has links)
No description available.
6

Statistical inference for the parameters of the Cox-Ingersoll-Ross process and the Heston process

Du Roy de Chaumaray, Marie 02 December 2016 (has links)
The Cox-Ingersoll-Ross (CIR) process and the Heston process are widely used in financial mathematics for pricing and hedging and for modeling interest rates. In this thesis, we focus on estimating their parameters from continuous-time observations. Firstly, we restrict ourselves to the most tractable situation, where the CIR process is geometrically ergodic and does not vanish. We establish a large deviations principle for the maximum likelihood estimator of the pair of dimension and drift parameters of a CIR process. We then establish a moderate deviations principle for the maximum likelihood estimator of the four parameters of a Heston process, as well as for the maximum likelihood estimator of the pair of parameters of a CIR process. In contrast to the previous literature, the parameters are estimated simultaneously. Secondly, we no longer restrict ourselves to the case where the CIR process never reaches zero, and we introduce a new weighted least squares estimator for the quadruplet of parameters of a Heston process. We establish its strong consistency and asymptotic normality, and we illustrate its good performance numerically.
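Neither the likelihood-based estimators nor their deviation principles can be sketched from the abstract alone, but the CIR process itself is easy to simulate, e.g. with a full-truncation Euler scheme (step size and parameter values below are illustrative, not from the thesis):

```python
import random, math

def simulate_cir(a, b, sigma, x0, dt, n, rng):
    # Full-truncation Euler scheme for dX = (a - b*X) dt + sigma*sqrt(X) dW.
    # Taking the positive part under the square root keeps the scheme
    # defined even if a discretization step pushes X below zero.
    x, path = x0, [x0]
    for _ in range(n):
        xp = max(x, 0.0)
        x = x + (a - b * xp) * dt + sigma * math.sqrt(xp * dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

When 2a ≥ σ² (the Feller condition) the exact process stays strictly positive, which corresponds to the non-vanishing regime the thesis treats first; the long-run mean of the process is a/b.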
7

Bayesian state estimation in partially observable Markov processes

Gorynin, Ivan 13 December 2017 (has links)
This thesis addresses the Bayesian estimation of hybrid-valued state variables in time series; the probability density function of a hybrid-valued random variable has a finite-discrete component and a continuous component. General algorithms for state estimation in partially observable Markov processes with hybrid states are introduced and compared with sequential Monte Carlo methods from both a theoretical and a practical viewpoint. The main result is that the proposed methods require significantly less processing time than the classic sequential Monte Carlo methods.
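The hybrid-state algorithms of the thesis are not specified in the abstract; the sequential Monte Carlo baseline they are compared against can be sketched as a bootstrap particle filter, shown here for a toy 1-D Gaussian random walk (model and parameter values invented for illustration):

```python
import random, math

def bootstrap_pf(obs, n_particles, q, r, rng):
    # Minimal bootstrap particle filter for the 1-D model
    #   x_t = x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in obs:
        # propagate particles through the transition model
        parts = [x + rng.gauss(0.0, math.sqrt(q)) for x in parts]
        # weight by the observation likelihood
        w = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        tot = sum(w)
        means.append(sum(wi * xi for wi, xi in zip(w, parts)) / tot)
        # multinomial resampling
        parts = rng.choices(parts, weights=w, k=n_particles)
    return means
```

Propagate, weight, estimate, resample is the basic sequential Monte Carlo loop whose per-step cost the thesis's algorithms aim to undercut.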
8

Monte Carlo-Based Identification Strategies for State-Space Models

Papež, Milan January 2019 (has links)
State-space models are extraordinarily useful in many engineering and scientific areas. Their appeal stems mainly from the fact that they provide a general tool for describing a wide range of real-world dynamical systems. However, because of this generality, the associated parameter and state inference tasks are intractable in most practical situations. This dissertation considers two particularly important classes of nonlinear and non-Gaussian state-space models: conditionally conjugate state-space models and Markov-switching nonlinear models. The key feature of these models is that, despite their intractability, they contain a tractable substructure. The intractable part requires approximation techniques, and Monte Carlo computational methods are a theoretically and practically well-established tool for this problem. The advantage of these models is that the tractable part can be exploited to increase the efficiency of Monte Carlo methods by resorting to Rao-Blackwellization. Specifically, this dissertation proposes two Rao-Blackwellized particle filters for identifying either static or time-varying parameters in conditionally conjugate state-space models. In addition, it adopts the recent particle Markov chain Monte Carlo methodology to design Rao-Blackwellized particle Gibbs kernels for state smoothing in Markov-switching nonlinear models. These kernels are then used for maximum likelihood parameter inference in the considered models. The experiments demonstrate that the proposed algorithms outperform related techniques in terms of estimation accuracy and computing time.
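Rao-Blackwellization, the central device above, can be demonstrated in miniature: whenever part of a model is analytically tractable, integrating it out can only reduce the Monte Carlo variance (by the law of total variance). A toy sketch, unrelated to the thesis's filters:

```python
import random

def mc_and_rb(n, rng):
    # Estimate E[X + Y] with X ~ N(0, 1) and Y | X ~ N(X, 1) (true value 0).
    # The crude estimator samples both coordinates; the Rao-Blackwellized
    # one replaces Y by its conditional mean E[Y | X] = X, so each sample
    # contributes E[X + Y | X] = 2X with strictly smaller variance.
    crude, rb = 0.0, 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        y = rng.gauss(x, 1.0)
        crude += x + y
        rb += 2.0 * x
    return crude / n, rb / n
```

Per sample, the crude estimator has variance Var(X + Y) = 5 while the Rao-Blackwellized one has Var(2X) = 4; in the thesis's setting the analytically integrated part is the conditionally conjugate or switching substructure rather than a single Gaussian coordinate.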
