  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Imputation of Missing Data with Application to Commodity Futures / Imputation av saknad data med tillämpning på råvaruterminer

Östlund, Simon January 2016 (has links)
In recent years additional requirements have been imposed on financial institutions, including central counterparty clearing houses (CCPs), in an attempt to quantify their exposure to different types of risk. One of these requirements results in a need to perform stress tests to assess resilience in the event of a stressed market or crisis. However, financial markets develop over time, which leads to a situation where some instruments traded today are not present at the chosen historical date because they were introduced after the considered event. Building on current routines, the main goal of this thesis is to provide a more sophisticated method to impute (fill in) missing historical data as preparatory work for stress testing. The models considered in this paper include two methods currently regarded as state-of-the-art techniques, based on maximum likelihood estimation (MLE) and multiple imputation (MI), together with a third alternative approach involving copulas. The different methods are applied to historical return data on commodity futures contracts from the Nordic energy market. Using conventional error metrics and out-of-sample log-likelihood, the conclusion is that it is very hard in general to distinguish the performance of the methods, or to draw any conclusion about how good the models are in comparison to each other. Even though the Student's t-distribution generally seems to be a more adequate assumption for the data than the normal distribution, all the models show quite poor performance at first glance. However, by analysing the conditional distributions more thoroughly, and evaluating how well each model performs when extracting certain quantile values, the performance of each method increases significantly.
By comparing the different models when imputing more extreme quantile values, it can be concluded that all methods produce satisfying results, even if the g-copula and t-copula models seem to be more robust than the respective linear models.
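The copula idea in the abstract above can be caricatured in a few lines: transform each margin to normal scores, estimate the copula correlation on jointly observed rows, impute the missing scores by their conditional mean, and map back through the empirical quantile function. The sketch below uses a Gaussian copula and synthetic data; it illustrates the general technique only, not the thesis's models, and every name and number is made up.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic "futures returns": two correlated series where the newer
# contract (column 1) is missing its early history (all values made up).
n = 500
z = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=n)
returns = np.tanh(z)          # monotone map -> non-Gaussian marginals
returns[:100, 1] = np.nan     # missing early history
obs = ~np.isnan(returns[:, 1])

def normal_scores(x):
    """Rank-based transform of a sample to standard-normal scores."""
    ranks = np.argsort(np.argsort(x)) + 1.0
    return norm.ppf(ranks / (len(x) + 1.0))

z0 = normal_scores(returns[:, 0])

# Copula correlation estimated on the jointly observed rows only.
rho = np.corrcoef(z0[obs], normal_scores(returns[obs, 1]))[0, 1]

# Conditional-mean imputation on the normal-score scale: E[Z1 | Z0] = rho * Z0.
z1_missing = rho * z0[~obs]

# Back-transform through the empirical quantile function of the observed margin.
imputed = np.quantile(returns[obs, 1], norm.cdf(z1_missing))

filled = returns.copy()
filled[~obs, 1] = imputed
```

A fuller treatment would draw from the conditional distribution (or several draws, as in multiple imputation) rather than using its mean, which understates variability in the imputed tail.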
262

Reaction Time Modeling in Bayesian Cognitive Models of Sequential Decision-Making Using Markov Chain Monte Carlo Sampling

Jung, Maarten Lars 25 February 2021 (has links)
In this thesis, a new approach for generating reaction time predictions for Bayesian cognitive models of sequential decision-making is proposed. The method is based on a Markov chain Monte Carlo algorithm that, by utilizing prior distributions and likelihood functions of possible action sequences, generates predictions about the time needed to choose one of these sequences. The plausibility of the reaction time predictions produced by this algorithm was investigated for simple exemplary distributions as well as for prior distributions and likelihood functions of a Bayesian model of habit learning. Simulations showed that the reaction time distributions generated by the Markov chain Monte Carlo sampler exhibit key characteristics of reaction time distributions typically observed in decision-making tasks. The introduced method can be easily applied to various Bayesian models for decision-making tasks with any number of choice alternatives. It thus provides the means to derive reaction time predictions for models where this has not been possible before.
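As a toy illustration of the idea that sampling time can itself serve as a reaction-time prediction, the sketch below runs a Metropolis sampler over a handful of discrete "action sequences" and records how many iterations pass before one alternative dominates. The stopping rule and all numbers are invented for illustration; the thesis's actual algorithm differs.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rt(post, threshold=50, max_iter=10_000):
    """Metropolis sampler over discrete alternatives. The 'reaction time' is
    the number of iterations until the most-visited alternative leads its
    strongest competitor by `threshold` visits (an invented stopping rule)."""
    k = len(post)
    counts = np.zeros(k, dtype=int)
    state = rng.integers(k)
    for t in range(1, max_iter + 1):
        prop = rng.integers(k)                    # symmetric uniform proposal
        if rng.random() < min(1.0, post[prop] / post[state]):
            state = prop                          # Metropolis acceptance
        counts[state] += 1
        srt = np.sort(counts)
        if srt[-1] - srt[-2] >= threshold:
            return t, int(np.argmax(counts))
    return max_iter, int(np.argmax(counts))

# Unnormalised posterior over three candidate action sequences (made up).
post = np.array([0.6, 0.25, 0.15])
rts = np.array([simulate_rt(post)[0] for _ in range(500)])
```

Stopping times of this kind are bounded below but have a long right tail, which is the qualitative shape of empirical reaction-time distributions the abstract refers to.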
263

Multiscale Methods in Image Modelling and Image Processing

Alexander, Simon January 2005 (has links)
The field of modelling and processing of 'images' has fairly recently become important, even crucial, to areas of science, medicine, and engineering. The inevitable explosion of imaging modalities and approaches stemming from this fact has become a rich source of mathematical applications.

'Imaging' is quite broad, and suffers somewhat from this broadness. The general question of 'what is an image?' or perhaps 'what is a natural image?' turns out to be difficult to address. To make real headway one may need to strongly constrain the class of images being considered, as will be done in part of this thesis. On the other hand there are general principles that can guide research in many areas. One such principle considered is the assertion that (classes of) images have multiscale relationships, whether at a pixel level, between features, or other variants. There are both practical (in terms of computational complexity) and more philosophical reasons (mimicking the human visual system, for example) that suggest looking at such methods. Looking at scaling relationships may also have the advantage of opening a problem up to many mathematical tools.

This thesis will detail two investigations into multiscale relationships, in quite different areas. One will involve Iterated Function Systems (IFS), and the other a stochastic approach to reconstruction of binary images (binary phase descriptions of porous media). The use of IFS in this context, which has often been called 'fractal image coding', has been primarily viewed as an image compression technique. We will re-visit this approach, proposing it as a more general tool. Some study of the implications of that idea will be presented, along with applications inferred by the results. In the area of reconstruction of binary porous media, a novel, multiscale, hierarchical annealing approach is proposed and investigated.
264

Caractérisation de la variabilité interindividuelle de la toxicocinétique des composés organiques volatils

Nong, Andy January 2006 (has links)
Thesis digitized by the Direction des bibliothèques, Université de Montréal.
265

Nonparametric Mixture Modeling on Constrained Spaces

Putu Ayu G Sudyanti (7038110) 16 August 2019 (has links)
Mixture modeling is a classical unsupervised learning method with applications to clustering and density estimation. This dissertation studies two challenges in modeling data with mixture models. The first part addresses problems that arise when modeling observations lying on constrained spaces, such as the boundaries of a city or a landmass. It is often desirable to model such data through the use of mixture models, especially nonparametric mixture models. Specifying the component distributions and evaluating normalization constants raise modeling and computational challenges. In particular, the likelihood forms an intractable quantity, and Bayesian inference over the parameters of these models results in posterior distributions that are doubly-intractable. We address this problem via a model based on rejection sampling and an algorithm based on data augmentation. Our approach is to specify such models as restrictions of standard, unconstrained distributions to the constraint set, with measurements from the model simulated by a rejection sampling algorithm. Posterior inference proceeds by Markov chain Monte Carlo, first imputing the rejected samples given mixture parameters and then resampling parameters given all samples. We study two modeling approaches: mixtures of truncated Gaussians and truncated mixtures of Gaussians, along with Markov chain Monte Carlo sampling algorithms for both. We also discuss variations of the models, as well as approximations to improve mixing, reduce computational cost, and lower variance.

The second part of this dissertation explores the application of mixture models to estimate contamination rates in matched tumor and normal samples. Bulk sequencing of tumor samples is prone to contamination from normal cells, which leads to difficulties and inaccuracies in determining the mutational landscape of the cancer genome.
In such instances, a matched normal sample from the same patient can be used to act as a control for germline mutations. Probabilistic models are popularly used in this context due to their flexibility. We propose a hierarchical Bayesian model to denoise the contamination in such data and detect somatic mutations in tumor cell populations. We explore the use of a Dirichlet prior on the contamination level and extend this to a framework of Dirichlet processes. We discuss MCMC schemes to sample from the joint posterior distribution and evaluate its performance on both synthetic experiments and publicly available data.
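The "restriction of a standard distribution to the constraint set, simulated by rejection sampling" idea in the abstract above is easy to sketch: propose from an unconstrained Gaussian mixture and keep only draws that land inside the constraint region. The unit disk and all parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def in_unit_disk(x):
    """Illustrative constraint set: the closed unit disk."""
    return (x ** 2).sum(axis=-1) <= 1.0

def sample_constrained_mixture(weights, means, cov, n):
    """Draw n points from a Gaussian mixture restricted to the constraint
    set by rejection: propose from the unconstrained mixture and keep the
    points satisfying the constraint (the 'truncated mixture' view)."""
    out = []
    while len(out) < n:
        k = rng.choice(len(weights), p=weights)   # pick a component
        x = rng.multivariate_normal(means[k], cov)
        if in_unit_disk(x):                       # accept only inside the set
            out.append(x)
    return np.array(out)

weights = [0.5, 0.5]
means = [np.array([0.5, 0.0]), np.array([-0.5, 0.0])]
cov = 0.2 * np.eye(2)
samples = sample_constrained_mixture(weights, means, cov, 1000)
```

In the Bayesian inference the thesis describes, the rejected proposals are not discarded but imputed as latent variables, which is what makes the doubly-intractable posterior tractable by data augmentation.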
266

Cross-sectional dependence model specifications in a static trade panel data setting

LeSage, James, Fischer, Manfred M. 25 March 2019 (has links) (PDF)
The focus is on cross-sectional dependence in panel trade flow models. We propose alternative specifications for modeling time-invariant factors such as socio-cultural indicator variables, e.g., common language and currency. These are typically treated as a source of heterogeneity to be eliminated using fixed effects transformations, but we find evidence of cross-sectional dependence after eliminating country-specific and time-specific effects. These findings suggest the use of alternative simultaneous dependence model specifications that accommodate cross-sectional dependence, which we set forth along with Bayesian estimation methods. Ignoring cross-sectional dependence implies biased estimates from panel trade flow models that rely on fixed effects. / Series: Working Papers in Regional Science
267

Modèles bayésiens hiérarchiques pour le traitement multi-capteur

Dobigeon, Nicolas 19 October 2007 (has links) (PDF)
To handle the mass of information collected in many applications, new processing methods are needed that exploit the "multi-sensor" character of the observed data. The subject of this thesis is the study of estimation algorithms in a multi-sensor context where several signals or images from the same application are available. This problem is of great interest since it allows estimation performance to be improved relative to an analysis carried out on each signal independently of the others. In this context we developed hierarchical Bayesian inference methods to efficiently solve problems of segmentation of multiple signals and of hyperspectral image analysis. Markov chain Monte Carlo methods then make it possible to overcome the difficulties tied to the computational complexity of these inference methods.
268

Contributions à l'apprentissage et l'inférence adaptatifs : Applications à l'ajustement d'hyperparamètres et à la physique des astroparticules

Bardenet, Rémi 19 November 2012 (has links) (PDF)
Inference and optimization algorithms generally have hyperparameters that must be tuned. This thesis addresses the automation of this tuning step and considers various methods that achieve it by learning the structure of the problem online. The first half of the thesis explores hyperparameter tuning in machine learning. After presenting and improving the generic framework of sequential model-based optimization (SMBO), we show that SMBO applies successfully to tuning the hyperparameters of deep neural networks. We then propose a collaborative tuning algorithm that mimics the memory humans retain of past experiments with the same algorithm on other data. The second half of the thesis concerns adaptive MCMC algorithms, sampling algorithms that explore often complex probability distributions by tuning their internal parameters online. To motivate their study, we first describe the Pierre Auger Observatory, a particle-physics experiment dedicated to the study of cosmic rays. We propose a first part of the Auger generative model and introduce a procedure for inferring the individual parameters of each Auger event that requires only this first model. We then observe that this model is subject to a problem known as label switching. After presenting existing solutions, we propose AMOR, the first adaptive MCMC algorithm with online relabeling that resolves label switching. We present an empirical study and theoretical consistency results for AMOR, which highlight links between relabeling and vector quantization.
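Sequential model-based optimization, central to the first half of the abstract above, alternates between fitting a cheap surrogate to the hyperparameter/loss history and evaluating the expensive objective at the surrogate's most promising point. The sketch below is a deliberately minimal caricature: it uses a quadratic surrogate in log-space instead of the Gaussian-process surrogates typically used with SMBO, and the objective and all constants are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def validation_loss(lr):
    """Stand-in for an expensive train/validate run: quadratic in log10(lr)
    with its minimum at lr = 0.1, plus a little noise (purely synthetic)."""
    return (np.log10(lr) + 1.0) ** 2 + 0.01 * rng.standard_normal()

def smbo(n_init=5, n_iter=15):
    """SMBO caricature: fit a quadratic surrogate to the history in log-space,
    then evaluate the true objective at the surrogate's grid minimiser."""
    grid = np.logspace(-4, 0, 200)
    lrs = list(10 ** rng.uniform(-4, 0, n_init))   # random initial design
    losses = [validation_loss(lr) for lr in lrs]
    for _ in range(n_iter):
        coeffs = np.polyfit(np.log10(lrs), losses, deg=2)
        cand = grid[int(np.argmin(np.polyval(coeffs, np.log10(grid))))]
        lrs.append(cand)
        losses.append(validation_loss(cand))
    return lrs[int(np.argmin(losses))]

best_lr = smbo()
```

A real SMBO implementation would also balance exploration against exploitation (e.g., via expected improvement) rather than always evaluating the surrogate's minimiser.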
269

Bayesian Phylogenetics and the Evolution of Gall Wasps

Nylander, Johan A. A. January 2004 (has links)
This thesis concerns the phylogenetic relationships and the evolution of the gall-inducing wasps belonging to the family Cynipidae. Several previous studies have used morphological data to reconstruct the evolution of the family. DNA sequences from several mitochondrial and nuclear genes were obtained, and the first molecular, and combined molecular and morphological, analyses of higher-level relationships in the Cynipidae are presented. A Bayesian approach to data analysis is adopted, and models allowing combined analysis of heterogeneous data, such as multiple DNA data sets and morphology, are developed. The performance of these models is evaluated using methods that allow the estimation of posterior model probabilities, thus allowing selection of the most probable models for use in phylogenetics. The use of Bayesian model averaging in phylogenetics, as opposed to model selection, is also discussed. It is shown that Bayesian MCMC analysis deals efficiently with complex models and that morphology can influence combined-data analyses, despite being outnumbered by DNA data. This emphasizes the utility and potential importance of using morphological data in statistical analyses of phylogeny. The DNA-based and combined-data analyses of cynipid relationships differ from previous studies in two important respects. First, it was previously believed that there was a monophyletic clade of woody rosid gallers, but the new results place the non-oak gallers in this assemblage (tribes Pediaspidini, Diplolepidini, and Eschatocerini) outside the rest of the Cynipidae. Second, earlier studies have lent strong support to the monophyly of the inquilines (tribe Synergini), gall wasps that develop inside the galls of other species. The new analyses suggest that the inquilines either originated several times independently, or that some inquilines secondarily regained the ability to induce galls.
Possible reasons for the incongruence between morphological and DNA data are discussed in terms of heterogeneity in evolutionary rates among lineages and convergent evolution of morphological characters.
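The "posterior model probabilities" used above for model selection can be computed exactly on a toy problem with analytic marginal likelihoods. The coin-toss models and data below are invented and unrelated to the thesis's substitution models; they only show the mechanics of weighing two models by their marginal likelihoods under equal prior model weights.

```python
import numpy as np
from math import comb, lgamma, log

# Toy data: k heads in n tosses (made-up numbers).
n, k = 100, 72

# Model 1: fair coin, p = 0.5. The marginal likelihood is just the likelihood.
log_m1 = log(comb(n, k)) + n * log(0.5)

# Model 2: p ~ Uniform(0, 1). The marginal likelihood integrates the binomial
# likelihood over the prior: C(n, k) * B(k + 1, n - k + 1).
log_beta = lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)
log_m2 = log(comb(n, k)) + log_beta

# Posterior model probabilities under equal prior model weights,
# normalised in log-space for numerical stability.
m = np.array([log_m1, log_m2])
post = np.exp(m - m.max())
post /= post.sum()
```

With 72 heads in 100 tosses, the data are far from what a fair coin predicts, so virtually all posterior mass lands on the flexible model; in phylogenetics the same computation typically requires MCMC estimates of the marginal likelihoods rather than closed forms.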
