111

Semi-parametric Bayesian model, applications in dose finding studies / Modèle bayésien semi-paramétrique, applications en positionnement de dose

Clertant, Matthieu 22 June 2016 (has links)
Les Phases I sont un domaine des essais cliniques dans lequel les statisticiens ont encore beaucoup à apporter. Depuis trente ans, ce secteur bénéficie d'un intérêt croissant et de nombreuses méthodes ont été proposées pour gérer l'allocation séquentielle des doses aux patients intégrés à l'étude. Durant cette Phase, il s'agit d'évaluer la toxicité, et s'adressant à des patients gravement atteints, il s'agit de maximiser les effets curatifs du traitement dont les retours toxiques sont une conséquence. Parmi une gamme de doses, on cherche à déterminer celle dont la probabilité de toxicité est la plus proche d'un seuil souhaité et fixé par les praticiens cliniques. Cette dose est appelée la MTD (maximum tolerated dose). La situation canonique dans laquelle sont introduites la plupart des méthodes consiste en une gamme de doses finie et ordonnée par probabilité de toxicité croissante. Dans cette thèse, on introduit une modélisation très générale du problème, la SPM (semi-parametric methods), qui recouvre une large classe de méthodes. Cela permet d'aborder des questions transversales aux Phases I. Quels sont les différents comportements asymptotiques souhaitables? La MTD peut-elle être localisée? Comment et dans quelles circonstances? Différentes paramétrisations de la SPM sont proposées et testées par simulations. Les performances obtenues sont comparables, voire supérieures à celles des méthodes les plus éprouvées. Les résultats théoriques sont étendus au cas spécifique de l'ordre partiel. La modélisation de la SPM repose sur un traitement hiérarchique inférentiel de modèles satisfaisant des contraintes linéaires de paramètres inconnus. Les aspects théoriques de cette structure sont décrits dans le cas de lois à supports discrets. Dans cette circonstance, de vastes ensembles de lois peuvent aisément être considérés, cela permettant d'éviter les cas de mauvaises spécifications. / Phase I clinical trials are an area in which statisticians have much to contribute. For over 30 years, this field has benefited from increasing interest on the part of statisticians and clinicians alike, and several methods have been proposed to manage the sequential inclusion of patients in a study. The main purpose is to evaluate the occurrence of dose limiting toxicities for a selected group of patients with, typically, life-threatening disease. The goal is to maximize the potential for therapeutic success in a situation where toxic side effects are inevitable and increase with increasing dose. From a range of given doses, we aim to determine the dose with a rate of toxicity as close as possible to some threshold chosen by the investigators. This dose is called the MTD (maximum tolerated dose). The standard situation is one where we have a finite range of doses ordered with respect to the probability of toxicity at each dose. In this thesis we introduce a very general approach to modeling the problem, SPM (semi-parametric methods), which covers a large class of methods. The viewpoint of SPM allows us to see things in, arguably, more relevant terms and to provide answers to questions about asymptotic behavior. What kind of behavior should we be aiming for? For instance, can we consistently estimate the MTD? How, and under which conditions? Different parametrizations of SPM are considered and studied theoretically and via simulations. The performance obtained is comparable to, and often better than, that of currently established methods. 
We extend the findings to the case of partial ordering, in which more than one drug is under study and we do not necessarily know how all drug pairs are ordered. The SPM model structure leans on a hierarchical set-up whereby certain parameters are linearly constrained. The theoretical aspects of this structure are outlined for the case of distributions with discrete support. In this setting the great majority of laws can easily be considered and this enables us to avoid overly restrictive specifications that can result in poor behavior.
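To make the dose-finding setting above concrete, here is a minimal sketch (not taken from the thesis, and much simpler than the SPM machinery it describes) of Bayesian MTD estimation over a finite ordered dose grid: monotone toxicity curves are drawn from a simple non-parametric prior, re-weighted by the binomial likelihood of the observed toxicities, and each dose is scored by the posterior probability of being the one closest to the target rate. The counts and the uniform order-statistics prior are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_monotone_curves(n_doses, n_curves, rng):
    """Draw toxicity curves that increase with dose by sorting uniform draws."""
    return np.sort(rng.uniform(1e-3, 1 - 1e-3, size=(n_curves, n_doses)), axis=1)

def mtd_probabilities(n_patients, n_tox, target=0.25, n_curves=20000, rng=rng):
    """Posterior probability that each dose is the MTD, given per-dose
    counts of treated patients and observed toxicities."""
    n_doses = len(n_patients)
    curves = sample_monotone_curves(n_doses, n_curves, rng)
    # Binomial log-likelihood of the observed toxicities under each sampled curve.
    log_lik = (n_tox * np.log(curves) + (n_patients - n_tox) * np.log1p(-curves)).sum(axis=1)
    weights = np.exp(log_lik - log_lik.max())
    weights /= weights.sum()
    # Each sampled curve nominates the dose whose toxicity is closest to the target.
    nominee = np.abs(curves - target).argmin(axis=1)
    return np.array([weights[nominee == d].sum() for d in range(n_doses)])

# Example: 5 ordered doses, a few cohorts already treated (hypothetical data).
patients = np.array([3, 6, 6, 3, 0])
toxicities = np.array([0, 1, 2, 2, 0])
print(mtd_probabilities(patients, toxicities, target=0.25))
```

As more cohorts are observed at a dose, the likelihood weighting concentrates the posterior on the dose whose toxicity rate sits nearest the chosen target.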
112

Détection et caractérisation d’exoplanètes : développement et exploitation du banc d’interférométrie annulante Nulltimate et conception d’un système automatisé de classement des transits détectés par CoRoT / Detection and characterisation of exoplanets : development and operation of the nulling interferometer testbed Nulltimate and design of an automated software for the ranking of transit candidates detected by CoRoT

Demangeon, Olivier 28 June 2013 (has links)
Parmi les méthodes qui permettent de détecter des exoplanètes, la photométrie des transits est celle qui a connu le plus grand essor ces dernières années grâce à l’arrivée des télescopes spatiaux CoRoT (en 2006) puis Kepler (en 2009). Ces deux satellites ont permis de détecter des milliers de transits potentiellement planétaires. Étant donnés leur nombre et l’effort nécessaire à la confirmation de leur nature, il est essentiel d’effectuer, à partir des données photométriques, un classement efficace permettant d’identifier les transits les plus prometteurs et qui soit réalisable en un temps raisonnable. Pour ma thèse, j’ai développé un outil logiciel, rapide et automatisé, appelé BART (Bayesian Analysis for the Ranking of Transits) qui permet de réaliser un tel classement grâce à une estimation de la probabilité que chaque transit soit de nature planétaire. Pour cela, mon outil s’appuie notamment sur le formalisme bayésien des probabilités et l’exploration de l’espace des paramètres libres par méthode de Monte Carlo avec des chaînes de Markov (MCMC). Une fois les exoplanètes détectées, l’étape suivante consiste à les caractériser. L’étude du système solaire nous a démontré, si cela était nécessaire, que l’information spectrale est un point clé pour comprendre la physique et l’histoire d’une planète. L’interférométrie annulante est une solution technologique très prometteuse qui pourrait permettre cela. Pour ma thèse, j’ai travaillé sur le banc optique Nulltimate afin d’étudier la faisabilité de certains objectifs technologiques liés à cette technique. Au-delà de la performance d’un taux d’extinction de 3,7 × 10^-5 en monochromatique et de 6,3 × 10^-4 en polychromatique dans l’infrarouge proche, ainsi qu’une stabilité de σ_N(30 ms) = 3,7 × 10^-5 estimée sur 1 heure, mon travail a permis d’assainir la situation en réalisant un budget d’erreur détaillé, une simulation en optique gaussienne de la transmission du banc et une refonte complète de l’informatique de commande. Tout cela m’a finalement permis d’identifier les faiblesses de Nulltimate. / Of all exoplanet detection methods, transit photometry has experienced the fastest growth in recent years thanks to the two space telescopes CoRoT (launched in 2006) and Kepler (in 2009). These two satellites have identified thousands of potentially planetary transits. Given the number of detected transits and the effort required to establish their nature, it is essential to perform, from photometric data only, a ranking that efficiently identifies the most promising transits within a reasonable period of time. For my thesis, I developed a quick and automated software tool called BART (Bayesian Analysis for the Ranking of Transits) which performs such a ranking by estimating the probability that each transit is of planetary nature. For this purpose, the tool relies on the Bayesian framework of probability and on exploration of the free parameter space with Markov chain Monte Carlo (MCMC) methods. Once exoplanets have been detected, the next step is to characterise them. The study of the solar system has demonstrated, if demonstration were needed, that spectral information is a crucial clue for understanding the physics and history of a planet. Nulling interferometry is a promising technological solution which could make this possible. For my thesis, I worked on the optical bench Nulltimate in order to study the feasibility of certain technological requirements associated with this technique. 
Beyond achieving a nulling ratio of 3.7 × 10^-5 in monochromatic light and 6.3 × 10^-4 in polychromatic light in the near infrared, as well as a stability of σ_N(30 ms) = 3.7 × 10^-5 estimated over 1 hour, my work helped clarify the situation through a detailed error budget, a simulation of the bench transmission based on Gaussian beam optics, and a complete overhaul of the computer control system. All of this finally resulted in the identification of the weaknesses of Nulltimate.
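As a rough illustration of the Bayesian/MCMC machinery mentioned above (this is not the BART code itself, whose internals are not described here), the sketch below runs a random-walk Metropolis-Hastings exploration of a box-shaped transit model on a synthetic light curve. The box model, the flat priors and the proposal scales are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def box_model(time, t0, duration, depth):
    """Box-shaped transit: flux drops by `depth` for `duration` centred on t0."""
    in_transit = np.abs(time - t0) < duration / 2.0
    return 1.0 - depth * in_transit

def log_posterior(theta, time, flux, sigma):
    t0, duration, depth = theta
    if not (0.0 < depth < 0.1 and 0.0 < duration < 1.0):   # flat priors on a box
        return -np.inf
    resid = flux - box_model(time, t0, duration, depth)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis_hastings(time, flux, sigma, theta0, steps=5000, scale=(0.01, 0.01, 1e-4)):
    """Random-walk Metropolis exploration of the free transit parameters."""
    theta = np.array(theta0, float)
    logp = log_posterior(theta, time, flux, sigma)
    chain = np.empty((steps, theta.size))
    for i in range(steps):
        proposal = theta + rng.normal(0.0, scale)
        logp_new = log_posterior(proposal, time, flux, sigma)
        if np.log(rng.uniform()) < logp_new - logp:         # accept/reject step
            theta, logp = proposal, logp_new
        chain[i] = theta
    return chain

# Synthetic light curve with an injected 500 ppm transit (illustrative only).
t = np.linspace(0.0, 2.0, 1000)
f = box_model(t, 1.0, 0.2, 5e-4) + rng.normal(0.0, 2e-4, t.size)
chain = metropolis_hastings(t, f, 2e-4, theta0=(1.02, 0.15, 3e-4))
print(chain[2500:].mean(axis=0))   # posterior means after burn-in
```

The posterior samples obtained this way are the raw material from which a ranking probability for each transit candidate could then be built.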
113

Lois a priori non-informatives et la modélisation par mélange / Non-informative priors and modelization by mixtures

Kamary, Kaniav 15 March 2016 (has links)
L’une des grandes applications de la statistique est la validation et la comparaison de modèles probabilistes au vu des données. Cette branche des statistiques a été développée depuis la formalisation de la fin du 19ième siècle par des pionniers comme Gosset, Pearson et Fisher. Dans le cas particulier de l’approche bayésienne, la solution à la comparaison de modèles est le facteur de Bayes, rapport des vraisemblances marginales, quel que soit le modèle évalué. Cette solution est obtenue par un raisonnement mathématique fondé sur une fonction de coût. Ce facteur de Bayes pose cependant problème et ce pour deux raisons. D’une part, le facteur de Bayes est très peu utilisé du fait d’une forte dépendance à la loi a priori (ou de manière équivalente du fait d’une absence de calibration absolue). Néanmoins la sélection d’une loi a priori a un rôle vital dans la statistique bayésienne et par conséquent l’une des difficultés avec la version traditionnelle de l’approche bayésienne est la discontinuité de l’utilisation des lois a priori impropres car elles ne sont pas justifiées dans la plupart des situations de test. La première partie de cette thèse traite d’un examen général sur les lois a priori non informatives, de leurs caractéristiques et montre la stabilité globale des distributions a posteriori en réévaluant les exemples de [Seaman III 2012]. Le second problème, indépendant, est que le facteur de Bayes est difficile à calculer à l’exception des cas les plus simples (lois conjuguées). Une branche des statistiques computationnelles s’est donc attachée à résoudre ce problème, avec des solutions empruntant à la physique statistique, comme la méthode du path sampling de [Gelman 1998], et à la théorie du signal. Les solutions existantes ne sont cependant pas universelles et une réévaluation de ces méthodes suivie du développement de méthodes alternatives constitue une partie de la thèse. Nous considérons donc un nouveau paradigme pour les tests bayésiens d’hypothèses et la comparaison de modèles bayésiens en définissant une alternative à la construction traditionnelle de probabilités a posteriori qu’une hypothèse est vraie ou que les données proviennent d’un modèle spécifique. Cette méthode se fonde sur l’examen des modèles en compétition en tant que composants d’un modèle de mélange. En remplaçant le problème de test original par une estimation qui se concentre sur le poids de probabilité d’un modèle donné dans un modèle de mélange, nous analysons la sensibilité de la distribution a posteriori des poids pour diverses modélisations a priori des poids et soulignons qu’un intérêt important de cette perspective est que les lois a priori impropres génériques sont acceptables, tout en ne mettant pas en péril la convergence. Pour cela, des méthodes MCMC comme l’algorithme de Metropolis-Hastings et l’échantillonneur de Gibbs, ainsi que des approximations de la probabilité par des méthodes empiriques, sont utilisées. Une autre caractéristique de cette variante facilement mise en œuvre est que les vitesses de convergence de la moyenne a posteriori du poids et de la probabilité a posteriori correspondante sont assez similaires à celles de la solution bayésienne classique. / One of the major applications of statistics is the validation and comparison of probabilistic models in light of data. This branch of statistics has been developed since its formalization at the end of the 19th century by pioneers such as Gosset, Pearson and Fisher. 
In the special case of the Bayesian approach, the solution to model comparison is the Bayes factor, a ratio of marginal likelihoods, whatever the models under evaluation. This solution is obtained by a mathematical argument based on a loss function. Despite frequent use of the Bayes factor and its equivalent, the posterior probability of models, by the Bayesian community, it is problematic in some cases. First, this rule is highly dependent on the prior modeling, even with large datasets, and since the selection of a prior density has a vital role in Bayesian statistics, one of the difficulties with the traditional handling of Bayesian tests is a discontinuity in the use of improper priors, which are not justified in most testing situations. The first part of this thesis provides a general review of non-informative priors and their features, and demonstrates the overall stability of posterior distributions by reassessing the examples of [Seaman III 2012]. Besides that, Bayes factors are difficult to calculate except in the simplest cases (conjugate distributions). A branch of computational statistics has therefore emerged to resolve this problem, with solutions borrowed from statistical physics, such as the path sampling method of [Gelman 1998], and from signal processing. The existing solutions are not, however, universal, and a reassessment of these methods, followed by the development of alternative methods, constitutes a part of the thesis. We therefore consider a novel paradigm for Bayesian testing of hypotheses and Bayesian model comparison. The idea is to define an alternative to the traditional construction of posterior probabilities that a given hypothesis is true or that the data originate from a specific model, based on considering the models under comparison as components of a mixture model. By replacing the original testing problem with an estimation version that focuses on the probability weight of a given model within a mixture model, we analyze the sensitivity of the resulting posterior distribution of the weights under various prior modelings of the weights, and stress that a major appeal of this novel perspective is that generic improper priors are acceptable while not putting convergence in jeopardy. MCMC methods such as the Metropolis-Hastings algorithm and the Gibbs sampler are used. From a computational viewpoint, another feature of this easily implemented alternative to the classical Bayesian solution is that the speeds of convergence of the posterior mean of the weight and of the corresponding posterior probability are quite similar. In the last part of the thesis we construct a reference Bayesian analysis of mixtures of Gaussian distributions by creating a new parameterization centered on the mean and variance of the mixture itself. This enables us to develop a genuine non-informative prior for Gaussian mixtures with an arbitrary number of components. We demonstrate that the posterior distribution associated with this prior is almost surely proper and provide MCMC implementations that exhibit the expected component exchangeability. The analyses are based on MCMC methods such as the Metropolis-within-Gibbs algorithm, adaptive MCMC and the parallel tempering algorithm. This part of the thesis is followed by the description of the R package Ultimixt, which implements a generic reference Bayesian analysis of unidimensional mixtures of Gaussian distributions obtained by a location-scale parameterization of the model. 
This package can be applied to produce a Bayesian analysis of Gaussian mixtures with an arbitrary number of components, with no need to specify the prior distribution.
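A minimal sketch of the mixture-based testing idea summarized above, for the toy case of two fully specified Gaussian models: the data are modeled as alpha*f1 + (1-alpha)*f0, a Gibbs sampler alternates between component allocations and a conjugate update of the weight, and the posterior on alpha takes the place of the Bayes factor. The Beta(a, a) prior and the toy densities are illustrative assumptions, not the thesis's exact specification.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def mixture_test(y, dens0, dens1, a=0.5, iters=4000, rng=rng):
    """Gibbs sampler for the weight of model 1 in the mixture
    alpha * f1(y) + (1 - alpha) * f0(y), with a Beta(a, a) prior on alpha."""
    f0, f1 = dens0(y), dens1(y)
    alpha = 0.5
    draws = np.empty(iters)
    for t in range(iters):
        # Allocate each observation to one of the two competing components.
        p1 = alpha * f1 / (alpha * f1 + (1.0 - alpha) * f0)
        z = rng.uniform(size=y.size) < p1
        # Conjugate Beta update of the mixture weight given the allocations.
        alpha = rng.beta(a + z.sum(), a + (~z).sum())
        draws[t] = alpha
    return draws[iters // 2:]                     # discard burn-in

# Toy comparison: is the data closer to N(0, 1) or to N(1, 1)?
y = rng.normal(0.8, 1.0, size=100)
draws = mixture_test(y, norm(0, 1).pdf, norm(1, 1).pdf)
print(draws.mean())   # posterior mean weight of the N(1, 1) component
```

A weight concentrating near 1 favours the second model, near 0 the first, while intermediate values signal that the data do not discriminate sharply between them.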
114

Mají devizové rezervy centrálních bank dopad na inflaci? / Do Central Bank FX Reserves Matter for Inflation?

Keblúšek, Martin January 2020 (has links)
Foreign exchange reserves are a useful tool and a buffer, but maintaining an amount that is too large can be costly to the economy. The recent accumulation of these reserves points to the importance of this topic. This thesis focuses on one specific part of the effect of FX reserves on the economy: inflation. I use panel data for 74 countries from 1996 to 2017. There is a certain degree of model uncertainty, which this thesis accounts for by using the Bayesian model averaging (BMA) estimation technique. The findings from my model averaging estimations show FX reserves not to be important for inflation determination, with close to no change when altering lags or variables, when limiting the sample to fixed FX regimes, or when limiting the sample to inflation-targeting regimes. The most important variables are estimated to be a central bank financial strength proxy, exchange rate depreciation, money supply, inflation targeting, and capital account openness. These results are robust to lag changes and prior changes, and for the most part remain the same when pooled OLS is used.
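The following sketch illustrates Bayesian model averaging of the kind referred to above, in its simplest enumerative form: every subset of a small regressor set is fitted by OLS, weighted through the BIC approximation to the marginal likelihood under a uniform model prior, and posterior inclusion probabilities are read off. The variable names and the synthetic data are hypothetical and are not the thesis's panel specification.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

def bma_inclusion_probs(X, y, names):
    """Enumerate all regressor subsets, weight them by the BIC approximation
    to the marginal likelihood, and return posterior inclusion probabilities."""
    n, k = X.shape
    models, log_ml = [], []
    for size in range(k + 1):
        for subset in combinations(range(k), size):
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = np.sum((y - Z @ beta) ** 2)
            # BIC-based approximation: log p(y | M) is roughly -BIC / 2.
            bic = n * np.log(rss / n) + Z.shape[1] * np.log(n)
            models.append(subset)
            log_ml.append(-0.5 * bic)
    w = np.exp(np.array(log_ml) - max(log_ml))
    w /= w.sum()                                   # uniform prior over models
    return {names[j]: sum(wi for wi, m in zip(w, models) if j in m) for j in range(k)}

# Synthetic example: only the first two regressors truly matter.
n = 300
X = rng.normal(size=(n, 4))
y = 2.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0.0, 1.0, n)
print(bma_inclusion_probs(X, y, ["money_supply", "depreciation", "fx_reserves", "openness"]))
```

Regressors that genuinely drive the outcome end up with inclusion probabilities near one, while irrelevant ones (such as the FX-reserve proxy in this fabricated example) stay low, which is the style of evidence the thesis reports.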
115

Jaká je hodnota mého vozu? Hedonická metoda oceňování německého trhu ojetých vozů / What is My Car Worth? Hedonic Price Analysis of the German Used Car Market

Doležalová, Radka January 2020 (has links)
Valuation of used cars, affected by various technical attributes and information asymmetry, is the key objective of all agents operating on the automobile market. This thesis, focusing on a hedonic price analysis, aims to determine basic as well as additional attributes as determinants of a used car's market price. In addition, the analysis sheds light upon novel attributes (service records, cigarette smoke pollution of the vehicle interior, the selling channel factor in the e-commerce environment, and a German geographical division). The hedonic price research uses a unique data sample of the German used car market, extracted from the database of the e-commerce platform AutoScout24 and comprising almost 51 thousand vehicles and 57 attributes. The model selection is specified by incorporating the Bayesian model averaging approach. The research demonstrates the complexity of valuing a used vehicle in terms of the substantial number of relevant variables. The most interesting novel conclusions are the non-significant effect of selling channels and small local price differences between two German regions. Also remarkable are the significant effects of the status of previous owners, bodywork colour, and smoke pollution. The estimated vehicle lifespan of 10 years shows that cars have shorter than...
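A small illustration of the hedonic approach described above, on invented data rather than the AutoScout24 sample: regressing log price on car attributes makes each coefficient read approximately as the percentage change in price per unit of that attribute.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy used-car data: age (years), mileage (1000 km), smoker interior (0/1).
n = 2000
age = rng.uniform(0, 15, n)
mileage = rng.uniform(0, 250, n)
smoker = rng.integers(0, 2, n)
log_price = 10.2 - 0.11 * age - 0.002 * mileage - 0.06 * smoker + rng.normal(0, 0.15, n)

# Hedonic regression: log(price) on the attributes, so each slope is roughly
# the percentage price effect of one more unit of the attribute.
X = np.column_stack([np.ones(n), age, mileage, smoker])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
for name, b in zip(["intercept", "age", "mileage_1000km", "smoker_interior"], beta):
    print(f"{name:16s} {b:+.4f}")
```

In a full study, a model-averaging step like the one sketched for the previous entry would decide which of the many candidate attributes deserve to enter such a regression at all.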
116

Islám a ekonomický rozvoj: meta-analýza / Islam and Economic Performance: A Meta-Analysis

Kratochvíla, Patrik January 2021 (has links)
The ongoing economic supremacy of the West has prompted debates on the ability of non-Christian religions to generate economic growth. The academic literature focusing on the Islamic religion offers multiple answers, leaving the matter unresolved and with no definite conclusion. Based on a quantitative survey of 315 estimates collected from 41 relevant academic studies, Islam exerts a positive and statistically significant effect on economic growth in 40% of cases, a negative and statistically significant effect in 10% of cases, and virtually zero effect in 50% of cases. Tests for publication bias indicate slightly preferential reporting against negative estimates. When I correct for this bias, I find that the mean effect of Islam on economic growth is positive but economically small. I also construct 79 moderator variables capturing methodological heterogeneity among the primary studies and apply the method of Bayesian model averaging to deal with model uncertainty in meta-analysis. The analysis shows that the heterogeneity in the results is primarily driven by differences in the sample composition and the choice of control variables, and to a lesser extent by estimation characteristics and the proxies for Islam employed.
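The publication-bias test mentioned above can be illustrated with a minimal FAT-PET style regression: reported estimates are regressed on their standard errors, a non-zero slope signals selective reporting, and the intercept serves as a bias-corrected mean effect. The data below are simulated for illustration and are not the 315 collected estimates.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated meta-analysis sample: true effect 0.02, mild selection against
# negative estimates (a stand-in for the collected estimates, not real data).
n = 315
se = rng.uniform(0.01, 0.2, n)
est = 0.02 + rng.normal(0.0, se)
keep = (est > 0) | (rng.uniform(size=n) < 0.7)    # drop 30% of negative results
est, se = est[keep], se[keep]

# FAT-PET regression: estimate_i = beta0 + beta1 * SE_i + error.
# beta1 != 0 indicates publication bias; beta0 is the corrected mean effect.
X = np.column_stack([np.ones(est.size), se])
beta, *_ = np.linalg.lstsq(X, est, rcond=None)
print(f"corrected mean effect (PET): {beta[0]:.4f}")
print(f"funnel asymmetry (FAT):      {beta[1]:.4f}")
```

In a full meta-analysis the moderator variables would be added to such a regression, with model averaging deciding which of them genuinely explain the heterogeneity in reported effects.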
117

Modeling Impacts of Climate Change on Crop Yield

Hu, Tongxi January 2021 (has links)
No description available.
118

Measuring Skill Importance in Women's Soccer and Volleyball

Allan, Michelle L. 11 March 2009 (has links) (PDF)
The purpose of this study is to demonstrate how to measure skill importance for two sports: soccer and volleyball. A division I women's soccer team filmed each home game during a competitive season. Every defensive, dribbling, first touch, and passing skill was rated and recorded for each team. It was noted whether each sequence of plays led to a successful shot. A hierarchical Bayesian logistic regression model is implemented to determine how the performance of the skill affects the probability of a successful shot. A division I women's volleyball team rated each skill (serve, pass, set, etc.) and recorded rally outcomes during home games in a competitive season. The skills were only rated when the ball was on the home team's side of the net. Events followed one of these three patterns: serve-outcome, pass-set-attack-outcome, or dig-set-attack-outcome. We analyze the volleyball data using two different techniques, Markov chains and Bayesian logistic regression. These sequences of events are assumed to be first-order Markov chains. This means the quality of the current skill only depends on the quality of the previous skill. The count matrix is assumed to follow a multinomial distribution, so a Dirichlet prior is used to estimate each row of the count matrix. Bayesian simulation is used to produce the unconditional posterior probability (e.g., a perfect serve results in a point). The volleyball logistic regression model uses a Bayesian approach to determine how the performance of the skill affects the probability of a successful outcome. The posterior distributions produced from each of the models are used to calculate importance scores. The soccer data importance scores revealed that passing, first touch, and dribbling skills are the most important to the primary team. The Markov chain model for the volleyball data indicates setting 3–5 feet off the net increases the probability of a successful outcome. The logistic regression model for the volleyball data reveals that serves have a high importance score because of their steep slope. Importance scores can be used to assist coaches in allocating practice time, developing new strategies, and analyzing each player's skill performance.
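A compact sketch of the Dirichlet-multinomial step described above: each row of a (here invented) skill-transition count matrix receives a Dirichlet posterior, and simulation through the first-order chain yields the unconditional probability of winning the rally given the quality of an earlier skill, which is the kind of quantity the importance scores are built from.

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented transition counts (not the study's data):
# rows of `pass_to_set`: pass quality (0 = poor, 1 = perfect) -> set quality.
# rows of `set_to_outcome`: set quality -> rally outcome (0 = lost, 1 = won).
pass_to_set = np.array([[30, 10],
                        [12, 48]])
set_to_outcome = np.array([[25, 15],
                           [18, 42]])

def win_prob_given_pass(pass_quality, draws=4000, prior=1.0, rng=rng):
    """Dirichlet posterior draws of P(rally won | pass quality), marginalizing
    over set quality through the first-order Markov chain."""
    out = np.empty(draws)
    for i in range(draws):
        p_set = rng.dirichlet(pass_to_set[pass_quality] + prior)
        p_win = np.array([rng.dirichlet(row + prior)[1] for row in set_to_outcome])
        out[i] = p_set @ p_win
    return out

perfect = win_prob_given_pass(1)
poor = win_prob_given_pass(0)
print(f"P(win | perfect pass) = {perfect.mean():.2f} +/- {perfect.std():.2f}")
print(f"P(win | poor pass)    = {poor.mean():.2f} +/- {poor.std():.2f}")
```

The gap between the two posterior distributions is one simple way to quantify how much a well-executed pass is worth, analogous to the importance scores used to guide practice time.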
119

Bayesian Model Checking Methods for Dichotomous Item Response Theory and Testlet Models

Combs, Adam 02 April 2014 (has links)
No description available.
120

A Bayesian Method for Accelerated Magnetic Resonance Elastography of the Liver

Ebersole, Christopher 31 October 2017 (has links)
No description available.
