111

Détection et caractérisation d’exoplanètes : développement et exploitation du banc d’interférométrie annulante Nulltimate et conception d’un système automatisé de classement des transits détectés par CoRoT / Detection and characterisation of exoplanets : development and operation of the nulling interferometer testbed Nulltimate and design of an automated software for the ranking of transit candidates detected by CoRoT

Demangeon, Olivier 28 June 2013 (has links)
Among exoplanet detection methods, transit photometry has grown the fastest in recent years thanks to the space telescopes CoRoT (launched in 2006) and Kepler (2009). These two satellites have detected thousands of potentially planetary transits. Given their number and the effort required to confirm their nature, it is essential to rank them from the photometric data alone, efficiently and within a reasonable time, so that the most promising candidates can be identified. For my thesis I developed a fast, automated software tool called BART (Bayesian Analysis for the Ranking of Transits) that produces such a ranking by estimating the probability that each transit is of planetary origin. The tool relies on the Bayesian probabilistic framework and on exploration of the free-parameter space with Markov chain Monte Carlo (MCMC) methods.

Once exoplanets have been detected, the next step is to characterise them. The study of the solar system has shown, if proof were needed, that spectral information is key to understanding the physics and history of a planet. Nulling interferometry is a very promising technology that could make this possible. For my thesis I worked on the Nulltimate optical bench to study the feasibility of several technological requirements associated with this technique. Beyond achieving a nulling ratio of 3.7×10^-5 in monochromatic light and 6.3×10^-4 in polychromatic light in the near infrared, together with a stability of σ_N(30 ms) = 3.7×10^-5 estimated over 1 hour, my work clarified the situation through a detailed error budget, a Gaussian-beam optics simulation of the bench transmission, and a complete overhaul of the control software. All of this finally allowed the weaknesses of Nulltimate to be identified.
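As a rough illustration of the kind of MCMC machinery the abstract mentions (not the actual BART pipeline, whose transit model, noise treatment, and priors are far richer), the sketch below runs a random-walk Metropolis-Hastings sampler on the depth of a box-shaped transit in synthetic photometry; the transit window, noise level, and uniform prior are all invented for the example.

```python
# Minimal sketch: Metropolis-Hastings on a single transit-depth parameter (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic light curve: flux = 1 outside transit, 1 - depth inside a known window.
t = np.linspace(0.0, 1.0, 500)
in_transit = (t > 0.45) & (t < 0.55)
true_depth = 1e-3
sigma = 5e-4
flux = 1.0 - true_depth * in_transit + rng.normal(0.0, sigma, t.size)

def log_posterior(depth):
    if not (0.0 <= depth <= 0.1):            # uniform prior on [0, 0.1]
        return -np.inf
    model = 1.0 - depth * in_transit
    return -0.5 * np.sum(((flux - model) / sigma) ** 2)

# Random-walk Metropolis-Hastings on the single free parameter.
depth, logp = 5e-4, log_posterior(5e-4)
chain = []
for _ in range(20_000):
    prop = depth + rng.normal(0.0, 2e-4)
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        depth, logp = prop, logp_prop
    chain.append(depth)

chain = np.array(chain[5_000:])              # discard burn-in
print(f"posterior depth: {chain.mean():.2e} +/- {chain.std():.2e}")
```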
112

Lois a priori non-informatives et la modélisation par mélange / Non-informative priors and modelization by mixtures

Kamary, Kaniav 15 March 2016 (has links)
One of the major applications of statistics is the validation and comparison of probabilistic models in the light of data. This branch of statistics has developed since its formalisation at the end of the 19th century by pioneers such as Gosset, Pearson, and Fisher. In the Bayesian approach, the standard solution to model comparison is the Bayes factor, the ratio of marginal likelihoods, whatever the models under evaluation; it is obtained from a mathematical argument based on a loss function. Despite the frequent use of the Bayes factor and its equivalent, the posterior probability of a model, by the Bayesian community, it is problematic in some situations. First, this criterion is highly dependent on the prior modelling (or, equivalently, it lacks an absolute calibration), even with large datasets, and since the selection of a prior density plays a vital role in Bayesian statistics, one difficulty with the traditional handling of Bayesian tests is a discontinuity in the use of improper priors, which are not justified in most testing situations. The first part of this thesis gives a general review of non-informative priors and their properties, and demonstrates the overall stability of the resulting posterior distributions by reassessing the examples of [Seaman III 2012]. The second, independent, problem is that Bayes factors are difficult to compute except in the simplest cases (conjugate distributions). A branch of computational statistics has therefore emerged to address this problem, with solutions borrowed from statistical physics, such as the path-sampling method of [Gelman 1998], and from signal processing. The existing solutions are not universal, however, and a reassessment of these methods followed by the development of alternatives forms part of the thesis. We therefore consider a novel paradigm for Bayesian hypothesis testing and model comparison. The idea is to define an alternative to the traditional construction of posterior probabilities that a given hypothesis is true, or that the data originate from a specific model, by treating the models under comparison as components of a mixture model. By replacing the original testing problem with an estimation problem that focuses on the probability weight of a given model within the mixture, we analyse the sensitivity of the resulting posterior distribution of the weights to various prior choices for these weights, and stress that a major appeal of this perspective is that generic improper priors become acceptable without jeopardising convergence. MCMC methods such as the Metropolis-Hastings algorithm and the Gibbs sampler are used, together with empirical approximations of the probability. From a computational viewpoint, another feature of this easily implemented alternative to the classical Bayesian solution is that the convergence rates of the posterior mean of the weight and of the corresponding posterior probability are quite similar. In the last part of the thesis we construct a reference Bayesian analysis of mixtures of Gaussian distributions by introducing a new parameterisation centred on the mean and variance of the mixture itself. This allows us to develop a genuine non-informative prior for Gaussian mixtures with an arbitrary number of components. We demonstrate that the posterior distribution associated with this prior is almost surely proper and provide MCMC implementations that exhibit the expected exchangeability of components. The analyses rely on MCMC methods such as Metropolis-within-Gibbs, adaptive MCMC, and parallel tempering. This part of the thesis is followed by a description of the R package Ultimixt, which implements this reference Bayesian analysis of unidimensional Gaussian mixtures through a location-scale parameterisation of the model. The package can produce a Bayesian analysis of Gaussian mixtures with an arbitrary number of components without requiring the user to specify a prior distribution.
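The mixture-based testing idea can be illustrated with a toy Gibbs sampler. The sketch below is an assumption-laden miniature, not the thesis's implementation: two fully specified candidate models, N(0,1) and N(1,1), are embedded as components of a mixture, and the posterior of the mixture weight plays the role usually assigned to the Bayes factor.

```python
# Minimal sketch: posterior of the mixture weight as a model-comparison device (illustrative only).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(1.0, 1.0, size=200)           # data actually drawn from model 2

f1 = norm(0.0, 1.0).pdf(x)                   # likelihood of each point under model 1
f2 = norm(1.0, 1.0).pdf(x)                   # likelihood of each point under model 2
a0 = 0.5                                     # Beta(a0, a0) prior on the weight

alpha, draws = 0.5, []
for _ in range(5_000):                       # Gibbs sampler
    p1 = alpha * f1 / (alpha * f1 + (1 - alpha) * f2)
    z = rng.uniform(size=x.size) < p1        # latent allocations to model 1
    alpha = rng.beta(a0 + z.sum(), a0 + (~z).sum())
    draws.append(alpha)

draws = np.array(draws[1_000:])
print(f"posterior mean weight of model 1: {draws.mean():.3f}")
```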
113

Mají devizové rezervy centrálních bank dopad na inflaci? / Do Central Bank FX Reserves Matter for Inflation?

Keblúšek, Martin January 2020 (has links)
Foreign exchange reserves are a useful tool and buffer, but maintaining an amount that is too large can be costly to the economy. The recent accumulation of these reserves points to the importance of this topic. This thesis focuses on one specific channel through which FX reserves affect the economy: inflation. I use panel data for 74 countries from 1996 to 2017. There is a degree of model uncertainty, which this thesis accounts for by using the Bayesian model averaging (BMA) estimation technique. The findings from my model-averaging estimations show that FX reserves are not important for inflation determination, with close to no change when altering lags or variables, when limiting the sample to fixed exchange-rate regimes, or when limiting it to inflation-targeting regimes. The most important variables are estimated to be a central-bank financial-strength proxy, exchange-rate depreciation, money supply, inflation targeting, and capital-account openness. These results are robust to lag changes and prior changes, and for the most part remain the same when pooled OLS is used.
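As a hedged illustration of Bayesian model averaging in a setting like this (the regressor names and data below are synthetic stand-ins, and the BIC approximation of marginal likelihoods is a simplification of a full BMA prior specification), one can enumerate candidate models and report posterior inclusion probabilities:

```python
# Minimal BMA sketch: enumerate regressor subsets, weight by BIC, report inclusion probabilities.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, names = 300, ["fx_reserves", "money_supply", "exch_depr", "openness"]   # hypothetical labels
X = rng.normal(size=(n, len(names)))
y = 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=n)   # "inflation" depends on two regressors

def bic(y, X):
    X1 = np.column_stack([np.ones(len(y)), X]) if X.shape[1] else np.ones((len(y), 1))
    resid = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
    k = X1.shape[1]
    return len(y) * np.log(resid @ resid / len(y)) + k * np.log(len(y))

models, bics = [], []
for r in range(len(names) + 1):
    for subset in itertools.combinations(range(len(names)), r):
        models.append(subset)
        bics.append(bic(y, X[:, list(subset)]))

w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()                                              # posterior model probabilities
for j, name in enumerate(names):
    pip = sum(wi for wi, m in zip(w, models) if j in m)   # posterior inclusion probability
    print(f"PIP({name}) = {pip:.2f}")
```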
114

Jaká je hodnota mého vozu? Hedonická metoda oceňování německého trhu ojetých vozů / What is My Car Worth? Hedonic Price Analysis of the German Used Car Market

Doležalová, Radka January 2020 (has links)
Valuation of used cars, which is affected by various technical attributes and by information asymmetry, is a key objective for all agents operating on the automobile market. This thesis, centred on a hedonic price analysis, aims to identify basic as well as additional attributes as determinants of a used car's market price. In addition, the analysis sheds light on novel attributes (service records, cigarette-smoke pollution of the vehicle interior, the selling channel in the e-commerce environment, and German geographical division). The hedonic price research uses a unique data sample of the German used-car market, extracted from the database of the e-commerce platform AutoScout24 and comprising almost 51 thousand vehicles and 57 attributes. Model selection is handled by incorporating the Bayesian model averaging approach. The research demonstrates the complexity of valuing a used vehicle in terms of the substantial number of relevant variables. The most interesting novel conclusions are the non-significant effect of selling channels and the small local price differences between two German regions. Also remarkable are the significant effects of the status of previous owners, bodywork colour, and smoke pollution. The estimated vehicle lifespan of 10 years shows that cars have shorter than...
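A minimal hedonic-pricing sketch, using assumed synthetic data rather than the AutoScout24 sample (the attribute names and coefficients are illustrative): regress the log price on a few attributes and read the coefficients as implicit attribute prices.

```python
# Minimal hedonic regression sketch on synthetic used-car data (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
n = 1_000
mileage = rng.uniform(0, 200_000, n)        # km
age = rng.uniform(0, 15, n)                 # years
smoker = rng.integers(0, 2, n)              # 1 = smoke-polluted interior
log_price = 10.0 - 4e-6 * mileage - 0.08 * age - 0.05 * smoker + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), mileage, age, smoker])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print("implicit attribute effects on log price:")
for name, b in zip(["const", "mileage", "age", "smoker"], beta):
    print(f"  {name:8s} {b: .6f}")
```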
115

Islám a ekonomický rozvoj: meta-analýza / Islam and Economic Performance: A Meta-Analysis

Kratochvíla, Patrik January 2021 (has links)
The ongoing economic supremacy of the West has prompted debates on the ability of non-Christian religions to generate economic growth. The academic literature on the Islamic religion offers multiple answers, leaving the matter unresolved and without a definite conclusion. Based on a quantitative survey of 315 estimates collected from 41 relevant academic studies, Islam exerts a positive and statistically significant effect on economic growth in 40% of cases, a negative and statistically significant effect in 10% of cases, and virtually zero effect in 50% of cases. Tests for publication bias indicate slightly preferential reporting against negative estimates. When I correct for this bias, I find that the mean effect of Islam on economic growth is positive but economically small. I also construct 79 moderator variables capturing methodological heterogeneity among the primary studies and apply Bayesian model averaging to deal with model uncertainty in the meta-analysis. The analysis shows that the heterogeneity in the results is driven primarily by differences in sample composition and the choice of control variables, and to a lesser extent by estimation characteristics and the proxies for Islam employed.
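Publication-bias tests of this kind are commonly implemented as a FAT-PET regression of reported effects on their standard errors with precision weights: a significant slope signals selective reporting, while the intercept serves as a bias-corrected mean effect. The sketch below uses synthetic estimates rather than the 315 collected ones, so the numbers are purely illustrative.

```python
# Minimal FAT-PET sketch on synthetic meta-analysis estimates (illustrative only).
import numpy as np

rng = np.random.default_rng(4)
k = 315
se = rng.uniform(0.01, 0.30, k)                 # standard errors of primary estimates
# True effect 0.02, plus mild selective reporting proportional to the standard error.
effect = 0.02 + 0.8 * se * (rng.uniform(size=k) < 0.3) + rng.normal(0, se)

w = 1.0 / se**2                                  # precision weights (WLS)
X = np.column_stack([np.ones(k), se])
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * effect, rcond=None)
print(f"bias-corrected mean effect (PET intercept): {beta[0]:.4f}")
print(f"publication-bias term (FAT slope):          {beta[1]:.4f}")
```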
116

Modeling Impacts of Climate Change on Crop Yield

Hu, Tongxi January 2021 (has links)
No description available.
117

Measuring Skill Importance in Women's Soccer and Volleyball

Allan, Michelle L. 11 March 2009 (has links) (PDF)
The purpose of this study is to demonstrate how to measure skill importance for two sports: soccer and volleyball. A division I women's soccer team filmed each home game during a competitive season. Every defensive, dribbling, first touch, and passing skill was rated and recorded for each team. It was noted whether each sequence of plays led to a successful shot. A hierarchical Bayesian logistic regression model is implemented to determine how the performance of the skill affects the probability of a successful shot. A division I women's volleyball team rated each skill (serve, pass, set, etc.) and recorded rally outcomes during home games in a competitive season. The skills were only rated when the ball was on the home team's side of the net. Events followed one of these three patterns: serve-outcome, pass-set-attack-outcome, or dig-set-attack-outcome. We analyze the volleyball data using two different techniques, Markov chains and Bayesian logistic regression. These sequences of events are assumed to be first-order Markov chains. This means the quality of the current skill only depends on the quality of the previous skill. The count matrix is assumed to follow a multinomial distribution, so a Dirichlet prior is used to estimate each row of the count matrix. Bayesian simulation is used to produce the unconditional posterior probability (e.g., a perfect serve results in a point). The volleyball logistic regression model uses a Bayesian approach to determine how the performance of the skill affects the probability of a successful outcome. The posterior distributions produced from each of the models are used to calculate importance scores. The soccer data importance scores revealed that passing, first touch, and dribbling skills are the most important to the primary team. The Markov chain model for the volleyball data indicates setting 3–5 feet off the net increases the probability of a successful outcome. The logistic regression model for the volleyball data reveals that serves have a high importance score because of their steep slope. Importance scores can be used to assist coaches in allocating practice time, developing new strategies, and analyzing each player's skill performance.
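The Markov-chain step can be sketched compactly: each row of the skill-to-outcome count matrix receives a Dirichlet posterior, and posterior simulation yields outcome probabilities with uncertainty, which feed into importance scores. The counts and labels below are invented for illustration, not the team's data.

```python
# Minimal sketch: Dirichlet posterior for one row of a serve-to-outcome count matrix (illustrative only).
import numpy as np

rng = np.random.default_rng(5)

# Rows: serve quality (perfect, good, poor); columns: rally outcome (point, continue, error).
counts = np.array([[40, 55, 5],
                   [25, 90, 10],
                   [10, 60, 30]])
prior = np.ones(3)                       # Dirichlet(1, 1, 1) prior on each row

# Posterior draws of the outcome probabilities following a perfect serve.
draws = rng.dirichlet(prior + counts[0], size=10_000)
p_point = draws[:, 0]
lo, hi = np.percentile(p_point, [2.5, 97.5])
print(f"P(point | perfect serve) = {p_point.mean():.3f}  (95% interval {lo:.3f}-{hi:.3f})")
```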
118

Bayesian Model Checking Methods for Dichotomous Item Response Theory and Testlet Models

Combs, Adam 02 April 2014 (has links)
No description available.
119

A Bayesian Method for Accelerated Magnetic Resonance Elastography of the Liver

Ebersole, Christopher 31 October 2017 (has links)
No description available.
120

Semiparametric Bayesian Approach using Weighted Dirichlet Process Mixture For Finance Statistical Models

Sun, Peng 07 March 2016 (has links)
The Dirichlet process mixture (DPM) has been widely used as a flexible prior in the nonparametric Bayesian literature, and the weighted Dirichlet process mixture (WDPM) can be viewed as an extension of DPM that relaxes model distribution assumptions. However, WDPM requires specifying weight functions, which can add computational burden. In this dissertation we develop more efficient and flexible WDPM approaches under three research topics. The first is semiparametric cubic spline regression, where we adopt a nonparametric prior for the error terms in order to handle heterogeneity of measurement errors or an unknown mixture distribution automatically; the second provides an innovative way to construct the weight function and illustrates some desirable properties and the computational efficiency of this weight under a semiparametric stochastic volatility (SV) model; and the last develops a WDPM approach for the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model (as an alternative to the SV model) and proposes a new model evaluation approach for GARCH that produces results that are easier to interpret than the canonical marginal-likelihood approach.

In the first topic, the response variable is modeled as the sum of three parts. One part is a linear function of covariates that enter the model parametrically. The second part is an additive nonparametric model: covariates whose relationships to the response variable are unclear are included nonparametrically using Lancaster and Šalkauskas bases. The third part consists of error terms whose means and variances are assumed to follow nonparametric priors. We therefore call our model a dual-semiparametric regression, because nonparametric ideas are used both for the mean and for the error terms. Instead of assuming that all error terms follow the same prior, as in DPM, our WDPM provides multiple candidate priors for each observation to select from with certain probabilities. These probabilities (or weights) are modeled through relevant predictive covariates using a Gaussian kernel. We propose several different WDPMs using different weights that depend on distance in the covariates. We provide efficient Markov chain Monte Carlo (MCMC) algorithms and compare our WDPMs to a parametric model and to DPM in terms of Bayes factors in simulation and empirical studies. In the second topic, we propose an innovative way to construct the weight function for WDPM and apply it to the SV model. The SV model is used for time-series data where the constant-variance assumption is violated; one essential issue is specifying the distribution of the conditional return. We assume a WDPM prior for the conditional return and propose a new way to model the weights. Our approach has several advantages, including computational efficiency compared with the weight constructed using a Gaussian kernel. We list six properties of the proposed weight function and provide proofs of them. Because of the additional Metropolis-Hastings steps introduced by the WDPM prior, we establish conditions that ensure uniform geometric ergodicity of the transition kernel in our MCMC. Due to the existence of zero values in asset-price data, our SV model is semiparametric: we employ the WDPM prior for non-zero values and a parametric prior for zero values.

In the third project, we develop a WDPM approach for GARCH-type models and compare different types of weight functions, including the innovative method proposed in the second topic. The GARCH model can be viewed as an alternative to SV for analyzing daily stock-price data where the constant-variance assumption does not hold. While the response variable of our SV models is the transformed log return (based on a log-square transformation), GARCH directly models the log return itself. This means that, theoretically, we are able to predict stock returns using GARCH models, whereas this is not feasible with SV models, because SV models ignore the sign of the log returns and provide predictive densities for the squared log return only. Motivated by this property, we propose a new model evaluation approach called back-testing return (BTR) specifically for GARCH. The BTR approach produces model evaluation results that are easier to interpret than the marginal likelihood, and it is straightforward to draw conclusions about model profitability from it. Since the BTR approach is applicable only to GARCH, we also illustrate how to properly calculate the marginal likelihood so that GARCH and SV can be compared. Based on our MCMC algorithms and model evaluation approaches, we have conducted a large number of model fittings to compare models in both simulation and empirical studies. / Ph. D.
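A minimal sketch of a covariate-dependent weight of the kind described, using a Gaussian kernel on the distance to candidate-prior centers; the centers and bandwidth are illustrative assumptions rather than the dissertation's construction.

```python
# Minimal sketch: Gaussian-kernel weights selecting among candidate priors (illustrative only).
import numpy as np

def wdpm_weights(x, centers, bandwidth=1.0):
    """Probability of each candidate prior for an observation with covariate value x."""
    d2 = (np.asarray(centers) - x) ** 2
    w = np.exp(-0.5 * d2 / bandwidth**2)
    return w / w.sum()

centers = np.array([-2.0, 0.0, 2.0])       # one hypothetical center per candidate prior
print(wdpm_weights(0.5, centers))           # an observation near the middle prior
```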
