11

Bayesian Reference Inference on the Ratio of Poisson Rates.

Guo, Changbin 06 May 2006 (has links) (PDF)
Bayesian reference analysis is a method for determining the prior distribution under the Bayesian paradigm; the reference prior is constructed to incorporate as little information as possible, so that inference is dominated by the data from the experiment. Estimation of the ratio of two independent Poisson rates is a common practical problem. In this thesis, the method of reference analysis is applied to derive the posterior distribution of the ratio of two independent Poisson rates, and then to construct point and interval estimates based on the reference posterior. In addition, the frequentist coverage properties of the highest posterior density (HPD) intervals are verified through simulation.
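
As a rough illustration of the frequentist-coverage check described above, the sketch below simulates repeated Poisson experiments and computes credible intervals for the rate ratio. It relies on the standard reduction of the two-Poisson problem to a conditional binomial with a Jeffreys Beta(1/2, 1/2) prior, used here only as a stand-in for the reference posterior, and uses equal-tailed rather than HPD intervals; the true rates and exposure are illustrative.

```python
import numpy as np
from scipy import stats

# Coverage check for credible intervals on phi = lambda1 / lambda2, the ratio
# of two independent Poisson rates. Assumption (not from the thesis): given the
# total count, X1 | X1 + X2 ~ Binomial(n, p) with p = phi / (1 + phi), and a
# Jeffreys Beta(1/2, 1/2) prior on p stands in for the reference prior.
# Equal-tailed intervals are used instead of HPD intervals for simplicity.

rng = np.random.default_rng(0)
lam1, lam2, t = 4.0, 2.0, 10.0            # true rates and common exposure time
phi_true = lam1 / lam2
n_sims, covered = 5000, 0

for _ in range(n_sims):
    x1 = rng.poisson(lam1 * t)
    x2 = rng.poisson(lam2 * t)
    post = stats.beta(x1 + 0.5, x2 + 0.5)  # posterior for p given the total count
    lo_p, hi_p = post.ppf(0.025), post.ppf(0.975)
    lo_phi, hi_phi = lo_p / (1 - lo_p), hi_p / (1 - hi_p)  # map back to phi
    covered += lo_phi <= phi_true <= hi_phi

print(f"Empirical coverage of nominal 95% intervals: {covered / n_sims:.3f}")
```
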
12

Toward Error-Statistical Principles of Evidence in Statistical Inference

Jinn, Nicole Mee-Hyaang 02 June 2014 (has links)
The context for this research is statistical inference, the process of making predictions or inferences about a population from observation and analysis of a sample. In this context, many researchers want to grasp what inferences can be made that are valid, in the sense of being able to be upheld or justified by argument or evidence. Another pressing question among users of statistical methods is: how can spurious relationships be distinguished from genuine ones? Underlying both of these issues is the concept of evidence. In response to these (and similar) questions, the two questions I work on in this essay are: (1) what is a genuine principle of evidence? and (2) do error probabilities have more than a long-run role? Concisely, I propose that genuine principles of evidence should provide concrete guidelines on precisely how to examine error probabilities with respect to a test's aptitude for unmasking pertinent errors, which leads to sound interpretations of results from statistical techniques. The starting point for my definition of genuine principles of evidence is Allan Birnbaum's confidence concept, an attempt to control misleading interpretations. However, Birnbaum's confidence concept is inadequate for interpreting statistical evidence, because using only pre-data error probabilities would not pick up on a test's ability to detect a discrepancy of interest (even if the discrepancy exists) with respect to the actual outcome. Instead, I argue that Deborah Mayo's severity assessment is the most suitable characterization of evidence based on my definition of genuine principles of evidence. / Master of Arts
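
A small numeric sketch may help fix ideas about the post-data use of error probabilities discussed above. It computes Mayo-style severity for a one-sided Normal test with known variance; the sample size, observed mean, and discrepancies are illustrative and not taken from the thesis.

```python
from scipy import stats

# Post-data severity assessment in Mayo's sense for a one-sided Normal test of
# H0: mu <= mu0 against H1: mu > mu0 with known sigma. For the claim mu > mu1,
# SEV = P(Xbar <= xbar_obs; mu = mu1): how often a result no larger than the
# observed one would occur if the discrepancy were exactly mu1 - mu0.
# All numbers are illustrative.

mu0, sigma, n = 0.0, 1.0, 25
xbar_obs = 0.4                       # observed sample mean
se = sigma / n ** 0.5

for mu1 in (0.1, 0.2, 0.3, 0.5):
    sev = stats.norm.cdf((xbar_obs - mu1) / se)
    print(f"severity of the claim mu > {mu1}: {sev:.3f}")
```
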
13

Currency and political choice : analytical political economy of exchange rate policy in East Asia

Meng, Chih-Cheng 15 September 2010 (has links)
How do catch-up East Asian countries develop their exchange rate (ER) policies along a different trajectory from the advanced economies most often cited in the current literature? What are the dynamics and results (pros and cons) of choosing a particular ER policy, and what influence does it have on the progress of developmental states? How do domestic and international politics explain the convergences and variances of ER policy decisions in East Asia? ER policy decisions are, above all, political choices. ERs influence the prices of goods exchanged every day, and thereby determine resource allocation within and across national borders. Therefore, any internal political actor, including a government, interest group, foreign party, or constituent, exerts discretionary power to manipulate the ER to satisfy its own interests. Externally, the size of foreign trade and the status of international monetary accounts depend closely on the valuation and volatility of the ER. Thus, for the transitional polities and trade-driven economies of East Asia, the analysis of ER politics not only helps to clarify the complex mechanisms of ER influence combined with various interests and institutional settings, but also advances the political study of globalization. My dissertation proposes an integrated framework contending that domestic distributional politics and economic determinants, international monetary relations, regional market forces, and adaptive policy diffusion are crucial factors that influence and interact with ER policy in East Asia. This theoretical framework explains how an ER policy decision is a compromise between domestically generated preferences and apparently intense international interactions. The dissertation also provides a rigorous empirical specification of the spatiotemporal differences in ER policy across East Asia. A structural vector autoregression (SVAR) model specifies the theoretical dynamics across variables in an East Asian panel dataset compiled from 1980 to 2004. Furthermore, by using an alternative Bayesian estimation, the SVAR demonstrates the "spinning stories" that distinguish the variances in country-specific development under an asymmetric international and interdependent regional monetary system. The empirical findings verify that my theoretical variables interact significantly with ER policy decisions in East Asia. The statistics also demonstrate that most East Asian countries tend to strategically withstand the successive waves of capital liberalization and keep their currencies at low values. In a general test, however, domestic pursuit of preferred interests gradually yields to the persistent influence of international and regional forces on ER policy making in East Asia.
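
The sketch below shows, in miniature, the kind of reduced-form VAR fit that underlies an SVAR analysis like the one described above. The variable names and simulated data are illustrative; the dissertation's East Asian panel data (1980-2004), Bayesian estimation, and structural identification scheme are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# A reduced-form VAR on simulated series, as a starting point for an SVAR
# analysis. Variable names and data are illustrative stand-ins.

rng = np.random.default_rng(1)
T = 120
data = pd.DataFrame({
    "exchange_rate": rng.normal(size=T).cumsum(),
    "trade_balance": rng.normal(size=T).cumsum(),
    "capital_flows": rng.normal(size=T).cumsum(),
})

model = VAR(data.diff().dropna())   # difference to roughly stationary series
results = model.fit(2)              # fixed lag order of 2 for the illustration
print(results.summary())

# Orthogonalized impulse responses (Cholesky ordering) stand in for the
# structural shocks that an SVAR would identify explicitly.
irf = results.irf(10)
print(irf.orth_irfs.shape)          # (periods + 1, n_vars, n_vars)
```
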
14

Bayesian and frequentist methods and analyses of genome-wide association studies

Vukcevic, Damjan January 2009 (has links)
Recent technological advances and remarkable successes have led to genome-wide association studies (GWAS) becoming a tool of choice for investigating the genetic basis of common complex human diseases. These studies typically involve samples from thousands of individuals, scanning their DNA at up to a million loci along the genome to discover genetic variants that affect disease risk. Hundreds of such variants are now known for common diseases, nearly all discovered by GWAS over the last three years. As a result, many new studies are planned for the future or are already underway. In this thesis, I present analysis results from actual studies and some developments in theory and methodology. The Wellcome Trust Case Control Consortium (WTCCC) published one of the first large-scale GWAS in 2007. I describe my contribution to this study and present the results from some of my follow-up analyses. I also present results from a GWAS of a bipolar disorder sub-phenotype, and a recent and on-going fine mapping experiment. Building on methods developed as part of the WTCCC, I describe a Bayesian approach to GWAS analysis and compare it to widely used frequentist approaches. I do so both theoretically, by interpreting each approach from the perspective of the other, and empirically, by comparing their performance in the context of replicated GWAS findings. I discuss the implications of these comparisons on the interpretation and analysis of GWAS generally, highlighting the advantages of the Bayesian approach. Finally, I examine the effect of linkage disequilibrium on the detection and estimation of various types of genetic effects, particularly non-additive effects. I derive a theoretical result showing how the power to detect a departure from an additive model at a marker locus decays faster than the power to detect an association.
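
To make the Bayesian/frequentist contrast concrete, the sketch below computes both a p-value and a Wakefield-style approximate Bayes factor for a single SNP from its estimated effect and standard error. The effect size, standard error, and prior variance are illustrative assumptions, and this particular approximation is not necessarily the exact method developed in the thesis.

```python
import numpy as np
from scipy import stats

# Frequentist p-value versus an approximate Bayes factor for one GWAS SNP.
# Under H0 the estimate beta_hat ~ N(0, V); under H1 the true effect is
# assumed N(0, W), so beta_hat ~ N(0, V + W). All numbers are illustrative.

beta_hat = 0.15          # estimated log-odds ratio at the SNP
se = 0.04                # its standard error
W = 0.2 ** 2             # assumed prior variance of the effect under H1

z = beta_hat / se
p_value = 2 * stats.norm.sf(abs(z))

V = se ** 2
# Bayes factor in favour of association (H1) versus no association (H0)
bf_10 = np.sqrt(V / (V + W)) * np.exp(0.5 * z ** 2 * W / (V + W))

print(f"z = {z:.2f}, p = {p_value:.2e}, BF10 = {bf_10:.1f}")
```
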
15

Letramento probabilístico no Ensino Médio: um estudo de invariantes operatórios mobilizados por alunos / Probabilistic literacy in high school: a study of operational invariants mobilized by students

Caberlim, Cristiane Candido Luz 18 March 2015 (has links)
This research addresses the development of the learning process of probability. We first examined official documents and prior research on the teaching and learning of probability, and observed its growth within the field of Mathematics Education, confirming our hypotheses about the relevance of developing research on this theme. In this context, we formulated our objective: to diagnose the operational invariants mobilized by students in problem-solving situations, in order to identify elements that would support a proposal for a concept-construction model (a model of learning evolution). The work sought to relate the identified operational invariants to the elements of probabilistic literacy when the learning of probability mobilizes elements of geometric probability, articulating the classical and frequentist approaches to probability. To achieve this objective, we formulated the following research question: which elements of probabilistic literacy can be identified in the mobilization of operational invariants by third-year high-school students when solving problems that articulate the classical and frequentist approaches to the concept of probability? To answer this question, we used the Theory of Conceptual Fields, articulating it with the principles of probabilistic literacy. As research methodology we chose the case study. Our sequence comprises three didactic situations adapted from earlier research developed in our research group, called A - the Bernoulli urn, B - the Pixel urn, and C - the Franc-Carreau game; these situations were applied to a group of volunteer students attending the third year of high school at a private school in the city of São Paulo. The analysis of the protocols produced allowed us to identify that the students mobilized operational invariants enabling them to estimate probabilities, articulating the classical and frequentist approaches, which confirms the hypothesis of development of probabilistic literacy. The students described proportions in their own words, moving between the concrete and pseudo-concrete domains. No student attained full probabilistic literacy, which would require problem solving in the abstract domain, according to the scheme proposed for the process of abstraction to be followed during learning.
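
As an illustration of articulating the classical and frequentist approaches through geometric probability, the sketch below treats the Franc-Carreau situation mentioned above: the geometric (classical) probability that a coin lands entirely inside a square tile is compared with a frequentist estimate from simulated throws. Tile and coin dimensions are illustrative and not taken from the teaching situations themselves.

```python
import numpy as np

# Franc-Carreau: probability that a coin of diameter d, dropped on a floor of
# square tiles of side s, lands entirely inside one tile. The classical
# (geometric) value ((s - d) / s) ** 2 is compared with a frequentist estimate.

rng = np.random.default_rng(0)
s, d = 10.0, 3.0                 # tile side and coin diameter (illustrative)
n_throws = 100_000

classical = ((s - d) / s) ** 2

# Simulate the coin centre uniformly on one tile; the coin stays inside the
# tile when the centre is at least d/2 away from every edge.
centres = rng.uniform(0.0, s, size=(n_throws, 2))
inside = np.all((centres >= d / 2) & (centres <= s - d / 2), axis=1)
frequentist = inside.mean()

print(f"classical (geometric) probability: {classical:.3f}")
print(f"frequentist estimate from {n_throws} throws: {frequentist:.3f}")
```
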
16

Objective Bayesian Analysis of Kullback-Leibler Divergence of Two Multivariate Normal Distributions with Common Covariance Matrix and Star-shape Gaussian Graphical Model

Li, Zhonggai 22 July 2008 (has links)
This dissertation consists of four independent but related parts, each in a chapter. The first part is introductory; it provides background and preparation for the later parts. The second part discusses two multivariate normal populations with a common covariance matrix. The goal of this part is to derive objective/non-informative priors for the parameterizations and to use these priors to construct random posteriors of the Kullback-Leibler (KL) divergence of the two multivariate normal populations, which is proportional to the distance between the two means weighted by the common precision matrix. We use the Cholesky decomposition to re-parameterize the precision matrix. The KL divergence is a true distance measure between two multivariate normal populations with a common covariance matrix. Frequentist properties of the Bayesian procedure using these objective priors are studied through analytical and numerical tools. The third part considers the star-shape Gaussian graphical model, a special case of undirected Gaussian graphical models. It is a multivariate normal distribution in which the variables are grouped into one "global" variable set and several "local" variable sets; conditioned on the global variable set, the local variable sets are independent of each other. We adopt the Cholesky decomposition to re-parameterize the precision matrix and derive Jeffreys' prior, the reference prior, and invariant priors for the new parameterizations. The frequentist properties of the Bayesian procedure using these objective priors are also studied. The last part concentrates on objective Bayesian analysis of the partial correlation coefficient and its application to multivariate Gaussian models. / Ph. D.
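
For reference, the quantity studied in the second part reduces, for a common covariance matrix, to half the precision-weighted squared distance between the means. The sketch below evaluates it for illustrative values of the means and covariance.

```python
import numpy as np

# The Kullback-Leibler divergence between two multivariate normals with a
# common covariance matrix Sigma reduces to
#   KL = 0.5 * (mu1 - mu2)' Sigma^{-1} (mu1 - mu2),
# i.e. half the Mahalanobis distance between the means. Values are illustrative.

mu1 = np.array([1.0, 2.0])
mu2 = np.array([0.0, 1.0])
sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])          # common covariance matrix

diff = mu1 - mu2
kl = 0.5 * diff @ np.linalg.solve(sigma, diff)
print(f"KL divergence: {kl:.4f}")
```
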
17

Modélisation de la contamination par Listeria monocytogenes pour l'amélioration de la surveillance dans les industries agro-alimentaires / Contamination modeling of Listeria monocytogenes to improve surveillance in food industry

Commeau, Natalie 04 June 2012 (has links)
Food business operators are responsible for the quality of the products they put on the market. One way to assess this quality is to determine the distribution of the contamination. In this thesis, we used data on L. monocytogenes collected during the production of diced bacon and of cold-smoked salmon. We built hierarchical models to describe the concentration, taking into account or not several kinds of variability such as between-batch variability, estimated the parameters by Bayesian inference, and compared the models' capacity to simulate data close to the observations. We also compared parameter estimation by frequentist inference on two models, using both the raw data from the microbiological analyses and the same data converted into concentrations. In addition, we improved an existing model describing the fate of L. monocytogenes throughout the diced-bacon process. Since sampling plans are a tool for assessing product quality, we applied Bayesian decision theory to the pairs L. monocytogenes/diced bacon and L. monocytogenes/cold-smoked salmon at the end of the process, to determine the optimal number of samples to analyse per batch so that the average cost borne by the manufacturer is minimized. Finally, we compared several sampling plans for measuring the temperature of a meal in sauce prepared in an institutional food-service facility and placed in a blast chiller immediately after cooking. The aim was to select the best sampling plan given the risk of C. perfringens growth that the manager is willing to accept.
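
The sketch below simulates from a simple hierarchical contamination model of the kind described above: batch-level mean log10 concentrations vary around a process mean, packs vary within batches, and observed plate counts are Poisson given the analysed mass. All parameter values are illustrative assumptions, not estimates from the thesis.

```python
import numpy as np

# Hierarchical contamination model (simulation only): between-batch and
# within-batch variability on the log10 concentration scale, with Poisson
# plate counts given the analysed mass. All values are illustrative.

rng = np.random.default_rng(0)
n_batches, packs_per_batch = 20, 30
mu_process, sd_between, sd_within = 1.0, 0.6, 0.4   # log10 CFU/g
mass_g = 0.1                                        # mass analysed per pack

batch_means = rng.normal(mu_process, sd_between, size=n_batches)
log10_conc = rng.normal(batch_means[:, None], sd_within,
                        size=(n_batches, packs_per_batch))
counts = rng.poisson(10 ** log10_conc * mass_g)     # observed colony counts

print("mean observed count per pack:", counts.mean().round(2))
print("fraction of zero-count packs:", (counts == 0).mean().round(2))
```
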
18

Binocular Depth Perception, Probability, Fuzzy Logic, and Continuous Quantification of Uniqueness

Val, Petran 02 February 2018 (has links)
No description available.
19

Experimental identification of physical thermal models for demand response and performance evaluation / Identification expérimentale des modèles thermiques physiques pour la commande et la mesure des performances énergétiques

Raillon, Loic 16 May 2018 (has links)
The European Union's strategy for achieving the climate targets is to progressively increase the share of renewable energy in the energy mix and to use energy more efficiently from production to final consumption. This requires measuring the energy performance of buildings and associated systems, independently of weather conditions and user behaviour, to provide efficient and adapted retrofitting solutions. It also requires knowing the energy demand in order to anticipate energy production and storage (demand response). The estimation of building energy demand and the estimation of the energy performance of buildings share a common scientific challenge: the experimental identification of a physical model of the building's intrinsic behaviour. Grey-box models, determined from first principles, and black-box models, determined heuristically, can describe the same physical process. Relations between the physical and heuristic parameters exist if the black-box structure is chosen so that it matches the physical one. To find the best model representation, we propose to use Monte Carlo simulations to analyse the propagation of errors in the different model transformations, and factor prioritization to rank the parameters according to their influence. The results obtained show that identifying the parameters on the state-space representation is the better choice. Nonetheless, the physical information determined from the estimated parameters is reliable only if the model structure is invertible and the data are informative enough. We show how an identifiable model structure can be chosen, in particular thanks to the profile likelihood. Experimental identification consists of three phases: model selection, calibration, and validation. These three phases are detailed on a real-house experiment using both frequentist and Bayesian frameworks. More specifically, we propose an efficient Bayesian calibration to estimate the posterior distributions of the parameters, which allows simulation that takes all the uncertainties into account and is therefore suitable for model predictive control. We have also studied the capabilities of sequential Monte Carlo methods for estimating the states and parameters simultaneously. An adaptation of the recursive prediction error method into a sequential Monte Carlo framework is proposed and compared with a method from the literature. Sequential methods can be used to provide a first model fit and insights into the selected model structure while the data are being collected. Afterwards, the first fit can be refined if necessary by using iterative methods on the full batch of data.
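
As a minimal illustration of the grey-box identification discussed above, the sketch below simulates indoor temperature from a first-order (1R1C) thermal model and recovers the envelope resistance R and thermal capacity C by least squares. The model structure, data, and parameter values are illustrative assumptions; the thesis works with richer model structures, Bayesian calibration, and real-house measurements.

```python
import numpy as np
from scipy.optimize import least_squares

# First-order (1R1C) grey-box thermal model: the indoor temperature Ti follows
#   C dTi/dt = (Ta - Ti) / R + Q,
# with R the envelope resistance [K/W], C the thermal capacity [J/K], Ta the
# outdoor temperature and Q the heating power. Data are simulated; every value
# below is illustrative.

rng = np.random.default_rng(0)
dt, n = 3600.0, 200                                        # hourly samples
Ta = 5.0 + 3.0 * np.sin(np.arange(n) * 2 * np.pi / 24)     # outdoor temperature
Q = rng.uniform(0.0, 2000.0, size=n)                       # heating power [W]

def simulate(R, C, Ti0=20.0):
    Ti = np.empty(n)
    Ti[0] = Ti0
    a = np.exp(-dt / (R * C))        # exact discretization, stable for any R, C > 0
    for k in range(n - 1):
        steady = Ta[k] + R * Q[k]    # steady-state temperature during step k
        Ti[k + 1] = steady + (Ti[k] - steady) * a
    return Ti

true_R, true_C = 0.01, 1.0e7
y = simulate(true_R, true_C) + rng.normal(0.0, 0.1, size=n)   # noisy measurements

fit = least_squares(lambda th: simulate(th[0], th[1]) - y,
                    x0=[0.02, 5.0e6],
                    bounds=([1e-4, 1e5], [1.0, 1e9]),
                    x_scale=[1e-2, 1e7])
print("estimated R [K/W] and C [J/K]:", fit.x)
```
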
20

Estimating and Correcting the Effects of Model Selection Uncertainty

Nguefack Tsague, Georges Lucioni Edison 03 February 2006 (has links)
No description available.
