211 |
On conjugate families and Jeffreys priors for von Mises-Fisher distributions. Hornik, Kurt; Grün, Bettina. January 2013 (has links) (PDF)
This paper discusses characteristics of standard conjugate priors and their induced
posteriors in Bayesian inference for von Mises-Fisher distributions, using either the
canonical natural exponential family or the more commonly employed polar coordinate
parameterizations. We analyze when standard conjugate priors as well as posteriors are
proper, and investigate the Jeffreys prior for the von Mises-Fisher family. Finally, we
characterize the proper distributions in the standard conjugate family of the (matrix-valued)
von Mises-Fisher distributions on Stiefel manifolds.
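As a rough illustration of the conjugate updating discussed above, the following minimal sketch assumes the common (nu, beta, mu0) hyperparameterisation of the standard conjugate prior, p(mu, kappa) proportional to C_d(kappa)^nu exp(beta kappa mu0' mu); it is not the paper's notation, and the propriety conditions the paper analyzes are not checked here.

```python
import numpy as np

def vmf_conjugate_update(nu, beta, mu0, X):
    """Posterior hyperparameters of the standard conjugate prior for a vMF likelihood.

    Prior: p(mu, kappa) proportional to C_d(kappa)**nu * exp(beta * kappa * mu0 @ mu).
    Observing the rows of X (unit vectors) adds their resultant to beta * mu0
    and the sample size to nu.
    """
    X = np.atleast_2d(np.asarray(X, dtype=float))
    resultant = beta * np.asarray(mu0, dtype=float) + X.sum(axis=0)
    beta_post = np.linalg.norm(resultant)
    return nu + X.shape[0], beta_post, resultant / beta_post

# Example on the unit sphere S^2: three observations near the north pole
X = np.array([[0.05, 0.02, 1.0], [-0.03, 0.01, 1.0], [0.0, 0.04, 1.0]])
X = X / np.linalg.norm(X, axis=1, keepdims=True)
print(vmf_conjugate_update(nu=1.0, beta=0.5, mu0=[0.0, 0.0, 1.0], X=X))
```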
|
212 |
Permutation Tests for Structural Change. Zeileis, Achim; Hothorn, Torsten. January 2006 (has links) (PDF)
The supLM test for structural change is embedded into a permutation test framework for a simple location model. The resulting conditional permutation distribution is compared to the usual (unconditional) asymptotic distribution, showing that the power of the test can be clearly improved in small samples. Furthermore, generalizations are discussed for binary and multivariate dependent variables as well as model-based permutation testing for structural change. The procedures suggested are illustrated using both artificial and real-world data (number of youth homicides, employment discrimination data, structural-change publications, and stock returns). / Series: Research Report Series / Department of Statistics and Mathematics
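To make the permutation idea concrete, here is a minimal sketch, not the authors' supLM implementation: in a simple location model it uses a maximal two-sample t-type statistic over candidate break points and obtains its conditional null distribution by permuting the time order of the observations. The trimming and the number of permutations are arbitrary illustrative choices.

```python
import numpy as np

def sup_stat(y, trim=5):
    """Maximal absolute two-sample t-type statistic over all candidate break points."""
    n = len(y)
    stats = []
    for k in range(trim, n - trim):
        a, b = y[:k], y[k:]
        se = np.sqrt(np.var(a, ddof=1) / len(a) + np.var(b, ddof=1) / len(b))
        stats.append(abs(a.mean() - b.mean()) / se)
    return max(stats)

def permutation_pvalue(y, n_perm=999, seed=0):
    """Conditional p-value obtained by permuting the time order of the observations."""
    rng = np.random.default_rng(seed)
    observed = sup_stat(y)
    exceed = sum(sup_stat(rng.permutation(y)) >= observed for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)

# Example: the mean shifts from 0 to 1 after observation 30
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(1.0, 1.0, 30)])
print(permutation_pvalue(y))   # small p-value, indicating a structural change
```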
|
213 |
Prédiction, inférence sélective et quelques problèmes connexes [Prediction, selective inference and some related problems]. Yadegari, Iraj. January 2017 (has links)
Abstract: We study the problem of point estimation and predictive density estimation of the mean of a selected population, obtaining novel developments which include bias analysis, decomposition of risk, and problems with restricted parameters (Chapter 2). We propose efficient predictive density estimators in terms of Kullback-Leibler and Hellinger losses (Chapter 3), improving on plug-in procedures via a dual loss and via a variance expansion scheme. Finally (Chapter 4), we present findings on improving on the maximum likelihood estimator (MLE) of a bounded normal mean under a class of loss functions, including reflected normal loss, with implications for predictive density estimation. Namely, we give conditions on the loss and the width of the parameter space for which the Bayes estimator with respect to the boundary uniform prior dominates the MLE.
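A toy univariate normal calculation illustrates why Bayes predictive densities can improve on plug-in rules under Kullback-Leibler loss; this is the classical unrestricted, unselected case, not the selected-population setting of the thesis. For X ~ N(theta, var_x) and a future Y ~ N(theta, var_y), the plug-in predictive is N(x, var_y) while the flat-prior Bayes predictive is N(x, var_x + var_y), and the sketch below estimates both KL risks by simulation.

```python
import numpy as np

def kl_normal(mu_true, var_true, mu_hat, var_hat):
    """KL( N(mu_true, var_true) || N(mu_hat, var_hat) )."""
    return 0.5 * (np.log(var_hat / var_true)
                  + (var_true + (mu_true - mu_hat) ** 2) / var_hat - 1.0)

def kl_risks(theta=0.0, var_x=1.0, var_y=1.0, n_sim=200_000, seed=0):
    """Monte Carlo KL risks of the plug-in and flat-prior Bayes predictive densities."""
    rng = np.random.default_rng(seed)
    x = rng.normal(theta, np.sqrt(var_x), n_sim)
    plug_in = kl_normal(theta, var_y, x, var_y).mean()              # N(x, var_y)
    flat_bayes = kl_normal(theta, var_y, x, var_x + var_y).mean()   # N(x, var_x + var_y)
    return plug_in, flat_bayes

print(kl_risks())   # the flat-prior Bayes predictive has the smaller KL risk
```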
|
214 |
Eficiência de produção: um enfoque Bayesiano. / Production efficiency: a Bayesian approach. Cespedes, Juliana Garcia. 28 January 2004 (has links)
The use of stochastic production frontiers with multiple outputs has attracted special interest in areas of economics that face the problem of quantifying the technical efficiency of firms. In classical statistics, when firms produce several outputs, cost or profit functions are usually used to compute this efficiency, but they require more information about the data: besides input and output quantities, prices and costs are also needed. When only information on the inputs (x) and outputs (y) is available, one has to work with the production function, and the lack of sufficient statistics for some parameters makes the analysis difficult. The Bayesian approach can be a very useful tool in this case, because it is possible to obtain a sample from the probability distribution of the model parameters and, from it, the summaries of interest. To obtain samples from these distributions, Markov chain Monte Carlo methods such as Gibbs sampling, Metropolis-Hastings and slice sampling are used.
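As a sketch of the kind of sampling machinery mentioned above, the following is a generic random-walk Metropolis-Hastings sampler for a user-supplied log-posterior; it is not the thesis's frontier model, nor its Gibbs or slice samplers, and the toy target and tuning constants are illustrative assumptions.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.2, seed=0):
    """Random-walk Metropolis sampler for a user-supplied log-posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    current = log_post(theta)
    draws = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + step * rng.normal(size=theta.size)
        candidate = log_post(proposal)
        if np.log(rng.uniform()) < candidate - current:   # accept with prob min(1, ratio)
            theta, current = proposal, candidate
        draws[i] = theta
    return draws

# Toy target: posterior of (mu, log_sigma) for normal data under flat priors
y = np.random.default_rng(1).normal(2.0, 1.5, 50)
def log_post(par):
    mu, log_sigma = par
    return -len(y) * log_sigma - np.sum((y - mu) ** 2) / (2.0 * np.exp(2.0 * log_sigma))

draws = metropolis_hastings(log_post, theta0=[0.0, 0.0])
print(draws[1000:].mean(axis=0))   # rough posterior means of (mu, log_sigma) after burn-in
```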
|
215 |
Ponderação Bayesiana de modelos em regressão linear clássica / Bayesian model averaging in classic linear regression models. Nunes, Hélio Rubens de Carvalho. 07 October 2005 (has links)
The objective of this work is to present the Bayesian Model Averaging (BMA) methodology to researchers in the agronomic area and to discuss its advantages and limitations. BMA makes it possible to combine the results of different models about a given quantity of interest, so it offers an alternative data-analysis methodology to the usual model selection methods such as the Coefficient of Multiple Determination (R2), the Adjusted Coefficient of Multiple Determination (adjusted R2), Mallows' Cp statistic and the Prediction Error Sum of Squares (PRESS). Several studies have recently compared the performance of BMA with that of model selection methods, but many situations remain to be explored before a general conclusion about this methodology can be reached. In this work, BMA was applied to a data set from an agronomic experiment. The predictive performance of BMA was then compared with that of the selection methods cited above in a simulation study varying the degree of multicollinearity, measured by the condition number of the standardized X'X matrix, and the sample size. In each of these situations, 1000 samples were generated from descriptive summaries of real agronomic data sets. The predictive performance of the competing methodologies was measured by the logarithm of the predictive score (LEP). The empirical results indicate that BMA performs similarly to the usual model selection methods in the multicollinearity situations explored in this work.
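One common way to implement BMA for linear regression, shown below as a sketch rather than the thesis's exact procedure, is to enumerate predictor subsets and weight each model's predictions by exp(-BIC/2), a standard approximation to posterior model probabilities under a uniform prior over models; the simulated data are an illustrative assumption.

```python
import numpy as np
from itertools import combinations

def bic_fit(X, y):
    """Least-squares fit of a Gaussian linear model; returns (BIC, coefficients)."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + X.shape[1] * np.log(n), beta

def bma_predict(X, y, X_new):
    """Average predictions over all predictor subsets, weighted by exp(-BIC / 2)."""
    n, p = X.shape
    ones, ones_new = np.ones((n, 1)), np.ones((len(X_new), 1))
    log_w, preds = [], []
    for k in range(p + 1):
        for cols in combinations(range(p), k):
            cols = list(cols)
            b, beta = bic_fit(np.hstack([ones, X[:, cols]]), y)
            log_w.append(-0.5 * b)                    # uniform prior over models
            preds.append(np.hstack([ones_new, X_new[:, cols]]) @ beta)
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return np.array(preds).T @ w, w

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=60)
y_hat, weights = bma_predict(X, y, X[:5])
print(y_hat.round(2), weights.round(3))
```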
|
216 |
Statistical physics for compressed sensing and information hiding / Física Estatística para Compressão e Ocultação de Dados. Manoel, Antonio André Monteiro. 22 September 2015 (has links)
This thesis is divided into two parts. In the first part, we show how problems of statistical inference and combinatorial optimization may be approached within a unified framework that employs tools from fields as diverse as machine learning, statistical physics and information theory, allowing us to i) design algorithms to solve the problems, ii) analyze the performance of these algorithms both empirically and analytically, and iii) compare the results obtained with the optimal achievable ones. In the second part, we use this framework to study two specific problems, one of inference (compressed sensing) and the other of optimization (information hiding). In both cases, we review current approaches, identify their flaws, and propose new schemes to address these flaws, building on the use of message-passing algorithms, variational inference techniques, and spin glass models from statistical physics.
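As an illustration of the message-passing algorithms referred to above, the following is a minimal approximate message passing (AMP) sketch for compressed sensing with a soft-thresholding denoiser, in the style of Donoho, Maleki and Montanari; the threshold schedule, problem sizes and noise level are arbitrary assumptions, and this is not the scheme developed in the thesis.

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp_recover(A, y, n_iter=30, alpha=1.5):
    """Approximate message passing for y ~ A x with sparse x (soft-threshold denoiser)."""
    n, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(z) / np.sqrt(n)        # running estimate of effective noise
        x_new = soft(x + A.T @ z, alpha * sigma)
        onsager = (N / n) * np.mean(x_new != 0) * z   # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Sparse signal, i.i.d. Gaussian sensing matrix with columns of roughly unit norm
rng = np.random.default_rng(0)
N, n, k = 1000, 400, 40
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, N))
y = A @ x_true + 0.01 * rng.normal(size=n)
x_hat = amp_recover(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # small relative error
```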
|
217 |
Comprehension monitoring strategies: effects of self-questions on comprehension and inference processing = 閱讀操控策略 : 自設提問對閱讀理解及推論過程的效應 (Yue du cao kong ce lüe : zi she ti wen dui yue du li jie ji tui lun guo cheng de xiao ying). January 1995 (has links)
by Cheung Shuk Fan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 135-146). / by Cheung Shuk Fan. / ACKNOWLEDGEMENT --- p.i / ABSTRACT --- p.ii / TABLE OF CONTENTS --- p.iv / LIST OF TABLES --- p.viii / LIST OF FIGURES --- p.x / Chapter CHAPTER 1 --- INTRODUCTION / Chapter 1.1 --- Background of the Study --- p.1 / Chapter 1.2 --- Purpose of the Study --- p.3 / Chapter 1.3 --- Significance of the Study --- p.3 / Chapter CHAPTER 2 --- THEORETICAL FRAMEWORK / Chapter 2.1 --- Macrostructure Theory --- p.4 / Chapter 2.1.1 --- Text Processing --- p.4 / Chapter 2.1.2 --- Microstructure of Discourse --- p.6 / Chapter 2.1.3 --- Macrostructure of Discourse --- p.7 / Chapter 2.1.4 --- Macrorules --- p.8 / Chapter 2.1.5 --- Macro-operators --- p.9 / Chapter 2.1.6 --- Factors affecting Comprehension and Inference Processing --- p.10 / Chapter 2.2 --- Anderson's ACT Production Theory --- p.14 / Chapter 2.2.1 --- Memories and Knowledge Representation --- p.14 / Chapter 2.2.2 --- Psychological Processes --- p.16 / Chapter 2.2.3 --- Activation --- p.16 / Chapter 2.2.4 --- Cognitive Skills Acquisition --- p.17 / Chapter 2.2.5 --- Comprehension and Inference Processing under the ACT --- p.19 / Chapter CHAPTER 3 --- REVIEW OF LITERATURE / Chapter 3.1 --- Review on Self-Questioning Instructional Research --- p.25 / Chapter 3.1.1 --- Active Processing Theory --- p.25 / Chapter 3.1.2 --- Metacognitive Theory --- p.26 / Chapter 3.1.3 --- Schema Theory --- p.28 / Chapter 3.2 --- Studies on Instructional Strategies in Self-Questioning Research --- p.30 / Chapter 3.2.1 --- Types of Intervention Training adopted by Self-questioning Research --- p.30 / Chapter 3.2.2 --- Review of Studies on Instructional Strategies --- p.33 / Chapter 3.2.3 --- Instructional Processes --- p.38 / Chapter 3.3 --- Variables in Self-Questioning Research --- p.41 / Chapter 3.3.1 --- Types and Frequency of Questions --- p.42 / Chapter 3.3.2 --- Demand Task --- p.45 / Chapter 3.3.3 --- Text Control --- p.46 / Chapter 3.3.4 --- Assessment Formats --- p.49 / Chapter 3.4 --- Methodological Considerations in Self-Questioning Research --- p.50 / Chapter CHAPTER 4 --- METHODOLOGY / Chapter 4.1 --- Research Questions --- p.52 / Chapter 4.2 --- Variables and Hypotheses --- p.52 / Chapter 4.2.1 --- Variables --- p.52 / Chapter 4.2.2 --- Null Hypotheses --- p.53 / Chapter 4.3 --- Subjects --- p.54 / Chapter 4.4 --- Materials --- p.56 / Chapter 4.4.1 --- Readability Level of the Materials --- p.56 / Chapter 4.4.2 --- Interrater Reliability of Texts --- p.58 / Chapter 4.4.3 --- "Passages for Pretest, While-test and Posttest" --- p.59 / Chapter 4.4.4 --- Passages for Training --- p.61 / Chapter 4.4.5 --- Comprehension Questions --- p.62 / Chapter 4.5 --- Procedure --- p.62 / Chapter 4.5.1 --- Pretest --- p.63 / Chapter 4.5.2 --- The Training Program --- p.63 / Chapter 4.5.3 --- Posttest --- p.67 / Chapter 4.6 --- Data Collection --- p.68 / Chapter 4.6.1 --- Comprehension Scores --- p.68 / Chapter 4.6.2 --- Number and Types of Self-Questions --- p.70 / Chapter 4.7 --- Scoring for Comprehension Questions --- p.71 / Chapter 4.8 --- Data Analysis --- p.72 / Chapter 4.9 --- Limitations --- p.73 / Chapter CHAPTER 5 --- RESULTS / Chapter 5.1 --- Self-questioning Effects on Comprehension --- p.75 / Chapter 5.1.1 --- Results on Comprehension during While-test --- p.76 / Chapter 5.1.2 --- Results on Comprehension during Posttest --- p.82 / Chapter 5.1.3 --- A Summary of Self-questioning Effects on 
Comprehension --- p.86 / Chapter 5.2 --- Self-questioning Effects on Inference Generation --- p.87 / Chapter 5.2.1 --- Results on Inference Generation during While-test --- p.88 / Chapter 5.2.2 --- Results on Inference Generation during Posttest --- p.92 / Chapter 5.2.3 --- A Summary of Self-questioning Effects on Inference Generation --- p.96 / Chapter 5.3 --- Correlation between Comprehension and Inference Scores --- p.97 / Chapter 5.3.1 --- Correlation among Comprehension Scores --- p.98 / Chapter 5.3.2 --- Correlation among Inference Scores --- p.98 / Chapter 5.3.3 --- Correlation between Comprehension and Inference Scores --- p.99 / Chapter 5.3.4 --- A Summary --- p.100 / Chapter 5.4 --- The Effects of Nature and Levels of Self-questions on Comprehension and Inference Processing --- p.100 / Chapter 5.4.1 --- Distribution of Self-questions classified by Nature and Levels --- p.101 / Chapter 5.4.2 --- Nature of Self-questions and Comprehension and Inference Processing --- p.105 / Chapter 5.4.3 --- Levels of Self-questions and Comprehension and Inference Processing --- p.109 / Chapter 5.4.4 --- A Summary on Nature and Levels of Self-questions --- p.113 / Chapter CHAPTER 6 --- DISCUSSION / Chapter 6.1 --- Comprehension Monitoring Strategies and Self-questions --- p.116 / Chapter 6.2 --- Comprehension and Inference Processing in Reading --- p.117 / Chapter 6.3 --- Effects of Self-questions on Comprehension and Inference Processing --- p.119 / Chapter 6.3.1 --- The Effects of the Self-questioning Training Program --- p.119 / Chapter 6.3.2 --- Nature of Self-questions and Comprehension and Inference Processing --- p.122 / Chapter 6.3.3 --- Levels of Self-questions and Comprehension and Inference Processing --- p.125 / Chapter 6.4 --- Length of Passage and Comprehension and Inference Processing under the Effects of Self-questions --- p.127 / Chapter CHAPTER 7 --- CONCLUSION / Chapter 7.1 --- Summary of Findings --- p.129 / Chapter 7.2 --- Implications of Findings --- p.130 / Chapter 7.2.1 --- Self-questioning Intervention --- p.130 / Chapter 7.2.2 --- Comprehension and Inference --- p.132 / Chapter 7.2.3 --- Student-generated Questions --- p.133 / Chapter 7.3 --- Future Directions --- p.134 / REFERENCES --- p.135 / APPENDICES / Appendix A 19 Narrative texts --- p.147 / Appendix B Readability Evaluation Form --- p.156 / Appendix C 13 texts in cloze form --- p.159 / "Appendix D Pretest, While-test and Posttest passages with Comprehension Questions" --- p.166 / Appendix E Opinion Survey Evaluation Form --- p.180
|
218 |
Bayesian Predictive Inference and Multivariate Benchmarking for Small Area Means. Toto, Ma. Criselda Santos. 20 April 2010 (has links)
Direct survey estimates for small areas are likely to yield unacceptably large standard errors due to the small sample sizes in the areas. This makes it necessary to use models to "borrow strength" from related areas to find a more reliable estimate for a given area or, simultaneously, for several areas. For instance, in many applications, data on related multiple characteristics and auxiliary variables are available. Thus, multivariate modeling of related characteristics with multiple regression can be implemented. However, while model-based small area estimates are very useful, one potential difficulty with such estimates is that the combined estimate from all small areas does not usually match the value of the single estimate for the large area. Benchmarking is done by applying a constraint to ensure that the "total" of the small areas matches the "grand total". Benchmarking can help to prevent model failure, an important issue in small area estimation. It can also lead to improved prediction for most areas because of the information incorporated in the sample space due to the additional constraint. We describe both the univariate and multivariate Bayesian nested error regression models and develop a Bayesian predictive inference with a benchmarking constraint to estimate the finite population means of small areas. Our models are unique in the sense that the benchmarking constraint involves unit-level sampling weights and the prior distribution for the covariance of the area effects follows a specific structure. We use Markov chain Monte Carlo procedures to fit our models; specifically, we use Gibbs sampling to fit the multivariate model, while the univariate benchmarking model needs only random samples. We use two datasets, namely the crop data (corn and soybeans) from the LANDSAT and Enumerative survey and the NHANES III data (body mass index and bone mineral density), to illustrate our results. We also conduct a simulation study to assess frequentist properties of our models.
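The simplest form of the benchmarking idea, not the thesis's Bayesian constrained-posterior formulation, is ratio (raking) adjustment: scale the model-based small-area estimates so that their weighted total reproduces the reliable direct estimate for the large area. A sketch with made-up numbers:

```python
import numpy as np

def ratio_benchmark(estimates, weights, benchmark):
    """Scale model-based small-area estimates so their weighted total equals the benchmark."""
    return estimates * (benchmark / np.sum(weights * estimates))

# Model-based means for five small areas with population-share weights summing to one,
# so the weighted total is the overall mean; all figures are hypothetical.
theta_hat = np.array([10.2, 12.5, 9.8, 11.0, 13.1])
w = np.array([0.3, 0.2, 0.2, 0.2, 0.1])
direct_grand_mean = 11.4                    # reliable direct estimate for the large area
bench = ratio_benchmark(theta_hat, w, direct_grand_mean)
print(bench.round(3), np.sum(w * bench))    # the second value reproduces 11.4
```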
|
219 |
Towards Personalized Learning using Counterfactual Inference for Randomized Controlled Trials. Zhao, Siyuan. 26 April 2018 (has links)
Personalized learning considers that the causal effects of a studied learning intervention may differ for the individual student (e.g., maybe girls do better with video hints while boys do better with text hints). To evaluate a learning intervention inside ASSISTments, we run a randomized controlled trial (RCT) by randomly assigning students to either a control condition or a treatment condition. Making inferences about the causal effects of studied interventions is a central problem. Counterfactual inference answers "what if" questions, such as "Would this particular student benefit more if the student were given the video hint instead of the text hint when the student cannot solve a problem?". Counterfactual prediction provides a way to estimate individual treatment effects and helps us assign students to the learning intervention that leads to better learning. A variant of Michael Jordan's "Residual Transfer Networks" was proposed for the counterfactual inference. The model first uses feed-forward neural networks to learn a balancing representation of students by minimizing the distance between the distributions of the control and treated populations, and then adopts a residual block to estimate the individual treatment effect. Students in the RCT usually have done a number of problems prior to participating in it, so each student has a sequence of actions (a performance sequence). We proposed a pipeline that uses the performance sequence to improve the performance of counterfactual inference. Since deep learning has achieved a huge amount of success in learning representations from raw logged data, student representations were learned by applying a sequence autoencoder to the performance sequences, and these representations were then incorporated into the model for counterfactual inference. Empirical results showed that the representations learned from the sequence autoencoder improved the performance of counterfactual inference.
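For contrast with the residual-transfer-network approach described above, a minimal two-model (T-learner) baseline for individual treatment effects can be sketched as follows: fit separate outcome models on the treated and control arms and take the difference of their predictions. The covariate, outcome models and synthetic RCT below are illustrative assumptions, not the ASSISTments data or the thesis's model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_ite(X, t, y):
    """Estimate individual treatment effects with two separate outcome models (T-learner)."""
    model_treated = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
    model_control = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
    return model_treated.predict(X) - model_control.predict(X)

# Synthetic RCT: a hypothetical video hint helps students with a low prior score more
rng = np.random.default_rng(0)
n = 2000
prior_score = rng.uniform(0.0, 1.0, n)
t = rng.integers(0, 2, n)                         # random assignment, as in an RCT
true_effect = 0.3 * (1.0 - prior_score)
y = 0.5 * prior_score + t * true_effect + 0.05 * rng.normal(size=n)
ite_hat = t_learner_ite(prior_score.reshape(-1, 1), t, y)
print(np.corrcoef(ite_hat, true_effect)[0, 1])    # positive: estimated ITEs track the truth
```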
|
220 |
Inference system for selection of an appropriate multiple attribute decision making method. Nagashima, Kazunobu. January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Industrial Engineering.
|