561 |
Investigation on Bayesian Ying-Yang learning for model selection in unsupervised learning. / CUHK electronic theses & dissertations collection / Digital dissertation consortium. January 2005 (has links)
For factor analysis models, we develop an improved BYY harmony data smoothing learning criterion BYY-HDS by considering the dependence between the factors and observations. We make empirical comparisons of the BYY harmony empirical learning criterion BYY-HEC, BYY-HDS, the BYY automatic model selection method BYY-AUTO, AIC, CAIC, BIC, and CV for selecting the number of factors not only on simulated data sets of different sample sizes, noise variances, data dimensions and factor numbers, but also on two real data sets from air pollution data and sport track records, respectively. / Model selection is a critical issue in unsupervised learning. Conventionally, model selection is implemented in two phases by some statistical model selection criterion such as Akaike's information criterion (AIC), Bozdogan's consistent Akaike's information criterion (CAIC), Schwarz's Bayesian inference criterion (BIC) which formally coincides with the minimum description length (MDL) criterion, and the cross-validation (CV) criterion. These methods are very time-intensive and may become problematic when sample size is small. Recently, the Bayesian Ying-Yang (BYY) harmony learning has been developed as a unified framework with new mechanisms for model selection and regularization. In this thesis we make a systematic investigation on BYY learning as well as several typical model selection criteria for model selection on factor analysis models, Gaussian mixture models, and factor analysis mixture models. / The most remarkable finding of our study is that BYY-HDS is superior to its counterparts, especially when the sample size is small. AIC, BYY-HEC, BYY-AUTO and CV have a risk of overestimating, while BIC and CAIC have a risk of underestimating in most cases. BYY-AUTO is superior to other methods from a computational-cost point of view. The cross-validation method requires the highest computing cost. (Abstract shortened by UMI.) / Hu Xuelei. / "November 2005." / Adviser: Lei Xu. 
/ Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3899. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 131-142). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
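The two-phase criteria this record compares all penalize a model's maximized log-likelihood by a complexity term and select the minimizer. A minimal sketch (the log-likelihood values, dimensions and sample size below are invented for illustration; the standard free-parameter count for a factor model is assumed):

```python
import math

def fa_num_params(p, m):
    # Free parameters of a factor model with p observed variables and
    # m factors: p*m loadings + p unique variances, minus m(m-1)/2
    # rotational indeterminacies.
    return p * m + p - m * (m - 1) // 2

def criteria(log_lik, d, n):
    # Classical two-phase model selection criteria; each is minimized.
    aic = -2 * log_lik + 2 * d
    bic = -2 * log_lik + d * math.log(n)
    caic = -2 * log_lik + d * (math.log(n) + 1)
    return {"AIC": aic, "BIC": bic, "CAIC": caic}

# Hypothetical maximized log-likelihoods for m = 1..4 factors,
# with p = 6 observed variables and n = 100 samples.
p, n = 6, 100
log_liks = {1: -520.0, 2: -488.0, 3: -483.0, 4: -482.5}

scores = {m: criteria(ll, fa_num_params(p, m), n) for m, ll in log_liks.items()}
best_bic = min(scores, key=lambda m: scores[m]["BIC"])
```

With these illustrative numbers, BIC's heavier penalty selects two factors while AIC selects three, mirroring the abstract's observation that AIC tends toward overestimation.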
|
562 |
Some Bayesian methods for analyzing mixtures of normal distributions. / CUHK electronic theses & dissertations collection / Digital dissertation consortium. January 2003 (has links)
Juesheng Fu. / "April 2003." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (p. 124-132). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web.
|
563 |
Statistical methods for the analysis of corrosion data for integrity assessments. Tan, Hwei-Yang. January 2017 (has links)
In the oil and gas industry, statistical methods have been used for corrosion analysis for various asset systems such as pipelines, storage tanks, and so on. However, few industrial standards and guidelines provide comprehensive stepwise procedures for the usage of statistical approaches for corrosion analysis. For example, the UK HSE (2002) report "Guidelines for the use of statistics for analysis of sample inspection of corrosion" demonstrates how statistical methods can be used to evaluate corrosion samples, but the methods explained in the document are very basic and do not consider risk factors such as pressure, temperature, design, external factors and other factors for the analyses. Furthermore, the common industrial practice of applying a linear approximation to localised corrosion such as pitting is often considered inappropriate, as pit growth is not uniform. The aim of this research is to develop an approach that models the stochastic behaviour of localised corrosion and demonstrate how the influencing factors can be linked to the corrosion analyses, for predicting the remaining useful life of components in oil and gas plants. This research addresses a challenge in industry practice. Non-destructive testing (NDT) and inspection techniques have improved in recent years making more and more data available to asset operators. However, this means that these data need to be processed to extract meaningful information. Increasing computer power has enabled the use of statistics for such data processing. Statistical software such as R and OpenBUGS is available to users to explore new and pragmatic statistical methods (e.g. regression models and stochastic models) and fully use the available data in the field. In this thesis, we carry out extreme value analysis to determine maximum defect depth of an offshore conductor pipe and simulate the defect depth using geometric Brownian motion in Chapter 2. 
In Chapter 3, we introduce a Weibull density regression that is based on a gamma transformation proportional hazards model to analyse the corrosion data of piping deadlegs. The density regression model takes multiple influencing factors into account; this model can be used to extrapolate the corrosion density of inaccessible deadlegs with data available from other piping systems. In Chapter 4, we demonstrate how the corrosion prediction models in Chapters 2 and 3 could be used to predict the remaining useful life of these components. Chapter 1 sets the background to the techniques used, and Chapter 5 presents concluding remarks based on the application of the techniques.
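The two building blocks of Chapter 2, block maxima for extreme value analysis and geometric Brownian motion for defect-depth growth, can be sketched as follows (all parameter values are illustrative assumptions, not those of the thesis):

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Block maxima for extreme value analysis ---
# 500 simulated defect depths (mm), grouped into 10 inspection blocks
# of 50; the block maxima would then feed a GEV fit
# (e.g. scipy.stats.genextreme.fit).
depths = rng.gamma(shape=2.0, scale=0.4, size=500)
block_maxima = depths.reshape(10, 50).max(axis=1)

# --- Geometric Brownian motion for pit-depth growth ---
# dX_t = mu X_t dt + sigma X_t dW_t, discretised exactly via
# X_{t+dt} = X_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z).
def gbm_path(x0, mu, sigma, dt, n_steps, rng):
    z = rng.standard_normal(n_steps)
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return x0 * np.exp(np.concatenate([[0.0], np.cumsum(log_increments)]))

# 40 quarterly steps of simulated pit growth from an initial 1 mm depth.
path = gbm_path(x0=1.0, mu=0.1, sigma=0.2, dt=0.25, n_steps=40, rng=rng)
```

Because the exponential keeps every step positive, GBM respects the physical constraint that a defect depth cannot become negative, one reason it is a natural candidate for corrosion growth.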
|
564 |
The impact of river flow on the distribution and abundance of salmonid fishes. Warren, Andrew Mark. January 2017 (has links)
River flow regime is fundamental in determining lotic fish communities and populations, particularly those of salmonid fishes. Quantifying the effects of human-induced flow alteration on salmonids is a key challenge for conservation and water resources management. While qualitative responses to flow alteration are well characterised, a more intractable problem is quantifying responses in a way that is practical for environmental management. Using data drawn from the Environment Agency national database, I fitted generalised linear mixed models (GLMMs) using Bayesian inference to quantify the response of salmonid populations to the effects of impounding rivers, flow loss from rivers due to water abstraction, and the mitigating effects of flow restoration. I showed that in upland rivers downstream of impounded lakes, the magnitude of antecedent summer low flows had an important effect on the late summer abundance of 0+ salmonids Atlantic salmon (Salmo salar) and brown trout (Salmo trutta). In contrast, the abundance of 1+ salmon and brown trout appeared to be largely unresponsive to the same flows. I demonstrated that short-term flow cessation had a negative impact on the abundance of 1+ brown trout in the following spring, but that recovery was rapid with negligible longer-term consequences. I further established that flow restoration in upland streams impacted by water abstraction provided limited short-term benefits to salmonid abundance when compared with changes at control locations. However, while benefits to salmonid abundance were limited, I detected important benefits to the mean growth rates of 0+ and 1+ brown trout from flow restoration. I discuss the implications of my findings for salmonid management and conservation and propose a more evidence-based approach to fishery management, grounded in robust quantitative evidence derived from appropriate statistical models. 
The current approach to flow management for salmonids requires revision and I recommend an alternative approach based on quantitative evidence.
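A much-simplified stand-in for the GLMMs used in this thesis is a Bayesian Poisson regression of 0+ fish counts on a standardised flow covariate, fitted with a random-walk Metropolis sampler; the sketch below uses simulated data with invented parameter values, not the Environment Agency data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated data: counts of 0+ fish increase with an index of
# antecedent summer low flow (standardised). True intercept 1.0 and
# true slope 0.8 are illustrative assumptions.
n = 200
flow = rng.standard_normal(n)
counts = rng.poisson(np.exp(1.0 + 0.8 * flow))

def log_post(theta):
    # Poisson likelihood with log link, plus vague N(0, 10^2) priors.
    a, b = theta
    eta = a + b * flow
    log_lik = np.sum(counts * eta - np.exp(eta))  # log k! constant dropped
    log_prior = -0.5 * (a**2 + b**2) / 100.0
    return log_lik + log_prior

# Random-walk Metropolis sampler for (intercept, slope).
theta = np.zeros(2)
lp = log_post(theta)
samples = []
for _ in range(6000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[1000:])  # discard burn-in
slope_mean = post[:, 1].mean()
```

The posterior for the slope summarises how strongly abundance responds to flow; a full GLMM would add random effects for site and year on top of this structure.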
|
565 |
Investigations on number selection for finite mixture models and clustering analysis.January 1997 (has links)
by Yiu Ming Cheung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 92-99). / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.1.1 --- Bayesian YING-YANG Learning Theory and Number Selection Criterion --- p.5 / Chapter 1.2 --- General Motivation --- p.6 / Chapter 1.3 --- Contributions of the Thesis --- p.6 / Chapter 1.4 --- Other Related Contributions --- p.7 / Chapter 1.4.1 --- A Fast Number Detection Approach --- p.7 / Chapter 1.4.2 --- Application of RPCL to Prediction Models for Time Series Forecasting --- p.7 / Chapter 1.4.3 --- Publications --- p.8 / Chapter 1.5 --- Outline of the Thesis --- p.8 / Chapter 2 --- Open Problem: How Many Clusters? --- p.11 / Chapter 3 --- Bayesian YING-YANG Learning Theory: Review and Experiments --- p.17 / Chapter 3.1 --- Briefly Review of Bayesian YING-YANG Learning Theory --- p.18 / Chapter 3.2 --- Number Selection Criterion --- p.20 / Chapter 3.3 --- Experiments --- p.23 / Chapter 3.3.1 --- Experimental Purposes and Data Sets --- p.23 / Chapter 3.3.2 --- Experimental Results --- p.23 / Chapter 4 --- Conditions of Number Selection Criterion --- p.39 / Chapter 4.1 --- Alternative Condition of Number Selection Criterion --- p.40 / Chapter 4.2 --- Conditions of Special Hard-cut Criterion --- p.45 / Chapter 4.2.1 --- Criterion Conditions in Two-Gaussian Case --- p.45 / Chapter 4.2.2 --- Criterion Conditions in k*-Gaussian Case --- p.59 / Chapter 4.3 --- Experimental Results --- p.60 / Chapter 4.3.1 --- Purpose and Data Sets --- p.60 / Chapter 4.3.2 --- Experimental Results --- p.63 / Chapter 4.4 --- Discussion --- p.63 / Chapter 5 --- Application of Number Selection Criterion to Data Classification --- p.80 / Chapter 5.1 --- Unsupervised Classification --- p.80 / Chapter 5.1.1 --- Experiments --- p.81 / Chapter 5.2 --- Supervised Classification --- p.82 / Chapter 5.2.1 --- RBF Network --- 
p.85 / Chapter 5.2.2 --- Experiments --- p.86 / Chapter 6 --- Conclusion and Future Work --- p.89 / Chapter 6.1 --- Conclusion --- p.89 / Chapter 6.2 --- Future Work --- p.90 / Bibliography --- p.92 / Chapter A --- A Number Detection Approach for Equal-and-Isotropic Variance Clusters --- p.100 / Chapter A.1 --- Number Detection Approach --- p.100 / Chapter A.2 --- Demonstration Experiments --- p.102 / Chapter A.3 --- Remarks --- p.105 / Chapter B --- RBF Network with RPCL Approach --- p.106 / Chapter B.1 --- Introduction --- p.106 / Chapter B.2 --- Normalized RBF net and Extended Normalized RBF Net --- p.108 / Chapter B.3 --- Demonstration --- p.110 / Chapter B.4 --- Remarks --- p.113 / Chapter C --- Adaptive RPCL-CLP Model for Financial Forecasting --- p.114 / Chapter C.1 --- Introduction --- p.114 / Chapter C.2 --- Extraction of Input Patterns and Outputs --- p.115 / Chapter C.3 --- RPCL-CLP Model --- p.116 / Chapter C.3.1 --- RPCL-CLP Architecture --- p.116 / Chapter C.3.2 --- Training Stage of RPCL-CLP --- p.117 / Chapter C.3.3 --- Prediction Stage of RPCL-CLP --- p.122 / Chapter C.4 --- Adaptive RPCL-CLP Model --- p.122 / Chapter C.4.1 --- Data Pre-and-Post Processing --- p.122 / Chapter C.4.2 --- Architecture and Implementation --- p.122 / Chapter C.5 --- Computer Experiments --- p.125 / Chapter C.5.1 --- Data Sets and Experimental Purpose --- p.125 / Chapter C.5.2 --- Experimental Results --- p.126 / Chapter C.6 --- Conclusion --- p.134 / Chapter D --- Publication List --- p.135 / Chapter D.1 --- Publication List --- p.135
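The "How Many Clusters?" problem of Chapter 2 is conventionally attacked by fitting mixtures with an increasing number of components and scoring each fit with a selection criterion. A minimal 1-D sketch using EM and BIC (simulated two-cluster data; all settings are illustrative, and this is the conventional two-phase approach rather than the thesis's number selection criterion):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated 1-D data with two well-separated clusters.
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(8.0, 1.0, 200)])

def em_gmm_1d(x, k, n_iter=200):
    # EM for a k-component 1-D Gaussian mixture; returns the
    # log-likelihood at the final parameter values.
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread-out init
    var = np.full(k, x.var())
    for _ in range(n_iter):
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)  # E-step: responsibilities
        nk = r.sum(axis=0)                          # M-step
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-3)
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.log(dens.sum(axis=1)).sum()

def bic(log_lik, k, n):
    d = 3 * k - 1  # (k-1) free weights + k means + k variances
    return -2 * log_lik + d * np.log(n)

scores = {k: bic(em_gmm_1d(x, k), k, len(x)) for k in (1, 2, 3, 4)}
best_k = min(scores, key=scores.get)
```

This is exactly the costly enumerate-and-score procedure that motivates faster alternatives such as the number detection approach in Appendix A: every candidate k requires a full EM run.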
|
566 |
Analysis of structural equation models by Bayesian computation methods.January 1996 (has links)
by Jian-Qing Shi. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 118-123). / Chapter Chapter 1. --- Introduction and overview --- p.1 / Chapter Chapter 2. --- General methodology --- p.8 / Chapter Chapter 3. --- A Bayesian approach to confirmatory factor analysis --- p.16 / Chapter 3.1 --- Confirmatory factor analysis model and its prior --- p.16 / Chapter 3.2 --- The algorithm of data augmentation --- p.19 / Chapter 3.2.1 --- Data augmentation and one-run method --- p.19 / Chapter 3.2.2 --- Rao-Blackwellized estimation --- p.22 / Chapter 3.3 --- Asymptotic properties --- p.28 / Chapter 3.3.1 --- Asymptotic normality and posterior covariance matrix --- p.28 / Chapter 3.3.2 --- Goodness-of-fit statistic --- p.31 / Chapter Chapter 4. --- Bayesian inference for structural equation models --- p.34 / Chapter 4.1 --- LISREL Model and prior information --- p.34 / Chapter 4.2 --- Algorithm and conditional distributions --- p.38 / Chapter 4.2.1 --- Data augmentation algorithm --- p.38 / Chapter 4.2.2 --- Conditional distributions --- p.39 / Chapter 4.3 --- Posterior analysis --- p.44 / Chapter 4.3.1 --- Rao-Blackwellized estimation --- p.44 / Chapter 4.3.2 --- Asymptotic properties and goodness-of-fit statistic --- p.45 / Chapter 4.4 --- Simulation study --- p.47 / Chapter Chapter 5. --- A Bayesian estimation of factor score with non-standard data --- p.52 / Chapter 5.1 --- General Bayesian approach to polytomous data --- p.52 / Chapter 5.2 --- Covariance matrix of the posterior distribution --- p.61 / Chapter 5.3 --- Data augmentation --- p.65 / Chapter 5.4 --- EM algorithm --- p.68 / Chapter 5.5 --- Analysis of censored data --- p.72 / Chapter 5.5.1 --- General Bayesian approach --- p.72 / Chapter 5.5.2 --- EM algorithm --- p.76 / Chapter 5.6 --- Analysis of truncated data --- p.78 / Chapter Chapter 6. 
--- Structural equation model with continuous and polytomous data --- p.82 / Chapter 6.1 --- Factor analysis model with continuous and polytomous data --- p.83 / Chapter 6.1.1 --- Model and Bayesian inference --- p.83 / Chapter 6.1.2 --- Gibbs sampler algorithm --- p.85 / Chapter 6.1.3 --- Thresholds parameters --- p.89 / Chapter 6.1.4 --- Posterior analysis --- p.92 / Chapter 6.2 --- LISREL model with continuous and polytomous data --- p.94 / Chapter 6.2.1 --- LISREL model and Bayesian inference --- p.94 / Chapter 6.2.2 --- Posterior analysis --- p.101 / Chapter 6.3 --- Simulation study --- p.103 / Chapter Chapter 7. --- Further development --- p.108 / Chapter 7.1 --- More about one-run method --- p.108 / Chapter 7.2 --- Structural equation model with censored data --- p.111 / Chapter 7.3 --- Multilevel structural equation model --- p.114 / References --- p.118 / Appendix --- p.124 / Chapter A.1 --- The derivation of conditional distribution --- p.124 / Chapter A.2 --- Generate a random variate from normal density which restricted in an interval --- p.129 / Tables --- p.132 / Figures --- p.155
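The sampling task of Appendix A.2, generating a normal variate restricted to an interval, which the data augmentation and Gibbs steps for censored and polytomous data rely on, can be sketched by inverse-CDF sampling (a standard approach; the thesis's own algorithm may differ):

```python
import numpy as np
from scipy.stats import norm

def truncated_normal(mu, sigma, a, b, size, rng):
    """Draw from N(mu, sigma^2) restricted to [a, b] by mapping
    uniforms on [F(a), F(b)] back through the normal quantile function."""
    fa = norm.cdf((a - mu) / sigma)
    fb = norm.cdf((b - mu) / sigma)
    u = rng.uniform(fa, fb, size=size)
    return mu + sigma * norm.ppf(u)

rng = np.random.default_rng(1)
draws = truncated_normal(mu=0.0, sigma=1.0, a=0.5, b=2.0, size=1000, rng=rng)
```

Inside a Gibbs sampler for polytomous data, each latent response would be drawn this way with the interval `[a, b]` set by the threshold parameters of its observed category.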
|
567 |
Bayesian approach for a multigroup structural equation model with fixed covariates.January 2003 (has links)
Oi-Ping Chiu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 45-46). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Model --- p.4 / Chapter 2.1 --- General Model --- p.4 / Chapter 2.2 --- Constraint --- p.5 / Chapter 3 --- Bayesian Estimation via Gibbs Sampler --- p.7 / Chapter 3.1 --- Conditional Distributions --- p.10 / Chapter 3.2 --- Constraint --- p.15 / Chapter 3.3 --- Bayesian Estimation --- p.16 / Chapter 4 --- Model Comparison using the Bayes Factor --- p.18 / Chapter 5 --- Simulation Study --- p.22 / Chapter 6 --- Real Example --- p.27 / Chapter 6.1 --- Model Selection --- p.29 / Chapter 6.2 --- Bayesian Estimate --- p.30 / Chapter 6.3 --- Sensitivity Analysis --- p.31 / Chapter 7 --- Discussion --- p.32 / Chapter A --- p.34 / Bibliography --- p.45
|
568 |
FBST seqüencial / Sequential FBST. Marcelo Leme de Arruda. 04 June 2012 (has links)
FBST (Full Bayesian Significance Test) is a tool developed by Pereira and Stern (1999) as a Bayesian alternative to tests of precise hypotheses. Since its introduction, the FBST has proved very useful for solving problems for which no frequentist solution existed. The test, however, requires that the sample be collected only once, after which the posterior distribution of the parameters is obtained and the evidence measure computed. Motivated by this aspect, analytic and computational approaches are presented for extending the FBST to the sequential decision context (DeGroot, 2004). An algorithm for executing the Sequential FBST is presented and analyzed, together with the source code of a software implementation based on this algorithm.
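The FBST evidence value for a precise hypothesis H: θ = θ₀ is one minus the posterior mass of the "tangential" set, the region where the posterior density exceeds its value at θ₀. A grid-based sketch for a binomial model with a Beta posterior (the data values are invented for illustration):

```python
import numpy as np
from scipy.stats import beta

# Binomial data: x successes in n trials with a uniform Beta(1,1)
# prior, so the posterior is Beta(x + 1, n - x + 1).
x, n = 13, 20
post = beta(x + 1, n - x + 1)

def fbst_evidence(post, theta0, grid_size=100_000):
    # e-value = 1 - posterior mass of {theta : p(theta) > p(theta0)},
    # approximated by a Riemann sum over a fine grid on [0, 1].
    grid = np.linspace(0.0, 1.0, grid_size)
    dens = post.pdf(grid)
    dx = grid[1] - grid[0]
    tangential_mass = np.where(dens > post.pdf(theta0), dens, 0.0).sum() * dx
    return 1.0 - tangential_mass

ev = fbst_evidence(post, theta0=0.5)
```

Large e-values support H (at the posterior mode the tangential set is empty and the e-value is 1), while values near 0 indicate strong evidence against the precise hypothesis.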
|
569 |
Dynamic bayesian statistical models for the estimation of the origin-destination matrix. Anselmo Ramalho Pitombeira Neto. 29 June 2015 (has links)
In transportation planning, one of the first steps is to estimate the travel demand. A product of the estimation process is the so-called origin-destination matrix (OD matrix), whose entries correspond to the number of trips between pairs of zones in a geographic region in a reference time period. Traditionally, the OD matrix has been estimated through direct methods, such as home-based surveys, road-side interviews and license plate automatic recognition. These direct methods require large samples to achieve a target statistical error, which may be technically or economically infeasible. Alternatively, one can use a statistical model to indirectly estimate the OD matrix from observed traffic volumes on links of the transportation network. The first estimation models proposed in the literature assume that traffic volumes in a sequence of days are independent and identically distributed samples of a static probability distribution. Moreover, static estimation models do not allow for variations in mean OD flows or non-constant variability over time. In contrast, day-to-day dynamic models are in theory more capable of capturing underlying changes of system parameters which are only indirectly observed through variations in traffic volumes. Even so, there is still a dearth of statistical models in the literature which account for the day-to-day dynamic evolution of transportation systems. In this thesis, our objective is to assess the potential gains and limitations of day-to-day dynamic models for the estimation of the OD matrix based on link volumes. First, we review the main static and dynamic models available in the literature. We then describe our proposed day-to-day dynamic Bayesian model based on the theory of linear dynamic models. The proposed model is tested by means of computational experiments and compared with a static estimation model and with the generalized least squares (GLS) model. The results show some advantage in favor of dynamic models in informative scenarios, while in non-informative scenarios the performance of the models was equivalent. The experiments also indicate a significant dependence of the estimation errors on the assignment matrices.
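The core inverse problem here, recovering OD flows f from observed link volumes x ≈ A f, where A is the assignment matrix mapping OD pairs to links, can be sketched with ordinary least squares on a toy network (the network, matrices and noise level are invented; the GLS model the thesis compares against additionally weights by the volume covariance):

```python
import numpy as np

# Toy assignment matrix: 5 links x 3 OD pairs; entry (i, j) is the
# proportion of OD pair j's flow that uses link i.
A = np.array([
    [1.0, 0.0, 0.5],
    [0.0, 1.0, 0.5],
    [1.0, 1.0, 0.0],
    [0.0, 0.5, 1.0],
    [0.5, 0.0, 1.0],
])
true_f = np.array([100.0, 150.0, 80.0])  # true OD flows (trips/day)

rng = np.random.default_rng(3)
# Observed link volumes over 30 days with day-to-day noise.
X = A @ true_f + rng.normal(0.0, 5.0, size=(30, 5))

# Estimate the OD flows by least squares on the mean link volumes.
x_bar = X.mean(axis=0)
f_hat, *_ = np.linalg.lstsq(A, x_bar, rcond=None)
```

In realistic networks the number of OD pairs far exceeds the number of observed links, so A is underdetermined; that indeterminacy is what prior information, whether a seed matrix or, as in this thesis, a dynamic Bayesian prior, must resolve.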
|
570 |
Métodos bayesianos em metanálise: especificação da distribuição a priori para a variabilidade entre os estudos / Bayesian methods in meta-analysis: specification of prior distributions for the between-studies variability. Suleimy Cristina Mazin. 27 November 2009 (has links)
MAZIN, S. C. Bayesian methods in meta-analysis: specification of prior distributions for the between-studies variability. 2009. 175 p. Dissertation (Master's degree) - Faculty of Medicine of Ribeirão Preto, University of São Paulo, Ribeirão Preto, 2009. / Health professionals, researchers and others responsible for health policy are often overwhelmed by amounts of information that are not always manageable, which makes the systematic review an efficient way to integrate existing knowledge, generating information that may support decision making. In a systematic review, data from different studies can be quantitatively combined by statistical methods called meta-analysis. Meta-analysis is a statistical tool used to combine or integrate the results of several independent studies on the same topic. Among the studies that make up a meta-analysis there may be variability that is not due to chance, called heterogeneity. Heterogeneity is usually tested by Cochran's Q test or quantified by the I² statistic. The investigation of heterogeneity is of great importance in meta-analysis because its absence or presence indicates the most appropriate statistical model: in the absence of this variability a fixed-effect model is used, while in its presence a random-effects model is used to incorporate the between-study variability. Many meta-analyses comprise only a few studies, and in those cases it is difficult to estimate the meta-analytic effect measures by classical theory, which relies on asymptotic assumptions. The Bayesian approach does not have this problem, but the specification of the prior distribution requires great care. One advantage of Bayesian inference is the ability to predict an outcome for a future study. In this work, we carried out a study on the specification of the prior distribution for the parameter that expresses the variance between studies, and found that there is no single choice that characterizes a prior distribution which can be considered "non-informative" in all situations. The choice of a "non-informative" prior distribution depends on the heterogeneity among the studies in the meta-analysis. Thus, the prior distribution should be chosen very carefully and followed by a sensitivity analysis, especially when the number of studies is small.
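The heterogeneity quantities referred to above, Cochran's Q and the I² statistic, together with the classical DerSimonian-Laird moment estimate of the between-study variance τ², can be computed directly (the per-study effect estimates and variances below are invented for illustration):

```python
import numpy as np

# Per-study effect estimates (e.g. log odds ratios) and their
# within-study variances -- illustrative numbers only.
y = np.array([0.10, 0.55, -0.20, 0.70, 0.30])
v = np.array([0.04, 0.05, 0.03, 0.06, 0.04])

w = 1.0 / v                       # fixed-effect (inverse-variance) weights
y_fe = np.sum(w * y) / np.sum(w)  # fixed-effect pooled estimate
Q = np.sum(w * (y - y_fe) ** 2)   # Cochran's Q statistic
k = len(y)
I2 = max(0.0, (Q - (k - 1)) / Q)  # I^2: share of variation beyond chance

# DerSimonian-Laird moment estimator of the between-study variance.
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled estimate incorporating tau^2.
w_re = 1.0 / (v + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
```

A Bayesian random-effects analysis replaces the point estimate `tau2` with a full prior on τ², which is exactly where the prior-sensitivity issue studied in this dissertation arises, especially with so few studies.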
|