21

Bivariate Random Effects Meta-Analysis Models for Diagnostic Test Accuracy Studies Using Arcsine-Based Transformations

Negeri, Zelalem 11 1900
A diagnostic test classifies patients according to their disease status. Various meta-analytic models for diagnostic test accuracy studies have been developed to synthesize the sensitivity and specificity of a test across studies. Because sensitivity and specificity are likely correlated, modeling the two parameters jointly with a bivariate model is desirable. Historically, the logit transformation has been used to model sensitivity and specificity pairs from multiple studies as a bivariate normal. In this thesis, we propose two alternative transformations, the arcsine square root and the Freeman-Tukey double arcsine, within a bivariate random-effects model for meta-analyzing diagnostic test accuracy studies. We evaluated the three transformations (the commonly used logit and the two proposed) in an extensive simulation study in terms of bias, root mean square error, and coverage probability, and we illustrate the methods using three real data sets. The simulations showed that, for smaller sample sizes and higher true values of sensitivity and specificity, the proposed transformations are less biased and have smaller root mean square error and better coverage probability than the standard logit transformation, regardless of the number of studies. For large sample sizes, the logit transformation is less biased and has better coverage probability regardless of the true values of sensitivity and specificity and the number of studies, and it has better root mean square error for moderate and large numbers of studies. The point estimates of sensitivity and specificity from the three real data sets follow patterns similar to those seen in our simulation. / Thesis / Master of Science (MSc)
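The two transformations the thesis proposes can be sketched for a single study's sensitivity. A minimal illustration (the study counts below are hypothetical, not taken from the thesis's data sets):

```python
import numpy as np

def arcsine_sqrt(x, n):
    """Arcsine square-root transformation of the proportion x/n."""
    return np.arcsin(np.sqrt(x / n))

def freeman_tukey(x, n):
    """Freeman-Tukey double arcsine transformation of x events out of n."""
    return 0.5 * (np.arcsin(np.sqrt(x / (n + 1)))
                  + np.arcsin(np.sqrt((x + 1) / (n + 1))))

# Hypothetical study: 45 true positives among 50 diseased subjects
t1 = arcsine_sqrt(45, 50)    # transformed sensitivity
t2 = freeman_tukey(45, 50)
```

Both maps stabilize the variance of a binomial proportion near the boundary, which is the regime (high sensitivity/specificity, small samples) where the simulations favor them over the logit.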
22

The impacts of pregnancy status, abortion risk, and other factors on replacement female values in Mississippi cattle auctions

Marshall, Tori Lee 09 August 2019
A replacement female's value is determined primarily by her reproductive potential and the expected value of the calves she produces. To improve sales revenues, sellers benefit from understanding buyers' valuation of physical characteristics related to reproductive potential and calf value. The goal of this research is to identify the impact of physical characteristics on the valuation of individual replacement females through a hedonic pricing model. Results suggest all facets of pregnancy (i.e., pregnancy status, months pregnant, expected due date, and cow-calf pairing) are crucial to valuation. In particular, pregnant replacement females are discounted relative to non-pregnant females, ascending in value as months pregnant increase and reaching a premium over non-pregnant status at approximately five months; newly pregnant replacements are likely discounted because of higher abortion risk. Finally, the largest premiums were observed for cow-calf pairs, where the risk of abortion is zero and the replacement female has proven her reproductive potential.
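A hedonic pricing model of this kind is, at its core, a regression of sale price on physical characteristics. The sketch below fits one by ordinary least squares on simulated data; all prices, coefficients, and the data-generating process are invented for illustration and are not the study's estimates:

```python
import numpy as np

# Simulated auction data echoing the pattern described above: an
# early-pregnancy discount that shrinks as gestation advances.
rng = np.random.default_rng(42)
n = 300
pregnant = rng.integers(0, 2, n)                # 1 if pregnant, else open
months = pregnant * rng.integers(1, 10, n)      # months pregnant (0 if open)
price = 1200.0 - 150.0 * pregnant + 35.0 * months + rng.normal(0.0, 25.0, n)

# Hedonic model: regress price on the characteristics
X = np.column_stack([np.ones(n), pregnant, months])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)

# Months pregnant at which the pregnancy discount is fully offset
breakeven_months = -beta[1] / beta[2]
```

With these made-up coefficients the break-even point lands near the "approximately five months" the abstract reports, which is the kind of quantity a hedonic model recovers.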
23

The Influence of Cost-sharing Programs on Southern Non-industrial Private Forests

Goodwin, Christopher C. H. 11 January 2002
This study was undertaken in response to concerns that decreasing levels of funding for government tree-planting cost-share programs will significantly reduce non-industrial private tree planting in the South. The purpose of this study is to quantify how the funding of various cost-share programs and market signals interact to affect the level of private tree planting. The results indicate that the ACP, CRP, and Soil Bank programs have been more influential than the FIP, FRM, FSP, SIP, and state-run subsidy programs. Reductions in CRP funding will result in less tree planting, while it is not clear that funding reductions in FIP, or other programs targeted toward reforestation after harvest, will have a negative impact on tree-planting levels. / Master of Science
24

META-ANALYSIS OF GENE EXPRESSION STUDIES

Siangphoe, Umaporn 01 January 2015
Combining effect sizes from individual studies using random-effects models is a common approach in high-dimensional gene expression data. However, unknown study heterogeneity can arise from inconsistent sample quality and experimental conditions, and high heterogeneity of effect sizes can reduce the statistical power of the models. We propose two new methods for random-effects estimation, along with measures of model variation and of the strength of study heterogeneity. We then develop a statistical technique to test the significance of the random effects and identify heterogeneous genes. We also propose a meta-analytic approach that incorporates informative weights into the random-effects meta-analysis models. We compare the proposed methods with standard and existing meta-analytic techniques in both classical and Bayesian frameworks, and demonstrate the results through a series of simulations and an application to gene expression data on neurodegenerative diseases.
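The baseline this line of work builds on, combining per-study effect sizes under a random-effects model, can be sketched with the classical DerSimonian-Laird estimator (the effect sizes and variances below are made up for illustration):

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooled estimate via the DerSimonian-Laird method.
    y: per-study effect sizes; v: their within-study variances."""
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    return np.sum(w_star * y) / np.sum(w_star), tau2

# Made-up effect sizes and within-study variances from five studies
effects = np.array([0.8, 1.1, 0.4, 1.5, 0.9])
variances = np.array([0.04, 0.06, 0.05, 0.09, 0.03])
pooled, tau2 = dersimonian_laird(effects, variances)
```

The estimated between-study variance `tau2` is the quantity whose inflation, in a gene-expression meta-analysis, erodes power and motivates the heterogeneity tests proposed above.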
25

The Variation of a Teacher's Classroom Observation Ratings across Multiple Classrooms

Lei, Xiaoxuan 06 January 2017
Classroom observations have been increasingly used for teacher evaluations, so it is important to examine the measurement quality and the use of observation ratings. When a teacher is observed in multiple classrooms, his or her observation ratings may vary across classrooms; in that case, using ratings from one classroom per teacher may not adequately represent the teacher's quality of instruction. However, the fact that classrooms are nested within teachers is usually not considered when classroom observation data are analyzed. Drawing on the Measures of Effective Teaching dataset, this dissertation examined the variation of a teacher's classroom observation ratings across his or her multiple classrooms. To account for teacher-level, school-level, and rater-level variation, a cross-classified random effects model was used for the analysis. Two research questions were addressed: (1) What is the variation of a teacher's classroom observation ratings across multiple classrooms? (2) To what extent is the classroom-level variation within teachers explained by observable classroom characteristics? The results suggested that math classrooms accounted for 4.9% to 14.7% of the variance in classroom observation ratings, and English Language Arts classrooms for 6.7% to 15.5%. The results also showed that the classroom characteristics examined (i.e., class size, percent of minority students, percent of male students, percent of English language learners, percent of students eligible for free or reduced lunch, and percent of students with disabilities) contributed little to explaining the classroom-level variation in the ratings. The results of this dissertation indicate that teachers' multiple classrooms should be taken into consideration when classroom observation ratings are used to evaluate teachers in high-stakes settings.
In addition, other classroom-level factors that could contribute to explaining the classroom-level variation in classroom observation ratings should be investigated in future research.
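The variance shares reported above fall out directly from the model's estimated variance components; a toy illustration with entirely hypothetical components (not the dissertation's estimates):

```python
# Hypothetical variance components from a cross-classified random effects
# model of observation ratings (teacher, classroom, rater, residual).
var_teacher, var_classroom, var_rater, var_residual = 0.30, 0.05, 0.10, 0.55
total = var_teacher + var_classroom + var_rater + var_residual

# Share of rating variance attributable to the classroom level
classroom_share = var_classroom / total
print(f"{classroom_share:.1%}")  # prints "5.0%"
```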
26

Bayesian modelling of recurrent pipe failures in urban water systems using non-homogeneous Poisson processes with latent structure

Economou, Theodoros January 2010
Recurrent events are very common in a wide range of scientific disciplines. Most statistical models developed to characterise recurrent events derive from either reliability theory or survival analysis. This thesis concentrates on applications arising from reliability, which in general involve the study of components or devices whose recurring event is failure. Specifically, interest lies in repairable components that experience a number of failures during their lifetime. The goal is to develop statistical models that give a good understanding of the driving force behind the failures. A particular counting process is adopted, the non-homogeneous Poisson process (NHPP), whose rate of occurrence of failures depends on time. The primary application considered in the thesis is the prediction of underground water pipe bursts, although the methods described have more general scope. First, a Bayesian mixed-effects NHPP model is developed and applied to a network of water pipes using MCMC. The model is then extended to a mixture of NHPPs. Further, a special mixture case, the zero-inflated NHPP model, is developed to cope with data involving a large number of pipes that have never failed; this model is applied to the same pipe network. Quite often, data on recurrent failures over time are aggregated: for instance, the times of failures are unknown and only the total number of failures is available. Aggregated versions of the NHPP model and its zero-inflated counterpart are developed to accommodate such data and are applied to the aggregated version of the earlier data set. Complex devices in random environments often exhibit what may be termed state changes in their behaviour, which may be caused by unobserved and possibly non-stationary processes such as severe weather changes.
A hidden semi-Markov NHPP model is formulated: an NHPP modulated by an unobserved semi-Markov process. An algorithm is developed to evaluate the likelihood of this model, and a Metropolis-Hastings sampler is constructed for parameter estimation. Simulation studies test the implementation, and an illustrative application of the model is presented. The thesis concludes with a general discussion and a list of possible generalisations and extensions, as well as possible applications beyond the ones considered.
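A concrete NHPP commonly used in repairable-systems reliability is the power-law process, whose failure rate grows with age when its shape parameter exceeds one. A small sketch (parameter values hypothetical, not taken from the thesis, and this particular intensity is one standard choice rather than necessarily the thesis's):

```python
def intensity(t, beta, theta):
    """Power-law NHPP failure rate: lambda(t) = (beta/theta) * (t/theta)**(beta - 1)."""
    return (beta / theta) * (t / theta) ** (beta - 1)

def expected_failures(t, beta, theta):
    """Mean cumulative number of failures by time t: Lambda(t) = (t/theta)**beta."""
    return (t / theta) ** beta

# beta > 1 gives a failure rate that increases with pipe age (deterioration)
m = expected_failures(10.0, beta=1.5, theta=4.0)
```

Because `Lambda(t)` is the integral of `lambda(t)`, the expected burst count over any window follows directly, which is what makes the NHPP useful for predicting pipe bursts.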
27

LATENT VARIABLE MODELS GIVEN INCOMPLETELY OBSERVED SURROGATE OUTCOMES AND COVARIATES

Ren, Chunfeng 01 January 2014
Latent variable models (LVMs) are commonly used when the outcome of main interest is an unobservable measure that is associated with multiple observed surrogate outcomes and affected by potential risk factors. This thesis develops an approach for efficiently handling missing surrogate outcomes and covariates in two- and three-level latent variable models, a setting for which statistical methodology and computational software have been lacking. We analyze two-level LVMs for longitudinal data from the National Growth and Health Study, where surrogate outcomes and covariates are subject to missingness at any of the levels. A conventional method for efficient handling of missing data is to re-express the desired model as a joint distribution of the variables, including the surrogate outcomes subject to missingness, conditional on the completely observed covariates; estimate the joint model by maximum likelihood; and then transform back to the desired model. The joint model, however, generally identifies more parameters than desired, and an over-identified joint model produces biased estimates of the LVM. It is therefore necessary to describe how to impose constraints on the joint model so that it is in one-to-one correspondence with the desired model, yielding unbiased estimation. The constrained joint model handles missing data efficiently under the assumption of ignorable missingness and is estimated by a modified application of the expectation-maximization (EM) algorithm.
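The EM idea for ignorably missing outcomes can be illustrated in miniature: impute each missing outcome by its conditional mean given the observed covariates (E-step), then refit the model on the completed data (M-step), and iterate. This is a toy linear-regression sketch, not the thesis's constrained joint LVM; the data are invented:

```python
import numpy as np

# Simulated data with ~30% of the outcome missing completely at random
rng = np.random.default_rng(1)
n = 500
x = rng.normal(0.0, 1.0, n)
y = 2.0 + 1.5 * x + rng.normal(0.0, 0.5, n)
y_obs = y.copy()
y_obs[rng.random(n) < 0.3] = np.nan

mask = np.isnan(y_obs)
y_fill = np.where(mask, np.nanmean(y_obs), y_obs)  # crude initial imputation
X = np.column_stack([np.ones(n), x])
for _ in range(20):
    beta, *_ = np.linalg.lstsq(X, y_fill, rcond=None)  # M-step: refit
    y_fill = np.where(mask, X @ beta, y_obs)           # E-step: impute E[y | x]
```

Under ignorable missingness the iterations recover the true coefficients; the thesis's contribution is doing this efficiently in the far harder multilevel latent-variable setting with constraints guaranteeing identifiability.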
28

Prediction methods for mixed logistic regression with k random effects

Tamura, Karin Ayumi 17 December 2012
The prediction of a future observation in a mixed model is a problem that has been extensively studied. This work treats the problem of assigning values to the random effects and/or the response of new groups in the mixed logistic model, where the aim is to predict future outcomes based on previously estimated parameters. The literature offers prediction methods for this model that consider only a random intercept; for mixed logistic regression with k random effects, there were no proposed methods for predicting the random effects of new groups. We therefore propose new approaches based on the average-zero method, the empirical best predictor (EBP), linear regression, and nonparametric regression models. All prediction methods were evaluated under three estimation methods: Laplace approximation, adaptive Gauss-Hermite quadrature, and penalized quasi-likelihood. The estimation and prediction methods were analyzed through simulation studies based on seven scenarios, comparing different values of group size, random-effects standard deviations, correlation between random effects, and the fixed effect. The prediction methods were applied to two real data sets, both with hierarchical structure, where the objective was to predict the response for new groups. The results indicate that the EBP gave the best predictive performance but at a high computational cost for large data sets; the other methodologies achieved prediction levels similar to the EBP while drastically reducing the computational effort.
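The simplest of the approaches compared above, the average-zero method, can be sketched directly: since a new group's random effects are unobserved, set them to their prior mean of zero and predict from the fixed effects alone (the coefficients and covariates below are hypothetical):

```python
import numpy as np

def predict_new_group(x, beta, b_new=0.0):
    """Predicted probability for an observation from a new group.
    The average-zero method sets the unseen group's random intercept
    b_new to its prior mean of zero."""
    eta = x @ beta + b_new
    return 1.0 / (1.0 + np.exp(-eta))

beta = np.array([-0.5, 1.2])     # hypothetical fixed effects
x_new = np.array([1.0, 0.8])     # intercept term + one covariate
p = predict_new_group(x_new, beta)
```

The EBP improves on this by replacing the zero with an estimate of the new group's random effects, which is why it predicts better at a higher computational cost.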
29

Cure rate models with random effects

Lopes, Célia Mendes Carvalho 29 April 2008
This work presents two survival models with a cure fraction and random effects: one based on the Chen-Ibrahim-Sinha model for the cure fraction and the other on the mixture model. Both classical and Bayesian approaches are studied: classical inference uses REML estimators, while the Bayesian analysis uses Metropolis-Hastings. Simulation studies assess the accuracy of the parameter estimates and their standard deviations. The use of the models is illustrated with an analysis of oropharyngeal cancer data.
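The mixture formulation referred to above has survival function S(t) = pi + (1 - pi) * S0(t), where pi is the cured fraction. A minimal sketch, with an exponential latency distribution assumed purely for illustration (the thesis does not necessarily use this latency):

```python
import math

def mixture_cure_survival(t, pi, lam):
    """Mixture cure model: S(t) = pi + (1 - pi) * S0(t), with an assumed
    exponential latency S0(t) = exp(-lam * t) for the uncured fraction."""
    return pi + (1.0 - pi) * math.exp(-lam * t)

# At t = 0 everyone is alive; as t grows, S(t) levels off at the cure fraction
s_start = mixture_cure_survival(0.0, pi=0.3, lam=0.2)
s_late = mixture_cure_survival(50.0, pi=0.3, lam=0.2)
```

The plateau at pi rather than zero is what distinguishes cure models from ordinary survival models, since a fraction of patients never experiences the event.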
