1

Characterization of a Weighted Quantile Score Approach for Highly Correlated Data in Risk Analysis Scenarios

Carrico, Caroline 29 March 2013 (has links)
In risk evaluation, the effect of mixtures of environmental chemicals on a common adverse outcome is of interest. However, due to the high dimensionality and the inherent correlations among chemicals that occur together, traditional methods (e.g. ordinary or logistic regression) are unsuitable. We extend and characterize a weighted quantile score (WQS) approach to estimating an index for a set of highly correlated components. In the case of environmental chemicals, we use the WQS to identify “bad actors” and to estimate body burden. The accuracy of the WQS was evaluated through extensive simulation studies in terms of validity (the ability of the WQS to select the correct components) and reliability (the variability of the estimated weights across bootstrap samples). The WQS demonstrated high validity and reliability in scenarios with relatively high correlations with the outcome, and showed moderate breakdown in cases where the correlation with the outcome was relatively small compared to the pairwise correlations. In cases where the components are independent, the weights can be interpreted as association with the outcome relative to the other components. In cases with complex correlation patterns, the weights are influenced by both the association with the outcome and the correlation structure. The WQS also showed improvements over ordinary regression and the LASSO in the simulations performed. To conclude, an application of this method to the association between environmental chemicals, nutrition, and liver toxicity, as measured by ALT (alanine aminotransferase), is presented. The application identifies environmental chemicals (PCBs, dioxins, furans, and heavy metals) that are associated with an increase in ALT, and a set of nutrients that are identified as non-chemical stressors due to their association with an increase in ALT.
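The abstract does not give the estimation details, but the core WQS construction can be sketched as follows: each component is scored by its quantile bin, and the index is a weighted sum of those scores with non-negative weights summing to one. This is a minimal illustration under assumed defaults (quartile scoring, unit-sum weights); in practice the weights are estimated, e.g. across bootstrap samples, and the function names here are our own:

```python
import statistics

def quantile_scores(values, n_quantiles=4):
    """Score each observation by its quantile bin (0 .. n_quantiles-1)."""
    cuts = statistics.quantiles(values, n=n_quantiles)  # interior cut points
    return [sum(v > c for c in cuts) for v in values]

def wqs_index(component_scores, weights):
    """WQS index: sum_i w_i * q_i per observation, with weights constrained
    to be non-negative and to sum to one (the WQS identifiability constraint)."""
    assert all(w >= 0 for w in weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    # component_scores: one score list per component; iterate observation-wise
    return [sum(w * q for w, q in zip(weights, row))
            for row in zip(*component_scores)]
```

The index would then enter a regression against the outcome, with the estimated weights indicating which components ("bad actors") drive the association.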
2

Simultaneous Inference for High Dimensional and Correlated Data

Polin, Afroza 22 August 2019 (has links)
No description available.
3

An Applied Investigation of Gaussian Markov Random Fields

Olsen, Jessica Lyn 26 June 2012 (has links) (PDF)
Recently, Bayesian methods have become the essence of modern statistics, specifically through the ability to incorporate hierarchical models. In particular, correlated data, such as the data found in spatial and temporal applications, have benefited greatly from the development and application of Bayesian statistics. One particular application of Bayesian modeling is Gaussian Markov Random Fields (GMRFs). These methods have proven very useful in providing a framework for correlated data. I demonstrate the power of GMRFs by applying the method to two data sets: a set of temporal data involving car accidents in the UK and a set of spatial data involving Provo-area apartment complexes. For the first data set, I examine how including a seatbelt covariate affects our estimates of the number of car accidents. For the second data set, we scrutinize the effect of BYU approval on apartment complexes. In both applications we investigate Laplace approximations for cases where normal distribution assumptions do not hold.
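The GMRF machinery this abstract relies on can be illustrated with a standard textbook construction (not code from the thesis): a first-order random-walk precision matrix for temporal data, where the Markov property means each node's conditional mean given all others depends only on its neighbours through the precision matrix Q:

```python
def rw1_precision(n, kappa=1.0):
    """Precision matrix Q of a first-order random-walk GMRF on n time points:
    each adjacent pair (i, i+1) contributes kappa to both diagonal entries
    and -kappa to the off-diagonal entries."""
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        Q[i][i] += kappa
        Q[i + 1][i + 1] += kappa
        Q[i][i + 1] -= kappa
        Q[i + 1][i] -= kappa
    return Q

def conditional_mean(Q, x, i):
    """Markov property of a GMRF: E[x_i | x_-i] = -(1/Q_ii) * sum_{j != i} Q_ij x_j,
    which involves only the neighbours of i (nonzero entries of row i)."""
    return -sum(Q[i][j] * x[j] for j in range(len(x)) if j != i) / Q[i][i]
```

For an interior time point this reduces to the average of its two neighbours, which is exactly the sparsity that makes GMRF computation (and Laplace approximation) tractable.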
4

Modelos Birnbaum-Saunders usando equações de estimação / Birnbaum-Saunders models using estimating equations

Tsuyuguchi, Aline Barbosa 12 May 2017 (has links)
The aim of this thesis is to propose an alternative approach to analyzing correlated Birnbaum-Saunders (BS) data based on estimating equations. From the optimal class of estimating functions proposed by Crowder (1987), we derive an optimal class for the analysis of correlated data in which the marginal distributions are assumed to be either log-BS or log-BS-t. We derive an iterative process for parameter estimation and diagnostic procedures such as residual analysis, Cook's distance, and local influence under three different perturbation schemes: case weights, response-variable perturbation, and single-covariate perturbation. Simulation studies are performed for each proposed model to assess the empirical properties of the estimators of the location, shape, and correlation parameters. The proposed methodology is discussed in two applications: the first uses a data set on public capital productivity in the 48 contiguous US states from 1970 to 1986, and the second refers to a study conducted at the School of Physical Education and Sport of the University of São Paulo (USP) during 2016, in which 70 runners were evaluated in treadmill races over three distinct periods.
5

Calculating One-sided P-value for TFisher Under Correlated Data

Fang, Jiadong 29 April 2018 (has links)
Combining P-values across multiple statistical tests is a common data-analysis procedure in many applications, including bioinformatics. However, this procedure is nontrivial when the input P-values are dependent. For Fisher's combination procedure, a classic method is Brown's strategy [1, Brown, 1975], which is based on empirical moment-matching of a gamma distribution. In this project, we address a more general family of weighting-and-truncation P-value combination procedures called TFisher. We first study how to extend Brown's strategy to this problem. Then we make further developments in two directions. First, instead of using the empirical polynomial model-fitting strategy to find moments, we develop an analytical calculation strategy based on asymptotic approximation. Second, instead of using the gamma distribution to approximate the null distribution of TFisher, we propose to use a mixed gamma distribution or a shifted mixed gamma distribution. We focus on calculating the one-sided P-value for TFisher, especially its soft-thresholding version. Simulations show that our methods substantially improve accuracy over the traditional strategy.
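The soft-thresholding TFisher statistic and the moment-matching idea behind Brown's strategy can be sketched as follows. This is a simplified illustration, not the thesis's mixed-gamma method: given the statistic's null mean and variance (however obtained), a gamma distribution is matched to those two moments and its upper tail gives an approximate one-sided P-value.

```python
import math

def soft_tfisher(pvalues, tau=0.05):
    """Soft-thresholding TFisher statistic: only p-values at or below the
    truncation point tau contribute, each as -2*log(p/tau). With tau = 1
    this reduces to Fisher's classic -2 * sum(log p)."""
    return sum(-2.0 * math.log(p / tau) for p in pvalues if p <= tau)

def gamma_approx_pvalue(stat, mean, var):
    """Moment-matched gamma approximation (Brown-style): fit Gamma(shape k,
    scale theta) to the null mean/variance, then return the upper-tail
    probability via the regularized lower incomplete gamma series."""
    if stat <= 0.0:
        return 1.0
    k = mean * mean / var      # shape
    theta = var / mean         # scale
    x = stat / theta
    # series for P(k, x) = x^k e^{-x} * sum_{n>=0} x^n / Gamma(k+n+1)
    term = math.exp(-x + k * math.log(x) - math.lgamma(k + 1.0))
    total = term
    a = k
    while term > 1e-15 * total:
        a += 1.0
        term *= x / a
        total += term
    return max(0.0, 1.0 - total)
```

With tau = 1 and n independent P-values the statistic is chi-square with 2n degrees of freedom, i.e. Gamma(k = n, theta = 2), so the gamma approximation is exact in that special case; under dependence, mean and variance must account for the correlation.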
7

Methods for Meta–Analyses of Rare Events, Sparse Data, and Heterogeneity

Zabriskie, Brinley 01 May 2019 (has links)
The vast and complex wealth of information available to researchers often leads to a systematic review, which involves a detailed and comprehensive plan and search strategy with the goal of identifying, appraising, and synthesizing all relevant studies on a particular topic. A meta–analysis, conducted ideally as part of a comprehensive systematic review, statistically synthesizes evidence from multiple independent studies to produce one overall conclusion. The increasingly widespread use of meta–analysis has led to growing interest in meta–analytic methods for rare events and sparse data. Conventional approaches tend to perform very poorly in such settings. Recent work in this area has provided options for sparse data, but these are still often hampered when heterogeneity across the available studies differs by treatment group. Heterogeneity arises when participants within a study are more correlated than participants across studies, often stemming from differences in the administration of the treatment, the study design, or the measurement of the outcome. We propose several new exact methods that accommodate this common contingency, providing more reliable statistical tests when such patterns of heterogeneity are observed. First, we develop a permutation–based approach that can also be used as a basis for computing exact confidence intervals when estimating the effect size. Second, we extend the permutation–based approach to the network meta–analysis setting. Third, we develop a new exact confidence distribution approach for effect size estimation. We show these new methods perform markedly better than traditional methods when events are rare and heterogeneity is present.
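The permutation idea can be illustrated with a minimal sketch. The simplifications here are assumptions of ours, not the thesis's methods: an unweighted pooled risk difference as the summary statistic, and within-study arm swapping as the permutation scheme (valid when the two arms of each study are exchangeable under the null of no treatment effect):

```python
import random

def pooled_risk_difference(tables):
    """Unweighted mean of per-study risk differences; each table is a tuple
    (events_trt, n_trt, events_ctl, n_ctl)."""
    rds = [et / nt - ec / nc for (et, nt, ec, nc) in tables]
    return sum(rds) / len(rds)

def permutation_pvalue(tables, n_perm=2000, seed=0):
    """Under H0 the arms of each study are exchangeable, so swapping them at
    random within each study generates the null distribution of the pooled
    statistic. Returns a two-sided p-value with the standard +1 correction."""
    rng = random.Random(seed)
    observed = abs(pooled_risk_difference(tables))
    hits = 0
    for _ in range(n_perm):
        flipped = [(ec, nc, et, nt) if rng.random() < 0.5 else (et, nt, ec, nc)
                   for (et, nt, ec, nc) in tables]
        if abs(pooled_risk_difference(flipped)) >= observed - 1e-12:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Because the reference distribution is built from the data rather than a normal approximation, the test does not rely on large-sample theory, which is the appeal in rare-event settings; inverting such a test over a grid of effect sizes yields exact confidence intervals.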
8

Assessing non-inferiority via risk difference in one-to-many propensity-score matched studies

Perez, Jeremiah 23 January 2018 (has links)
Non-inferiority tests are well developed for randomized parallel-group trials where the control and experimental groups are independent. However, these tests may not be appropriate for assessing non-inferiority in correlated one-to-many matched data. We propose a new statistical test that extends the Farrington-Manning (FM) test to the case where many (≥1) control subjects are matched to each experimental subject. We conducted a Monte Carlo simulation study to compare the size and power of the proposed test with tests developed for clustered one-to-one matched-pair data and with tests based on generalized estimating equations (GEE). For various correlation patterns, the sizes of tests developed for clustered matched-pair data and of GEE-based tests are inflated when applied to the case where many control subjects are matched to each experimental subject. The size of the proposed test, on the other hand, is close to the nominal level for a variety of correlation patterns. There is a debate in the literature regarding whether statistical tests appropriate for independent samples can be used to assess the statistical significance of treatment effects in propensity-score matched studies. We used Monte Carlo simulations to examine the effect on assessing non-inferiority via risk difference when a method for independent samples (i.e. the FM test) is used versus a method for correlated matched samples in propensity-score one-to-many matched studies. If propensity-score matched samples are well matched on baseline covariates and contain almost all of the experimental subjects, a method for correlated matched samples is preferable, with respect to power and Type I error, to a method for independent samples. Sometimes there are more experimental subjects to choose from for matching than control subjects. 
We conducted a Monte Carlo simulation study to compare the size and power of the previously mentioned tests when many (≥1) experimental subjects are matched to each control subject. In this case, the Nam-Kwon test for clustered data performs best in controlling the Type I error rate for a variety of correlation patterns. Therefore, the appropriate non-inferiority test for correlated matched data depends, in part, on the sample-size allocation of subjects.
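For context, the basic one-sided non-inferiority test on a risk difference for independent samples can be sketched as follows. This is an assumed simplification on our part: a Wald-style statistic with unpooled variance, not the restricted-MLE Farrington-Manning statistic, and not the matched-data extension the thesis develops.

```python
import math

Z_CRIT = 1.959964  # standard normal upper 2.5% critical value (alpha = 0.025)

def wald_noninferiority_z(x_exp, n_exp, x_ctl, n_ctl, margin):
    """Z statistic for H0: p_exp - p_ctl <= -margin (experimental is inferior)
    vs H1: p_exp - p_ctl > -margin (non-inferior), independent samples,
    with the unpooled (Wald) standard error."""
    p1, p2 = x_exp / n_exp, x_ctl / n_ctl
    se = math.sqrt(p1 * (1 - p1) / n_exp + p2 * (1 - p2) / n_ctl)
    return (p1 - p2 + margin) / se

def noninferior(x_exp, n_exp, x_ctl, n_ctl, margin):
    """Declare non-inferiority when z exceeds the one-sided 0.025 critical value."""
    return wald_noninferiority_z(x_exp, n_exp, x_ctl, n_ctl, margin) > Z_CRIT
```

The FM test instead estimates the variance under the restricted null p_exp - p_ctl = -margin, and the thesis's contribution is adjusting the variance for the correlation induced by one-to-many matching; a test like the one above ignores that correlation, which is exactly why its size can be wrong for matched data.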
9

Essays on the Modeling of Binary Longitudinal Data with Time-dependent Covariates

January 2020 (has links)
Longitudinal studies contain correlated data due to the repeated measurements on the same subject. The changing values of the time-dependent covariates and their association with the outcomes present another source of correlation. Most methods used to analyze longitudinal data average the effects of time-dependent covariates on outcomes over time and provide a single regression coefficient per time-dependent covariate. This denies researchers the opportunity to follow the changing impact of time-dependent covariates on the outcomes. This dissertation addresses this issue through the use of partitioned regression coefficients in three different papers. In the first paper, an alternative approach to the partitioned Generalized Method of Moments logistic regression model for longitudinal binary outcomes is presented. This method relies on Bayes estimators and is utilized when the partitioned Generalized Method of Moments model provides numerically unstable estimates of the regression coefficients. It is used to model obesity status in the Add Health study and cognitive-impairment diagnosis in the National Alzheimer's Coordinating Center database. The second paper develops a model that allows the joint modeling of two or more binary outcomes that provide an overall measure of a subject's trait over time. The simultaneous modeling of all outcomes provides a complete picture of the overall measure of interest. This approach accounts for the correlation among and between the outcomes across time and for the changing effects of time-dependent covariates on the outcomes. The model is used to analyze four outcomes measuring the overall quality of life in the Chinese Longitudinal Healthy Longevity Survey. The third paper presents an approach that allows for the estimation of cross-sectional and lagged effects of the covariates on the outcome, as well as the feedback of the response on future covariates. 
This is done in two parts: in part 1, the effects of time-dependent covariates on the outcomes are estimated; then, in part 2, the influence of the outcome on future values of the covariates is measured. These model parameters are obtained through a Generalized Method of Moments procedure that uses valid moment conditions between the outcome and the covariates. Child morbidity in the Philippines and obesity status in the Add Health data are analyzed. / Dissertation/Thesis / Doctoral Dissertation Statistics 2020
10

New Score Tests for Genetic Linkage Analysis in a Likelihood Framework

Song, Yeunjoo E. 12 March 2013 (has links)
No description available.
