  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Modeling of High-Dimensional Clinical Longitudinal Oxygenation Data from Retinopathy of Prematurity

Margevicius, Seunghee P. 01 June 2018 (has links)
No description available.
2

Multiple imputation for marginal and mixed models in longitudinal data with informative missingness

Deng, Wei 07 October 2005 (has links)
No description available.
3

Análise de dados categorizados com omissão / Analysis of categorical data with missingness

Poleto, Frederico Zanqueta 30 August 2006 (has links)
We consider theoretical, computational and applied aspects of classical categorical data analyses with missingness. We present a literature review while introducing the missingness mechanisms, highlighting their characteristics and implications for the inferences of interest by means of an example involving two binary responses and simulation studies. We extend the multinomial modeling described in Paulino (1991, Brazilian Journal of Probability and Statistics 5, 1-42) to the product-multinomial setup to allow for the inclusion of explanatory variables.
We develop the results in a matrix formulation suited to computational implementation, which is carried out as a library for the R statistical environment, made available to facilitate the inferences described in this dissertation. We illustrate the application of the theory by means of five examples with different characteristics, fitting structural linear (marginal homogeneity), log-linear (independence, constant adjacent odds ratio) and functional linear models (kappa, weighted kappa, sensitivity/specificity, positive/negative predictive value) for the marginal probabilities. The missingness patterns are also varied, with missingness in one or two variables, neighboring cells confounded, with or without subpopulations.
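The role of the missingness mechanisms introduced above can be seen in a small simulation (an illustrative sketch only, not from the dissertation; all probabilities are made up): under MCAR a complete-case analysis of a binary response stays close to the truth, while under MNAR, where missingness depends on the response itself, it is badly biased.

```python
import random

random.seed(42)

def complete_case_estimate(mechanism, n=100_000):
    """Complete-case estimate of P(Y=1) under a given missingness mechanism."""
    observed = []
    for _ in range(n):
        y = 1 if random.random() < 0.30 else 0   # true P(Y=1) = 0.30
        if mechanism == "MCAR":
            missing = random.random() < 0.5      # missingness unrelated to y
        else:  # "MNAR": a response of 1 is far more likely to go missing
            missing = random.random() < (0.8 if y == 1 else 0.2)
        if not missing:
            observed.append(y)
    return sum(observed) / len(observed)

p_mcar = complete_case_estimate("MCAR")  # close to the true 0.30
p_mnar = complete_case_estimate("MNAR")  # biased low, about 0.06/0.62 ~ 0.10
```

The MNAR bias here is exactly the kind of distortion that motivates modeling the missingness mechanism rather than discarding incomplete cases.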
5

Attrition in Studies of Cognitive Aging / Bortfall i studier av kognitivt åldrande

Josefsson, Maria January 2013 (has links)
Longitudinal studies of cognition are preferred to cross-sectional studies, since they offer a direct assessment of age-related cognitive change (within-person change). Statistical methods for analyzing age-related change are widely available. There are, however, a number of challenges accompanying such analyses, including cohort differences, ceiling and floor effects, and attrition. These difficulties challenge the analyst and put stringent requirements on the statistical method being used. The objective of Paper I is to develop a classification method to study discrepancies in age-related cognitive change. The method needs to take into account the complex issues accompanying studies of cognitive aging, and specifically to address issues related to attrition. In a second step, we aim to identify predictors explaining stability or decline in cognitive performance in relation to demographic, lifestyle, health-related, and genetic factors. In the second paper, which is a continuation of Paper I, we investigate brain characteristics, structural and functional, that differ between successfully aging elderly and elderly with an average cognitive performance over 15-20 years. In Paper III we develop a Bayesian model to estimate the causal effect of living arrangement (living alone versus living with someone) on cognitive decline. The model must balance confounding variables between the two living-arrangement groups as well as account for non-ignorable attrition. This is achieved by combining propensity score matching with a pattern-mixture model for longitudinal data. In Paper IV, the objective is to adapt and implement available imputation methods for longitudinal fMRI data, where some subjects are lost to follow-up. We apply these missing-data methods to a real dataset and evaluate them in a simulation study.
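The propensity-score-matching step of the kind used in Paper III can be illustrated with a minimal greedy 1:1 nearest-neighbour matcher (a hedged sketch assuming precomputed propensity scores; the function and names are invented, and the thesis's actual matching procedure may differ):

```python
def match(treated_scores, control_scores, caliper=0.1):
    """Greedily match each treated unit to the nearest unused control
    within the caliper; returns a list of (treated_idx, control_idx) pairs."""
    used = set()
    pairs = []
    for i, ps_t in enumerate(treated_scores):
        best, best_dist = None, caliper
        for j, ps_c in enumerate(control_scores):
            if j in used:
                continue
            d = abs(ps_t - ps_c)
            if d <= best_dist:
                best, best_dist = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs

pairs = match([0.31, 0.72, 0.95], [0.30, 0.55, 0.70])
# 0.31 pairs with 0.30, 0.72 with 0.70; 0.95 has no control within the caliper
```

Matching on the score balances the confounders that entered the propensity model; attrition then still has to be handled separately, e.g. by the pattern-mixture component described above.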
6

Bayesian nonparametric analysis of longitudinal data with non-ignorable non-monotone missingness

Cao, Yu 01 January 2019 (has links)
In longitudinal studies, outcomes are measured repeatedly over time, but in practice clinical studies are full of missing data points, of both monotone and non-monotone nature. Often this missingness is related to the unobserved data, so that it is non-ignorable. In this context, the pattern-mixture model (PMM) is a popular tool for analyzing the joint distribution of the outcome and the missingness patterns. The unobserved outcomes are then imputed using the distribution of observed outcomes, conditional on the missingness patterns. However, the existing methods suffer from model-identification issues when the data are sparse in specific missingness patterns, which is very likely to happen with a small sample size or a large number of repetitions. We extend the existing methods using latent class analysis (LCA) and a shared-parameter PMM. The LCA groups patterns of missingness with similar features, and the shared-parameter PMM allows a subset of parameters to differ among latent classes when fitting the model, thus restoring model identifiability. A novel imputation method is also developed using the distribution of observed data conditional on the latent classes. We develop this model for continuous response data and extend it to handle ordinal rating-scale data. Our model performs better than existing methods for data with small sample sizes. The method is applied to two datasets: one from a phase II clinical trial studying the quality of life of patients with prostate cancer receiving radiation therapy, and another from a study of the relationship between perceived neighborhood conditions in adolescence and drinking habits in adulthood.
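The core pattern-mixture idea — model the outcome within each missingness pattern, then mix over the pattern probabilities — can be sketched as follows (an illustrative toy, not the latent-class extension the thesis develops; in a real analysis the within-pattern means for incomplete patterns are identified only under extra restrictions, which is exactly the identifiability problem noted above):

```python
def pmm_marginal_mean(records):
    """records: list of (pattern, outcome) pairs.
    Returns sum over patterns r of P(pattern=r) * E[Y | pattern=r]."""
    by_pattern = {}
    for pattern, y in records:
        by_pattern.setdefault(pattern, []).append(y)
    n = len(records)
    return sum(len(ys) / n * (sum(ys) / len(ys)) for ys in by_pattern.values())

# Completers report systematically higher outcomes than early dropouts,
# so the marginal mean must weight the two pattern-specific means.
data = [("complete", 10), ("complete", 12), ("dropout", 4), ("dropout", 6)]
print(pmm_marginal_mean(data))  # 0.5 * 11 + 0.5 * 5 = 8.0
```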
7

Latent variable models for longitudinal twin data

Dominicus, Annica January 2006 (has links)
Longitudinal twin data provide important information for exploring sources of variation in human traits. In statistical models for twin data, unobserved genetic and environmental factors influencing the trait are represented by latent variables. In this way, trait variation can be decomposed into genetic and environmental components. With repeated measurements on twins, latent variables can be used to describe individual trajectories, and the genetic and environmental variance components are assessed as functions of age. This thesis contributes to statistical methodology for analysing longitudinal twin data by (i) exploring the use of random change point models for modelling variance as a function of age, (ii) assessing how nonresponse in twin studies may affect estimates of genetic and environmental influences, and (iii) providing a method for hypothesis testing of genetic and environmental variance components. The random change point model, in contrast to linear and quadratic random effects models, is shown to be very flexible in capturing variability as a function of age. Approximate maximum likelihood inference through first-order linearization of the random change point model is contrasted with Bayesian inference based on Markov chain Monte Carlo simulation. In a set of simulations based on a twin model for informative nonresponse, it is demonstrated how the effect of nonresponse on estimates of genetic and environmental variance components depends on the underlying nonresponse mechanism. This thesis also reveals that the standard procedure for testing variance components is inadequate, since the null hypothesis places the variance components on the boundary of the parameter space. The asymptotic distribution of the likelihood ratio statistic for testing variance components in classical twin models is derived, resulting in a mixture of chi-square distributions.
Statistical methodology is illustrated with applications to empirical data on cognitive function from a longitudinal twin study of aging.
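The boundary issue mentioned above has a concrete consequence for p-values. In the standard single-variance-component case, the likelihood ratio statistic is asymptotically a 50:50 mixture of a point mass at zero and a chi-square with one degree of freedom, so the naive chi-square test is conservative. A small sketch of that correction (the standard asymptotic result, not code from the thesis):

```python
import math

def chi2_1_sf(x):
    """Survival function of a chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(x / 2.0))

def mixture_pvalue(lrt_stat):
    """p-value under the 0.5 * chi2_0 + 0.5 * chi2_1 mixture."""
    if lrt_stat <= 0:
        return 1.0  # the point-mass half of the mixture
    return 0.5 * chi2_1_sf(lrt_stat)

naive = chi2_1_sf(2.706)           # ~0.10 if the boundary is ignored
corrected = mixture_pvalue(2.706)  # ~0.05: half the naive p-value
```

Twin models with several correlated variance components lead to more complicated chi-bar-square mixtures, which is what the thesis derives.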
9

Uncertainty intervals and sensitivity analysis for missing data

Genbäck, Minna January 2016 (has links)
In this thesis we develop methods for dealing with missing data in a univariate response variable when estimating regression parameters. Missing outcome data are a problem in a number of applications, one of which is follow-up studies. In follow-up studies, data are collected on two (or more) occasions, and it is common that only some of the initial participants return at the second occasion. This is the case in Paper II, where we investigate predictors of decline in self-reported health in older populations in Sweden, the Netherlands and Italy; in that study, around 50% of the participants drop out. It is common for researchers to rely on the assumption that the missingness is independent of the outcome given some observed covariates. This assumption is called missing at random (MAR), or an ignorable missingness mechanism. However, MAR cannot be tested from the data, and if it does not hold, estimators based on this assumption are biased. In the study of Paper II, we suspect that some individuals drop out due to poor health; if this is the case, the data are not MAR. One alternative to MAR, which we pursue, is to incorporate the uncertainty due to missing data by reporting interval estimates instead of point estimates and uncertainty intervals instead of confidence intervals. An uncertainty interval is the analog of a confidence interval but wider, owing to a relaxation of the assumptions on the missing data. These intervals can be used to visualize the consequences that deviations from MAR have for the conclusions of the study; that is, they can be used to perform a sensitivity analysis with respect to MAR. The thesis covers different types of linear regression: in Papers I and III we have a continuous outcome, in Paper II a binary outcome, and in Paper IV we allow for mixed effects with a continuous outcome. In Paper III we estimate the effect of a treatment, which can be seen as an example of missing outcome data.
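The uncertainty-interval construction can be sketched numerically (a hedged illustration under a simple mean-shift sensitivity model; the function, the names, and the model are ours, not the thesis's exact estimators): for each value of a sensitivity parameter delta measuring departure from MAR, form a confidence interval for the shifted estimate, then take the union over the plausible delta range.

```python
def uncertainty_interval(estimate, se, p_missing, delta_range, z=1.96):
    """Union of CIs under a mean-shift model: true mean = estimate + p_missing * delta,
    where delta is the assumed mean difference between missing and observed outcomes."""
    lows, highs = [], []
    for delta in delta_range:
        shifted = estimate + p_missing * delta
        lows.append(shifted - z * se)
        highs.append(shifted + z * se)
    return min(lows), max(highs)

ci = uncertainty_interval(2.0, 0.5, 0.5, [0.0])             # plain 95% CI under MAR
ui = uncertainty_interval(2.0, 0.5, 0.5, [-1.0, 0.0, 1.0])  # wider: allows deviations
```

At delta = 0 the uncertainty interval collapses to the usual confidence interval; widening the delta range shows how quickly the study's conclusions depend on the MAR assumption.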
10

Bayesian Predictive Inference Under Informative Sampling and Transformation

Shen, Gang 29 April 2004 (has links)
We have considered the problem in which a biased sample is selected from a finite population, and this finite population is itself a random sample from an infinitely large population, called the superpopulation. The parameters of the superpopulation and of the finite population are of interest. There is some information about the selection mechanism, in that the selection probabilities are linearly related to the measurements. This is typical of establishment surveys, where the selection probabilities are taken to be proportional to the previous year's characteristics. When all the selection probabilities are known, as in our problem, inference about the finite population can be made, but inference about the distribution is not so clear. For continuous measurements, one might assume that the values are normally distributed, but as a practical issue normality can be tenuous. In such a situation a transformation to normality may be useful, but this transformation will destroy the linearity between the selection probabilities and the values. The purpose of this work is to address this issue. In this light we have constructed two models, an ignorable selection model and a nonignorable selection model. We use the Gibbs sampler and the sampling importance resampling algorithm to fit the nonignorable selection model. We have emphasized estimation of the finite population parameters, although within this framework other quantities can be estimated easily. We have found that our nonignorable selection model can correct the bias due to unequal selection probabilities, and it provides improved precision over the estimates from the ignorable selection model. In addition, we have described the case in which all the selection probabilities are unknown. This is useful because many agencies (e.g., government) tend to withhold these selection probabilities when public-use data are constructed.
Also, we have given an extensive theoretical discussion of Poisson sampling, an underlying sampling scheme in our models that is especially useful in the case in which the selection probabilities are unknown.
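The Poisson sampling scheme referred to above can be sketched in a few lines (illustrative only; the names and the size-proportional inclusion probabilities are our assumptions, echoing the establishment-survey setting): each unit is included independently with its own probability, so the realized sample size is random with expectation equal to the sum of the inclusion probabilities.

```python
import random

def poisson_sample(sizes, expected_n):
    """Independent Bernoulli inclusion with pi_i proportional to a size measure,
    capped at 1. Returns (sampled indices, inclusion probabilities)."""
    total = sum(sizes)
    pis = [min(1.0, expected_n * s / total) for s in sizes]
    sample = [i for i, pi in enumerate(pis) if random.random() < pi]
    return sample, pis

random.seed(1)
sample, pis = poisson_sample([10, 20, 30, 40], expected_n=2)
# sum(pis) == 2.0: the realized sample size varies around 2 across draws
```

Because inclusions are independent, the selection probabilities enter the likelihood unit by unit, which is what makes the scheme tractable when those probabilities must themselves be modeled.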
