About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Bayesian Wavelet Based Analysis of Longitudinally Observed Skewed Heteroscedastic Responses

Unknown Date
Unlike many current statistical models for highly skewed longitudinal data, we present a novel model accommodating a skewed error distribution, a partially linear median regression function, a nonparametric wavelet expansion, and serial observations on the same unit. Parameters are estimated via a semiparametric Bayesian procedure using an appropriate Dirichlet process mixture prior for the skewed error distribution. We use a hierarchical mixture model as the prior for the wavelet coefficients. For the "vanishing" coefficients, the model includes a level-dependent prior probability mass at zero. This implements wavelet coefficient thresholding as a Bayesian rule. Practical advantages of our method are illustrated through a simulation study and via analysis of a cardiotoxicity study of children of HIV-infected mothers. / A Dissertation submitted to the Department of Statistics in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester 2017. / May 23, 2017. / Bayesian, Longitudinal, Semiparametric, Wavelet / Includes bibliographical references. / Eric Chicken, Professor Co-Directing Dissertation; Debajyoti Sinha, Professor Co-Directing Dissertation; Kristine Harper, University Representative; Debdeep Pati, Committee Member.
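For readers unfamiliar with the thresholding idea, the following is a minimal sketch of level-dependent spike-and-slab shrinkage of empirical wavelet coefficients, assuming a Gaussian likelihood with a known noise level. The wavelet choice and hyperparameters are arbitrary, and this is a generic illustration of Bayesian wavelet thresholding, not the dissertation's Dirichlet-process-based semiparametric procedure.

```python
import numpy as np
import pywt
from scipy.stats import norm

def spike_slab_shrink(d, sigma, pi0, tau):
    """Posterior-mean shrinkage for empirical wavelet coefficients d ~ N(theta, sigma^2)
    under a spike-and-slab prior: theta = 0 with probability pi0, else theta ~ N(0, tau^2)."""
    slab = (1 - pi0) * norm.pdf(d, 0.0, np.sqrt(sigma**2 + tau**2))
    spike = pi0 * norm.pdf(d, 0.0, sigma)
    post_nonzero = slab / (slab + spike)          # posterior probability the coefficient is nonzero
    return post_nonzero * (tau**2 / (tau**2 + sigma**2)) * d

# Denoise a toy signal with level-dependent prior mass at zero:
# finer detail levels get a larger pi0, so they are more likely to be shrunk to zero.
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 256)
signal = np.sin(4 * np.pi * t) + (t > 0.5)
noisy = signal + rng.normal(0, 0.2, t.size)

coeffs = pywt.wavedec(noisy, "db4", level=4)      # [approx, detail_4, ..., detail_1]
sigma = 0.2                                       # assumed known noise level
shrunk = [coeffs[0]] + [
    spike_slab_shrink(c, sigma, pi0=0.5 + 0.1 * j, tau=1.0)
    for j, c in enumerate(coeffs[1:])
]
denoised = pywt.waverec(shrunk, "db4")
```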
2

Bayesian Analysis of Survival Data with Missing Censoring Indicators and Simulation of Interval Censored Data

Unknown Date
In some large clinical studies, it may be impractical to give physical examinations to every subject at his or her last monitoring time in order to diagnose the occurrence of an event of interest. This challenge creates survival data with missing censoring indicators, where the probability of missingness may depend on the time of last monitoring. We present a fully Bayesian semiparametric method for such survival data to estimate the regression parameters of Cox's proportional hazards model [Cox, 1972]. Simulation studies show that our method performs better than competing methods. We apply the proposed method to data from the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study. Clinical studies often include interval-censored data. We present a method for the simulation of interval-censored data based on Poisson processes. We show that our method gives simulated data that fulfill the assumption of independent interval censoring, and is more computationally efficient than other methods used for simulating interval-censored data. / A Dissertation submitted to the Department of Statistics in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester 2018. / July 10, 2018. / Includes bibliographical references. / Debajyoti Sinha, Professor Co-Directing Dissertation; Naomi Brownstein, Professor Co-Directing Dissertation; Richard Nowakowski, University Representative; Elizabeth Slate, Committee Member; Antonio Linero, Committee Member.
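The following is a rough sketch of how interval-censored data can be generated from a homogeneous Poisson process of monitoring times, which is the general idea described above. The event-time distribution, visit rate, and censoring rule are illustrative assumptions, not necessarily those used in the dissertation.

```python
import numpy as np

def simulate_interval_censored(n, event_rate=0.3, visit_rate=1.0, tau=10.0, seed=0):
    """Simulate interval-censored data: true event times are exponential, and each
    subject is monitored at the jump times of a homogeneous Poisson process on [0, tau].
    The event is only known to lie between the last visit before it and the first
    visit after it; events after the final visit are right-censored."""
    rng = np.random.default_rng(seed)
    records = []
    for _ in range(n):
        t_event = rng.exponential(1.0 / event_rate)
        # A homogeneous Poisson process on [0, tau]: Poisson(visit_rate * tau)
        # visit times, scattered uniformly over the interval.
        visits = np.sort(rng.uniform(0.0, tau, rng.poisson(visit_rate * tau)))
        before = visits[visits < t_event]
        after = visits[visits >= t_event]
        left = before[-1] if before.size else 0.0
        right = after[0] if after.size else np.inf    # right-censored at the last visit
        records.append((left, right))
    return records

data = simulate_interval_censored(500)
```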
3

Examining the Effect of Treatment on the Distribution of Blood Pressure in the Population Using Observational Data

Unknown Date
Since the introduction of anti-hypertensive medications in the mid-1950s, there has been increasing use of blood pressure medications in the US. The growing use of anti-hypertensive treatment has affected the distribution of blood pressure in the population over time, so observational data no longer reflect natural blood pressure levels. Our goal is to examine the effect of anti-hypertensive drugs on the distribution of blood pressure using several well-known observational studies. The statistical concept of censoring is used to estimate the distribution of blood pressure in populations if no treatment were available. The treated and estimated untreated distributions are then compared to determine the general effect of these medications in the population. Our analyses show that these drugs have an increasing impact on controlling blood pressure distributions in populations that are heavily treated. / A Dissertation submitted to the Department of Statistics in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester 2017. / November 15, 2017. / Includes bibliographical references. / Daniel McGee, Professor Co-Directing Dissertation; Elizabeth Slate, Professor Co-Directing Dissertation; Myra M. Hurt, University Representative; Fred Huffer, Committee Member.
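One simple way to operationalize the censoring idea described above is a Kaplan-Meier-type estimator on the blood-pressure scale, with treated subjects right-censored at their observed (treatment-lowered) values. The sketch below uses simulated data and illustrates only the concept; the abstract does not specify the authors' exact estimator.

```python
import numpy as np

def km_survival(values, event):
    """Kaplan-Meier estimate of P(X > x), treating the measurement scale like 'time'.
    event=1: value observed exactly (untreated subject);
    event=0: value is a lower bound for the natural level (treated, right-censored)."""
    order = np.argsort(values)
    values, event = values[order], event[order]
    n = len(values)
    at_risk = n - np.arange(n)          # subjects with value >= the current one
    surv, s = [], 1.0
    for i in range(n):
        if event[i]:
            s *= 1.0 - 1.0 / at_risk[i]
        surv.append(s)
    return values, np.array(surv)

# Toy data: subjects with high natural SBP are treated, and their natural SBP is
# right-censored at the observed (lowered) value.
rng = np.random.default_rng(4)
natural = rng.normal(135, 18, 1000)
treated = natural > 150
observed = np.where(treated, natural - rng.uniform(10, 25, 1000), natural)
sbp, surv = km_survival(observed, event=(~treated).astype(int))
```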
4

The Limb-Leaf Design: A New Way to Explore the Dose Response Curve in Adaptive Seamless Phase II/III Trials

Spivack, John Henry January 2011
This dissertation proposes a method to explore a dose-response curve adaptively, allowing new doses to be inserted into the trial after initial results have been observed. The context of our work is adaptive seamless Phase II/III trials, and a systematic Limb-Leaf Design is developed. In the case of a nonmonotonic dose-response curve where the desired level of effect exists only in a narrow dose range, a simulated comparison between a Limb-Leaf Design and a standard (Thall, Simon, and Ellenberg or TSE-type) adaptive seamless design shows a savings in risk-adjusted expected sample size of up to 25%. Chapter 1 is a review of concepts and particular adaptive seamless designs of interest. Chapter 2 proposes dose addition in adaptive seamless designs and identifies ALS research as an area of application. Chapter 3 develops dose addition as an application of existing methodology. Chapter 4 identifies shortcomings of this approach and proposes a new Horizontal Test as the basis for the Limb-Leaf Design. Chapter 5 supports the development of the Limb-Leaf Design with several theoretical observations. The Limb-Leaf Design itself is developed in Chapters 6 and 7. Chapter 8 compares the Limb-Leaf Design with a TSE-type adaptive seamless design by simulation. Future work is suggested in Chapter 9.
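To make the idea of expected-sample-size savings concrete, the toy simulation below estimates total enrollment for a generic two-stage seamless design with interim dose selection and a futility stop. It is not the Limb-Leaf Design or the TSE procedure from the dissertation, and every effect size, sample size, and threshold is an arbitrary illustrative choice.

```python
import numpy as np

def expected_sample_size(dose_effects, n1=50, n2=200, futility=0.1, n_sims=5000, seed=0):
    """Toy two-stage seamless design: stage 1 enrolls n1 per arm (all doses + control),
    picks the best-looking dose, and proceeds to a confirmatory stage 2 (n2 per arm,
    selected dose vs. control) only if the interim effect clears a futility bar.
    Returns the average total enrollment over simulated trials (unit-variance outcomes)."""
    rng = np.random.default_rng(seed)
    dose_effects = np.asarray(dose_effects, dtype=float)
    totals = []
    for _ in range(n_sims):
        dose_means = rng.normal(dose_effects, 1.0 / np.sqrt(n1))   # stage-1 sample means
        control_mean = rng.normal(0.0, 1.0 / np.sqrt(n1))
        best = int(np.argmax(dose_means))
        n_total = n1 * (len(dose_effects) + 1)
        if dose_means[best] - control_mean > futility:
            n_total += 2 * n2          # continue with the selected dose and control
        totals.append(n_total)
    return float(np.mean(totals))

# Flat (null) dose-response curve vs. one with a narrow effective range
print(expected_sample_size([0.0, 0.0, 0.0]))
print(expected_sample_size([0.0, 0.3, 0.0]))
```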
5

Flexible models and methods for longitudinal and multilevel functional data

Chen, Huaihou January 2012
In the first part of this dissertation, we propose penalized spline (P-spline)-based methods for functional mixed effects models with varying coefficients. This work is motivated by a clinical study of Complicated Grief (Shear et al. 2005). In the Complicated Grief Study, patients receive active treatment during a treatment period and then enter a follow-up period during which they no longer receive active treatment. It is conceivable that the primary outcome, the Inventory of Complicated Grief (ICG) scale, shows different trajectories for the treatment phase and the follow-up phase. The length of the treatment period varies across patients (some stay in treatment longer than others), so a model that can flexibly accommodate subject-specific curves and predict individual outcomes is desirable. In our proposed model, we decompose the outcome into a sum of several terms: a population mean function, covariates with time-varying coefficients, functional subject-specific random effects, and a residual measurement error process. Using P-splines, we propose nonparametric estimation of the population mean function, the varying coefficients, the random subject-specific curves, the associated covariance function that represents between-subject variation, and the variance function of the residual measurement errors (which represents within-subject variation). The proposed methods offer flexible estimation of both the population- and subject-level curves. In addition, decomposing variability of the outcomes into between- and within-subject sources is useful for identifying the dominant variance component, which in turn produces an optimal model for the covariance function. We introduce a likelihood-based method to select the smoothing parameters. Furthermore, we study the asymptotic behavior of the baseline P-spline estimator. We conduct simulation studies to investigate the performance of the proposed methods. The benefit of the between- and within-subject covariance decomposition is illustrated through an analysis of the Berkeley growth data (Tuddenham and Snyder 1954). We identify distinct patterns in the between- and within-subject covariance functions of the children's heights. We also apply the proposed methods to the Framingham Heart Study data. In the second part of the dissertation, we apply a semiparametric marginal model to analyze the Northern Manhattan Study (NOMAS) data (Sacco et al. 1998). NOMAS is a prospective, population-based study with the goal of characterizing the functional status of stroke survivors following stroke. The functional outcome is a binary indicator of functional independence, defined by a Barthel Index greater than or equal to 95. Based on generalized estimating equation (GEE) models, a previous parametric analysis showed that functional status declines over time and that the trajectories of decline differ depending on insurance status. The trend in functional status may not be linear, however, which motivates our semiparametric modeling approach. In this work, we consider a partially linear model with a time-varying coefficient to model the trend nonparametrically, and we include an interaction term between the nonparametric trend and the insurance variable. We consider both kernel-weighted local polynomial and regression spline approaches for estimating components of the semiparametric model, and we propose a test for the presence of the interaction effect.
To evaluate the performance of the parametric model in the case of model misspecification, we study the bias and efficiency of the estimators under various misspecified parametric models. We find that when the adjusted covariates are independent of time and the link function is the identity, the estimators for those covariates are asymptotically unbiased even if the time trend is misspecified. In general, however, under other conditions and a nonidentity link, the parametric estimators under the misspecified models are biased. We conduct simulation studies and compare power for testing the adjusted covariates when the time trend is modeled parametrically versus nonparametrically. In the simulation studies, we observe a significant gain in power for estimators obtained from the semiparametric model compared with the parametric model when the time trend is nonlinear. In the third part of the dissertation, we extend the semiparametric marginal model of the second part to the multilevel functional data case. This work is motivated by a clinical study of subarachnoid hemorrhage (SAH) at Columbia University, where patients undergo multiple 4-hour treatment cycles and, within each treatment cycle, repeated measurements of subjects' vital signs are recorded (Choi et al. 2012). These data have a natural multilevel structure, with treatment cycles nested within subjects and measurements nested within cycles. Most of the literature on nonparametric analysis of such multilevel functional data focuses on conditional approaches using functional mixed effects models. However, parameters obtained from the conditional models do not have direct interpretations as population average effects. When population effects are of interest, we may employ marginal regression models. In this work, we propose marginal approaches to fit multilevel functional data through penalized spline generalized estimating equations (penalized spline GEE). The procedure is effective for modeling multilevel correlated categorical outcomes as well as continuous outcomes without suffering from numerical difficulties. We provide a new variance estimator robust to misspecification of the correlation structure. We investigate the large sample properties of the penalized spline GEE with multilevel continuous data and show that the asymptotics falls into two categories. In the small-knots scenario, the estimated mean function is asymptotically efficient when the true correlation function is used, and the asymptotic bias does not depend on the working correlation matrix. In the large-knots scenario, both the asymptotic bias and variance depend on the working correlation. We propose a new method to select the smoothing parameter for marginal penalized spline regression based on an estimate of the asymptotic mean squared error (MSE). Simulation studies suggest superior performance of the new smoothing parameter selector over existing alternatives such as cross-validation in several settings. Finally, we apply the methods to the SAH study to evaluate a recent debate on discontinuing the use of Nimodipine in the clinical community.
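Since both the functional mixed effects models and the penalized spline GEE above are built on P-splines, a minimal sketch of the basic P-spline smoother (cubic B-spline basis with a second-order difference penalty, in the spirit of Eilers and Marx) may help fix ideas. It is a generic illustration on simulated data, not the authors' estimator; the knot count, penalty weight, and test function are arbitrary choices.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_smooth(x, y, n_basis=20, degree=3, lam=1.0):
    """Generic P-spline smoother: cubic B-spline basis + second-order difference penalty."""
    # Equally spaced knots with repeated boundary knots (clamped B-spline basis).
    inner = np.linspace(x.min(), x.max(), n_basis - degree + 1)
    knots = np.concatenate(([x.min()] * degree, inner, [x.max()] * degree))
    # Design matrix: evaluate each of the n_basis basis functions at the observations.
    B = np.column_stack([
        BSpline(knots, np.eye(n_basis)[j], degree)(x) for j in range(n_basis)
    ])
    # Second-order difference penalty on adjacent spline coefficients.
    D = np.diff(np.eye(n_basis), n=2, axis=0)
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ coef  # fitted values

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)
fit = pspline_smooth(x, y, lam=5.0)
```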
6

Regression based principal component analysis for sparse functional data with applications to screening pubertal growth paths

Zhang, Wenfei January 2012
Pediatric growth paths are smooth trajectories of body-size measurements (e.g., height or weight). They are observed at irregular times due to individual needs. It is clinically important to screen such growth paths; however, rigorous quantitative methods are largely missing from the literature. In the first part of this dissertation, we propose a new screening method based on principal component analysis for growth paths (sparse functional data). An estimation algorithm using alternating regressions is developed, and the resulting component functions are shown to be uniformly consistent. The proposed method does not require any distributional assumptions and is computationally feasible. It is then applied to monitor pubertal growth among a group of Finnish teenagers and yields interesting insights. A Monte Carlo study is conducted to investigate the performance of our proposed algorithm, with comparison to existing methods. In the second part of the dissertation, the proposed screening method is further extended to incorporate subject-level covariates, such as parental information. When it is applied to the same group of Finnish teenagers, it shows enhanced screening performance in identifying possibly abnormal growth paths. Simulation studies are also conducted to validate the proposed covariate-adjusted method.
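The alternating-regressions idea can be illustrated on a dense, regular grid: the leading principal component function and the subject scores are updated in turn by least-squares regressions. The sketch below ignores the sparse, irregular sampling that the proposed method is designed to handle, and the simulated data are purely illustrative.

```python
import numpy as np

def rank1_alternating(X, n_iter=50):
    """Leading principal component of a subjects-by-gridpoints matrix via alternating
    least-squares regressions: scores given the component function, then the
    component function given the scores (columns are centered first)."""
    X = X - X.mean(axis=0)                # center each grid point
    phi = np.ones(X.shape[1])             # initial component function
    phi /= np.linalg.norm(phi)
    for _ in range(n_iter):
        scores = X @ phi                  # regress each subject's curve on phi (unit norm)
        phi = X.T @ scores                # regress each grid point on the scores
        phi /= np.linalg.norm(phi)        # keep the component function at unit norm
    return scores, phi

rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 60)
true_phi = np.sin(np.pi * grid)
X = rng.normal(0, 1, (80, 1)) * true_phi + rng.normal(0, 0.2, (80, 60))
scores, phi_hat = rank1_alternating(X)
```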
7

On Compositional Data Modeling and Its Biomedical Applications

Zhang, Bingzhi January 2013
Compositional data occur naturally in biomedical studies that investigate changes in the proportions of various components of a combined medical measurement. Statistical methodology for analyzing this type of data is underdeveloped. Currently, the multivariate logit-normal model seems to be the only model routinely used for analyzing compositional data; its applications have been mainly in geology, and it has yet to gain traction in the biomedical fields. In this dissertation, we propose the multivariate simplex model as an alternative method for modeling compositional data, either cross-sectional or longitudinal, and develop statistical methods to analyze such data. We suggest three approaches to making a fair comparison between the multivariate simplex models and the multivariate logit-normal models. The simulations indicate that our proposed multivariate simplex models often outperform the multivariate logit-normal models.
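As context for the comparison, the sketch below illustrates the standard multivariate logit-normal approach mentioned above: compositions are mapped to Euclidean space with an additive log-ratio transform, modeled as multivariate normal, and mapped back to the simplex. It is a generic illustration of the baseline model, not the proposed multivariate simplex model, and the toy data are arbitrary.

```python
import numpy as np

def alr(p):
    """Additive log-ratio transform: map D-part compositions to R^(D-1),
    using the last component as the reference."""
    return np.log(p[:, :-1] / p[:, [-1]])

def alr_inverse(z):
    """Map alr coordinates back to the simplex."""
    e = np.exp(np.column_stack([z, np.zeros(len(z))]))
    return e / e.sum(axis=1, keepdims=True)

# Fit a multivariate logit-normal model by moment estimation in alr space,
# then draw new compositions from the fitted model.
rng = np.random.default_rng(2)
comps = rng.dirichlet([4.0, 2.0, 1.0], size=300)   # toy 3-part compositional data
z = alr(comps)
mu, cov = z.mean(axis=0), np.cov(z, rowvar=False)
new_comps = alr_inverse(rng.multivariate_normal(mu, cov, size=5))
```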
8

On Wavelet-Based Methods for Scalar-on-Function Regression

Ciarleglio, Adam J. January 2013
This thesis consists of work done on three projects which extend and employ wavelet-based functional linear regression. In the first project, we propose a wavelet-based approach to functional mixture regression. In our approach, the functional predictor and the unknown component-specific coefficient functions are projected onto an appropriate wavelet basis and simultaneous regularization and estimation are achieved via an l1-penalized fitting procedure that is carried out using an expectation-maximization algorithm. We provide an efficient fitting algorithm, propose a technique for constructing non-parametric confidence bands, demonstrate the performance of our methods through extensive simulations, and apply them to real data in order to investigate the relationship between fractional anisotropy profiles and cognitive function in subjects with multiple sclerosis. In the second project, we propose a new wavelet-based estimator for estimating the coefficient function in a functional linear model. Our estimator attempts to take account of the structured sparsity of the wavelet coefficients used to represent the coefficient function in the fitting procedure. We propose a characterization of the neighborhood structure of wavelet coefficients and exploit this structure in our estimation procedure. We discuss the motivation for our penalized estimator, describe the fitting procedure which can be carried out with existing software, and examine properties of the estimator through simulation. The third and final project explores three novel approaches to using functional data derived from optical coherence tomography devices for diagnosing glaucoma. The first approach uses wavelet-based functional logistic regression to develop predictive models based on measures of retinal nerve fiber layer (RNFL) thickness. The estimates are obtained via an elastic net penalized fitting procedure. The second and third approaches consist of using novel measures of RNFL characteristics to discriminate between healthy and glaucomatous eyes. The three new approaches are compared with commonly used predictive models using data from a case-control study of African American subjects recruited by ophthalmologists at the Harkness Eye Center of Columbia University.
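A bare-bones version of the wavelet-domain approach is sketched below: each functional predictor is expanded in a discrete wavelet basis and the scalar outcome is regressed on the coefficients with an l1 penalty. This is a generic scalar-on-function lasso, not the mixture-regression EM algorithm or the structured-sparsity estimator developed in the thesis, and the simulated data, wavelet, and penalty level are illustrative choices.

```python
import numpy as np
import pywt
from sklearn.linear_model import Lasso

def wavelet_features(curves, wavelet="db4", level=3):
    """Discrete wavelet transform of each functional predictor curve,
    flattened into a single coefficient vector per subject."""
    return np.array([np.concatenate(pywt.wavedec(c, wavelet, level=level))
                     for c in curves])

# Toy scalar-on-function regression: the scalar outcome depends on each curve only
# through a localized bump, which the lasso should recover as a handful of
# nonzero wavelet coefficients.
rng = np.random.default_rng(3)
grid = np.linspace(0, 1, 128)
curves = rng.normal(0, 1, (200, 1)) * np.exp(-((grid - 0.3) ** 2) / 0.01) \
         + rng.normal(0, 0.1, (200, 128))
y = curves[:, 35:45].mean(axis=1) + rng.normal(0, 0.1, 200)

W = wavelet_features(curves)
model = Lasso(alpha=0.01).fit(W, y)   # l1 penalty yields a sparse coefficient function
```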
9

Sample Size Calculation Based on the Semiparametric Analysis of Short-term and Long-term Hazard Ratios

Wang, Yi January 2013
We derive sample size formulae for survival data with non-proportional hazard functions under both fixed and contiguous alternatives. Sample size determination has been widely discussed in the literature for studies with failure-time endpoints. Many researchers have developed methods under the assumption of proportional hazards and contiguous alternatives. Without covariate adjustment, the logrank test statistic is often used for the sample size and power calculation. With covariate adjustment, the approaches are often based on the score test statistic for the Cox proportional hazards model. Such methods, however, are inappropriate when the proportional hazards assumption is violated. We develop methods to calculate the sample size based on the semiparametric analysis of short-term and long-term hazard ratios. The methods are built on a semiparametric model by Yang and Prentice (2005). The model accommodates a wide range of patterns of hazard ratios, and includes the Cox proportional hazards model and the proportional odds model as special cases. Therefore, the proposed methods can be used for survival data with proportional or non-proportional hazard functions. In particular, the sample size formulae of Schoenfeld (1983) and Hsieh and Lavori (2000) can be obtained as special cases of our methods under contiguous alternatives.
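For reference, the classical proportional-hazards special case mentioned above can be computed directly. The sketch below implements Schoenfeld's (1983) event-count formula for a two-sided log-rank test and converts it to a total sample size given an overall event probability; it illustrates only that special case, not the proposed calculation under the Yang and Prentice (2005) model, and the example numbers are arbitrary.

```python
import math
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.8, alloc=0.5):
    """Required number of events for a two-sided log-rank test under
    proportional hazards (Schoenfeld, 1983); alloc is the allocation proportion."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2)

def total_sample_size(hazard_ratio, prob_event, **kwargs):
    """Convert the required event count to total subjects, given the overall event probability."""
    return schoenfeld_events(hazard_ratio, **kwargs) / prob_event

d = schoenfeld_events(hazard_ratio=0.7)                      # about 247 events
n = total_sample_size(hazard_ratio=0.7, prob_event=0.6)      # total subjects needed
```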
