241

A Comparison for Longitudinal Data Missing Due to Truncation

Liu, Rong 01 January 2006 (has links)
Many longitudinal clinical studies suffer from patient dropout. Often the dropout is nonignorable, and the missing-data mechanism needs to be incorporated in the analysis. Methods for handling missing data make various assumptions about this mechanism, and their utility in practice depends on whether those assumptions hold in a specific application. Ramakrishnan and Wang (2005) proposed a method (MDT) to handle nonignorable missing data, where missingness occurs because the observations exceed an unobserved threshold. Assuming that the observations arise from a truncated normal distribution, they suggested an EM algorithm to simplify the estimation. In this dissertation the EM algorithm is implemented for the MDT method when the data may also include missing-at-random (MAR) cases. A data set in which the missing data occur due to clinical deterioration and/or improvement is considered for illustration; the missing data occur at both ends of the truncated normal distribution. A simulation study is conducted to compare the performance of the MDT method with other relevant methods. The factors chosen for the simulation study included the missing-data mechanism, the form of the response function, missingness at one or two time points, the dropout rate, the sample size, and the correlation under an AR(1) structure. It was found that the choice of method for dealing with the missing data is important, especially when a large proportion of the data is missing. The MDT method appears to perform best when there is reason to believe that the assumption of a truncated normal distribution is appropriate. A multiple imputation (MI) procedure under the MDT method is also proposed to accommodate the uncertainty introduced by imputation. The proposed method combines the MDT method with Rubin's (1987) MI method, and a procedure to implement it is described.
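A minimal sketch of the underlying idea, not the MDT implementation itself: an EM algorithm for a normal sample in which some observations are missing because they exceeded a known threshold. The threshold, starting values, and convergence rule below are assumptions for illustration.

```python
# Illustrative sketch only: EM for a normal sample in which k observations are
# missing because they exceeded a known threshold c (right truncation).
import numpy as np
from scipy.stats import norm

def em_truncated_normal(y_obs, k_missing, c, tol=1e-8, max_iter=500):
    """MLE of (mu, sigma) when k_missing values are only known to exceed c."""
    y_obs = np.asarray(y_obs, dtype=float)
    n = y_obs.size + k_missing
    mu, sigma = y_obs.mean(), y_obs.std(ddof=0) + 1e-6     # crude starting values
    for _ in range(max_iter):
        alpha = (c - mu) / sigma
        h = norm.pdf(alpha) / norm.sf(alpha)                # inverse Mills ratio
        # E-step: expected sufficient statistics for the missing observations
        e_x = mu + sigma * h                                # E[X | X > c]
        e_x2 = sigma**2 * (1 + alpha * h - h**2) + e_x**2   # E[X^2 | X > c]
        # M-step: complete-data ML estimates of mu and sigma
        mu_new = (y_obs.sum() + k_missing * e_x) / n
        sigma_new = np.sqrt((np.sum(y_obs**2) + k_missing * e_x2) / n - mu_new**2)
        if abs(mu_new - mu) < tol and abs(sigma_new - sigma) < tol:
            mu, sigma = mu_new, sigma_new
            break
        mu, sigma = mu_new, sigma_new
    return mu, sigma

# Example: generate data, drop everything above the threshold, then recover (mu, sigma).
rng = np.random.default_rng(1)
x = rng.normal(10.0, 2.0, size=200)
c = 12.0
print(em_truncated_normal(x[x <= c], k_missing=int((x > c).sum()), c=c))
```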
242

THE EFFECT OF BASELINE CLUSTER STRATIFICATION ON THE POWER OF PRE-POST ANALYSIS

HU, FENGJIAO 18 July 2012 (has links)
The purpose of this study is to determine whether the power to detect an intervention effect in a pre- and post-study can be increased by using a stratified randomized controlled design. We consider a stratified randomized controlled design with two study arms and two time points, where strata are determined by clustering on baseline outcomes of the primary measure. A modified hierarchical clustering algorithm is developed that guarantees optimality while requiring each cluster to contain at least one subject per study arm. Power is calculated from simulated bivariate normal primary measures whose baseline outcomes follow a mixture normal distribution. The simulation shows that this approach can increase power compared with a completely randomized controlled study with no stratification, and the difference in power between the stratified and unstratified designs increases as the sample size increases or as the correlation between the pre- and post-measures decreases.
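A simplified sketch of the comparison, not the dissertation's method: the modified hierarchical clustering algorithm is replaced by pairing subjects on sorted baseline values, and the stratified design is analysed with a paired t-test. The effect size, mixture components, and correlation are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_trial(n=40, rho=0.5, effect=0.6, stratified=True):
    # Mixture-normal baseline: a subject-level component shifts both pre and post.
    comp = rng.choice([-1.0, 1.0], size=n)
    noise = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    pre = comp + noise[:, 0]
    post_control = comp + noise[:, 1]
    if stratified:
        order = np.argsort(pre)                     # strata = neighbouring baseline pairs
        diffs = []
        for i in range(0, n - 1, 2):
            a, b = rng.permutation(order[i:i + 2])  # randomize within each stratum
            diffs.append((post_control[a] + effect) - post_control[b])
        _, p = stats.ttest_1samp(diffs, 0.0)        # stratified (paired) analysis
    else:
        arm = rng.permutation(np.repeat([0, 1], n // 2))
        post = post_control + effect * arm
        _, p = stats.ttest_ind(post[arm == 1], post[arm == 0])
    return p < 0.05

def power(stratified, n_sim=2000):
    return np.mean([one_trial(stratified=stratified) for _ in range(n_sim)])

print("stratified power:  ", power(True))
print("unstratified power:", power(False))
```

Because paired subjects share similar baseline values, the subject-level mixture component largely cancels within pairs, which is the mechanism by which stratification can raise power.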
243

A SEQUENTIAL ALGORITHM TO IDENTIFY THE MIXING ENDPOINTS IN LIQUIDS IN PHARMACEUTICAL APPLICATIONS

Saxena, Akriti 28 July 2009 (has links)
The objective of this thesis is to develop a sequential algorithm that determines, accurately and quickly, the point in time at which a product is well mixed or reaches a steady-state plateau, in terms of the Refractive Index (RI). An algorithm using sequential non-linear model fitting and prediction is proposed. A simulation study representing typical scenarios in a liquid manufacturing process in the pharmaceutical industry was performed to evaluate the proposed algorithm. The data were simulated from the Gompertz model with autocorrelated normal errors, using a set of 27 different combinations of the Gompertz parameters. The results of the simulation study suggest that the algorithm is insensitive to the functional form and consistently achieves the goal with the fewest time points.
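Illustrative sketch only: a sequential Gompertz fit to a simulated RI-type signal, stopping when the fitted curve sits within 1% of its own estimated plateau. The stopping rule, parameter values, and noise model are assumptions, not the thesis's algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Gompertz response: a = plateau, b and c control onset and rate."""
    return a * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(3)
t_grid = np.arange(1.0, 61.0)                       # one reading per minute
signal = gompertz(t_grid, a=1.0, b=3.0, c=0.15)     # mixing signal scaled to [0, 1]
readings = signal + rng.normal(0, 0.01, size=t_grid.size)

def sequential_endpoint(t, y, min_points=8, tol=0.01):
    """First time at which the fitted curve is within tol of its estimated plateau."""
    for k in range(min_points, len(t) + 1):
        try:
            (a, b, c), _ = curve_fit(gompertz, t[:k], y[:k],
                                     p0=(1.0, 2.0, 0.1), maxfev=10000)
        except RuntimeError:
            continue                                 # fit did not converge; collect more data
        if gompertz(t[k - 1], a, b, c) >= (1.0 - tol) * a:
            return t[k - 1]                          # declare the product well mixed here
    return None

print("estimated mixing endpoint (minutes):", sequential_endpoint(t_grid, readings))
```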
244

Inferential Methods for High-Throughput Methylation Data

Capparuccini, Maria 23 November 2010 (has links)
The role of abnormal DNA methylation in the progression of disease is a growing area of research that relies upon the establishment of sound statistical methods. The common method for declaring differential methylation between two groups at a given CpG site, as summarized by the difference in proportions methylated, Δβ = β₁ − β₂, has been a Filtered Two Sample t-test with the recommended filter of 0.17 (Bibikova et al., 2006b). In this dissertation, we performed a re-analysis of the data used in recommending that threshold by fitting a mixed-effects ANOVA model. It was determined that the 0.17 filter is not accurate, and we conjectured that application of a Filtered Two Sample t-test likely leads to a loss of power. Further, the Two Sample t-test assumes that the data arise from an underlying distribution spanning the entire real line, whereas β₁ and β₂ are constrained to the interval [0, 1]. Additionally, imposing a filter at the minimum level of detectable difference on a Two Sample t-test likely reduces power for CpG sites that are truly, but less strongly, differentially methylated. Therefore, we compared the Two Sample t-test and the Filtered Two Sample t-test, which are widely used but largely untested with respect to their performance, with three proposed methods: a Beta distribution test, a Likelihood ratio test, and a Bootstrap test, each designed to address distributional concerns in the current testing methods. Simulations comparing Type I and Type II error rates ultimately showed that the (unfiltered) Two Sample t-test and the Beta distribution test performed comparatively well.
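A small sketch of the filtering idea discussed above, using simulated beta values. The 0.17 threshold is the recommendation cited in the abstract; the Beta parameters, sample sizes, and simulation set-up are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def filtered_t_test(g1, g2, filter_threshold=0.17, alpha=0.05):
    """Call a CpG site differentially methylated only if the t-test rejects AND
    the absolute difference in mean proportion methylated exceeds the filter."""
    delta_beta = g1.mean() - g2.mean()
    _, p = stats.ttest_ind(g1, g2)
    return (p < alpha) and (abs(delta_beta) > filter_threshold)

# Simulate one CpG site with a true but modest difference in methylation (about 0.10).
n = 10
group1 = rng.beta(a=30, b=70, size=n)     # mean proportion methylated about 0.30
group2 = rng.beta(a=40, b=60, size=n)     # mean proportion methylated about 0.40

_, p_unfiltered = stats.ttest_ind(group1, group2)
print("unfiltered t-test rejects:", p_unfiltered < 0.05)
print("filtered t-test rejects:  ", filtered_t_test(group1, group2))
```

A true difference smaller than 0.17 is discarded by the filter no matter how strong the statistical evidence, which is the power loss the abstract describes.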
245

A Normal-Mixture Model with Random-Effects for RR-Interval Data

Ketchum, Jessica McKinney 01 January 2006 (has links)
In many applications of random-effects models to longitudinal data, such as heart rate variability (HRV) data, a normal-mixture distribution seems more appropriate than the usual normality assumption. While random-effects methodology is well developed for several distributions in the exponential family, the normal-mixture case has not been dealt with adequately in the literature. The models and estimation methods proposed in the past assume the conditional model (fixing the random effects) to be normal and allow a mixture distribution for the random effects (Xu and Hedeker, 2001; Xu, 1995). The methods proposed in this dissertation instead assume the conditional model to be a normal mixture while the random effects are assumed to be normal. This is motivated primarily by the HRV data, which appear to follow a normal mixture within subjects. Another advantage of this model is that estimation becomes much simpler through the use of an EM algorithm; existing methods and software, such as PROC MIXED in SAS, are exploited to facilitate the estimation procedure. A simulation study examines the properties of the random-effects model with a normal-mixture distribution and of the EM-based parameter estimates. The study shows that the estimates have properties similar to those of the usual normal random-effects models; the between-subject variance parameter seems to require larger numbers of subjects to achieve reasonable accuracy, which is typical of all random-effects models. The HRV data are used to illustrate the random-effects normal-mixture method. These data consist of 9 subjects who completed a series of five speech tasks (Cacioppo et al., 2002). For each task, a series of RR-intervals was collected during the baseline, preparation, and delivery periods, and information on age and gender was also available. The random-effects mixture model presented in this dissertation treats subject as random and models age, gender, task, type, and task × type as fixed effects. The analysis leads to the conclusion that all the fixed effects are statistically significant, and the model further indicates that a two-component normal mixture with the same mixing proportion across individuals fits the data adequately.
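A minimal sketch of the two-component normal-mixture idea for within-subject RR-interval data: it fits a plain two-component mixture by EM and ignores the random subject effects and fixed covariates of the full model. All parameter values below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def em_two_component(y, n_iter=200):
    """EM for y ~ pi*N(mu1, sd1^2) + (1-pi)*N(mu2, sd2^2)."""
    pi, mu1, mu2 = 0.5, np.percentile(y, 25), np.percentile(y, 75)
    sd1 = sd2 = y.std()
    for _ in range(n_iter):
        # E-step: posterior probability that each observation came from component 1
        d1 = pi * norm.pdf(y, mu1, sd1)
        d2 = (1 - pi) * norm.pdf(y, mu2, sd2)
        w = d1 / (d1 + d2)
        # M-step: weighted updates of the mixture parameters
        pi = w.mean()
        mu1, mu2 = np.average(y, weights=w), np.average(y, weights=1 - w)
        sd1 = np.sqrt(np.average((y - mu1) ** 2, weights=w))
        sd2 = np.sqrt(np.average((y - mu2) ** 2, weights=1 - w))
    return pi, (mu1, sd1), (mu2, sd2)

# Simulated RR intervals (ms) for one subject: a mixture of two beat populations.
rng = np.random.default_rng(11)
rr = np.concatenate([rng.normal(800, 30, 300), rng.normal(950, 40, 200)])
print(em_two_component(rr))
```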
246

Considerations for Screening Designs and Follow-Up Experimentation

Leonard, Robert D 01 January 2015 (has links)
The success of screening experiments hinges on the effect sparsity assumption, which states that only a few of the factorial effects of interest actually have an impact on the system being investigated. Developing a screening methodology that harnesses this assumption requires careful consideration of the strengths and weaknesses of a proposed experimental design, as well as of the ability of an analysis procedure to properly detect the major influences on the response. For the most part, however, screening designs and their complementing analysis procedures have been proposed separately in the literature, without clear consideration of their ability to perform together as a single screening methodology. As a contribution to this growing area of research, this dissertation investigates the pairing of non-replicated and partially replicated two-level screening designs with model selection procedures that allow for the incorporation of a model-independent error estimate. Using simulation, we focus on the ability to screen active effects from a first-order model with two-factor interactions and on the possible benefits of using partial replication as part of an overall screening methodology. We begin with single-criterion optimum designs and propose a new criterion for creating partially replicated screening designs. We then extend the newly proposed criterion into a multi-criterion framework in which both estimation of the assumed model and protection against model misspecification are considered. This is an important extension of the work, since initial knowledge of the system under investigation is considered to be poor in the cases presented. A methodology for reducing a set of competing design choices is also investigated, using visual inspection of plots that represent uncertainty in design criterion preferences. Because screening methods typically involve sequential experimentation, we conclude with simulation results that incorporate a single follow-up phase of experimentation, extending the newly proposed criterion to create optimal partially replicated follow-up designs and comparing methodologies that differ in how knowledge gathered from the initial screening phase is incorporated into the follow-up phase.
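Illustrative only: a small helper that evaluates a candidate two-level design by a determinant-based (D-type) measure for the main-effects-plus-two-factor-interaction model and counts the pure-error degrees of freedom contributed by replicated runs. This is not the dissertation's proposed criterion, just the kind of quantities such multi-criterion comparisons trade off.

```python
import itertools
import numpy as np

def model_matrix(design):
    """Columns: intercept, main effects, and all two-factor interactions (+/-1 coding)."""
    design = np.asarray(design, dtype=float)
    cols = [np.ones(len(design))] + [design[:, j] for j in range(design.shape[1])]
    cols += [design[:, i] * design[:, j]
             for i, j in itertools.combinations(range(design.shape[1]), 2)]
    return np.column_stack(cols)

def evaluate(design):
    X = model_matrix(design)
    log_det = np.linalg.slogdet(X.T @ X)[1]           # D-type information measure
    n = len(design)
    n_distinct = len({tuple(r) for r in np.asarray(design).tolist()})
    return log_det, n - n_distinct                    # pure-error degrees of freedom

full_2cubed = list(itertools.product([-1, 1], repeat=3))
candidate = full_2cubed + [(-1, -1, -1), (1, 1, 1)]   # two runs replicated for pure error
log_det, pe_df = evaluate(candidate)
print(f"log|X'X| = {log_det:.2f}, pure-error df = {pe_df}")
```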
247

Accounting for Additional Heterogeneity: A Theoretic Extension of an Extant Economic Model

Barney, Bradley John 26 October 2007 (has links)
In economics, a representative agent is often assumed, but this is a very rigid assumption. Hall and Jones (2004b) presented an economic model that essentially provided a representative agent for each age group in determining the group's health level function. Our work seeks to extend their theoretical version of the model by allowing two representative agents for each age, one for a “Healthy” and one for a “Sick” risk-factor group, to accommodate additional heterogeneity in the populace. An approach for including even more risk-factor groups is also briefly discussed. While our extended theoretical model is not applied directly to the relevant data, several techniques that could be applicable, were such data obtained, are demonstrated on other data sets, including examples of linear classification, baseline-category logit models, and the genetic algorithm.
248

A Singular Perturbation Approach to the Fitzhugh-Nagumo PDE for Modeling Cardiac Action Potentials.

Brooks, Jeremy 01 May 2011 (has links)
The study of cardiac action potentials has many medical applications. Dr. Denis Noble first used mathematical models to study cardiac action potentials in the 1960s. We begin our study with one form of the Fitzhugh-Nagumo partial differential equation and use the non-classical method to produce a closed-form solution for the decoupled Fitzhugh-Nagumo equation. Using voltage recordings of action potentials in a cardiac myocyte and in Purkinje fibers, we estimate parameter values for the closed-form solution with standard linear and non-linear regression methods. The results are limited, leading us to perturb the solution to obtain a better fit; we turn to singular perturbation theory to justify our pole-based approach. Finally, we test the model on independent action potential data sets and draw conclusions about how it can be applied.
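For intuition only: the space-clamped (ODE) Fitzhugh-Nagumo system integrated numerically. The thesis works with a PDE form and a closed-form solution obtained by the non-classical method; the parameter values and stimulus below are standard textbook choices, not those estimated from the voltage recordings.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, y, a=0.7, b=0.8, eps=0.08, I=0.5):
    v, w = y                                # v: membrane potential, w: recovery variable
    dv = v - v**3 / 3 - w + I               # cubic excitation with external stimulus I
    dw = eps * (v + a - b * w)              # slow linear recovery
    return [dv, dw]

sol = solve_ivp(fitzhugh_nagumo, t_span=(0, 200), y0=[-1.0, 1.0],
                dense_output=True, max_step=0.1)
t = np.linspace(0, 200, 2000)
v = sol.sol(t)[0]
print("peak potential:", v.max(), "  number of spikes:",
      int(np.sum((v[1:-1] > v[:-2]) & (v[1:-1] > v[2:]) & (v[1:-1] > 1.0))))
```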
249

A Comparative Study between the Standards of Learning and In-Class Grades.

Fuller, Randetta Lynn 13 August 2010 (has links)
We examined the Standards of Learning mathematics scores and in-class grades for a rural Virginia county public school system. We looked at the third, fourth, fifth, sixth, and seventh grades as well as Algebra I, Algebra II, and Geometry classes. The purpose was to determine whether there is a strong correlation between the Standards of Learning scores and the students' in-class grades. Had a strong enough correlation been found, we would have used the in-class grades alone to predict the Standards of Learning test scores. However, we found that the students' in-class grades by themselves do not adequately predict the Standards of Learning test scores: the coefficient of determination ranged from 6.8% to 84.4%, meaning that at best 84.4% of the variation in the response is explained by the model (Algebra II) and at worst only 6.8% (Algebra I).
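The calculation behind the coefficients of determination quoted above, shown on made-up numbers (the actual county data are not reproduced here): regress SOL scale scores on in-class grades and report R².

```python
import numpy as np
from scipy import stats

# Hypothetical in-class grades (percent) and SOL scale scores for one course.
grades = np.array([72, 78, 81, 85, 88, 90, 93, 95, 97, 99], dtype=float)
sol    = np.array([388, 402, 395, 421, 430, 447, 439, 470, 481, 492], dtype=float)

fit = stats.linregress(grades, sol)
r_squared = fit.rvalue ** 2
print(f"R^2 = {r_squared:.3f}: "
      f"{100 * r_squared:.1f}% of the variation in SOL scores is explained by grades")
```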
250

Introduction to STATISTICS in a Biological Context

Seier, Edith, Joplin, Karl H. 01 January 2011 (has links)
This is a textbook written for undergraduate students in biology or the health sciences taking an introductory statistics course.
