31

Off-line quality control by robust parameter design

Min, Jun Young January 1900 (has links)
Master of Science / Department of Statistics / Shie-Shien Yang / There has been considerable debate over robust parameter design, and as a result many approaches suited to it have been presented. In my report, I illustrate and present Taguchi's robust parameter design, the response surface approach, and the semi-parametric design. Considerable attention is given to the semi-parametric design, a recent approach introduced by Pickle, Robinson, Birch and Anderson-Cook (2006). The method is a combined parametric and nonparametric technique to improve the estimates of both the mean and the variance of the response.
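As a rough illustration of the response surface side of this toolkit, the R sketch below fits the classical dual response surface models, one for the per-run sample mean and one for the log sample variance; the design, factor names, and replicate counts are illustrative assumptions, not taken from the report.

```r
## A minimal sketch of the dual response surface approach: fit separate
## second-order models to the sample mean and log sample variance observed
## at each control-factor setting.  All data here are simulated.
set.seed(1)
d <- expand.grid(x1 = c(-1, 0, 1), x2 = c(-1, 0, 1))      # 3^2 control-factor design
reps <- lapply(seq_len(nrow(d)), function(i)
  rnorm(5, mean = 10 + 2 * d$x1[i], sd = exp(0.3 * d$x2[i])))  # 5 replicates per run
d$ybar  <- sapply(reps, mean)        # per-run sample mean
d$lnvar <- log(sapply(reps, var))    # per-run log sample variance
fit.mean <- lm(ybar  ~ x1 + x2 + I(x1^2) + I(x2^2) + x1:x2, data = d)
fit.var  <- lm(lnvar ~ x1 + x2 + I(x1^2) + I(x2^2) + x1:x2, data = d)
## Robust settings keep the predicted mean on target while minimizing the
## predicted variance surface.
```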
32

Data analysis for quantitative determinations of polar lipid molecular species

Song, Tingting January 1900 (has links)
Master of Science / Department of Statistics / Gary L. Gadbury / This report presents an analysis of data resulting from a lipidomics experiment. The experiment sought to determine the changes in the lipidome of big bluestem prairie grass when exposed to stressors. The two stressors were drought (versus a watered condition) and a rust infection (versus no infection), and were whole plot treatments arranged in a 2 by 2 factorial. A split plot treatment factor was the position on a sampled leaf (top half versus bottom half). In addition, samples were analyzed at different times, representing a blocking factor. A total of 110 samples were used and, for each sample, concentrations of 137 lipids were obtained. Many lipids were not detected for certain samples and, in some cases, a lipid was not detected in most samples. Thus, each lipid was analyzed separately using a modeling strategy that combined mixed effects linear models with a categorical analysis technique, the latter used for certain lipids to determine whether a pattern of observed zeros was associated with the treatment condition(s). In addition, p-values from tests of fixed effects in a mixed effects model were computed three different ways and compared. Results in general show that the drought condition has the greatest effect on the concentrations of certain lipids, followed by the position on the leaf; the rust condition had the least effect on lipid concentrations.
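A hedged R sketch of the kind of per-lipid split-plot mixed model described above is shown below, using lme4; the data are simulated and all names ('batch' for the analysis-time block, 'plant' for the whole-plot unit) are hypothetical.

```r
## Minimal sketch of a per-lipid split-plot mixed model in the spirit of the
## analysis described above; data are simulated and names are hypothetical.
library(lme4)
set.seed(42)
d <- expand.grid(drought = c("wet", "dry"), rust = c("none", "rust"),
                 rep = 1:5, position = c("top", "bottom"))
d$plant <- interaction(d$drought, d$rust, d$rep)    # whole-plot experimental unit
d$batch <- factor(rep(1:4, length.out = nrow(d)))   # blocking factor: analysis time
d$concentration <- rnorm(nrow(d))                   # placeholder lipid concentration
fit <- lmer(concentration ~ drought * rust * position +
              (1 | batch) + (1 | plant), data = d)
anova(fit)   # F statistics for the whole-plot and split-plot fixed effects
```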
33

Classification of image pixels based on minimum distance and hypothesis testing

Ghimire, Santosh January 1900 (has links)
Master of Science / Department of Statistics / Haiyan Wang / We introduce a new classification method that is applicable to classifying image pixels. This work was motivated by the test-based classification (TBC) introduced by Liao and Akritas (2007). We found that direct application of TBC to image pixel classification can lead to a high misclassification rate. We propose a method that combines minimum distance with evidence from hypothesis testing to classify image pixels. The method is implemented in the R programming language. Our method eliminates this drawback of the approach of Liao and Akritas (2007). Extensive experiments show that our modified method works better in the classification of image pixels in comparison with some standard methods of classification, namely Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Classification Trees (CT), Polyclass classification, and TBC. We demonstrate that our method works well for both grayscale and color images.
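The abstract does not spell out the combined rule, so the R sketch below illustrates only the minimum-distance (nearest-centroid) component on simulated pixel features; the hypothesis-testing side of the method is not reproduced, and all names are hypothetical.

```r
## Sketch of the minimum-distance (nearest-centroid) component only; the
## report's full method also incorporates evidence from hypothesis tests.
nearest_centroid <- function(train_x, train_y, new_x) {
  centroids <- sapply(split(as.data.frame(train_x), train_y), colMeans)
  # columns of 'centroids' are class mean vectors; assign each new pixel to
  # the class whose centroid is closest in Euclidean distance
  apply(new_x, 1, function(px)
    colnames(centroids)[which.min(colSums((centroids - px)^2))])
}
set.seed(1)
X <- rbind(matrix(rnorm(100, 0), 50), matrix(rnorm(100, 3), 50))  # two pixel classes
y <- rep(c("sky", "grass"), each = 50)
nearest_centroid(X, y, rbind(c(0.1, -0.2), c(2.9, 3.1)))  # "sky" "grass"
```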
34

Robustness of normal theory inference when random effects are not normally distributed

Devamitta Perera, Muditha Virangika January 1900 (has links)
Master of Science / Department of Statistics / Paul I. Nelson / The variance of a response in a one-way random effects model can be expressed as the sum of the variability among and within treatment levels. Conventional methods of statistical analysis for these models are based on the assumption of normality of both sources of variation. Since this assumption is not always satisfied and can be difficult to check, it is important to explore the performance of normal-based inference when normality does not hold. This report uses simulation to explore and assess the robustness of the F-test for the presence of an among-treatment variance component and the normal theory confidence interval for the intra-class correlation coefficient under several non-normal distributions. It was found that the power function of the F-test is robust for moderately heavy-tailed random error distributions, but for very heavy-tailed random error distributions power is relatively low, even for a large number of treatments. Coverage rates of the confidence interval for the intra-class correlation coefficient are far from nominal for very heavy-tailed, non-normal random effect distributions.
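One cell of such a simulation can be sketched in R as below: power of the usual one-way ANOVA F-test for the among-treatment variance component when the errors are heavy-tailed (t with 3 degrees of freedom). The design sizes and distributions are illustrative, not those used in the report.

```r
## One simulation cell: estimated power of the F-test for H0: sigma_a^2 = 0
## when random errors follow a heavy-tailed t distribution with 3 df.
set.seed(1)
k <- 10; n <- 5; sigma_a <- 1      # treatments, replicates, among-treatment SD
power <- mean(replicate(2000, {
  a   <- rnorm(k, 0, sigma_a)                  # random treatment effects
  y   <- rep(a, each = n) + rt(k * n, df = 3)  # heavy-tailed within-treatment errors
  trt <- factor(rep(1:k, each = n))
  anova(lm(y ~ trt))[1, "Pr(>F)"] < 0.05       # reject at the 5% level?
}))
power
```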
35

Ordinary least squares regression of ordered categorical data: inferential implications for practice

Larrabee, Beth R. January 1900 (has links)
Master of Science / Department of Statistics / Nora Bello / Ordered categorical responses are frequently encountered in many disciplines. Examples of interest in agriculture include quality assessments, such as for soil or food products, and evaluation of lesion severity, such as teat end status in dairy cattle. Ordered categorical responses are characterized by multiple categories or levels recorded on a ranked scale that, while conveying relative order, are not informative of the magnitude of, or proportionality between, levels. A number of statistically sound models for ordered categorical responses have been proposed, such as logistic regression and probit models, but these are commonly underutilized in practice. Instead, the ordinary least squares linear regression model is often employed with ordered categorical responses despite violation of basic model assumptions. In this study, the inferential implications of this approach are investigated using a simulation study that evaluates robustness based on realized Type I error rate and statistical power. The design of the simulation study is motivated by applied research cases reported in the literature. A variety of plausible scenarios were considered for simulation, including various shapes of the frequency distribution and different numbers of categories of the ordered categorical response. Using a real dataset on frequency of antimicrobial use in feedlots, I demonstrate the inferential performance of ordinary least squares linear regression on ordered categorical responses relative to a probit model.
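The flavor of such a simulation can be sketched in R as follows: under the null of no predictor effect, a latent normal variable is cut into five ordered categories and OLS is applied to the category scores. The cutpoints and sample sizes are illustrative assumptions.

```r
## Realized Type I error of OLS applied to a 5-category ordered response
## when the binary predictor truly has no effect (latent-variable simulation).
set.seed(1)
type1 <- mean(replicate(5000, {
  x <- rep(0:1, each = 30)                                    # two groups of 30
  y <- findInterval(rnorm(60), c(-1.5, -0.5, 0.5, 1.5)) + 1   # scores 1..5
  summary(lm(y ~ x))$coefficients["x", "Pr(>|t|)"] < 0.05
}))
type1   # close to the nominal 0.05 in this balanced, symmetric scenario
```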
36

Parameter estimation of the Black-Scholes-Merton model

Teka, Kubrom Hisho January 1900 (has links)
Master of Science / Department of Statistics / James Neill / In financial mathematics, asset prices for European options are often modeled according to the Black-Scholes-Merton (BSM) model, a stochastic differential equation (SDE) depending on unknown parameters. A derivation of the solution to this SDE is reviewed, resulting in a stochastic process called geometric Brownian motion (GBM) which depends on two unknown real parameters referred to as the drift and volatility. For additional insight, the BSM equation is expressed as a heat equation, which is a partial differential equation (PDE) with well-known properties. For American options, it is established that asset value can be characterized as the solution to an obstacle problem, which is an example of a free boundary PDE problem. One approach for estimating the parameters in the GBM solution to the BSM model can be based on the method of maximum likelihood. This approach is discussed and applied to a dataset involving the weekly closing prices for the Dow Jones Industrial Average between January 2012 and December 2012.
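The maximum likelihood step has a closed form that can be sketched in R: under GBM, log returns over a step dt are i.i.d. normal, so the drift and volatility estimates follow from the sample mean and variance of the log returns. The prices below are simulated, not the Dow Jones series analyzed in the report.

```r
## Closed-form MLE for GBM: log returns r_i = log(S_i / S_{i-1}) are i.i.d.
## N((mu - sigma^2/2) * dt, sigma^2 * dt) under the BSM model.
set.seed(1)
dt <- 1 / 52                                  # weekly observations
S  <- 100 * exp(cumsum(rnorm(52, (0.08 - 0.5 * 0.2^2) * dt, 0.2 * sqrt(dt))))
r  <- diff(log(S))                            # weekly log returns
sigma2_hat <- mean((r - mean(r))^2) / dt      # MLE of sigma^2 (divisor n, not n - 1)
mu_hat     <- mean(r) / dt + sigma2_hat / 2   # MLE of the drift mu
c(mu = mu_hat, sigma = sqrt(sigma2_hat))
```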
37

A study of covariance structure selection for split-plot designs analyzed using mixed models

Qiu, Chen January 1900 (has links)
Master of Science / Department of Statistics / Christopher I. Vahl / In the classic split-plot design where whole plots have a completely randomized design, the conventional analysis approach assumes a compound symmetry (CS) covariance structure for the errors of observation. However, this assumption is often not true. In this report, we examine using different covariance models in PROC MIXED in the SAS system, which are widely used in repeated measures analysis, to model the covariance structure of split-plot data for which the simple compound symmetry assumption does not hold. The comparison of the covariance structure models in PROC MIXED and the conventional split-plot model is illustrated through a simulation study. In the example analyzed, the heterogeneous compound symmetry (CSH) covariance model has the smallest values of the Akaike and Schwarz Bayesian information criteria and is therefore the best model for our example data.
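The report works in SAS PROC MIXED; for consistency with the other sketches here, below is a hedged R analogue using nlme, fitting CS and CSH error structures to simulated split-plot data and comparing information criteria. The toy design and all names are assumptions.

```r
## Compare a compound symmetry (CS) fit against heterogeneous compound
## symmetry (CSH) via AIC/BIC; data are simulated and purely illustrative.
library(nlme)
set.seed(1)
d <- expand.grid(plot = factor(1:12), split = factor(c("s1", "s2", "s3")))
d$whole <- factor(rep(c("w1", "w2"), 6))[d$plot]          # whole-plot treatment
d$y <- rnorm(nrow(d), sd = rep(c(1, 1.5, 2), each = 12))  # heterogeneous variances
cs  <- gls(y ~ whole * split, data = d,
           correlation = corCompSymm(form = ~ 1 | plot))
csh <- update(cs, weights = varIdent(form = ~ 1 | split))  # split-level variances
AIC(cs, csh); BIC(cs, csh)   # smaller values indicate the better-fitting structure
```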
38

A statistical investigation into noninferiority testing for two binomial proportions

Bloedow, Nicholas January 1900 (has links)
Master of Science / Department of Statistics / Christopher Vahl / In clinical research, noninferiority trials are becoming an important tool for investigating whether a new treatment is useful. The outcome measured can be continuous (e.g. blood pressure level), time-to-event (e.g. days until heart attack), or binary (e.g. death). Rather than showing that the new treatment is superior to an active control, i.e. a standard drug or treatment already available, one tests whether the new treatment is not meaningfully worse than the active control. Here we consider a binary outcome such as success or failure following an intervention. Evaluation of the treatment relative to control becomes a comparison of two binomial proportions; without loss of generality, it is assumed that the larger the probability of success for an intervention, the better. Simulation studies under these assumptions were programmed over a variety of sample sizes and true population proportions to compare the performance of asymptotic noninferiority methods based on the risk difference (with and without a continuity correction), the relative risk, and the odds ratio from two independent samples. The methods were compared on Type I error rate, power when the true proportions were exactly the same, and power when the true proportion for the treatment group was less than that of the control but not meaningfully inferior. Simulation results indicate that most of the methods have comparable Type I error rates; however, the method based on the relative risk has higher power under most circumstances. Given the ease of interpretation of the relative risk, its use is recommended for establishing noninferiority when the binomial proportions lie between 0.2 and 0.8.
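As an illustration of the relative risk approach, here is a hedged R sketch of an asymptotic noninferiority test: conclude noninferiority when the one-sided lower Wald bound for the log relative risk exceeds the log of the margin. The margin of 0.9 and the counts are assumptions for illustration, not values from the report.

```r
## Asymptotic noninferiority test based on the relative risk pT / pC.
ni_rr_test <- function(xT, nT, xC, nC, margin = 0.9, alpha = 0.05) {
  pT <- xT / nT; pC <- xC / nC
  se <- sqrt((1 - pT) / (nT * pT) + (1 - pC) / (nC * pC))  # SE of log(pT/pC)
  lower <- log(pT / pC) - qnorm(1 - alpha) * se            # one-sided lower bound
  list(rr = pT / pC, lower_bound = exp(lower),
       noninferior = lower > log(margin))                  # reject "meaningfully worse"
}
ni_rr_test(xT = 170, nT = 200, xC = 165, nC = 200)   # noninferior at margin 0.9
```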
39

Application of a Gibbs Sampler to estimating parameters of a hierarchical normal model with a time trend and testing for existence of the global warming

Yankovskyy, Yevhen January 1900 (has links)
Master of Science / Department of Statistics / Paul I. Nelson / This research is devoted to studying statistical inference implemented using the Gibbs Sampler for a hierarchical Bayesian linear model with first order autoregressive structure. This model was applied to global-mean monthly temperatures from January 1880 to April 2008 and used to estimate a time trend coefficient and to test for the existence of global warming. The global temperature increase estimated by the Gibbs Sampler was found to be between 0.0203℃ and 0.0284℃ per decade with 95% credibility. The difference between the Gibbs Sampler estimate and the ordinary least squares estimate of the time trend was insignificant. Further, a simulation study with data generated from this model was carried out. This study showed that the Gibbs Sampler estimators of the intercept and the time trend were less biased than the corresponding ordinary least squares estimators, while the reverse was true for the autoregressive parameter and the error standard deviation. The difference in precision between the estimators found by the two approaches was insignificant except for samples of small size. The Gibbs Sampler estimator of the time trend has significantly smaller mean square error than the ordinary least squares estimator for the smaller sample sizes studied. This report also describes how the software package WinBUGS can be used to carry out the simulations required to implement a Gibbs Sampler.
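To make the machinery concrete, below is a greatly simplified R sketch of a Gibbs sampler for a plain normal linear trend model under flat priors; the report's model adds a hierarchical prior and AR(1) errors, which are omitted here, and the series is simulated rather than the temperature record.

```r
## Two-block Gibbs sampler for y_t = b0 + b1 * t + e_t with flat priors;
## the hierarchical structure and AR(1) errors of the full model are omitted.
set.seed(1)
n  <- 200; ti <- 1:n
y  <- 0.5 + 0.02 * ti + rnorm(n)          # simulated series
X  <- cbind(1, ti)
XtXinv <- solve(crossprod(X))
bhat   <- XtXinv %*% crossprod(X, y)      # OLS fit = conditional posterior mean
sigma2 <- 1
draws  <- matrix(NA, 2000, 3, dimnames = list(NULL, c("b0", "b1", "sigma2")))
for (s in 1:2000) {
  ## beta | sigma2, y ~ N(bhat, sigma2 * (X'X)^-1)
  beta <- as.vector(bhat + t(chol(sigma2 * XtXinv)) %*% rnorm(2))
  ## sigma2 | beta, y ~ Inverse-Gamma(n / 2, RSS / 2)
  rss    <- sum((y - X %*% beta)^2)
  sigma2 <- 1 / rgamma(1, shape = n / 2, rate = rss / 2)
  draws[s, ] <- c(beta, sigma2)
}
colMeans(draws[-(1:500), ])               # posterior means after burn-in
```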
40

LOF of logistic GEE models and cost efficient Bayesian optimal designs for nonlinear combinations of parameters in nonlinear regression models

Tang, Zhongwen January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Shie-Shien Yang / When the primary research interest is in the marginal dependence between the response and the covariates, logistic GEE (generalized estimating equation) models are often used to analyze clustered binary data. Relative to ordinary logistic regression, very little work has been done to assess the lack of fit (LOF) of a logistic GEE model. A new method addressing the LOF of a logistic GEE model is proposed. Simulation results indicate the proposed method performs better than, or as well as, other currently available LOF methods for logistic GEE models. A SAS macro was developed to implement the proposed method. Nonlinear regression models are widely used in medical science. Before such models can be fit and their parameters interpreted, researchers need to decide which design points in a prespecified design space should be included in the experiment. Careful choices at this stage lead to efficient use of limited resources. We propose a cost-efficient Bayesian optimal design method for nonlinear combinations of parameters in a nonlinear model with quantitative predictors. An R package was developed to implement the proposed method.
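As context for the first part, a logistic GEE fit itself can be sketched in R with the geepack package; the lack-of-fit assessment proposed in the dissertation (implemented as a SAS macro) is not reproduced here, and the data and names below are hypothetical.

```r
## Fitting a logistic GEE to clustered binary data with a working
## exchangeable correlation; simulated data, hypothetical variable names.
library(geepack)
set.seed(1)
d <- data.frame(clinic = rep(1:30, each = 8), x = rnorm(240))
d$y <- rbinom(240, 1, plogis(-0.5 + 0.8 * d$x))   # marginal logistic model
fit <- geeglm(y ~ x, id = clinic, family = binomial,
              data = d, corstr = "exchangeable")
summary(fit)   # robust (sandwich) standard errors for the marginal effects
```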
