  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Power Analysis for Alternative Tests for the Equality of Means.

Li, Haiyin 07 May 2011 (has links) (PDF)
The two-sample t-test is the test usually taught in introductory statistics courses for comparing the means of two populations. However, it is not the only test available: the randomization test is being incorporated into some introductory courses, and there is also the bootstrap test. It is also not uncommon to judge the equality of the means from confidence intervals for the means of the two populations. Are all of these methods equally powerful? Can the idea of non-overlapping t confidence intervals be extended to bootstrap confidence intervals? The powers of seven alternative ways of comparing two population means are analyzed using small samples, with data coming from distributions with different degrees of skewness and kurtosis. The analysis is done by simulation; programs in GAUSS were written especially for this purpose.
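As a rough illustration of the kind of simulation involved (the thesis's actual programs were written in GAUSS; this Python sketch is not that code), the power of a randomization test can be estimated by repeatedly generating samples under a given effect size and recording how often the test rejects:

```python
import random
import statistics as st

def t_stat(x, y):
    """Pooled-variance two-sample t statistic."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * st.variance(x) + (ny - 1) * st.variance(y)) / (nx + ny - 2)
    return (st.mean(x) - st.mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5

def randomization_p(x, y, reps=200, rng=random):
    """Two-sided randomization p-value: reshuffle group labels."""
    obs = abs(t_stat(x, y))
    pooled = x + y
    hits = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        if abs(t_stat(pooled[:len(x)], pooled[len(x):])) >= obs:
            hits += 1
    return hits / reps

def power(delta, n=10, sims=200, alpha=0.05, seed=1):
    """Estimated rejection rate of the randomization test at effect size delta."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [rng.gauss(delta, 1) for _ in range(n)]
        if randomization_p(x, y, rng=rng) < alpha:
            rejections += 1
    return rejections / sims
```

Under the null (delta = 0) the rejection rate should hover near alpha; under a large standardized shift it should be substantially higher.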
32

Modeling the Progression of Discrete Paired Longitudinal Data.

Hicks, Jonathan Wesley 12 August 2008 (has links) (PDF)
Our aim is to derive a methodology with which to model discrete paired longitudinal data. Through the use of transition matrices and maximum likelihood estimation by means of software, we develop a way to model the progression of such data. We provide an example by applying this method to the Wisconsin Epidemiological Study of Diabetic Retinopathy data set. The data set comprises individuals, all diabetics, who have had their eyes examined for diabetic retinopathy. The eyes are treated as paired data, and we have the results of the examination at four unequally spaced time points spanning a fourteen-year period.
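The transition-matrix idea can be sketched as follows. Assuming fully observed state sequences (a simplification; the thesis additionally handles paired eyes and unequally spaced observation times), the maximum likelihood estimate of each transition probability is simply the row-normalized transition count:

```python
def estimate_transition_matrix(sequences, states):
    """MLE of a discrete-time transition matrix: row-normalized transition counts."""
    counts = {s: {t: 0 for t in states} for s in states}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    P = {}
    for s in states:
        total = sum(counts[s].values())
        # Rows with no observed departures get all-zero probabilities.
        P[s] = {t: (counts[s][t] / total if total else 0.0) for t in states}
    return P
```

For example, the two sequences `[0, 0, 1]` and `[0, 1, 1]` yield P(0→1) = 2/3 and P(1→1) = 1.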
33

Comparing Radiation Shielding Potential of Liquid Propellants to Water for Application in Space

Czaplewski, John 01 March 2021 (has links) (PDF)
The radiation environment in space is a threat that engineers and astronauts need to mitigate as exploration into the solar system expands. Passive shielding involves placing as much material as possible between critical components and the radiation environment. However, given mass and size budgets, it is important to select efficient shielding materials. Currently, NASA and other space agencies plan on using water as a shield against radiation, since it is already necessary for human missions; water has been tested thoroughly and proven to be effective. Liquid propellants are needed for every mission and share characteristics with water, such as density and hydrogenous composition. This thesis explores the shielding potential of various liquid propellants compared to water, with the aim of providing an additional parameter for choosing propellants for a given mission. Propellants are tested by first creating an experimental setup involving Cs-137 and Co-60 radioisotope sources, a column of liquid with variable depth, and a Geiger-Mueller tube. Water and three other liquids (acetone, 70% isopropyl alcohol, and 12% hydrogen peroxide) are physically tested, and their linear attenuation coefficients are calculated. The test setup is then replicated in CERN's Monte Carlo-based radiation transport code, FLUKA. Although the linear attenuation coefficients calculated from FLUKA differ from the experimental results by an average of 34%, they reproduce the same trends. FLUKA is used to extend the experimental results by simulating a multitude of liquid propellants and comparing them all to water; it can simulate all propellants, including hydrogen, oxygen, hydrazine, and dinitrogen tetroxide. Most of the tested propellants are found to have gamma radiation linear attenuation coefficients within 35% of water's.
The gamma radiation in this thesis's experiment and simulations comes from the Cs-137 and Co-60 radioisotope sources. For gamma radiation from the Co-60 source, liquid hydrogen provides 90% less attenuation than water, while nitric acid and AF-M315E provide 35% and 38% more attenuation than water, respectively. For gamma radiation emitted by Cs-137, liquid hydrogen, isopropyl alcohol, and methane have 90%, 35%, and 29% less attenuation than water, respectively, while dinitrogen tetroxide, hydrogen peroxide, AF-M315E, and nitric acid have attenuation coefficients 34%, 41%, 46%, and 52% greater than water's, respectively. The liquids that are similar to water for the Cs-137 source have linear attenuation coefficients within 20% of water's. Ultimately, most of the tested liquid propellants are shown to shield against radiation at a rate similar to water's. Thus, radiation shielding capability should be an additional parameter when choosing liquid propellants for any given mission.
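The linear attenuation coefficient discussed above follows from the Beer-Lambert law, I(x) = I0·exp(-μx). As a hedged sketch (not the thesis's actual analysis code), μ can be recovered from count-versus-depth data by a least-squares fit of ln(counts) against depth:

```python
import math

def linear_attenuation(depths, counts):
    """Least-squares slope of ln(counts) vs depth; the negated slope is the
    linear attenuation coefficient mu in I(x) = I0 * exp(-mu * x)."""
    ys = [math.log(c) for c in counts]
    n = len(depths)
    xbar = sum(depths) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(depths, ys))
             / sum((x - xbar) ** 2 for x in depths))
    return -slope
```

Fitting all depths at once, rather than using a single pair of measurements, averages out counting noise in the detector readings.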
34

Exploration and Statistical Modeling of Profit

Gibson, Caleb 01 December 2023 (has links) (PDF)
For any company involved in sales, maximization of profit is the driving force behind all decision-making. Many factors can influence how profitable a company can be, including external factors such as changes in inflation or consumer demand and internal factors such as pricing and product cost. By understanding specific trends in its own internal data, a company can readily identify problem areas or potential growth opportunities to help increase profitability. In this discussion, we use an extensive data set to examine how a company might analyze its own data to identify potential changes worth investigating to drive better performance. Based upon general trends in the data, we recommend potential actions the company could take. Additionally, we examine how a company can use predictive modeling to adapt its decision-making process as the trends identified in the initial analysis evolve over time.
35

Contributions to estimation and interpretation of intervention effects and heterogeneity in meta-analysis

Thorlund, Kristian 10 1900 (has links)
Background and objectives: Despite great statistical advances in meta-analysis methodology, most published meta-analyses use outdated statistical methods, and authors are unaware of the shortcomings associated with the widely employed methods. There is a need for statistical contributions to meta-analysis in which: 1) improvements to current statistical practice are conveyed at a level that most systematic review authors can understand; and 2) current statistical methods that are widely applied in meta-analytic practice undergo thorough testing and examination. The objective of this thesis is to address some of this demand.

Methods: Four studies were conducted, each meeting one or both of the objectives. Simulation was used to explore the number of patients and events required to limit the risk of overestimation of intervention effects to 'acceptable' levels. Empirical assessment was used to explore the performance of the popular heterogeneity measure, I², and its associated 95% confidence intervals (CIs) as evidence accumulates, and to compare inferential agreement between the widely used DerSimonian-Laird random-effects model and four alternative models. Lastly, a narrative review was undertaken to identify and appraise available methods for combining health-related quality of life (HRQL) outcomes.

Results and conclusion: The information required to limit the risk of overestimation of intervention effects is typically close to what is known as the optimal information size (OIS, i.e., the required meta-analysis sample size). I² estimates fluctuate considerably in meta-analyses with fewer than 15 trials and 500 events; their 95% confidence intervals provide the desired coverage. The choice of random-effects model has negligible impact on inferences about the intervention effect, but not on inferences about the degree of heterogeneity. Many approaches are available for pooling HRQL outcomes, and recommendations are provided to enhance interpretability. Overall, each manuscript met at least one thesis objective. / Doctor of Philosophy (PhD)
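The I² heterogeneity measure examined above is conventionally computed from Cochran's Q statistic as I² = max(0, (Q − df)/Q) × 100%. A minimal sketch for inverse-variance fixed-effect pooling (illustrative only, not the thesis's code):

```python
def cochran_q_and_i2(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic for an
    inverse-variance weighted (fixed-effect) meta-analysis."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2 is the share of Q in excess of its expectation under homogeneity.
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2
```

With identical trial effects, Q is (numerically) zero and I² = 0; widely discrepant trials push I² toward 100%.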
36

EMPIRICAL APPLICATION OF DIFFERENT STATISTICAL METHODS FOR ANALYZING CONTINUOUS OUTCOMES IN RANDOMIZED CONTROLLED TRIALS

Zhang, Shiyuan 10 1900 (has links)
Background: Post-operative pain management in total joint replacement surgery remains ineffective in up to 50% of patients and continues to impose a heavy burden on patient well-being and healthcare resources. The MOBILE trial was designed to assess whether adding gabapentin to a multimodal perioperative analgesia regimen can reduce morphine consumption or improve analgesia in patients following total joint arthroplasty. We present here an empirical application of several statistical methods to the MOBILE trial.

Methods: Part 1: Analysis of covariance (ANCOVA) was used to adjust for baseline measures and to provide an unbiased estimate of the mean group difference in one-year post-operative knee flexion scores in knee arthroplasty patients. Robustness tests were done by comparing ANCOVA to three alternative methods: i) post-treatment scores, ii) change in scores, and iii) percentage change from baseline. Part 2: Morphine consumption, measured at four time periods in both total hip and total knee arthroplasty patients, was analyzed using a linear mixed-effects model (LMEM) to provide a longitudinal estimate of the group difference. Repeated measures ANOVA and generalized estimating equations were used in a sensitivity analysis to compare the robustness of the methods. Additionally, the robustness of different covariance matrix structures in the LMEM was tested, namely first-order auto-regressive compared to compound symmetry and unstructured.

Results: Part 1: All four methods showed a similar direction of effect; however, the ANCOVA (-3.9, 95% CI -9.5, 1.6, p=0.15) and post-treatment score (-4.3, 95% CI -9.8, 1.2, p=0.12) methods provided more precise estimates than the change score (-3.0, 95% CI -9.9, 3.8, p=0.38) and percent change (-0.019, 95% CI -0.087, 0.050, p=0.58) methods. Part 2: There was no statistically significant difference in morphine consumption between the treatment group and the control group (1.0, 95% CI -4.7, 6.7, p=0.73). The results remained robust across the different longitudinal methods and covariance matrix structures.

Conclusion: ANCOVA, through both simulation and empirical studies, provides the best statistical estimation for analyzing continuous outcomes requiring covariate adjustment. More widespread use of ANCOVA should be encouraged not only among biostatisticians but also among clinicians and trialists. The re-analysis of morphine consumption aligns with the MOBILE trial's finding that gabapentin did not significantly reduce morphine consumption in patients undergoing major replacement surgeries. More work in the area of post-operative pain is required to provide adequate management for this patient population. / Master of Science (MSc)
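The baseline adjustment that ANCOVA performs can be illustrated with the Frisch-Waugh device: the adjusted group effect equals the slope from regressing baseline-residualized post-scores on the baseline-residualized group indicator. A hedged sketch with made-up variable names (the trial's analyses used standard statistical software, not this code):

```python
def slope(x, y):
    """Simple-regression slope of y on x."""
    xbar = sum(x) / len(x)
    ybar = sum(y) / len(y)
    return (sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
            / sum((a - xbar) ** 2 for a in x))

def residuals(x, y):
    """Residuals from the simple regression of y on x."""
    b = slope(x, y)
    a = sum(y) / len(y) - b * sum(x) / len(x)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def ancova_effect(baseline, post, group):
    """Group effect on post-score adjusted for baseline, via
    Frisch-Waugh: regress baseline-residualized post-scores on the
    baseline-residualized group indicator."""
    return slope(residuals(baseline, group), residuals(baseline, post))
```

When the post-score is exactly baseline plus a constant treatment shift, the recovered effect equals that shift.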
37

Development in Normal Mixture and Mixture of Experts Modeling

Qi, Meng 01 January 2016 (has links)
In this dissertation, we first consider the problem of testing homogeneity and order in a contaminated normal model when the data are correlated under a known covariance structure. To address this problem, we develop a moment-based homogeneity and order test, and design weights for the test statistics to increase the power of the homogeneity test. We apply our test to microarray data on Down's syndrome. This dissertation also studies a singular Bayesian information criterion (sBIC) for a bivariate hierarchical mixture model with varying weights, and develops a new data-dependent information criterion (sFLIC). We apply our model and criteria to birth-weight and gestational-age data; the purpose of both criteria is to select model complexity from the data.
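Estimation in a two-component normal mixture, the building block of the contaminated normal model above, is usually done with the EM algorithm. A self-contained sketch for independent data (illustrative only; the dissertation's moment-based test for correlated data is a different procedure):

```python
import math
import random
import statistics as st

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def em_two_normal(data, iters=100):
    """EM for a two-component univariate normal mixture,
    initialized from the sample quartiles."""
    xs = sorted(data)
    mu = [xs[len(xs) // 4], xs[3 * len(xs) // 4]]
    sd = [st.pstdev(data)] * 2
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], sd[k]) for k in (0, 1)]
            tot = p[0] + p[1]
            resp.append([pk / tot for pk in p])
        # M-step: responsibility-weighted updates of weights, means, sds.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sd[k] = max(var ** 0.5, 1e-6)
    return w, mu, sd
```

On well-separated simulated components the fitted means land close to the true ones.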
38

PARAMETRIC ESTIMATION IN COMPETING RISKS AND MULTI-STATE MODELS

Lin, Yushun 01 January 2011 (has links)
Research on Alzheimer's disease typically involves a series of cognitive states. Multi-state models are often used to describe the course of disease progression. Competing risks models are a sub-category of multi-state models with one starting state and several absorbing states. Analyses of competing risks data in medical papers frequently assume independent risks and evaluate covariate effects on these events by fitting a separate proportional hazards regression model for each event. Jeong and Fine (2007) proposed a parametric proportional sub-distribution hazards (SH) model for cumulative incidence functions (CIF) without assumptions about the dependence among the risks. We modified their model to ensure that the sum of the underlying CIFs never exceeds one, by assuming a proportional SH model for dementia only in the Nun Study. To accommodate left-censored data, we computed the non-parametric MLE of the CIF based on the Expectation-Maximization algorithm. Our proposed parametric model was applied to the Nun Study to investigate the effects of genetics and education on the occurrence of dementia. After including left-censored dementia subjects, the incidence rate of dementia becomes larger than that of death for ages < 90, education becomes a significant factor for the incidence of dementia, and the standard errors of the estimates are smaller. The multi-state Markov model is often used to analyze the evolution of cognitive states by assuming time-independent transition intensities. We consider both constant and duration-time-dependent transition intensities in the BRAiNS data, leading to a mixture of Markov and semi-Markov processes. The joint probability of observing a sequence of the same state until transition in a semi-Markov process was expressed as the product of the overall transition probability and a survival probability, which were modeled simultaneously. Such modeling leads to different interpretations in the BRAiNS study: family history, APOE4, and the sex-by-head-injury interaction are significant factors for the transition intensities in the traditional Markov model, while in our semi-Markov model these factors are significant in predicting the overall transition probabilities, but none are significant for the duration time distribution.
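In the simplest setting with no censoring, the nonparametric cumulative incidence function for one competing risk takes the Aalen-Johansen form CIF_k(t) = Σ S(t⁻)·d_k/n over event times. A minimal sketch (the thesis additionally handles left censoring via EM, which this does not):

```python
def cumulative_incidence(times, causes, cause):
    """Nonparametric CIF for one competing risk with no censoring:
    at each event time, add S(t-) * (cause-k events) / (number at risk)."""
    data = sorted(zip(times, causes))
    n = len(data)
    at_risk = n
    surv = 1.0   # overall (all-cause) survival just before t
    cif = 0.0
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        d_all = sum(1 for tt, cc in data if tt == t)
        d_k = sum(1 for tt, cc in data if tt == t and cc == cause)
        cif += surv * d_k / at_risk
        surv *= 1 - d_all / at_risk
        at_risk -= d_all
        curve.append((t, cif))
        i += d_all
    return curve
```

A useful sanity check: with no censoring, the CIF for cause k levels off at the observed fraction of cause-k events.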
39

Analysis of Spatial Data

Zhang, Xiang 01 January 2013 (has links)
In many areas of the agricultural, biological, physical, and social sciences, spatial lattice data are becoming increasingly common. In addition, a large amount of lattice data shows not only a visible spatial pattern but also a temporal pattern (see Zhu et al. 2005). An interesting problem is to develop a model that systematically relates the response variable to possible explanatory variables while accounting for space and time effects simultaneously. The spatial-temporal linear model and the corresponding likelihood-based statistical inference are important tools for the analysis of spatial-temporal lattice data. We propose a general asymptotic framework for spatial-temporal linear models and investigate the properties of maximum likelihood estimates under this framework. Mild regularity conditions on the spatial-temporal weight matrices are imposed in order to derive the asymptotic properties (consistency and asymptotic normality) of the maximum likelihood estimates. A simulation study is conducted to examine the finite-sample properties of the maximum likelihood estimates. For spatial data, aside from traditional likelihood-based methods, a variety of literature has discussed Bayesian approaches to estimating the correlation (auto-covariance function) among spatial data; in particular, Zheng et al. (2010) proposed a nonparametric Bayesian approach to estimating a spectral density. We also discuss a nonparametric Bayesian approach to analyzing spatial data. We propose a general procedure for constructing a multivariate Feller prior and establish its theoretical properties as a nonparametric prior. A blocked Gibbs sampling algorithm is also proposed for computation, since the posterior distribution is analytically intractable.
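Spatial weight matrices of the kind referenced above are commonly built from lattice contiguity. A small illustrative sketch using rook contiguity with row normalization (an assumed construction for illustration; the thesis's regularity conditions concern such matrices, but this is not its code):

```python
def lattice_weights(rows, cols):
    """Row-normalized rook-contiguity weight matrix for a rows x cols lattice:
    each cell's weight mass is split equally among its N/S/E/W neighbors."""
    n = rows * cols
    W = [[0.0] * n for _ in range(n)]
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            nbrs = [(rr, cc) for rr, cc in nbrs if 0 <= rr < rows and 0 <= cc < cols]
            for rr, cc in nbrs:
                W[i][rr * cols + cc] = 1.0 / len(nbrs)
    return W
```

Row normalization keeps each row summing to one, which is a common convenience in spatial autoregressive specifications.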
40

Design & Analysis of a Computer Experiment for an Aerospace Conformance Simulation Study

Gryder, Ryan W 01 January 2016 (has links)
Within NASA's Air Traffic Management Technology Demonstration #1 (ATD-1), Interval Management (IM) is a flight deck tool that enables pilots to achieve or maintain a precise in-trail spacing behind a target aircraft. Previous research has shown that violations of aircraft spacing requirements can occur between an IM aircraft and surrounding non-IM aircraft when the IM aircraft is following a target on a separate route. This research focused on the experimental design and analysis of a deterministic computer simulation that models our airspace configuration of interest. Using an original space-filling design and Gaussian process modeling, we found that aircraft delay assignments and wind profiles significantly affect the likelihood of spacing violations and the interruption of IM operations. However, we also found that implementing two theoretical advancements in IM technologies can potentially lead to promising results.
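Space-filling designs for deterministic computer experiments are often built from Latin hypercube samples, which stratify every input dimension. A small generic sketch (the thesis used an original design; this shows only the standard construction):

```python
import random

def latin_hypercube(n, d, seed=0):
    """Latin hypercube sample: n points in [0, 1)^d, with exactly one
    point per 1/n stratum in each dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                      # random stratum order per dimension
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))
```

The defining property is easy to verify: projecting the points onto any single dimension hits each of the n equal-width strata exactly once.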
