71 |
A MULTIVARIATE STATISTICAL ANALYSIS ON THE SAMPLING UNCERTAINTIES OF GEOMETRIC AND DIMENSIONAL ERRORS FOR CIRCULAR FEATURES
ACHARYA, SRIKANTH B. 13 July 2005 (has links)
No description available.
|
72 |
Determining Appropriate Sample Size for Cases in a Case-Control Study Utilizing Proxy Respondents
Weyer, Karen 09 August 2010 (has links)
No description available.
|
73 |
Statistical Analysis of Microarray Experiments in Pharmacogenomics
Rao, Youlan 09 September 2009 (has links)
No description available.
|
74 |
A BAYESIAN DECISION THEORETIC APPROACH TO FIXED SAMPLE SIZE DETERMINATION AND BLINDED SAMPLE SIZE RE-ESTIMATION FOR HYPOTHESIS TESTING
Banton, Dwaine Stephen January 2016 (has links)
This thesis considers two related problems that have applications in the field of experimental design for clinical trials:
• fixed sample size determination for a parallel-arm, double-blind survival analysis to test the hypothesis of no difference in survival functions, and
• blinded sample size re-estimation for the same setting.
For the first problem, fixed sample size determination, a method is developed for hypothesis testing in general and then applied to survival analysis in particular; for the second problem, blinded sample size re-estimation, a method is developed specifically for survival analysis. In both problems, an exponential survival model is assumed. The approach we propose for sample size determination is Bayesian decision-theoretic, using an explicit loss function and prior distribution. The loss function used is the intrinsic discrepancy loss function introduced by Bernardo and Rueda (2002), and further expounded upon in Bernardo (2011). We use a conjugate prior, and investigate the sensitivity of the calculated sample sizes to the specification of the hyperparameters. For the second problem, blinded sample size re-estimation, we use prior predictive distributions to facilitate calculation of the interim test statistic in a blinded manner while controlling the Type I error. Determining the test statistic in a blinded manner continues to be a nettling problem for researchers. The first problem is typical of traditional experimental designs, while the second extends into the realm of adaptive designs. To the best of our knowledge, the approaches we suggest for both problems are new and extend the current research on both topics. The advantages of our approach, as we see them, are the unity and coherence of the statistical procedures, the systematic and methodical incorporation of prior knowledge, and ease of calculation and interpretation. / Statistics
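As a rough illustration of the Bayesian decision-theoretic idea, the sketch below picks a fixed sample size for exponential survival data with a conjugate gamma prior on the hazard rate. A simple posterior-variance criterion stands in for the intrinsic discrepancy loss used in the thesis, and the prior, tolerance, and sample-size grid are all assumed for the example.

```python
# Generic Bayesian sample-size sketch (illustrative only, not the thesis's
# procedure): exponential survival times with a conjugate Gamma(a, b) prior
# on the hazard rate.  The smallest n whose prior-predictive expected
# posterior variance of the hazard falls below a tolerance is reported.
import numpy as np

rng = np.random.default_rng(42)
a, b = 2.0, 4.0      # assumed prior shape and rate for the hazard lambda
tolerance = 0.01     # assumed target for the expected posterior variance
n_sims = 2000        # prior-predictive Monte Carlo replicates

def expected_posterior_variance(n):
    variances = np.empty(n_sims)
    for s in range(n_sims):
        lam = rng.gamma(a, 1.0 / b)                  # draw a hazard from the prior
        times = rng.exponential(1.0 / lam, size=n)   # prior-predictive data set
        a_post, b_post = a + n, b + times.sum()      # conjugate Gamma update
        variances[s] = a_post / b_post**2            # posterior variance of lambda
    return variances.mean()

for n in range(10, 201, 10):
    risk = expected_posterior_variance(n)
    if risk < tolerance:
        print(f"smallest n meeting the criterion: {n} (expected risk {risk:.4f})")
        break
```

The same loop structure applies with other criteria; swapping in a posterior expected loss based on an intrinsic discrepancy would follow the thesis more closely.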
|
75 |
Sample Size Determination for a Three-arm Biosimilar Trial
Chang, Yu-Wei January 2014 (has links)
The equivalence assessment usually consists of three tests and is often conducted through a three-arm clinical trial. The first two tests demonstrate the superiority of the test treatment and the reference treatment to placebo, and they are followed by the equivalence test between the test treatment and the reference treatment. Equivalence is commonly defined in terms of the mean difference, the mean ratio, or the ratio of mean differences, i.e., the ratio of the mean difference between test and placebo to the mean difference between reference and placebo. In this dissertation, equivalence assessment for both continuous and discrete data is discussed. For the continuous case, the test of the ratio of mean differences is applied; its advantage is that it combines a superiority test of the test treatment over placebo and an equivalence test in a single hypothesis. For the discrete case, the two-step equivalence assessment approach is studied for both Poisson and negative binomial data. While a Poisson distribution implies that the population mean and variance are equal, the advantage of a negative binomial model is that it accounts for overdispersion, a common phenomenon in count medical endpoints. The test statistics, power functions, and required sample size examples for a three-arm equivalence trial are given for both continuous and discrete cases. In addition, discussions of power comparisons are complemented with numerical results. / Statistics
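To make the three-arm logic concrete, the following sketch estimates power by Monte Carlo for a continuous endpoint. For simplicity it uses a TOST on the mean difference with a fixed margin rather than the ratio-of-mean-differences test studied in the dissertation; the effect sizes, margin, and sample-size grid are assumed for illustration.

```python
# Illustrative Monte Carlo power check for a three-arm trial (test, reference,
# placebo): both superiority tests against placebo must succeed, and the test
# and reference arms must be equivalent within +/- margin (TOST).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
mu_t, mu_r, mu_p, sigma = 1.0, 1.0, 0.0, 1.5   # assumed means and common SD
margin, alpha, n_rep = 0.5, 0.05, 2000

def trial_success(n):
    t = rng.normal(mu_t, sigma, n)
    r = rng.normal(mu_r, sigma, n)
    p = rng.normal(mu_p, sigma, n)
    sup_t = stats.ttest_ind(t, p, alternative="greater").pvalue < alpha
    sup_r = stats.ttest_ind(r, p, alternative="greater").pvalue < alpha
    # TOST: test vs reference difference lies within (-margin, +margin)
    lo = stats.ttest_ind(t + margin, r, alternative="greater").pvalue < alpha
    hi = stats.ttest_ind(t - margin, r, alternative="less").pvalue < alpha
    return sup_t and sup_r and lo and hi

for n in (30, 50, 75, 100, 150):
    power = np.mean([trial_success(n) for _ in range(n_rep)])
    print(f"n per arm = {n:4d}  estimated power = {power:.3f}")
```

The smallest n on the grid whose estimated power reaches the target (e.g., 0.80) would be the chosen per-arm sample size under these assumptions.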
|
76 |
An inter-laboratory investigation of ANSI standard fitting protocols, sample size, subject and experimenter gender, and trial on the real-ear attenuation of two types of earplugs
Mears, Mark G. 25 August 2008 (has links)
Identical experiments were conducted between two acoustical-testing laboratories to determine the inter-laboratory differences in using two different hearing protection device (HPD) fitting procedures for testing the real-ear attenuation at threshold (REAT) of a popular vinyl foam earplug and a multi-sized premolded PVC single-flanged earplug. The first fitting procedure tested in the experiment is included in the revision of the American National Standards Institute (ANSI) standard S12.6-1984 by the ANSI Working Group ANSI S12/WG11, “Field Effectiveness and Physical Characteristics of Hearing Protectors”. This fitting procedure, “subject fit,” is intended to estimate “...the attenuation obtained in the top 10-20% of today’s industrial and military hearing conservation programs, i.e. the attenuation that should be obtained by an informed and motivated work force” (ANSI S12.6-199X, Draft 1.4, p. 4). The subject-fit procedure employs HPD-naive subjects, minimizes experimenter involvement, enforces subject-selection controls, and requires subjects to fit the HPD with reasonable comfort using only the manufacturer’s fitting instructions. The subject-fit method differs from the second procedure tested in this investigation, experimenter fit, in both procedure and objective. In the ANSI S3.19-1974 “experimenter-fit” method, which is the procedure currently required by the Environmental Protection Agency (EPA) for the testing and labeling of HPDs (EPA, 1990), the experimenter fits the HPD to the subject (comfort is not a consideration) to determine the optimum attenuation of the HPD. The development of the subject-fit protocol was motivated by the large discrepancy between the attenuation achieved in the field and that claimed by manufacturers of HPDs using experimenter fit from ANSI S3.19-1974. Some experts have developed schemes to derate manufacturers’ laboratory data to approximate attenuation typically achieved in the field.
In addition to investigating the differences between the two fitting protocols, other factors relevant to the revision of ANSI S12.6-1984 were studied: subject and experimenter gender effects, ear canal size effects, inter-laboratory differences, and the number of replications and subjects needed for REAT tests.
Results indicated that the subject-fit method provided statistically significantly less attenuation than the experimenter-fit method. Subject fit tended to overestimate in-field attenuation, but not by as much as experimenter fit. No consistent subject-gender effects were found in the analysis, and experimenter gender did not have a significant effect on subject-fit foam-earplug attenuation. The lack of significant trial effects indicated that the goodness of fit did not change across trials for either fitting condition. Effects of ear canal size on attenuation were documented, with mixed results. / Master of Science
|
77 |
The Accuracy of River Bed Sediment Samples
Petrie, John Eric 19 January 1999 (has links)
One of the most important factors that influences a stream's hydraulic and ecological health is the streambed's sediment size distribution. This distribution affects streambed stability, sediment transport rates, and flood levels by defining the roughness of the stream channel. Adverse effects on water quality and wildlife can be expected when excessive fine sediments enter a stream. Many chemicals and toxic materials are transported through streams by binding to fine sediments. Increases in fine sediments also seriously impact the survival of fish species present in the stream. Fine sediments fill tiny spaces between larger particles thereby denying fish embryos the necessary fresh water to survive. Reforestation, constructed wetlands, and slope stabilization are a few management practices typically utilized to reduce the amount of sediment entering a stream. To effectively gauge the success of these techniques, the sediment size distribution of the stream must be monitored.
Gravel bed streams are typically stratified vertically, in terms of particle size, in three layers, with each layer having its own distinct grain size distribution. The top two layers of the stream bed, the pavement and subpavement, are the most significant in determining the characteristics of the stream. These top two layers are only as thick as the largest particle size contained within each layer. This vertical stratification by particle size makes it difficult to characterize the grain size distribution of the surface layer. The traditional bulk or volume sampling procedure removes a specified volume of material from the stream bed. However, if the bed exhibits vertical stratification, the volume sample will mix different populations, resulting in inaccurate sample results. To obtain accurate results for the pavement size distribution, a surface oriented sampling technique must be employed. The most common types of surface oriented sampling are grid and areal sampling. Due to limitations in the sampling techniques, grid samples typically truncate the sample at the finer grain sizes, while areal samples typically truncate the sample at the coarser grain sizes. When combined with an analysis technique, either frequency-by-number or frequency-by-weight, the sample results can be represented in terms of a cumulative grain size distribution. However, the results of different sampling and analysis procedures can lead to biased results, which are not equivalent to traditional volume sampling results. Different conversions, dependent on both the sampling and analysis technique, are employed to remove the bias from surface sample results.
The purpose of the present study is to determine the accuracy of sediment samples obtained by the different sampling techniques. Knowing the accuracy of a sample is imperative if the sample results are to be meaningful. Different methods are discussed for placing confidence intervals on grid sample results based on statistical distributions. The binomial distribution and its normal approximation have been suggested for these confidence intervals in previous studies; in this study, the use of the multinomial distribution, which seems to best represent the grid sampling process, is also explored. Based on analyses of the different distributions, recommendations are made. Additionally, figures are given to estimate the grid sample size necessary to achieve a required accuracy under each distribution. This type of sample size determination figure is extremely useful when preparing for grid sampling in the field.
Accuracy and sample size determination for areal and volume samples present difficulties not encountered with grid sampling. The variability in the number of particles contained in the sample, coupled with the wide range of particle sizes present, makes direct statistical analysis impossible. Limited studies have been reported on the volume that must be sampled for gravel deposits, and most make recommendations based on empirical results that may not apply to different size distributions. Even fewer published studies address the issue of areal sample size. However, using grid sample results as a basis, a technique is presented to estimate the necessary sizes for areal and volume samples; these sizes are designed to match the accuracy of the original grid sample for a specified grain size percentile of interest. Obtaining grid and areal results with the same accuracy is useful when considering hybrid samples, which combine grid and areal sample results to give a final grain size distribution curve that is not truncated. Laboratory experiments were performed on synthetic stream beds, created using both glass beads and natural sediments, to test these theories. Reducing sampling errors and obtaining accurate samples in the field are also briefly discussed, and recommendations are made for using the most efficient sampling technique to achieve the required accuracy. / Master of Science
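As a rough sketch of the grid-sample-size figures described above, the snippet below uses the normal approximation to the binomial to estimate how many grid points are needed so that the proportion of the bed surface finer than a grain size of interest is estimated to within a chosen confidence-interval half-width. The percentiles and precision targets are illustrative values, not recommendations from the thesis.

```python
# Grid points needed for a given precision on a proportion (normal
# approximation to the binomial); p is the expected fraction of surface
# particles finer than the grain size of interest.
from math import ceil
from scipy.stats import norm

def grid_sample_size(p, half_width, confidence=0.95):
    """Grid points needed so a 100*confidence% CI for the proportion p
    has roughly the requested half-width."""
    z = norm.ppf(0.5 + confidence / 2.0)
    return ceil(z**2 * p * (1.0 - p) / half_width**2)

for p in (0.5, 0.84):                 # e.g. around the D50 and D84 percentiles
    for hw in (0.10, 0.05):
        print(f"p={p:.2f}, half-width={hw:.2f}: n ≈ {grid_sample_size(p, hw)}")
```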
|
78 |
Confidence Intervals and Sample Size Calculations for Studies of Film-reading Performance
Scally, Andy J., Brealey, S. January 2003 (has links)
The relaxation of restrictions on the types of profession that can report films has resulted in radiographers and other healthcare professionals becoming increasingly involved in image interpretation in areas such as mammography, ultrasound and plain-film radiography. Little attention, however, has been given to sample size determination for film-reading performance characteristics such as sensitivity, specificity and accuracy. Illustrated with hypothetical examples, this paper begins by considering standard errors and confidence intervals for these performance characteristics and then discusses methods for determining sample size for studies of film-reading performance. Used appropriately, these approaches should result in studies that produce estimates of film-reading performance with adequate precision and enable investigators to optimize the sample size for the question they seek to answer.
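A small illustration of the standard-error and confidence-interval ideas the paper discusses, using a hypothetical 2x2 table of film-reading results (reader's call against the gold standard); the counts are invented for the example and the normal-approximation interval is only one of several options.

```python
# Normal-approximation standard errors and 95% CIs for sensitivity,
# specificity and accuracy from invented film-reading counts.
from math import sqrt
from scipy.stats import norm

tp, fn, tn, fp = 85, 15, 180, 20           # hypothetical counts
z = norm.ppf(0.975)                         # 95% confidence

def ci(p, n):
    se = sqrt(p * (1 - p) / n)              # normal-approximation standard error
    return p - z * se, p + z * se

sens = tp / (tp + fn)
spec = tn / (tn + fp)
acc = (tp + tn) / (tp + fn + tn + fp)
for name, p, n in [("sensitivity", sens, tp + fn),
                   ("specificity", spec, tn + fp),
                   ("accuracy", acc, tp + fn + tn + fp)]:
    lo, hi = ci(p, n)
    print(f"{name}: {p:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```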
|
79 |
Determining Appropriate Sample Sizes and Their Effects on Key Parameters in Longitudinal Three-Level Models
January 2016 (has links)
Through a two-study simulation design with varying conditions (the level-1 (L1) sample size was set to 3, the level-2 (L2) sample size ranged from 10 to 75, the level-3 (L3) sample size ranged from 30 to 150, the intraclass correlation (ICC) ranged from 0.10 to 0.50, and model complexity ranged from one to three predictors), this study provides general guidelines about adequate sample sizes at all three levels, under varying ICC conditions, for a viable three-level HLM analysis (i.e., reasonably unbiased and accurate parameter estimates). The data-generating parameters for the simulations were obtained using a large-scale longitudinal data set from North Carolina, provided by the National Center on Assessment and Accountability for Special Education (NCAASE). I discuss ranges of sample sizes that are inadequate or adequate with respect to convergence, absolute bias, relative bias, root mean squared error (RMSE), and coverage of individual parameter estimates. With the help of a detailed two-part simulation design covering various sample sizes, levels of model complexity, and ICCs, the study offers options for adequate sample sizes under different conditions and emphasizes that adequate sample sizes at L1, L2, and L3 can be adjusted according to the parameter estimates of interest and the acceptable ranges of absolute bias, relative bias, RMSE, and coverage. Under different model complexity and ICC conditions, the study also helps researchers identify the L1, L2, or L3 sample size, or a combination of them, as the source of variation in absolute bias, relative bias, RMSE, or coverage for a given parameter estimate, which assists researchers in selecting adequate sample sizes for a three-level HLM analysis. A limitation of the study is the use of a single distribution for the dependent and explanatory variables; different distributions might result in different sample size recommendations. / Dissertation/Thesis / Doctoral Dissertation Educational Psychology 2016
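A minimal data-generating sketch for a longitudinal three-level structure (occasions within individuals within clusters), with the variance split across levels set from assumed ICC values. It mirrors the general set-up described above only loosely; the growth parameters, ICCs, and sample sizes are illustrative.

```python
# Simulate y = gamma0 + gamma1*time + cluster effect + individual effect + residual,
# with total variance 1 split across levels according to assumed ICCs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n3, n2, n1 = 30, 10, 3                 # clusters, individuals per cluster, occasions
icc3, icc2 = 0.20, 0.20                # assumed variance shares at L3 and L2
gamma0, gamma1 = 50.0, 2.0             # assumed fixed intercept and time slope

sd3, sd2, sd1 = np.sqrt([icc3, icc2, 1 - icc3 - icc2])
rows = []
for j in range(n3):
    u3 = rng.normal(0, sd3)                        # cluster effect (L3)
    for i in range(n2):
        u2 = rng.normal(0, sd2)                    # individual effect (L2)
        for t in range(n1):
            e = rng.normal(0, sd1)                 # occasion-level residual (L1)
            rows.append((j, i, t, gamma0 + gamma1 * t + u3 + u2 + e))

data = pd.DataFrame(rows, columns=["cluster", "individual", "time", "y"])
print(data.head())
# The simulated data can then be fit with a three-level mixed model to
# study bias, RMSE, and coverage across repeated replications.
```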
|
80 |
Confidence Intervals for Population Size in a Capture-Recapture Problem.
Zhang, Xiao 14 August 2007 (has links) (PDF)
In a single capture-recapture problem, two new Wilson methods for interval estimation of population size are derived. The classical Chapman interval and the Wilson and Wilson-cc intervals are examined and compared in terms of their expected interval widths and exact coverage properties under two models. The new approach performs better than the Chapman interval in each model. A Bayesian analysis also provides an alternative way to estimate population size.
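For a flavor of the quantities involved, the sketch below computes the Chapman point estimate and a Wilson-style interval obtained by putting a Wilson confidence interval on the recapture proportion and inverting it. This is one plausible construction, not necessarily the author's exact method, and the counts are made up.

```python
# Chapman estimate of population size and a Wilson-based interval from a
# single capture-recapture experiment (n1 marked, n2 captured, m recaptured).
from math import sqrt
from scipy.stats import norm

n1, n2, m = 120, 100, 25      # hypothetical counts
z = norm.ppf(0.975)

chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1

def wilson(p_hat, n):
    centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

p_lo, p_hi = wilson(m / n2, n2)
# The marked fraction in the population is n1/N, so invert the bounds on p.
N_lo, N_hi = n1 / p_hi, n1 / p_lo
print(f"Chapman estimate: {chapman:.1f}")
print(f"Wilson-based interval for N: ({N_lo:.1f}, {N_hi:.1f})")
```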
|