11

Effects of Sample Size on Various Metallic Glass Micropillars in Microcompression

Lai, Yen-Huei 16 November 2009 (has links)
Over the past decades, bulk metallic glasses (BMGs) have attracted extensive interest because of their unique properties, such as good corrosion resistance, a large elastic limit, and high strength and hardness. With the advent of micro-electro-mechanical systems (MEMS) and other microscale devices, however, the fundamental properties of micrometer-sized BMGs have become increasingly important. In this study, a methodology for performing uniaxial compression tests on BMGs with micron-sized dimensions is presented. Micropillars with diameters of 3.8, 1, and 0.7 μm are successfully fabricated from the Mg65Cu25Gd10 and Zr63.8Ni16.2Cu15Al5 BMGs using a focused ion beam, and then tested in microcompression at room temperature and strain rates from 1 × 10⁻⁴ to 1 × 10⁻² s⁻¹. Microcompression tests on the Mg- and Zr-based BMG pillar samples show an obvious sample size effect, with the yield strength increasing as the sample diameter decreases. The strength increase can be rationalized by Weibull statistics for brittle materials, and the Weibull moduli of the Mg- and Zr-based BMGs are estimated to be about 35 and 60, respectively. The higher Weibull modulus of the Zr-based BMG is consistent with the more ductile nature of this system. In addition, high-temperature microcompression tests are performed to investigate the deformation behavior of micron-sized Au49Ag5.5Pd2.3Cu26.9Si16.3 BMG pillar samples from room temperature up to the glass transition temperature (~400 K). For the 1 μm Au-based BMG pillars, a transition from inhomogeneous to homogeneous flow is clearly observed at or near the glass transition temperature; specifically, the flow transition temperature is about 393 K at a strain rate of 1 × 10⁻² s⁻¹. For the 3.8 μm Au-based BMG pillars, microcompression tests are performed at 395.9–401.2 K to investigate the homogeneous deformation behavior. The strength is observed to decrease with increasing temperature and decreasing strain rate. The plastic flow behavior can be described by a shear transformation zone (STZ) model, and the deduced activation energy and size of the basic flow unit compare favorably with theory.
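For context, the weakest-link Weibull argument invoked in this abstract takes a standard textbook form (stated here for orientation, not quoted from the thesis): the failure probability of a specimen of volume V loaded to stress σ is

```latex
P_f(\sigma, V) \;=\; 1 - \exp\!\left[-\frac{V}{V_0}\left(\frac{\sigma}{\sigma_0}\right)^{m}\right],
```

so equating failure probabilities for two specimen volumes gives the size scaling σ₁/σ₂ = (V₂/V₁)^(1/m); the larger the Weibull modulus m, the weaker the dependence of strength on specimen size.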
12

Practical aspects of kernel smoothing for binary regression and density estimation

Signorini, David F. January 1998 (has links)
This thesis explores the practical use of kernel smoothing in three areas: binary regression, density estimation, and Poisson regression sample size calculations. Both nonparametric and semiparametric binary regression estimators are examined in detail and extended to the two-bandwidth case. The asymptotic behaviour of these estimators is presented in a unified way, and their practical performance is assessed using a simulation experiment. It is shown that, when using the ideal bandwidth, the two-bandwidth estimators often lead to dramatically improved estimation. These benefits are not reproduced, however, when two general bandwidth selection procedures described briefly in the literature are applied to the estimators in question. Only in certain circumstances does the two-bandwidth estimator prove superior to the one-bandwidth semiparametric estimator, and a simple rule-of-thumb based on robust scale estimation is suggested. The second part summarises and compares many different approaches to improving upon the standard kernel method for density estimation. These estimators all have asymptotically 'better' behaviour than the standard estimator, but a small-sample simulation experiment is used to examine which, if any, can give important practical benefits. Very simple bandwidth selection rules which rely on robust estimates of scale are then constructed for the most promising estimators. It is shown that a particular multiplicative bias-correcting estimator is in many cases superior to the standard estimator, both asymptotically and in practice using a data-dependent bandwidth. The final part shows how the sample size or power for Poisson regression can be calculated using knowledge about the distribution of covariates. This knowledge is encapsulated in the moment generating function, and it is demonstrated that, in most circumstances, the use of the empirical moment generating function and related functions is superior to kernel-smoothed estimates.
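To illustrate the flavour of a bandwidth rule-of-thumb built on robust scale estimation, here is a minimal sketch of the classic normal-reference rule with a robust scale plug-in; the thesis's own rules target different estimators and use different constants:

```python
import numpy as np

def robust_rule_of_thumb_bandwidth(x):
    """Normal-reference rule-of-thumb bandwidth with a robust scale:
    0.9 * min(sd, IQR/1.349) * n^(-1/5). Illustrative only."""
    x = np.asarray(x, dtype=float)
    n = x.size
    sd = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))  # p75 - p25
    return 0.9 * min(sd, iqr / 1.349) * n ** (-0.2)

def kde(x, grid, h):
    """Standard (uncorrected) Gaussian kernel density estimate."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
h = robust_rule_of_thumb_bandwidth(sample)
density = kde(sample, np.linspace(-4, 4, 201), h)
```

The robust scale (IQR/1.349 estimates the standard deviation for Gaussian data) keeps the bandwidth from being inflated by outliers, which is the motivation behind rules of this kind.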
13

Sample Size Determination in Multivariate Parameters With Applications to Nonuniform Subsampling in Big Data High Dimensional Linear Regression

Wang, Yu 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Subsampling is an important method in the analysis of Big Data. Subsample size determination (SSSD) plays a crucial part in extracting information from data and in meeting the challenges that result from huge data sizes. In this thesis, (1) sample size determination (SSD) is investigated for multivariate parameters, and sample size formulas are obtained for the multivariate normal distribution; (2) sample size formulas are obtained based on concentration inequalities; (3) improved bounds for McDiarmid's inequality are obtained; (4) the results are applied to nonuniform subsampling in Big Data high-dimensional linear regression; and (5) numerical studies are conducted. The sample size formula for the univariate normal distribution is a staple of elementary statistics, yet, to the best of our knowledge, its generalization to the multivariate normal distribution (or, more generally, to multivariate parameters) has received little attention. In this thesis, we introduce a definition for SSD and obtain explicit formulas for the multivariate normal distribution, in gratifying analogy to the univariate formula. Commonly used concentration inequalities provide exponential rates, and sample sizes based on these inequalities are often loose. Talagrand (1995) provided the missing factor needed to sharpen these inequalities. We obtain the numerical values of the constants in the missing factor and slightly improve his results; furthermore, we provide the missing factor in McDiarmid's inequality. These improved bounds are used to give smaller required sample sizes.
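For reference, the univariate result the abstract generalizes is the familiar textbook formula (not quoted from the thesis): to estimate a normal mean to within margin of error d with confidence 1 − α when the standard deviation σ is known,

```latex
n \;\ge\; \left(\frac{z_{1-\alpha/2}\,\sigma}{d}\right)^{2}.
```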
14

Sample Size Determination in Simple Logistic Regression: Formula versus Simulation

Meganathan, Karthikeyan 05 October 2021 (has links)
No description available.
15

Determining the Optimum Number of Increments in Composite Sampling

Hathaway, John Ellis 20 May 2005 (has links) (PDF)
Composite sampling can be more cost-effective than simple random sampling. This paper considers how to determine the optimum number of increments to use in composite sampling. Composite sampling terminology and theory are outlined, and a model is developed which accounts for different sources of variation in compositing and data analysis. This model is used to define and understand the process of determining the optimum number of increments that should be used in forming a composite. The blending variance is shown to have a smaller range of possible values than previously reported when estimating the number of increments in a composite sample. Accounting for differing levels of the blending variance significantly affects the estimated number of increments.
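A minimal sketch of the kind of trade-off involved, under an assumed three-component variance model (increment-to-increment, blending, and analytical variance); this is not the thesis's model, which is more detailed, and all names and numbers below are illustrative:

```python
import math

def optimum_increments(sigma2_inc, sigma2_blend, sigma2_anal, var_target):
    """Smallest number of increments k such that the variance of a
    single composite measurement,
        Var = sigma2_inc / k + sigma2_blend + sigma2_anal,
    meets var_target. Assumed model for illustration only: compositing
    averages away increment variance, but blending and analytical
    variance are not reduced by adding increments.
    """
    residual = var_target - sigma2_blend - sigma2_anal
    if residual <= 0:
        raise ValueError("target unattainable: blending + analytical "
                         "variance already exceed the target")
    return math.ceil(sigma2_inc / residual)

# Example: increment variance 4.0, blending 0.2, analytical 0.3,
# target total variance 0.8 -> k = 14 increments
k = optimum_increments(4.0, 0.2, 0.3, 0.8)
```

The sketch makes the abstract's point visible: the assumed size of the blending variance directly controls how many increments appear worthwhile.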
16

Sample Size Analysis and Issues About No-Perfect Matched-Controls for Matched Case-Control Study

Liu, Chunyan 28 September 2006 (has links)
No description available.
17

Sample Size Calculations in Matched Case-Control Studies and Unmatched Case-Control Studies with Controls Contaminated

Liu, Xiaolei January 2008 (has links)
No description available.
18

Power and Sample Size for Three-Level Cluster Designs

Cunningham, Tina 05 November 2010 (has links)
Over the past few decades, cluster randomized trials (CRTs) have become a design of choice in many research areas. One of the most critical issues in planning a CRT is ensuring that the study design is sensitive enough to capture the intervention effect. Assessing power and sample size in such studies poses several methodological challenges. While studies on power and sample size for cluster designs with one and two levels are abundant, the evaluation of the required sample size for three-level designs has been generally overlooked. First, the nesting effect introduces more than one intracluster correlation into the model. Second, the variance structure of the estimated treatment difference is more complicated. Third, sample size results are needed at several levels. In this work, we developed sample size and power formulas for three-level data structures based on the generalized linear mixed model approach. We derived explicit and general power and sample size equations for detecting a hypothesized effect on continuous Gaussian outcomes and on binary outcomes. To confirm the accuracy of the formulas, we conducted several simulation studies and compared the results. To connect the theoretical formulas with their applications, we developed a SAS user-interface macro that allows researchers to estimate the sample size for a three-level design under different scenarios, depending on the level at which randomization is assigned and on whether an interaction effect is present.
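As a back-of-the-envelope companion to the GLMM-based formulas described above, the following sketch uses the common three-level design-effect approximation rather than the thesis's exact equations; all parameter names and numbers are illustrative:

```python
from math import ceil
from scipy.stats import norm

def n_subjects_three_level(delta, sigma, rho2, rho3, n1, n2,
                           alpha=0.05, power=0.80):
    """Approximate subjects per arm for a two-arm comparison of means
    in a three-level design (subjects within level-2 units within
    randomized level-3 clusters), via the design-effect heuristic
        DE = 1 + (n1 - 1)*rho2 + n1*(n2 - 1)*rho3,
    where rho2 is the correlation within a level-2 unit and rho3 the
    correlation between subjects sharing only a level-3 cluster.
    A rough check, not the thesis's GLMM-based formulas.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_indep = 2 * (z * sigma / delta) ** 2            # iid two-sample n
    de = 1 + (n1 - 1) * rho2 + n1 * (n2 - 1) * rho3   # design effect
    return ceil(n_indep * de)

# Example: detect a 0.5-SD difference with 10 subjects per provider,
# 4 providers per clinic, rho2 = 0.05, rho3 = 0.01 -> 110 per arm
n = n_subjects_three_level(0.5, 1.0, 0.05, 0.01, n1=10, n2=4)
```

Even small correlations inflate the required sample size substantially here (design effect 1.75), which is why ignoring the extra intracluster correlation in three-level designs is dangerous.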
19

The impact of sample size re-estimation on the type I error rate in the analysis of a continuous end-point

Zhao, Songnian January 1900 (has links)
Master of Science / Department of Statistics / Christopher Vahl / Sample size estimation is generally based on assumptions made during the planning stage of a clinical trial. Often there is limited information available to estimate the initial sample size, which may result in a poor estimate. For instance, an undersized study may lack the power to produce statistically significant results, while an oversized study wastes resources and may even raise ethical issues, in that too many patients are exposed to potentially ineffective treatments. Therefore, an interim analysis in the middle of a trial may be worthwhile to ensure that the significance level is at the nominal level and/or that the power is adequate to detect a meaningful treatment difference. In this report, the impact of sample size re-estimation on the type I error rate for a continuous endpoint in a clinical trial with two treatments is evaluated through a simulation study. Two sample size re-estimation methods are considered: blinded and partially unblinded. For the blinded method, all collected data from both groups are used to estimate the variance, while for the partially unblinded method only data from the control group are used to re-estimate the sample size. The simulation study is designed with different combinations of assumed variance, assumed difference in treatment means, and re-estimation method. The endpoint is assumed to follow a normal distribution, the variances of the two groups are assumed to be equal, and equal sample sizes are required in each group. According to the simulation results, the type I error rate is preserved in all settings.
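A minimal simulation in the spirit of the study described above, assuming a trial run under the null hypothesis with a mid-trial variance re-estimate (parameter choices are hypothetical, not the report's settings):

```python
import numpy as np
from scipy import stats

def one_trial(rng, n_init=50, n_interim=25, sigma=1.0, alpha=0.05,
              delta_plan=0.5, power=0.80, blinded=True, n_max=500):
    """One simulated two-arm trial under H0 (equal means) with
    mid-trial sample size re-estimation. Returns True if H0 is
    (falsely) rejected."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    # interim data; true treatment effect is 0 (type I error setting)
    a = rng.normal(0.0, sigma, n_interim)
    b = rng.normal(0.0, sigma, n_interim)
    if blinded:  # pool both arms without unblinding group labels
        s2 = np.var(np.concatenate([a, b]), ddof=1)
    else:        # partially unblinded: control arm only
        s2 = np.var(a, ddof=1)
    n_new = int(np.ceil(2 * s2 * (z / delta_plan) ** 2))  # per arm
    n_final = min(max(n_new, n_init), n_max)
    # second stage completes each arm to n_final subjects
    a = np.concatenate([a, rng.normal(0.0, sigma, n_final - n_interim)])
    b = np.concatenate([b, rng.normal(0.0, sigma, n_final - n_interim)])
    return stats.ttest_ind(a, b).pvalue < alpha

rng = np.random.default_rng(1)
rejections = sum(one_trial(rng) for _ in range(10000))
print("empirical type I error:", rejections / 10000)  # near 0.05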
20

The effect of sample size re-estimation on type I error rates when comparing two binomial proportions

Cong, Danni January 1900 (has links)
Master of Science / Department of Statistics / Christopher I. Vahl / Estimation of sample size is an important and critical procedure in the design of clinical trials. A trial with an inadequate sample size may not produce a statistically significant result. On the other hand, an unnecessarily large sample size increases the expenditure of resources and may raise an ethical problem by exposing an unnecessary number of human subjects to an inferior treatment. A poor estimate of the necessary sample size is often due to the limited information available at the planning stage. Hence, adjusting the sample size mid-trial has recently become a popular strategy. In this work, we introduce two methods for sample size re-estimation in trials with a binary endpoint that utilize interim information collected from the trial: a blinded method and a partially unblinded method. The blinded method recalculates the sample size based on the overall event proportion from the first stage, while the partially unblinded method performs the calculation based only on the control event proportion from the first stage. We performed simulation studies with different combinations of expected proportions based on fixed ratios of response rates, with equal sample sizes per group. The study shows that for both methods the type I error rate was preserved satisfactorily.
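For reference, the per-group sample size formula that such re-estimation procedures typically recompute with interim estimates plugged in (a standard textbook form, not quoted from the report): for proportions p₁ and p₂, significance level α, and power 1 − β,

```latex
n \;=\; \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}
\bigl[\,p_{1}(1-p_{1}) + p_{2}(1-p_{2})\,\bigr]}{\left(p_{1}-p_{2}\right)^{2}}.
```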
