1. Estimating standard errors of estimated variance components in generalizability theory using bootstrap procedures. Moore, Joann Lynn (01 December 2010)
This study investigated the extent to which the rules proposed by Tong and Brennan (2007) for estimating standard errors of estimated variance components held up across a variety of G theory designs, variance component structures, sample size patterns, and data types. Simulated data were generated for all combinations of conditions, and for each combination several bootstrap procedures were used to compute point estimates, standard error estimates, and coverage of three types of confidence intervals for each estimated variance component and for relative and absolute error variance. With some exceptions, Tong and Brennan's (2007) rules produced adequate standard error estimates for normal and polytomous data, whereas some results differed for dichotomous data. Some refinements to the rules were also suggested for nested designs. The study supports the use of bootstrap procedures for estimating standard errors of estimated variance components when data are not normally distributed.
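As a loose illustration of the kind of procedure examined here, the following is a minimal sketch of bootstrapping the standard errors of ANOVA-based variance component estimates. It assumes a simple crossed p x i design and a plain "boot-p" resampling of persons; these are illustrative choices, not the study's actual designs or its adjusted estimators.

```python
# A minimal sketch (not the study's implementation) of a "boot-p" nonparametric
# bootstrap for standard errors of estimated variance components in a crossed
# p x i G-study design. Sample sizes and the choice to resample only persons
# are illustrative assumptions.
import numpy as np

def variance_components(X):
    """ANOVA-based estimates of sigma^2_p, sigma^2_i, sigma^2_pi,e for a p x i design."""
    n_p, n_i = X.shape
    grand = X.mean()
    person_means = X.mean(axis=1)
    item_means = X.mean(axis=0)
    ms_p = n_i * np.sum((person_means - grand) ** 2) / (n_p - 1)
    ms_i = n_p * np.sum((item_means - grand) ** 2) / (n_i - 1)
    resid = X - person_means[:, None] - item_means[None, :] + grand
    ms_res = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))
    return {"p": (ms_p - ms_res) / n_i,
            "i": (ms_i - ms_res) / n_p,
            "pi,e": ms_res}

def boot_p_standard_errors(X, n_boot=1000, seed=0):
    """Resample persons with replacement; the SD of each estimate is its bootstrap SE."""
    rng = np.random.default_rng(seed)
    n_p = X.shape[0]
    draws = {k: [] for k in ("p", "i", "pi,e")}
    for _ in range(n_boot):
        Xb = X[rng.integers(0, n_p, n_p), :]
        for k, v in variance_components(Xb).items():
            draws[k].append(v)
    return {k: float(np.std(v, ddof=1)) for k, v in draws.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Simulated p x i scores: 100 persons, 20 items (illustrative sizes)
    X = rng.normal(size=(100, 1)) + rng.normal(size=(1, 20)) + rng.normal(size=(100, 20))
    print(variance_components(X))
    print(boot_p_standard_errors(X))
```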
2. Net pay evaluation: a comparison of methods to estimate net pay and net-to-gross ratio using surrogate variables. Bouffin, Nicolas (02 June 2009)
Net pay (NP) and net-to-gross ratio (NGR) are often crucial quantities for characterizing a reservoir and assessing the amount of hydrocarbons in place. Numerous methods have been developed in industry to evaluate NP and NGR, depending on the intended purpose. These methods usually apply cut-off values to one or more surrogate variables to discriminate non-reservoir from reservoir rocks. This study investigates statistical issues related to the selection of such cut-off values, considering the specific case of porosity (φ) as the surrogate. Four methods are applied to permeability-porosity datasets to estimate porosity cut-off values. All the methods assume that a permeability cut-off value has been determined beforehand, and each is based on minimizing the prediction error when particular assumptions are satisfied. The results show that delineating NP and evaluating NGR require different porosity cut-off values. When porosity and the logarithm of permeability are jointly normally distributed, NP delineation requires the Y-on-X regression line to estimate the optimal porosity cut-off, while the reduced major axis (RMA) line provides the optimal porosity cut-off value for evaluating NGR. Alternatives to the RMA and regression lines are also investigated, such as discriminant analysis and a data-oriented method based on a probabilistic analysis of porosity-permeability crossplots. Joint normal datasets are generated to test how accurately the methods predict the optimal porosity cut-off value for sampled subsets. The methods are compared on the basis of the bias, standard error, and robustness of their estimates. Field data from the Travis Peak formation are used to test the performance of the methods and confirm the conclusions of the study: as long as the initial assumptions about the data distribution are verified, the Y-on-X regression line is recommended for delineating NP, while either the RMA line or discriminant analysis should be used for evaluating NGR. Where the distributional assumptions are not verified, the quadrant method should be used.
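To make the distinction concrete, here is a minimal sketch, assuming porosity and log10-permeability are roughly joint normal and that a permeability cut-off is already fixed, of reading a porosity cut-off off either the Y-on-X regression line or the RMA line. The synthetic data and variable names are illustrative, not the study's datasets.

```python
# A minimal sketch of porosity cut-off estimation from a fixed log-permeability
# cut-off, using either the Y-on-X regression line or the reduced major axis
# (RMA) line. Data and parameter values are illustrative assumptions.
import numpy as np

def porosity_cutoffs(phi, log_k, log_k_cut):
    x, y = np.asarray(phi), np.asarray(log_k)
    r = np.corrcoef(x, y)[0, 1]
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    # Y-on-X regression line (suggested in the study for net-pay delineation)
    b_reg = r * sy / sx
    a_reg = y.mean() - b_reg * x.mean()
    # Reduced major axis line (suggested in the study for net-to-gross evaluation)
    b_rma = np.sign(r) * sy / sx
    a_rma = y.mean() - b_rma * x.mean()
    return {"regression": (log_k_cut - a_reg) / b_reg,
            "rma": (log_k_cut - a_rma) / b_rma}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phi = rng.normal(0.15, 0.05, 500)                    # porosity fraction (synthetic)
    log_k = -2.0 + 25.0 * phi + rng.normal(0, 0.5, 500)  # log10 permeability in mD (synthetic)
    print(porosity_cutoffs(phi, log_k, log_k_cut=0.0))   # cut-off at k = 1 mD
```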
3. Error resilience in JPEG2000. Natu, Ambarish Shrikrishna (January 2003)
Thesis (M.E.), University of New South Wales, 2003. Also available online.
4. Automatic source camera identification by lens aberration and JPEG compression statistics. Choi, Kai-san (January 2006)
Thesis (M.Phil.), University of Hong Kong, 2007. Title proper from title frame. Also available in printed format.
5. An investigation of bootstrap methods for estimating the standard error of equating under the common-item nonequivalent groups design. Wang, Chunxin (01 July 2011)
The purpose of this study was to investigate the performance of the parametric bootstrap method and to compare the parametric and nonparametric bootstrap methods for estimating the standard error of equating (SEE) under the common-item nonequivalent groups (CINEG) design with the frequency estimation (FE) equipercentile method under a variety of simulated conditions.
To investigate the performance of the parametric bootstrap method, bivariate polynomial log-linear models were employed to fit the data. Considering different polynomial degrees and two numbers of cross-product moments, a total of eight parametric bootstrap models were examined. Two real datasets were used as the basis for defining the population distributions and the "true" SEEs. A simulation study was conducted with three levels of group proficiency differences, three levels of sample size, two test lengths, and two ratios of the number of common items to the total number of items. The bias, standard error, and root mean square error of the SEE, along with their corresponding weighted indices, were calculated to evaluate and compare the simulation results.
The main findings from this simulation study were as follows: (1) Parametric bootstrap models with higher polynomial degrees generally produced smaller bias but larger standard errors than those with lower polynomial degrees. (2) Parametric bootstrap models with a cross-product moment (CPM) of order two generally yielded more accurate estimates of the SEE than the corresponding models with a CPM of order one. (3) The nonparametric bootstrap method generally produced less accurate estimates of the SEE than the parametric bootstrap method; however, as the sample size increased, the differences between the two methods became smaller. When the sample size was 3,000 or larger, the differences between the nonparametric bootstrap method and the parametric bootstrap model with the smallest RMSE were very small. (4) Of all the models considered, parametric bootstrap models with a polynomial degree of four performed best under most simulation conditions. (5) Aside from method effects, sample size and test length had the most impact on estimating the SEE. Group proficiency differences and the ratio of common items to total items had little effect for a short test and only a slight effect for a long test.
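For readers unfamiliar with the bootstrap SEE, the sketch below illustrates only the nonparametric bootstrap mechanics under a CINEG-like design. It substitutes simple chained linear equating for the frequency estimation equipercentile method studied here, and the data, score points, and replication count are illustrative assumptions.

```python
# A minimal sketch of the nonparametric bootstrap standard error of equating
# (SEE) under a CINEG-like design, using chained linear equating as a stand-in
# for the FE equipercentile method. Everything here is illustrative.
import numpy as np

def chained_linear(x_points, X1, V1, V2, Y2):
    """Equate X to Y through the common-item score V (group 1 takes X and V; group 2 takes Y and V)."""
    to_v = V1.mean() + V1.std(ddof=1) / X1.std(ddof=1) * (x_points - X1.mean())
    return Y2.mean() + Y2.std(ddof=1) / V2.std(ddof=1) * (to_v - V2.mean())

def bootstrap_see(x_points, X1, V1, V2, Y2, n_boot=500, seed=0):
    """Resample examinees within each group; the SD of the equated scores is the SEE."""
    rng = np.random.default_rng(seed)
    reps = np.empty((n_boot, len(x_points)))
    n1, n2 = len(X1), len(V2)
    for b in range(n_boot):
        i1 = rng.integers(0, n1, n1)   # resample group 1 examinees
        i2 = rng.integers(0, n2, n2)   # resample group 2 examinees
        reps[b] = chained_linear(x_points, X1[i1], V1[i1], V2[i2], Y2[i2])
    return reps.std(axis=0, ddof=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    theta1, theta2 = rng.normal(0, 1, 2000), rng.normal(0.2, 1, 2000)  # group proficiency difference
    X1, V1 = 30 + 6 * theta1 + rng.normal(0, 2, 2000), 10 + 2 * theta1 + rng.normal(0, 1, 2000)
    Y2, V2 = 30 + 6 * theta2 + rng.normal(0, 2, 2000), 10 + 2 * theta2 + rng.normal(0, 1, 2000)
    print(bootstrap_see(np.array([20.0, 30.0, 40.0]), X1, V1, V2, Y2))
```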
6. Inferential Methods for the Tetrachoric Correlation Coefficient. Bonett, Douglas G.; Price, Robert M. (01 January 2005)
The tetrachoric correlation describes the linear relation between two continuous variables that have each been measured on a dichotomous scale. The treatment of the point estimate, standard error, interval estimate, and sample size requirement for the tetrachoric correlation is cursory and incomplete in modern psychometric and behavioral statistics texts. A new and simple method of accurately approximating the tetrachoric correlation is introduced. The tetrachoric approximation is then used to derive a simple standard error, confidence interval, and sample size planning formula. The new confidence interval is shown to perform far better than the confidence interval computed by SAS. A method to improve the SAS confidence interval is proposed. All of the new results are computationally simple and are ideally suited for textbook and classroom presentations.
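As background, the sketch below computes the classical "cosine-pi" approximation to the tetrachoric correlation from a 2 x 2 table and attaches a simple multinomial-bootstrap percentile interval. It is not the new approximation, standard error, or confidence interval derived in the article, only an illustration of the quantities involved.

```python
# A minimal sketch: classical cosine-pi approximation to the tetrachoric
# correlation for a 2x2 table of counts [[a, b], [c, d]], plus a bootstrap
# percentile interval. Counts and settings are illustrative assumptions.
import numpy as np

def tetrachoric_cos_pi(table):
    """Approximate r_tet = cos(pi / (1 + sqrt(ad / bc)))."""
    (a, b), (c, d) = np.asarray(table, dtype=float) + 0.5  # small continuity correction
    return np.cos(np.pi / (1.0 + np.sqrt((a * d) / (b * c))))

def bootstrap_ci(table, n_boot=2000, alpha=0.05, seed=0):
    """Percentile interval from multinomial resampling of the four cell counts."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(table, dtype=float).ravel()
    n = counts.sum()
    draws = [tetrachoric_cos_pi(rng.multinomial(int(n), counts / n).reshape(2, 2))
             for _ in range(n_boot)]
    return np.quantile(draws, [alpha / 2, 1 - alpha / 2])

if __name__ == "__main__":
    table = [[40, 10], [15, 35]]  # illustrative 2x2 counts
    print(tetrachoric_cos_pi(table), bootstrap_ci(table))
```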
7. Estimation of the standard error and confidence interval of the indirect effect in multiple mediator models. Briggs, Nancy Elizabeth (22 September 2006)
No description available.
8. Comparing Model-based and Design-based Structural Equation Modeling Approaches in Analyzing Complex Survey Data. Wu, Jiun-Yu (August 2010)
Conventional statistical methods that assume simple random sampling are inadequate for complex survey data with a multilevel structure and non-independent observations. In the structural equation modeling (SEM) framework, a researcher analyzing dependent data can either use robust sandwich standard error estimators to correct the standard error estimates (the design-based approach) or perform a multilevel analysis that models the multilevel data structure (the model-based approach).
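Outside the SEM setting, the design-based idea can be illustrated with a small sketch: fit an ordinary model and then correct the standard errors with a cluster-robust sandwich estimator (CR0 here, with simulated data, no small-sample adjustments, and illustrative names and sizes).

```python
# A minimal sketch of the design-based correction: ordinary least squares with
# a cluster-robust (CR0) sandwich covariance estimator. Simulated data only.
import numpy as np

def ols_cluster_robust(y, X, clusters):
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # "Meat" of the sandwich: sum of per-cluster score outer products
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        Xg, ug = X[clusters == g], resid[clusters == g]
        sg = Xg.T @ ug
        meat += np.outer(sg, sg)
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_clusters, size = 50, 20
    clusters = np.repeat(np.arange(n_clusters), size)
    u_cluster = rng.normal(0, 1, n_clusters)[clusters]   # shared cluster effect induces dependence
    x = rng.normal(size=n_clusters * size)
    y = 1.0 + 0.5 * x + u_cluster + rng.normal(size=n_clusters * size)
    X = np.column_stack([np.ones_like(x), x])
    beta, se = ols_cluster_robust(y, X, clusters)
    print("estimates:", beta, "cluster-robust SEs:", se)
```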
In a cross-sectional setting, the first study examines the differences between design-based single-level confirmatory factor analysis (CFA) and model-based multilevel CFA in terms of model fit test statistics and fit indices, and estimates of fixed and random effects with the corresponding statistical inferences, when analyzing multilevel data. Several design factors were considered, including cluster number, cluster size, intra-class correlation, and whether the between- and within-level model structures were equal. The performance of a maximum modeling strategy, with a saturated higher-level model and the true lower-level model, was also examined. The simulation showed that the design-based approach provided adequate results only when the between- and within-level structures were equal; with unequal structures, it produced biased fixed and random effect estimates. Maximum modeling generated consistent and unbiased within-level parameter estimates across the three scenarios.
Multilevel latent growth curve modeling (MLGCM) is a versatile tool for analyzing repeated measures collected through multi-stage sampling. However, researchers often adopt latent growth curve models (LGCM) without considering the multilevel structure. The second study examined the influence of different model specifications on model fit test statistics and fit indices, between- and within-level regression coefficient and random effect estimates, and mean structures. The simulation suggested that design-based MLGCM incorporating the higher-level covariates produces consistent parameter estimates and statistical inferences comparable to those from model-based MLGCM, and maintains adequate statistical power even with a small number of clusters.
9. Optimizing LDPC codes for a mobile WiMAX system with a saturated transmission amplifier. Salmon, Brian P. (January 2008)
Thesis (M.Eng. (Electronic Engineering)), University of Pretoria, 2008. Summaries in Afrikaans and English. Includes bibliographical references (leaves [92]-99).