  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world.
111

Consequences of GIS Classification Errors on Bias and Variance of Forest Inventory Estimates

Crosby, Michael Keith 30 April 2011 (has links)
The use of remotely sensed imagery (e.g., Landsat TM) for developing forest inventory strata has become increasingly common in recent years as data have become more readily available. Errors are inherent in the use of this technology, arising either from user mis-classification of conditions represented in the imagery or from flaws in the technology. Knowledge of these errors is important, as they can inflate the variance of inventory estimates. Forest inventory estimates from the Mississippi Institute for Forest Inventory (MIFI) were used to determine the extent to which classification errors affect volume and area estimates. Forest strata (e.g., hardwood, mixed, and pine) determined by the classification of imagery and used for inventory design were compared with field verification data obtained during the inventory. Mis-classified plots were reallocated to their correct strata, and both area and volume estimates were obtained for both scenarios (i.e., mis-classified and correctly classified plots). The standard error estimates for mean and total volume decreased when plots were reallocated to their correct strata. Mis-classification scenarios were then performed, introducing various levels of mis-classification in each stratum. When the scenarios were performed for the Doyle volume unit, the statistical efficiencies were larger than for cubic-foot volume. Care should be taken when utilizing moderate-resolution satellite imagery such as Landsat TM, as image mis-classification could lead to large losses in the precision of volume estimates. The increased efficiency obtained from a correct classification/forest stratification scheme, as demonstrated here, could lead to the exploration of additional image classification methods or the use of higher-resolution satellite data. Knowledge of these errors in advance could be useful to investors seeking a minimum-risk area for a forest products mill location.
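The effect the abstract describes can be sketched with a toy stratified estimator. All plot volumes, strata, and area weights below are invented for illustration (they are not MIFI data): moving one mis-classified pine plot out of the hardwood stratum collapses the within-stratum variance and so shrinks the standard error of the stratified mean.

```python
import math

# Hypothetical plot volumes grouped by the stratum each plot was assigned
# to by the image classification. The 410.0 in "hardwood" plays the role
# of a pine plot mis-classified as hardwood; "corrected" moves it back.
misclassified = {
    "hardwood": [120.0, 130.0, 125.0, 410.0],
    "pine":     [400.0, 420.0, 415.0],
}
corrected = {
    "hardwood": [120.0, 130.0, 125.0],
    "pine":     [400.0, 420.0, 415.0, 410.0],
}

def stratified_se(strata, area_weights):
    """Standard error of the stratified mean volume."""
    var = 0.0
    for name, plots in strata.items():
        n = len(plots)
        mean = sum(plots) / n
        s2 = sum((x - mean) ** 2 for x in plots) / (n - 1)  # sample variance
        var += area_weights[name] ** 2 * s2 / n
    return math.sqrt(var)

weights = {"hardwood": 0.5, "pine": 0.5}  # hypothetical area proportions
se_mis = stratified_se(misclassified, weights)
se_cor = stratified_se(corrected, weights)
print(se_mis > se_cor)  # re-allocation tightens the estimate
```

In this toy case the single mis-classified plot inflates the standard error by more than an order of magnitude, which is the mechanism behind the precision losses the study quantifies.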
112

Sources of Variability in a Proteomic Experiment

Crawford, Scott Daniel 11 August 2006 (has links) (PDF)
The study of proteomics holds the hope of detecting serious diseases earlier than is currently possible by analyzing blood samples in a mass spectrometer. Unfortunately, the statistics involved in comparing a control group to a diseased group are not trivial, and these difficulties have led others to incorrect decisions in the past. This paper considers a nested design that was used to quantify and identify the sources of variation in the mass spectrometer at BYU, so that correct conclusions can be drawn from blood samples analyzed in proteomics. Algorithms were developed to detect, align, correct, and cluster the peaks in this experiment. The variation in the m/z values as well as the variation in the intensities was studied, and the nested nature of the design allowed us to estimate the sources of that variation. The variation due to the machine components, including the mass spectrometer itself, was much greater than the variation in the preprocessing steps. This conclusion motivates future studies to investigate which of the machine steps causes the most variation.
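The variance-component split a nested design provides can be sketched with a minimal method-of-moments calculation for a balanced one-way nested layout. The readings and grouping below are invented, not the BYU data: replicate intensity readings are nested within machine runs, and the mean squares separate run-to-run ("machine") variation from replicate ("preprocessing") variation.

```python
# Balanced nested design: 3 machine runs, 3 replicate readings per run.
# Values are illustrative intensities, not real spectrometer output.
runs = [
    [10.0, 12.0, 11.0],
    [20.0, 22.0, 21.0],
    [15.0, 16.0, 17.0],
]

n = len(runs[0])                         # replicates per run
k = len(runs)                            # number of runs
grand = sum(sum(r) for r in runs) / (n * k)
run_means = [sum(r) / n for r in runs]

# ANOVA mean squares for the nested layout
ms_between = n * sum((m - grand) ** 2 for m in run_means) / (k - 1)
ms_within = sum((x - m) ** 2
                for r, m in zip(runs, run_means)
                for x in r) / (k * (n - 1))

var_within = ms_within                       # replicate-level variation
var_between = (ms_between - ms_within) / n   # run-to-run variation
print(var_between > var_within)
```

Here the run-to-run component dominates, mirroring the paper's finding that machine variation exceeded preprocessing variation; with more nesting levels the same equating of mean squares to their expectations yields one component per level.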
113

Components of Variance Analysis

Walpole, Ronald E. 10 1900 (has links)
<p> In this thesis a systematic and short method for computing the expected values of mean squares has been developed. One chapter is devoted to the theory of regression analysis by the method of least squares using matrix notation and a proof is given that the method of least squares leads to an absolute minimum, a result which the author has not found in the literature. For two-way classifications the results have been developed for proportional frequencies, a subject which again has been neglected in the literature except for the Type II model. Finally, the methods for computing the expected values of the mean squares are applied to nested classifications and Latin square designs.</p> / Thesis / Master of Arts (MA)
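The absolute-minimum property mentioned in the abstract follows from a standard quadratic identity; a sketch in the thesis's matrix notation, with the normal-equations solution b&#770; = (X'X)⁻¹X'y, is:

```latex
S(b) = (y - Xb)'(y - Xb)
     = (y - X\hat{b})'(y - X\hat{b}) + (b - \hat{b})'X'X(b - \hat{b})
     \;\ge\; S(\hat{b}),
```

since the cross term vanishes by the normal equations X'X b&#770; = X'y and X'X is non-negative definite; hence b&#770; gives an absolute (not merely local) minimum of the residual sum of squares.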
114

A variance reduction technique for production cost simulation

Wise, Michael Anthony January 1989 (has links)
No description available.
115

Estimation of (co)variance components by weighted and unweighted symmetric differences squared, and selected MIVQUE's : relationships between methods and relative efficiencies /

Keele, John Wiliam January 1986 (has links)
No description available.
116

Norm-referenced construct validation of the Adaptive Behavior Scale for Infants and Early Childhood (ABSI) using covariance structure modeling (LISREL) /

Weaver, David January 1986 (has links)
No description available.
117

Bayesian optimal experimental design for the comparison of treatment with a control in the analysis of variance setting /

Toman, Blaza January 1987 (has links)
No description available.
118

Estimability and testability in linear models

Alalouf, Serge January 1975 (has links)
No description available.
119

Dual Model Robust Regression

Robinson, Timothy J. 15 April 1997 (has links)
In typical normal-theory regression, the assumption of homogeneity of variances is often not appropriate. Instead of treating the variances as a nuisance and transforming away the heterogeneity, the structure of the variances may be of interest, and it is desirable to model the variances. Aitkin (1987) proposes a parametric dual model in which a log-linear dependence of the variances on a set of explanatory variables is assumed. Aitkin's parametric approach is an iterative one, providing estimates for the parameters in the mean and variance models through joint maximum likelihood. Estimation of the mean and variance parameters is interrelated, as the responses in the variance model are the squared residuals from the fit to the means model. When one or both of the models (the mean or variance model) are misspecified, parametric dual modeling can lead to faulty inferences. An alternative to parametric dual modeling is to let the data completely determine the form of the true underlying mean and variance functions (nonparametric dual modeling). However, nonparametric techniques often result in estimates which are characterized by high variability, and they ignore important knowledge that the user may have regarding the process. Mays and Birch (1996) have demonstrated an effective semiparametric method in the one-regressor, single-model regression setting which is a "hybrid" of parametric and nonparametric fits. Using their techniques, we develop a dual modeling approach which is robust to misspecification in either or both of the two models. Examples will be presented to illustrate the new technique, termed here Dual Model Robust Regression. / Ph. D.
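The parametric side of dual modeling can be sketched as an alternating scheme in the spirit of Aitkin (1987): fit the mean, regress the log of the squared residuals on the explanatory variables (the log-linear variance model), then re-fit the mean with the implied inverse-variance weights. The data are simulated and this is a rough illustration of the idea, not the thesis's estimator.

```python
import numpy as np

# Simulated heteroscedastic data: variance grows with x.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 80)
y = 2.0 + 3.0 * x + rng.normal(scale=0.2 * x)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares start

for _ in range(5):                            # alternate mean / variance fits
    resid2 = (y - X @ beta) ** 2
    # log-linear variance model: log sigma_i^2 = gamma0 + gamma1 * x_i
    gamma = np.linalg.lstsq(X, np.log(resid2 + 1e-12), rcond=None)[0]
    w = 1.0 / np.exp(X @ gamma)               # inverse estimated variances
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]

print(beta)  # should be near the true (2, 3)
```

The failure mode the thesis targets is visible here: if the log-linear form for the variances (or the linear form for the mean) is wrong, the weights are wrong and the "joint" fit inherits the misspecification, which is what motivates the semiparametric hybrid.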
120

An examination of outliers and interaction in a nonreplicated two-way table

Kuzmak, Barbara R. 11 May 2006 (has links)
The additive-plus-multiplicative model, Y<sub>ij</sub> = μ + α<sub>i</sub> + β<sub>j</sub> + ∑<sub>p=1</sub><sup>k</sup>λ<sub>p</sub>τ<sub>pi</sub>γ<sub>pj</sub>, has been used to describe multiplicative interaction in an unreplicated experiment. Outlier effects often appear as interaction in a two-way analysis of variance with one observation per cell. I use this model in the same setting to study outliers. In data sets with significant interaction, one may be interested in determining whether the interaction is due to true interaction, outliers, or both. I develop a new technique which can show how outliers can be distinguished from interaction when there are simple outliers in a two-way table. Several examples illustrating the use of this model to describe outliers and interaction are presented. I briefly address the topics of leverage and influence. Leverage measures the impact a change in an observation has on fitted values, whereas influence evaluates the effect deleting an observation has on model estimates. I extend the leverage tables for an additive-plus-multiplicative model of rank 1 to a rank k model. Several examples studying the influence in a two-way nonreplicated table are given. / Ph. D.
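The multiplicative terms λ<sub>p</sub>τ<sub>pi</sub>γ<sub>pj</sub> can be obtained as the singular triples of the residuals from the additive two-way fit, and the toy table below (values invented) shows why a single outlier masquerades as interaction: it produces an exactly rank-one residual pattern, so the first multiplicative term absorbs essentially all of it.

```python
import numpy as np

# A perfectly additive 3x3 table except that cell (3,3), which would be
# 12.0 under additivity, is replaced by the outlier 30.0.
Y = np.array([[10.0, 12.0, 11.0],
              [13.0, 15.0, 14.0],
              [11.0, 13.0, 30.0]])

mu = Y.mean()
alpha = Y.mean(axis=1) - mu           # row effects
beta = Y.mean(axis=0) - mu            # column effects
Z = Y - mu - alpha[:, None] - beta[None, :]   # interaction residuals

# Singular value decomposition supplies the multiplicative terms:
# lambda_p = s[p], tau_p = U[:, p], gamma_p = Vt[p, :].
U, s, Vt = np.linalg.svd(Z)
share = s[0] ** 2 / (s ** 2).sum()    # fraction captured by the first term
print(share)                          # close to 1.0 for a lone outlier
```

A genuine multiplicative interaction would typically spread energy across several singular values, whereas a lone outlier leaves the characteristic rank-one pattern (e<sub>r</sub> − 1/R)(e<sub>c</sub> − 1/C)′ seen here; the thesis develops this distinction formally.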
