A comparison of a supplementary sample non-parametric empirical Bayes estimator with the classical estimator in a quality control situation. Gabbert, James Tate, January 1968.
The purpose of this study was to compare the effectiveness of the classical estimator with that of a supplementary sample non-parametric empirical Bayes estimator in detecting an out-of-control situation arising in statistical quality control work. The investigation was accomplished through Monte Carlo simulation on the IBM-7040/1401 system at the Virginia Polytechnic Institute Computing Center, Blacksburg, Virginia.
In most cases considered in this study, the sole criterion for accepting or rejecting the hypothesis that the industrial process is in control was the location of the estimate on the control chart for fraction defectives. If an estimate fell outside the 3σ control limits, that particular batch was said to have been produced by an out-of-control system. In other cases the concept of "runs" was included as an additional criterion for acceptance or rejection.
Also considered were various parameters, such as the mean in-control fraction defectives, the mean out-of-control fraction defectives, the first sample size, the standard deviation of the supplementary sample estimates, and the number of past experiences used in computing the empirical Bayes estimator.
The Monte Carlo studies showed that, for almost any set of parameter values, the empirical Bayes estimator is much more effective in detecting an out-of-control situation than is the classical estimator. The most notable advantage gained by using the empirical Bayes estimator is that long-range lack of detection is virtually impossible. / M.S.
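The classical side of the comparison, flagging a batch whenever the fraction-defective estimate falls outside the 3σ control limits, can be sketched as a small Monte Carlo study; the parameter values and seed below are illustrative, not those used in the thesis:

```python
import math
import random

def p_chart_limits(p0, n):
    # Classical 3-sigma control limits for the fraction defective
    s = math.sqrt(p0 * (1.0 - p0) / n)
    return max(0.0, p0 - 3.0 * s), p0 + 3.0 * s

def classical_estimates(p, n, batches, rng):
    # One classical estimate per batch: observed defectives / sample size
    return [sum(rng.random() < p for _ in range(n)) / n for _ in range(batches)]

rng = random.Random(1)
p_in, p_out, n = 0.05, 0.15, 50        # in- and out-of-control fraction defective (assumed)
lcl, ucl = p_chart_limits(p_in, n)

# Fraction of out-of-control batches actually flagged by the chart
flags = [not (lcl <= est <= ucl) for est in classical_estimates(p_out, n, 200, rng)]
detection_rate = sum(flags) / len(flags)
```

With a moderate shift in the fraction defective, the classical chart flags many but not all batches, which is the long-range lack of detection the empirical Bayes estimator is reported to avoid.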
Application of order-based genetic algorithms to network path searching and location estimation. Baugh, Walter T., January 1994.
M.S.
Mathematical bases of experimental sampling for estimation of size of certain biological populations. Cox, Edwin L., January 1949.
M.S.
Lower bounds for the variance of uniformly minimum variance unbiased estimators. Lemon, Glen Hortin, January 1965.
The object of this paper was to study lower bounds for the variance of uniformly minimum variance unbiased estimators.
The lower bounds of Cramér and Rao, Bhattacharyya, Hammersley, Chapman and Robbins, and Kiefer were derived and discussed. Each was compared with the others, showing their relative merits and shortcomings.
All of the lower bounds considered are greater than or equal to the Cramér-Rao lower bound, and the Kiefer lower bound is as good as any of the others, or better.
We were able to show that the Cramér-Rao lower bound is exactly the first Bhattacharyya lower bound. The Hammersley and the Chapman and Robbins lower bounds are identical when they have the same parameter space, i.e., when Ω = (a,b).
The use of the various lower bounds is illustrated in examples throughout the paper. / M.S.
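For a concrete instance of the Cramér-Rao bound discussed above: the sample mean of n Bernoulli(p) observations is unbiased for p and attains the bound p(1-p)/n. A quick simulation (all values illustrative, not from the paper) confirms this numerically:

```python
import random

def empirical_variance(xs):
    # Unbiased sample variance
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

rng = random.Random(0)
p, n, reps = 0.3, 40, 5000

# Sample mean of n Bernoulli(p) trials: an unbiased estimator of p
means = [sum(rng.random() < p for _ in range(n)) / n for _ in range(reps)]

# Cramer-Rao bound 1/(n*I), with per-observation Fisher information
# I = 1/(p(1-p)), so the bound is p(1-p)/n
crlb = p * (1.0 - p) / n
emp = empirical_variance(means)
```

The empirical variance of the sample mean should sit essentially on the bound, since the Bernoulli family is a full exponential family and the mean is efficient.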
A function space approach to the generalized nonlinear model with applications to frequency domain spectral estimation. Selander, Keith N., 6 June 2008.
Peter McCullagh (1983) outlined the theory of quasi-likelihood estimation in generalized linear models. Chiu (1988) showed that an iterated, reweighted least squares procedure applied to the periodogram produces estimates of spectral density model parameters for Gaussian univariate time series which have the same asymptotic variance as those produced by maximizing the true likelihood. In this dissertation, McCullagh's theory is combined with a functional analysis approach and extended to parametric estimation of the spectral density matrix components of a non-Gaussian bivariate time series. An asymptotic optimality theorem is given, which shows optimality of an iterated, reweighted least squares procedure within a class of procedures. The principal application of the theory, however, is parametric spectral estimation for an observed "contaminated" Gaussian series X(t)+N(t), where the noise series N is uncorrelated with the X series and it is desired to estimate the spectrum of the X series. Previous literature suggests removing contaminated bands of the periodogram prior to analysis, but the results of the dissertation may be used to estimate the spectrum of the X series unbiasedly without precise knowledge of which bands are contaminated. / Ph. D.
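The periodogram that underlies the iterated, reweighted least squares procedure can be computed directly from its definition. The following sketch uses an assumed AR(1) example (not taken from the dissertation) and shows the characteristic concentration of power at low frequencies:

```python
import cmath
import random

def periodogram(x):
    # Raw periodogram I(j/n) = |DFT(x)_j|^2 / n at Fourier frequencies j/n
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * j * t / n)
                    for t in range(n))) ** 2 / n
            for j in range(1, n // 2)]

# Simulate a Gaussian AR(1) series x_t = phi * x_{t-1} + e_t
rng = random.Random(42)
phi, n = 0.6, 256
x, prev = [], 0.0
for _ in range(n):
    prev = phi * prev + rng.gauss(0.0, 1.0)
    x.append(prev)
mean = sum(x) / n
I = periodogram([v - mean for v in x])

low_power = sum(I[:32])     # ordinates near frequency zero
high_power = sum(I[-32:])   # ordinates near the Nyquist frequency
```

For phi > 0 the AR(1) spectral density is decreasing in frequency, so the low-frequency ordinates should clearly dominate; a parametric fit would regress such ordinates on the model spectrum with iteratively updated weights.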
Non-asymptotic bounds for prediction problems and density estimation. Minsker, Stanislav, 5 July 2012.
This dissertation investigates the learning scenarios where a high-dimensional parameter has to be estimated from a given sample of fixed size, often smaller than the dimension of the problem. The first part answers some open questions for the binary classification problem in the framework of active learning.
Given a random couple (X,Y) with unknown distribution P, the goal of binary classification is to predict a label Y based on the observation X. A prediction rule is constructed from a sequence of observations sampled from P. The concept of active learning can be informally characterized as follows: on every iteration, the algorithm is allowed to request a label Y for any instance X which it considers to be the most informative. The contribution of this work consists of two parts: first, we provide minimax lower bounds for the performance of active learning methods. Second, we propose an active learning algorithm which attains nearly optimal rates over a broad class of underlying distributions and is adaptive with respect to the unknown parameters of the problem.
The second part of this thesis is related to sparse recovery in the framework of dictionary learning. Let (X,Y) be a random couple with unknown distribution P. Given a collection of functions H, the goal of dictionary learning is to construct a prediction rule for Y given by a linear combination of the elements of H. The problem is sparse if there exists a good prediction rule that depends on a small number of functions from H. We propose an estimator of the unknown optimal prediction rule based on penalized empirical risk minimization algorithm. We show that the proposed estimator is able to take advantage of the possible sparse structure of the problem by providing probabilistic bounds for its performance.
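A minimal version of penalized empirical risk minimization with an l1 (lasso-type) penalty, under squared-error loss and a toy sparse model, can be written as cyclic coordinate descent. All parameter values, the design, and the penalty level below are assumed for illustration only:

```python
import random

def soft_threshold(z, lam):
    # Proximal operator of the l1 penalty
    return z - lam if z > lam else z + lam if z < -lam else 0.0

def lasso_cd(X, y, lam, sweeps=100):
    # Cyclic coordinate descent for (1/2)||y - Xb||^2 + lam * ||b||_1
    n, p = len(X), len(X[0])
    b = [0.0] * p
    col_sq = [sum(row[j] ** 2 for row in X) for j in range(p)]
    r = list(y)                                   # current residual y - Xb
    for _ in range(sweeps):
        for j in range(p):
            # Correlation of column j with the partial residual
            c = sum(X[i][j] * (r[i] + X[i][j] * b[j]) for i in range(n))
            new = soft_threshold(c, lam) / col_sq[j]
            if new != b[j]:
                for i in range(n):
                    r[i] -= X[i][j] * (new - b[j])
                b[j] = new
    return b

rng = random.Random(7)
n, p = 60, 5
beta = [3.0, 0.0, 0.0, -2.0, 0.0]               # sparse "true" coefficients (assumed)
X = [[rng.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
y = [sum(X[i][j] * beta[j] for j in range(p)) + rng.gauss(0.0, 0.1)
     for i in range(n)]
b_hat = lasso_cd(X, y, lam=5.0)
```

The estimator recovers the sparse support: the coefficients that are truly zero are set to exactly zero by the soft-thresholding step, which is the sense in which the penalized estimator "takes advantage of the possible sparse structure".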
Some statistical aspects of LULU smoothers. Jankowitz, Maria Dorothea.
Thesis (PhD (Statistics and Actuarial Science))--University of Stellenbosch, 2007.
The smoothing of time series plays a very important role in various practical applications. Estimating the signal and removing the noise is the main goal of smoothing. Traditionally linear smoothers were used, but nonlinear smoothers have become more popular through the years.
From the family of nonlinear smoothers, the class of median smoothers, based on order statistics, is the most popular. A new class of nonlinear smoothers, called LULU smoothers, was developed using the minimum and maximum selectors. These smoothers have very attractive mathematical properties. In this thesis their statistical properties are investigated and compared with those of the class of median smoothers.
Smoothing, together with related concepts, is discussed in general. Thereafter, the class of median smoothers from the literature is discussed. The class of LULU smoothers is defined, their properties are explained, and new contributions are made. The compound LULU smoother is introduced and its property of variation decomposition is discussed. The probability distributions of some LULU smoothers with independent data are derived. LULU smoothers and median smoothers are compared according to the properties of monotonicity, idempotency, co-idempotency, stability, edge preservation, output distributions and variation decomposition. A comparison is made of their respective abilities for signal recovery by means of simulations. The success of the smoothers in recovering the signal is measured by the integrated mean square error and by the regression coefficient calculated from the least squares regression of the smoothed sequence on the signal. Finally, LULU smoothers are applied in practice.
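The minimum and maximum selectors that define the basic LULU operators L_n and U_n can be sketched directly; the edge padding and the test sequences below are illustrative choices, not taken from the thesis:

```python
def L_op(x, n):
    # L_n: max over the n+1 windows of width n+1 covering each point of the
    # window minimum; removes upward impulses of width <= n (edge padded)
    p = [x[0]] * n + list(x) + [x[-1]] * n
    return [max(min(p[j:j + n + 1]) for j in range(i - n, i + 1))
            for i in range(n, n + len(x))]

def U_op(x, n):
    # U_n: min of window maxima; removes downward impulses of width <= n
    p = [x[0]] * n + list(x) + [x[-1]] * n
    return [min(max(p[j:j + n + 1]) for j in range(i - n, i + 1))
            for i in range(n, n + len(x))]

spike_up = [1, 1, 1, 9, 1, 1, 1]                # single upward impulse
spike_down = [5, 5, 0, 5, 5]                    # single downward impulse
smoothed_up = L_op(spike_up, 1)                 # L_1 removes the upward spike
smoothed_down = U_op(L_op(spike_down, 1), 1)    # UL removes the downward spike
```

Applying L_1 a second time leaves the output unchanged, a small demonstration of the idempotency property compared in the thesis.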
Aspects of model development using regression quantiles and elemental regressions. Ranganai, Edmore.
Dissertation (PhD)--University of Stellenbosch, 2007.
ENGLISH ABSTRACT: It is well known that ordinary least squares (OLS) procedures are sensitive to deviations from the classical Gaussian assumptions (outliers) as well as to data aberrations in the design space. The two major data aberrations in the design space are collinearity and high leverage. Leverage points can also induce or hide collinearity in the design space; such leverage points are referred to as collinearity-influential points. As a consequence, over the years many diagnostic tools to detect these anomalies, as well as alternative procedures to counter them, were developed. To counter deviations from the classical Gaussian assumptions, many robust procedures have been proposed. One such class of procedures is the Koenker and Bassett (1978) Regression Quantiles (RQs), which are natural extensions of order statistics to the linear model. RQs can be found as solutions to linear programming problems (LPs). The basic optimal solutions to these LPs (which are RQs) correspond to elemental subset (ES) regressions, which consist of subsets of the minimum size needed to estimate the parameters of the model.
On the one hand, some ESs correspond to RQs. On the other hand, the literature shows that many OLS statistics (estimators) are related to ES regression statistics (estimators). There is therefore an inherent relationship among the three sets of procedures. The relationship between the ES procedure and the RQ one has been noted almost "casually" in the literature, while the latter has been fairly widely explored. Using these existing relationships between the ES procedure and the OLS one, as well as new ones, collinearity, leverage and outlier problems in the RQ scenario were investigated. A lasso procedure was also proposed as a variable selection technique in the RQ scenario, and some tentative results were given for it. These results are promising.
Single-case diagnostics were considered, as well as their relationships to multiple-case ones. In particular, multiple cases of the minimum size needed to estimate the parameters of the model were considered, corresponding to an RQ (ES). In this way regression diagnostics were developed for both ESs and RQs. The main problems that affect RQs adversely are collinearity and leverage, due to the nature of the computational procedures and the fact that RQs' influence functions are unbounded in the design space but bounded in the response variable. As a consequence, RQs have a high affinity for leverage points and a high exclusion rate of outliers. The influential picture exhibited in the presence of both leverage points and outliers is the net result of these two antagonistic forces. Although RQs are bounded in the response variable (and therefore fairly robust to outliers), outlier diagnostics were also considered in order to obtain a more holistic picture.
The investigations used analytic means as well as simulation. Furthermore, applications were made to artificial computer-generated data sets as well as to standard data sets from the literature. These revealed that the ES-based statistics can be used to address problems arising in the RQ scenario with some degree of success. However, due to the interdependence between the different aspects, viz. that between leverage and collinearity and that between leverage and outliers, "solutions" are often dependent on the particular situation. In spite of this complexity, the research did produce some fairly general guidelines
that can be fruitfully used in practice.
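The correspondence between regression quantiles and elemental subset regressions can be illustrated for the simple line: the τ = 0.5 RQ (the L1 line) is attained at one of the two-point elemental regressions, so an exhaustive search over ESs recovers it. The toy data below are assumed for illustration:

```python
from itertools import combinations

def elemental_l1_line(points):
    # Fit a line through every elemental subset (pair of points) and keep
    # the one minimising the sum of absolute residuals; the tau = 0.5
    # regression quantile (L1 line) is attained at such an elemental fit.
    best = None
    for (x1, y1), (x2, y2) in combinations(points, 2):
        if x1 == x2:
            continue                      # vertical elemental line: skip
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        loss = sum(abs(y - (intercept + slope * x)) for x, y in points)
        if best is None or loss < best[0]:
            best = (loss, intercept, slope)
    return best[1], best[2]

# Four collinear points plus one gross outlier: the L1 line ignores the outlier
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0), (4.0, 10.0)]
intercept, slope = elemental_l1_line(pts)
```

The fitted line passes through the four collinear points and leaves the outlier a large residual, illustrating the robustness of RQs in the response variable noted in the abstract.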
Finding the optimal dynamic anisotropy resolution for grade estimation improvement at Driefontein Gold Mine, South Africa. Mandava, Senzeni Maggie, January 2016.
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in partial fulfilment of the requirements for the degree of Master of Science in Mining Engineering.
February, 2016.
Mineral Resource estimation provides an assessment of the quantity, quality, shape and grade distribution of a mineralised deposit. The resource estimation process involves the assessment of the available data, creation of geological and/or grade models for the deposit, statistical and geostatistical analyses of the data, as well as determination of the appropriate grade interpolation methods. In the grade estimation process, grades are interpolated/extrapolated into a two- or three-dimensional resource block model of a deposit. The process uses a search volume ellipsoid, centred on each block, to select the samples used for estimation. Traditionally, a globally orientated search ellipsoid is used during the estimation process. An improvement in the estimation process can be achieved if the direction and continuity of mineralisation are acknowledged by aligning the search ellipsoid accordingly; misalignment of the search ellipsoid by just a few degrees can affect the estimation results. Representing grade continuity in undulating and folded structures is a challenge for correct grade estimation. One solution to this problem is to apply the method of Dynamic Anisotropy in the estimation process. This method allows the anisotropy rotation angles defining the search ellipsoid and variogram model to follow directly the trend of the mineralisation for each cell within a block model.
This research report describes the application of Dynamic Anisotropy to a slightly undulating area which lies on a gently folded limb of a syncline at Driefontein gold mine, where Ordinary Kriging is used as the method of estimation. In addition, the optimal Dynamic Anisotropy resolution that will provide an improvement in grade estimates is determined by executing the estimation process on various block model grid sizes. The geostatistical literature research carried out for this report highlights the importance of Dynamic Anisotropy in resource estimation. Through application and analysis on a real-life dataset, this report puts theories and opinions about Dynamic Anisotropy to the test.
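The effect of rotating the search ellipsoid per block, which is the essence of Dynamic Anisotropy, can be sketched with a two-dimensional anisotropic distance. The ranges and angles below are assumed for illustration only:

```python
import math

def anisotropic_distance(dx, dy, angle_deg, major, minor):
    # Rotate the separation vector (dx, dy) into the local frame of the
    # search ellipsoid, then scale by its major/minor ranges; the sample
    # lies inside the ellipsoid exactly when the result is <= 1.
    a = math.radians(angle_deg)
    u = dx * math.cos(a) + dy * math.sin(a)     # along the local strike
    v = -dx * math.sin(a) + dy * math.cos(a)    # across the local strike
    return math.hypot(u / major, v / minor)

# A sample 80 m away along a locally rotated (30 degree) direction of
# mineralisation, tested against a 100 m x 20 m search ellipsoid
dx = 80.0 * math.cos(math.radians(30.0))
dy = 80.0 * math.sin(math.radians(30.0))
d_dynamic = anisotropic_distance(dx, dy, 30.0, 100.0, 20.0)  # locally aligned
d_global = anisotropic_distance(dx, dy, 0.0, 100.0, 20.0)    # fixed global axis
```

The locally aligned ellipsoid accepts the sample (distance 0.8) while the globally orientated ellipsoid rejects it, which is exactly the misalignment effect the report investigates.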
Application of indicator kriging and conditional simulation in assessment of grade uncertainty in Hunters Road magmatic sulphide nickel deposit in Zimbabwe. Chiwundura, Phillip, January 2017.
A research project report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science in Engineering, 2017.
The assessment of local and spatial uncertainty associated with a regionalised variable such as nickel grade at the Hunters Road magmatic sulphide deposit is one of the critical elements of resource estimation. The study focused on the application of Multiple Indicator Kriging (MIK) and Sequential Gaussian Simulation (SGS) in the estimation of recoverable resources and the assessment of grade uncertainty at Hunters Road's Western orebody. The Western orebody was divided into two domains, namely the Eastern and the Western domains, and was evaluated on the basis of 172 drill holes. MIK and SGS were performed using the Datamine Studio RM module. The combined Mineral Resources estimate for the Western orebody at a cut-off grade of 0.40%Ni is 32.30Mt at an average grade of 0.57%Ni, equivalent to 183kt of contained nickel metal. SGS results indicated low uncertainty associated with the Hunters Road nickel project, with a 90% probability that the average true grade above cut-off lies within +/-3% of the estimated block grade. The estimate of the mean based on SGS was 0.55%Ni and 0.57%Ni for the Western and Eastern domains respectively. MIK results were highly comparable with SGS E-type estimates, while the most recent Ordinary Kriging (OK) based estimates by BNC, dated May 2006, overstated the resource tonnage and underestimated the grade compared with the MIK estimates. It was concluded that MIK produced better estimates of recoverable resources than OK. However, since only E-type estimates were produced by MIK, post-processing of "composite" conditional cumulative distribution function (ccdf) results using a relevant change-of-support algorithm, such as the affine correction, is recommended. Although SGS produced a good measure of uncertainty around nickel grades, post-processing of the realisations using different software such as Isatis is recommended, together with combined simulation of both grade and tonnage.
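The affine correction mentioned above as a change-of-support algorithm rescales point-support grades about their mean so that the variance is reduced by a block-to-point variance ratio f², while the mean is preserved. The grades and the ratio below are illustrative values, not taken from the project:

```python
import math

def variance(xs):
    # Population variance
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def affine_correction(grades, f):
    # Affine change of support: shrink the point-support grades about their
    # mean so that the variance is reduced by f**2 while the mean is kept.
    m = sum(grades) / len(grades)
    return [m + f * (g - m) for g in grades]

# Illustrative point-support %Ni grades and an assumed block/point
# variance ratio of 0.6 (neither is taken from the project data)
point_grades = [0.15, 0.25, 0.30, 0.40, 0.45, 0.55, 0.60, 0.75, 0.95, 1.20]
f = math.sqrt(0.6)
block_grades = affine_correction(point_grades, f)
```

Because block-support grades are less variable than point-support grades, the corrected distribution changes the tonnage and grade reported above a cut-off such as 0.40%Ni, which is why the correction matters when post-processing MIK ccdf results.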