81.
On the use of an auxiliary variable in the transformation of discrete data
Taylor, Robert James January 1955 (has links)
M.S.
82.
Comparison of Bayes' and minimum variance unbiased estimators of reliability in the extreme value life testing model
Godbold, James Homer January 1970 (has links)
The purpose of this study is to consider two different types of estimators for reliability using the extreme value distribution as the life-testing model. First the unbiased minimum variance estimator is derived. Then the Bayes' estimators for the uniform, exponential, and inverted gamma prior distributions are obtained, and these results are extended to a whole class of exponential failure models. Each of the Bayes' estimators is compared with the unbiased minimum variance estimator in a Monte Carlo simulation where it is shown that the Bayes' estimator has smaller squared error loss in each case.
The problem of obtaining estimators with respect to an exponential type loss function is also considered. The difficulties in such an approach are demonstrated. / Master of Science
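The Monte Carlo comparison described in this abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual study: it uses the plain exponential life model in place of the extreme value model, a Gamma(1, 1) prior on the failure rate, and arbitrary parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

lam, t, n, reps = 0.5, 1.0, 10, 20_000   # true rate, mission time, sample size
R_true = np.exp(-lam * t)                # true reliability exp(-lam*t)

def umvue(x, t):
    # Classical UMVU estimator of exp(-lam*t) for exponential lifetimes,
    # a function of the sufficient statistic S = sum(x)
    s = x.sum()
    return (1 - t / s) ** (len(x) - 1) if s > t else 0.0

def bayes(x, t, a=1.0, b=1.0):
    # Posterior mean of exp(-lam*t) under a Gamma(a, b) prior on lam:
    # posterior is Gamma(a+n, b+S), whose Laplace transform gives this form
    s = x.sum()
    return ((b + s) / (b + s + t)) ** (a + len(x))

est_u = np.empty(reps)
est_b = np.empty(reps)
for i in range(reps):
    x = rng.exponential(1 / lam, size=n)
    est_u[i] = umvue(x, t)
    est_b[i] = bayes(x, t)

mse_u = np.mean((est_u - R_true) ** 2)
mse_b = np.mean((est_b - R_true) ** 2)
print(f"UMVUE MSE: {mse_u:.5f}  Bayes MSE: {mse_b:.5f}")
```

Under squared error loss the shrinkage of the posterior mean typically buys a smaller mean squared error than the unbiased estimator, which is the pattern the thesis reports.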
83.
Lower bounds for the variance of uniformly minimum variance unbiased estimators
Lemon, Glen Hortin January 1965 (has links)
The object of this paper was to study lower bounds for the variance of uniformly minimum variance unbiased estimators.
The lower bounds of Cramer and Rao, Bhattacharyya, Hammersley, Chapman and Robbins, and Kiefer were derived and discussed. Each was compared with the other, showing their relative merits and shortcomings.
Of the lower bounds considered, all are greater than or equal to the Cramer-Rao lower bound. The Kiefer lower bound is as good as any of the others, or better.
We were able to show that the Cramer-Rao lower bound is exactly the first Bhattacharyya lower bound. The Hammersley and the Chapman and Robbins lower bounds are identical when they both have the same parameter space, i.e., when Ω = (a,b).
The use of the various lower bounds is illustrated in examples throughout the paper. / M.S.
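As a simple illustration of the Cramer-Rao bound discussed above, the sketch below (an invented example, not taken from the paper) checks numerically that the sample mean of Poisson data attains the bound:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 3.0, 25, 40_000

# For X ~ Poisson(lam), the Fisher information per observation is 1/lam,
# so the Cramer-Rao lower bound for unbiased estimators of lam is lam/n.
cr_bound = lam / n

# The sample mean is unbiased; its simulated variance should sit at the bound.
means = rng.poisson(lam, size=(reps, n)).mean(axis=1)
print(f"CR bound: {cr_bound:.4f}  empirical var of sample mean: {means.var():.4f}")
```

The sample mean is an efficient estimator here, so the two numbers agree up to Monte Carlo noise; for many other models the bound is strict, which is what motivates the sharper bounds compared in the paper.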
84.
Graphical assessment of the prediction capability of response surface designs
Giovannitti-Jensen, Ann January 1987 (has links)
A response surface analysis is concerned with the exploration of a system in order to determine the behavior of the response of the system as levels of certain factors which influence the response are changed. It is often of particular interest to predict the response in some region of the allowable factor values and to find the optimal operating conditions of the system.
In an experiment to search for the optimum response of a surface it is advantageous to predict the response with equal, or nearly equal, precision at all combinations of the levels of the variables which represent locations which are the same distance from the center of the experimental region. Measures of the quality of prediction at locations on the surface of a hypersphere are presented in this thesis. These measures are used to form a graphical method of assessing the overall prediction capability of an experimental design throughout the region of interest.
Rotatable designs give equal variances of predicted values corresponding to locations on the same sphere. In this case, the center of the sphere coincides with the center of the rotatable design. However, there is a need for a method to quantify the prediction capability on spheres for non-rotatable designs. The spherical variance is a measure of the average prediction variance at locations on the surface of a sphere. The spherical variance obtained with a design provides an assessment of how well the response is being estimated on the average at locations which are the same distance from the region center. This thesis introduces two measures which describe the dispersion in the variances of the predicted responses at all locations on the surface of a sphere. These prediction variance dispersion (PVD) measures are used to evaluate the ability of a design to estimate the response with consistent precision at locations which are the same distance from the region center. The PVD measures are used in conjunction with the spherical variance to assess the prediction capability of a design.
A plot of the spherical variance and the maximum and minimum prediction variances for locations on a sphere against the radius of the sphere gives a comprehensive picture of the behavior of the prediction variances throughout a region, and, hence, of the quality of the predicted responses, obtained with a particular design. Such plots are used to investigate and compare the prediction capabilities of certain response surface designs currently available to the researcher. The plots are also used to investigate the robustness of a design under adverse experimental conditions and to determine the effects of taking an additional experimental run on the quality of the predicted responses. / Ph. D.
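The kind of plot described above can be sketched numerically. The example below is a hypothetical illustration (not from the thesis): it computes the minimum, average, and maximum scaled prediction variance N x'(X'X)⁻¹x on circles of increasing radius for a 2² factorial with a centre run, fitted with a first-order-plus-interaction model.

```python
import numpy as np

# 2^2 factorial with a centre run; model f(x) = (1, x1, x2, x1*x2).
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], float)

def model(x):
    # Expand design points into model terms
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2], axis=-1)

X = model(pts)
XtX_inv = np.linalg.inv(X.T @ X)
N = len(pts)

# Evaluate the scaled prediction variance around circles of radius r
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
for r in (0.5, 1.0, 1.5):
    circ = r * np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    F = model(circ)
    v = N * np.einsum('ij,jk,ik->i', F, XtX_inv, F)
    print(f"r={r}: min={v.min():.3f} mean={v.mean():.3f} max={v.max():.3f}")
```

Because the interaction term makes this design non-rotatable for the assumed model, the minimum and maximum variances on a circle separate as the radius grows; plotting the three curves against r gives exactly the kind of prediction variance dispersion picture the thesis proposes.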
85.
Design and analysis for a two level factorial experiment in the presence of dispersion effects
Mays, Darcy P. 10 October 2005 (has links)
Standard response surface methodology experimental designs for estimating location models involve the assumption of homogeneous variance throughout the design region. However, with heterogeneity of variance these standard designs are not optimal.
Using the D and Q-optimality criteria, this dissertation proposes a two-stage experimental design procedure that gives more efficient designs than the standard designs when heterogeneous variance exists. Several multiple variable location models, with and without interactions, are considered. For each the first stage estimates the heterogeneous variance structure, while the second stage then augments the first stage to produce a D or Q-optimal design for fitting the location model under the estimated variance structure. However, there is a potential instability of the variance estimates in the first stage that can lower the efficiency of the two-stage procedure. This problem can be addressed and the efficiency of the procedure enhanced if certain mild assumptions concerning the variance structure are made and formulated as a prior distribution to produce a Bayes estimator.
With homogeneous variance, designs are analyzed using ordinary least squares. However, with heterogeneous variance the correct analysis is to use weighted least squares. This dissertation also examines the effects that analysis by weighted least squares can have and compares this procedure to the proposed two-stage procedure. / Ph. D.
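The efficiency gain from weighted least squares under heterogeneous variance can be illustrated with a small simulation. The design, variance structure, and parameter values below are invented for illustration and are not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simple one-factor design with four runs at each of -1, 0, +1;
# the error standard deviation is larger on one side of the region.
x = np.array([-1., -1., 0., 0., 1., 1.] * 2)
sd = np.where(x > 0, 3.0, 1.0)            # heterogeneous error std dev
X = np.column_stack([np.ones_like(x), x])
W = np.diag(1 / sd**2)                    # true inverse-variance weights

b_ols, b_wls = [], []
for _ in range(20_000):
    y = 2 + 1.5 * x + rng.normal(0, sd)
    b_ols.append(np.linalg.lstsq(X, y, rcond=None)[0][1])     # OLS slope
    b_wls.append(np.linalg.solve(X.T @ W @ X, X.T @ W @ y)[1])  # WLS slope

print(f"slope var  OLS: {np.var(b_ols):.4f}  WLS: {np.var(b_wls):.4f}")
```

Both estimators are unbiased, but with the correct weights the WLS slope has visibly smaller variance, which is why the correct analysis under heterogeneity is weighted least squares.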
86.
A graphical approach for evaluating the potential impact of bias due to model misspecification in response surface designs
Vining, G. Geoffrey January 1988 (has links)
The basic purpose of response surface analysis is to generate a relatively simple model to serve as an adequate approximation for a more complex phenomenon. This model then may be used for other purposes, for example prediction or optimization. Since the proposed model is only an approximation, the analyst almost always faces the potential of bias due to model misspecification. The ultimate impact of this bias depends upon the choice both of the experimental design and of the region for conducting the experiment.
This dissertation proposes a graphical approach for evaluating the impact of bias upon response surface designs. Essentially, it extends the work of Giovannitti-Jensen (1987) and Giovannitti-Jensen and Myers (1988) who have developed a graphical technique for displaying a design's prediction variance capabilities. This dissertation extends this concept: (1) to the prediction bias due to model misspecification; (2) the prediction bias due to the presence of a single outlier; and (3) to a mean squared error of prediction. Several common first and second-order response surface designs are evaluated through this approach. / Ph. D.
87.
Survival analysis
Wardak, Mohammad Alif 01 January 2005 (has links)
Survival analysis is a statistical approach designed to take into account the amount of time an experimental unit contributes to a study. A Mayo Clinic study of 418 Primary Biliary Cirrhosis patients over a ten-year period was used. The Kaplan-Meier estimator, a non-parametric statistic, and the Cox proportional hazards method were the tools applied. The Kaplan-Meier results report the numbers of observed and censored values.
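The product-limit calculation behind the Kaplan-Meier estimator can be sketched in a few lines. The data below are made up for illustration; the Mayo PBC dataset itself is not reproduced here:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival curve.
    times: observed follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (time, survival probability) at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        while i < len(data) and data[i][0] == t:  # gather ties at time t
            deaths += data[i][1]
            at_t += 1
            i += 1
        if deaths:
            s *= 1 - deaths / n_at_risk           # product-limit update
            curve.append((t, s))
        n_at_risk -= at_t                          # drop events and censorings
    return curve

times  = [1, 2, 2, 3, 4, 5, 5, 8]
events = [1, 1, 0, 1, 0, 1, 1, 0]   # 0 marks a censored follow-up time
print(kaplan_meier(times, events))
```

Censored subjects remain in the risk set until their censoring time, which is exactly how the estimator "takes into account the amount of time an experimental unit contributes to a study."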
88.
A comparison of a distributed control system’s graphical interfaces: a DoE approach to evaluate efficiency in automated process plants
Maanja, Karen January 2024 (has links)
Distributed control systems play a central role for critical processes within a plant that need to be monitored or controlled. They ensure high production availability and output while simultaneously ensuring the safety of personnel and the environment. However, 5% of global annual production is lost due to unscheduled downtime; 80% of unscheduled shutdowns could have been prevented, and 40% of these are caused by human error. This study was conducted at ABB's Process Automation team in Umeå. The aim is to examine whether different human-machine interfaces affect operators' effectiveness in resolving errors and maintaining a high production level. DoE is the chosen approach for this study, which includes planning and conducting an experiment with two dependent variables, Effect and Time. The independent variables examined are Scenario, Graphic, and Operator, used as factors in a factorial design with two levels each. The experiments showed that the design of the human-machine interface has no impact on either response, i.e. it has no statistically significant effect on production in terms of operator effectiveness or production efficiency. Instead, the operators' level of experience appears to be the main contributor to variance in production in the models used.
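The factorial effect estimates described above can be sketched as follows. The response values are simulated under an assumed model in which only Operator shifts the mean (mirroring the study's conclusion); the numbers are illustrative, not the study's data:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

# Coded -1/+1 levels for Scenario, Graphic, Operator in a replicated
# full 2^3 factorial (16 runs). Hypothetical response: time to resolve
# an error, shifted only by Operator (experience level).
runs = np.array(list(product([-1, 1], repeat=3)) * 2, float)
scenario, graphic, operator = runs.T
time_min = 30 + 5 * operator + rng.normal(0, 1, len(runs))

def effect(col, y):
    # Main effect = mean response at high level minus mean at low level
    return y[col == 1].mean() - y[col == -1].mean()

for name, col in [("Scenario", scenario), ("Graphic", graphic),
                  ("Operator", operator)]:
    print(f"{name:8s} effect: {effect(col, time_min):+.2f}")
```

In a balanced two-level factorial each main effect is just a difference of averages, so a large Operator effect alongside a near-zero Graphic effect is the coded analogue of the study's finding that operator experience, not interface design, drives the variance.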
89.
Empirical Bayes estimation of the extreme value index in an ANOVA setting
Jordaan, Aletta Gertruida 04 1900 (has links)
Thesis (MComm)-- Stellenbosch University, 2014. / ENGLISH ABSTRACT: Extreme value theory (EVT) involves the development of statistical models and techniques in order to describe and model extreme events. In order to make inferences about extreme quantiles, it is necessary to estimate the extreme value index (EVI). Numerous estimators of the EVI exist in the literature. However, these estimators are only applicable in the single sample setting. The aim of this study is to obtain an improved estimator of the EVI that is applicable to an ANOVA setting.
An ANOVA setting lends itself naturally to empirical Bayes (EB) estimators, which are the main estimators under consideration in this study. EB estimators have not received much attention in the literature.
The study begins with a literature study, covering the areas of application of EVT, Bayesian theory and EB theory. Different estimation methods of the EVI are discussed, focusing also on possible methods of determining the optimal threshold. Specifically, two adaptive methods of threshold selection are considered.
A simulation study is carried out to compare the performance of different estimation methods, applied only in the single sample setting. First order and second order estimation methods are considered. In the case of second order estimation, possible methods of estimating the second order parameter are also explored.
With regards to obtaining an estimator that is applicable to an ANOVA setting, a first order EB estimator and a second order EB estimator of the EVI are derived. A case study of five insurance claims portfolios is used to examine whether the two EB estimators improve the accuracy of estimating the EVI, when compared to viewing the portfolios in isolation.
The results showed that the first order EB estimator performed better than the Hill estimator. However, the second order EB estimator did not perform better than the “benchmark” second order estimator, namely fitting the perturbed Pareto distribution to all observations above a pre-determined threshold by means of maximum likelihood estimation.
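The Hill estimator used as the single-sample baseline above can be sketched as follows; the Pareto sample and the choice k = 200 are illustrative assumptions, not the thesis's insurance portfolios:

```python
import numpy as np

rng = np.random.default_rng(5)

def hill(x, k):
    """Hill estimator of the extreme value index (EVI) from the k
    largest observations: mean log-excess over the (k+1)-th largest."""
    xs = np.sort(x)[::-1]                      # descending order statistics
    return np.mean(np.log(xs[:k])) - np.log(xs[k])

# A Pareto(alpha) tail has EVI gamma = 1/alpha.
alpha, n = 2.0, 5000
x = rng.pareto(alpha, n) + 1.0                 # survival function x**(-alpha)
print(f"Hill estimate (k=200): {hill(x, 200):.3f}  true gamma: {1 / alpha}")
```

The threshold choice (here via k, the number of top order statistics) is exactly the tuning problem the study's adaptive threshold-selection methods address; in an ANOVA setting the EB estimators then pool such tail information across groups.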
90.
Disfluency as ... er ... delay: an investigation into the immediate and lasting consequences of disfluency and temporal delay using EEG and mixed-effects modelling
Bouwsema, Jennifer A. E. January 2014 (has links)
Difficulties in speech production are often marked by disfluency: fillers, hesitations, prolongations, repetitions and repairs. In recent years a body of work has emerged demonstrating that listeners are sensitive to disfluency, and that this affects their expectations for upcoming speech as well as their attention to the speech stream. This thesis investigates the extent to which delay may be responsible for triggering these effects. The experiments reported in this thesis build on an event-related potential (ERP) paradigm developed by Corley et al. (2007), in which participants listened to sentences manipulated for both fluency and predictability. Corley et al. reported an attenuated N400 effect for words following disfluent ers, and interpreted this as indicating that the extent to which listeners made predictions was reduced following an er. In the current set of experiments, various noisy interruptions were added to Corley et al.'s paradigm, time-matched to the disfluent fillers. These manipulations allowed investigation of whether the same effects could be triggered by delay alone, in the absence of a cue indicating that the speaker was experiencing difficulty. The first experiment, which contrasted disfluent ers with artificial beeps, revealed a small but significant reduction in N400 effect amplitude for words affected by ers but not by beeps. The second experiment, in which ers were contrasted with speaker-generated coughs, revealed no fluency effects on the N400 effect. A third experiment combined the designs of Experiments 1 and 2 to verify whether the difference between them could be characterised as a context effect; one potential explanation for the difference between the outcomes of Experiments 1 and 2 is that the interpretation of an er is affected by the surrounding stimuli. However, in Experiment 3, once again no effect of fluency on the magnitude of the N400 effect was found.
Taken together, the results of these three studies lead to the question of whether er's attenuation effect on the N400 is robust. In a second part to each study, listeners took part in a surprise recognition memory test, comprising words which had been the critical words in the previous task intermixed with new words which had not appeared anywhere in the sentences previously heard. Participants were significantly more successful at recognising words which had been unpredictable in their contexts, and, importantly, for Experiments 1 and 2, were significantly more successful at recognising words which had featured in disfluent or interrupted sentences. There was no difference between the recognition rates of words which had been disfluent and those which were affected by a noisy interruption. Collard et al. (2008) demonstrated that disfluency could raise attention to the speech stream, and the finding that interrupted words are equally well remembered leads to the suggestion that any noisy interruption can raise attention. Overall, the finding of memory benefits in response to disfluency, in the absence of attenuated N400 effects, leads to the suggestion that different elements of disfluencies may be responsible for triggering these effects. The studies in this thesis also extend previous work by being designed to yield enough trials in the memory test portion of each experiment to permit ERP analysis of the memory data. Whilst clear ERP memory effects remained elusive, important progress was made in that memory ERPs were generated from a disfluency paradigm, and this provided a testing ground on which to demonstrate the use of linear mixed-effects modelling as an alternative to ANOVA analysis for ERPs. Mixed-effects models allow the analysis of unbalanced datasets, such as those generated in many memory experiments.
Additionally, we demonstrate the ability to include crossed random effects for subjects and items, and when this is applied to the ERPs from the listening section of Experiment 1, the effect of fluency on N400 amplitude is no longer significant. Taken together, the results from the studies reported in this thesis suggest that temporal delay or disruption in speech can trigger raised attention, but do not necessarily trigger changes in listeners' expectations.