431
Importance of various data sources in deterministic stock assessment models
Northrop, Amanda Rosalind (January 2008)
In fisheries, advice for the management of fish populations is based upon management quantities that are estimated by stock assessment models. Fisheries stock assessment is a process in which data collected from a fish population are used to generate a model which enables the effects of fishing on a stock to be quantified. This study determined the effects of various data sources, assumptions, error scenarios and sample sizes on the accuracy with which the age-structured production model and the Schaefer model (assessment models) were able to estimate key management quantities for a fish resource similar to the Cape hakes (Merluccius capensis and M. paradoxus). An age-structured production model was used as the operating model to simulate hypothetical fish resource population dynamics for which management quantities could be determined by the assessment models. Different stocks were simulated with various harvest rate histories. These harvest rates produced Downhill trip data, where harvest rates increase over time until the resource is close to collapse, and Good contrast data, where the harvest rate increases over time until the resource is at less than half of its exploitable biomass and then decreases, allowing the resource to rebuild. The accuracy of the assessment models was determined when data were drawn from the operating model with various combinations of error. The age-structured production model was more accurate at estimating maximum sustainable yield, the maximum sustainable yield level and the maximum sustainable yield ratio. The Schaefer model gave more accurate estimates of depletion and the Total Allowable Catch. While the assessment models were able to estimate management quantities using Downhill trip data, the estimates improved significantly when the models were tuned with Good contrast data. When autocorrelation in the spawner-recruit curve was not accounted for by the deterministic assessment model, inaccuracy in parameter estimates was high. The assessment model management quantities were not greatly affected by multinomial ageing error in the catch-at-age matrices at a sample size of 5000 otoliths. Assessment model estimates were closer to their true values when log-normal error was assumed in the catch-at-age matrix, even when the true underlying error was multinomial. However, the multinomial likelihood had smaller coefficients of variation at all sample sizes of otoliths aged, between 1000 and 10000. It was recommended that the assessment model be chosen based on the management quantity of interest. When the underlying error is multinomial, the weighted log-normal likelihood function should be used in the catch-at-age matrix to obtain accurate parameter estimates. However, the multinomial likelihood should be used to minimise the coefficient of variation. Investigation into correcting for autocorrelation in the stock-recruitment relationship should be carried out, as it had a large effect on the accuracy of management quantities.
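The Schaefer model named above has a standard discrete-time surplus production form that is compact enough to sketch. The snippet below is a minimal illustration of that textbook form and its analytical reference points (MSY = rK/4 at B = K/2); it is not the operating model or estimation code used in the thesis, and the parameter values and catch history are purely illustrative.

```python
import numpy as np

def schaefer_projection(r, K, B0, catches):
    """Project biomass under the discrete-time Schaefer surplus production model:
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]."""
    B = [B0]
    for C in catches:
        B_next = B[-1] + r * B[-1] * (1.0 - B[-1] / K) - C
        B.append(max(B_next, 0.0))          # biomass cannot go negative
    return np.array(B)

# Illustrative parameters, not estimates for Cape hakes
r, K, B0 = 0.4, 1000.0, 1000.0
catches = np.full(30, 80.0)                 # constant annual catch

B = schaefer_projection(r, K, B0, catches)
msy, B_msy = r * K / 4.0, K / 2.0           # analytical reference points
print(f"MSY = {msy:.1f}, B_MSY = {B_msy:.1f}, depletion after 30 yr = {B[-1]/K:.2f}")
```

The Downhill trip and Good contrast histories described in the abstract correspond to different catch trajectories fed into a projection of this kind.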
432
Bayesian multi-species modelling of non-negative continuous ecological data with a discrete mass at zero
Swallow, Ben (January 2015)
Severe declines in the numbers of some songbirds over the last 40 years have caused heated debate amongst interested parties. Many factors have been suggested as possible causes for these declines, including an increase in the abundance and distribution of an avian predator, the Eurasian sparrowhawk Accipiter nisus. To test for evidence of a predator effect on the abundance of its prey, we analyse data on 10 species visiting garden bird feeding stations monitored by the British Trust for Ornithology in relation to the abundance of sparrowhawks. We apply Bayesian hierarchical models to data relating to averaged maximum weekly counts from a garden bird monitoring survey. These data are essentially continuous and bounded below by zero, but for many species they show a marked spike at zero that many standard distributions would not be able to account for. We use the Tweedie distributions, which for certain areas of parameter space correspond to continuous non-negative distributions with a discrete probability mass at zero, and are hence able to deal with the shape of the empirical distributions of the data. The methods developed in this thesis begin by modelling single prey species independently with an avian predator as a covariate, using MCMC methods to explore parameter and model spaces. This model is then extended to a multiple-prey-species model, testing for interactions between species as well as synchrony in their response to environmental factors and unobserved variation. Finally, we use a relatively new methodology, the stochastic partial differential equation (SPDE) approach within the INLA framework, to fit a multi-species spatio-temporal model to the ecological data. The results from the analyses are consistent with the hypothesis that sparrowhawks are suppressing the numbers of some species of birds visiting garden feeding stations. Only the species most susceptible to sparrowhawk predation seem to be affected.
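For the compound Poisson–gamma members of the Tweedie family (power parameter 1 < p < 2) referred to above, the probability mass at zero has a simple closed form, P(Y = 0) = exp(−μ^(2−p) / (φ(2−p))). The sketch below evaluates it and checks it by simulation; the mean, dispersion and power values are illustrative only, not estimates from the garden-bird data.

```python
import numpy as np

def tweedie_zero_mass(mu, phi, p):
    """P(Y = 0) for a Tweedie(mu, phi, p) variable with 1 < p < 2, i.e. the
    Poisson probability of zero jumps in the compound Poisson-gamma form."""
    lam = mu ** (2.0 - p) / (phi * (2.0 - p))   # Poisson rate of jumps
    return np.exp(-lam)

def simulate_tweedie(mu, phi, p, size, rng):
    """Draw from the compound Poisson-gamma representation of the Tweedie."""
    lam = mu ** (2.0 - p) / (phi * (2.0 - p))       # number of jumps ~ Poisson(lam)
    shape = (2.0 - p) / (p - 1.0)                   # gamma shape of each jump
    scale = phi * (p - 1.0) * mu ** (p - 1.0)       # gamma scale of each jump
    n = rng.poisson(lam, size)
    return np.array([rng.gamma(shape * k, scale) if k > 0 else 0.0 for k in n])

rng = np.random.default_rng(1)
mu, phi, p = 3.0, 2.0, 1.5                          # illustrative values
y = simulate_tweedie(mu, phi, p, 100_000, rng)
print(f"closed form P(Y=0) = {tweedie_zero_mass(mu, phi, p):.4f}, "
      f"empirical = {np.mean(y == 0):.4f}")
```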
433
Development of confidence intervals for process capability assessment in short run manufacturing environment using bootstrap methodology
Knezevic, Zec Gorana (01 October 2003)
No description available.
434
Statistical Methodologies for Decision-Making and Uncertainty Reduction in Machine Learning
Zhang, Haofeng (January 2024)
Stochasticity arising from data and training can cause statistical errors in prediction and optimization models and lead to inferior decision-making. Understanding the risk associated with the models and converting predictions into better decisions have become increasingly prominent.
This thesis studies the interaction of two fundamental topics, data-driven decision-making and machine-learning-based uncertainty reduction, where it develops statistically principled methodologies and provides theoretical insights.
Chapter 2 studies data-driven stochastic optimization where model parameters of the underlying distribution need to be estimated from data in addition to the optimization task. Several mainstream approaches have been developed to solve data-driven stochastic optimization, but direct statistical comparisons among different approaches have not been well investigated in the literature. We develop a new regret-based framework based on stochastic dominance to rigorously study and compare their statistical performance.
Chapter 3 studies uncertainty quantification and reduction techniques for neural network models. Uncertainties of neural networks arise not only from data, but also from the training procedure, which often injects substantial noise and bias. These hinder the attainment of statistical guarantees and, moreover, impose computational challenges due to the need for repeated network retraining. Building upon the recent neural tangent kernel theory, we create statistically guaranteed schemes to characterize and remove, in a principled way, the uncertainty of over-parameterized neural networks with very low computational effort.
Chapter 4 studies reducing uncertainty in stochastic simulation, where standard Monte Carlo computation is widely known to exhibit a canonical square-root convergence speed in terms of sample size. Two recent techniques derived from an integration of reproducing kernels and Stein's identity have been proposed to push the error of Monte Carlo computation to supercanonical convergence rates. We present a more general framework that encompasses both techniques and is especially beneficial when the sample generator is biased and noise-corrupted. We show that our general estimator, the doubly robust Stein-kernelized estimator, outperforms both existing methods in terms of mean squared error rates across different scenarios.
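To make the Stein's-identity mechanism behind such variance-reduction techniques concrete, the toy sketch below uses the identity E[f′(X)] = E[X f(X)] for X ~ N(0, 1) to build a zero-mean control variate for a plain Monte Carlo average. It is only a one-dimensional illustration of the underlying idea, not the doubly robust Stein-kernelized estimator developed in the chapter; the target function and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.standard_normal(n)

# Target: E[g(X)] with g(x) = exp(x/2); exact value is exp(1/8).
g = np.exp(x / 2.0)

# Stein control variate: h(x) = x*f(x) - f'(x) has mean zero for any smooth f
# when X ~ N(0,1). Here f(x) = exp(x/2), so f'(x) = exp(x/2)/2.
h = x * np.exp(x / 2.0) - 0.5 * np.exp(x / 2.0)

# Control-variate coefficient estimated from the same sample.
C = np.cov(g, h)
beta = C[0, 1] / C[1, 1]

plain = g.mean()
stein_cv = (g - beta * h).mean()
exact = np.exp(0.125)
print(f"exact {exact:.5f}  plain MC {plain:.5f}  Stein CV {stein_cv:.5f}")
```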
Chapter 5 studies bandit problems, which are important sequential decision-making problems that aim to find optimal adaptive strategies to maximize cumulative reward. Bayesian bandit algorithms with approximate Bayesian inference are widely used to solve bandit problems in practice, but their theoretical justification has received less attention, partly because of the additional approximate-inference error. We propose a general theoretical framework to analyze Bayesian bandits in the presence of approximate inference and establish the first regret bound for Bayesian bandit algorithms with bounded approximate inference errors.
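As a point of reference for the Bayesian bandit algorithms the chapter analyses, here is a minimal Beta–Bernoulli Thompson sampling loop with exact conjugate posterior updates; the chapter's contribution concerns what happens when this posterior is replaced by an approximate one. The arm probabilities, priors and horizon below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
true_p = np.array([0.35, 0.50, 0.65])   # illustrative Bernoulli arm means
T = 5_000
alpha = np.ones(len(true_p))            # Beta(1, 1) priors
beta = np.ones(len(true_p))
reward_total = 0.0

for t in range(T):
    theta = rng.beta(alpha, beta)       # sample one plausible mean per arm
    a = int(np.argmax(theta))           # play the arm that looks best
    r = float(rng.random() < true_p[a]) # Bernoulli reward
    alpha[a] += r                       # exact conjugate posterior update
    beta[a] += 1.0 - r
    reward_total += r

regret = T * true_p.max() - reward_total
print(f"pseudo-regret after {T} rounds: {regret:.1f}")
```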
435
Measuring group differences using a model of test anxiety, fluid intelligence and attentional resources
Bosch, Anelle, 1982-
Literature reports that test anxiety may have an influence on aptitude test performance for some racial groups and therefore serves as a source of bias (Zeidner, 1998). Testing organisations have also found that individuals from African groups perform poorly on measures of fluid intelligence, putting them at a disadvantage when these scores are used for selection and training purposes. The current study examines a model defining the relationship between test anxiety, attentional resources and fluid intelligence in the following manner: an increase in test anxiety results in a decrease in attentional resources as well as a decrease in fluid intelligence, and a decrease in attentional resources in turn negatively influences fluid intelligence and test performance for different racial groups.

Twenty-five African individuals and twenty-five individuals from Caucasian racial groups set the stage to answer the question of whether certain groups experience higher test anxiety and thus perform more poorly on fluid intelligence measures. Significant relationships were found, within and between groups, for attentional resources and fluid intelligence. Meanwhile, other factors, such as test anxiety, were not strongly associated with fluid intelligence performance. Future research into the reasons why certain racial groups display lower overall attention in testing situations is suggested, in order to ensure that tests used for selection, training and aptitude purposes are fair to all racial groups. / Psychology / M.A. Soc. Sc. (Psychology)
436
An empirical survey of certain key aspects of the use of statistical sampling by South African registered auditors accredited by the Johannesburg Securities Exchange
Swanepoel, Elmarie
Thesis (MAcc)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: The quality of external audits has increasingly come under the spotlight over the last decade as a result of a number of audit failures. The use of scientifically based statistical sampling as a sampling technique is allowed, but not required, by International Standards on Auditing. The science behind this sampling technique can add to the credibility and quality of the audit. Accordingly, the main objective of this study was to explore certain key aspects of the use of statistical sampling as a sampling technique in the audits of financial statements done by South African Registered Auditors accredited by the Johannesburg Stock Exchange (JSE).

A literature review of the most recent local and international studies related to the key aspects addressed in this study was done. An empirical study was then done by means of a questionnaire that was sent to the JSE-accredited auditing firms for completion. The questionnaire focused on what was allowed by the firms' audit methodologies regarding the key aspects investigated in this study, and not on the actual usage of statistical sampling in audits performed by the firms.

The following main conclusions were drawn in respect of the four key aspects that were investigated:

1. In investigating the extent to which statistical sampling is used by auditing firms, it was found that the majority of them were allowed to use the principles of statistical sampling. Upon further investigation it was found that only 38% were explicitly allowed to use it in all three sampling steps (size determination, selection of items and evaluation of results). The evaluation step was identified as the most problematic statistical sampling phase.

2. Two reasons why auditors decided not to use statistical sampling as a sampling technique were identified, namely the perceived inefficiency (costliness) of the statistical sampling process, and a lack of understanding, training and experience in the use thereof.

3. In investigating how professional judgement is exercised in the use of statistical sampling, it was found that the audit methodologies of the majority of the auditing firms prescribed the precision and confidence levels to be used, and further that only a minority indicated that they were allowed to adjust these levels using their professional judgement. The partner in charge of the audit was identified as typically responsible for final authorisation of the sampling approach to be followed.

4. It was found that approximately a third of the auditing firms did not use computer software for assistance in using statistical sampling. The majority of the auditing firms did, however, have a written guide on how to use statistical sampling in practice available as a resource to staff.

The value of this study lies in its contribution to the existing body of knowledge in South Africa regarding the use of statistical sampling in auditing. Stakeholders in statistical sampling as an auditing technique that can benefit from this study include Registered Auditors in practice, academics, and, from regulatory, education and training perspectives, the Independent Regulatory Board for Auditors and the South African Institute of Chartered Accountants.
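As a concrete example of the size-determination step referred to in conclusion 1, the sketch below computes a textbook attributes-sampling size for a test of controls: the smallest sample for which, at a given confidence level, zero observed deviations would support a stated tolerable deviation rate. This is a generic illustration only, not a formula taken from the surveyed firms' methodologies or prescribed by the International Standards on Auditing.

```python
import math

def attribute_sample_size(confidence, tolerable_rate):
    """Smallest n such that observing zero deviations in n items gives at least
    `confidence` that the true deviation rate is below `tolerable_rate`,
    i.e. (1 - tolerable_rate)**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - tolerable_rate))

# Illustrative inputs: 95% confidence, 5% tolerable deviation rate
print(attribute_sample_size(0.95, 0.05))   # -> 59
```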
437
Evaluating the properties of sensory tests using computer intensive and biplot methodologies
Meintjes, M. M. (Maria Magdalena)
Assignment (MComm)--University of Stellenbosch, 2007. / ENGLISH ABSTRACT: This study is the result of part-time work done at a product development centre. The organisation makes extensive use of trained panels in sensory trials designed to assess the quality of its product. Although standard statistical procedures are used for analysing the results arising from these trials, circumstances necessitate deviations from the prescribed protocols. Therefore the validity of conclusions drawn as a result of these testing procedures might be questionable. This assignment deals with these questions.
Sensory trials are vital in the development of new products, control of quality levels and the exploration of improvement in current products. Standard test procedures used to explore such questions exist but are in practice often implemented by investigators who have little or no statistical background. Thus test methods are implemented as black boxes and procedures are used blindly without checking all the appropriate assumptions and other statistical requirements. The specific product under consideration often warrants certain modifications to the standard methodology. These changes may have some unknown effect on the obtained results and therefore should be scrutinized to ensure that the results remain valid.
The aim of this study is to investigate the distribution and other characteristics of sensory data, comparing the hypothesised, observed and bootstrap distributions. Furthermore, the standard testing methods used to analyse sensory data sets will be evaluated. After comparing these methods, alternative testing methods may be introduced and then tested using newly generated data sets.
Graphical displays are also useful to get an overall impression of the data under consideration. Biplots are especially useful in the investigation of multivariate sensory data. The underlying relationships among attributes and their combined effect on the panellists' decisions can be visually investigated by constructing a biplot. Results obtained by implementing biplot methods are compared to those of sensory tests, i.e. whether a significant difference between objects corresponds to large distances between the points representing those objects in the display. In conclusion, some recommendations are made as to how the organisation under consideration should implement sensory procedures in future trials. However, these proposals are preliminary and further research is necessary before final adoption. Some issues for further investigation are suggested.
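The comparison of hypothesised, observed and bootstrap distributions described above can be illustrated with a small percentile bootstrap. The sketch below resamples panellists' scores for two products and builds a bootstrap distribution for the difference in mean scores; the scores are made-up illustrative data, not results from the organisation's trials.

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up 9-point hedonic scores from a trained panel for two products.
product_a = np.array([6, 7, 5, 6, 7, 8, 6, 5, 7, 6, 7, 6])
product_b = np.array([5, 6, 5, 5, 6, 6, 5, 4, 6, 5, 6, 5])

observed_diff = product_a.mean() - product_b.mean()

B = 10_000
boot_diffs = np.empty(B)
for b in range(B):
    # Resample each product's scores with replacement (independent samples).
    a_star = rng.choice(product_a, size=product_a.size, replace=True)
    b_star = rng.choice(product_b, size=product_b.size, replace=True)
    boot_diffs[b] = a_star.mean() - b_star.mean()

ci_low, ci_high = np.percentile(boot_diffs, [2.5, 97.5])
print(f"observed difference {observed_diff:.2f}, "
      f"95% percentile bootstrap CI ({ci_low:.2f}, {ci_high:.2f})")
```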
438
Estimating the window period and incidence of recently infected HIV patients
Du Toit, Cari
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2009. / Incidence can be defined as the rate of occurrence of new infections of a disease like HIV and is a useful estimate of trends in the epidemic. Annualised incidence can be expressed as a proportion, namely the number of recent infections per year divided by the number of people at risk of infection. This number of recent infections is dependent on the window period, which is essentially the period of time from seroconversion to being classified as a long-term infection for the first time. The BED capture enzyme immunoassay was developed to provide a way to distinguish between recent and long-term infections. An optical density (OD) measurement is obtained from this assay. The window period is defined as the number of days since seroconversion, from a baseline OD value of 0.0476 to the number of days taken to reach an optical density of 0.8. The aim of this study is to describe different techniques to estimate the window period, which may subsequently lead to alternative estimates of the annualised incidence of HIV infection. These various techniques are applied to different subsets of the Zimbabwe Vitamin A for Mothers and Babies (ZVITAMBO) dataset.

Three different approaches are described to analyse window periods: a non-parametric survival analysis approach, the fitting of a general linear mixed model in a longitudinal data setting, and a Bayesian approach of assigning probability distributions to the parameters of interest. These techniques are applied to different subsets and transformations of the data, and the estimated mean and median window periods are obtained and utilised in the calculation of incidence.
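The definition of annualised incidence given above (recent infections per year divided by the number at risk) translates into a simple calculation once a mean window period is available. The sketch below implements that basic form; the counts and the 180-day window value are illustrative placeholders, and real BED-based estimates typically add corrections (for example for long-term infections misclassified as recent) that are not shown here.

```python
def annualised_incidence(n_recent, n_at_risk, window_days):
    """Annualised incidence = (recent infections scaled to a one-year window)
    divided by the number of people at risk of infection."""
    recent_per_year = n_recent * 365.25 / window_days
    return recent_per_year / n_at_risk

# Illustrative counts, not values from the ZVITAMBO data.
n_recent = 40          # HIV-positive tests classified as recent by the assay
n_at_risk = 9_000      # HIV-negative individuals at risk
window_days = 180.0    # assumed mean window period (days)

print(f"estimated annual incidence: "
      f"{annualised_incidence(n_recent, n_at_risk, window_days):.3%}")
```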
439
A unified approach to the economic aspects of statistical quality control and improvement
Ghebretensae Manna, Zerai
Assignment (MSc)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: The design of control charts refers to the selection of the parameters implied, including the sample size n, the control limit width parameter k, and the sampling interval h. The design of the X̄-control chart that is based on economic as well as statistical considerations is presently one of the more popular subjects of research. Two assumptions are considered in the development and use of the economic or economic statistical models. These assumptions are potentially critical. It is assumed that the time between process shifts can be modelled by means of the exponential distribution. It is further assumed that there is only one assignable cause. Based on these assumptions, economic or economic statistical models are derived using a total cost function per unit time as proposed by the unified approach of the Lorenzen and Vance (1986) model. In this approach the relationship between the three control chart parameters as well as the three types of costs is expressed in the total cost function. The optimal parameters are usually obtained by the minimization of the expected total cost per unit time. Nevertheless, few practitioners have tried to optimize the design of their X̄-control charts. One reason for this is that the cost models and their associated optimization techniques are often too complex and difficult for practitioners to understand and apply. However, a user-friendly Excel program has been developed in this paper and the numerical examples illustrated are executed on this program. The optimization procedure is easy to use, easy to understand, and easy to access. Moreover, the proposed procedure also obtains exact optimal design values, in contrast to the approximate designs developed by Duncan (1956) and other subsequent researchers.

Numerical examples are presented of both the economic and the economic statistical designs of the X̄-control chart in order to illustrate the working of the proposed Excel optimal procedure. Based on the Excel optimization procedure, the results of the economic statistical design are compared to those of a pure economic model. It is shown that the economic statistical designs lead to wider control limits and smaller sampling intervals than the economic designs. Furthermore, even if they are more costly than the economic designs, they do guarantee output of better quality, while keeping the number of false alarm searches at a minimum. They also lead to low process variability. These properties are the direct result of the requirement that the economic statistical design must assure a satisfactory statistical performance.

Additionally, extensive sensitivity studies are performed on the economic and economic statistical designs to investigate the effect of the input parameters and the effects of varying the bounds on α, 1−β, the average time-to-signal ATS, as well as the expected shift size δ, on the minimum expected cost loss and the three control chart decision variables. The analyses show that cost is relatively insensitive to improvement in the type I and type II error rates, but highly sensitive to changes in smaller bounds on ATS, and extremely sensitive to smaller shift levels δ.

Note: expressions like economic design, economic statistical design, loss cost and assignable cause may seem linguistically and syntactically strange, but are borrowed from and used according to the known literature on the subject.
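The statistical ingredients that enter such cost models can be sketched directly: for an X̄-chart with subgroup size n, limit width k and sampling interval h, the false-alarm probability per sample is α = 2Φ(−k), the power against a shift of δ process standard deviations is 1 − β = Φ(−k + δ√n) + Φ(−k − δ√n), and the average times to signal follow approximately from the corresponding average run lengths as ATS ≈ ARL × h. The snippet below computes these quantities for one illustrative design; it does not reproduce the full Lorenzen–Vance cost function or the Excel optimization described in the assignment.

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def xbar_chart_performance(n, k, h, delta):
    """Type I error, power and approximate average times to signal for an X-bar
    chart with subgroup size n, limits at +/- k sigma/sqrt(n), sampling
    interval h (hours) and a mean shift of delta process standard deviations."""
    alpha = 2.0 * Phi(-k)                                  # false-alarm probability per sample
    power = Phi(-k + delta * sqrt(n)) + Phi(-k - delta * sqrt(n))
    arl0, arl1 = 1.0 / alpha, 1.0 / power                  # average run lengths
    return alpha, power, arl0 * h, arl1 * h                # ATS0, ATS1 in hours

n, k, h, delta = 5, 3.0, 1.0, 1.0                          # illustrative design
alpha, power, ats0, ats1 = xbar_chart_performance(n, k, h, delta)
print(f"alpha={alpha:.4f}  power={power:.3f}  in-control ATS={ats0:.0f} h  "
      f"out-of-control ATS={ats1:.2f} h")
```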
440
Risk and admissibility for a Weibull class of distributions
Negash, Efrem Ocubamicael
Thesis (MSc)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: The Bayesian approach to decision-making is considered in this thesis for reliability/survival models pertaining to a Weibull class of distributions. A generalised right-censored sampling scheme has been assumed and implemented. The Jeffreys' prior for the inverse mean lifetime and the survival function of the exponential model were derived. The consequent posterior distributions of these two parameters were obtained using this non-informative prior. In addition to the Jeffreys' prior, the natural conjugate prior was considered as a prior for the parameter of the exponential model and the consequent posterior distribution was derived. In many reliability problems, overestimating a certain parameter of interest is more detrimental than underestimating it and hence the LINEX loss function was used to estimate the parameters and their consequent risk measures. Moreover, analogous derivations were carried out relative to the commonly used symmetrical squared error loss function. The risk function, the posterior risk and the integrated risk of the estimators were obtained and are regarded in this thesis as the risk measures. The performance of the estimators was compared relative to these risk measures. For the Jeffreys' prior under the squared error loss function, the comparison resulted in crossing-over risk functions and hence none of these estimators is completely admissible. However, relative to the LINEX loss function, it was found that a correct Bayesian estimator outperforms an incorrectly chosen alternative. On the other hand, for the conjugate prior, crossing-over of the risk functions of the estimators was evident. In comparing the performance of the Bayesian estimators, whenever closed-form expressions of the risk measures do not exist, numerical techniques such as Monte Carlo procedures were used. The posterior risks and integrated risks were used in the performance comparisons in similar fashion.

The Weibull pdf, with its scale and shape parameters, was also considered as a reliability model. The Jeffreys' prior and the consequent posterior distribution of the scale parameter of the Weibull model were also derived for the case where the shape parameter is known. In this case, the estimation process of the scale parameter is analogous to that of the exponential model. For the case when both parameters of the Weibull model are unknown, the Jeffreys' and the reference priors were derived and the computational difficulty of the posterior analysis was outlined. The Jeffreys' prior for the survival function of the Weibull model was also derived for known shape parameter. In all cases, two forms of the scalar estimation error were used to compare as many risk measures as possible. The performance of the estimators was compared for acceptability in a decision-making framework. This can be seen as a type of procedure that addresses the robustness of an estimator relative to a chosen loss function.
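For the exponential part of the analysis above, the posterior is available in closed form: with d observed failures, total time on test T and the Jeffreys' prior π(λ) ∝ 1/λ, the inverse mean lifetime λ has a Gamma(d, T) posterior, the squared-error Bayes estimate is the posterior mean d/T, and the LINEX Bayes estimate is −(1/a) ln E[e^(−aλ) | data] = (d/a) ln(1 + a/T). The sketch below evaluates both for one right-censored sample; the data and the LINEX shape parameter a are illustrative, and the exact loss and prior parameterisations used in the thesis may differ.

```python
import math

def exponential_jeffreys_estimates(failure_times, censor_times, a=1.0):
    """Bayes estimates of the exponential rate (inverse mean lifetime) under a
    Jeffreys prior pi(lambda) ~ 1/lambda with right censoring.
    Posterior: lambda | data ~ Gamma(shape=d, rate=T)."""
    d = len(failure_times)                         # number of observed failures
    T = sum(failure_times) + sum(censor_times)     # total time on test
    lam_squared_error = d / T                      # posterior mean
    lam_linex = (d / a) * math.log(1.0 + a / T)    # -(1/a) * log E[exp(-a*lambda)]
    return lam_squared_error, lam_linex

# Illustrative right-censored lifetimes (hours to failure / censoring).
failures = [12.1, 30.4, 45.9, 61.0, 80.7]
censored = [100.0, 100.0, 100.0]

sq, lx = exponential_jeffreys_estimates(failures, censored, a=1.0)
print(f"squared-error estimate of lambda: {sq:.4f}")
print(f"LINEX estimate of lambda (a=1):   {lx:.4f}")
print(f"plug-in survival at t=50 h:       {math.exp(-sq * 50):.3f}")  # illustration only
```

With a > 0 the LINEX estimate sits slightly below the posterior mean, reflecting the heavier penalty the loss places on overestimation.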