1 |
產生貝他分配的演算法研究 / A Study on an Algorithm for Generating Beta Distribution
洪英超 (Hung, Ying Chau), Unknown Date (has links)
There are many methods for generating a beta distribution. In this study, we focus on the method proposed by Kennedy (1988). Let [A_1, B_1] = [0, 1], and let [A_n, B_n] be a random subinterval of [0, 1] defined recursively as follows. Take C_n and D_n to be the minimum and maximum of k i.i.d. random points uniformly distributed on [A_n, B_n], and choose [A_{n+1}, B_{n+1}] to be [C_n, B_n], [A_n, D_n], or [C_n, D_n] with probabilities p, q, r respectively, where p + q + r = 1. Kennedy showed that the limiting distribution of [A_n, B_n] is a beta distribution on [0, 1] with parameters k(p+r) and k(q+r).
Based on this known asymptotic result, we study, through simulation, the small-sample behavior of those combinations of k, p, q, r that yield the same Beta(m, n) distribution, where m = k(p+r) and n = k(q+r). We conclude that smaller values of k generally perform better.
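A minimal sketch of this recursion, assuming Python (the iteration count n_iter and the midpoint read-out are our choices; the beta law arises only in the limit):

import random

def kennedy_beta(k, p, q, r, n_iter=200):
    # One draw from the limiting Beta(k*(p+r), k*(q+r)) distribution via
    # the shrinking-interval recursion described in the abstract above.
    assert abs(p + q + r - 1.0) < 1e-9
    a, b = 0.0, 1.0
    for _ in range(n_iter):
        pts = [random.uniform(a, b) for _ in range(k)]
        c, d = min(pts), max(pts)
        u = random.random()
        if u < p:
            a = c          # keep [C_n, B_n]
        elif u < p + q:
            b = d          # keep [A_n, D_n]
        else:
            a, b = c, d    # keep [C_n, D_n]
    return 0.5 * (a + b)   # A_n and B_n converge to a common point

For example, draws of kennedy_beta(5, 0.3, 0.3, 0.4) should be approximately Beta(3.5, 3.5), since k(p+r) = k(q+r) = 3.5.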
|
2 |
A Comparison of Estimation Procedures for the Beta Distribution
Yan, Huey 01 May 1991 (has links)
The beta distribution may be used as a stochastic model for continuous proportions in many situations in applied statistics. This thesis was concerned with estimation of the parameters of the beta distribution in three different situations.
Three estimation procedures (the method of moments, maximum likelihood, and a hybrid of these two methods, which we call the one-step improvement) were compared by computer simulation, for beta data and for beta data contaminated by zeros and ones. We also evaluated maximum likelihood estimation in the context of censored data, and Newton's method as a numerical procedure for solving the likelihood equations for censored beta data.
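For reference, a sketch of the method-of-moments step, assuming Python (the one-step improvement presumably takes a single Newton step on the likelihood from starting values such as these; that step is not shown):

import statistics

def beta_mom(xs):
    # Method-of-moments fit for Beta(alpha, beta) on data in (0, 1):
    # match the mean m = a/(a+b) and the variance
    # v = ab/((a+b)^2 (a+b+1)) = m(1-m)/(a+b+1).
    m = statistics.fmean(xs)
    v = statistics.variance(xs)
    t = m * (1.0 - m) / v - 1.0   # recovers a + b
    return m * t, (1.0 - m) * t   # (alpha_hat, beta_hat)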
|
3 |
Bayesian Estimation of Small Proportions Using Binomial Group Test
Luo, Shihua 09 November 2012 (has links)
Group testing has long been considered a safe and sensible alternative to one-at-a-time testing in applications where the prevalence rate p is small. In this thesis, we applied a Bayesian approach to estimate p using beta-type prior distributions. First, we presented two Bayes estimators of p derived from a prior on p under two different loss functions. Second, we presented two more Bayes estimators of p derived from a prior on π, again under two loss functions. We also presented credible and HPD intervals for p. In addition, we carried out intensive numerical studies. All results showed that the Bayes estimators are preferred over the usual maximum likelihood estimator (MLE) for small p. We also presented the optimal β for different values of p, m, and k.
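As an illustration of the setup, a hedged sketch assuming Python: if x of m pooled groups (each of size k) test positive, a group is positive with probability π = 1 - (1 - p)^k, and a Beta(a, b) prior on π under squared-error loss yields the posterior mean used below. This shows the type of estimator compared, not the thesis's exact derivations:

def bayes_group_test_p(x, m, k, a=1.0, b=1.0):
    # Posterior for pi under a Beta(a, b) prior is Beta(a + x, b + m - x),
    # so the squared-error-loss Bayes estimate of pi is its mean;
    # inverting pi = 1 - (1 - p)**k recovers an estimate of p.
    pi_hat = (a + x) / (a + b + m)
    return 1.0 - (1.0 - pi_hat) ** (1.0 / k)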
|
4 |
Parameter Estimation for the Beta Distribution
Owen, Claire Elayne Bangerter 20 November 2008 (has links) (PDF)
The beta distribution is useful in modeling continuous random variables that lie between 0 and 1, such as proportions and percentages. The beta distribution takes on many different shapes and may be described by two shape parameters, alpha and beta, that can be difficult to estimate. Maximum likelihood and method of moments estimation are possible, though method of moments is much more straightforward. We examine both of these methods here, and compare them to three further proposed methods of parameter estimation: 1) a method used in the Program Evaluation and Review Technique (PERT), 2) a modification of the two-sided power distribution (TSP), and 3) a quantile estimator based on the first and third quartiles of the beta distribution. We find the quantile estimator performs as well as the maximum likelihood and method of moments estimators for most beta distributions. The PERT and TSP estimators do well for a smaller subset of beta distributions, though they never outperform the maximum likelihood, method of moments, or quantile estimators. We apply these estimation techniques to two data sets to see how well they approximate real data from Major League Baseball (batting averages) and the U.S. Department of Energy (radiation exposure). We find the maximum likelihood, method of moments, and quantile estimators perform well with batting averages (sample size 160), and the method of moments and quantile estimators perform well with radiation exposure proportions (sample size 20). Maximum likelihood estimators would likely perform well even with such a small sample size were it not for the iterative method needed to solve for alpha and beta, which is quite sensitive to starting values. The PERT and TSP estimators perform more poorly in both situations. We conclude that, in addition to maximum likelihood and method of moments estimation, our method of quantile estimation is efficient and accurate in estimating parameters of the beta distribution.
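A sketch of the quartile-matching idea behind the quantile estimator, assuming Python with SciPy (the thesis's exact estimator may differ in detail):

from scipy import optimize, stats

def beta_quantile_fit(q1, q3, start=(1.0, 1.0)):
    # Choose (alpha, beta) so the fitted Beta CDF puts 25% of its mass
    # below the sample first quartile q1 and 75% below the third
    # quartile q3.
    def eqs(ab):
        a, b = ab
        return [stats.beta.cdf(q1, a, b) - 0.25,
                stats.beta.cdf(q3, a, b) - 0.75]
    a_hat, b_hat = optimize.fsolve(eqs, start)
    return a_hat, b_hat

As with the maximum likelihood iteration mentioned above, the root-finder can be sensitive to its starting values.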
|
5 |
A case study in applying generalized linear mixed models to proportion data from poultry feeding experiments
Shannon, Carlie January 1900 (has links)
Master of Science / Department of Statistics / Leigh Murray / This case study was motivated by the need for effective statistical analysis for a series of poultry feeding experiments conducted in 2006 by Kansas State University researchers in the Department of Animal Science. Some of these experiments involved an automated auger feed-line system commonly used in commercial broiler houses and continuous proportion response data. Two of the feed-line experiments are considered in this case study to determine whether a statistical model using a non-normal response offers a better fit for these data than a model utilizing a normal approximation. The two experiments involve fixed as well as multiple random effects. In this case study, the data from these experiments are analyzed using a linear mixed model and generalized linear mixed models (GLMMs) with the SAS Glimmix procedure. Comparisons are made between a linear mixed model and GLMMs using beta and binomial responses. Since the response data are not counts, a quasi-binomial approximation to the binomial is used to convert the continuous proportions to ratios of successes out of a total number of trials, N, for a variety of possible N values. Results from these analyses are compared on the basis of point estimates, confidence intervals and confidence interval widths, as well as p-values for tests of fixed effects. The investigation concludes that a GLMM may offer a better fit than models using a normal approximation for these data when sample sizes are small or response values are close to zero. It also finds that these same conditions can cause GLMMs utilizing the beta response to behave poorly in the Glimmix procedure, because convergence failures prevent valid results. In such cases, a GLMM using a quasi-binomial response distribution with a high value of N offers a reasonable and well-behaved alternative to the beta distribution.
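A minimal sketch of that conversion, assuming Python (the function name and the rounding rule are ours, not the case study's):

def to_pseudo_binomial(props, n):
    # Convert continuous proportions in [0, 1] to pseudo-binomial pairs
    # (successes, trials) so a binomial GLMM can be fit; n is the tuning
    # constant whose value the case study varies.
    return [(round(p * n), n) for p in props]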
|
6 |
Atsitiktinių dydžių sandaugų tikimybiniai skirstiniai / Probability distributions of products of random variables
Staskevičiūtė, Simona 17 June 2013 (has links)
This master's thesis analyzes probability distributions of mathematical statistics. Its main purpose is to analyze the distribution of products of independent beta random variables, a topic relevant to the reliability analysis of complex systems. The thesis describes the theory of infinitely divisible probability distributions, which is used to study the distribution of sums of independent random variables, and the theory of M-divisible probability distributions, which is used to study the distribution of products of independent random variables. Two theorems are formulated that identify when the product of a finite number of independent beta random variables with different parameter values is again a beta random variable, with parameters expressed in terms of the factors' parameters; these theorems are proved for the product of two independent beta random variables. A further theorem, giving an analytic expression for the characteristic function of the logarithm of a beta random variable, is also formulated and proved.
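A quick Monte Carlo check of the classical special case of this product property, assuming Python (the thesis's theorems give the general parameter conditions; classically, if X ~ Beta(a, b) and Y ~ Beta(a + b, c) are independent, then XY ~ Beta(a, b + c)):

import random

def beta_product_check(a, b, c, n=100_000):
    # Compare the empirical mean of X*Y with the mean of Beta(a, b + c).
    s = sum(random.betavariate(a, b) * random.betavariate(a + b, c)
            for _ in range(n))
    return s / n, a / (a + b + c)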
|
7 |
A systems engineering approach to metallurgical accounting of integrated smelter complexes
Mtotywa, Busisiwe Percelia; Lyman, G. J. 12 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2008. / ENGLISH ABSTRACT: The growing need to improve accounting accuracy and precision, and to standardise generally accepted measurement methods in the mining and processing industries, has led a number of organisations to join together under the AMIRA International umbrella with the purpose of fulfilling these objectives. As part of this venture, Anglo Platinum undertook a project on material balancing around its largest smelter, the Waterval Smelter.

The primary objective of the project was to perform a statistical material balance around the Waterval Smelter using the maximum likelihood method, with respect to platinum, rhodium, nickel, sulphur and chrome (III) oxide. Pt, Rh and Ni were selected for their significant contribution to the company's profit margin, S was included for its environmental importance, and Cr2O3 was included because of the difficulties its presence poses in the smelting of PGMs.

The objective was achieved by performing a series of statistical computations: quantification of total and analytical uncertainties, detection of outliers, estimation and modelling of daily and monthly measurement uncertainties, parameter estimation, and data reconciliation. Comparisons were made between the maximum likelihood and least squares methods.

Total uncertainties associated with the daily grades were determined by variographic studies. The estimated Pt standard deviations were within 10% relative to the respective average grades, with a few exceptions. The total uncertainties were split into their respective components by determining analytical variances from analytical replicates; the results indicated that the sampling components of the total uncertainty were generally larger than their analytical counterparts. WCM, the platinum-rich Waterval Smelter product, carries an uncertainty worth ~R2 103 000 in its daily Pt grade. This estimated figure shows that the quality of measurements not only affects the accuracy of metal accounting, but can have considerable implications if not quantified and managed.

The daily uncertainties were estimated using kriging and were bootstrapped to obtain estimates of the monthly uncertainties. Distributions were fitted by maximum likelihood using the distribution fitting tool of the JMP 6.0 programme, and goodness-of-fit tests were performed. The data were fitted with normal and beta distributions, and there was a notable decrease in skewness from the daily to the monthly data.

The reconciliation of the data was performed using maximum likelihood and compared with the widely used least squares method. The maximum likelihood (ML) and least squares (LS) adjustments were performed on simulated data in order to test accuracy and to determine the extent of error reduction achieved by the reconciliation exercise. The test showed that the two methods had comparable accuracies and error-reduction capabilities. However, it was shown that modelling uncertainties with the unbounded normal distribution can lead to adjustments so large that negative adjusted values result. The benefit of modelling the uncertainties with a bounded distribution, here the beta distribution, is that the possibility of obtaining negative adjusted values is eliminated: ML-adjusted values under the beta model are always non-negative, and therefore feasible. In a further comparison of the ML (bounded model) and LS methods in the material balancing of the Waterval Smelter complex, it was found that for all streams whose uncertainties were modelled with a beta distribution, i.e. those whose distribution possessed some degree of skewness, the ML adjustments were significantly smaller than their LS counterparts.

It is therefore concluded that maximum likelihood with bounded models is a rigorous alternative to the LS method of data reconciliation, with the following benefits:
-- Better estimates, because the nature of the data (its distribution) is not assumed but is determined through distribution fitting and parameter estimation
-- Adjusted values that can never be negative, owing to the bounded nature of the distribution

The novel contributions made in this thesis are as follows:
-- The maximum likelihood method was employed for the first time in the material balancing of non-normally distributed data and compared with the well-known least squares method
-- Geostatistical methods were integrated with data reconciliation, in an original way, to quantify and predict measurement uncertainties
-- For the first time, measurement uncertainties were modelled with a distribution that is non-normal and bounded in nature, leading to smaller adjustments.
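For orientation, a sketch of the least squares benchmark used in the comparison, assuming Python with NumPy; the maximum likelihood alternative studied in the thesis replaces the implicit normal model with fitted (bounded) beta distributions and is not reproduced here:

import numpy as np

def ls_reconcile(m, sigma, A):
    # Weighted least squares reconciliation: minimally adjust measurements
    # m (standard deviations sigma) so that the linear balance constraints
    # A @ x = 0 hold exactly, via the classical closed form
    #   x = m - S A^T (A S A^T)^{-1} A m,   S = diag(sigma^2).
    m = np.asarray(m, dtype=float)
    S = np.diag(np.asarray(sigma, dtype=float) ** 2)
    A = np.atleast_2d(np.asarray(A, dtype=float))
    return m - S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ m)

Because the normal model is unbounded, nothing in this closed form prevents negative adjusted values, which is precisely the shortcoming the bounded beta model addresses.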
|
8 |
Distribuições de probabilidade no intervalo unitário / Probability distributions in the unit interval
Lima, Francimário Alves de 16 March 2018 (links)
The beta distribution is the one most frequently used for modeling continuous data observed in the unit interval, such as rates and proportions. Although flexible, taking varied shapes such as J, inverted J, U and unimodal, it is not suitable in all practical situations. In this dissertation we review continuous distributions on the unit interval, encompassing the beta, Kumaraswamy, simplex, unit gamma and rectangular beta distributions. We also address a wide class of distributions obtained by transformations (Smithson and Merkle, 2013). In particular, we focus on two subclasses: one presented and studied by Lemonte and Bazán (2015), which we call the class of logit distributions, and another that we call the class of logit skew distributions. All distributions considered are applied to World Bank data sets.
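Two of the reviewed alternatives admit simple samplers; a sketch assuming Python (parameterisations follow the standard Kumaraswamy CDF and the logit-normal member of the transformation class):

import math
import random

def r_kumaraswamy(a, b):
    # Inverse-CDF draw from Kumaraswamy(a, b): F(x) = 1 - (1 - x**a)**b.
    u = random.random()
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def r_logit_normal(mu, sigma):
    # A simple instance of the transformation class: push a normal draw
    # through the inverse logit so the result lands in (0, 1).
    return 1.0 / (1.0 + math.exp(-random.gauss(mu, sigma)))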
|
9 |
Análise conjunta de fatores: distribuição amostral da importância relativa por simulação de dados / Conjoint analysis: sampling distribution of the relative importance by data simulation
Temoteo, Alex da Silva 17 November 2008 (has links)
Conjoint analysis is a regression analysis that uses a model with dummy (indicator) explanatory variables to study consumer preference for treatments, which can be products or services, defined by combining levels of each attribute or factor. It allows estimation of the relative importance (RI) of each factor that makes up the treatments. Such studies are important because they help decide, based on the RI estimates obtained from the conjoint analysis, which factors should be given more attention when developing products and/or services. In this research we conducted a simulation study to investigate the robustness of the RI sampling distribution to departures from normality in the distribution of the random error term (ε) of the conjoint analysis model. We simulated four alternative distributions for ε and generated data (acceptance scores) that allowed estimation of the RI for the hypothetical factors A, B, C and D considered in our study. In addition to the normal distribution, we used a location and scale transformation of the beta density to generate three alternative distributions: right skewed, left skewed, and U-shaped. Each of these four distributions was tested with two standard deviation values (σ = 2.8 and σ = 0.5), resulting in eight alternative scenarios. Our simulation study considered factors A and B with three levels and factors C and D with two levels, hence 36 treatments in a full factorial design. We set reference RI values of 44.25%, 25.66%, 26.55% and 3.54% for factors A, B, C and D, respectively, and simulated data such that each treatment was evaluated by 108 consumers. This data set of 3888 observations was simulated 100 times for each scenario and analyzed by conjoint analysis, yielding 100 RI estimates for each factor in every scenario. Results were examined through 95% confidence intervals (CIs), both percentile intervals and intervals based on the usual normal approximation, through histograms of the RI sampling distributions to check normality, and through relative mean errors of estimation (RME) with respect to the reference RI values. The normal-approximation intervals were narrower than the percentile intervals and, as expected, the intervals were wider when σ = 2.8. The confidence intervals included the reference RI values in all scenarios, with the following exceptions: (i) for factors A and B, with the normal-approximation CI under the normal error distribution and σ = 2.8; (ii) with the normal-approximation CI and σ = 0.5, (iia) for factors A and C under the normal, U-shaped and left-skewed distributions, (iib) for factor B under the U-shaped distribution, and (iic) for factor D under the normal and U-shaped distributions. In all cases where a CI missed the reference RI value, the value lay close to the interval limit, whether on the left or the right. We observed RME < 5% in all scenarios except for factor D under normal errors with σ = 2.8, for which RME = 7.91%. Additionally, the Kolmogorov-Smirnov test indicated normality (p > 0.05) of the sampling distributions in all cases. We concluded that the sampling distribution of the RI estimator of a factor is relatively robust to departures from normality of the random error term of the conjoint analysis model; indeed, its sampling distribution appears close to the normal regardless of the error distribution, so inferential methods requiring normality can be applied to the RI estimates obtained from conjoint analysis.
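A sketch of the standard relative importance computation underlying the study, assuming Python (the part-worths would come from the dummy-variable regression; the example values are hypothetical):

def relative_importance(part_worths):
    # Standard conjoint RI: each factor's share, in percent, of the total
    # range (max - min) of its estimated level part-worths.
    ranges = {f: max(w) - min(w) for f, w in part_worths.items()}
    total = sum(ranges.values())
    return {f: 100.0 * r / total for f, r in ranges.items()}

# e.g. relative_importance({"A": [0.0, 1.2, 2.5], "B": [0.0, 0.7, 1.45],
#                           "C": [0.0, 1.5], "D": [0.0, 0.2]})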
|