61 |
Multiple Comparisons under Unequal Variances and Its Application to Dose Response Studies. Li, Hong, 28 September 2009 (has links)
No description available.
|
62 |
Quality Control Using Inferential Statistics in Weibull Analyses for Components Fabricated from Monolithic Ceramics. Parikh, Ankurben H., 04 April 2012 (has links)
No description available.
|
63 |
STATISTICAL INFERENCE ON BINOMIAL PROPORTIONS. ZHAO, SHUHONG, 13 July 2005 (has links)
No description available.
|
64 |
Confidence Intervals and Sample Size Calculations for Studies of Film-reading Performance. Scally, Andy J., Brealey, S., January 2003 (has links)
The relaxation of restrictions on the type of professions that can report films has resulted in radiographers and other healthcare professionals becoming increasingly involved in image interpretation in areas such as mammography, ultrasound and plain-film radiography. Little attention, however, has been given to sample size determinations concerning film-reading performance characteristics such as sensitivity, specificity and accuracy. Illustrated with hypothetical examples, this paper begins by considering standard errors and confidence intervals for performance characteristics and then discusses methods for determining sample size for studies of film-reading performance. Used appropriately, these approaches should result in studies that produce estimates of film-reading performance with adequate precision and enable investigators to optimize the sample size in their studies for the question they seek to answer.
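The paper's own worked examples are not reproduced here, but the standard normal-approximation calculations it refers to (standard errors, confidence intervals and sample sizes for performance proportions) can be sketched as follows; the reader counts, precision target and confidence level are invented for illustration:

```python
from math import ceil, sqrt
from statistics import NormalDist

def wald_ci(successes: int, n: int, conf: float = 0.95) -> tuple[float, float]:
    """Normal-approximation (Wald) confidence interval for a proportion
    such as sensitivity, specificity or accuracy."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = successes / n
    se = sqrt(p * (1 - p) / n)          # standard error of the proportion
    return p - z * se, p + z * se

def sample_size(p_expected: float, half_width: float, conf: float = 0.95) -> int:
    """Smallest n giving a CI of about +/- half_width around an
    anticipated proportion p_expected."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return ceil(z ** 2 * p_expected * (1 - p_expected) / half_width ** 2)

# invented example: a reader correctly flags 85 of 100 abnormal films
lo, hi = wald_ci(85, 100)            # sensitivity 0.85, CI roughly (0.78, 0.92)
n_needed = sample_size(0.85, 0.05)   # abnormal cases needed for +/- 0.05 precision
```

Better-behaved intervals (Wilson, exact) exist for proportions near 0 or 1; the Wald form is used here only because it is the simplest normal-approximation construction.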
|
65 |
A practical introduction to medical statistics. Scally, Andy J., 16 October 2013 (has links)
Medical statistics is a vast and ever-growing field of academic endeavour, with direct application to developing the robustness of the evidence base in all areas of medicine. Although the complexity of available statistical techniques has continued to increase, fuelled by the rapid data processing capabilities of even desktop/laptop computers, medical practitioners can go a long way towards creating, critically evaluating and assimilating this evidence with an understanding of just a few key statistical concepts. While the concepts of statistics and ethics are not common bedfellows, it should be emphasised that a statistically flawed study is also an unethical study.[1] This review will outline some of these key concepts and explain how to interpret the output of some commonly used statistical analyses. Examples will be confined to two-group tests on independent samples, using both a continuous and a dichotomous/binary outcome measure.
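The two-group comparisons the review confines itself to can be sketched with standard-library Python; for brevity the p-value below uses the standard normal as the reference distribution (an approximation; exact tests use t or chi-square references), and all counts and measurements are invented:

```python
from math import erf, sqrt
from statistics import mean, stdev

def p_two_sided(z: float) -> float:
    """Two-sided p-value under a standard normal reference distribution."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def welch_t(x: list, y: list) -> float:
    """Welch's t statistic for a continuous outcome in two independent groups."""
    vx, vy = stdev(x) ** 2 / len(x), stdev(y) ** 2 / len(y)
    return (mean(x) - mean(y)) / sqrt(vx + vy)

def two_proportion_z(k1: int, n1: int, k2: int, n2: int) -> float:
    """z statistic for a dichotomous outcome in two independent groups."""
    p = (k1 + k2) / (n1 + n2)                    # pooled proportion under H0
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (k1 / n1 - k2 / n2) / se

# invented counts: 30/100 events in one group vs 15/100 in the other
z = two_proportion_z(30, 100, 15, 100)
p_value = p_two_sided(z)
```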
|
66 |
The Accuracy of River Bed Sediment Samples. Petrie, John Eric, 19 January 1999 (has links)
One of the most important factors that influences a stream's hydraulic and ecological health is the streambed's sediment size distribution. This distribution affects streambed stability, sediment transport rates, and flood levels by defining the roughness of the stream channel. Adverse effects on water quality and wildlife can be expected when excessive fine sediments enter a stream. Many chemicals and toxic materials are transported through streams by binding to fine sediments. Increases in fine sediments also seriously impact the survival of fish species present in the stream. Fine sediments fill tiny spaces between larger particles thereby denying fish embryos the necessary fresh water to survive. Reforestation, constructed wetlands, and slope stabilization are a few management practices typically utilized to reduce the amount of sediment entering a stream. To effectively gauge the success of these techniques, the sediment size distribution of the stream must be monitored.
Gravel bed streams are typically stratified vertically, in terms of particle size, in three layers, with each layer having its own distinct grain size distribution. The top two layers of the stream bed, the pavement and subpavement, are the most significant in determining the characteristics of the stream. These top two layers are only as thick as the largest particle size contained within each layer. This vertical stratification by particle size makes it difficult to characterize the grain size distribution of the surface layer. The traditional bulk or volume sampling procedure removes a specified volume of material from the stream bed. However, if the bed exhibits vertical stratification, the volume sample will mix different populations, resulting in inaccurate sample results. To obtain accurate results for the pavement size distribution, a surface oriented sampling technique must be employed. The most common types of surface oriented sampling are grid and areal sampling. Due to limitations in the sampling techniques, grid samples typically truncate the sample at the finer grain sizes, while areal samples typically truncate the sample at the coarser grain sizes. When combined with an analysis technique, either frequency-by-number or frequency-by-weight, the sample results can be represented in terms of a cumulative grain size distribution. However, the results of different sampling and analysis procedures can lead to biased results, which are not equivalent to traditional volume sampling results. Different conversions, dependent on both the sampling and analysis technique, are employed to remove the bias from surface sample results.
The topic of the present study is to determine the accuracy of sediment samples obtained by the different sampling techniques. Knowing the accuracy of a sample is imperative if the sample results are to be meaningful. Different methods are discussed for placing confidence intervals on grid sample results based on statistical distributions. The binomial distribution and its approximation with the normal distribution have been suggested for these confidence intervals in previous studies. In this study, the use of the multinomial distribution for these confidence intervals is also explored. The multinomial distribution seems to best represent the grid sampling process. Based on analyses of the different distributions, recommendations are made. Additionally, figures are given to estimate the grid sample size necessary to achieve a required accuracy for each distribution. This type of sample size determination figure is extremely useful when preparing for grid sampling in the field.
Accuracy and sample size determination for areal and volume samples present difficulties not encountered with grid sampling. The variability in number of particles contained in the sample coupled with the wide range of particle sizes present make direct statistical analysis impossible. Limited studies have been reported on the necessary volume to sample for gravel deposits. The majority of these studies make recommendations based on empirical results that may not be applicable to different size distributions. Even fewer studies have been published that address the issue of areal sample size. However, using grid sample results as a basis, a technique is presented to estimate the necessary sizes for areal and volume samples. These areal and volume sample sizes are designed to match the accuracy of the original grid sample for a specified grain size percentile of interest. Obtaining grid and areal results with the same accuracy can be useful when considering hybrid samples. A hybrid sample represents a combination of grid and areal sample results that give a final grain size distribution curve that is not truncated. Laboratory experiments were performed on synthetic stream beds to test these theories. The synthetic stream beds were created using both glass beads and natural sediments. Reducing sampling errors and obtaining accurate samples in the field are also briefly discussed. Additionally, recommendations are also made for using the most efficient sampling technique to achieve the required accuracy. / Master of Science
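The multinomial confidence intervals discussed above can be illustrated with a generic Goodman-type construction for simultaneous multinomial proportions; the grid counts below are invented, and this is a standard textbook formula rather than the thesis's exact procedure:

```python
from math import sqrt
from statistics import NormalDist

def goodman_intervals(counts: list, conf: float = 0.95) -> list:
    """Goodman (1965) simultaneous confidence intervals for multinomial
    proportions, e.g. the fraction of grid points in each sieve size class.
    Uses a Bonferroni-adjusted chi-square(1) critical value per category."""
    n_total = sum(counts)
    k = len(counts)
    alpha = 1 - conf
    # chi-square(1 df) upper alpha/k quantile, via the normal quantile
    z = NormalDist().inv_cdf(1 - alpha / (2 * k))
    a = z * z
    intervals = []
    for n_i in counts:
        centre = a + 2 * n_i
        half = sqrt(a * (a + 4 * n_i * (n_total - n_i) / n_total))
        denom = 2 * (n_total + a)
        intervals.append(((centre - half) / denom, (centre + half) / denom))
    return intervals

# invented example: 100 grid points classified into four size classes
ivals = goodman_intervals([10, 30, 40, 20])
```

The interval widths shrink as the grid sample size grows, which is what makes sample-size figures of the kind described above possible: one inverts the relationship to find the number of grid points needed for a target precision.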
|
67 |
Confidence intervals for estimators of welfare indices under complex sampling. Kirchoff, Retha, 03 1900 (has links)
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: The aim of this study is to obtain estimates and confidence intervals for welfare
indices under complex sampling. It begins by looking at sampling in general with
specific focus on complex sampling and weighting. For the estimation of the welfare
indices, two resampling techniques, viz. jackknife and bootstrap, are discussed.
They are used for the estimation of bias and standard error under simple random
sampling and complex sampling. Three confidence intervals are discussed, viz. standard
(asymptotic), percentile and bootstrap-t. An overview of welfare indices and
their estimation is given. The indices are categorized into measures of poverty and
measures of inequality. Two Laeken indices, viz. at-risk-of-poverty and quintile
share ratio, are included in the discussion. The study considers two poverty lines,
namely an absolute poverty line based on percy (ratio of total household income
to household size) and a relative poverty line based on equivalized income (ratio of
total household income to equivalized household size). The data set used as surrogate
population for the study is the Income and Expenditure survey 2005/2006
conducted by Statistics South Africa and details of it are provided and discussed.
An analysis of simulation data from the surrogate population was carried out using
techniques mentioned above and the results were graphed, tabulated and discussed.
Two issues were considered, namely whether the design of the survey should be considered
and whether resampling techniques provide reliable results, especially for
confidence intervals. The results were a mixed bag. Overall, however, it was found
that weighting showed promise in many cases, especially in the improvement of the
coverage probabilities of the confidence intervals. It was also found that the bootstrap
resampling technique was reliable (by looking at standard errors). Further
research options are mentioned as possible solutions towards the mixed results.
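The resampling machinery described above can be illustrated on a toy weighted sample. The incomes, weights and poverty line below are invented, and the percentile interval shown is only one of the three interval types the study compares:

```python
import random

def headcount_ratio(incomes, weights, line):
    """Weighted at-risk-of-poverty headcount: the weighted share of
    units whose income falls below the poverty line."""
    poor = sum(w for inc, w in zip(incomes, weights) if inc < line)
    return poor / sum(weights)

def percentile_ci(incomes, weights, line, reps=2000, conf=0.95, seed=1):
    """Percentile bootstrap CI, resampling households with replacement."""
    rng = random.Random(seed)
    n = len(incomes)
    stats = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(headcount_ratio([incomes[i] for i in idx],
                                     [weights[i] for i in idx], line))
    stats.sort()
    lo = stats[int((1 - conf) / 2 * reps)]
    hi = stats[int((1 + conf) / 2 * reps) - 1]
    return lo, hi

# invented toy data: 200 households with survey weights
rng = random.Random(42)
incomes = [rng.lognormvariate(8, 0.7) for _ in range(200)]
weights = [rng.uniform(50, 150) for _ in range(200)]
lo, hi = percentile_ci(incomes, weights, line=2000)
```

Resampling whole households mirrors simple random sampling of units; under a genuinely complex design, as the study stresses, the resampling scheme must respect strata and clusters, which this sketch does not attempt.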
|
68 |
Intervalos de confiança para altos quantis oriundos de distribuições de caudas pesadas / Confidence intervals for high quantiles from heavy-tailed distributions. Montoril, Michel Helcias, 10 March 2009 (has links)
In this work, confidence intervals for high quantiles from heavy-tailed distributions are computed using four methods: normal approximation, likelihood ratio, data tilting and the generalised gamma method. A simulation study with data generated from the Weibull distribution shows that the generalised gamma method attains coverage probabilities closest to the nominal confidence level, with the smallest average interval lengths of the four methods. For data generated from the Fréchet distribution, however, the likelihood ratio method gives the best intervals. Finally, the methods are applied to a real data set of 1758 Brazilian fire-insurance claim payments, in reais, made by a group of insurers in 2003.
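One of the four approaches, the normal approximation, can be sketched with order statistics: the binomial count of observations below the quantile is approximated by a normal distribution to pick the bounding indices. The Pareto sample below is invented and the construction is a generic one, not necessarily the dissertation's exact formulation:

```python
import random
from math import ceil, floor, sqrt
from statistics import NormalDist

def quantile_ci(sample, p, conf=0.95):
    """Point estimate and normal-approximation CI for the p-th quantile,
    built from order statistics of the sorted sample."""
    xs = sorted(sample)
    n = len(xs)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    se = sqrt(n * p * (1 - p))               # SD of the binomial count
    lo_idx = max(0, floor(n * p - z * se) - 1)
    hi_idx = min(n - 1, ceil(n * p + z * se) - 1)
    return xs[ceil(n * p) - 1], xs[lo_idx], xs[hi_idx]

# invented heavy-tailed sample; estimating the 99th percentile
rng = random.Random(7)
data = [rng.paretovariate(1.5) for _ in range(5000)]
est, lo, hi = quantile_ci(data, 0.99)
```

For very extreme quantiles the binomial count becomes too sparse for this approximation, which is precisely where the likelihood-ratio and generalised gamma methods compared in the work earn their keep.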
|
69 |
Resampling Evaluation of Signal Detection and Classification: With Special Reference to Breast Cancer, Computer-Aided Detection and the Free-Response Approach. Bornefalk Hermansson, Anna, January 2007 (has links)
The first part of this thesis is concerned with trend modelling of breast cancer mortality rates. By using an age-period-cohort model, the relative contributions of period and cohort effects are evaluated once the unquestionable existence of the age effect is controlled for. The result of such a modelling gives indications in the search for explanatory factors. While this type of modelling is usually performed with 5-year period intervals, the use of 1-year period data, as in Paper I, may be more appropriate.

The main theme of the thesis is the evaluation of the ability to detect signals in x-ray images of breasts. Early detection is the most important tool to achieve a reduction in breast cancer mortality rates, and computer-aided detection systems can be an aid for the radiologist in the diagnosing process.

The evaluation of computer-aided detection systems includes the estimation of distributions. One way of obtaining estimates of distributions when no assumptions are at hand is kernel density estimation, or the adaptive version thereof that smoothes to a greater extent in the tails of the distribution, thereby reducing spurious effects caused by outliers. The technique is described in the context of econometrics in Paper II and then applied together with the bootstrap in the breast cancer research area in Papers III-V.

Here, estimates of the sampling distributions of different parameters are used in a new model for free-response receiver operating characteristic (FROC) curve analysis. Compared to earlier work in the field, this model benefits from the advantage of not assuming independence of detections in the images, and in particular, from the incorporation of the sampling distribution of the system's operating point.

Confidence intervals obtained from the proposed model with different approaches with respect to the estimation of the distributions and the confidence interval extraction methods are compared in terms of coverage and length of the intervals by simulations of lifelike data.
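The adaptive kernel density estimator described above, with wider bandwidths in the low-density tails, can be sketched in a few lines. This is a generic Abramson-style construction using Silverman's rule for the pilot bandwidth, not the thesis's exact implementation, and the sample is invented:

```python
from math import exp, log, pi, sqrt
from statistics import stdev

def gauss_kde(data, h):
    """Fixed-bandwidth Gaussian kernel density estimate."""
    n = len(data)
    def f(x):
        return sum(exp(-0.5 * ((x - xi) / h) ** 2)
                   for xi in data) / (n * h * sqrt(2 * pi))
    return f

def adaptive_kde(data):
    """Abramson-style adaptive KDE: local bandwidths h0 * lambda_i widen
    where a pilot estimate says the density is low, i.e. in the tails."""
    n = len(data)
    h0 = 1.06 * stdev(data) * n ** -0.2       # Silverman's rule for the pilot
    pilot = gauss_kde(data, h0)
    pilots = [pilot(xi) for xi in data]
    g = exp(sum(log(p) for p in pilots) / n)  # geometric mean of pilot values
    lam = [sqrt(g / p) for p in pilots]       # Abramson local factors
    def f(x):
        return sum(exp(-0.5 * ((x - xi) / (h0 * li)) ** 2) / (h0 * li)
                   for xi, li in zip(data, lam)) / (n * sqrt(2 * pi))
    return f

# invented sample with a heavy right tail
data = [0.1, 0.3, 0.5, 0.8, 1.0, 1.3, 2.1, 3.5]
f_hat = adaptive_kde(data)
```

Because each kernel is normalised by its own local bandwidth, the adaptive estimate still integrates to one; only the smoothing is redistributed toward the tails, where outliers would otherwise leave spurious bumps.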
|
70 |
Η Bootstrap σαν μέσο αντιμετώπισης του θορύβου της DEA / The Bootstrap as a Means of Addressing the Noise of DEA. Γιαννακόπουλος, Βασίλειος, 03 May 2010 (has links)
This thesis examines the Bootstrap method as a way of addressing the shortcomings of DEA in estimating the technical efficiency of decision-making units. More precisely, the bootstrap is studied as a means of estimating the bias and confidence intervals of the efficiency scores that DEA produces. As will become clear, DEA, as an application of linear programming, is a non-parametric method for estimating technical efficiency; its drawbacks are the lack of accompanying statistical measures and the inability to distinguish noise from inefficiency. The bootstrap, in turn, is a repeated application of DEA intended to resolve these problems. The purpose of this thesis is to assess the degree to which the bootstrap accomplishes that task. To this end, real data on fish farms operating in Greek territory are used, and the calculations are carried out with the programs DEAP and PIM – DEA v2.0.
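The bootstrap-on-a-frontier idea can be illustrated with a deliberately simplified stand-in: a single-input, single-output FDH (free disposal hull) frontier instead of the DEA linear programs, naive resampling instead of a smoothed bootstrap, and invented data. Naive resampling is known to be inconsistent for frontier estimators, so this is purely a sketch of the mechanics:

```python
import random

def fdh_efficiency(units, j):
    """Input-oriented FDH efficiency of unit j (single input, single
    output): the largest radial input contraction still dominated by an
    observed unit producing at least as much output."""
    xj, yj = units[j]
    return min(x / xj for x, y in units if y >= yj)

def bootstrap_bias(units, j, reps=500, seed=3):
    """Naive bootstrap bias estimate for unit j's efficiency: the frontier
    is re-estimated from each resample, with unit j always kept in."""
    rng = random.Random(seed)
    n = len(units)
    base = fdh_efficiency(units, j)
    scores = []
    for _ in range(reps):
        sample = [units[rng.randrange(n)] for _ in range(n)] + [units[j]]
        scores.append(fdh_efficiency(sample, len(sample) - 1))
    return sum(scores) / reps - base

# invented toy data: (input, output) per decision-making unit
units = [(2, 4), (3, 7), (4, 8), (5, 8), (6, 12), (3, 5)]
eff = fdh_efficiency(units, 3)        # unit with input 5, output 8
bias = bootstrap_bias(units, 3)       # resamples omit dominating units,
                                      # so scores drift up: bias >= 0
```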
|