Confidence Interval Estimation for Distribution Systems Power Consumption by Using the Bootstrap Method

Cugnet, Pierre 17 July 1997 (has links)
The objective of this thesis is to estimate, for a distribution network, confidence intervals containing the values of nodal hourly power consumption and nodal maximum power consumption per customer where these are not measured. The values of nodal hourly power consumption are needed in the operational as well as the planning stages to carry out load flow studies. The values of nodal maximum power consumption per customer are used to solve planning problems such as transformer sizing. Confidence interval estimation was preferred to point estimation because it takes into account the large variability of the consumption values. A computationally intensive statistical technique, namely the bootstrap method, is used to estimate these intervals; it allows us to replace idealized model assumptions for the load distributions with model-free analyses. Two studies were carried out. The first uses the original nonparametric bootstrap method to calculate a 95% confidence interval for nodal hourly power consumption; this estimation is carried out for a given node and a given hour of the year. The second uses the parametric bootstrap method to infer a 95% confidence interval for nodal maximum power consumption per customer, for a given node and a given month. Simulation results obtained on a real data set are presented and discussed. / Master of Science
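As a rough illustration of the nonparametric bootstrap used in the first study, the sketch below resamples a set of hourly readings for one node and forms a 95% percentile interval. The data, the sample size, and the percentile construction are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_percentile_ci(sample, stat=np.mean, n_boot=2000, alpha=0.05):
    # Resample with replacement, recompute the statistic, take empirical quantiles.
    boot = np.array([stat(rng.choice(sample, size=len(sample), replace=True))
                     for _ in range(n_boot)])
    return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))

# Hypothetical hourly consumption readings (kW) for one node at a fixed hour.
readings = rng.gamma(shape=2.0, scale=15.0, size=52)
print(bootstrap_percentile_ci(readings))
```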

Statistical inference for inequality measures based on semi-parametric estimators

Kpanzou, Tchilabalo Abozou December 2011 (has links)
Thesis (PhD)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: Measures of inequality, also used as measures of concentration or diversity, are very popular in economics, especially for measuring inequality in income or wealth within a population and between populations. They also have applications in many other fields, e.g. ecology, linguistics, sociology, demography, epidemiology and information science. A large number of measures have been proposed; examples include the Gini index, the generalized entropy, the Atkinson and the quintile share ratio measures. Inequality measures are inherently dependent on the tails of the underlying distribution, and their estimators are therefore typically sensitive (nonrobust) to data from these tails. For example, income distributions often exhibit a long right tail, leading to the frequent occurrence of large values in samples; since the usual estimators are based on the empirical distribution function, they are usually nonrobust to such large values. Because heavy-tailed distributions often occur in real data sets, remedial action is needed in such cases: either trimming the extreme data, or modifying the traditional estimator to make it more robust to extreme observations. In this thesis we follow the second option, modifying the traditional empirical distribution function to make it more robust. Using results from extreme value theory, we develop more reliable distribution estimators in a semi-parametric setting. These new estimators of the distribution then form the basis for more robust estimators of the measures of inequality, developed for the four most popular classes of measures, viz. Gini, generalized entropy, Atkinson and quintile share ratio. Properties of these estimators are studied, especially via simulation, and approximate confidence intervals are derived using limiting distribution theory and the bootstrap methodology. Through the various simulation studies, the proposed estimators are compared to the standard ones in terms of mean squared error, relative impact of contamination, confidence interval length and coverage probability; in these studies the semi-parametric methods show a clear improvement over the standard ones. The theoretical properties of the quintile share ratio have not been studied much, so we also derive its influence function as well as the limiting normal distribution of its nonparametric estimator; these results had not previously been published. To illustrate the methods developed, we apply them to a number of real data sets and show how they can be used in practice for inference; to choose between candidate parametric distributions, use is made of a measure of sample representativeness from the literature. These illustrations show that the proposed methods can be used to reach satisfactory conclusions in real problems. / Abstract also in Afrikaans.
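For orientation, the sketch below implements the standard empirical Gini estimator and a basic bootstrap percentile interval, i.e. the nonrobust baseline that the thesis's semi-parametric estimators are designed to improve on; it is not the semi-parametric method itself, and the Pareto sample is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def gini(x):
    # Empirical Gini index: G = 2*sum(i * x_(i)) / (n * sum(x)) - (n + 1)/n.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1.0) / n

def bootstrap_ci(x, stat, n_boot=2000, alpha=0.05):
    reps = [stat(rng.choice(x, size=len(x), replace=True)) for _ in range(n_boot)]
    return tuple(np.quantile(reps, [alpha / 2, 1 - alpha / 2]))

# Heavy-tailed "income" sample, where the largest observations dominate the estimate.
incomes = (rng.pareto(a=2.5, size=500) + 1.0) * 10_000
print(gini(incomes), bootstrap_ci(incomes, gini))
```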

Edgeworth-corrected small-sample confidence intervals for ratio parameters in linear regression

Binyavanga, Kamanzi-wa March 2002 (has links)
Dissertation (PhD)--Stellenbosch University, 2002. / ENGLISH ABSTRACT: In this thesis we construct a central confidence interval for a smooth scalar non-linear function of the parameter vector β in a single general linear regression model Y = Xβ + ε. We do this by first developing an Edgeworth expansion for the distribution function of a standardised point estimator; the confidence interval is then constructed from this expansion. Simulation studies reported at the end of the thesis show the interval to perform well in many small-sample situations. Central to the development of the Edgeworth expansion is our use of the index notation which, in statistics, has been popularised by McCullagh (1984, 1987). The contributions made in this thesis are of two kinds. We revisit the complex McCullagh index notation, modify and extend it in certain respects, and repackage it in a manner that is more accessible to other researchers. As for the new contributions, in addition to introducing a new small-sample confidence interval, we extend the theory of stochastic polynomials (SP) in three respects. First, a method, which we believe to be the simplest and most transparent to date, is proposed for deriving their cumulants. Secondly, the theory of the cumulants of SPs is developed both in the context of Edgeworth expansions and in the regression setting. Thirdly, our new method enables us to propose a natural alternative to the method of Hall (1992a, 1992b) for skewness reduction in Edgeworth expansions. / Abstract also in Afrikaans.
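For context, the first-order Edgeworth expansion for the distribution of a standardised estimator has the textbook form below (this is the generic one-term expansion, not the thesis's regression-specific development in index notation):

```latex
% T_n = \sqrt{n}\,(\hat\theta - \theta)/\sigma, with standardised third cumulant \kappa_3:
P(T_n \le x) = \Phi(x) - \phi(x)\,\frac{\kappa_3 (x^2 - 1)}{6\sqrt{n}} + O(n^{-1})
```

Inverting such an expansion for its quantiles yields the skewness-corrected endpoints that distinguish an Edgeworth-corrected interval from the usual normal-approximation interval.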

Algorithmic Developments in Monte Carlo Sampling-Based Methods for Stochastic Programming

Pierre-Louis, Péguy January 2012 (has links)
Monte Carlo sampling-based methods are frequently used in stochastic programming when an exact solution is not possible. In this dissertation, we develop two sets of Monte Carlo sampling-based algorithms to solve classes of two-stage stochastic programs. These algorithms follow a sequential framework in which a candidate solution is generated and evaluated at each step. If the solution is of the desired quality, the algorithm stops and outputs the candidate solution along with an approximate (1 − α) confidence interval on its optimality gap. The first set of algorithms, which we refer to as fixed-width sequential sampling methods, generate a candidate solution by solving a sampling approximation of the original problem. Using an independent sample, a confidence interval is built on the optimality gap of the candidate solution. The procedures stop when the confidence interval width plus an inflation factor falls below a pre-specified tolerance ε. We present two variants: fully sequential procedures that use deterministic, non-decreasing sample size schedules, and a variant in which the sample size at the next iteration is determined from current statistical estimates. We establish the desired asymptotic properties and present computational results. In the second set of sequential algorithms, we combine deterministically valid and sampling-based bounds. These algorithms, labeled sampling-based sequential approximation methods, exploit characteristics of the models, such as convexity, to generate candidate solutions and deterministic lower bounds through Jensen's inequality. A point estimate of the optimality gap is calculated by generating an upper bound through sampling. The procedure stops when the point estimate of the optimality gap falls below a fraction of its sample standard deviation. We show asymptotically that this algorithm finds a solution with the desired quality tolerance. We present variance reduction techniques and show their effectiveness through an empirical study.
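A heavily simplified sketch of the fixed-width sequential idea described above: evaluate a candidate on a batch of sampled scenarios, form a confidence bound on its optimality gap, and grow the sample on a deterministic schedule until the bound drops below a tolerance. The gap sampler, the schedule, and the one-sided normal bound are placeholder assumptions, not the dissertation's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def sequential_sampling(gap_sampler, eps=0.05, n0=100, growth=1.5, max_iter=20):
    # Stop when (estimated gap + one-sided 95% CI half-width) < eps.
    n = n0
    for _ in range(max_iter):
        gaps = gap_sampler(n)                       # per-scenario gap observations
        half = 1.6449 * np.std(gaps, ddof=1) / np.sqrt(n)
        bound = np.mean(gaps) + half
        if bound < eps:
            return np.mean(gaps), bound, n          # candidate accepted
        n = int(n * growth)                         # deterministic increasing schedule
    return np.mean(gaps), bound, n

# Placeholder gap sampler: pretend the per-scenario gaps are Exp(mean=0.02).
print(sequential_sampling(lambda n: rng.exponential(0.02, size=n)))
```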

Bootstrap standard error and confidence intervals for the correlation corrected for indirect range restriction: a Monte Carlo study

January 2006 (has links)
Li Johnson Ching Hong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 40-42). / Abstracts in English and Chinese. / Contents: Chapter 1, Introduction (Thorndike's three formulae for correcting a correlation for range restriction; significance of Case 3; importance of standard errors and confidence intervals; the research gap in estimating the standard error of rc and constructing confidence intervals for ρxy; objectives of the present study). Chapter 2, Bootstrap Method (the confidence intervals constructed for the present study). Chapter 3, A Proposed Procedure for Estimating the Standard Error of rc and Constructing Confidence Intervals. Chapter 4, Methods (model specifications; procedure). Chapter 5, Assessment. Chapter 6, Results (accuracy of the average correlation corrected for IRR; empirical standard deviation of rc; accuracy of the standard error estimate; accuracy of the confidence intervals). Chapter 7, Discussion and Conclusion. Chapter 8, Limitations and Further Directions.
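The correction at issue is Thorndike's Case 3, for selection that acted indirectly through a third variable z. The sketch below pairs the standard textbook form of that formula with a bootstrap standard error; the notation (u as the ratio of unrestricted to restricted standard deviations of z) and the bootstrap design are assumptions, since the thesis's exact procedure is not reproduced in this record.

```python
import numpy as np

rng = np.random.default_rng(3)

def correct_case3(r_xy, r_xz, r_yz, u):
    # Thorndike Case 3 (textbook form), u = S_z / s_z for the selection variable z.
    k = u**2 - 1.0
    return (r_xy + r_xz * r_yz * k) / np.sqrt(
        (1.0 + r_xz**2 * k) * (1.0 + r_yz**2 * k))

def bootstrap_se(x, y, z, u, n_boot=2000):
    # Resample cases, recompute all three correlations, re-apply the correction.
    n, reps = len(x), np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n, size=n)
        c = np.corrcoef(np.vstack([x[i], y[i], z[i]]))
        reps[b] = correct_case3(c[0, 1], c[0, 2], c[1, 2], u)
    return reps.std(ddof=1)
```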

Measurement Error in Progress Monitoring Data: Comparing Methods Necessary for High-Stakes Decisions

Bruhl, Susan May 2012 (has links)
Support for the use of progress monitoring results for high-stakes decisions is emerging in the literature, but few studies support the reliability of the measures for this level of decision-making. What little research exists is limited to oral reading fluency measures, and their reliability for progress monitoring (PM) is not supported. This dissertation explored methods rarely applied in the literature for summarizing and analyzing progress monitoring results for medium- to high-stakes decisions. The study was conducted using extant data from 92 "low performing" third graders who were progress monitored with mathematics concepts and applications measures. The results identified 1) the number of weeks needed to reliably assess growth on the measure; 2) whether slopes differed when results were analyzed with parametric or nonparametric analyses; 3) the reliability of growth; and 4) the extent to which the group did or did not meet the parametric assumptions inherent in the ordinary least squares regression model. The results indicate that reliable growth estimates from static scores can be obtained in as few as 10 weeks of progress monitoring. It was also found that, within this dataset, growth estimated through parametric and nonparametric analyses was similar. These findings are limited to the dataset analyzed in this study but provide promising methods not widely known among practitioners and rarely applied in the PM literature.
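The record does not name the nonparametric analysis used; a common nonparametric counterpart to the OLS growth slope for such data is the Theil-Sen estimator, sketched below under that assumption, with hypothetical weekly probe scores.

```python
import numpy as np
from itertools import combinations

def ols_slope(weeks, scores):
    # Parametric growth estimate: least-squares slope.
    return np.polyfit(weeks, scores, 1)[0]

def theil_sen_slope(weeks, scores):
    # Nonparametric growth estimate: median of all pairwise slopes.
    pairs = combinations(range(len(weeks)), 2)
    return float(np.median([(scores[j] - scores[i]) / (weeks[j] - weeks[i])
                            for i, j in pairs if weeks[i] != weeks[j]]))

# Hypothetical 10 weeks of mathematics concepts-and-applications probe scores.
weeks = np.arange(1, 11, dtype=float)
scores = np.array([8, 9, 9, 11, 10, 12, 13, 12, 14, 15], dtype=float)
print(ols_slope(weeks, scores), theil_sen_slope(weeks, scores))
```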

Modified Profile Likelihood Approach for Certain Intraclass Correlation Coefficient

Liu, Huayu 20 April 2011 (has links)
In this paper we consider the problem of constructing confidence intervals and lower bounds for the intraclass correlation coefficient in an interrater reliability study where the raters are randomly selected from a population of raters. The likelihood function of the interrater reliability is derived and simplified, and the profile-likelihood-based approach is readily available for computing confidence intervals for the interrater reliability. Unfortunately, the confidence intervals computed using the profile likelihood function are in general too narrow to have the desired coverage probabilities. From a practical point of view, a conservative approach, provided it is at least as precise as any existing method, is preferable, since it gives correct results with a probability higher than claimed. Under this rationale, we propose the so-called modified profile likelihood approach in this paper. A simulation study shows that the proposed method in general performs better than currently used methods.
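The generic mechanics of a profile likelihood interval (the set of parameter values whose profile log-likelihood lies within half a chi-square quantile of the maximum) can be sketched as below. This is the standard, unmodified construction that the paper finds too narrow, shown on a toy normal-mean example rather than the intraclass correlation model.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def profile_ci(profile_loglik, theta_hat, lo, hi, alpha=0.05):
    # Interval: { theta : 2*(l(theta_hat) - l_p(theta)) <= chi2_{1, 1-alpha} }.
    cut = profile_loglik(theta_hat) - 0.5 * chi2.ppf(1 - alpha, df=1)
    g = lambda t: profile_loglik(t) - cut
    return brentq(g, lo, theta_hat), brentq(g, theta_hat, hi)

# Toy example: normal mean with the variance profiled out,
# l_p(mu) = -(n/2) * log(mean((x - mu)^2)) up to a constant.
x = np.random.default_rng(4).normal(1.0, 2.0, size=30)
lp = lambda mu: -0.5 * len(x) * np.log(np.mean((x - mu) ** 2))
print(profile_ci(lp, x.mean(), x.mean() - 5.0, x.mean() + 5.0))
```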

New Non-Parametric Confidence Interval for the Youden Index

Zhou, Haochuan 18 July 2008 (has links)
The Youden index, a main summary index for the Receiver Operating Characteristic (ROC) curve, is a comprehensive measure of the effectiveness of a diagnostic test. For a continuous-scale diagnostic test, the optimal cut-point for declaring disease positive is the cut-point that maximizes the sum of sensitivity and specificity. Finding the Youden index of the test is therefore equivalent to maximizing the sum of sensitivity and specificity over all possible values of the cut-point. In this thesis, we propose a new non-parametric confidence interval for the Youden index. Extensive simulation studies are conducted to compare the relative performance of the new interval with existing intervals for the index. Our simulation results indicate that the newly developed non-parametric method performs as well as the existing parametric method and has better finite-sample performance than the existing non-parametric methods. The new method is flexible and easy to implement in practice. A real example is also used to illustrate the application of the proposed interval.
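A minimal sketch of the empirical Youden index and a basic percentile-bootstrap interval follows; the thesis's proposed interval is a different construction, and the normal biomarker samples here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def youden_index(diseased, healthy):
    # J = max over cut-points c of sensitivity(c) + specificity(c) - 1.
    cuts = np.unique(np.concatenate([diseased, healthy]))
    return max(np.mean(diseased > c) + np.mean(healthy <= c) - 1.0 for c in cuts)

def bootstrap_ci(diseased, healthy, n_boot=2000, alpha=0.05):
    js = [youden_index(rng.choice(diseased, size=len(diseased)),
                       rng.choice(healthy, size=len(healthy)))
          for _ in range(n_boot)]
    return tuple(np.quantile(js, [alpha / 2, 1 - alpha / 2]))

# Hypothetical biomarker values for diseased and healthy groups.
d = rng.normal(2.0, 1.0, size=80)
h = rng.normal(0.0, 1.0, size=120)
print(youden_index(d, h), bootstrap_ci(d, h))
```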

Simultaneous confidence bands for nonparametric, polynomial-trigonometric regression estimators

Duggins, Jonathan W. January 2003 (has links) (PDF)
Thesis (M.S.)--University of North Carolina at Wilmington, 2003. / Includes bibliographical references (leaf : [40]).

Improved confidence intervals for the variance of a normal population [Βελτιωμένα διαστήματα εμπιστοσύνης για την διασπορά κανονικού πληθυσμού]

Ταφιάδη, Μαρία 25 May 2009 (has links)
This master's thesis belongs to the field of statistical decision theory; its purpose is the construction of improved confidence intervals for the variance of a normal population. Such intervals were studied by Shorrock (1990). The usual confidence interval for a normal variance is a function of the sample variance alone; Shorrock, however, constructed intervals that also depend on the sample mean. The new intervals have the same length as the classical interval based only on the sample variance, but uniformly higher coverage probability. We first examine in detail the well-known confidence intervals (equal-tails, minimum-length, likelihood-ratio and unbiased) so that the necessary comparisons can be made with the intervals constructed later. The first new interval is built by a procedure analogous to the methodology behind the Stein (1964) point estimator, and is therefore called a Stein-type confidence interval. The next is based on the methodology behind the Brown (1968) estimator and is called a Brown-type confidence interval. Then, in analogy with the Brewster and Zidek (1974) estimators, the previous interval is generalized to the Brewster-Zidek confidence interval, which in turn is shown to be a generalized Bayes interval. Comparing the coverage probability of these new intervals with that of the classical confidence interval shows that it is uniformly higher for the new intervals. / Abstract also in Greek.
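For reference, the classical equal-tails interval that these improved intervals dominate is the chi-square interval based on the sample variance alone; the sketch below computes it and checks its coverage by simulation (the sample size and true variance are illustrative).

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)

def classical_var_ci(x, alpha=0.05):
    # Equal-tails interval: [(n-1)s^2 / chi2_{1-a/2}, (n-1)s^2 / chi2_{a/2}].
    n, s2 = len(x), np.var(x, ddof=1)
    return ((n - 1) * s2 / chi2.ppf(1 - alpha / 2, n - 1),
            (n - 1) * s2 / chi2.ppf(alpha / 2, n - 1))

# Coverage check against the true variance; should be close to 0.95.
sigma2 = 4.0
hits = sum(lo <= sigma2 <= hi
           for lo, hi in (classical_var_ci(rng.normal(0.0, 2.0, size=15))
                          for _ in range(5000)))
print(hits / 5000)
```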
