11

A power study of multiple range and multiple F tests

Wine, R. Lowell 12 January 2010 (has links)
In this study of multiple comparison tests the following topics were discussed: (i) extension to the general case of certain properties and results which previously had been given for three and four means only, (ii) power vectors and average power, (iii) expressions of power for the multiple range tests and for the multiple F test involving only three means, and (iv) methods for evaluating power. These four topics are amplified in order in the four paragraphs below. A set of recursion formulas was obtained for enumerating the decision patterns for n means. The quantity x_i was given by an equation (not reproduced in this record), where i = 1, 2, ..., n-1. In (iv), formulas were derived which express d_{j,i+1} as a function of the x_i. This made possible the writing of the bounding equations for any decision region involving n means. The regions (1,2), (1,2,3), (1,2,4), ..., (1,2,...,n) for the multiple range tests were described in the sample space of differences among n means. / Ph. D.
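The abstract discusses expressions of power for multiple range tests and methods for evaluating power. As a rough modern illustration of how such a power could be evaluated by Monte Carlo simulation (a generic studentized-range sketch for three means, not Wine's derivation; the true means, sample size, and significance level are assumed values):

```python
# Monte Carlo sketch of power for a range-type test over three means.
# Generic studentized-range illustration, not Wine's method; mu, n,
# sigma, and alpha below are assumptions for the example.
import numpy as np
from scipy.stats import studentized_range

rng = np.random.default_rng(1)
mu = np.array([0.0, 0.5, 1.0])      # hypothetical true means
n, sigma, alpha, reps = 10, 1.0, 0.05, 5000
df = len(mu) * (n - 1)              # error degrees of freedom
q_crit = studentized_range.ppf(1 - alpha, 3, df)

hits = 0
for _ in range(reps):
    samples = rng.normal(mu, sigma, size=(n, 3))   # n observations per group
    means = samples.mean(axis=0)
    s2 = samples.var(axis=0, ddof=1).mean()        # pooled error variance
    se = np.sqrt(s2 / n)                           # std. error of a group mean
    # reject homogeneity of the extreme means if the full range is large
    if (means.max() - means.min()) / se > q_crit:
        hits += 1

print("estimated power of the full-range comparison:", hits / reps)
```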
12

A study of statistical and deterministic models for the prediction of the composition of a mixture

Myers, Raymond H. 07 April 2010 (has links)
This thesis is a study of various physical and statistical models which may be useful for the prediction of the composition of a ternary liquid mixture. The particular mixture considered in this study was the solvent system consisting of nitroglycerine (NG), 2-nitrodiphenylamine (2NDPA), and triacetin (TA). Several models were investigated for their adequacy and closeness of fit. An attempt has been made to relate the actual composition to a few easily measurable quantities, namely refractive index, density, and the separate analysis of 2NDPA. Deterministic models relating the concentration of each component in the mixture to the physical determinations mentioned above were considered first. These models are based on the known theory of physical chemistry. The deterministic model chosen as "best" in terms of the smallness of the error of prediction estimates the composition from the determination of density and the spectrophotometer analysis of 2NDPA. Since the latter analysis is a quick and accurate determination of the 2NDPA content, and since the content of the third component can be determined by complementing to 100 percent, the models have been formulated in terms of the concentration of only one component, namely NG. The statistical models under investigation are divided into activity models and regression models. The activity model is a combination of chemical and statistical theory, while the simple regression model represents the approach a statistician might take if he disregarded the physical or chemical theory involved. Two activity models have been discussed, the first assuming the activity of the mixture constant and the second assuming the activity of the mixture to be a weighted sum of the activities of the three components. Tests of hypotheses are made to determine whether the activity models result in a significant reduction in error over that of the "best" deterministic model. The investigation showed that the model formulated assuming constant activity of the mixture results in the smallest error of estimation among all models under study; thus it has been used as a basis for the preparation of control charts. The linear regression models, constructed with various functions of density, refractive index, and spectrophotometer analysis as independent variables, produced errors of estimation above those for the deterministic model. Chapter IV presents the summary of the thesis and the translation of the findings into actual control charts; on the basis of this chapter, a technician can easily determine the estimate of the composition of the mixture and the attached 99 percent confidence bounds. This chapter contains the charts together with instructions for their use and numerical examples. / Master of Science
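As a minimal sketch of the regression flavour of model discussed above, the following fits an ordinary least-squares model predicting the NG concentration from density and the spectrophotometer determination of 2NDPA; every number in it is a fabricated placeholder, not data from the thesis:

```python
# Least-squares sketch of a composition-prediction regression:
# %NG as a linear function of density and the 2NDPA determination.
# All data below are fabricated placeholders, not thesis values.
import numpy as np

density = np.array([1.35, 1.36, 1.37, 1.38, 1.39])   # hypothetical
ndpa    = np.array([1.1, 1.0, 0.9, 0.8, 0.7])        # hypothetical % 2NDPA
ng      = np.array([38.0, 39.5, 41.2, 42.8, 44.1])   # hypothetical % NG

X = np.column_stack([np.ones_like(density), density, ndpa])
beta, *_ = np.linalg.lstsq(X, ng, rcond=None)

resid = ng - X @ beta
s = np.sqrt(resid @ resid / (len(ng) - X.shape[1]))  # residual std. error
print("coefficients:", beta, "residual std. error:", s)
# TA is then obtained by complementing: %TA = 100 - %NG - %2NDPA
```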
13

Determining the most appropriate sampling interval for a Shewhart X-chart

Vining, G. Geoffrey January 1986 (has links)
A common problem encountered in practice is determining when it is appropriate to change the sampling interval for control charts. This thesis examines this problem for Shewhart X̅ charts. Duncan's economic model (1956) is used to develop a relationship between the most appropriate sampling interval and the present rate of "disturbances," where a disturbance is a shift to an out-of-control state. A procedure is proposed which switches the interval to convenient values whenever a shift in the rate of disturbances is detected. An example using simulation demonstrates the procedure. / M.S.
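The economic-design idea behind Duncan's model can be caricatured in a few lines: the hourly cost of sampling falls as the interval h grows, while the expected cost of running out of control undetected rises with it. The cost function and every parameter value below are simplifying assumptions for illustration, not Duncan's full 1956 model:

```python
# Simplified economic trade-off for the X-bar chart sampling interval.
# A caricature of the economic-design idea, not Duncan's full model;
# all parameter values are assumptions.
import numpy as np
from scipy.stats import norm

n, k, shift = 5, 3.0, 1.0        # sample size, control limit, shift (in sigmas)
# probability a single sample signals after the shift
power = norm.sf(k - shift * np.sqrt(n)) + norm.cdf(-k - shift * np.sqrt(n))

lam  = 0.02     # disturbances per hour (assumed)
a, b = 1.0, 0.5 # fixed and per-unit sampling cost (assumed)
M    = 100.0    # hourly cost of operating out of control (assumed)

h = np.linspace(0.25, 8, 200)    # candidate sampling intervals, hours
cost = (a + b * n) / h + lam * M * (h / power)  # sampling cost + delay cost
print("approximately optimal interval: %.2f hours" % h[np.argmin(cost)])
```

The minimizing interval shortens as the disturbance rate lam grows, which is the effect the proposed switching procedure exploits.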
14

Estimation in truncated distributions

Furrow, Linda Joyce January 1968 (has links)
When some population values are completely excluded from observation, the distribution from which the observations came is said to be truncated. Estimation of the parameters of truncated distributions has been an open field for research. This thesis examines the developments which have taken place in this area, giving the major writers and the methods used by them to obtain estimators. A. C. Cohen is responsible for much work involving the maximum likelihood procedure. Using the method of moments and several methods of their own, Rider, Plackett, Sampford, Moore, Des Raj, and Halperin have made significant contributions. The Poisson, normal, binomial, negative binomial, and gamma distributions are included in the investigation, and in some cases asymptotic variances are given along with the estimators. Though much work has been done, many things remain to be investigated. Only a small number of distributions have been dealt with, and all multivariate distributions other than the normal lack any investigation. It is not known how the estimators are affected by small sample sizes, and with the aid of the computer their variances can be examined. A new problem arises when the points of truncation are not clearly defined, and complicated equations often make estimators difficult to find. / M.S.
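As a small illustration of the kind of estimation problem surveyed here, the following sketch fits a left-truncated normal distribution by maximum likelihood using modern scipy tooling (the truncation point is taken as known, and all parameter values are assumptions; this is not any particular author's original computation):

```python
# Maximum likelihood estimation for a left-truncated normal with known
# truncation point t. Illustrative sketch with assumed parameter values.
import numpy as np
from scipy.stats import truncnorm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = 0.0                                    # known left-truncation point
mu_true, sigma_true = 1.0, 2.0
a = (t - mu_true) / sigma_true             # scipy's standardized lower bound
data = truncnorm.rvs(a, np.inf, loc=mu_true, scale=sigma_true,
                     size=500, random_state=rng)

def negloglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)              # keep sigma positive
    a = (t - mu) / sigma
    return -truncnorm.logpdf(data, a, np.inf, loc=mu, scale=sigma).sum()

fit = minimize(negloglik, x0=[data.mean(), np.log(data.std())])
print("MLE of (mu, sigma):", fit.x[0], np.exp(fit.x[1]))
```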
15

Sample and counting variations associated with x-ray fluorescence analysis

Davis, Robert Loyal January 1966 (has links)
M. S.
16

Experimental evaluation of the efficiencies of certain non-parametric statistics

Cleaver, Frederick William January 1950 (has links)
M.S.
17

Some considerations of an optimum sample size for a one-stage sampling procedure

Zakich, Daniel 16 February 2010 (has links)
The purpose of this work is to determine an optimum sample size to be used for deciding between two methods (populations) to choose for future production. The procedure involves the formulation of a loss function, expressing the expected loss due to choosing the population with the smaller mean as a function of the difference between the population means, the amount to be produced, and the cost of sampling. A minimax procedure is applied to obtain the optimum sample size. Since the function does not lend itself conveniently to mathematical treatment, special cases involving the difference between the means are considered and an optimum sample size is found for these cases. In all cases, the optimum sample size is an explicit function of the amount to be produced, the cost of sampling, and the standard deviation. / Master of Science
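A hedged sketch of the trade-off described above: the expected loss combines the cost of selecting the population with the smaller mean (proportional to the mean difference and the amount to be produced) with the cost of sampling, and a minimax-style choice is made over a few special cases of the mean difference. The specific loss function and all parameter values are illustrative assumptions, not the thesis's exact formulation:

```python
# Minimax-style choice of sample size for selecting the better of two
# populations. Loss function and parameter values are assumptions.
import numpy as np
from scipy.stats import norm

A, c, sigma = 10_000.0, 1.0, 1.0     # production amount, unit sampling cost, std. dev.

def expected_loss(n, delta):
    # P(wrong choice) when picking the larger of two sample means,
    # each based on n observations with common standard deviation sigma
    p_wrong = norm.cdf(-delta * np.sqrt(n) / (sigma * np.sqrt(2)))
    return p_wrong * A * delta + 2 * c * n   # selection loss + sampling cost

n_grid = np.arange(2, 500)
deltas = [0.05, 0.1, 0.25, 0.5]      # special cases of the mean difference
worst = np.array([[expected_loss(n, d) for d in deltas]
                  for n in n_grid]).max(axis=1)
print("minimax-style optimum n over these cases:", n_grid[np.argmin(worst)])
```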
18

Seismicity Analyses Using Dense Network Data: Catalogue Statistics and Possible Foreshocks Investigated Using Empirical and Synthetic Data

Adamaki, Angeliki January 2017 (has links)
Precursors related to seismicity patterns are probably the most promising phenomena for short-term earthquake forecasting, although it remains unclear if such forecasting is possible. Foreshock activity has often been recorded, but its possible use as an indicator of coming larger events is still debated due to the limited number of unambiguously observed foreshocks. Seismicity data that are inadequate in volume or character might be one of the reasons foreshocks cannot easily be identified. One method used to investigate the possible presence of generic seismicity behavior preceding larger events is the aggregation of seismicity series. Sequences preceding mainshocks chosen from empirical data are superimposed, revealing an increasing average seismicity rate prior to the mainshocks. Such an increase could result from the tendency of seismicity to cluster in space and time, so the observed patterns could be of limited predictive value. Randomized tests using the empirical catalogues imply that the observed increasing rate is statistically significant compared to an increase due to simple clustering, indicating the existence of genuine foreshocks that are somehow mechanically related to their mainshocks. If network sensitivity increases, the identification of foreshocks as such may improve. The possibility of improved identification of foreshock sequences is tested using synthetic data, produced with specific assumptions about the earthquake process. Complications related to background activity and aftershock production are investigated numerically, in generalized cases and in data-based scenarios. Catalogues including smaller, and thereby more, earthquakes can probably contribute to a better understanding of earthquake processes and to the future of earthquake forecasting. An important aspect of such seismicity studies is the correct estimation of the empirical catalogue properties, including the magnitude of completeness (Mc) and the b-value. The potential influence of errors in the reported magnitudes in an earthquake catalogue on the estimation of Mc and the b-value is investigated using synthetic magnitude catalogues contaminated with Gaussian error. The effectiveness of different algorithms for Mc and b-value estimation is discussed. The sample size and the error level seem to affect the estimation of the b-value, with implications for the reliability of the assessment of the future rate of large events and thus of seismic hazard.
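The synthetic-catalogue experiment sketched in the abstract can be reproduced in miniature: generate Gutenberg-Richter magnitudes, contaminate them with Gaussian error, and compare b-value estimates from Aki's (1965) maximum-likelihood formula with the Utsu binning correction. The catalogue size, error level, and completeness magnitude below are assumed values:

```python
# Synthetic Gutenberg-Richter catalogue with Gaussian magnitude errors,
# b-value estimated by Aki's ML formula. Parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(42)
b_true, Mc, dM = 1.0, 1.0, 0.1           # true b-value, completeness, bin width
beta = b_true * np.log(10)

mags = Mc + rng.exponential(1 / beta, size=20_000)   # G-R magnitudes above Mc
mags = np.round(mags / dM) * dM                      # bin to 0.1 units
noisy = mags + rng.normal(0.0, 0.2, size=mags.size)  # Gaussian magnitude error

for label, m in [("error-free", mags), ("noisy", noisy)]:
    sample = m[m >= Mc]                              # apply the Mc cutoff
    # Aki (1965) ML estimator with the Utsu binning correction
    b_hat = np.log10(np.e) / (sample.mean() - (Mc - dM / 2))
    print(label, "b-estimate: %.3f (n=%d)" % (b_hat, sample.size))
```

Because the error pushes events back and forth across the Mc cutoff, the noisy estimate is systematically biased relative to the error-free one, the kind of effect on b-value reliability that the abstract refers to.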
19

Almost sure behavior for increments of U-statistics / Beschreibung der Fluktuation von Zuwächsen für U-Statistiken

Abujarad, Mohammed 18 January 2007 (has links)
No description available.
20

A PHENOMENOLOGICAL STUDY OF MATHEMATICS TEACHER EDUCATORS' EXPERIENCES RELATED TO AND PERCEPTIONS OF STATISTICS

Hogue, Mark D. 11 December 2012 (has links)
No description available.
