21

Bayesian hierarchical modeling for longitudinal frequency data

Jordon, Joseph. January 2005 (has links)
Thesis (M.S.)--Duquesne University, 2005. / Title from document title page. Abstract included in electronic submission form. Includes bibliographical references and abstract.
22

A study of character recognition using geometric moments under conditions of simple and non-simple loss

Tucker, N. D. January 1974 (has links)
The theory of Loss Functions is a fundamental part of Statistical Decision Theory and of Pattern Recognition. However, it is a subject which few have studied in detail. This thesis is an attempt to develop a simple character recognition process in which losses may be implemented when and where necessary. After a brief account of the history of Loss Functions and an introduction to elementary Decision Theory, some examples have been constructed to demonstrate how various decision boundaries approximate to the optimal boundary and what increase in loss would be associated with these sub-optimal boundaries. The results show that the Euclidean and Hamming distance discriminants can be sufficiently close approximations that the decision process may be legitimately simplified by the use of these linear boundaries. Geometric moments were adopted for the computer simulation of the recognition process because each moment is closely related to the symmetry and structure of a character, unlike many other features. The theory of Moments is discussed, in particular their geometrical properties. A brief description of the programs used in the simulation follows. Two different data sets were investigated, the first being hand-drawn capitals and the second machine-scanned lower-case typescript. This latter set was in the form of a message, which presented interesting programming problems in itself. The results from the application of different discriminants to these sets under conditions of simple loss are analysed, and the recognition efficiencies are found to vary between about 30% and 99% depending on the number of moments being used and the type of discriminant. Next, certain theoretical problems are studied. The relations between the rejection rate, the error rate and the rejection threshold are discussed both theoretically and practically. Also an attempt is made to predict theoretically the variation of efficiency with the number of moments used in the discrimination.
This hypothesis is then tested on the data already calculated and shown to be true within reasonable limits. A discussion of moment ordering by defining their resolving powers is undertaken, and it seems likely that the moments normally used unordered are among the most satisfactory. Finally, some time is devoted to methods of improving recognition efficiency. Information content is discussed along with the possibilities inherent in the use of digraph and trigraph probabilities. A breakdown of the errors in the recognition system adopted here is presented along with suggestions to improve the technique. The execution time of the different decision mechanisms is then inspected and a refined 2-stage method is produced. Lastly, the various methods by which a decision mechanism might be improved are united under a common loss matrix, formed by a product of matrices each of which represents a particular facet of the recognition problem.
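The moment features and Euclidean-distance discriminant described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the thesis's original programs: the binary-image layout, the moment orders retained, and the class prototypes are all assumptions.

```python
import numpy as np

def geometric_moments(img, max_order=2):
    """Raw geometric moments m_pq = sum over pixels of x^p * y^q * I(x, y),
    for all p + q <= max_order, as features describing character shape."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return np.array([(img * xs**p * ys**q).sum()
                     for p in range(max_order + 1)
                     for q in range(max_order + 1 - p)])

def classify_euclidean(feature, prototypes):
    """Linear-boundary discriminant: assign the class whose prototype
    (mean feature vector) is nearest in Euclidean distance."""
    labels = list(prototypes)
    dists = [np.linalg.norm(feature - prototypes[k]) for k in labels]
    return labels[int(np.argmin(dists))]
```

In this sketch the decision boundaries between classes are hyperplanes in moment space, which is the simplification the abstract argues loses little relative to the optimal boundary.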
23

Admissible and minimax procedures in statistical estimation

Unknown Date (has links)
"The purpose of this paper is to present two methods for proving that a statistical estimate is admissible and minimax. The Bayes method was introduced by Wald, and Theorems 2.2 and 2.3 illustrate the technique. The second way is due to Hodges and Lehmann and is based on a lower bound for the variance of an estimate. In Theorem 3.2 the Hodges-Lehmann method for proving admissibility is given. The last chapter is devoted to an extension of the Hodges and Lehmann technique to the Bhattacharyya bounds"--Introduction. / "August, 1954." / Typescript. / "Submitted to the Graduate Council of Florida State University in partial fulfillment of the requirements for the degree of Master of Science." / Advisor: A. V. Fend, Professor Directing Paper. / Includes bibliographical references (leaves 44-45).
24

The rationalization of industrial decision processes

Morris, William Thomas January 1956 (has links)
No description available.
25

Optimum pattern ordering based upon a partially ordered pattern set

El-Sawah, Mahmoud Samy Hassan January 1967 (has links)
No description available.
26

Admissible decision rules

McArthur, George E. (George Edwin) January 1969 (has links)
No description available.
27

Problems in decision theory

Cabilio, Paul. January 1968 (has links)
No description available.
28

Bayesian model selection using intrinsic priors for commonly used models in reliability and survival analysis

Kim, Seong W. January 1997 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 1997. / Typescript. Vita. Includes bibliographical references (leaves 96-98). Also available on the Internet.
30

Comparison of two drugs by multiple stage sampling using Bayesian decision theory

Smith, Armand V. 02 February 2010 (has links)
The general problem considered in this thesis is to determine an optimum strategy for deciding how to allocate the observations in each stage of a multi-stage experimental procedure between two binomial populations (e.g., the numbers of successes for two drugs) on the basis of the results of previous stages. After all of the stages of the experiment have been performed, one must make the terminal decision of which of the two populations has the higher probability of success. The optimum strategy is to be optimum relative to a given loss function; and a prior distribution, or weighting function, for the probabilities of success for the two populations is assumed. Two general classes of loss functions are considered, and it is assumed that the total number of observations in each stage is fixed prior to the experiment. In order to find the optimum strategy a method of analysis called extensive-form analysis is used. This is essentially a method for enumerating all the possible outcomes and corresponding strategies and choosing the optimum strategy for a given outcome. However, it is found that this method of analysis is much too long for all but small examples even when a digital computer is used. Because of this difficulty two alternative procedures, which are approximations to extensive-form analysis, are proposed. In the stage-by-stage procedure one assumes that at each stage he is at the last stage of his multi-stage procedure and allocates his observations to each of the two populations accordingly. It is shown that this is equivalent to assuming at each stage one has a one stage procedure. In the approximate procedure one (approximately) minimizes the posterior variance of the difference of the probabilities of success for the two populations at each stage. The computations for this procedure are quite simple to perform. The stage-by-stage procedure for the case that the two populations are normal with known variance rather than binomial is considered. 
It is then shown that the approximate procedure can be derived as an approximation to the stage-by-stage procedure when normal approximations to binomial distributions are used. The three procedures are compared with each other and with equal division of the observations in several examples by the computation of the probability of making the correct terminal decision for various values of the population parameters (the probabilities of success). It is assumed in these computations that the prior distributions of the population parameters are rectangular distributions and that the loss functions are symmetric; i.e., the losses are as great for one wrong terminal decision as they are for the other. These computations show that, for the examples studied, there is relatively little loss in using the stage-by-stage procedure rather than extensive-form analysis and relatively little gain in using the approximate procedure instead of equal division of the observations. However, there is a relatively large loss in using the approximate procedure rather than the stage-by-stage procedure when the population parameters are close to 0 or 1. At first it is assumed there are a fixed number of stages in the experiment, but later in the thesis this restriction is weakened to the restriction that only the maximum number of stages possible in the experiment is fixed and the experiment can be stopped at any stage before the last possible stage is reached. Stopping rules for the stage-by-stage and the approximate procedures are then derived. / Ph. D.
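The allocation idea behind the approximate procedure, splitting each stage's observations so as to shrink the posterior variance of the difference of the two success probabilities, can be sketched under independent Beta posteriors, where Var(p1 - p2) = Var(p1) + Var(p2). This is a hedged reconstruction, not the thesis's exact rule: the greedy one-observation-at-a-time split and the half-success pseudo-update used to anticipate an observation's effect are both assumptions.

```python
def beta_var(a, b):
    """Variance of a Beta(a, b) posterior for a binomial success probability."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

def allocate_stage(n, post1, post2):
    """Split n observations between two binomial populations, sending each
    observation to the arm whose Beta posterior is currently more uncertain.
    The pseudo-update adds half a success and half a failure, standing in
    for an 'average' observation's effect on the posterior.
    Returns (n1, n2), the counts allocated to each population."""
    (a1, b1), (a2, b2) = post1, post2
    n1 = 0
    for _ in range(n):
        if beta_var(a1, b1) >= beta_var(a2, b2):
            a1, b1 = a1 + 0.5, b1 + 0.5
            n1 += 1
        else:
            a2, b2 = a2 + 0.5, b2 + 0.5
    return n1, n - n1
```

With equal rectangular (Beta(1, 1)) priors the rule reduces to equal division, consistent with the abstract's finding that the approximate procedure gains little over equal division in that symmetric case; it departs from equal division only when one posterior is already much tighter than the other.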
