71

Cyclic Designs

Wolock, Fred Walter January 1964 (has links)
Ph. D.
72

Decision Criteria for Determining Unidimensionality

Hattie, John Allen 04 1900 (has links)
One of the fundamental assumptions of measurement theory is that a set of items forming a test is unidimensional. The purposes of this dissertation were (1) to review various methods for determining unidimensionality and to assess the rationale of those methods; (2) to attempt to clarify the term unidimensionality, and to show how it differs from other terms often used interchangeably; and (3) to assess the effectiveness of various indices proposed to determine unidimensionality.

Indices based on answer patterns, reliability, component and factor analysis, and latent traits were reviewed, and it was shown that many of these lacked a rationale, that for many the sampling distributions were not known, and that many were adjustments to an established index to take into account some criticism of it. Altogether 87 indices were reviewed.

It was demonstrated that unidimensionality often is used interchangeably with reliability, internal consistency, and homogeneity. Reliability was defined as the ratio of true score variance to observed score variance. Internal consistency has often been used as a synonym for unidimensionality, and it also denotes a group of methods that are intended to estimate reliability. Internal consistency methods are based on the variances and covariances of test items, and depend on only one administration of a test. Homogeneity seems to refer more specifically to the similarity of the item correlations, but the term is often used as a synonym for unidimensionality. Unidimensionality was defined as the existence of one latent trait underlying the data. The usefulness of the terms internal consistency and homogeneity was questioned.

A Monte Carlo simulation was conducted to assess the 87 indices under known conditions. A three-parameter, multivariate, logistic latent-trait model was used to generate item responses. Difficulty, guessing, discrimination, and the number of factors underlying the data were varied.

Many of the indices were highly correlated, some resulted in estimates outside their theoretical bounds, and most were particularly sensitive to the intercorrelations between the factors. Indices based on answer patterns, reliability, component analysis, linear factor analysis, and the one-parameter latent trait model were ineffective. The sums of absolute residuals from a nonlinear factor analysis (specifying one factor with cubic terms) and from two-parameter latent trait models (Christoffersson, 1975; McDonald, 1980; Muthen, 1978) were able to discriminate between cases with one latent trait and cases with more than one latent trait. / Doctor of Philosophy (PhD)
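The data-generating step of the simulation design can be illustrated with a short sketch. The following Python fragment is a minimal sketch, not Hattie's original program: the parameter ranges, the simple-structure loading scheme, and all names (`simulate_3pl`, `factor_corr`, etc.) are illustrative assumptions; the dissertation varied difficulty, guessing, discrimination, and factor structure systematically.

```python
import numpy as np

rng = np.random.default_rng(1964)

def simulate_3pl(n_persons=500, n_items=30, n_factors=2, factor_corr=0.3):
    """Generate 0/1 item responses from a multidimensional 3PL model (sketch)."""
    # Correlated latent traits; the intercorrelation between factors was
    # the condition the reviewed indices proved most sensitive to.
    cov = np.full((n_factors, n_factors), factor_corr)
    np.fill_diagonal(cov, 1.0)
    theta = rng.multivariate_normal(np.zeros(n_factors), cov, size=n_persons)

    # Assumption: each item loads on one factor only (simple structure).
    item_factor = rng.integers(0, n_factors, size=n_items)
    a = rng.uniform(0.5, 2.0, n_items)        # discrimination (illustrative range)
    b = rng.uniform(-2.0, 2.0, n_items)       # difficulty
    c = rng.uniform(0.0, 0.25, n_items)       # guessing

    z = a * (theta[:, item_factor] - b)       # person-by-item logits
    p = c + (1 - c) / (1 + np.exp(-1.7 * z))  # 3PL with scaling constant 1.7
    return (rng.random((n_persons, n_items)) < p).astype(int)

responses = simulate_3pl()
print(responses.shape, responses.mean())      # overall proportion correct
```

Any candidate unidimensionality index would then be computed on `responses` and compared across one-factor and multi-factor conditions.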
73

Inference on a genetic model

Bartko, John Jaroslav January 1962 (has links)
This dissertation deals with statistical inference on the mutation rates α₁ and α₂ of a population genetic model introduced by Moran [Proc. Camb. Phil. Soc. 54 (1958), pp. 60-71]. The deductive theory by approximate methods of such models has reached an advanced stage, but little has been done along the line of statistical inference. Moran's model is a model of the Markov chain type. It was selected for investigation because it is the only finite population genetic model for which the deductive theory by exact methods is well enough established to stimulate an investigation of statistical inference. The first broad area of discussion of this dissertation deals with the simultaneous consideration of the mutation rates α₁ and α₂. Maximum likelihood estimates for α₁ and α₂ are obtained iteratively from the Newton-Raphson scheme for simultaneous solution of two equations in two unknowns. Several theorems are given which ensure that the log likelihood function involving α₁ and α₂ has a unique maximum in the parameter space of useful values. The transition matrix consists of conditional probability elements involving the unknown parameters α₁ and α₂. These elements are the probabilities of a transition from one state to another in at most unit steps. The eigenvalue expression along with the corresponding pre- and post-eigenvector matrices are given. The post-eigenvector matrix has elements consisting of Hahn polynomials. The pre-eigenvector matrix is obtained by inverting the post-eigenvector matrix, for which an expression is given. The Hahn polynomials form a family of orthogonal polynomials. They were introduced by Hahn [Math. Nach. 2 (1949), pp. 4-34] and further discussed by Karlin and McGregor [Scripta Math. 26 (1961), pp. 33-46]. These polynomials form the foundation and are basic to many of the results of the dissertation. The expression for the expected value of the number of transitions from one state to another is given, and this expression is also in terms of Hahn polynomials. Finally, for this positively regular transition matrix involving both of the mutation rates α₁ and α₂, asymptotic multivariate normality of the maximum likelihood estimates α̂₁, α̂₂ is discussed along with hypothesis testing. Also discussed are large sample approximations, methods of designing and conducting experiments, and replicated experiments. The second broad area of this dissertation deals with an absorbing Markov chain. That is, α₂ is set equal to zero and investigation on α₁ only is carried out. For this case the above transition matrix becomes an absorbing one and inferences are obtained from realizations on this absorbing chain, whose peculiarities provide some unique difficulties. The eigenvalue expression with the corresponding post-eigenvector matrix, whose elements are also Hahn polynomials, and the expression (in terms of Hahn polynomials) for the expected number of transitions from one state to another are all given. Of particular interest are several postulated theorems on the maximum likelihood estimate α̂₁ of the mutation rate α₁ of the absorbing Markov chain, in which an attempt is made at establishing the properties and normality of α̂₁. The estimate is again obtained iteratively. An outline of the proofs of the postulated theorems is presented. Gaps in the proof are a result of unresolved questions in positive regular Markov chain theory. In connection with the above theory and postulated theorems, a simulation study on the IBM 650 was undertaken. This study substantiated many of the assumptions of the postulated theorems. The study, however, was not extensive enough to be conclusive. A further study is proposed. Replicated experiments are also discussed. Of particular interest here is a geometric type stopping rule in which the negative binomial is employed. Methods of conducting and designing experiments are discussed. An appendix discusses the Hahn polynomial system along with many of its important properties. / Ph. D.
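The iterative scheme Bartko describes is ordinary Newton-Raphson on the two likelihood equations. The following is a generic sketch, not the dissertation's actual likelihood (which involves Hahn-polynomial transition probabilities); `score` and `hessian` stand in for the first and second derivatives of any two-parameter log likelihood, and the toy example at the end is purely illustrative.

```python
import numpy as np

def newton_raphson_2d(score, hessian, alpha0, tol=1e-10, max_iter=50):
    """Solve score(alpha) = 0 for alpha = (alpha1, alpha2) by Newton-Raphson.

    score   : function returning the 2-vector of partial derivatives
              of the log likelihood.
    hessian : function returning the 2x2 matrix of second derivatives.
    alpha0  : starting value, e.g. a crude moment-type estimate.
    """
    alpha = np.asarray(alpha0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hessian(alpha), score(alpha))
        alpha = alpha - step                 # alpha_new = alpha - H^{-1} s
        if np.max(np.abs(step)) < tol:
            break
    return alpha

# Toy concave surrogate log likelihood l(a) = -(a1-0.01)^2 - (a2-0.02)^2,
# whose maximizer (0.01, 0.02) plays the role of (alpha1_hat, alpha2_hat).
score = lambda a: np.array([-2 * (a[0] - 0.01), -2 * (a[1] - 0.02)])
hessian = lambda a: np.array([[-2.0, 0.0], [0.0, -2.0]])
print(newton_raphson_2d(score, hessian, [0.0, 0.0]))  # -> [0.01 0.02]
```

The uniqueness theorems mentioned in the abstract are what guarantee that an iteration of this kind, started in the useful region of the parameter space, converges to the maximum likelihood estimates.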
74

Kinetocardiogram computer diagnostic procedures

Myers, James Henderson January 1969 (has links)
Kinetocardiograms of 507 patients whose clinical records, physical examinations, and electrocardiograms were normal or essentially normal, or who demonstrated one of seven conduction defects, were classified into eight cardiac disease categories, one of which was a normal control. The kinetocardiogram wave patterns were then studied as a single basis for deriving a computer-assisted method of cardiac disease diagnosis. [See document for full abstract.] / Ph. D.
75

The problem of classifying members of a population into groups

Flora, Roger Everette January 1965 (has links)
A model is assumed in which individuals are to be classified into groups as to their "potential" with respect to a given characteristic. For example, one may wish to classify college applicants into groups with respect to their ability to succeed in college. Although actual values for the "potential," or underlying variable of classification, may be unobservable, it is assumed possible to divide the individuals into groups with respect to this characteristic. Division into groups may be accomplished either by fixing the boundaries of the underlying variable of classification or by fixing the proportion of the individuals which may belong to a given group. For discriminating among the different groups, a set of measurements is obtained for each individual. In the example above, for instance, classification might be based on test scores achieved by the applicants on a set of tests administered to them. Since the value of the underlying variable of classification is unobservable, we may assign, in place of this variable, a characteristic random variable to each individual. The characteristic variable will be the same for every member of a given group. We then consider a choice of characteristic random variable and a linear combination of the observed measurements such that the correlation between the two is a maximum with respect to both the coefficients of the different measurements and the characteristic variable. If a significant correlation is found, one may then use as a discriminant for a randomly selected individual the linear combination obtained by using the coefficients found by the above procedure. In order to facilitate a test of validity for the proposed discriminant function, the distribution of a suitable function of the above correlation coefficient is found under the null hypothesis of no correlation between the underlying variable of classification and the observed measurements. A test procedure based on the statistic for which the null distribution is found is then described. Special consideration is given in the study to the case of only two classification groups with the proportion of individuals to belong to each group fixed. For this case, in addition to obtaining the null distribution, the distribution of the test statistic is also considered under the alternative hypothesis. Low order moments of the test criterion are obtained, and the approximate power of the proposed test is found for specific cases of the model by fitting an appropriate density to the moments derived. A general consideration of the power function and its behavior, as the sample size increases and as the population multiple correlation between the underlying variable of classification and the observed measurements increases, is also presented. Finally, the probability of misclassification, or "the problem of shrinkage" as it is often called, is considered. Possible approaches to the problem and some of the difficulties in investigating this problem are indicated. / Ph. D.
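For the two-group case with fixed proportions, maximizing the correlation between a group-constant characteristic variable and a linear combination of the measurements reduces to a familiar least-squares computation: the maximizing coefficients are proportional to Sₓₓ⁻¹Sₓᵧ, where y is the group indicator. A minimal sketch follows; the data, the 70/30 split, and all variable names are illustrative assumptions, not Flora's derivation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative data: 100 individuals, 4 measurements (e.g. test scores),
# with an unobserved "potential" that the first two measurements track.
n = 100
potential = rng.normal(size=n)
X = np.column_stack([
    potential + rng.normal(size=n),
    0.5 * potential + rng.normal(size=n),
    rng.normal(size=n),
    rng.normal(size=n),
])
# Characteristic variable: the same value for every member of a group; here
# the top 30% by potential form group 1 (group proportions fixed in advance).
y = (potential > np.quantile(potential, 0.7)).astype(float)

# Coefficients maximizing corr(y, X @ w) are proportional to Sxx^{-1} Sxy.
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = np.linalg.solve(Xc.T @ Xc, Xc.T @ yc)

scores = X @ w
r = np.corrcoef(scores, y)[0, 1]   # the maximized correlation
print("coefficients:", np.round(w, 3), " correlation:", round(r, 3))
```

Flora's test of validity then asks whether a suitable function of this maximized correlation is larger than would be expected under the null hypothesis of no correlation.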
76

Restrictive ranking

Norman, James Everett January 1965 (has links)
This dissertation is a study of certain aspects of restricted ranking, a method intended for use by a panel of m judges evaluating the relative merits of N subjects, candidates for scholarships, awards, etc. Each judge divides the N subjects into R classes, so that nᵢ individuals receive a grade i (i = 1, 2, …, R; Σnᵢ = N), where the R numbers nᵢ are close to N/R (nᵢ = N/R when N is divisible by R) and are preassigned and the same for all judges. When this method is used, all subjects are treated alike, the grading system is the same for all judges, and the grades of each judge are given equal weight. Equally important, the meaning of a particular grade is clear to each judge and the same for each judge. Under the null hypothesis that all nR = N subjects are of equal merit, tests of significance are developed to determine whether (1) a particular individual is superior or inferior to the rest of the subjects; (2) two particular subjects are of equal merit; (3) the individuals with the highest and lowest scores are respectively superior and inferior to the rest of the subjects; and (4) the nR subjects form a homogeneous group. The critical values of the test statistics for (1), (2) and (3) are tabled for small to moderate values of m, an approximation based on the asymptotic normality of the appropriate test statistic proving suitable for large m. The test of homogeneity (4) employs a sum of squares of subjects' scores which is shown to be asymptotically distributed for m→∞ as chi-square with nR-1 degrees of freedom. For the special case of complete ranking (R = N), this statistic is identical to one proposed by Friedman (1937) for m rankings. The behavior of two of these tests is theoretically investigated for the non-null case of nR-1 subjects having equal merit and one "outlying" subject whose merit exceeds the others. The assumption is made that each judge j assigns a grade to every subject i on the basis of a "subjective random variable" xᵢⱼ with mean equal to the "true" merit of subject i, and that the distribution of xᵢⱼ is the same for all j. The probability, P(δ), that subject #1, with true mean differing from the others by an amount δ, would receive a significantly high score according to the test for outliers is obtained and presented graphically as a function of δ for xᵢⱼ distributed as (1/2)sech²(x−δ) and also as N(δ, 1). Using a result due to Hannan (1956), an expression for the asymptotic relative efficiency of the chi-squared homogeneity test for restricted vs. complete ranking for the aforementioned non-null case is obtained, and values of this A.R.E. for 2 ≤ n ≤ 10 and 2 ≤ R ≤ 8 are tabled. This A.R.E. is found to be at least 0.9 for all cases where n ≤ 10 and R ≥ 4. A further comparison of the performances of restricted (R) and complete (C) ranking is made by way of some simulation studies performed on a high speed digital computer for the non-null case where xᵢⱼ is normally distributed with unit variance and a mean δᵢ having as many as three different possible values. The complete and restricted ranks assigned by the jth judge to the ith subject are assigned on the basis of the value of xᵢⱼ obtained by experimental sampling using a random normal number generator in the computer program. A group of Nₛ subjects with the highest rank sums for (R) and for (C) is then selected in each study. The observed difference in true means between selected and remaining groups is then used as a measure of goodness of the two selection procedures. The results of these studies are presented graphically, displaying a very close agreement between (R) and (C) in all instances. / Ph. D.
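The simulation described above is easy to reproduce in outline. A sketch follows, assuming illustrative parameter values (m = 10 judges, R = 4 classes of n = 3, one outlier of size δ); the grading rule and the selection-gap measure follow the description in the abstract, but the function name and defaults are mine.

```python
import numpy as np

rng = np.random.default_rng(1965)

def restricted_rank_study(m=10, n=3, R=4, delta=1.0, n_select=2):
    """One run of the restricted-ranking simulation sketched above.

    m judges grade N = n*R subjects into R classes of n each, based on
    subjective variables x_ij ~ N(delta_i, 1). Subject 0 is the outlier
    with true mean delta; the rest have mean 0 (illustrative non-null case).
    """
    N = n * R
    true_means = np.zeros(N)
    true_means[0] = delta
    x = rng.normal(loc=true_means, scale=1.0, size=(m, N))

    # Each judge's restricted grades: sort subjects by x, then map rank
    # block k (of size n) to grade k+1, so every grade is used n times.
    order = np.argsort(x, axis=1)
    grades = np.empty((m, N), dtype=int)
    for j in range(m):
        grades[j, order[j]] = np.arange(N) // n + 1

    score = grades.sum(axis=0)                 # subjects' total scores
    selected = np.argsort(score)[-n_select:]   # highest rank sums
    rest = np.setdiff1d(np.arange(N), selected)
    # Measure of goodness used in the study: difference in true means
    # between the selected and remaining groups.
    return true_means[selected].mean() - true_means[rest].mean()

gaps = [restricted_rank_study() for _ in range(2000)]
print("mean selection gap:", round(float(np.mean(gaps)), 3))
```

Running the same loop with complete ranks (R = N, n = 1) gives the (C) procedure, and comparing the two mean gaps reproduces the kind of (R)-versus-(C) comparison reported graphically in the dissertation.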
77

Minimum bias designs for an exponential response

Manson, Allison Ray January 1965 (has links)
For the exponential response η_u = α + β·e^(γZ_u) (u = 1, 2, …, N), where α and β lie on the real line (-∞, ∞) and γ is a positive integer, the designs are given which minimize the bias due to the inherent inability of the approximation function ŷ_u = Σ_{j=0}^d b_j e^(jZ_u) to fit such a model. Transformation to η_u = α + βx_u^γ and ŷ_u = Σ_{j=0}^d b_j x_u^j facilitates the solution for minimum bias designs. The requirements for minimum bias designs follow along lines similar to those given by Box and Draper (J. Amer. Stat. Assoc., 54, 1959, p. 622). The minimum bias designs are obtained for specific values of N with a maximum protection level, γ_d*(N), for the parameter γ and an approximation function of degree d. The designs obtained possess several degrees of freedom in the choice of the design levels of the x_u or the Z_u, which may be used to satisfy additional design requirements. It is shown that for a given N, the same designs which minimize bias for approximation functions of degree one also minimize bias for general degree d, with a decrease in γ_d*(N) as d increases. In fact γ_d*(N) = γ₁*(N) - d + 1, but with the decrease in γ_d*(N) comes a compensating decrease in the actual level of the minimum bias. Furthermore, γ_d*(N) increases monotonically with N, thereby allowing the maximum protection level on γ to be increased as desired by increasing N. In the course of obtaining solutions, some interesting techniques are developed for determining the nature of the roots of a polynomial equation which has several known coefficients and several variable coefficients. / Ph. D.
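The bias being minimized can be made concrete numerically: for a candidate design, the expected least-squares fit of the degree-d polynomial to the true response α + βx^γ is just the least-squares fit of the true response itself, so the average squared bias over the region can be evaluated directly. The sketch below makes simplifying assumptions (region [0, 1] with uniform weighting, α = 0 and β = 1 since they only shift and scale the bias, two arbitrary 4-point designs); it illustrates the criterion, not Manson's analytical solution.

```python
import numpy as np

def average_squared_bias(design, d=1, gamma=3, grid=2001):
    """Average over [0,1] of squared bias from fitting a degree-d polynomial
    (least squares on the design points) to the true response eta = x**gamma."""
    x = np.asarray(design, dtype=float)
    F = np.vander(x, d + 1, increasing=True)          # columns 1, x, ..., x^d
    coef, *_ = np.linalg.lstsq(F, x ** gamma, rcond=None)
    t = np.linspace(0.0, 1.0, grid)
    fitted = np.vander(t, d + 1, increasing=True) @ coef
    return float(np.mean((t ** gamma - fitted) ** 2))

# Two illustrative 4-point designs on [0, 1]:
for design in ([0.0, 1/3, 2/3, 1.0],      # equally spaced
               [0.1, 0.4, 0.7, 0.9]):     # arbitrary alternative
    print(design, "->", round(average_squared_bias(design), 6))
```

A minimum bias design is one whose levels minimize this quantity simultaneously for all γ up to the protection level γ_d*(N).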
78

Empirical Bayes procedures in time series analysis

Launer, Robert L. January 1970 (has links)
Empirical Bayes analysis concerns the analysis of data which occur in similar recurring situations. The parameters involved in the recurring situations are generated independently from an unknown probability distribution G(θ). In many situations it is possible to use the estimates of all of the past parameter values to construct an estimate which reduces the mean squared error of the usual estimate of the present value of the parameter. This dissertation develops empirical Bayes estimates for various time series parameters: those of the autoregressive time series model, the time series regression model with autocorrelated errors, and the spectral density function. In each case, empirical Bayes estimators are obtained using asymptotic or approximate distributions of the usual estimators. The Parzen, Tukey, and Bartlett smoothing coefficients are all used in the estimation of the spectral density function. Each estimator is tested on a high speed computer using Monte Carlo procedures. It was found that in every situation the empirical Bayes estimators produced smaller mean squared errors than the usual estimators. / Ph. D.
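The general recipe can be sketched generically: given usual estimates θ̂₁, …, θ̂ₖ from past recurring situations, each approximately N(θᵢ, σ²), shrink each estimate toward the pooled mean by an amount estimated from the spread of the past estimates. Below is a minimal sketch assuming the normal-normal model with known sampling variance and AR(1) coefficients as the recurring parameter; the dissertation instead works with the asymptotic distributions of the actual time-series estimators, and the numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1970)

def eb_shrink(estimates, sampling_var):
    """Parametric empirical Bayes for theta_hat_i ~ N(theta_i, sampling_var),
    with the mean and variance of the unknown G estimated from the data."""
    est = np.asarray(estimates, dtype=float)
    grand_mean = est.mean()
    # Method-of-moments estimate of Var(theta): total spread minus noise.
    prior_var = max(est.var(ddof=1) - sampling_var, 0.0)
    shrink = prior_var / (prior_var + sampling_var)   # in [0, 1]
    return grand_mean + shrink * (est - grand_mean)

# Recurring situations: true AR(1) coefficients drawn from G, each estimated
# with sampling variance roughly 1/T (illustrative simplification).
k, T = 40, 100
phi = rng.normal(0.5, 0.1, size=k)
phi_hat = phi + rng.normal(0.0, np.sqrt(1 / T), size=k)

eb = eb_shrink(phi_hat, sampling_var=1 / T)
print("MSE usual:", round(float(np.mean((phi_hat - phi) ** 2)), 5))
print("MSE EB   :", round(float(np.mean((eb - phi) ** 2)), 5))
```

The reported Monte Carlo finding corresponds to the second printed MSE being smaller than the first across the settings studied.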
79

Some aspects of paired-comparison experiments

Glenn, William Alexander January 1959 (has links)
I. A Comparison of the Effectiveness of Tournaments. A paired-comparison experiment involving t treatments is analogous to a tournament with t players. A balanced experiment, in which every possible pair occurs once per replication, is the counterpart of a round robin tournament. When the objective is to pick the best treatment, the balanced design may prove to be more expensive than necessary. The knock-out tournament has been suggested as an alternative requiring fewer units of each treatment per replication. In this paper round robin, replicated knock-out, and double elimination tournaments are investigated for their effectiveness in selecting the best one of four players. Effectiveness is gauged in terms of the two criteria (a) the probability that the best player wins and (b) the expected number of games. For general values of the parameters involved, expressions are derived for the evaluation of the criteria. Comparisons are made on the basis of series of assigned parameter values. Possibilities for the extension of the study are briefly discussed. II. Ties in Paired-Comparison Experiments. In making paired comparisons a judge frequently is unable to express a real preference in a number of the pairs he judges. In spite of this, some of the methods in current use do not permit the judge to declare a tie. In other methods tied observations are either ignored or divided equally or randomly between the tied members. It appears that there is a need, at least in the estimation of response-scale values, for a method which takes tied observations into account. In the Thurstone-Mosteller method the standardized distribution of the difference of two stimulus responses is normal with unit variance and mean equal to the difference of the two mean stimulus responses. In prohibiting ties the assumption is in effect made that all differences, however small, are perceptible to the judge. In this paper the assumption is made that a tie will occur whenever the difference between the judge's responses to the two stimuli lies below a certain threshold, i.e. if the difference lies between -t and t the judge will declare a tie. The parameter t and the mean stimulus responses are estimated by least squares. To overcome a difficulty presented by correlated data, an angular response law is postulated for the response-scale differences. In the resulting transformed data non-homogeneity of variances is encountered. In effecting a weighted solution, weights are first determined by using a preliminary unweighted analysis, and an iterative procedure is proposed. Large-sample variances and covariances of the estimates are obtained. A test of the validity of the model is described. A computational procedure is set up, and exemplified through application to experimental data. / Ph. D.
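The first criterion, the probability that the best player wins, is straightforward to estimate by simulation for any tournament format. The sketch below handles a single-elimination (knock-out) bracket with Bradley-Terry-style win probabilities; the strengths and format are illustrative assumptions, and Glenn derives exact expressions rather than simulating.

```python
import random

def knockout_winner(strengths, rng):
    """Play one single-elimination tournament; len(strengths) must be a
    power of two. Player i beats j with probability s_i / (s_i + s_j)."""
    players = list(range(len(strengths)))
    rng.shuffle(players)                      # random bracket draw
    while len(players) > 1:
        survivors = []
        for a, b in zip(players[::2], players[1::2]):
            p_a = strengths[a] / (strengths[a] + strengths[b])
            survivors.append(a if rng.random() < p_a else b)
        players = survivors
    return players[0]

rng = random.Random(1959)
strengths = [2.0, 1.0, 1.0, 1.0]              # player 0 is the best of four
wins = sum(knockout_winner(strengths, rng) == 0 for _ in range(100_000))
print("P(best player wins knock-out) ~", wins / 100_000)
```

Replacing the bracket logic with a full round robin or a double-elimination schedule, and counting games played, gives Monte Carlo versions of both criteria (a) and (b).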
80

The small-sample power of some nonparametric tests

Gibbons, Jean Dickinson January 1962 (has links)
I. Small-Sample Power of the One-Sample Sign Test for Approximately Normal Distributions. The power function of the one-sided, one-sample sign test is studied for populations which deviate from exact normality, either by skewness, kurtosis, or both. The terms of the Edgeworth asymptotic expansion of order more than N^(-3/2) are used to represent the population density. Three sets of hypotheses and alternatives, concerning the location of (1) the median, (2) the median as approximated by the mean and coefficient of skewness, and (3) the mean, are considered in an attempt to make valid comparisons between the power of the sign test and Student's t test under the same conditions. Numerical results are given for samples of size 10, significance level .05, and several combinations of the coefficients of skewness and kurtosis. II. Power of Two-Sample Rank Tests on the Equality of Two Distribution Functions. A comparative study is made of the power of two-sample rank tests of the hypothesis that both samples are drawn from the same population. The general alternative is that the variables from one population are stochastically larger than the variables from the other. One of the alternatives considered is that the variables in the first sample are distributed as the smallest of k variates with distribution F, and the variables in the second sample are distributed as the largest of these k: H₁ : H = 1 - (1-F)^k, G = F^k. These two alternative distributions are mutually symmetric if F is symmetrical. Formulae are presented, which are independent of F, for the evaluation of the probability under H₁ of any joint arrangement of the variables from the two samples. A theorem is proved concerning the equality of the probabilities of certain pairs of orderings under assumptions of mutually symmetric populations. The other alternative is that both samples are normally distributed with the same variance but different means, the standardized difference between the two extreme distributions in the first alternative corresponding to the difference between the means. Numerical results of power are tabulated for small sample sizes, k = 2, 3 and 4, and significance levels .01, .05 and .10. The rank tests considered are the most powerful rank test, the one- and two-sided Wilcoxon tests, Terry's c₁ test, the one- and two-sided median tests, the Wald-Wolfowitz runs test, and two new tests called the Psi test and the Gamma test. The two-sample rank test which is locally most powerful against any alternative expressing an arbitrary functional relationship between the two population distribution functions and an unspecified parameter θ is derived and its asymptotic properties studied. The method is applied to two specific functional alternatives, H₁* : H = (1-θ)F^k + θ[1 - (1-F)^k], G = F^k, and H₁** : H = 1 - (1-F)^(1+θ), G = F^(1+θ), where θ ≥ 0, which are similar to the alternative of two extreme distributions. The resulting test statistics are the Gamma test and the Psi test, respectively. The latter test is shown to have desirable small-sample properties. The asymptotic power functions of the Wilcoxon and Wald-Wolfowitz tests are compared for the alternative of two extreme distributions with k = 2, equal sample sizes, and significance level .05. / Ph. D.
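The Part I comparison can be approximated by simulation rather than by the Edgeworth expansion. A sketch comparing the one-sided sign test with Student's t for a skewed population follows; the standardized lognormal family, the mean-centering, and the sample sizes are illustrative assumptions, whereas Gibbons works analytically with Edgeworth densities and three distinct hypothesis formulations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1962)

def power(shift, n=10, alpha=0.05, reps=20_000):
    """Monte Carlo power of the one-sided sign test and t test for a
    shifted skewed population (standardized lognormal, illustrative)."""
    base = rng.lognormal(0.0, 0.5, size=(reps, n))
    # Standardize to mean 0, variance 1. Note the population is centered at
    # its mean, not its median, so for shift = 0 the sign test's null (median
    # zero) does not hold exactly -- the mean/median distinction Part I examines.
    base = (base - np.exp(0.125)) / np.sqrt((np.e**0.25 - 1) * np.e**0.25)
    x = base + shift                      # H0: location 0 vs H1: shift > 0

    # Sign test: reject when the count of positives exceeds the binomial
    # critical value (conservative because of discreteness).
    k = np.sum(x > 0, axis=1)
    crit = stats.binom.ppf(1 - alpha, n, 0.5)
    sign_rej = np.mean(k > crit)

    # One-sided one-sample t test.
    t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))
    t_rej = np.mean(t > stats.t.ppf(1 - alpha, n - 1))
    return sign_rej, t_rej

for shift in (0.0, 0.5, 1.0):
    s, t = power(shift)
    print(f"shift={shift}: sign test {s:.3f}, t test {t:.3f}")
```

Varying the generating density's skewness and kurtosis in place of the lognormal reproduces, in Monte Carlo form, the kind of power comparison tabulated in the dissertation.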
