51

An Investigation of Cluster Analysis

Klingel, John C. 01 May 1973 (has links)
Three cluster analysis programs were used to group the same 64 individuals, generated so as to represent eight populations of eight individuals each. Each individual had quantitative values for seven attributes, and all eight populations shared a common attribute variance-covariance matrix. The first program, from F. J. Rohlf's MINT package, implemented single linkage, with correlation used as the basis for similarity. The results were not satisfactory, and the further use of correlation is in question. The second program, MDISP, bases similarity on Euclidean distance. It gave excellent results, clustering individuals into exactly the populations from which they were generated, and it is the recommended program of the three used here. The last program, MINFO, bases similarity on mutual information. It also gave very satisfactory results but, for reasons of visualization, it was found to be less favorable than the MDISP program.
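A minimal sketch of the comparison described above, not the MINT/MDISP/MINFO code itself: hierarchical clustering of synthetic data shaped like the thesis design (eight populations of eight individuals, seven attributes, a shared covariance matrix), contrasting correlation- and Euclidean-distance-based similarity under single linkage. The population means, covariance, and agreement measure are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_pops, n_per_pop, n_attr = 8, 8, 7
cov = np.eye(n_attr)                                  # common variance-covariance matrix (assumed)
means = rng.normal(0.0, 5.0, size=(n_pops, n_attr))   # well-separated population means (assumed)
X = np.vstack([rng.multivariate_normal(m, cov, size=n_per_pop) for m in means])
true_labels = np.repeat(np.arange(n_pops), n_per_pop)

for metric in ("correlation", "euclidean"):
    d = pdist(X, metric=metric)               # pairwise dissimilarities
    Z = linkage(d, method="single")           # single-linkage dendrogram
    labels = fcluster(Z, t=n_pops, criterion="maxclust")  # cut into eight clusters
    # crude agreement check: do pairs from the same population end up in the same cluster?
    same_pop = true_labels[:, None] == true_labels[None, :]
    same_clust = labels[:, None] == labels[None, :]
    agreement = (same_pop == same_clust).mean()
    print(f"{metric:11s}  pairwise agreement with true populations: {agreement:.3f}")
```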
52

Extreme Value Distribution in Hydrology

Chen, Bill (Tzeng-Lwen) 01 May 1980 (has links)
The problems encountered when empirical fit is used as the sole criterion for choosing a distribution to represent annual flood data are discussed. Some theoretical direction is needed for this choice. Extreme value theory is established as a viable tool for analyzing annual flood data. Extreme value distributions have been used in previous analyses of flood data. However, no systematic investigation of the theory has previously been applied. Properties of the extreme value distributions are examined. The most appropriate distribution for flood data has not previously been fit to such data. The fit of the chosen extreme value distribution compares favorably with that of the Pearson and log Pearson Type III distributions.
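A hedged sketch in the spirit of the comparison above: fitting a Type I extreme value (Gumbel) distribution to a record of annual peak flows and comparing its fit with a Pearson Type III fit. The simulated record, record length, and use of a Kolmogorov-Smirnov statistic as the fit measure are illustrative assumptions, not the thesis data or criterion.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# stand-in "annual peak flow" record (assumed Gumbel-distributed for illustration)
annual_peaks = stats.gumbel_r.rvs(loc=1000.0, scale=300.0, size=60, random_state=rng)

# fit the Gumbel and Pearson Type III distributions by maximum likelihood
gum_loc, gum_scale = stats.gumbel_r.fit(annual_peaks)
p3_skew, p3_loc, p3_scale = stats.pearson3.fit(annual_peaks)

# compare goodness of fit with a Kolmogorov-Smirnov statistic
ks_gum = stats.kstest(annual_peaks, "gumbel_r", args=(gum_loc, gum_scale))
ks_p3 = stats.kstest(annual_peaks, "pearson3", args=(p3_skew, p3_loc, p3_scale))
print("Gumbel KS statistic     :", ks_gum.statistic)
print("Pearson III KS statistic:", ks_p3.statistic)

# 100-year flood estimate from the fitted Gumbel (exceedance probability 0.01)
print("Gumbel 100-year flow:", stats.gumbel_r.ppf(0.99, gum_loc, gum_scale))
```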
53

Sequential Analysis for Tolerances of Noxious Weed Seeds

Tokko, Seung 01 May 1972 (has links)
The application of a sequential test, the sequential probability ratio test, to the tolerances of noxious weed seeds is studied. It is proved that the sequential test can give a power curve similar to that of the current fixed sample test if the test parameters are properly chosen. The average sample size required by a sequential test is, in general, smaller than that of the existing test. However, in some cases it requires a relatively larger sample than the current test. As a solution to this problem a method of truncation is considered, and a kind of mixed procedure is suggested. This procedure gives a power curve almost identical to the standard one with great savings in sample size; the sample size is always less than that of the current test procedure.
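An illustrative sketch of a truncated sequential probability ratio test for a proportion, the kind of procedure described above for weed seed tolerances. The hypothesized contamination rates, error levels, and truncation point are assumptions chosen for the example, not the thesis values.

```python
import math
import random

def truncated_sprt(seeds, p0=0.01, p1=0.03, alpha=0.05, beta=0.05, max_n=400):
    """Examine seeds one at a time (1 = noxious, 0 = clean) until an SPRT boundary
    is crossed, or until the truncation point max_n forces a decision."""
    upper = math.log((1 - beta) / alpha)      # crossing above: reject the lot
    lower = math.log(beta / (1 - alpha))      # crossing below: accept the lot
    llr = 0.0
    for n, x in enumerate(seeds, start=1):
        # log-likelihood ratio contribution of one Bernoulli observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "reject lot", n
        if llr <= lower:
            return "accept lot", n
        if n >= max_n:                        # truncation: decide by the sign of the LLR
            return ("reject lot" if llr > 0 else "accept lot"), n
    return "no decision", len(seeds)

random.seed(0)
lot = [1 if random.random() < 0.01 else 0 for _ in range(1000)]   # a lot near tolerance
print(truncated_sprt(lot))
```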
54

Nonparametric Confidence Intervals for the Reliability of Real Systems Calculated from Component Data

Spooner, Jean 01 May 1987 (has links)
A methodology which calculates a point estimate and confidence intervals for system reliability directly from component failure data is proposed and evaluated. This is a nonparametric approach which does not require the component times to failure to follow a known reliability distribution. The proposed methods have accuracy similar to the traditional parametric approaches, can be used when the distribution of component reliability is unknown or there is a limited amount of sample component data, are simpler to compute, and use less computer resources. Depuy et al. (1982) studied several parametric approaches to calculating confidence intervals on system reliability; the test systems they employed are used here for comparison with published results. Four systems with sample sizes per component of 10, 50, and 100 were studied. The test systems were complex systems made up of I components, each with n observed (or estimated) times to failure. An efficient method for calculating a point estimate of system reliability is developed based on counting minimum cut sets that cause system failures. Five nonparametric approaches to calculating the confidence intervals on system reliability from one test sample of components were proposed and evaluated. Four of these were based on binomial theory and the Kolmogorov empirical cumulative distribution theory. 600 Monte Carlo simulations generated 600 new sets of component failure data from the population, with corresponding point estimates of system reliability and confidence intervals. The accuracy of these confidence intervals was determined by computing the fraction that included the true system reliability. The bootstrap method was also studied to calculate confidence intervals from one sample. The bootstrap method is computer intensive and involves generating many sets of component samples using only the failure data from the initial sample. The empirical cumulative distribution function of 600 bootstrapped point estimates was examined to calculate the confidence intervals for 68, 80, 90, 95, and 99 percent confidence levels. The accuracy of the bootstrap confidence intervals was determined by comparison with the distribution of 600 point estimates of system reliability generated from the Monte Carlo simulations. The confidence intervals calculated from the Kolmogorov empirical distribution function and the bootstrap method were very accurate. Sample sizes of 10 were not always sufficient for systems with reliabilities close to one.
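A minimal sketch of the bootstrap idea described above, applied to a toy two-component series system at a fixed mission time. The exponential failure data, mission time, system structure, and 90 percent level are assumptions for illustration; the thesis test systems and cut-set point estimate are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
t_mission = 100.0
comp_a = rng.exponential(scale=500.0, size=50)   # observed times to failure, component A (assumed)
comp_b = rng.exponential(scale=800.0, size=50)   # observed times to failure, component B (assumed)

def system_reliability(a_times, b_times, t):
    # nonparametric component reliabilities: fraction of units surviving past t
    r_a = np.mean(a_times > t)
    r_b = np.mean(b_times > t)
    return r_a * r_b                             # series system: both components must survive

point_est = system_reliability(comp_a, comp_b, t_mission)

# bootstrap: resample each component's failure data with replacement, re-estimate each time
boot = np.array([
    system_reliability(rng.choice(comp_a, size=comp_a.size, replace=True),
                       rng.choice(comp_b, size=comp_b.size, replace=True),
                       t_mission)
    for _ in range(600)
])
lo, hi = np.percentile(boot, [5.0, 95.0])        # percentile interval at the 90 percent level
print(f"point estimate {point_est:.3f}, 90% bootstrap interval ({lo:.3f}, {hi:.3f})")
```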
55

Probability of Discrete Failures, Weibull Distribution

Hansen, Mary Jo 01 May 1989 (has links)
The intent of this research and thesis is to describe the development of a series of charts and tables that provide the individual and cumulative probabilities of failure applying to the Weibull statistical distribution. The mathematical relationships are developed, and the computer programs are described for deterministic and Monte Carlo models that compute and verify the results. Charts and tables reflecting the probabilities of failure for a selected set of parameters of the Weibull distribution functions are provided.
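A hedged sketch of the kind of table the abstract describes: cumulative probability of failure under a Weibull distribution, tabulated over a few operating times for a handful of assumed shape and scale parameters (the parameter values and times are illustrative, not those selected in the thesis).

```python
import numpy as np
from scipy.stats import weibull_min

times = np.array([10.0, 50.0, 100.0, 200.0, 500.0])   # operating times (assumed units)
scale = 100.0                                          # assumed characteristic life (eta)
print("cumulative probability of failure F(t) at t =", times.tolist())
for shape in (0.5, 1.0, 2.0):                          # assumed shape parameters (beta)
    cdf = weibull_min.cdf(times, c=shape, scale=scale)
    row = "  ".join(f"{p:.3f}" for p in cdf)
    print(f"beta={shape:3.1f}, eta={scale:.0f}:  {row}")
```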
56

Linear Regression of the Poisson Mean

Brown, Duane Steven 01 May 1982 (has links)
The purpose of this thesis was to compare two estimation procedures, the method of least squares and the method of maximum likelihood, on sample data obtained from a Poisson distribution. Point estimates of the slope and intercept of the regression line and point estimates of the mean squared error for both the slope and intercept were obtained. It is shown that least squares, the preferred method due to its simplicity, does yield results as good as maximum likelihood. Also, confidence intervals were computed by Monte Carlo techniques and then were tested for accuracy. For the method of least squares, confidence bands for the regression line were computed under two different assumptions concerning the variance. It is shown that the assumption of constant variance produces false confidence bands. However, the assumption of the variance equal to the mean yielded accurate results.
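An illustrative sketch of the comparison above on simulated data: ordinary least squares versus maximum likelihood estimation of a Poisson mean that is linear in x (identity link). The true slope, intercept, and sample size are assumptions, and the optimizer is a generic choice rather than the thesis's method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(1.0, 10.0, 100)
true_a, true_b = 2.0, 1.5                      # assumed intercept and slope
y = rng.poisson(true_a + true_b * x)           # Poisson counts with mean a + b*x

# least squares estimates of slope and intercept
slope_ls, intercept_ls = np.polyfit(x, y, 1)

# maximum likelihood: minimize the negative Poisson log-likelihood (up to a constant)
def negloglik(params):
    a, b = params
    mu = a + b * x
    if np.any(mu <= 0):                        # keep the mean positive
        return np.inf
    return np.sum(mu - y * np.log(mu))

res = minimize(negloglik, x0=[intercept_ls, slope_ls], method="Nelder-Mead")
print("least squares     : intercept %.3f, slope %.3f" % (intercept_ls, slope_ls))
print("maximum likelihood: intercept %.3f, slope %.3f" % tuple(res.x))
```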
57

A Two Sample Test of the Reliability Performance of Equipment Components

Coleman, Miki Lynne 01 May 1972 (has links)
The purpose of this study was to develop a test which can be used to compare the reliability performances of two types of equipment components to determine whether or not the new component satisfies a given feasibility criterion. Two types of tests were presented and compared: the fixed sample size test and the truncated sequential probability ratio test. Both of these tests involve use of a statistic which is approximately distributed as F. This study showed that the truncated sequential probability ratio test has good potential as a means of comparing two component types to see whether or not the reliability of the new component is at least a certain number of times greater than the reliability of the old component.
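A hedged sketch of a fixed-sample comparison of two component types using a statistic that is approximately F-distributed, in the spirit of the tests above. It assumes exponential times to failure for both components; the feasibility factor k, sample sizes, and significance level are illustrative assumptions rather than the thesis's design.

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(4)
old = rng.exponential(scale=200.0, size=20)   # times to failure, old component (assumed)
new = rng.exponential(scale=500.0, size=20)   # times to failure, new component (assumed)
k = 2.0                                       # feasibility criterion: MTBF(new) >= k * MTBF(old)

# For exponential data, mean(new) / (k * mean(old)) is F-distributed with
# (2*n_new, 2*n_old) degrees of freedom under H0: MTBF(new) = k * MTBF(old).
stat = new.mean() / (k * old.mean())
p_value = f.sf(stat, 2 * new.size, 2 * old.size)   # one-sided: large values favor the new component
print(f"F statistic {stat:.3f}, one-sided p-value {p_value:.4f}")
```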
58

Explanation of the Fast Fourier Transform and Some Applications

Endo, Alan Kazuo 01 May 1981 (has links)
This report describes the Fast Fourier Transform and some of its applications. It describes the continuous Fourier transform and some of its properties. Finally, it describes the Fast Fourier Transform and its applications to hurricane risk analysis, ocean wave analysis, and hydrology.
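A minimal sketch of the FFT in practice: recovering the dominant frequency of a noisy sinusoid, the sort of spectral analysis applied to ocean wave and hydrologic records. The signal, sampling rate, and noise level are made-up examples.

```python
import numpy as np

fs = 100.0                                     # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.random.default_rng(5).normal(size=t.size)

spectrum = np.fft.rfft(signal)                 # real-input fast Fourier transform
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
dominant = freqs[np.argmax(np.abs(spectrum[1:])) + 1]   # skip the zero-frequency term
print(f"dominant frequency ~ {dominant:.2f} Hz")        # should recover the 3 Hz component
```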
59

An Evaluation of Truncated Sequential Test

Chang, Ryh-Thinn 01 May 1975 (has links)
The development of sequential analysis has led to the proposal of tests that are more economical, in that the Average Sample Number (A.S.N.) of the sequential test is smaller than the sample size of the fixed sample test. Although these tests usually have a smaller A.S.N. than the equivalent fixed sample procedure, there still remains the possibility that an extremely large sample size will be necessary to make a decision. To remedy this, truncated sequential tests have been developed. A method of truncation for testing a composite hypothesis is studied. This method is formed by mixing a fixed sample test and a sequential test and is applied to the exponential and normal distributions to establish its usefulness. It is proved that our truncation method can give an Operating Characteristic (O.C.) curve similar to that of the corresponding fixed sample test if the test parameters are properly chosen. The average sample size required by our truncation method compares satisfactorily with that of other existing truncation methods. Though the truncation method suggested in this study is not an optimum truncation, it is still worthwhile, especially when we are interested in testing a composite hypothesis.
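An illustrative sketch, with assumed parameters throughout, of estimating the Operating Characteristic (O.C.) curve of a truncated sequential test for a normal mean by simulation, i.e. the probability of accepting H0 as a function of the true mean. This is a generic truncated SPRT, not the mixed procedure developed in the thesis.

```python
import math
import numpy as np

def truncated_sprt_normal(xs, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05, max_n=50):
    """Return True if H0: mu = mu0 is accepted, False if rejected in favor of mu1."""
    upper, lower = math.log((1 - beta) / alpha), math.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        # log-likelihood ratio increment for a normal observation with known sigma
        llr += ((mu1 - mu0) * x - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr >= upper:
            return False          # reject H0
        if llr <= lower:
            return True           # accept H0
        if n >= max_n:            # truncation: decide by the sign of the LLR
            return llr < 0
    return True

rng = np.random.default_rng(6)
for true_mu in (0.0, 0.25, 0.5, 0.75, 1.0):
    accepts = sum(truncated_sprt_normal(rng.normal(true_mu, 1.0, size=200))
                  for _ in range(2000))
    print(f"mu = {true_mu:4.2f}   P(accept H0) ~ {accepts / 2000:.3f}")
```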
60

Automated Circulation Control for the Utah State University Library

Montgomery, Richard M. 01 May 1967 (has links)
This package of programs is a result of the U.S.U. Library incorporating an automated control on the circulation of its books, which would provide the library with a daily record of all books in circulation, or not available for circulation, and send notices when books were overdue. Because of the long-range program of the Data Processing Department of the University, it was decided to develop the software for this project rather than purchase the hardware. The then existing hardware included the IBM 1401 computer (4K), 1402 card reader, 1403 on-line printer, and a card sorter. The only additional hardware required by the Data Processing Department was the "read punch feed" feature on the card reader. This report includes information for operating the programs involved in processing the data. Any information required in setting up the data collection system may be obtained from the U.S.U. Library. These programs were developed to be compatible with the previously mentioned hardware and were used until the data processing facilities of the University were updated. All programs were written in the SSPS II symbolic language.
