  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Distribution of points on spherical objects and applications

Selvitella, Alessandro 10 1900 (has links)
In this thesis, we discuss some results on the distribution of points on the sphere, asymptotically when both the number of points and the dimension of the sphere tend to infinity. We then give some applications of these results to some statistical problems and especially to hypothesis testing. / Master of Science (MSc)
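The high-dimensional concentration phenomena behind such asymptotics can be seen numerically. Below is a minimal Python sketch (illustrative, not from the thesis): it samples uniform points on the sphere S^{d-1} by normalizing Gaussian vectors and checks that independent points become nearly orthogonal as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_sphere(n, d, rng):
    """Sample n points uniformly on the unit sphere S^{d-1}
    by normalizing standard Gaussian vectors."""
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# In high dimension, inner products of independent uniform points
# concentrate near 0: the points are nearly orthogonal.
pts = uniform_sphere(200, 500, rng)
gram = pts @ pts.T
off_diag = gram[~np.eye(200, dtype=bool)]
print(abs(off_diag).mean())  # small, roughly O(1/sqrt(d))
```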
12

Recursive Matrix Game Analysis: Optimal, Simplified, and Human Strategies in Brave Rats

Medwid, William A 01 June 2024 (has links) (PDF)
Brave Rats is a short game with simple rules, yet establishing a comprehensive strategy is very challenging without extensive computation. After explaining the rules, this paper begins by calculating the optimal strategy by recursively solving each turn's Minimax strategy. It then provides summary statistics about the complex, branching Minimax solution. Next, we examine six other strategy models and evaluate their performance against each other. These models' flaws highlight the key elements that contribute to the effectiveness of the Minimax strategy and offer insight into simpler strategies that human players could mimic. Finally, we analyze 123 games of human data collected by the author and friends and investigate how that data differs from Minimax-optimal play.
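The core computational step, solving a single turn's matrix game for its Minimax strategy, can be sketched with linear programming. This illustrative Python snippet uses rock-paper-scissors as a stand-in payoff matrix, since the Brave Rats payoff tables are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Optimal mixed strategy for the row player of a zero-sum matrix
    game: maximize the game value v subject to (x^T A)_j >= v for every
    column j, with x a probability vector."""
    m, n = A.shape
    # Variables: x_1..x_m, v.  linprog minimizes, so minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # Inequalities: v - (x^T A)_j <= 0 for each column j.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Equality: probabilities sum to 1 (v unconstrained).
    A_eq = np.ones((1, m + 1)); A_eq[0, -1] = 0.0
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Rock-paper-scissors as a stand-in payoff matrix.
A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
x, v = solve_matrix_game(A)
print(x, v)  # uniform strategy, game value 0
```

The full recursive solution in the paper applies this step at each turn of the game tree, with payoffs given by the values of the subgames it leads to.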
13

Using Neural Networks to Classify Discrete Circular Probability Distributions

Gaumer, Madelyn 01 January 2019 (has links)
Given the rise in the application of neural networks to all sorts of interesting problems, it seems natural to apply them to statistical tests. This senior thesis studies whether neural networks built to classify discrete circular probability distributions can outperform a class of well-known statistical tests for uniformity for discrete circular data that includes the Rayleigh Test [1], the Watson Test [2], and the Ajne Test [3]. Each neural network used is relatively small with no more than 3 layers: an input layer taking in discrete data sets on a circle, a hidden layer, and an output layer outputting probability values between 0 and 1, with 0 mapping to uniform and 1 mapping to nonuniform. In evaluating performances, I compare the accuracy, type I error, and type II error of this class of statistical tests and of the neural networks built to compete with them.
[1] Jammalamadaka, S. Rao; SenGupta, A. Topics in Circular Statistics. Series on Multivariate Analysis, 5. World Scientific Publishing Co., River Edge, NJ, 2001.
[2] Watson, G. S. Goodness-of-fit tests on a circle. II. Biometrika 49 (1962), 57–63.
[3] Ajne, B. A simple test for uniformity of a circular distribution. Biometrika 55 (1968), 343–354.
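Of the three baseline tests, the Rayleigh Test is the simplest to sketch. The Python snippet below follows its standard asymptotic form (under uniformity, 2nR̄² is approximately χ² with 2 degrees of freedom); the von Mises alternative used for the check is an illustrative choice, not the thesis's test bench.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

def rayleigh_test(angles):
    """Rayleigh test of uniformity for circular data.  Under H0
    (uniformity), 2*n*Rbar^2 is asymptotically chi-square with 2 df."""
    n = len(angles)
    C = np.cos(angles).sum()
    S = np.sin(angles).sum()
    Rbar = np.sqrt(C**2 + S**2) / n  # mean resultant length
    stat = 2 * n * Rbar**2
    p = chi2.sf(stat, df=2)
    return stat, p

uniform = rng.uniform(0, 2 * np.pi, 500)            # H0 holds
clustered = rng.vonmises(mu=0.0, kappa=2.0, size=500)  # H0 fails
_, p_u = rayleigh_test(uniform)
_, p_c = rayleigh_test(clustered)
print(p_u, p_c)  # large p for uniform data, tiny p for clustered data
```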
14

Application of Inter-Die Rank Statistics in Defect Detection

Bakshi, Vivek 01 March 2012 (has links)
This thesis presents a statistical method to identify test escapes. Testing often acquires parametric measurements as a function of the logical state of a chip. The usual method of classifying chips as pass or fail is to compare each state measurement to a test limit. In deep sub-micron technologies, subtle manufacturing defects escape the test limits because process variations mix healthy and faulty parametric test measurements. This thesis identifies chips with subtle defects by using the rank order of the parametric measurements. A hypothesis is developed that a defect is likely to disturb the defect-free ranking, whereas a shift caused by process variations will not affect the rank. The hypothesis does not depend on a priori knowledge of a defect-free ranking of parametric measurements. This thesis introduces a modified Expectation Maximization (EM) algorithm to separate the healthy and faulty tau components calculated from parametric responses of die pairs on a wafer. The modified EM uses generalized beta distributions to model the two components of the tau mixture distribution and estimates the faulty probability of each die on a wafer. Its sensitivity is evaluated using Monte Carlo simulations. Applied to production Product A, the modified EM achieves an average 30% reduction in DPPM (defective parts per million) across all lots.
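To illustrate the E- and M-steps behind such a mixture separation, here is a minimal two-component EM in Python. Note the stand-in: it uses Gaussian components on simulated tau-like values, whereas the thesis models the tau mixture with generalized beta distributions.

```python
import numpy as np

rng = np.random.default_rng(2)

def em_two_gaussians(x, iters=100):
    """Two-component 1-D Gaussian-mixture EM: an illustrative stand-in
    for the thesis's generalized-beta mixture over tau statistics."""
    mu = np.array([x.min(), x.max()])       # crude initialization
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd, r

# Simulated "healthy" and "faulty" tau-like values.
x = np.concatenate([rng.normal(0.9, 0.03, 300), rng.normal(0.5, 0.1, 60)])
w, mu, sd, r = em_two_gaussians(x)
print(w, mu)  # weights and means of the recovered components
```

The per-die faulty probability in the thesis corresponds to the responsibilities `r` of the faulty component.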
15

A Comparison of Techniques for Handling Missing Data in Longitudinal Studies

Bogdan, Alexander R 07 November 2016 (has links)
Missing data are a common problem in virtually all epidemiological research, especially when conducting longitudinal studies. In these settings, clinicians may collect biological samples to analyze changes in biomarkers, which often do not conform to parametric distributions and may be censored due to limits of detection. Using complete data from the BioCycle Study (2005-2007), which followed 259 premenopausal women over two menstrual cycles, we compared four techniques for handling missing biomarker data with non-Normal distributions. We imposed increasing degrees of missing data on two non-Normally distributed biomarkers under conditions of missing completely at random, missing at random, and missing not at random. Generalized estimating equations were used to obtain estimates from complete case analysis, multiple imputation using joint modeling, multiple imputation using chained equations, and multiple imputation using chained equations and predictive mean matching on Day 2, Day 13 and Day 14 of a standardized 28-day menstrual cycle. Estimates were compared against those obtained from analysis of the completely observed biomarker data. All techniques performed comparably when applied to a Normally distributed biomarker. Multiple imputation using joint modeling and multiple imputation using chained equations produced similar estimates across all types and degrees of missingness for each biomarker. Multiple imputation using chained equations and predictive mean matching consistently deviated from both the complete data estimates and the other missing data techniques when applied to a biomarker with a bimodal distribution. When addressing missing biomarker data in longitudinal studies, special attention should be given to the underlying distribution of the missing variable. As biomarkers become increasingly Normal, the amount of missing data tolerable while still obtaining accurate estimates may also increase when data are missing at random. 
Future studies are necessary to assess these techniques under more elaborate missingness mechanisms and to explore interactions between biomarkers for improved imputation models.
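A small illustration of chained-equations-style imputation on correlated data missing completely at random, using scikit-learn's IterativeImputer. This is an assumption of convenience for the sketch; the study fit its models with generalized estimating equations, which are not shown here.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)

# Two correlated "biomarkers"; 30% of the second is made missing (MCAR).
n = 500
x1 = rng.normal(0, 1, n)
x2 = 0.8 * x1 + rng.normal(0, 0.6, n)
X = np.column_stack([x1, x2])
X_miss = X.copy()
mask = rng.random(n) < 0.3
X_miss[mask, 1] = np.nan

# Chained-equations-style imputation: each incomplete column is
# regressed on the others, iterating to convergence.
imp = IterativeImputer(random_state=0)
X_filled = imp.fit_transform(X_miss)
print(X_filled[mask, 1].std(), X[mask, 1].std())
```

One caveat the abstract itself raises: conditional-mean imputation like this behaves well for roughly Normal biomarkers but can misbehave for bimodal ones, which is where predictive mean matching diverged in the study.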
16

Takens Theorem with Singular Spectrum Analysis Applied to Noisy Time Series

Torku, Thomas K 01 May 2016 (has links)
The evolution of big data has led to financial time series becoming increasingly complex, noisy, non-stationary and nonlinear. Takens theorem can be used to analyze and forecast nonlinear time series, but even small amounts of noise can hopelessly corrupt a Takens approach. In contrast, Singular Spectrum Analysis is an excellent tool for both forecasting and noise reduction. Fortunately, it is possible to combine the Takens approach with Singular Spectrum Analysis (SSA), and in fact, estimation of key parameters in Takens theorem is performed with Singular Spectrum Analysis. In this thesis, we combine the denoising abilities of SSA with the Takens theorem approach to make the manifold reconstruction outcomes of Takens theorem less sensitive to noise. In particular, in the course of performing the SSA on a noisy time series, we branch off into a Takens theorem approach. We apply this approach to a variety of noisy time series.
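The two building blocks can be sketched minimally: SSA denoising via a truncated SVD of the Hankel trajectory matrix, followed by a Takens delay-coordinate embedding. The noisy sine wave below is an illustrative series, not the financial data studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

def ssa_denoise(x, window, rank):
    """Basic SSA: build the Hankel trajectory matrix, keep the top
    `rank` singular components, and diagonal-average back to a series."""
    n = len(x)
    k = n - window + 1
    H = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    out = np.zeros(n); counts = np.zeros(n)
    for j in range(k):                     # diagonal averaging (Hankelization)
        out[j:j + window] += H_r[:, j]
        counts[j:j + window] += 1
    return out / counts

def delay_embed(x, dim, tau):
    """Takens delay-coordinate embedding of a scalar series."""
    m = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + m] for i in range(dim)])

t = np.linspace(0, 20 * np.pi, 2000)
clean = np.sin(t)
noisy = clean + rng.normal(0, 0.3, t.size)
denoised = ssa_denoise(noisy, window=100, rank=2)   # a sine is rank 2
emb = delay_embed(denoised, dim=3, tau=10)
print(emb.shape, np.abs(denoised - clean).mean())
```

Embedding the denoised series rather than the raw one is the thesis's central move: the reconstructed manifold is far less distorted by noise.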
17

Probabilistic Methods In Information Theory

Pachas, Erik W 01 September 2016 (has links)
Given a probability space, we analyze the uncertainty, that is, the amount of information of a finite system, by studying the entropy of the system. We also extend the concept of entropy to a dynamical system by introducing a measure preserving transformation on a probability space. After showing some theorems and applications of entropy theory, we study the concept of ergodicity, which helps us to further analyze the information of the system.
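The entropy of a finite system described above is the Shannon entropy of its partition probabilities; a short sketch:

```python
import numpy as np

def shannon_entropy(p):
    """Entropy H(p) = -sum p_i log p_i of a finite distribution (nats).
    Zero-probability cells contribute nothing, by convention."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Uniform distributions maximize entropy over a fixed support size;
# a deterministic system carries no information.
print(shannon_entropy([0.25] * 4))            # log 4
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))  # strictly less than log 4
print(shannon_entropy([1.0]))                 # 0
```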
18

Making Models with Bayes

Olid, Pilar 01 December 2017 (has links)
Bayesian statistics is an important approach to modern statistical analyses. It allows us to use our prior knowledge of the unknown parameters to construct a model for our data set. The foundation of Bayesian analysis is Bayes' Rule, which in its proportional form indicates that the posterior is proportional to the prior times the likelihood. We will demonstrate how we can apply Bayesian statistical techniques to fit a linear regression model and a hierarchical linear regression model to a data set. We will show how to apply different distributions to Bayesian analyses and how the use of a prior affects the model. We will also make a comparison between the Bayesian approach and the traditional frequentist approach to data analyses.
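For the linear-regression case, the conjugate Gaussian posterior is available in closed form. A minimal sketch with a N(0, τ²I) prior on the weights and known noise variance; the simulated data and hyperparameters are illustrative choices, not the thesis's data set.

```python
import numpy as np

rng = np.random.default_rng(5)

def bayes_linreg(X, y, sigma2, prior_var):
    """Posterior over regression weights with a N(0, prior_var*I) prior
    and known noise variance sigma2 (conjugate Gaussian case):
    precision = I/prior_var + X'X/sigma2, mean = cov @ X'y/sigma2."""
    d = X.shape[1]
    prec = np.eye(d) / prior_var + X.T @ X / sigma2
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y / sigma2)
    return mean, cov

# Simulated data: y = 2 + 3x + noise.
n = 200
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])
y = 2 + 3 * x + rng.normal(0, 0.5, n)
mean, cov = bayes_linreg(X, y, sigma2=0.25, prior_var=10.0)
print(mean)  # posterior mean close to [2, 3]
```

Shrinking `prior_var` pulls the posterior mean toward zero, which is one concrete way the choice of prior affects the fitted model, as the abstract describes.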
19

Academic Predictors of National Council Licensure Examination for Registered Nurses Pass Rates

Elliott, Maybeth J. 01 January 2011 (has links)
The United States continues to be affected by a severe, long-standing nursing shortage that is not projected to resolve within the next 10 or more years. Unsuccessful passage of the National Council Licensure Examination for Registered Nurses (NCLEX-RN) among graduate nurses remains one of several key contributors to the nursing shortage. The goal of this study was to identify which of the following best predicted passage of the NCLEX-RN: cumulative fall semester GPA; overall prenursing science, mathematics, and English GPA; type of high school background; TOEFL score; clinical pass or fail; and on-time program completion. Archived records from the academic years of 2006-2010 of students/graduates of a small, private BSN program were analyzed. A nonconcurrent, prospective design of secondary data was guided by the theoretical implications of the Seidman retention formula, which holds that early identification of academic problems is a necessary precursor to interventions that promote academic success. Significant, positive correlations were found between GPA of prenursing courses and achievement in clinical courses and on-time nursing program completion. Forward and backward logistic regression procedures revealed that clinical performance was the strongest predictor of NCLEX-RN success, but with an inverse relationship. Implications for positive social change include retention of BSN students to improve graduation rates. This ultimately will foster achievement on the NCLEX-RN, resulting in more graduates who can competently serve the health care needs of individuals and communities, and in an alleviation of the nursing shortage.
20

Dynamic Model Pooling Methodology for Improving Aberration Detection Algorithms

Sellati, Brenton J 01 January 2010 (has links) (PDF)
Syndromic surveillance is defined generally as the collection and statistical analysis of data which are believed to be leading indicators for the presence of deleterious activities developing within a system. Conceptually, syndromic surveillance can be applied to any discipline in which it is important to know when external influences manifest themselves in a system by forcing it to depart from its baseline. Comparisons of syndromic surveillance systems have led to mixed results, where models that dominate in one performance metric are often sorely deficient in another. This results in a zero-sum trade-off in which one performance metric must be afforded greater importance for a decision to be made. This thesis presents a dynamic pooling technique which allows for the combination of competing syndromic surveillance models in such a way that the resulting detection algorithm offers a better combination of sensitivity and specificity, two of the key model metrics, than any of the models individually. We then apply this methodology to a simulated data set in the context of detecting outbreaks of disease in an animal population. We find that this dynamic pooling methodology is robust in the sense that it is capable of superior overall performance with respect to sensitivity, specificity, and mean time to detection under varying conditions of baseline data behavior, e.g. controlling for the presence or absence of various levels of trend and seasonality, as well as in simulated out-of-sample performance tests.
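The flavor of score pooling can be sketched as follows. This toy example averages the standardized scores of two simple detectors (EWMA and CUSUM) over simulated counts with an injected outbreak. The thesis pools models with dynamic weights, so the fixed equal weights here are a simplification.

```python
import numpy as np

rng = np.random.default_rng(6)

# Baseline Poisson counts with an injected outbreak.
n = 300
counts = rng.poisson(20, n).astype(float)
counts[200:220] += 8  # simulated outbreak

def ewma_score(x, lam=0.2):
    """Standardized exponentially weighted moving average of the series."""
    s = np.zeros_like(x)
    m = x[0]
    for i, v in enumerate(x):
        m = lam * v + (1 - lam) * m
        s[i] = m
    return (s - x.mean()) / x.std()

def cusum_score(x, k=0.5):
    """One-sided CUSUM of standardized values with reference value k."""
    z = (x - x.mean()) / x.std()
    s = np.zeros_like(x)
    for i in range(1, len(x)):
        s[i] = max(0.0, s[i - 1] + z[i] - k)
    return s

# Pool by averaging the two standardized scores; alarm on a threshold.
pooled = 0.5 * ewma_score(counts) + 0.5 * cusum_score(counts)
alarms = np.where(pooled > 2.0)[0]
print(alarms[:5])
```

Even this static average illustrates the motivation: the EWMA reacts quickly (sensitivity) while the CUSUM resists isolated noise spikes (specificity), and the pooled score inherits some of both.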
