1

A Two Sample Test of the Reliability Performance of Equipment Components

Coleman, Miki Lynne 01 May 1972 (has links)
The purpose of this study was to develop a test for comparing the reliability performance of two types of equipment components, to determine whether the new component satisfies a given feasibility criterion. Two tests were presented and compared: the fixed-sample-size test and the truncated sequential probability ratio test. Both involve a statistic that is approximately F-distributed. This study showed that the truncated sequential probability ratio test has good potential as a means of testing whether the reliability of the new component is at least a given multiple of the reliability of the old component.
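The abstract does not give the statistic's exact form. As a hedged sketch, assume exponentially distributed component lifetimes, for which 2n times the sample mean over the true mean life is chi-square with 2n degrees of freedom, so a ratio of scaled sample means is F-distributed. A fixed-sample-size version of the comparison might then look like this (the data, the criterion k, and the parameterization are all illustrative assumptions, not the thesis's actual procedure):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical exponential lifetime samples for the old and new components.
old = rng.exponential(scale=100.0, size=30)   # mean life ~100 hours
new = rng.exponential(scale=250.0, size=30)   # mean life ~250 hours

k = 2.0  # feasibility criterion: new must be at least k times as reliable

# For exponential lifetimes, 2*n*mean/theta ~ chi-square(2n). Under
# H0: theta_new = k * theta_old, the ratio below follows F(2*n1, 2*n2).
n1, n2 = len(new), len(old)
f_stat = new.mean() / (k * old.mean())
p_value = stats.f.sf(f_stat, 2 * n1, 2 * n2)  # evidence that ratio exceeds k
```

A small p-value would favor the alternative that the new component's mean life exceeds k times the old one's.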
2

Estimation of Hazard Function for Right Truncated Data

Jiang, Yong 27 April 2011 (has links)
This thesis centers on nonparametric inference for the cumulative hazard function of a right-truncated variable. We present three variance estimators for the Nelson-Aalen estimator of the cumulative hazard function and conduct a simulation study to investigate their performance. A close match between the sampling standard deviation and the estimated standard error is observed when the estimated survival probability is not close to 1; however, poor tail performance persists owing to limitations of the proposed variance estimators. We further analyze an AIDS blood-transfusion sample in which the disease latent time is right truncated, computing the three variance estimators to obtain three sets of confidence intervals. This work provides insight into two-sample tests for right-truncated data in future research.
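The Nelson-Aalen estimator at the center of the thesis can be sketched in a few lines. The version below handles the standard right-censored case (the right-truncated setting of the thesis requires a modified risk set), with a toy dataset rather than the AIDS transfusion sample:

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard (right-censored case).

    times:  observed times (event or censoring)
    events: 1 if the event was observed at that time, 0 if censored
    Returns (unique event times, cumulative hazard at those times).
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)

    uniq = np.unique(times[events == 1])  # distinct event times, sorted
    H, h = [], 0.0
    for t in uniq:
        at_risk = np.sum(times >= t)                 # risk set just before t
        d = np.sum((times == t) & (events == 1))     # events occurring at t
        h += d / at_risk                             # hazard increment d/Y(t)
        H.append(h)
    return uniq, np.array(H)

# Toy sample: one censored observation at time 3.
t, H = nelson_aalen([2, 3, 3, 5, 8], [1, 1, 0, 1, 1])
# increments: 1/5 at t=2, 1/4 at t=3, 1/2 at t=5, 1/1 at t=8
```

The estimated standard errors studied in the thesis attach to this step function; a common variance form in the censored case sums d/Y(t)^2 over event times.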
3

Statistical Methods for In-session Hemodialysis Monitoring

Xu, Yunnan 17 June 2020 (has links)
Motivated by real-time monitoring of dialysis, we aim to detect differences between groups of Raman spectra generated from dialysates sampled at different times in one session. Baseline correction is a critical preprocessing step for Raman spectra, but existing methods may not perform well on dialysis spectra because dialysates contain numerous chemical compounds. We first developed a new baseline correction method, Iterative Smoothing-spline with Root Error Adjustment (ISREA), which automatically adjusts intensities and employs a smoothing spline to produce a baseline in each iteration; it outperforms the popular Goldindec method on dialysis spectra and achieves better accuracy regardless of sample type. We then proposed a two-sample hypothesis test on groups of ISREA baseline-corrected Raman spectra. The uniqueness of the test lies in the nature of the tested data: rather than treating a Raman spectrum only as a curve, we also consider a vector whose elements are the peak intensities of biomarkers, so each observation is mixed data comprising a spectrum curve and a vector. Our method tests the equality of the means of two groups of such mixed data, building on asymptotic properties of the covariance of mixed data and on functional principal component analysis (FPCA). Simulation studies show that the method is applicable to small sample sizes with proper power and size control. Finally, to locate the regions that contribute most to a significant difference between two groups of univariate functional data, we developed a method that estimates a sparse coefficient function via an L1-norm penalty in functional logistic regression, and compared its performance with other methods. / Doctor of Philosophy / In the U.S. there are more than 709,501 patients with End-Stage Renal Disease (ESRD), for whom dialysis is a standard treatment.
Dialysis is time-consuming, expensive, and uncomfortable: patients attend three sessions every week in a facility, and each session lasts four hours regardless of the patient's condition. Raman spectroscopy, an affordable, fast, and widely applied technique, offers a way forward: spectra from used dialysate samples collected at different times in one session carry information about the dialysis process and thus make real-time monitoring possible. With these spectral data, we want to develop a statistical method that supports real-time monitoring of dialysis, providing physicians with statistical evidence on the dialysis process to improve decision making, increase the efficiency of dialysis, and better serve patients. Raman spectroscopy, however, demands a preprocessing step called baseline correction: the nature of the Raman technique and its instrumentation generates a baseline that adds complexity to the spectra and interferes with analysis. Despite the popularity of the technique and the many existing baseline correction methods, we found their performance on dialysate spectra below expectations. Hence, we proposed a baseline correction method called Iterative Smoothing-spline with Root Error Adjustment (ISREA), which performs better than existing methods. In addition, we developed a method that detects differences between two groups of ISREA baseline-corrected spectra from dialysate collected at different times, and we proposed and applied sparse functional logistic regression to locate the regions from which the significant difference arises.
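ISREA itself is the author's method and its details are not given in the abstract. As a stand-in, the sketch below illustrates the general idea shared by iterative baseline correction schemes: repeatedly fit a smooth curve and clip the signal down to the fit, so that peaks stop pulling the baseline estimate upward. This uses a simple polynomial fit, not ISREA's smoothing spline with root error adjustment:

```python
import numpy as np

def iterative_baseline(y, degree=4, n_iter=50):
    """Generic iterative polynomial baseline estimate (not ISREA itself)."""
    x = np.linspace(0.0, 1.0, len(y))       # normalized abscissa for stability
    work = np.asarray(y, dtype=float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(x, work, degree)
        base = np.polyval(coeffs, x)
        work = np.minimum(work, base)        # suppress points above the fit
    return base

# Synthetic "spectrum": smooth linear baseline plus two narrow peaks.
x = np.linspace(0, 1, 500)
baseline_true = 2.0 + 1.5 * x
peaks = 5 * np.exp(-((x - 0.3) / 0.01) ** 2) + 3 * np.exp(-((x - 0.7) / 0.01) ** 2)
y = baseline_true + peaks

est = iterative_baseline(y)
corrected = y - est   # peaks survive; the slowly varying baseline is removed
```

The clipping step is what makes the fit track the valleys between peaks rather than the peaks themselves, which is the same intuition behind ISREA's per-iteration intensity adjustment.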
4

Detecting Disguised Missing Data

Belen, Rahime 01 February 2009 (has links) (PDF)
Some applications provide explicit codes for missing data, such as NA (not available), but many do not, and missing entries are recorded as seemingly legitimate data values, whether valid or invalid. Such missing values are known as disguised missing data. Disguised missing data can degrade the quality of data analysis; for example, the results of discovered association rules in the KDD-Cup-98 data sets clearly showed the need to apply data quality management before analysis. In this thesis, to tackle the problem of disguised missing data, we analyzed the embedded unbiased sample heuristic (EUSH), demonstrated the method's drawbacks, and proposed a new methodology based on the chi-square two-sample test. The proposed method does not require any domain background knowledge and compares favorably with EUSH.
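A minimal sketch of the chi-square two-sample (homogeneity) test at the heart of the proposed method, applied to hypothetical value counts: a suspect subset (records sharing a value that may disguise missingness) is compared against a trusted reference sample, and a significant difference in the distribution of another attribute flags the suspect value:

```python
import numpy as np
from scipy import stats

# Hypothetical counts of a categorical attribute's three values,
# tallied in a suspect subset versus a trusted reference sample.
suspect = np.array([120, 30, 50])    # e.g. records coded with a default value
reference = np.array([60, 65, 75])   # unbiased reference counts

# Chi-square test of homogeneity on the 2 x 3 contingency table.
table = np.vstack([suspect, reference])
chi2, p, dof, expected = stats.chi2_contingency(table)
```

A small p-value says the suspect subset's distribution differs from the reference, which is evidence that the shared code is disguising missing data rather than recording a genuine value.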
5

A Tree-based Framework for Difference Summarization

Li, Rong 19 April 2012 (has links)
No description available.
6

Power Studies of Multivariate Two-Sample Tests of Comparison

Siluyele, Ian John January 2007 (has links)
Master of Science / Multivariate two-sample tests provide a means to test the match between two multivariate distributions. Although many such tests exist in the literature, relatively little is known about their relative power. The studies reported in this thesis contrast the effectiveness, in terms of power, of seven such tests in a Monte Carlo study. The relative power of the tests was investigated against location, scale, and correlation alternatives, with samples drawn from bivariate exponential, normal, and uniform populations. Results from the power studies show that no single test is the most powerful in all situations; particular test statistics are recommended for specific alternatives. A supplementary non-parametric graphical procedure, such as the depth-depth plot, can be recommended for diagnosing possible differences between the multivariate samples when the null hypothesis is rejected. As an example of the utility of the procedures for real data, the multivariate two-sample tests were applied to photometric data of twenty galactic globular clusters; the results from these analyses support the recommendations associated with specific test statistics.
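The abstract does not name the seven tests. As a hedged illustration of how such a Monte Carlo power study works, the sketch below estimates the power of one multivariate two-sample test (the energy-distance statistic with a permutation p-value, an assumed choice that need not be among the seven) against a bivariate location alternative:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_stat(x, y):
    """Energy-distance two-sample statistic: 2*E|X-Y| - E|X-X'| - E|Y-Y'|."""
    def mean_dist(a, b):
        return np.mean(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1))
    return 2 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y)

def perm_pvalue(x, y, n_perm=200):
    """Permutation p-value: reshuffle group labels and recompute the statistic."""
    obs = energy_stat(x, y)
    z = np.vstack([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(z)                       # permute rows in place
        if energy_stat(z[:n], z[n:]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

# Monte Carlo power estimate against a location shift of (1, 1).
n_rep, alpha, rejections = 20, 0.05, 0
for _ in range(n_rep):
    x = rng.normal(0.0, 1.0, size=(30, 2))
    y = rng.normal(1.0, 1.0, size=(30, 2))   # shifted alternative
    if perm_pvalue(x, y) < alpha:
        rejections += 1
power = rejections / n_rep   # fraction of replicates rejecting H0
```

Repeating this loop for each candidate statistic and each alternative (location, scale, correlation) yields the power comparison the thesis reports.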
7

Automatic State Construction using Decision Trees for Reinforcement Learning Agents

Au, Manix January 2005 (has links)
Reinforcement Learning (RL) is a learning framework in which an agent learns a policy, a mapping from states to actions, through continual interaction with the environment, receiving rewards as feedback on the actions performed. The objective of RL is to design autonomous agents that search for the policy maximizing the expected cumulative reward. When the environment is partially observable, the agent cannot determine the states with certainty; such states are called hidden in the literature. An agent that relies exclusively on current observations will not always find the optimal policy. For example, a mobile robot moving down a corridor of identical doors must remember how many doors it has passed in order to reach a specific one. To overcome partial observability, an agent uses both current and past (memory) observations to construct an internal state representation, which is treated as an abstraction of the environment. This research focuses on how features of past events are extracted, with variable granularity, for internal state construction. The project introduces a new method that applies information theory and decision-tree techniques to derive a tree structure representing both the state and the policy. The relevance of a candidate feature is assessed by its Information Gain Ratio ranking with respect to the cumulative expected reward. Experiments on three different RL tasks show that our variant of the U-Tree (McCallum, 1995) produces a more robust state representation and faster learning. This better performance can be explained by the fact that the Information Gain Ratio exhibits lower variance in return prediction than the Kolmogorov-Smirnov statistical test used in the original U-Tree algorithm.
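The Information Gain Ratio used to rank candidate features can be sketched over discrete labels. The thesis ranks features with respect to the cumulative expected reward (which would first be discretized); the toy labels below are purely illustrative:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a discrete label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(feature_values, labels):
    """Information Gain Ratio of a candidate feature for splitting."""
    n = len(labels)
    total = entropy(labels)
    cond = 0.0        # conditional entropy H(label | feature)
    split_info = 0.0  # entropy of the split itself (normalizer)
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        w = len(subset) / n
        cond += w * entropy(subset)
        split_info -= w * math.log2(w)
    gain = total - cond
    return gain / split_info if split_info > 0 else 0.0

# Toy example: this feature perfectly predicts the label, so ratio = 1.
f = ['a', 'a', 'b', 'b']
y = [0, 0, 1, 1]
ratio = gain_ratio(f, y)
```

In a U-Tree-style algorithm, the leaf would be split on whichever history feature maximizes this ratio, growing the tree that jointly represents state and policy.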
