71

GLR Control Charts for Monitoring the Mean Vector or the Dispersion of a Multivariate Normal Process

Wang, Sai 28 February 2012 (has links)
In many applications, the quality of process outputs is described by more than one characteristic variable, and these quality variables usually follow a multivariate normal (MN) distribution. This dissertation discusses the monitoring of the mean vector and the covariance matrix of MN processes. The first part develops a statistical process control (SPC) chart based on a generalized likelihood ratio (GLR) statistic to monitor the mean vector. The performance of the GLR chart is compared to that of the Hotelling χ² chart, the multivariate exponentially weighted moving average (MEWMA) chart, and a multi-MEWMA combination. Results show that the Hotelling χ² chart and the MEWMA chart are effective only for a small range of shift sizes in the mean vector, while the GLR chart and some carefully designed multi-MEWMA combinations give similar, and much better, overall performance in detecting a wide range of shift magnitudes. Unlike most of these other options, the GLR chart does not require the user to specify tuning parameter values. The GLR chart also has an advantage in process diagnostics: at the time of a signal, estimates of the change point and of the out-of-control mean vector are immediately available to the user. These advantages make the GLR chart a favorable option for practitioners. For the design of the GLR chart, a series of easy-to-use equations is provided for calculating the control limit that achieves the desired in-control performance. The use of this GLR chart with a variable sampling interval (VSI) scheme is also evaluated and discussed. The rest of the dissertation considers the problem of monitoring the covariance matrix. Three GLR charts with different covariance matrix estimators are discussed. Results show that the GLR chart with a multivariate exponentially weighted moving covariance (MEWMC) matrix estimator is slightly better than the existing method for detecting general changes in the covariance matrix, and that the GLR chart with a constrained maximum likelihood estimator (CMLE) gives much better overall performance across a wide range of shift sizes than the best available options for detecting only variance increases. / Ph. D.
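The GLR statistic described in this abstract maximizes, over candidate change points, the likelihood ratio with the unknown post-change mean replaced by its maximum likelihood estimate. Below is a minimal, window-limited sketch of that computation, assuming an in-control mean of zero and a known covariance matrix; the function name, window size, and control limit h are illustrative, and the dissertation's design equations for choosing h are not reproduced here.

```python
import numpy as np

def glr_mean_chart(obs, sigma_inv, h, window=50):
    """Window-limited GLR chart for a sustained shift in the mean vector
    of multivariate normal observations (in-control mean assumed zero,
    covariance known; sigma_inv is the inverse covariance matrix)."""
    # prepend a zero row so that csum[t] - csum[tau] sums obs[tau:t]
    csum = np.vstack([np.zeros(obs.shape[1]), np.cumsum(obs, axis=0)])
    for t in range(1, len(obs) + 1):
        best, best_tau = 0.0, None
        for tau in range(max(0, t - window), t):
            m = t - tau                               # post-change sample size
            ybar = (csum[t] - csum[tau]) / m          # post-change sample mean
            stat = 0.5 * m * ybar @ sigma_inv @ ybar  # GLR, mean MLE plugged in
            if stat > best:
                best, best_tau = stat, tau
        if best > h:
            mean_est = (csum[t] - csum[best_tau]) / (t - best_tau)
            return t, best_tau, mean_est  # signal time, change point, mean
    return None
```

Because the maximizing change point and the post-change sample mean are retained, the diagnostic estimates mentioned above come out of the same computation that produces the signal.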
72

Monitoring Markov Dependent Binary Observations with a Log-Likelihood Ratio Based CUSUM Control Chart

Modarres-Mousavi, Shabnam 04 April 2006 (has links)
Our objective is to monitor changes in a proportion with correlated binary observations. All of the published work on this subject used a first-order Markov chain model for the data. Increasing the order of dependence above one by extending a standard Markov chain model entails an exponential increase in both the number of parameters and the dimension of the transition probability matrix. In this dissertation, we develop a particular Markov chain structure, the Multilevel Model (MLM), to model the correlation between binary observations. The basic idea is to assign a lower probability to observing a 1 when all previous correlated observations are 0's, and a higher probability to observing a 1 as the last observed 1 gets closer to the current observation. We refer to each of the distinct situations of observing a 1 as a "level". For a given order of dependence r, at most r distinct values of the conditional probability of observing a 1 can be assigned, so the number of levels is always less than or equal to r. Compared to a direct extension of the first-order Markov model to higher orders, our model is considerably parsimonious: the number of parameters for the MLM is only one plus the number of levels, and the dimension of the transition probability matrix grows with the number of levels rather than exponentially with the order of dependence. We construct a CUSUM control chart for monitoring a proportion with correlated binary observations. First, we use the probability structure of a first-order Markov chain to derive a log-likelihood ratio based CUSUM control statistic. Then, we model this CUSUM statistic itself as a Markov chain, which in turn allows for designing a control chart with specified statistical properties: the Markov Binary CUSUM (MBCUSUM) chart. We generalize the MBCUSUM to account for any order of dependence between binary observations by applying the MLM to the data and to our CUSUM control statistic. We verify that the MBCUSUM performs better than a curtailed Shewhart chart. Also, we show that except for extremely large changes in the proportion of interest, the MBCUSUM control chart detects changes faster than the Bernoulli CUSUM control chart, which is designed for independent observations. / Ph. D.
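As a concrete illustration of the first-order building block described above, the following sketch accumulates one log-likelihood ratio increment per Markov transition; the function name, the transition matrices, and the control limit h are illustrative rather than the dissertation's notation.

```python
import numpy as np

def markov_binary_cusum(x, P0, P1, h):
    """One-sided log-likelihood-ratio CUSUM for binary data following a
    first-order Markov chain. P0 and P1 are the in-control and
    out-of-control 2x2 transition matrices, with P[i][j] equal to
    P(X_t = j | X_{t-1} = i)."""
    c = 0.0
    for t in range(1, len(x)):
        i, j = x[t - 1], x[t]
        c = max(0.0, c + np.log(P1[i][j] / P0[i][j]))  # LLR increment
        if c > h:
            return t  # signal: evidence that the proportion has shifted
    return None
```

Modeling this statistic itself as a Markov chain, as the abstract describes, is what allows the control limit h to be chosen to deliver specified run length properties.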
73

Applications of Control Charts in Medicine and Epidemiology

Sego, Landon Hugh 18 April 2006 (has links)
We consider two applications of control charts in health care. The first involves the comparison of four methods designed to detect an increase in the incidence rate of a rare health event, such as a congenital malformation. A number of methods have been proposed; among these are the Sets method, two modifications of the Sets method, and the CUSUM method based on the Poisson distribution. Many of the previously published comparisons of these methods used unrealistic assumptions or ignored implicit assumptions, which led to misleading conclusions. We consider the situation where data are observed as a sequence of Bernoulli trials and propose the Bernoulli CUSUM chart as a desirable method for the surveillance of rare health events. We compare the steady-state average run length performance of the Sets method and its modifications to that of the Bernoulli CUSUM chart under a wide variety of circumstances. In all but a very few of the extensive number of cases considered, the Bernoulli CUSUM chart performs better than the Sets method and its modifications. The second application area involves monitoring clinical outcomes, which requires accounting for the fact that each patient has a different risk of death prior to undergoing a health care procedure. We propose a risk-adjusted survival time CUSUM chart (RAST CUSUM) for monitoring clinical outcomes where the primary endpoint is a continuous, right-censored time-to-event variable. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart to that of the risk-adjusted Bernoulli CUSUM chart, using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting deterioration in the quality of a clinical procedure than the risk-adjusted Bernoulli CUSUM chart, especially when the fraction of censored observations is not too high. We also address details regarding the implementation of a prospective monitoring scheme using the RAST CUSUM chart. / Ph. D.
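The risk-adjusted Bernoulli CUSUM used as the comparator above scores each patient's binary outcome against that patient's model-predicted risk. A minimal sketch follows, using the common odds-ratio formulation of Steiner et al. (2000) with an in-control odds ratio of 1; the function name, the out-of-control odds ratio R_A, and the control limit h are illustrative, and the RAST CUSUM itself replaces these binary scores with ones based on censored survival times.

```python
from math import log

def risk_adjusted_bernoulli_cusum(outcomes, risks, R_A=2.0, h=4.5):
    """Upper risk-adjusted Bernoulli CUSUM. outcomes: 0/1 death
    indicators; risks: model-predicted death probabilities p_t;
    R_A: out-of-control odds ratio to detect."""
    c = 0.0
    for t, (y, p) in enumerate(zip(outcomes, risks), 1):
        # log-likelihood ratio for odds ratio R_A vs. 1 at patient risk p
        w = log(R_A ** y / (1.0 - p + R_A * p))
        c = max(0.0, c + w)
        if c > h:
            return t  # signal: apparent deterioration in outcomes
    return None
```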
74

Detection of DDoS Attacks against the SDN Controller using Statistical Approaches

Al-Mafrachi, Basheer Husham Ali January 2017 (has links)
No description available.
75

Evaluation of Scan Methods Used in the Monitoring of Public Health Surveillance Data

Fraker, Shannon E. 07 December 2007 (has links)
With the recent increase in the threat of biological terrorism as well as the continual risk of other diseases, research in public health surveillance and disease monitoring has grown tremendously. There is an abundance of data available in all sorts of forms. Hospitals, federal and local governments, and industries are all collecting data and developing new methods to be used in the detection of anomalies. Many of these methods are developed, applied to a real data set, and incorporated into software. This research, however, takes a different view of the evaluation of these methods. We feel that there needs to be solid statistical evaluation of proposed methods no matter the intended area of application. Proof-by-example does not seem reasonable as the sole evaluation criterion, especially for methods that have the potential to greatly affect our lives. For this reason, this research focuses on determining the properties of some of the most common anomaly detection methods. A distinction is made between metrics used for retrospective historical monitoring and those used for prospective on-going monitoring, with the focus on the latter situation. Metrics such as the recurrence interval and time-to-signal measures are therefore the most applicable. These metrics, in conjunction with control charts such as exponentially weighted moving average (EWMA) charts and cumulative sum (CUSUM) charts, are examined. Two new time-to-signal measures, the average time between signal events and the average signal event length, are introduced to better compare the recurrence interval with the time-to-signal properties of surveillance schemes. The relationship commonly thought to exist between the recurrence interval and the average time to signal is shown not to exist once autocorrelation is present in the statistics used for monitoring, which means that closer consideration needs to be paid to the selection of which of these metrics to report. The properties of a commonly applied scan method are also studied carefully in the strictly temporal setting. The counts of incidences are assumed to occur independently over time and to follow a Poisson distribution. Simulations are used to evaluate the method under changes in various parameters. In addition, two methods have been proposed in the literature for the calculation of the p-value: an adjustment based on the tests for previous time periods, and the use of the recurrence interval with no adjustment for previous tests. The difference between these two methods is also considered. Of interest are how quickly the scan method detects an increase in the incidence rate, how many false alarm events occur, and how long the method continues to signal after the increased threat has passed. These estimates from the scan method are compared to those of other attribute monitoring methods, mainly the Poisson CUSUM chart. It is shown that the Poisson CUSUM chart is typically faster in detecting the increased incidence rate. / Ph. D.
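As an example of the attribute monitoring methods against which the scan method is compared, a Poisson CUSUM for detecting a rate increase from lam0 to lam1 can be written in a few lines; restarting the statistic after each signal makes it possible to tally signal events and so estimate quantities such as the average time between signal events. The function name and constants here are illustrative.

```python
from math import log

def poisson_cusum_signals(counts, lam0, lam1, h):
    """Upper Poisson CUSUM for a rate increase from lam0 to lam1.
    Returns every time index at which the chart signals, restarting
    the statistic at zero after each signal event."""
    k = log(lam1 / lam0)
    c, signals = 0.0, []
    for t, x in enumerate(counts, 1):
        c = max(0.0, c + x * k - (lam1 - lam0))  # log-likelihood ratio step
        if c > h:
            signals.append(t)
            c = 0.0  # restart for on-going (prospective) monitoring
    return signals
```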
76

Statistical Methods for Improving and Maintaining Product Reliability

Dickinson, Rebecca 17 September 2014 (has links)
When a reliability experiment is used, practitioners can better understand what lifetimes to expect of a product under different operating conditions and which factors are important for designing reliability into a product. Reliability experiments, however, can be very challenging to analyze, because the reliability or lifetime data tend to follow distinctly non-normal distributions and the experiments typically involve censoring. Time and cost constraints may also lead to reliability experiments with experimental protocols that are not completely randomized. In many industrial experiments, for example, a split-plot structure arises when the randomization of the experimental runs is restricted. Additionally, for many reliability experiments it is often cost effective to apply a treatment combination to a stand holding multiple units, as opposed to each unit individually, which introduces subsampling. The analysis of lifetime data assuming a completely randomized design has been well studied, but until recently analysis methodologies for more complex experimental designs with multiple error terms have not been a focus of the reliability field. This dissertation provides two methods for analyzing right-censored, Weibull distributed lifetime data from a split-plot experiment with subsampling, and we evaluate the proposed methods through a simulation study. Companies also routinely perform life tests on their products to ensure that products meet requirements. Each of these life tests typically involves testing several units simultaneously, with interest in the times to failure. Again, the fact that lifetime data tend to be non-normally distributed and censored makes the development of a control charting procedure more demanding. In this dissertation, one-sided lower and upper likelihood ratio based cumulative sum (CUSUM) control charting procedures are developed for right-censored Weibull lifetime data to monitor changes in the scale parameter, also known as the characteristic life, for a fixed value of the Weibull shape parameter. Because a decrease in the characteristic life indicates a decrease in the mean lifetime of a product, the one-sided lower CUSUM chart is the main focus. We illustrate the development and implementation of the chart and evaluate its properties through a simulation study. / Ph. D.
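For the monitoring procedure described above, the CUSUM increment is a log-likelihood ratio in which censored units contribute through the Weibull survival function and observed failures also contribute through the density. A minimal sketch of the lower chart follows, with illustrative names and arguments; it is not the dissertation's exact formulation.

```python
from math import log

def weibull_cusum_lower(times, events, theta0, theta1, beta, h):
    """One-sided lower likelihood-ratio CUSUM for a decrease in the
    Weibull characteristic life from theta0 to theta1 < theta0, with
    shape parameter beta held fixed. times: observed lifetimes;
    events: 1 for an observed failure, 0 for right censoring."""
    d = theta1 ** (-beta) - theta0 ** (-beta)  # > 0 since theta1 < theta0
    c = 0.0
    for t, (x, delta) in enumerate(zip(times, events), 1):
        # the survival part of the LLR applies to every unit; the density
        # part beta*log(theta0/theta1) only to observed failures
        llr = -(x ** beta) * d + (beta * log(theta0 / theta1) if delta else 0.0)
        c = max(0.0, c + llr)
        if c > h:
            return t  # signal: characteristic life appears to have decreased
    return None
```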
77

On Development and Performance Evaluation of Some Biosurveillance Methods

Zheng, Hongzhang 09 August 2011 (has links)
This study examines three applications of control charts used for monitoring syndromic data with different characteristics. The first part develops a seasonal autoregressive integrated moving average (SARIMA) based surveillance chart and compares it with the CDC Early Aberration Reporting System (EARS) W2c method using both authentic and simulated data. After successfully removing the long-term trend and the seasonality in the syndromic data, the SARIMA approach is shown to perform better than the EARS method in terms of two key surveillance characteristics, the false alarm rate and the average time to detect an outbreak. In the second part, we propose a generalized likelihood ratio (GLR) control chart to detect a wide range of shifts in the mean of Poisson distributed biosurveillance data. The application of a sign function to the original GLR chart statistic yields downward-sided, upward-sided, and two-sided GLR chart statistics in a unified framework. To facilitate the use of such charts in practice, we provide detailed guidance on developing and implementing the GLR chart. Under the steady-state framework, this study indicates that the overall GLR chart performance in detecting a range of shifts of interest is superior to that of traditional control charts, including the EARS method, Shewhart charts, EWMA charts, and CUSUM charts. Health care related data often contain an excessive number of zeros, and zero-inflated Poisson (ZIP) models are more appropriate than Poisson models for describing such data. The last part of the dissertation considers the GLR chart for ZIP data under a research framework similar to that of the second part. Because small sample sizes may affect the estimation of the ZIP parameters, the efficiency of the maximum likelihood estimators (MLEs) is investigated in depth, followed by suggestions for improvement. Numerical approaches to solving for the MLEs are discussed as well. Statistics for a set of GLR charts are derived, followed by modifications that change them from two-sided to one-sided statistics. Because limited time and resources kept this from being a complete study of GLR charts for ZIP processes, suggestions for future work are proposed at the end of the dissertation. / Ph. D.
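Since the last part of the dissertation turns on estimating the ZIP parameters from possibly small samples, a numerical maximum likelihood fit is the natural starting point. The sketch below, with illustrative names and crude starting values, maximizes the ZIP log-likelihood directly:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_mle(x):
    """MLEs (pi, lam) of the zero-inflated Poisson model:
    P(X=0) = pi + (1-pi)e^{-lam}, P(X=k) = (1-pi)Pois(k; lam), k >= 1."""
    x = np.asarray(x)
    n0, pos = np.sum(x == 0), x[x > 0]

    def negloglik(params):
        pi, lam = params
        ll = n0 * np.log(pi + (1.0 - pi) * np.exp(-lam))
        ll += np.sum(np.log(1.0 - pi) - lam + pos * np.log(lam)
                     - gammaln(pos + 1))
        return -ll

    # crude but serviceable starting values for a sketch
    start = [0.5 * n0 / len(x) + 1e-3, max(x.mean(), 0.1)]
    res = minimize(negloglik, start,
                   bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
    return res.x  # (pi_hat, lam_hat)
```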
78

Detection of the Change Point and Optimal Stopping Time by Using Control Charts on Energy Derivatives

AL, Cihan, Koroglu, Kubra January 2011 (has links)
No description available.
79

CUSUM tests based on grouped observations

Eger, Karl-Heinz, Tsoy, Evgeni Borisovich 08 November 2009 (has links)
This paper deals with CUSUM tests based on grouped or classified observations. The computation of the average run length is reduced to solving a system of simultaneous linear equations. Moreover, a corresponding approximation based on the Wald approximations for the characteristics of sequential likelihood ratio tests is presented. The effect of grouping is investigated with a CUSUM test for the mean of a normal distribution based on F-optimal grouping schemes. The example considered demonstrates that highly efficient CUSUM tests can be obtained with F-optimal grouping schemes using only a small number of groups.
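The reduction of the ARL computation to linear equations can be illustrated with the standard Markov chain (Brook and Evans) construction: with grouped observations the CUSUM increment takes finitely many values, the chart is discretized into m states below the limit h, and the vector of ARLs solves (I − Q)L = 1, where Q holds the within-control transition probabilities. A sketch with illustrative names, not the paper's exact scheme:

```python
import numpy as np

def cusum_arl_markov(inc_values, inc_probs, h, m=200):
    """ARL of a one-sided CUSUM with a discrete increment distribution
    (e.g. grouped log-likelihood ratio scores), via m Markov states
    covering [0, h) and the linear system (I - Q) L = 1."""
    w = h / m                                    # width of each state
    centers = (np.arange(m) + 0.5) * w
    Q = np.zeros((m, m))
    for v, p in zip(inc_values, inc_probs):
        nxt = np.maximum(centers + v, 0.0)       # CUSUM reflection at zero
        idx = np.floor(nxt / w).astype(int)      # destination state
        inside = idx < m                         # idx >= m means a signal
        np.add.at(Q, (np.arange(m)[inside], idx[inside]), p)
    L = np.linalg.solve(np.eye(m) - Q, np.ones(m))
    return L[0]                                  # ARL from a zero start
```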
80

Development of statistical methods for the surveillance and monitoring of adverse events which adjust for differing patient and surgical risks

Webster, Ronald A. January 2008 (has links)
The research in this thesis was undertaken to develop statistical tools, adjusted for varying patient risk, for monitoring adverse events in hospitals. The studies involved a detailed literature review of risk adjustment scores for patient mortality following cardiac surgery; a comparison of institutional performance; the performance of risk-adjusted CUSUM schemes for varying risk profiles of the populations being monitored; the effects of uncertainty in the estimates of expected probabilities of mortality on the performance of risk-adjusted CUSUM schemes; and the instability of the estimated average run lengths of risk-adjusted CUSUM schemes found using the Markov chain approach. The literature review of cardiac surgical risk found that the number of risk factors in a risk model and its discriminating ability were independent, that the risk factors could be classified into their "dimensions of risk", and that a risk score could not be generalized to populations remote from its developmental database if accurate predictions of patients' probabilities of mortality were required. The conclusions were that an institution could use an "off the shelf" risk score, provided it was recalibrated, or it could construct a customized risk score with risk factors that provide at least one measure for each dimension of risk. The use of report cards to publish adverse outcomes as a tool for quality improvement has been criticized in the medical literature. An analysis of the report cards for cardiac surgery in New York State showed that the institutions' outcome rates appeared overdispersed relative to the model used to construct confidence intervals, and that the uncertainty associated with the estimation of institutions' outcome rates could be mitigated with trend analysis. A second analysis, of the mortality of patients admitted to coronary care units, demonstrated the use of notched box plots, fixed and random effects models, and risk-adjusted CUSUM schemes as tools to identify outlying hospitals. An important finding from the literature review was that the primary reason for publication of outcomes is to ensure that health care institutions are accountable for the services they provide. A detailed review of the risk-adjusted CUSUM scheme was undertaken, and the use of average run lengths (ARLs) to assess the scheme as the risk profile of the monitored population changes was justified. The ARLs for in-control and out-of-control processes were found to increase markedly as the average outcome rate of the patient population decreased towards zero. A modification of the risk-adjusted CUSUM scheme, in which the step from in-control to out-of-control outcome probabilities was constrained to be no less than 0.05, was proposed; the ARLs of this "minimum effect" CUSUM scheme were found to be stable. The earlier assessment of the risk-adjusted CUSUM scheme assumed that the predicted probability of a patient's mortality is known. A study of its performance where the estimates of the expected probability of patient mortality were uncertain showed that uncertainty at the patient level did not affect the performance of the CUSUM schemes, provided the risk score was well calibrated; uncertainty in the calibration of the risk model, however, appeared to cause considerable variation in the ARL performance measures. The ARLs of the risk-adjusted CUSUM schemes were approximated using simulation, because the approximation method using the Markov chain property of CUSUMs, as proposed by Steiner et al. (2000), gave unstable results.

The cause of the instability was the method of computing the Markov chain transition probabilities, in which the probability mass is concentrated at the midpoint of each Markov state. If the probability was instead assumed to be uniformly distributed over each Markov state, the ARLs were stabilized, provided that the scores for the patients' risk of adverse outcomes were discrete and finite.
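A sketch of that stabilization: instead of placing the whole transition probability at the state midpoint, each row of the transition matrix spreads the chart value uniformly over the state's interval before adding each discrete CUSUM increment. The names and arguments below are illustrative; rows built this way feed the usual (I − Q) linear system for the ARLs, as in the grouped-observation sketch above.

```python
import numpy as np

def transition_row_uniform(i, scores, probs, w, m):
    """Row i of a CUSUM Markov transition matrix, assuming the chart
    value is uniform over state i's interval [i*w, (i+1)*w). scores and
    probs give the discrete, finite distribution of the CUSUM increment;
    mass landing at or above m*w (a signal) is omitted from the row."""
    row = np.zeros(m)
    for s, p in zip(scores, probs):
        u, v = i * w + s, (i + 1) * w + s  # interval after the increment
        if v <= 0.0:                       # all mass reflects to zero
            row[0] += p
            continue
        if u < 0.0:                        # sub-zero mass piles up at 0,
            row[0] += p * (-u) / w         # which lies in state 0
            u = 0.0
        j = int(u // w)
        while j < m and j * w < v:         # spread mass by interval overlap
            overlap = min(v, (j + 1) * w) - max(u, j * w)
            row[j] += p * overlap / w
            j += 1
    return row
```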
