1. Cumulative Sum Control Charts for Censored Reliability Data

Olteanu, Denisa Anca, 28 April 2010
Companies routinely perform life tests for their products. Typically, these tests involve running a set of products until the units fail. Most often, the data are censored according to different censoring schemes, depending on the particulars of the test. On occasion, tests are stopped at a predetermined time and the units that have yet to fail are suspended. In other instances, the data are collected through periodic inspection and only upper and lower bounds on the lifetimes are recorded. Reliability professionals use a number of non-normal distributions to model the resulting lifetime data, with the Weibull distribution being the most frequently used. Monitoring the quality and reliability characteristics of such processes requires accounting for the challenges imposed by the nature of the data. We propose likelihood-ratio-based cumulative sum (CUSUM) control charts for censored lifetime data with non-normal distributions. We illustrate the development and implementation of the charts, and we evaluate their properties through simulation studies. We address the problem of interval censoring, and we construct a CUSUM chart for censored ordered categorical data, which we illustrate with a case study at Becton Dickinson (BD). We also address the problem of monitoring both parameters of the Weibull distribution for processes with right-censored data. / Ph. D.
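Below is a minimal sketch of the kind of likelihood-ratio CUSUM the abstract describes, for the right-censored Weibull case; it is not the dissertation's implementation. The function names, the assumed shift in characteristic life from 1000 to 700 hours, the fixed shape parameter, and the decision interval h are all illustrative choices. Failed units contribute the density ratio and suspended units contribute the survival-function ratio to each CUSUM increment.

```python
import numpy as np

def weibull_logpdf(t, shape, scale):
    """Log density of a Weibull(shape, scale) lifetime."""
    z = t / scale
    return np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape

def weibull_logsf(t, shape, scale):
    """Log survival function P(T > t), used for suspended (censored) units."""
    return -(t / scale) ** shape

def censored_weibull_cusum(times, censored, shape, scale0, scale1, h):
    """Upper CUSUM of log likelihood ratios for right-censored Weibull data.

    times    : failure or suspension times
    censored : True where the unit was suspended before failing
    scale0   : in-control characteristic life; scale1 : shifted value to detect
    h        : decision interval; returns the index of the first signal or None
    """
    c = 0.0
    for i, (t, cens) in enumerate(zip(times, censored)):
        if cens:  # only a lower bound on the lifetime is observed
            llr = weibull_logsf(t, shape, scale1) - weibull_logsf(t, shape, scale0)
        else:
            llr = weibull_logpdf(t, shape, scale1) - weibull_logpdf(t, shape, scale0)
        c = max(0.0, c + llr)
        if c > h:
            return i
    return None

# Hypothetical example: detect a drop in characteristic life from 1000 to
# 700 hours when the test suspends all unfailed units at 800 hours (Type I).
rng = np.random.default_rng(1)
life = rng.weibull(2.0, size=200) * 700.0        # process already degraded
times = np.minimum(life, 800.0)
censored = life > 800.0
print(censored_weibull_cusum(times, censored, shape=2.0,
                             scale0=1000.0, scale1=700.0, h=5.0))
```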
2. Statistical Monitoring of Queuing Networks

Kaya, Yaren Bilge, 26 October 2018
Queuing systems are important parts of our daily lives, and to keep their operations at an efficient level they need to be monitored using queuing performance metrics such as average queue lengths and average waiting times. Queue lengths and waiting times, however, are generally random variables, and their distributions depend on properties such as arrival rates, service times, and the number of servers. In this report we focus on detecting changes in service rates: we monitor queues using cumulative sum (CUSUM) charts based on likelihood ratios and compare the average run length (ARL) values under different service rates.
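As a hedged sketch of the comparison the report describes, the snippet below runs a likelihood-ratio CUSUM on individual exponential service times and estimates average run lengths by simulation. The exponential service-time model, the rate values, the decision interval h, and the function names are assumptions made for illustration; the simulation loop is plain Python, so the in-control ARL estimate can take a little while to run.

```python
import numpy as np

def cusum_run_length(mu_true, mu0, mu1, h, rng, max_n=50_000):
    """Steps until a likelihood-ratio CUSUM on exponential service times signals.

    mu0 : in-control service rate; mu1 : slower rate to detect (mu1 < mu0).
    """
    c = 0.0
    for n in range(1, max_n + 1):
        t = rng.exponential(1.0 / mu_true)          # one observed service time
        llr = np.log(mu1 / mu0) - (mu1 - mu0) * t   # log f1(t) - log f0(t)
        c = max(0.0, c + llr)
        if c > h:
            return n
    return max_n

def average_run_length(mu_true, mu0, mu1, h, reps=500, seed=0):
    """Monte Carlo estimate of the ARL for a given true service rate."""
    rng = np.random.default_rng(seed)
    return np.mean([cusum_run_length(mu_true, mu0, mu1, h, rng)
                    for _ in range(reps)])

# In-control ARL (time between false alarms) vs. out-of-control ARL (delay)
print("ARL0:", average_run_length(mu_true=1.0, mu0=1.0, mu1=0.7, h=3.0))
print("ARL1:", average_run_length(mu_true=0.7, mu0=1.0, mu1=0.7, h=3.0))
```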
3. Statistical Monitoring and Control of Locally Proactive Routing Protocols in MANETs

January 2012
Mobile ad hoc networks (MANETs) have attracted attention for mission-critical applications. This dissertation investigates techniques of statistical monitoring and control for overhead reduction in a proactive MANET routing protocol. Proactive protocols transmit overhead periodically; instead, we propose that the local conditions of a node should determine this transmission decision. While the goal is to minimize overhead, a balance between the amount of overhead transmitted and the performance achieved is required. Statistical monitoring consists of techniques to determine if a characteristic has shifted away from an in-control state. A basic tool for monitoring is a control chart, a time-oriented representation of the characteristic. When a sample deviates outside the control limits, a significant change has occurred and corrective actions are required to return to the in-control state. We investigate the use of statistical monitoring of local conditions in the Optimized Link State Routing (OLSR) protocol. Three versions are developed. In A-OLSR, each node uses a Shewhart chart to monitor the betweenness of its two-hop neighbourhood. Betweenness is a social network metric that measures a node's influence; betweenness is larger when a node has more influence. Changes in topology are associated with changes in betweenness. In A+-OLSR, we incorporate additional local node conditions, including speed, density, packet arrival rate, and the number of flows a node forwards. Response Surface Methodology (RSM) is used to optimize timer values. As well, the Shewhart chart is replaced by an Exponentially Weighted Moving Average (EWMA) chart, which is more sensitive to small changes in the characteristic. It is known that control charts do not work as well in the presence of correlation. Hence, in A*-OLSR the autocorrelation in the time series is removed and an Auto-Regressive Integrated Moving Average (ARIMA) model is fitted; this removes the dependence on node speed. A*-OLSR also extends monitoring to two characteristics concurrently using multivariate cumulative sum (MCUSUM) charts. The protocols are evaluated in simulation and compared to OLSR and its variants. The techniques for statistical monitoring and control are general and have great potential to be applied to the adaptive control of many network protocols. / Dissertation/Thesis / Ph.D. Computer Science 2012
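For readers unfamiliar with the EWMA chart that A+-OLSR adopts, here is a generic sketch of one monitoring a sampled node characteristic such as betweenness. It is not the protocol code: the baseline-estimation window, the smoothing constant lam, the limit width L, and the simulated betweenness values are all illustrative assumptions.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, baseline=20):
    """EWMA chart with in-control mean/sd estimated from the first samples.

    Uses the exact time-varying EWMA variance for the control limits.
    Returns the EWMA statistic and the indices of out-of-control signals.
    """
    x = np.asarray(x, dtype=float)
    mu0 = x[:baseline].mean()
    sigma0 = x[:baseline].std(ddof=1)
    z, signals = np.empty_like(x), []
    z_prev = mu0
    for i, xi in enumerate(x):
        z_prev = lam * xi + (1 - lam) * z_prev   # exponentially weighted average
        z[i] = z_prev
        var = sigma0**2 * (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * (i + 1)))
        if abs(z_prev - mu0) > L * np.sqrt(var):
            signals.append(i)
    return z, signals

# Hypothetical betweenness series: a topology change at sample 60 raises
# the node's influence, which the chart should flag shortly afterwards.
rng = np.random.default_rng(7)
betweenness = np.concatenate([rng.normal(0.30, 0.05, 60),
                              rng.normal(0.38, 0.05, 40)])
_, alarms = ewma_chart(betweenness)
print("first signal at sample:", alarms[0] if alarms else None)
```

Because the EWMA statistic pools information across samples, a small sustained shift of the kind a gradual topology change produces is flagged sooner than a single-sample Shewhart rule would flag it, which is the sensitivity advantage the abstract cites.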
4. Evaluation of Scan Methods Used in the Monitoring of Public Health Surveillance Data

Fraker, Shannon E., 07 December 2007
With the recent increase in the threat of biological terrorism, as well as the continual risk of other diseases, research in public health surveillance and disease monitoring has grown tremendously. There is an abundance of data available in all sorts of forms. Hospitals, federal and local governments, and industries are all collecting data and developing new methods to be used in the detection of anomalies. Many of these methods are developed, applied to a real data set, and incorporated into software. This research, however, takes a different view of the evaluation of these methods. We feel that there needs to be solid statistical evaluation of proposed methods regardless of the intended area of application. Proof-by-example is not reasonable as the sole evaluation criterion, especially for methods that have the potential to greatly impact our lives. For this reason, this research focuses on determining the properties of some of the most common anomaly detection methods. A distinction is made between metrics used for retrospective historical monitoring and those used for prospective on-going monitoring, with the focus on the latter situation. Metrics such as the recurrence interval and time-to-signal measures are therefore the most applicable. These metrics, in conjunction with control charts such as exponentially weighted moving average (EWMA) charts and cumulative sum (CUSUM) charts, are examined. Two new time-to-signal measures, the average time between signal events and the average signal event length, are introduced to better compare the recurrence interval with the time-to-signal properties of surveillance schemes. The relationship commonly thought to exist between the recurrence interval and the average time to signal is shown not to hold once autocorrelation is present in the statistics used for monitoring. This means that closer consideration needs to be paid to the selection of which of these metrics to report. The properties of a commonly applied scan method are also studied carefully in the strictly temporal setting. The counts of incidences are assumed to occur independently over time and to follow a Poisson distribution. Simulations are used to evaluate the method under changes in various parameters. In addition, two methods have been proposed in the literature for the calculation of the p-value: an adjustment based on the tests for previous time periods, and the use of the recurrence interval with no adjustment for previous tests. The difference between these two methods is also considered. The quickness of the scan method in detecting an increase in the incidence rate, the number of false alarm events that occur, and how long the method continues to signal after the increased threat has passed are all of interest. These estimates from the scan method are compared to other attribute monitoring methods, mainly the Poisson CUSUM chart. It is shown that the Poisson CUSUM chart is typically faster in the detection of the increased incidence rate. / Ph. D.
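To make the benchmark concrete, here is a minimal sketch of a Poisson CUSUM of the kind used for comparison above, applied to daily incidence counts. The baseline and outbreak rates, the decision interval h, and the function name are illustrative assumptions; in practice h would be tuned to a target in-control property such as a desired recurrence interval.

```python
import numpy as np

def poisson_cusum(counts, lam0, lam1, h):
    """Upper Poisson CUSUM for detecting a rate increase from lam0 to lam1.

    The reference value k is the usual likelihood-ratio compromise between
    the in-control and shifted rates. Returns the first signal index or None.
    """
    k = (lam1 - lam0) / (np.log(lam1) - np.log(lam0))
    s = 0.0
    for i, x in enumerate(counts):
        s = max(0.0, s + x - k)   # accumulate evidence above the reference value
        if s >= h:
            return i
    return None

# Hypothetical daily counts: baseline rate 5, rising to 8 on day 50.
rng = np.random.default_rng(3)
counts = np.concatenate([rng.poisson(5.0, 50), rng.poisson(8.0, 30)])
print("signal on day:", poisson_cusum(counts, lam0=5.0, lam1=8.0, h=4.0))
```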
5. Surveillance of Negative Binomial and Bernoulli Processes

Szarka, John Louis III, 03 May 2011
Discrete processes are evaluated in both industrial and healthcare settings. Count data may be used to measure the number of defective items in industrial applications or the incidence of a certain disease at a health facility. Another class of discrete random variable arises from binary data, where an item can be classified as conforming or nonconforming in a manufacturing context, or a patient can be classified by disease status in health-related applications. The first phase of this research uses discrete count data modeled with the Poisson and negative binomial distributions in a healthcare setting. Syndromic counts are currently monitored by the BioSense program within the Centers for Disease Control and Prevention (CDC) to provide real-time biosurveillance. The Early Aberration Reporting System (EARS) compares recent baseline information with a current day's syndromic count to determine if outbreaks may be present. An adaptive threshold method is proposed based on fitting baseline data to a parametric distribution and then calculating an upper-tailed p-value. These statistics are then converted to an approximately standard normal random variable. Monitoring is examined for independent and identically distributed data as well as for data following several seasonal patterns. An exponentially weighted moving average (EWMA) chart is also used with these methods. The effectiveness of these methods in detecting simulated outbreaks is evaluated in several sensitivity analyses. The second phase of research explored in this dissertation considers information that can be classified as a binary event. In industry, it is desirable for the probability of a nonconforming item, p, to be extremely small. Traditional Shewhart charts, such as the p-chart, are not reliable for monitoring this type of process. A comprehensive literature review of control chart procedures for this type of process is given. The equivalence between two cumulative sum (CUSUM) charts, based on geometric and Bernoulli random variables, is explored. An evaluation of the unit and group-runs (UGR) chart is performed; it is shown that the in-control behavior of this chart is quite misleading, and the chart should not be recommended to practitioners. / Ph. D.
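As a rough illustration of the Bernoulli CUSUM discussed above, the sketch below monitors a stream of conforming/nonconforming items for an increase in a small nonconforming probability. The probabilities, the decision interval h, and the function name are assumptions made for the example; the dissertation explores the equivalence of this chart to a geometric CUSUM built from the counts of items between nonconforming units.

```python
import numpy as np

def bernoulli_cusum(outcomes, p0, p1, h):
    """Upper Bernoulli CUSUM for an increase in the nonconforming
    probability from p0 to p1 (p1 > p0).

    outcomes : iterable of 0/1 item results; returns first signal index or None.
    """
    up = np.log(p1 / p0)                # increment for a nonconforming item
    down = np.log((1 - p1) / (1 - p0))  # small negative increment otherwise
    c = 0.0
    for i, x in enumerate(outcomes):
        c = max(0.0, c + (up if x else down))
        if c > h:
            return i
    return None

# Hypothetical stream: p shifts from 0.001 to 0.003 after 5000 items.
rng = np.random.default_rng(11)
items = np.concatenate([rng.random(5000) < 0.001,
                        rng.random(20000) < 0.003]).astype(int)
print("signal at item:", bernoulli_cusum(items, p0=0.001, p1=0.003, h=3.0))
```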
