171 |
Univariate and Multivariate Surveillance Methods for Detecting Increases in Incidence Rates Joner, Michael D. Jr. 02 May 2007 (has links)
It is often important to detect an increase in the frequency of some event. Particular attention is given to medical events such as mortality or the incidence of a given disease, infection or birth defect. Observations are regularly taken in which either an incidence occurs or one does not. This dissertation contains the result of an investigation of prospective monitoring techniques in two distinct surveillance situations. In the first situation, the observations are assumed to be the results of independent Bernoulli trials. Some have suggested adapting the scan statistic to monitor such rates and detect a rate increase as soon as possible after it occurs. Other methods could be used in prospective surveillance, such as the Bernoulli cumulative sum (CUSUM) technique. Issues involved in selecting parameters for the scan statistic and CUSUM methods are discussed, and a method for computing the expected number of observations needed for the scan statistic method to signal a rate increase is given. A comparison of these methods shows that the Bernoulli CUSUM method tends to be more effective in detecting increases in the rate. In the second situation, the incidence information is available at multiple locations. In this case the individual sites often report a count of incidences on a regularly scheduled basis. It is assumed that the counts are Poisson random variables which are independent over time, but the counts at any given time are possibly correlated between regions. Multivariate techniques have been suggested for this situation, but many of these approaches have shortcomings which have been demonstrated in the quality control literature. In an attempt to remedy some of these shortcomings, a new control chart is recommended based on a multivariate exponentially weighted moving average. The average run-length performance of this chart is compared with that of the existing methods. / Ph. D.
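The Bernoulli CUSUM the dissertation favors accumulates a log-likelihood ratio over 0/1 observations and signals when the statistic crosses a threshold. A minimal sketch follows; the rates, threshold, and simulated data are illustrative assumptions, not values from the dissertation.

```python
import math
import random

def bernoulli_cusum(observations, p0, p1, h):
    """Run a Bernoulli CUSUM; return the index of the first signal, or None.

    Each observation is 0 or 1. The statistic accumulates the
    log-likelihood ratio of the out-of-control rate p1 versus the
    in-control rate p0, resets at zero, and signals at threshold h.
    """
    s = 0.0
    llr1 = math.log(p1 / p0)              # increment for an incidence
    llr0 = math.log((1 - p1) / (1 - p0))  # increment for a non-incidence
    for t, x in enumerate(observations, start=1):
        s = max(0.0, s + (llr1 if x else llr0))
        if s >= h:
            return t
    return None

random.seed(1)
# In-control rate 0.01; chart tuned to detect a doubling to 0.02.
pre = [1 if random.random() < 0.01 else 0 for _ in range(500)]
post = [1 if random.random() < 0.05 else 0 for _ in range(2000)]
signal = bernoulli_cusum(pre + post, p0=0.01, p1=0.02, h=3.0)
print(signal)
```

The expected number of observations to signal (the run length the dissertation compares across methods) is estimated by repeating such runs many times and averaging the signal times.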
|
172 |
Statistical quality assurance of IGUM : Statistical quality assurance and validation of IGUM in a steady and dynamic gas flow prior to proof of concept Kornsäter, Elin, Kallenberg, Dagmar January 2022 (has links)
To further support and optimise the production of diving tables for the Swedish Armed Forces, a research team has developed a new machine called IGUM (Inert Gas UndersökningsMaskin), which aims to measure how inert gas is taken up and exhaled. Because the machine is of a new design, the goal of this thesis was to statistically validate its accuracy and verify its reliability. In the first stage, a quality assurance of IGUM's linear-position conversion key was conducted in a steady, known gas flow. Data from 29 experiments were collected and analysed using ordinary least squares, hypothesis testing, analysis of variance, bootstrapping, and Bayesian hierarchical modelling. Autocorrelation among the residuals was detected but, based on the bootstrap analysis, was concluded not to affect the results. The results showed an estimated conversion key of 1.276 ml per linear-position unit, which was statistically significant in all 29 experiments. In the second stage, it was examined whether, and how well, IGUM could detect small additions of gas in a dynamic flow. The breathing machine ANSTI was used to simulate the sinusoidal pattern of a breathing human in 24 experiments, in which three additions of 30 ml of gas were manually added into the system. The results were analysed through sinusoidal regression, with three dummy variables representing the three gas additions in each experiment. To examine whether IGUM detects 30 ml for each input, the conversion key of 1.276 ml per linear-position unit established in the first stage was used. An attempt to remove the seasonal trend in the data was not completely successful, which could influence the estimates. The results showed that IGUM can indeed detect these small gas additions, although the amount detected varied somewhat between dummies and experiments; this variation is most likely due to residual trend rather than to IGUM not working properly.
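The second-stage analysis can be sketched as a sinusoidal regression with step dummies fitted by ordinary least squares. Everything below (breathing frequency, noise level, onset times, and the simulated signal) is an illustrative assumption; only the 1.276 ml conversion key is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(600) / 100.0        # time axis in seconds (assumed)
omega = 2 * np.pi * 0.25          # assumed breathing frequency, 0.25 Hz

# Step dummies: each gas addition shifts the baseline from its onset.
d1 = (t >= 2.0).astype(float)
d2 = (t >= 4.0).astype(float)
# Simulated linear-position signal: sinusoid + two step shifts + noise.
y = 50 * np.sin(omega * t) + 23.5 * d1 + 23.5 * d2 + rng.normal(0, 2, t.size)

# OLS design: intercept, sine/cosine pair, and the dummy steps.
X = np.column_stack([np.ones_like(t), np.sin(omega * t), np.cos(omega * t), d1, d2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

conversion_key = 1.276            # ml per linear-position unit (stage one)
print([round(b * conversion_key, 1) for b in beta[3:]])  # estimated additions, ml
```

The dummy coefficients, converted through the key, estimate the volume of each addition; residual seasonal trend would bias exactly these coefficients, which matches the caveat in the abstract.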
|
173 |
Innovative Forced Response Analysis Method Applied to a Transonic Compressor Hutton, Timothy M. January 2003 (has links)
No description available.
|
174 |
Optimal Progressive Type-II Censoring Schemes for Non-Parametric Confidence Intervals of Quantiles Han, Donghoon 09 1900 (has links)
<p> In this work, optimal censoring schemes are investigated for the non-parametric confidence intervals of population quantiles under progressive Type-II right censoring. The proposed inference can be universally applied to any probability distribution for continuous random variables. Because the interval mass is used as the optimality criterion, the optimization process is also independent of the actual observed values from a sample, as long as the initial sample size n and the number of observations m are predetermined. This study is based on the fact that each (uncensored) order statistic observed under progressive Type-II censoring can be represented as a mixture of underlying ordinary order statistics with exactly known weights [11, 12]. Using several sample sizes combined with various degrees of censoring, the results of the optimization are tabulated here for a wide range of quantiles at selected levels of significance (i.e., α = 0.01, 0.05, 0.10). Under the optimality criterion considered, the efficiencies of the worst progressive Type-II censoring scheme and the ordinary Type-II censoring scheme are also examined in comparison with the best censoring scheme obtained for a given quantile with fixed n and m.</p> / Thesis / Master of Science (MSc)
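For intuition, the complete-sample (uncensored) version of such an interval can be computed from the binomial distribution of order-statistic coverage; the progressive Type-II case additionally requires the mixture weights from [11, 12], which this sketch omits. The sample size, quantile, and significance level below are illustrative.

```python
from math import comb

def quantile_ci_orders(n, p, alpha):
    """Find 1-based order-statistic indices (i, j) such that
    P(X_(i) <= x_p <= X_(j)) >= 1 - alpha for a complete sample of size n,
    minimising the number of order statistics spanned (j - i).

    For continuous data, P(X_(i) <= x_p <= X_(j)) equals
    sum_{k=i}^{j-1} C(n, k) p^k (1 - p)^(n - k).
    """
    pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    best = None
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            cover = sum(pmf[i:j])
            if cover >= 1 - alpha and (best is None or j - i < best[0]):
                best = (j - i, i, j, cover)
    return best

span, i, j, cover = quantile_ci_orders(n=20, p=0.5, alpha=0.05)
print(i, j, round(cover, 4))
```

With n = 20 and the median, this recovers the classical interval (X_(6), X_(15)) with coverage just under 0.96; the thesis's optimization plays the same game over the mixture representation of progressively censored order statistics.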
|
175 |
Anomaly Detection for Water Quality Data YAN, YAN January 2019 (has links)
Real-time water quality monitoring using automated systems with sensors is becoming increasingly common, which both enables and demands timely identification of unexpected values. Technical issues create anomalies, and at the rate the data arrive, manual detection of problematic values is impractical.
This thesis deals with the problem of anomaly detection for water quality data using machine learning and statistical learning approaches. Anomalies in data can cause serious problems in later analysis and lead to poor decisions or incorrect conclusions. Five time series anomaly detection techniques have been analyzed: local outlier factor (machine learning), isolation forest (machine learning), robust random cut forest (machine learning), seasonal hybrid extreme studentized deviate (statistical learning), and exponential moving average (statistical learning). Extensive experimental analysis of these techniques has been performed on data sets collected from sensors deployed in a wastewater treatment plant.
The results are very promising. In the experiments, three approaches successfully detected anomalies in the ammonia data set. With the temperature data set, the local outlier factor successfully detected all twenty-six outliers, whereas the seasonal hybrid extreme studentized deviate detected only one anomaly point. The exponential moving average identified ten time ranges with anomalies, eight of which cover a total of fourteen anomalies. The reproducible experiments demonstrate that the local outlier factor is a feasible approach for detecting anomalies in water quality data. Isolation forest and robust random cut forest also assign high anomaly scores to the anomalies. The timing experiment confirms that the local outlier factor is much faster than isolation forest, robust random cut forest, seasonal hybrid extreme studentized deviate, and exponential moving average. / Thesis / Master of Computer Science (MCS)
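Two of the machine learning detectors evaluated in the thesis, local outlier factor and isolation forest, are available in scikit-learn. A minimal sketch on simulated sensor readings follows; the values, units, contamination rate, and spike positions are illustrative assumptions, not the thesis's data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
# Simulated hourly ammonia readings (mg/L, assumed) with 3 injected spikes.
readings = rng.normal(25.0, 1.0, 500)
readings[[100, 250, 400]] = [40.0, 5.0, 38.0]
X = readings.reshape(-1, 1)

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.01)
lof_labels = lof.fit_predict(X)          # -1 marks an outlier

iso = IsolationForest(contamination=0.01, random_state=0)
iso_labels = iso.fit_predict(X)          # -1 marks an outlier

print(sorted(np.where(lof_labels == -1)[0]))
print(sorted(np.where(iso_labels == -1)[0]))
```

Both detectors flag the injected spikes; the `contamination` parameter sets the expected outlier fraction and plays the role of the alarm threshold tuned in the experiments.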
|
176 |
Prospective Spatio-Temporal Surveillance Methods for the Detection of Disease Clusters Marshall, J. Brooke 11 December 2009 (has links)
In epidemiology it is often useful to monitor disease occurrences prospectively to determine the location and time when clusters of disease are forming. This aids in the prevention of illness and injury of the public and is the reason spatio-temporal disease surveillance methods are implemented. Care must be taken in the design and implementation of these types of surveillance methods so that the methods provide accurate information on the development of clusters. Here two spatio-temporal methods for prospective disease surveillance are considered. These include the local Knox monitoring method and a new wavelet-based prospective monitoring method.
The local Knox surveillance method uses a cumulative sum (CUSUM) control chart for monitoring the local Knox statistic, which tests for space-time clustering each time there is an incoming observation. The detection of clusters of events occurring close together both temporally and spatially is important in finding outbreaks of disease within a specified geographic region. The local Knox surveillance method is based on the Knox statistic, which is often used in epidemiology to test for space-time clustering retrospectively. In this method, a local Knox statistic is developed for use with the CUSUM chart for prospective monitoring so that epidemics can be detected more quickly. The design of the CUSUM chart used in this method is considered by determining the in-control average run length (ARL) performance for different space and time closeness thresholds as well as for different control limit values. The effect of nonuniform population density and region shape on the in-control ARL is explained and some issues that should be considered when implementing this method are also discussed.
In the wavelet-based prospective monitoring method, a surface of incidence counts is modeled over time in the geographical region of interest. This surface is modeled using Poisson regression, where the regressors are wavelet functions from the Haar wavelet basis. The surface is estimated each time new incidence data are obtained, using both past and current observations and weighing current observations more heavily. The flexibility of this method allows for the detection of changes in the incidence surface, increases in the overall mean incidence count, and clusters of disease occurrences within individual areas of the region, through the use of control charts. This method is also able to incorporate information on population size and other covariates as they change in the geographical region over time. The control charts developed for use in this method are evaluated based on their in-control and out-of-control ARL performance, and recommendations on the most appropriate control chart to use for different monitoring scenarios are provided. / Ph. D.
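The Knox statistic underlying the local Knox method counts case pairs that are close in both space and time. The sketch below shows the retrospective batch version with a time-permutation reference distribution; the dissertation's method instead monitors a local version prospectively with a CUSUM chart. Thresholds and simulated cases are illustrative assumptions.

```python
import math
import random

def knox_statistic(cases, s_max, t_max):
    """Knox statistic: number of case pairs that are close in both
    space (Euclidean distance <= s_max) and time (|dt| <= t_max)."""
    count = 0
    for a in range(len(cases)):
        xa, ya, ta = cases[a]
        for b in range(a + 1, len(cases)):
            xb, yb, tb = cases[b]
            if math.hypot(xa - xb, ya - yb) <= s_max and abs(ta - tb) <= t_max:
                count += 1
    return count

random.seed(7)
# Background cases scattered in space and time, plus one tight cluster.
cases = [(random.uniform(0, 100), random.uniform(0, 100), random.uniform(0, 365))
         for _ in range(40)]
cases += [(50 + random.uniform(-2, 2), 50 + random.uniform(-2, 2),
           100 + random.uniform(-3, 3)) for _ in range(8)]

observed = knox_statistic(cases, s_max=5.0, t_max=7.0)

# Permutation reference: shuffle times to break any space-time link.
times = [c[2] for c in cases]
exceed = 0
for _ in range(999):
    random.shuffle(times)
    permuted = [(x, y, t) for (x, y, _), t in zip(cases, times)]
    if knox_statistic(permuted, 5.0, 7.0) >= observed:
        exceed += 1
p_value = (exceed + 1) / 1000
print(observed, p_value)
```

The choice of the space and time closeness thresholds (`s_max`, `t_max`) is exactly the design issue the dissertation examines for the CUSUM version's in-control ARL.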
|
177 |
Analysis of intraspecific and interspecific interactions between the invasive exotic tree-of-heaven (Ailanthus altissima (Miller) Swingle) and the native black locust (Robinia pseudoacacia L.) Call, Lara J. 28 May 2002 (has links)
Invasive exotic plants can persist and successfully spread within ecosystems and negatively affect the recruitment of native species. The exotic invasive Ailanthus altissima and the native Robinia pseudoacacia are frequently found in disturbed sites and exhibit similar growth and reproductive characteristics, yet each has distinct functional roles such as allelopathy and nitrogen fixation, respectively. 1) A four-month full additive series in the greenhouse and 2) spatial point pattern analysis of trees in a silvicultural experiment were used to analyze the intraspecific and interspecific interference between these two species. In the greenhouse experiment, total biomass responses per plant for both species were significantly affected by interspecific but not by intraspecific interference (p < 0.05). Competition indices such as Relative Yield Total and Relative Crowding Coefficient suggested that A. altissima was the better competitor in mixed plantings. Ailanthus altissima consistently produced a larger above ground and below ground relative yield while R. pseudoacacia generated a larger aboveground relative yield in high density mixed species pots. However, R. pseudoacacia exhibited more variation for multiple biomass traits, occasionally giving it an above ground advantage in some mixed species pots. Analysis of spatial point patterns in the field with Ripley's K indicated that the two species were positively associated with each other along highly disturbed skid trails in the majority of the field sites. Locally, increased disturbances could lead to more opportunities for A. altissima to invade, negatively interact with R. pseudoacacia (as was evident in the greenhouse study), and become established in place of native species. / Master of Science
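The competition indices mentioned can be sketched from mixture and monoculture yields. The biomass numbers below are invented for illustration, and the formulas are one common replacement-series form of relative yield and crowding coefficients; exact definitions vary between studies and may differ in detail from the thesis's usage.

```python
def relative_yields(ya_mix, ya_mono, yb_mix, yb_mono):
    """Relative yield of each species: per-plant yield in mixture
    divided by per-plant yield in monoculture."""
    return ya_mix / ya_mono, yb_mix / yb_mono

# Invented per-plant biomass (g): species A suppressed less in mixture.
ry_a, ry_b = relative_yields(8.0, 10.0, 3.0, 9.0)

ryt = ry_a + ry_b            # Relative Yield Total: > 1 suggests partial
                             # niche separation; = 1, shared resources
k_a = ry_a / (1 - ry_a)      # crowding coefficient of species A
k_b = ry_b / (1 - ry_b)      # crowding coefficient of species B
print(round(ryt, 3), k_a > k_b)   # A is the stronger competitor if k_a > k_b
```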
|
178 |
A Study of the Relationships between Organizational and Environmental Variables and the Perceived Usefulness of Management Accounting System Information Characteristics 賴淑哲, LAI, SHU-ZHE Unknown Date (has links)
The design of a management accounting system must take into account the organization and environment in which the system operates, adopting an open-systems view. Traditional management accounting texts, however, rooted in classical organization theory, take a closed-system perspective and pursue a single, universally applicable "best" design, hoping to build one optimal accounting system suitable for every organization.
With the rise of contingency theory, accounting scholars have gradually come to understand that a management accounting system is effective only when it fits the organization and its environment. This thesis adopts that view and uses a questionnaire survey to study the relationships between organizational and environmental variables and the perceived usefulness of management accounting system information characteristics.
The study has three independent variables: the degree of organizational decentralization, the uncertainty of the external environment, and the degree of organizational interdependence. The dependent variable is the perceived usefulness of information characteristics. Random sampling was used to select 250 of the 500 largest enterprises as subjects; of the mailed questionnaires, 113 were returned, of which 11 were answered perfunctorily and excluded from the analysis. The results are presented in Chapter 4.
The thesis concludes with the research limitations and a discussion of the findings, for the reference of future researchers and practicing management accountants.
|
179 |
The Relationships between Role Stress, Job Satisfaction, and Organizational Commitment among Accounting Personnel in Taiwan 林世昌, LIN, SHI-CHANG Unknown Date (has links)
This thesis investigates the current state of role stress among accounting personnel in Taiwan and examines how role stress relates to various other variables, so that management can take corresponding measures to reduce or eliminate the adverse effects of role stress and better achieve managerial goals.
The thesis runs to more than fifty thousand words in one volume, organized into five chapters and twenty sections.
Chapter 1 is the introduction, describing the research motivation, objectives, hypotheses, and the structure of the thesis.
Chapter 2 reviews the literature, explaining the concepts of role and role stress, introducing role models, and discussing the relationships between role stress and its antecedent and consequent variables.
Chapter 3 describes the research method: the selection of the sample, the types of scales used, and the dimensions, reliability, and validity tests of each scale.
Chapter 4 presents the results, applying analysis of variance, t tests, regression, correlation, and canonical correlation to determine the relationships among the variables.
Chapter 5 gives the conclusions and recommendations, summarizing the results of the preceding chapters and offering suggestions for reference.
|
180 |
Statistical Methods for Testing Between-Group Differences on Attitude Scales 林昱君 Unknown Date (has links)
When researchers want to know whether attitude scores on an attitude scale differ between groups, a common analysis is analysis of variance (ANOVA). ANOVA, however, rests on the assumption that the data follow a normal distribution, and Likert-type attitude data clearly violate this assumption. The Mantel-Haenszel statistic, which was not derived for continuous data, should be better suited to testing ordinal or interval-scale, non-normal data. The main purpose of this study is to compare the test results obtained with the Mantel-Haenszel statistic against those from ANOVA. Whereas previous studies assumed a continuous latent variable underlying the attitude scale, this study starts directly from a discrete distribution. In the derivation, we find a one-to-one correspondence between the Mantel-Haenszel statistic and the F statistic of ANOVA. Although the two statistics are approximated by different distributions, their corresponding p values are consistently very close; at the 0.05 significance level, the two tests give almost identical results. For testing whether groups differ across multiple items, we compare three approaches: summing the scores of items belonging to the same topic and testing with univariate ANOVA, multivariate analysis of variance (MANOVA), and logistic regression. According to our simulations, when the groups' attitudes are similar across items, ANOVA yields a lower Type I error rate; when the groups' attitudes differ across items, with left- and right-skewed distributions cancelling each other out, MANOVA or logistic regression is needed to maintain high power. / In social science literature, we frequently found that ANOVA techniques were utilized to analyze Likert-type response data. However, one of the three basic assumptions behind ANOVA is that the response variable is normally distributed, and Likert-type data apparently do not share this property. In this study, we compare the performance of the F statistic associated with ANOVA with that of the Mantel-Haenszel statistic, a statistic aimed at handling categorical data. We found that the F statistic and the Mantel-Haenszel statistic have a one-to-one relationship. Although these two statistics can be approximated by the F distribution and the chi-square distribution respectively, their p values are quite close to each other. At the significance level of 0.05, the F and Mantel-Haenszel statistics almost always give the same testing results. In addition to analyzing a single Likert-type response question, we would also like to analyze a set of Likert-type response questions that probably represent a specific concept. We propose two alternatives here. The first one is MANOVA, and the second one is logistic regression analysis. According to the simulation results, using the ANOVA approach is slightly better in terms of the type I error rate if the responses have similar structures among questions. On the other hand, using MANOVA or logistic regression analysis would maintain higher power whenever the responses have different structures among questions.
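Assuming the Mantel-Haenszel statistic here is the Mantel (1963) trend form M² = (N − 1)r², its one-to-one relationship with the ANOVA F statistic can be checked numerically for a two-group comparison, where F = (N − 2)r²/(1 − r²) holds exactly. The Likert data below are invented for illustration.

```python
import numpy as np
from scipy import stats

# Likert responses (1-5) for two groups; illustrative data only.
a = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
b = np.array([2, 3, 3, 4, 4, 4, 5, 5, 5, 5])
y = np.concatenate([a, b])
x = np.array([0] * len(a) + [1] * len(b))   # group code
n = len(y)

F, p_f = stats.f_oneway(a, b)

# Mantel (1963) trend statistic: M^2 = (n - 1) r^2, with r the Pearson
# correlation between group code and response; approximately chi-square(1).
r = np.corrcoef(x, y)[0, 1]
M2 = (n - 1) * r**2
p_m = stats.chi2.sf(M2, df=1)

# One-to-one correspondence for two groups: F = (n - 2) r^2 / (1 - r^2).
print(round(F, 4), round((n - 2) * r**2 / (1 - r**2), 4))
print(round(p_f, 4), round(p_m, 4))
```

The two p values come from different reference distributions (F versus chi-square) yet land close together, which mirrors the thesis's finding that the two tests almost always agree at the 0.05 level.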
|