1 
Cumulative Sum Control Charts for Censored Reliability Data. Olteanu, Denisa Anca. 28 April 2010 (has links)
Companies routinely perform life tests for their products. Typically, these tests involve running a set of products until the units fail. Most often, the data are censored according to different censoring schemes, depending on the particulars of the test. On occasion, tests are stopped at a predetermined time and the units yet to fail are suspended. In other instances, the data are collected through periodic inspection and only upper and lower bounds on the lifetimes are recorded. Reliability professionals use a number of nonnormal distributions to model the resulting lifetime data, the Weibull distribution being the most frequently used. To monitor the quality and reliability characteristics of such processes, one must account for the challenges imposed by the nature of the data. We propose likelihood-ratio-based cumulative sum (CUSUM) control charts for censored lifetime data with nonnormal distributions. We illustrate the development and implementation of the charts, and we evaluate their properties through simulation studies. We address the problem of interval censoring, and we construct a CUSUM chart for censored ordered categorical data, which we illustrate by a case study at Becton Dickinson (BD). We also address the problem of monitoring both parameters of the Weibull distribution for processes with right-censored data. / Ph. D.
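The likelihood-ratio CUSUM described in this abstract can be sketched for its simplest case: a Weibull model with known shape parameter and right censoring only. The function names and the in-control/out-of-control scale values below are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def weibull_loglik(x, censored, shape, scale):
    """Log-likelihood contribution of one unit under a Weibull model:
    failures contribute the log density, right-censored units the log
    survival function."""
    z = x / scale
    if censored:
        return -z ** shape                       # log S(x)
    return np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape

def cusum_statistics(times, censor_flags, shape, scale0, scale1):
    """Upper CUSUM of log likelihood ratios: in-control scale0 versus
    out-of-control scale1 (both assumed known here)."""
    c, stats = 0.0, []
    for x, d in zip(times, censor_flags):
        lr = (weibull_loglik(x, d, shape, scale1)
              - weibull_loglik(x, d, shape, scale0))
        c = max(0.0, c + lr)                     # reset at zero
        stats.append(c)
    return stats
```

The chart would signal when the statistic crosses a control limit, chosen for example by simulating the in-control average run length.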

2 
A Comparison of Bayesian and Quasi-Bayesian Methods for Histogram Smoothing. Peng, Chih-Hung. Unknown date (has links)
For categorical data from a population with a multinomial distribution, Bayesian analysis typically takes a Dirichlet distribution as the prior distribution, but in many practical applications this causes difficulties. For example, when estimating the proportion of the total labor force in each age group, the population is multinomial and, in addition, the proportions in adjacent categories are close to one another; in other words, the multinomial population is smooth. Because the Dirichlet distribution itself has no smoothness property, retaining it as the prior is problematic for Bayesian analysis. Dickey and Jiang (1998) proposed a solution to this difficulty: they adjust the Dirichlet distribution appropriately, calling the linearly transformed Dirichlet distribution the filtered-variate Dirichlet distribution, and use it as the adjusted prior. For Dickey and Jiang's method, we recompute the Bayesian solution by the Monte Carlo method, and we also approximate the Bayesian solution with a quasi-Bayesian method similar to those of Makov and Smith (1977) and Smith and Makov (1978) for mixture distributions. Through computer simulation, this thesis compares the performance of the Bayesian and quasi-Bayesian methods, examines the convergence behavior of the quasi-Bayesian method, and offers recommendations on when the quasi-Bayesian method should be used.
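As a baseline for the smoothing problem above, the plain conjugate Dirichlet-multinomial update can be estimated by Monte Carlo. The filtered-variate adjustment of Dickey and Jiang is not reproduced here; this is only the standard conjugate computation that their method modifies, with illustrative prior values.

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_posterior_mean(counts, prior_alpha, n_draws=5000):
    """Monte Carlo estimate of the posterior mean cell probabilities for
    multinomial counts under a Dirichlet(prior_alpha) prior.  By
    conjugacy the posterior is Dirichlet(prior_alpha + counts), so the
    simulation estimate can be checked against the exact mean."""
    alpha_post = np.asarray(prior_alpha, dtype=float) + np.asarray(counts)
    draws = rng.dirichlet(alpha_post, size=n_draws)
    return draws.mean(axis=0)
```

With a smooth prior in place of the plain Dirichlet, only the sampling step changes; the Monte Carlo averaging is the same.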

3 
Classification in high dimensional feature spaces / by H.O. van Dyk. Van Dyk, Hendrik Oostewald. January 2009 (has links)
In this dissertation we developed theoretical models to analyse Gaussian and multinomial distributions. The analysis focuses on classification in high-dimensional feature spaces and provides a basis for dealing with issues such as data sparsity and feature selection for Gaussian and multinomial distributions, two frequently used models in high-dimensional applications. A naïve Bayesian philosophy is followed to deal with issues associated with the curse of dimensionality. The core treatment of the Gaussian and multinomial models consists of finding analytical expressions for classification error performance. Exact analytical expressions were found for the error rates of binary-class systems with Gaussian features of arbitrary dimensionality under any type of quadratic decision boundary (except degenerate paraboloidal boundaries).
Similarly, computationally inexpensive (and approximate) analytical error-rate expressions were derived for classifiers with multinomial models. Additional curse-of-dimensionality issues specific to multinomial models (feature sparsity) were addressed and tested on a text-based language identification problem covering all eleven official languages of South Africa. / Thesis (M.Ing. (Computer Engineering)), North-West University, Potchefstroom Campus, 2009.
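A worked special case of the exact error-rate results mentioned above, under the simplest possible assumptions: two equiprobable univariate Gaussian classes with a common variance, for which the optimal boundary is the midpoint between the means. This sketch conveys the flavor of the result only; it is not the dissertation's general quadratic-boundary formula.

```python
from math import erf, sqrt

def gaussian_bayes_error(mu0, mu1, sigma):
    """Exact Bayes error for two equiprobable univariate Gaussian classes
    with common standard deviation sigma.  The midpoint boundary gives
    P(error) = Phi(-|mu1 - mu0| / (2 * sigma)), where Phi is the standard
    normal CDF, written here via erf."""
    delta = abs(mu1 - mu0) / (2.0 * sigma)
    return 0.5 * (1.0 + erf(-delta / sqrt(2.0)))
```

When the means coincide the classes are indistinguishable and the error is 0.5; it decays toward zero as the class separation grows.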

5 
Skill Evaluation in Women's Volleyball. Florence, Lindsay Walker. 11 March 2008 (has links) (PDF)
The Brigham Young University Women's Volleyball Team recorded and rated all skills (pass, set, attack, etc.) and recorded rally outcomes (point for BYU, rally continues, point for opponent) for the entire 2006 home volleyball season. Only sequences of events occurring on BYU's side of the net were considered. Events followed one of these general patterns: serve-outcome, pass-set-attack-outcome, or block-dig-set-attack-outcome. These sequences were assumed to be first-order Markov chains, in which the quality of each contact depends only on the quality of the previous contact and not on contacts further removed in the sequence. We represented these sequences in an extensive matrix of transition probabilities, whose elements are the probabilities of moving from one state to another. The count matrix recorded the number of times play moved from one state to another during the season. Data in the count matrix were assumed to follow a multinomial distribution. A Dirichlet prior was formulated for each row of the count matrix, so posterior estimates of the transition probabilities were available using Gibbs sampling. The different paths through the transition probability matrix were followed at each step of the MCMC process to compute the posterior probability density that a perfect pass results in a point, a perfect set results in a point, and so forth. These posterior probability densities are used to address questions about skill performance in BYU women's volleyball.
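The Dirichlet-multinomial step above can be sketched directly: each row of the count matrix is multinomial, so with a Dirichlet prior each row's posterior is again Dirichlet and can be sampled independently. State labels, the prior value, and the path-following summary below are illustrative, not the thesis's actual states or code.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_transition_matrix(count_matrix, prior=1.0, n_draws=2000):
    """Posterior draws of a transition matrix: row i's posterior is
    Dirichlet(counts[i] + prior), sampled independently per row.
    Returns an array of shape (n_draws, k, k)."""
    counts = np.asarray(count_matrix, dtype=float)
    return np.stack([rng.dirichlet(row + prior, size=n_draws)
                     for row in counts], axis=1)

def prob_reach_state(draws, start, target, n_steps=10):
    """Posterior mean probability of occupying `target` after n_steps
    starting from `start`, averaged over posterior draws -- a simple
    stand-in for following paths to a rally outcome."""
    probs = []
    for P in draws:
        v = np.zeros(P.shape[0]); v[start] = 1.0
        for _ in range(n_steps):
            v = v @ P
        probs.append(v[target])
    return float(np.mean(probs))
```

In the thesis's setting the rally outcomes (point for BYU, point for opponent) would be absorbing states, so the n-step occupancy converges to the absorption probability.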

6 
Flexible Models for Hierarchical and Overdispersed Data in Agriculture. Sercundes, Ricardo Klein. 29 March 2018 (has links)
In this work we explored and proposed flexible models to analyze hierarchical and overdispersed data in agriculture. A semiparametric generalized linear mixed model was applied and compared with the main standard models for assessing count data, and a combined model that accounts for overdispersion and clustering through two separate sets of random effects was proposed for modeling nominal outcomes. For all models, the computational code was implemented in SAS and is available in the appendix.
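The combined model can be illustrated by simulation for the count-data case: a normal cluster effect captures the hierarchy and an independent mean-one gamma effect captures the overdispersion, both feeding a Poisson outcome. The thesis's own code is in SAS; this Python sketch and its parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_combined_counts(n_clusters, n_per_cluster, beta0=1.0,
                             sigma_b=0.5, gamma_shape=2.0):
    """Simulate from a Poisson 'combined model': a normal cluster effect
    b_i models the hierarchy, and a gamma effect theta_ij with mean 1
    models overdispersion beyond the Poisson variance."""
    b = rng.normal(0.0, sigma_b, size=n_clusters)
    rows = []
    for i in range(n_clusters):
        theta = rng.gamma(gamma_shape, 1.0 / gamma_shape, size=n_per_cluster)
        mu = theta * np.exp(beta0 + b[i])        # conditional Poisson mean
        rows.append(rng.poisson(mu))
    return np.array(rows)                        # (n_clusters, n_per_cluster)
```

Marginally the counts show variance well above the mean, which is the signature that motivates the two separate sets of random effects.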

7 
Measuring Skill Importance in Women's Soccer and Volleyball. Allan, Michelle L. 11 March 2009 (has links) (PDF)
The purpose of this study is to demonstrate how to measure skill importance in two sports: soccer and volleyball. A Division I women's soccer team filmed each home game during a competitive season. Every defensive, dribbling, first-touch, and passing skill was rated and recorded for each team, and it was noted whether each sequence of plays led to a successful shot. A hierarchical Bayesian logistic regression model is implemented to determine how the performance of each skill affects the probability of a successful shot. A Division I women's volleyball team rated each skill (serve, pass, set, etc.) and recorded rally outcomes during home games in a competitive season. Skills were rated only when the ball was on the home team's side of the net. Events followed one of three patterns: serve-outcome, pass-set-attack-outcome, or dig-set-attack-outcome. We analyze the volleyball data using two different techniques: Markov chains and Bayesian logistic regression. The sequences of events are assumed to be first-order Markov chains, meaning the quality of the current skill depends only on the quality of the previous skill. The count matrix is assumed to follow a multinomial distribution, so a Dirichlet prior is used to estimate each row of the count matrix. Bayesian simulation is used to produce unconditional posterior probabilities (e.g., the probability that a perfect serve results in a point). The volleyball logistic regression model uses a Bayesian approach to determine how the performance of each skill affects the probability of a successful outcome. The posterior distributions produced by each model are used to calculate importance scores. For the soccer data, the importance scores revealed that passing, first touch, and dribbling are the most important skills for the primary team. The Markov chain model for the volleyball data indicates that setting 3–5 feet off the net increases the probability of a successful outcome.
The logistic regression model for the volleyball data reveals that serves have a high importance score because of their steep slope. Importance scores can be used to assist coaches in allocating practice time, developing new strategies, and analyzing each player's skill performance.
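A minimal sketch of the Bayesian logistic-regression step described above, assuming a single skill rating as predictor and a random-walk Metropolis sampler in place of whatever sampler the thesis used; the prior, tuning values, and function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_posterior(beta, x, y, prior_sd=10.0):
    """Bernoulli log-likelihood with a logit link plus a vague normal
    prior on (intercept, slope)."""
    eta = beta[0] + beta[1] * x
    ll = np.sum(y * eta - np.log1p(np.exp(eta)))
    return ll - np.sum(beta ** 2) / (2.0 * prior_sd ** 2)

def metropolis_logistic(x, y, n_iter=4000, step=0.15):
    """Random-walk Metropolis draws of (intercept, slope); the slope's
    posterior distribution is one way to score a skill's importance."""
    beta = np.zeros(2)
    lp = log_posterior(beta, x, y)
    draws = []
    for _ in range(n_iter):
        prop = beta + rng.normal(0.0, step, size=2)
        lp_prop = log_posterior(prop, x, y)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            beta, lp = prop, lp_prop
        draws.append(beta.copy())
    return np.array(draws)
```

After discarding a burn-in, the posterior draws of the slope summarize how strongly better execution of the skill raises the success probability.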

8 
An Empirical Study of Classification on Multinomial Data. Kao, Ching-Hsiang. Unknown date (has links)
With the rapid development of computing, the World Wide Web (WWW) has made information far easier to share and retrieve, and search engines are the chief tool for locating it; the well-known Google was itself founded on its search engine. Searching typically relies on characteristics of the web pages, entropy being one of the most commonly used characteristic indices. By combining such an index with the keywords supplied by the user, the engine finds the pages most similar to the user's query, that is, the pages that maximize a similarity index function. Similarity index functions are also common in classification problems in biology and ecology, but there the similarity between two communities is usually computed to judge whether the communities are alike, rather than an index of a single community as in the search-engine setting.
This research asks which gives better classification accuracy, a single-community index or a pairwise index, for data that are multinomially distributed, especially multinomial data resembling a geometric distribution, an assumption many ecological communities satisfy. The methods considered include the single-community entropy and Simpson indices, the pairwise entropy and similarity indices (Yue and Clayton, 2005), support vector machines, and logistic regression, compared through computer simulation and cross-validation. We find that the single-community entropy index classifies surprisingly well in our simulation studies, at times even better than a support vector machine on the raw data; however, its results are not robust, which is the main drawback of this classification method.
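The two single-community indices compared above are straightforward to compute. The entropy-based classification rule sketched here (assign a sample to the reference community with the closest entropy) is an illustrative reading of the single-index approach, not necessarily the exact rule used in the thesis.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (zero cells contribute 0)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def simpson(p):
    """Simpson's index: the probability two random draws share a category."""
    p = np.asarray(p, dtype=float)
    return float(np.sum(p ** 2))

def classify_by_entropy(sample, references):
    """Assign a sample distribution to the reference community whose
    entropy is closest -- a single-index rule needing no pairwise
    comparison of the sample with each reference."""
    h = entropy(sample)
    return int(np.argmin([abs(h - entropy(r)) for r in references]))
```

A near-uniform sample is matched to a high-entropy reference community, a concentrated sample to a low-entropy one, which illustrates both the appeal and the fragility of the single-index rule: communities with different compositions but equal entropy are indistinguishable.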

9 
Introduction to Probability Theory. Chen, Yong-Yuan. 25 May 2010 (has links)
In this paper, we first present the basic principles of set theory and combinatorial analysis, the most useful tools for computing probabilities. We then show some important properties derived from the axioms of probability. Conditional probabilities come into play not only when partial information is available, but also as a tool for computing probabilities more easily, even when no partial information is given. Next, the concept of a random variable and some of its related properties are introduced. For univariate random variables, we present the basic properties of some common discrete and continuous distributions; the important properties of jointly distributed random variables are also considered. Some inequalities, the law of large numbers, and the central limit theorem are discussed. Finally, we introduce an additional topic: the Poisson process.
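The law of large numbers and the central limit theorem mentioned above can be illustrated by a short simulation: standardized means of skewed (exponential) samples look approximately standard normal once the sample size is large. The exponential example and the sample sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def standardized_means(dist_sampler, n, mean, sd, n_reps=5000):
    """Draw n_reps sample means of size n and standardize them; by the
    central limit theorem the result is approximately N(0, 1) for large
    n, regardless of the sampled distribution's shape."""
    samples = dist_sampler(size=(n_reps, n))
    return (samples.mean(axis=1) - mean) / (sd / np.sqrt(n))

# Exponential(1) has mean 1 and standard deviation 1.
z = standardized_means(lambda size: rng.exponential(1.0, size=size),
                       n=200, mean=1.0, sd=1.0)
```

The same function demonstrates the law of large numbers: as n grows, the unstandardized sample means concentrate around the true mean.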
