121 |
Key factors influencing checking in maple veneered decorative hardwood plywood
Burnard, Michael D. 23 October 2012 (has links)
Face checking in decorative maple veneered plywood panels is a significant problem for hardwood plywood manufacturers, furniture makers, cabinetmakers, and consumers. Efforts made to date by panel producers and researchers to minimize checking have been limited and have produced contradictory results. In this study, the impact of four manufacturing factors believed to contribute to check development in decorative maple veneer panels was determined. The factors investigated were face veneer thickness and preparation, lathe-check orientation, adhesive, and core type. An efficient, automated, optical technique based on digital image correlation principles was developed and used to detect and measure checks as they develop.
The novel method for characterizing check severity and development efficiently measured checking for a substantial number of samples. The results of the factor-screening analysis reveal that intricate four-way interactions between factor levels contribute to check development, and that some factor combinations are likely to exhibit much more checking than others. / Graduation date: 2013
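The abstract does not detail the optical technique, but the heart of digital image correlation is locating a reference subset in a later image by maximizing a normalized correlation score. Below is a minimal sketch of that matching step, assuming grayscale images stored as NumPy arrays; the function and variable names are illustrative and are not the thesis's actual pipeline.

```python
import numpy as np

def zncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Zero-normalized cross-correlation between two equally sized grayscale patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_subset(reference: np.ndarray, current: np.ndarray,
                 top_left: tuple[int, int], size: int, search: int) -> tuple[int, int]:
    """Find where a reference subset moved in the current image (integer-pixel search)."""
    r0, c0 = top_left
    template = reference[r0:r0 + size, c0:c0 + size]
    best, best_pos = -1.0, (r0, c0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + size > current.shape[0] or c + size > current.shape[1]:
                continue
            score = zncc(template, current[r:r + size, c:c + size])
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```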
122 |
Evaluación en el modelado de las respuestas de recuento
Llorens Aleixandre, Noelia 10 June 2005 (has links)
Este trabajo presenta dos líneas de investigación desarrolladas en los últimos años en torno a la etapa de evaluación en datos de recuento. Los campos de estudio han sido: los datos de recuento, concretamente el estudio del modelo de regresión de Poisson y sus extensiones y la etapa de evaluación como punto de inflexión en el proceso de modelado estadístico. Los resultados obtenidos ponen de manifiesto la importancia de aplicar el modelo adecuado a las características de los datos así como de evaluar el ajuste del mismo. Por otra parte la comparación de pruebas, índices, estimadores y modelos intentan señalar la adecuación o la preferencia de unos sobre otros en determinadas circunstancias y en función de los objetivos del investigador. / This work presents two lines of research developed in recent years around the evaluation stage for count data. The areas of study have been count data, specifically Poisson regression modelling and its extensions, and the evaluation stage as a turning point in the statistical modelling process. The results obtained demonstrate the importance of applying a model suited to the characteristics of the data, as well as of evaluating its fit. In addition, comparisons of tests, indices, estimators, and models attempt to indicate the suitability of, or preference for, one over the others in particular circumstances and according to the researcher's objectives.
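As a generic illustration of the kind of model checking the abstract refers to, one common diagnostic after fitting a Poisson regression is to compare the Pearson chi-square statistic with its residual degrees of freedom; a ratio well above 1 signals overdispersion and motivates extensions such as the negative binomial model. The sketch below uses simulated data and the statsmodels GLM interface; it is not taken from the thesis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# Simulate counts that are more variable than Poisson (negative binomial with mean mu)
mu = np.exp(0.5 + 0.8 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # mean mu, variance mu + mu**2 / 2

X = sm.add_constant(x)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Under a correctly specified Poisson model this ratio should be close to 1;
# values well above 1 indicate overdispersion.
dispersion = poisson_fit.pearson_chi2 / poisson_fit.df_resid
print(f"Pearson chi2 / df = {dispersion:.2f}")
```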
123 |
Statistical properties of parasite density estimators in malaria and field applications
Hammami, Imen 24 June 2013 (links) (PDF)
Malaria is a devastating global health problem that affected 219 million people and caused 660,000 deaths in 2010. Inaccurate estimation of the level of infection may have adverse clinical and therapeutic implications for patients, and for epidemiological endpoint measurements. The level of infection, expressed as the parasite density (PD), is classically defined as the number of asexual parasites per microliter of blood. Microscopy of Giemsa-stained thick blood smears (TBSs) is the gold standard for parasite enumeration. Parasites are counted in a predetermined number of high-power fields (HPFs) or against a fixed number of leukocytes. PD estimation methods usually involve threshold values: either the number of leukocytes counted or the number of HPFs read. Most of these methods assume that (1) the distribution of the thickness of the TBS, and hence the distribution of parasites and leukocytes within the TBS, is homogeneous; and that (2) parasites and leukocytes are evenly distributed in TBSs, and thus can be modeled by a Poisson distribution. The violation of these assumptions commonly results in overdispersion. First, we studied the statistical properties (mean error, coefficient of variation, false-negative rates) of the PD estimators of commonly used threshold-based counting techniques and assessed the influence of the thresholds on the cost-effectiveness of these methods. Second, we compiled and published the first dataset of parasite and leukocyte counts per HPF. Two sources of overdispersion in the data were investigated: latent heterogeneity and spatial dependence. We accounted for unobserved heterogeneity by considering more flexible models that allow for overdispersion, in particular the negative binomial (NB) model and mixture models. The dependence structure in the data was modeled with hidden Markov models (HMMs). We found evidence that assumptions (1) and (2) are inconsistent with the observed parasite and leukocyte distributions. The NB-HMM is the closest model to the unknown distribution that generated the data. Finally, we devised a reduced reading procedure for the PD that aims at better operational optimization and a practical assessment of the heterogeneity of the distribution of parasites and leukocytes in TBSs. A patent application has been filed, and development of a prototype counter is in progress.
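For context, the standard leukocyte-based estimator alluded to above scales the observed parasite-to-leukocyte ratio by an assumed leukocyte density, commonly taken as 8,000 white blood cells per microliter. The short sketch below is a generic illustration of that convention, not the counting procedure developed in the thesis.

```python
def parasite_density(parasites_counted: int, leukocytes_counted: int,
                     assumed_wbc_per_ul: float = 8000.0) -> float:
    """Estimate parasites per microliter from a thick blood smear count.

    Scales the parasite/leukocyte ratio by an assumed leukocyte density;
    8,000 WBC/uL is a commonly used convention, not a measured value.
    """
    return parasites_counted / leukocytes_counted * assumed_wbc_per_ul

# Example: 270 parasites counted against 200 leukocytes -> 10,800 parasites/uL
print(parasite_density(270, 200))
```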
124 |
Abordagem estatística em modelos para séries temporais de contagem
Andrade, Breno Silveira de 06 May 2013 (links)
In this work, the INGARCH, GLARMA, and GARMA models were studied for modeling count time series with Poisson and negative binomial conditional distributions. The main goal was to analyze, under both the classical and the Bayesian approach, the adequacy and goodness of fit of these models, as well as to construct credibility intervals for each parameter. For the Bayesian study, a joint prior distribution satisfying the conditions of each model was considered, yielding the corresponding posterior distribution. The approach also presents model selection criteria such as EBIC, DIC, and the conditional predictive ordinate (CPO) for the Bayesian case and BIC for the classical case. A simulation study was carried out to check the consistency of the maximum likelihood estimators in the classical approach, and classical and Bayesian selection criteria were used to choose the order of each model. Finally, a real data set was analyzed, consisting of the number of financial transactions per 30-minute interval during November 2011. The results show that both the classical and the Bayesian analyses describe the behaviour of the series well and were effective in choosing its order. / Nesta dissertação estudou-se os modelos INGARCH, GLARMA e GARMA para modelar séries temporais de dados de contagem com as distribuições condicionais de Poisson e Binomial Negativa. A principal finalidade foi analisar no contexto clássico e bayesiano, a adequabilidade e qualidade de ajuste dos modelos em questão, assim como a construção de intervalos de credibilidade dos parâmetros para cada modelo testado. Para a abordagem Bayesiana foram consideradas priori conjugada, satisfazendo as condições de cada modelo em questão, obtendo assim uma distribuição a posteriori. A abordagem proposta apresenta também o cálculo de critérios de seleção de modelos como o (EBIC), (DIC) e densidade condicional preditiva ordenada (CPO) para o caso Bayesiano e (BIC) para a abordagem clássica. Com um estudo de simulação foi possível verificar a consistência dos estimadores de máxima verossimilhança (clássicos) além disso, foi usado critérios de seleção clássicos e Bayesianos para a seleção da ordem de cada um dos modelos. Uma análise de um conjunto de dados reais foi realizada, sendo uma série do número de transações financeiras realizadas em 30 minutos respectiva os mês de novembro de 2011. Estes resultados apresentam que tanto o estudo clássico, quanto o bayesiano, são capazes de descrever bem o comportamento da série e foram eficientes na escolha da ordem do mesmo.
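To make the INGARCH structure concrete, a Poisson INGARCH(1,1) model lets the conditional mean depend on the previous count and the previous mean, lambda_t = omega + alpha*y_{t-1} + beta*lambda_{t-1}. The simulation sketch below uses illustrative parameter values, not estimates from the thesis's transaction data.

```python
import numpy as np

def simulate_ingarch11(T: int, omega: float, alpha: float, beta: float,
                       seed: int = 42) -> np.ndarray:
    """Simulate a Poisson INGARCH(1,1) series: y_t | past ~ Poisson(lambda_t)."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T, dtype=int)
    lam = np.zeros(T)
    lam[0] = omega / (1.0 - alpha - beta)   # start at the stationary mean
    y[0] = rng.poisson(lam[0])
    for t in range(1, T):
        lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
        y[t] = rng.poisson(lam[t])
    return y

series = simulate_ingarch11(T=500, omega=2.0, alpha=0.3, beta=0.4)
print(series.mean(), series.var())  # overdispersion: the variance exceeds the mean
```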
125 |
簡單順序假設波松母數較強檢定力檢定研究 -兩兩母均數差 / More Powerful Tests for Simple Order Hypotheses in Poisson Distributions -The differences of the parameters
孫煜凱, Sun, Yu-Kai Unknown Date (has links)
波松分配(Poisson Distribution)常用在單位時間或是區間內,計算對有興趣之某隨機事件次數(或是已知事件之頻率),例如:速食餐廳的單位時間來客數,又或是每段期間內,某天然災害的發生次數,可以表示為某一特定事件X服從波松分配,若lambda為單位事件發生次數或是平均次數,我們稱lambda為此波松分配之母數,記作Poisson(lambda),其中lambda屬於實數。
今天我們若想要探討由兩個服從不同波松分配抽取的隨機變數，如下列所述：令X={(X1,X2)}為一集合，其中Xi為X(i,1),X(i,2),...,X(i,ni)~Poisson(lambda(i)),i=1,2。欲探討兩波松分配之均數是否相同或相差小於某個常數d時，考慮以下檢定：H0:lambda2-lambda1<=d與H1:lambda2-lambda1>d，對於此問題可以使用的檢定方法有Przyborowski和Wilenski(1940)提出的條件檢定(Conditional test,C-test)或K.Krishnamoorthy與Jessica Thomson(2002)提出的精確性檢定(Exact test,E-test)，其中的精確性檢定為一個非條件檢定(Unconditional Test)；K.Krishnamoorthy與Jessica Thomson比較條件檢定與精確性檢定的p-value皆小於顯著水準(alpha)，而精確性檢定的檢定力不亞於條件檢定，因此精確性檢定比條件檢定更適合上面所述之假設問題。
Roger L.Berger(1996)提出一個以信賴區間的p-value所建立的較強力檢定,而目前只用於檢定兩二項分配(Binomial Distribution)的機率參數p是否相同為例,然而Berger在文中提到,較強力檢定比非條件檢定有更好的檢定力,而且要求的計算時間較少,可以提升檢定的效率。
本篇論文我們希望在固定alpha與d時檢定的問題，建立一個兩波松分配均數顯著水準為alpha的較強力檢定。
利用Roger L.Berger與Dennis D.Boos(1994)提出以信賴區間的p-value方法,建立波松分配兩兩母均數差的較強力檢定;研究發現此較強力檢定與精確性檢定的p-value皆小於apha,然而我們的檢定的檢定力皆不亞於精確性檢定所計算得出的檢定力,然而其apha及虛無假設皆需要善加考慮以本篇研究來看,當檢定為單尾檢定時,若apha<0.01,我們的較強力檢定沒有辦法找到比精確性檢定更好地拒絕域,換言之,此時較強力檢定與精確性檢定的檢定力將會相等。 / Poisson Distribution is used to calculate the probability of a certain phenomenon which attracted by researcher. If we want to test two random variable in an experiment .Therefore ,let X={(X1,X2)} be independent samples ,respectively ,from Poisson distribution ,also X(i,1),X(i,2),...,X(i,ni)~Poisson(lambda(i)),i=1,2.
The problem of interest here is to test:
H0:lambda2-lambda1<=d and H0:lambda2-lambda1>d,
where 0<apha<1/2 ,and let Y1 equals sum of X1 and Y2 equals sum of X2, where apha ,lambda,d be fixed.
In this problem of hypothesis testing about two Poisson means is addressed by the conditional test.However ,the exact method of testing based on the test statistic considered in K.Krishnamoorthy,Jessica Thomson(2002) also commonly used.
Roger L.Berger ,Dennis D.Boos(1994) give a new way to calculate
p-value,which replace the old method ,called it a valid p-value .In 1996, Roger L.Berger used the new way to propose a new test for two parameter of binomial distribution which is more powerful than exact test. In the other hand, Roger L.Berger also explain the unconditional test is more suitable than the conditional test.
In this paper,we propose a new method for two parameter of Poisson distribution which revise from Roger L.Berger’s method. The result we obtain that our new test is really get a much bigger rejection region.We found when the fixed increasing ,the set of more powerful test increasing, and when the fixed power increasing ,the required sample size decreasing.
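For the special case d = 0, the conditional test mentioned above reduces to a binomial test: given the total k = Y1 + Y2, under equal means Y2 | k follows a Binomial(k, n2/(n1+n2)) distribution. The sketch below illustrates only that special case; the general d and the Berger-Boos valid p-value construction studied in the thesis are not reproduced here.

```python
from scipy.stats import binom

def c_test_p_value(y1: int, n1: int, y2: int, n2: int) -> float:
    """Conditional (C-test) p-value for H0: lambda2 <= lambda1 vs H1: lambda2 > lambda1,
    where yi is the total count from ni unit-time observations of Poisson(lambda_i).
    Conditional on k = y1 + y2, Y2 ~ Binomial(k, n2 / (n1 + n2)) when lambda1 = lambda2."""
    k = y1 + y2
    p = n2 / (n1 + n2)
    return binom.sf(y2 - 1, k, p)   # P(Y2 >= y2 | k)

print(c_test_p_value(y1=10, n1=20, y2=22, n2=20))
```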
126 |
Vybrané transformace náhodných veličin užívané v klasické lineární regresi / Selected random variables transformations used in classical linear regression
Tejkal, Martin January 2017 (links)
Classical linear regression and the hypothesis tests derived from it are based on the assumption that the dependent variables are normally distributed with equal variances. When the normality assumptions are violated, transformations of the dependent variables are usually applied. The first part of this thesis deals with variance-stabilizing transformations. Considerable attention is devoted to random variables with Poisson and negative binomial distributions, for which generalized variance-stabilizing transformations containing additional parameters in the argument are studied. Optimal values of these parameters are determined. The aim of the second part is to compare the transformations introduced in the first part with other frequently used transformations. The comparison is carried out within the framework of analysis of variance, by testing the hypothesis of equality of the means of p independent random samples using the F test. In this part, the properties of the F test are first studied under the assumptions of equal and unequal variances across the samples. Subsequently, the power functions of the F test are compared when it is applied to p samples from the Poisson distribution transformed by the square-root, logarithmic, and Yeo-Johnson transformations, and from the negative binomial distribution transformed by the inverse hyperbolic sine, logarithmic, and Yeo-Johnson transformations.
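As a concrete example of the kind of transformation being compared, an Anscombe-type square-root transform 2*sqrt(x + 3/8) makes the variance of Poisson counts approximately constant (close to 1) across a range of means. The simulation sketch below illustrates this well-known transform only; it is not one of the generalized, extra-parameter transformations studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
for lam in (2, 5, 20, 50):
    x = rng.poisson(lam, size=100_000)
    raw_var = x.var()                                  # grows with the mean (about lambda)
    stab_var = (2.0 * np.sqrt(x + 3.0 / 8.0)).var()    # roughly constant, near 1
    print(f"lambda={lam:3d}  var(x)={raw_var:6.2f}  var(2*sqrt(x+3/8))={stab_var:.3f}")
```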
127 |
Extensions of nonnegative matrix factorization for exploratory data analysis / 探索的なデータ分析のための非負値行列因子分解の拡張 / タンサクテキナ データ ブンセキ ノ タメ ノ ヒフチ ギョウレツ インシ ブンカイ ノ カクチョウ
阿部 寛康, Hiroyasu Abe 22 March 2017 (has links)
非負値行列因子分解(NMF)は,全要素が非負であるデータ行列に対する行列分解法である.本論文では,実在するデータ行列に頻繁に見られる特徴や解釈容易性の向上を考慮に入れ,探索的にデータ分析を行うためのNMFの拡張について論じている.具体的には,零過剰行列や外れ値を含む行列を扱うための確率分布やダイバージェンス,さらには分解結果である因子行列の数や因子行列への直交制約について述べている. / Nonnegative matrix factorization (NMF) is a matrix decomposition technique for analyzing nonnegative data matrices, that is, matrices whose elements are all nonnegative. In this thesis, we discuss extensions of NMF for exploratory data analysis that take into account features commonly seen in real nonnegative data matrices and the ease of interpretation of the results. In particular, we discuss probability distributions and divergences for zero-inflated data matrices and data matrices with outliers, two-factor versus three-factor decompositions, and orthogonality constraints on the factor matrices. / 博士(文化情報学) / Doctor of Culture and Information Science / 同志社大学 / Doshisha University
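For reference, the basic NMF that these extensions build on factorizes a nonnegative matrix V as V ≈ WH with W, H ≥ 0. The classic Lee-Seung multiplicative updates for the squared Frobenius error are sketched below; the thesis's zero-inflated, outlier-robust, and three-factor variants are not shown.

```python
import numpy as np

def nmf(V: np.ndarray, rank: int, n_iter: int = 500, seed: int = 0):
    """Basic NMF via Lee-Seung multiplicative updates minimizing ||V - WH||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-4
    H = rng.random((rank, m)) + 1e-4
    eps = 1e-12                      # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(2).random((30, 20)))
W, H = nmf(V, rank=5)
print(np.linalg.norm(V - W @ H))   # reconstruction error after the updates
```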
128 |
Introduction to Probability Theory
Chen, Yong-Yuan 25 May 2010 (has links)
In this paper, we first present the basic principles of set theory and combinatorial analysis, which are the most useful tools for computing probabilities. Then we show some important properties derived from the axioms of probability. Conditional probabilities come into play not only when some partial information is available, but also as a tool to compute probabilities more easily, even when no partial information is available. Next, the concept of a random variable and some of its related properties are introduced. For univariate random variables, we introduce the basic properties of some common discrete and continuous distributions. The important properties of jointly distributed random variables are also considered. Some inequalities, the law of large numbers, and the central limit theorem are discussed. Finally, we introduce an additional topic, the Poisson process.
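As a small illustration of that last topic, a homogeneous Poisson process with rate lambda can be simulated by drawing independent exponential inter-arrival times; the number of arrivals in an interval of length t is then Poisson(lambda*t). The sketch below is a generic example, not material from the thesis.

```python
import numpy as np

def poisson_process_arrivals(rate: float, horizon: float, seed: int = 0) -> np.ndarray:
    """Arrival times of a homogeneous Poisson process on [0, horizon]."""
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)   # Exp(rate) inter-arrival time
        if t > horizon:
            break
        times.append(t)
    return np.array(times)

arrivals = poisson_process_arrivals(rate=3.0, horizon=10.0)
print(len(arrivals))   # on average rate * horizon = 30 arrivals
```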