61

Empirical Likelihood Tests For Constant Variance In The Two-Sample Problem

Shen, Paul 01 May 2019 (has links)
No description available.
62

The Design of GLR Control Charts for Process Monitoring

Xu, Liaosa 27 February 2013 (has links)
Generalized likelihood ratio (GLR) control charts are investigated for two types of statistical process control (SPC) problems. The first part of this dissertation considers the problem of monitoring a normally distributed process variable when a special cause may produce a time-varying linear drift in the mean. The design and application of a GLR control chart for drift detection are investigated. The GLR drift chart does not require the practitioner to specify any tuning parameters, and it has the advantage that, at the time of a signal, estimates of both the change point and the drift rate are immediately available. An equation is provided to accurately approximate the control limit. The performance of the GLR drift chart is compared to that of other control charts, such as a standard CUSUM chart and a CUSCORE chart designed for drift detection. We also compare the GLR chart designed for drift detection to the GLR chart designed for sustained-shift detection, since both require only a control limit to be specified. In terms of the expected time to detection, and in terms of the bias and mean squared error of the change-point estimators, the GLR drift chart performs better than the GLR shift chart over a wide range of drift rates when the out-of-control process truly follows a linear drift. The second part of the dissertation considers the problem of monitoring a linear functional relationship between a response variable and one or more explanatory variables (a linear profile). The design and application of GLR control charts for this problem are investigated. The likelihood ratio test of the GLR chart is generalized over the regression coefficients, the variance of the error term, and the possible change point. The performance of the GLR chart is compared to that of various existing control charts, and we show that its overall performance in detecting a wide range of shift sizes is much better than the alternatives. The existing control charts designed for particular shifts of interest have several chart parameters that must be specified by the user, which makes their design more difficult. The GLR chart is very simple to design, as it is invariant to the choice of design matrix and to the in-control parameter values, so only one design parameter (the control limit) needs to be specified. In particular, the GLR chart can be applied with a sample size of n = 1 at each sampling point, where some other charts cannot. Another advantage of the GLR chart is its built-in diagnostic aids, which provide estimates of both the change point and the linear profile parameters. / Ph. D.
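A minimal numpy sketch of the GLR drift-detection idea described above: for each candidate change point the drift rate is profiled out of the normal likelihood, and the maximized log-likelihood ratio is compared against a control limit. The in-control parameters, the simulated series, and the control limit value are illustrative assumptions, not quantities taken from the dissertation.

```python
import numpy as np

def glr_drift_statistic(x, mu0, sigma, t):
    """Maximized log-likelihood ratio at time t for a linear drift in the mean
    starting at an unknown change point tau <= t (in control: N(mu0, sigma^2))."""
    best = 0.0
    for tau in range(1, t + 1):
        w = np.arange(1, t - tau + 2)            # time since the assumed change point
        r = x[tau - 1:t] - mu0                   # deviations from the in-control mean
        stat = np.dot(w, r) ** 2 / (2.0 * sigma ** 2 * np.dot(w, w))
        best = max(best, stat)                   # LLR with the drift rate profiled out
    return best

rng = np.random.default_rng(0)
mu0, sigma, h = 0.0, 1.0, 9.0                    # h: illustrative control limit
x = rng.normal(mu0, sigma, 60)
x[30:] += 0.1 * np.arange(1, 31)                 # linear drift after observation 30

for t in range(1, len(x) + 1):
    if glr_drift_statistic(x, mu0, sigma, t) > h:
        print(f"signal at observation {t}")      # the maximizing tau and profiled drift
        break                                    # rate serve as built-in diagnostics
```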
63

Inferences on the power-law process with applications to repairable systems

Chumnaul, Jularat 13 December 2019 (has links)
System testing is very time-consuming and costly, especially for complex high-cost, high-reliability systems. For this reason, the number of failures needed during the developmental phase of system testing should generally be relatively small. To assess the reliability growth of a repairable system, the generalized confidence interval and the modified signed log-likelihood ratio test for the scale parameter of the power-law process are studied for incomplete failure data. Specifically, some failure times recorded early in the developmental phase of system testing cannot be observed; this circumstance is relevant to establishing a warranty period or determining a maintenance phase for repairable systems. For the proposed generalized confidence interval, we find that the method is essentially unbiased, as its coverage probabilities are close to the nominal level of 0.95 for all levels of γ and β. When the proposed and existing methods are compared in terms of average interval widths, the simulation results show that the proposed method is superior, producing shorter average widths when the predetermined number of failures is small. For the proposed modified signed log-likelihood ratio test, we find that it controls type I error well for complete failure data and has desirable power for all parameter configurations, even with a small number of failures. For incomplete failure data, the proposed modified signed log-likelihood ratio test is preferable to the signed log-likelihood ratio test in most situations in terms of controlling type I error. Moreover, the proposed test also performs well when the missing ratio is up to 30% and n > 10. In terms of empirical power, the proposed modified signed log-likelihood ratio test is superior to the signed log-likelihood ratio test in most situations. In conclusion, the proposed methods, the generalized confidence interval and the modified signed log-likelihood ratio test, are practically useful for saving cost and time during the developmental phase of system testing, since only a small number of failures is required to test systems, and they yield precise results.
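For context, the power-law process has intensity λ(t) = (β/θ)(t/θ)^(β−1), and with failure-truncated data the maximum likelihood estimates have closed forms. The sketch below is a generic illustration on made-up failure times; it does not implement the generalized confidence interval or the modified signed log-likelihood ratio test studied in the dissertation.

```python
import numpy as np

def plp_mle(times):
    """Closed-form MLEs for the power-law process with intensity
    lambda(t) = (beta/theta) * (t/theta)**(beta - 1),
    assuming failure-truncated data (observation stops at the last failure)."""
    t = np.sort(np.asarray(times, dtype=float))
    n, tn = len(t), t[-1]
    beta_hat = n / np.sum(np.log(tn / t[:-1]))     # shape (reliability-growth) parameter
    theta_hat = tn / n ** (1.0 / beta_hat)         # scale parameter
    loglik = (n * np.log(beta_hat) - n * beta_hat * np.log(theta_hat)
              + (beta_hat - 1.0) * np.sum(np.log(t)) - (tn / theta_hat) ** beta_hat)
    return beta_hat, theta_hat, loglik

# hypothetical failure times (hours) showing reliability growth (beta < 1)
failures = [15.0, 42.0, 74.0, 117.0, 168.0, 233.0, 410.0, 605.0, 861.0, 1200.0]
print(plp_mle(failures))
```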
64

Likelihood as a Method of Multi Sensor Data Fusion for Target Tracking

Gallagher, Jonathan G. 08 September 2009 (has links)
No description available.
65

Cluster-based lack of fit tests for nonlinear regression models

Munasinghe, Wijith Prasantha January 1900 (has links)
Doctor of Philosophy / Department of Statistics / James W. Neill / Checking the adequacy of a proposed parametric nonlinear regression model is important in order to obtain useful predictions and reliable parameter inferences. Lack of fit is said to exist when the regression function does not adequately describe the mean of the response vector. This dissertation considers the asymptotics, implementation, and comparative performance of the likelihood ratio tests suggested by Neill and Miller (2003). These tests use constructed alternative models determined by decomposing the lack-of-fit space according to clusterings of the observations. Clusterings are selected by a maximum power strategy, and a sequence of statistical experiments is developed in the sense of Le Cam. L2 differentiability of the parametric array of probability measures associated with the sequence of experiments is established in this dissertation, leading to local asymptotic normality. Utilizing contiguity, the limiting noncentral chi-square distribution under local parameter alternatives is then derived. For implementation purposes, standard linear model projection algorithms are used to approximate the likelihood ratio tests, after using the convexity of a class of fuzzy clusterings to form a smooth alternative model, which is in turn used to approximate the corresponding maximum optimal statistical experiment. It is demonstrated empirically that good power can result from allowing cluster selection to vary according to different points along the expectation surface of the proposed nonlinear regression model. However, in some cases a single maximum clustering suffices, leading to the development of a Bonferroni-adjusted multiple testing procedure. In addition, the maximin clustering-based likelihood ratio tests were observed to possess markedly better simulated power than the generalized likelihood ratio test with a semiparametric alternative model presented by Ciprian and Ruppert (2004).
66

A phylogenomics approach to resolving fungal evolution, and phylogenetic method development

Liu, Yu 12 1900 (has links)
Despite the popularity of fungi as eukaryotic model systems, several questions about their phylogenetic relationships remain controversial. These include the classification of the zygomycetes, which are potentially paraphyletic, i.e. a combination of several fungal lineages that are not directly related. The phylogenetic position of Schizosaccharomyces species has also been controversial: do they belong to Taphrinomycotina (previously known as archiascomycetes), as predicted by analyses of nuclear genes, or are they instead related to Saccharomycotina (budding yeasts), as in mitochondrial phylogenies? Another question concerns the precise phylogenetic position of the nucleariids, a group of amoeboid eukaryotes believed to be close relatives of Fungi. Previously conducted multi-gene analyses have been inconclusive because of limited taxon sampling and the use of only six nuclear genes. We have addressed these issues by assembling phylogenomic nuclear and mitochondrial datasets for phylogenetic inference and statistical testing. According to our results, the zygomycetes appear to be paraphyletic (Chapter 2), but the phylogenetic signal in the available mitochondrial dataset is insufficient to resolve their branching order with statistical confidence. In Chapter 3 we show, with a large nuclear dataset (more than 100 proteins) and conclusive support, that Schizosaccharomyces species are part of Taphrinomycotina. We further demonstrate that the conflicting grouping of Schizosaccharomyces with budding yeasts, obtained with mitochondrial sequences, results from a phylogenetic error known as long-branch attraction (LBA), a common artifact that groups species with high evolutionary rates irrespective of their true phylogenetic positions. In Chapter 4, again using a large nuclear dataset, we demonstrate with significant statistical support that the nucleariids are the closest known relatives of Fungi. We also confirm the paraphyly of the traditional zygomycetes, as previously suggested, with significant support, but without placing all members of this group with confidence. Our results question aspects of a recent taxonomic reclassification of the zygomycetes and their chytridiomycete neighbors (a group of zoospore-producing Fungi). Overcoming or minimizing phylogenetic artifacts such as LBA has been among our most recurrent concerns. We have therefore developed a new method (Chapter 5) that identifies and eliminates sequence sites with highly uneven evolutionary rates (highly heterotachous sites, or HH sites), which are known to contribute significantly to LBA. Our method is based on a likelihood ratio test (LRT). Two previously published datasets are used to demonstrate that gradual removal of HH sites in fast-evolving species (suspected of LBA) significantly increases support for the expected 'true' topology, more effectively than comparable published methods of sequence site removal. Yet in general, data manipulation prior to analysis is far from ideal. Future development should aim at integrating HH site identification and weighting into the phylogenetic inference process itself.
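At its core, the HH-site screening developed in Chapter 5 rests on an ordinary likelihood ratio test between nested models. The sketch below shows only that generic machinery: the per-site log-likelihoods are simulated placeholders for values that would come from a constrained (single-rate) and a less constrained (rate-varying) model, and nothing here reproduces the thesis pipeline itself.

```python
import numpy as np
from scipy.stats import chi2

def likelihood_ratio_test(loglik_null, loglik_alt, extra_params):
    """Standard LRT for nested models: the alternative has `extra_params` more
    free parameters. Returns the statistic and an asymptotic chi-square p-value."""
    stat = 2.0 * (np.asarray(loglik_alt) - np.asarray(loglik_null))
    return stat, chi2.sf(stat, df=extra_params)

# Placeholder per-site log-likelihoods: a single-rate null model versus an
# alternative that lets the rate differ between two parts of the tree.
rng = np.random.default_rng(1)
ll_null = rng.normal(-25.0, 3.0, size=200)
ll_alt = ll_null + rng.gamma(shape=0.5, scale=1.0, size=200)   # alt never fits worse

stats, pvals = likelihood_ratio_test(ll_null, ll_alt, extra_params=1)
worst = np.argsort(stats)[::-1][:20]             # sites that look most heterotachous
print("candidate sites for removal:", worst)
print("smallest p-value:", pvals.min())
```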
67

Essays on Fine Structure of Asset Returns, Jumps, and Stochastic Volatility

Yu, Jung-Suk 22 May 2006 (has links)
There has been an ongoing debate about the choice of the most suitable model among a variety of model specifications and parameterizations. The first dissertation essay investigates whether asymmetric leptokurtic return distributions, such as Hansen's (1994) skewed t-distribution combined with GARCH specifications, can outperform mixed GARCH-jump models such as Maheu and McCurdy's (2004) GARJI model, which incorporates an autoregressive conditional jump intensity parameterization in the discrete-time framework. I find that the more parsimonious GJR-HT model is superior to the mixed GARCH-jump models. Likelihood ratio (LR) tests, information criteria such as AIC, SC, and HQ, and Value-at-Risk (VaR) analysis confirm that GJR-HT is one of the most suitable model specifications, offering both a better fit to the data and parsimony of parameterization. The benefits of estimating GARCH models with asymmetric leptokurtic distributions are more substantial for highly volatile series, such as emerging stock markets, which have a higher degree of non-normality. Furthermore, Hansen's skewed t-distribution also provides an excellent risk management tool, as evidenced by the VaR analysis. The second dissertation essay provides a variety of empirical evidence that stochastic volatility is redundant for S&P 500 index returns once infinite-activity pure Lévy jump models are considered, and that stochastic volatility is important for reducing pricing errors for S&P 500 index options regardless of the jump specification. This finding is important because recent studies have shown that stochastic volatility in a continuous-time framework provides an excellent fit for financial asset returns when combined with finite-activity Merton-type compound Poisson jump-diffusion models. The second essay also shows that the stochastic volatility with jumps (SVJ) and extended variance-gamma with stochastic volatility (EVGSV) models perform almost equally well for option pricing, which strongly implies that the type of Lévy jump specification is not an important factor in enhancing model performance once stochastic volatility is incorporated. In the second essay, I compute option prices via an improved fast Fourier transform (FFT) algorithm that uses characteristic functions to match arbitrary, equally spaced log-strike grids to each moneyness and maturity of actual market option prices.
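As a rough illustration of the kind of likelihood-ratio and information-criterion comparison described above, the sketch below fits a symmetric GARCH(1,1) and a GJR-GARCH(1,1), both with Hansen-style skewed t errors, to simulated returns. It assumes the third-party `arch` package; it is not the author's estimation code, and the GARJI mixed GARCH-jump model is not available in that package.

```python
import numpy as np
from scipy.stats import chi2
from arch import arch_model  # third-party package: pip install arch

rng = np.random.default_rng(2)
returns = rng.standard_t(df=6, size=2000)        # placeholder daily returns (percent)

# Symmetric GARCH(1,1) vs. GJR-GARCH(1,1), both with skewed Student t errors.
garch = arch_model(returns, vol="GARCH", p=1, q=1, dist="skewt").fit(disp="off")
gjr = arch_model(returns, vol="GARCH", p=1, o=1, q=1, dist="skewt").fit(disp="off")

lr = 2.0 * (gjr.loglikelihood - garch.loglikelihood)   # one extra asymmetry parameter
print("LR statistic:", lr, "p-value:", chi2.sf(lr, df=1))
print("AIC:", garch.aic, gjr.aic, "  BIC:", garch.bic, gjr.bic)
```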
68

Diagnostics in the ordinal logistic regression model

Moura, Marina Calais de Freitas 11 June 2019 (has links)
Ordinal logistic regression models are used to describe the relationship between an ordered categorical response variable and one or more explanatory variables, which may be discrete or continuous. Once the regression model has been fitted, it is necessary to check its goodness of fit. The Pearson chi-square and likelihood ratio statistics are not adequate for assessing goodness of fit in ordinal logistic regression models with continuous explanatory variables. For this case, the Lipsitz test, the ordinal version of the Hosmer-Lemeshow test, and the Pulkstenis-Robinson chi-square and likelihood ratio tests have been proposed. This dissertation reviews the diagnostic techniques available for cumulative logit, adjacent-categories logit, and continuation-ratio logit models, together with an application investigating the relationship between hearing loss, balance, and emotional aspects in the elderly.
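A minimal sketch of the basic likelihood machinery behind a cumulative (proportional-odds) logit model, assuming statsmodels' OrderedModel and entirely made-up data with hypothetical variable names; the Lipsitz, ordinal Hosmer-Lemeshow, and Pulkstenis-Robinson tests discussed in the dissertation are not implemented here.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "hearing_loss": rng.normal(size=n),          # hypothetical continuous covariate
    "age": rng.normal(70.0, 5.0, size=n),        # hypothetical continuous covariate
})
latent = 0.8 * df["hearing_loss"] + 0.05 * (df["age"] - 70.0) + rng.logistic(size=n)
df["balance"] = pd.cut(latent, bins=[-np.inf, -1.0, 1.0, np.inf],
                       labels=["good", "fair", "poor"])   # ordered categorical response

# Cumulative logit model with both covariates vs. a reduced model without `age`.
full = OrderedModel(df["balance"], df[["hearing_loss", "age"]],
                    distr="logit").fit(method="bfgs", disp=False)
reduced = OrderedModel(df["balance"], df[["hearing_loss"]],
                       distr="logit").fit(method="bfgs", disp=False)

lr = 2.0 * (full.llf - reduced.llf)              # one slope parameter dropped
print("LR statistic:", lr, "p-value:", chi2.sf(lr, df=1))
```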
69

The new class of Kummer beta generalized distributions: theory and applications

Pescim, Rodrigo Rossetto 06 December 2013 (has links)
In this study, a new class of generalized distributions was developed, based on the Kummer beta distribution (NG; KOTZ, 1995), which contains the exponentiated and beta generators of distributions as particular cases. The main feature of the new family is to provide greater flexibility in the extremes of the density function, making it suitable for analyzing data sets with a high degree of asymmetry and kurtosis. Two new distributions belonging to this class, based on the Birnbaum-Saunders and generalized gamma distributions, were also studied; their main characteristic is a hazard function that can assume different shapes (unimodal, bathtub-shaped, increasing, decreasing). In all studies, general mathematical properties such as ordinary and incomplete moments, the generating function, mean deviations, reliability, entropies, and order statistics and their moments were discussed. Parameter estimation is approached by the method of maximum likelihood and by Bayesian analysis, and the observed information matrix is derived. Likelihood ratio statistics and formal goodness-of-fit tests were also considered to compare the proposed distributions with some of their sub-models and with non-nested models. The results developed in all studies were applied to six real data sets.
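For orientation, one commonly cited form of the Kummer beta generator applied to a baseline cdf G(x) with density g(x) is shown below; it is given here as an assumed reference form, not taken from the thesis itself, and should be checked against it. Here B(a,b) is the beta function and ₁F₁ the confluent hypergeometric function.

```latex
f(x) = \frac{g(x)\, G(x)^{a-1}\, \bigl[1 - G(x)\bigr]^{b-1}\, e^{-c\, G(x)}}
            {B(a,b)\; {}_1F_1(a;\, a+b;\, -c)},
\qquad a > 0,\; b > 0,\; c \in \mathbb{R}.
```

Setting c = 0 recovers the beta generator, and additionally setting b = 1 gives the exponentiated generator, consistent with the particular cases mentioned above.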
70

Mixed linear models for longitudinal data in a factorial experiment with additional treatment

Rocha, Gilson Silvério da 09 October 2015 (has links)
Assays aimed at studying crops through multiple measurements performed on the same sampling unit over time, space, depth, etc. are frequently adopted in agronomic experiments. This type of measurement gives rise to datasets known as longitudinal data, for which the use of statistical procedures capable of identifying possible patterns of variation and correlation among measurements is of great importance. The possibility of including random effects and modeling covariance structures makes the methodology of mixed linear models one of the most appropriate tools for this type of analysis. However, despite all the theoretical and computational development, the use of this methodology in more complex designs involving longitudinal data and additional treatments, such as those used in forage crop research, still needs study. The present work covered the use of the Hasse diagram and the top-down strategy in building mixed linear models for the study of successive forage cuts from an experiment involving boron fertilization in alfalfa (Medicago sativa L.) carried out in the experimental field of Embrapa Southeast Livestock. First, we considered a qualitative approach for all study factors and, owing to the complexity of the experimental design, chose to construct a Hasse diagram. The inclusion of random effects and the selection of covariance structures for the residuals were based on the likelihood ratio test, calculated from parameters estimated by the restricted maximum likelihood method, and on the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), and the Bayesian information criterion (BIC). The fixed effects were tested through the Wald-F test and, due to the significant effects of the variation sources associated with the longitudinal factor, a regression study was carried out. Building the Hasse diagram was essential for understanding and symbolically displaying the relationships among all factors present in the study, allowing the variation sources and their degrees of freedom to be decomposed and ensuring that all tests were performed correctly. The inclusion of a random effect associated with the experimental unit was essential for modeling the behavior of each unit, and the variance-components structure with heterogeneity, incorporated into the residuals, was able to model efficiently the heterogeneity of variances present across the different cuts of the alfalfa crop. The fit was checked by residual diagnostic plots. The regression study allowed us to evaluate the shoot dry matter productivity (kg ha-1) of consecutive cuts of the alfalfa crop, comparing fertilization with different boron sources and doses. The best productivity results were observed for the combination of the source ulexite with doses of 3, 6 and 9 kg ha-1 of boron.
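A minimal sketch of the model-selection step described above, assuming statsmodels' MixedLM and hypothetical column names (dry_matter, source, dose, cut, plot): it compares a random-intercept model with one that adds a random slope over cuts using a REML-based likelihood ratio test. It does not reproduce the heterogeneous residual-variance structure ultimately chosen in the dissertation.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
plots, cuts = 24, 6
df = pd.DataFrame({
    "plot": np.repeat(np.arange(plots), cuts),
    "cut": np.tile(np.arange(cuts), plots),
    "source": np.repeat(rng.choice(["ulexite", "borax"], plots), cuts),
    "dose": np.repeat(rng.choice([0.0, 3.0, 6.0, 9.0], plots), cuts),
})
u = rng.normal(0.0, 2.0, plots)                       # plot-level random intercepts
df["dry_matter"] = (20.0 + 0.5 * df["dose"] + 1.5 * df["cut"]
                    + u[df["plot"]] + rng.normal(0.0, 1.5, len(df)))

# Random intercept per plot vs. random intercept plus random slope over cuts (REML).
m0 = smf.mixedlm("dry_matter ~ C(source) * dose + cut", df,
                 groups=df["plot"], re_formula="1").fit(reml=True)
m1 = smf.mixedlm("dry_matter ~ C(source) * dose + cut", df,
                 groups=df["plot"], re_formula="1 + cut").fit(reml=True)

lr = 2.0 * (m1.llf - m0.llf)      # adds a slope variance and an intercept-slope covariance
print("REML LR statistic:", lr)
print("chi-square p-value (df=2, conservative at the boundary):", chi2.sf(lr, df=2))
```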
