  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
  Our metadata is collected from universities around the world.
81

Likelihood ratio tests of separable or double separable covariance structure, and the empirical null distribution

Gottfridsson, Anneli January 2011 (has links)
The focus in this thesis is on the calculation of an empirical null distribution for likelihood ratio tests testing either separable or double separable covariance matrix structures versus an unstructured covariance matrix. These calculations have been performed for various dimensions and sample sizes, and the results are compared with the asymptotic χ²-distribution that is commonly used as an approximation. Tests of separable structures are of particular interest when data are collected such that more than one relation between the components of the observation is suspected. For instance, if there are both a spatial and a temporal aspect, a hypothesis of two covariance matrices, one for each aspect, is reasonable.
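The gap between an empirical null distribution and the asymptotic χ² approximation is easy to reproduce for a simpler structured-covariance hypothesis. The sketch below is an illustrative analogue only: it tests a diagonal covariance against an unstructured one (not the thesis's separable structures), and the dimension, sample size, and replication count are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def lrt_diagonal(X):
    # -2 log Lambda for H0: diagonal covariance vs. unstructured covariance
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)   # MLE (divides by n)
    D = np.diag(np.diag(S))
    return n * (np.log(np.linalg.det(D)) - np.log(np.linalg.det(S)))

def empirical_null(n, p, reps=2000):
    # Simulate the statistic under H0 (independent standard normal columns)
    return np.array([lrt_diagonal(rng.standard_normal((n, p))) for _ in range(reps)])

n, p = 15, 4
null_stats = empirical_null(n, p)
df = p * (p - 1) // 2                        # parameters removed by H0
emp_q95 = np.quantile(null_stats, 0.95)
chi2_q95 = stats.chi2.ppf(0.95, df)
```

In small samples the empirical 95% critical value typically exceeds the χ² quantile, which is exactly why simulated null distributions (or Bartlett-type corrections) are preferred over the asymptotic approximation.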
82

Detection and Classification of DIF Types Using Parametric and Nonparametric Methods: A comparison of the IRT-Likelihood Ratio Test, Crossing-SIBTEST, and Logistic Regression Procedures

Lopez, Gabriel E. 01 January 2012 (has links)
The purpose of this investigation was to compare the efficacy of three methods for detecting differential item functioning (DIF). The performance of the crossing simultaneous item bias test (CSIBTEST), the item response theory likelihood ratio test (IRT-LR), and logistic regression (LOGREG) was examined across a range of experimental conditions, including different test lengths, sample sizes, DIF and differential test functioning (DTF) magnitudes, and mean differences in the underlying trait distributions of the comparison groups, herein referred to as the reference and focal groups. In addition, each procedure was implemented using both an all-other anchor approach, in which the IRT-LR baseline model, CSIBTEST matching subtest, and LOGREG trait estimate were based on all test items except the one under study, and a constant anchor approach, in which the baseline model, matching subtest, and trait estimate were based on a predefined subset of DIF-free items. Response data for the reference and focal groups were generated using known item parameters based on the three-parameter logistic item response theory model (3-PLM). Various types of DIF were simulated by shifting the generating item parameters of select items to achieve desired DIF and DTF magnitudes based on the area between the groups' item response functions. Power, Type I error, and Type III error rates were computed for each experimental condition based on 100 replications, and the effects were analyzed via ANOVA. Results indicated that the procedures varied in efficacy, with LOGREG, when implemented using the all-other approach, providing the best balance of power and Type I error rate. However, none of the procedures was effective at identifying the type of DIF that was simulated.
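As a rough illustration of the LOGREG procedure, the sketch below checks one item for uniform DIF by comparing nested logistic models with and without a group term via a likelihood ratio test. It is a minimal sketch under stated assumptions: the matching variable is treated as a known trait score, and the simulated sample size and effect sizes are arbitrary, not the study's design.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy import stats

rng = np.random.default_rng(1)

def fit_logit(X, y):
    # Maximum-likelihood logistic regression; returns the maximized log-likelihood
    def nll(beta):
        p = np.clip(expit(X @ beta), 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return -minimize(nll, np.zeros(X.shape[1]), method="BFGS").fun

# Simulated responses: the item is harder for the focal group (uniform DIF)
n = 2000
group = rng.integers(0, 2, n)            # 0 = reference, 1 = focal
theta = rng.standard_normal(n)           # trait score (matching variable)
y = (rng.random(n) < expit(1.2 * theta - 0.8 * group)).astype(float)

ones = np.ones(n)
ll0 = fit_logit(np.column_stack([ones, theta]), y)           # no-DIF model
ll1 = fit_logit(np.column_stack([ones, theta, group]), y)    # uniform-DIF model
lrt = 2.0 * (ll1 - ll0)
p_value = stats.chi2.sf(lrt, df=1)       # small p suggests DIF on this item
```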
83

Comparing latent means using two factor scaling methods: a Monte Carlo study

Wang, Dandan, 1981- 10 July 2012 (has links)
Social science researchers are increasingly using multi-group confirmatory factor analysis (MG-CFA) to compare different groups' latent variable means. To ensure that a MG-CFA model is identified, two approaches are commonly used to set the scale of the latent variable. The reference indicator (RI) strategy, which involves constraining one loading per factor to a value of one across groups, assumes that the RI has equal factor loadings across groups. The second approach involves constraining each factor's variance to a value of one across groups and thus assumes that the factor variances are equal across groups. Latent mean differences may be tested and described using Gonzalez and Griffin's (2001) likelihood ratio test (LRT_k) and Hancock's (2001) standardized latent mean difference effect size measure (δ_k), respectively. Applied researchers using the LRT_k and/or the δ_k when comparing groups' latent means may not explicitly test the assumptions underlying the two factor scaling methods. To date, no study has examined the impact of violating the assumptions associated with the two scaling methods on latent mean comparisons. The purpose of this study was to assess the performance of the LRT_k and the δ_k when violating the assumptions underlying the RI strategy and/or the factor variance scaling method. Type I error and power of the LRT_k, as well as relative parameter bias and parameter bias of the δ_k, were examined when varying loading difference magnitude, factor variance ratio, factor loading pattern and sample size ratio. Rejection rates of model fit indices, including the χ² test, RMSEA, CFI, TLI and SRMR, under these varied conditions were also examined. The results indicated that violating the assumptions underlying the RI strategy did not affect the LRT_k or the δ_k.
However, violating the assumption underlying the factor variance scaling method influenced Type I error rates of the LRT_k, particularly in unequal sample size conditions. Results also indicated that the four factors manipulated in this study had an impact on correct model rejection rates of the model fit indices. It is hoped that this study provides useful information to researchers concerning the use of the LRT_k and δ_k under factor scaling method assumption violations.
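The likelihood ratio test for latent mean differences reduces to a χ² difference test between two nested MG-CFA models: one constraining the latent means equal across groups and one freeing them. A minimal sketch of that final step (the fit statistics below are hypothetical, not taken from the study):

```python
from scipy import stats

def lrt_latent_means(chisq_constrained, df_constrained, chisq_free, df_free):
    # Chi-square difference between a model holding latent means equal across
    # groups and a nested model freeing the focal group's latent mean
    delta_chi2 = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    return delta_chi2, stats.chi2.sf(delta_chi2, delta_df)

# Hypothetical model-fit statistics from two MG-CFA runs
stat, p = lrt_latent_means(58.3, 25, 49.1, 24)   # small p: reject equal means
```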
84

The Likelihood Ratio Test for Order Restricted Hypotheses in Non-Inferiority Trials

Skipka, Guido 25 June 2003 (has links)
No description available.
85

A phylogenomics approach to resolving fungal evolution, and phylogenetic method development

Liu, Yu 12 1900 (has links)
Despite the popularity of fungi as eukaryotic model systems, several questions about their phylogenetic relationships remain controversial. These include the classification of the zygomycetes, which are potentially paraphyletic, i.e. a combination of several not directly related fungal lineages. The phylogenetic position of Schizosaccharomyces species has also been controversial: do they belong to Taphrinomycotina (previously known as archiascomycetes), as predicted by analyses with nuclear genes, or are they instead related to Saccharomycotina (budding yeasts), as in mitochondrial phylogenies? Another question concerns the precise phylogenetic position of the nucleariids, a group of amoeboid eukaryotes believed to be close relatives of Fungi. Previously conducted multi-gene analyses have been inconclusive because of limited taxon sampling and the use of only six nuclear genes. We have addressed these issues by assembling phylogenomic nuclear and mitochondrial datasets for phylogenetic inference and statistical testing. According to our results, the zygomycetes appear to be paraphyletic (Chapter 2), but the phylogenetic signal in the available mitochondrial dataset is insufficient for resolving their branching order with statistical confidence. In Chapter 3 we show, with a large nuclear dataset (more than 100 proteins) and conclusive support, that Schizosaccharomyces species are part of Taphrinomycotina. We further demonstrate that the conflicting grouping of Schizosaccharomyces with budding yeasts, obtained with mitochondrial sequences, results from a phylogenetic error known as long-branch attraction (LBA), a common artifact that leads to the grouping of species with high evolutionary rates irrespective of their true phylogenetic positions.
In Chapter 4, using again a large nuclear dataset, we demonstrate with significant statistical support that the nucleariids are the closest known relatives of Fungi. We also confirm the paraphyly of the traditional zygomycetes, as previously suggested, with significant support, but without placing all members of this group with confidence. Our results question aspects of a recent taxonomic reclassification of the zygomycetes and their chytridiomycete neighbors (a group of zoospore-producing Fungi). Overcoming or minimizing phylogenetic artifacts such as LBA has been among our most recurring questions. We have therefore developed a new method (Chapter 5) that identifies and eliminates sequence sites with highly uneven evolutionary rates (highly heterotachous sites, or HH sites), which are known to contribute significantly to LBA. Our method is based on a likelihood ratio test (LRT). Two previously published datasets are used to demonstrate that gradual removal of HH sites in fast-evolving species (suspected of LBA) significantly increases the support for the expected 'true' topology, in a more effective way than comparable published methods of sequence site removal. Yet in general, data manipulation prior to analysis is far from ideal. Future development should aim at integrating HH site identification and weighting into the phylogenetic inference process itself.
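The heterotachy-screening idea can be illustrated with a deliberately simplified stand-in model: treat one site's substitution counts in two species groups as Poisson and use an LRT to compare a single shared rate against group-specific rates. This sketch is an assumption-laden toy, not the method developed in Chapter 5; the counts are invented.

```python
import numpy as np
from scipy import stats

def rate_shift_lrt(counts_a, counts_b):
    # One site: H0 = a single Poisson substitution rate across both species
    # groups; H1 = a separate rate per group (heterotachy). LRT ~ chi2(1).
    ca, cb = np.asarray(counts_a, float), np.asarray(counts_b, float)
    rate0 = (ca.sum() + cb.sum()) / (ca.size + cb.size)
    ll0 = stats.poisson.logpmf(ca, rate0).sum() + stats.poisson.logpmf(cb, rate0).sum()
    ll1 = (stats.poisson.logpmf(ca, ca.mean()).sum()
           + stats.poisson.logpmf(cb, cb.mean()).sum())
    lrt = 2.0 * (ll1 - ll0)
    return lrt, stats.chi2.sf(lrt, df=1)

# A strongly heterotachous site: slow in group A, fast in group B
lrt, p = rate_shift_lrt([0, 1, 0, 1, 0], [6, 7, 5, 8, 6])
```

Sites with small p-values under such a test would be candidates for removal or down-weighting before tree inference.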
86

Nonparametric criteria for sparse contingency tables

Samusenko, Pavel 18 February 2013 (has links)
In the dissertation, the problem of nonparametric testing for sparse contingency tables is addressed. Statistical inference problems caused by sparsity of contingency tables are widely discussed in the literature. Traditionally, the expected (under the null hypothesis) frequency is required to exceed 5 in almost all cells of the contingency table. If this condition is violated, the χ² approximations of goodness-of-fit statistics may be inaccurate and the table is said to be sparse. Several techniques have been proposed to tackle the problem: exact tests, alternative approximations, parametric and nonparametric bootstrap, the Bayes approach, and other methods. However, they are either not applicable or have limitations in nonparametric statistical inference for very sparse contingency tables. In the dissertation, it is shown that, for sparse categorical data, the likelihood ratio statistic and Pearson's χ² statistic may become noninformative: they no longer measure the goodness of fit of null hypotheses to data. Thus, they can be inconsistent even in cases where a simple consistent test does exist. An improvement of the classical criteria for sparse contingency tables is proposed. The improvement is achieved by grouping and smoothing of sparse categorical data, making use of a new sparse asymptotics model relying on an (extended) empirical Bayes approach. Under general conditions, the consistency of the proposed criteria based on grouping is proved. Finite-sample behavior of... [to full text]
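The expected-frequency rule of thumb, and the grouping idea, can be sketched as follows. The threshold, the pooling scheme, and the toy table are illustrative assumptions, far simpler than the dissertation's empirical-Bayes smoothing:

```python
import numpy as np
from scipy.stats import chi2_contingency
from scipy.stats.contingency import expected_freq

def is_sparse(observed, threshold=5.0, frac=0.8):
    # Rule of thumb: trust the chi-square approximation only when (almost)
    # all expected cell counts exceed the threshold
    expected = expected_freq(observed)
    return float(np.mean(expected >= threshold)) < frac, expected

table = np.array([[50, 3, 1, 2],
                  [45, 4, 2, 3]])
sparse, expected = is_sparse(table)      # True: the tail cells are sparse

# Group the sparse low-count columns into a single category before testing
pooled = np.column_stack([table[:, 0], table[:, 1:].sum(axis=1)])
chi2, p, dof, exp_pooled = chi2_contingency(pooled)
```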
88

Testing Benford’s Law with the first two significant digits

Wong, Stanley Chun Yu 07 September 2010 (has links)
Benford’s Law states that the first significant digit for most data is not uniformly distributed. Instead, it follows the distribution P(d = d1) = log10(1 + 1/d1) for d1 ∈ {1, 2, …, 9}. In 2006, my supervisor, Dr. Mary Lesperance, et al. tested the goodness of fit of data to Benford’s Law using the first significant digit. Here we extend the research to the first two significant digits by performing several statistical tests – the LR-multinomial, LR-decreasing, LR-generalized Benford, LR-Rodriguez, Cramér–von Mises Wd², Ud², and Ad², and Pearson’s χ² tests – and six simultaneous confidence intervals – Quesenberry, Goodman, Bailey Angular, Bailey Square, Fitzpatrick and Sison. When testing compliance with Benford’s Law, we found that the LR-generalized Benford, Wd² and Ad² test statistics performed well for the generalized Benford, Uniform/Benford mixture and Hill/Benford mixture distributions, while Pearson’s χ² and LR-multinomial statistics are more appropriate for the contaminated additive/multiplicative distribution. With respect to simultaneous confidence intervals, we recommend Goodman and Sison to detect deviation from Benford’s Law.
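The first-two-digit extension assigns probability log10(1 + 1/k) to each two-digit prefix k = 10, …, 99. A minimal sketch of a Pearson χ² compliance check (the log-uniform test data are an illustrative choice known to satisfy the law; the other tests and confidence intervals in the thesis are not shown):

```python
import numpy as np
from scipy import stats

def first_two_digits(x):
    # Scale each value into [10, 100) and truncate to its first two digits
    x = np.abs(np.asarray(x, dtype=float))
    x = x[x > 0]
    exp = np.floor(np.log10(x))
    return (x / 10.0 ** (exp - 1)).astype(int)

# Benford probabilities for the first two significant digits, k = 10..99
k = np.arange(10, 100)
benford_p = np.log10(1.0 + 1.0 / k)

rng = np.random.default_rng(2)
sample = 10 ** rng.uniform(0, 5, 5000)   # log-uniform data follow Benford's Law
d12 = first_two_digits(sample)
observed = np.bincount(d12, minlength=100)[10:100]
chi2, p = stats.chisquare(observed, benford_p * observed.sum())
```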
89

The Differential Item Functioning (dif) Analysis Of Mathematics Items In The International Assessment Programs

Yildirim, Huseyin Husnu 01 April 2006 (has links) (PDF)
Cross-cultural studies, like TIMSS and PISA, have been conducted since the 1960s with the idea that these assessments can provide a broad perspective for evaluating and improving education. In addition, countries can assess their relative positions in mathematics achievement among their competitors in the global world. However, because of the different cultural and language settings of different countries, these international tests may not function as expected across all the countries. Thus, the tests may not be equivalent, or fair, linguistically and culturally across the participating countries. In this context, the present study aimed at assessing the equivalence of the mathematics items of TIMSS 1999 and PISA 2003 across cultures and languages, to find out whether mathematics achievement possesses any culture-specific aspects. For this purpose, the present study assessed the Turkish and English versions of the TIMSS 1999 and PISA 2003 mathematics items with respect to (a) psychometric characteristics of the items, and (b) possible sources of Differential Item Functioning (DIF) between these two versions. The study used Restricted Factor Analysis, Mantel-Haenszel statistics and Item Response Theory Likelihood Ratio methodologies to determine DIF items. The results revealed that there were adaptation problems in both the TIMSS and PISA studies. However, it was still possible to determine a subtest of items functioning fairly between cultures, to form a basis for a cross-cultural comparison. In PISA, there was a high rate of agreement among the DIF methodologies used. However, in TIMSS, the agreement rate decreased considerably, possibly because the rate of differentially functioning items within TIMSS was higher, and differential guessing and differential discrimination were also issues in the test. The study also revealed that items requiring competencies of reproduction of practiced knowledge, knowledge of facts, performance of routine procedures, and application of technical skills were less likely to be biased against Turkish students relative to American students at the same ability level. On the other hand, items requiring students to communicate mathematically, items where various results must be compared, and items that had a real-world context were less likely to be in favor of Turkish students.
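Of the DIF methods named above, the Mantel-Haenszel statistic is the easiest to sketch: examinees are stratified by ability, and a common odds ratio of answering correctly is tested across the 2×2 tables, one per stratum. The toy tables below are invented for illustration:

```python
import numpy as np

def mantel_haenszel_chi2(tables):
    """MH chi-square over K ability strata; each table is [[a, b], [c, d]]:
    rows = reference/focal group, columns = correct/incorrect."""
    num, var = 0.0, 0.0
    for (a, b), (c, d) in tables:
        t = a + b + c + d
        if t < 2:
            continue
        e_a = (a + b) * (a + c) / t          # expected correct, reference group
        num += a - e_a
        var += (a + b) * (c + d) * (a + c) * (b + d) / (t * t * (t - 1))
    return (abs(num) - 0.5) ** 2 / var       # with continuity correction

# Toy strata: the focal group (second row) answers correctly less often
tables = [np.array([[40, 10], [25, 25]]),
          np.array([[30, 10], [20, 20]]),
          np.array([[20, 5], [12, 13]])]
chi2_mh = mantel_haenszel_chi2(tables)
# Compare against chi-square(1): values above 3.84 flag DIF at the 5% level
```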
90

Statistical signal processing in sensor networks with applications to fault detection in helicopter transmissions

Galati, F. Antonio Unknown Date (has links) (PDF)
In this thesis two different problems in distributed sensor networks are considered. Part I involves optimal quantiser design for decentralised estimation of a two-state hidden Markov model with dual sensors. The notion of optimality for quantiser design is based on minimising the probability of error in estimating the hidden Markov state. Equations for the filter error are derived for the continuous (unquantised) sensor outputs (signals), which are used to benchmark the performance of the quantisers. Minimising the probability of filter error to obtain the quantiser breakpoints is a difficult problem; therefore, an alternative method is employed. The quantiser breakpoints are obtained by maximising the mutual information between the quantised signals and the hidden Markov state. This method is known to work well for the single sensor case. Cases with independent and correlated noise across the signals are considered. The method is then applied to Markov processes with Gaussian signal noise, and further investigated through simulation studies. Simulations involving both independent and correlated noise across the sensors are performed and a number of interesting new theoretical results are obtained, particularly in the case of correlated noise. In Part II, the focus shifts to the detection of faults in helicopter transmission systems. The aim of the investigation is to determine whether the acoustic signature can be used for fault detection and diagnosis. To investigate this, statistical change detection algorithms are applied to acoustic vibration data obtained from the main rotor gearbox of a Bell 206 helicopter, which is run at high load under test conditions.
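The breakpoint-selection idea, maximising the mutual information between the quantised signal and the hidden state, can be sketched for a one-bit quantiser of two Gaussian observation densities. The state means, prior, and search grid below are illustrative assumptions (a single sensor, no Markov dynamics), not the thesis's dual-sensor setting:

```python
import numpy as np
from scipy.stats import norm

def mutual_information(tau, mu, prior=0.5):
    # I(X; Q) for hidden state X in {0, 1}, observations N(0,1) / N(mu,1),
    # and a one-bit quantiser Q = 1{y > tau}
    p_q1 = np.array([norm.sf(tau), norm.sf(tau - mu)])   # P(Q=1 | X=x)
    px = np.array([1.0 - prior, prior])
    mi = 0.0
    for q_probs in (p_q1, 1.0 - p_q1):                   # Q = 1, then Q = 0
        pq = np.dot(px, q_probs)                         # P(Q=q)
        for x in (0, 1):
            joint = px[x] * q_probs[x]
            if joint > 0 and pq > 0:
                mi += joint * np.log2(joint / (px[x] * pq))
    return mi

mu = 2.0
taus = np.linspace(-2, 4, 601)
mis = np.array([mutual_information(t, mu) for t in taus])
best_tau = taus[np.argmax(mis)]
# With equal priors and symmetric densities, the MI-maximising breakpoint
# sits at the midpoint mu / 2
```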
