201

Tests d’hypothèses statistiquement et algorithmiquement efficaces de similarité et de dépendance / Statistically and computationally efficient hypothesis tests for similarity and dependency

Bounliphone, Wacha 30 January 2017
This dissertation presents novel statistically and computationally efficient hypothesis tests for relative similarity and dependency, and for precision matrix estimation. The key methodology adopted in the thesis is the class of U-statistic estimators, which yield minimum-variance unbiased estimates of the parameters of interest.

The first part of the thesis focuses on relative similarity tests applied to the problem of model selection. Probabilistic generative models provide a powerful framework for representing data, but model selection in this generative setting can be challenging. To address this issue, we propose a novel non-parametric hypothesis test of relative similarity, testing whether a first candidate model generates a data sample significantly closer to a reference validation set than a second candidate model does.

The second part of the thesis develops a novel non-parametric statistical hypothesis test for relative dependency. Tests of dependence are important tools in statistical analysis, and several canonical tests for the existence of dependence have been developed in the literature. However, the question of whether a dependency exists at all is often secondary: for decision making it is frequently necessary to determine whether one dependence is stronger than another. We present a statistical test that determines whether a variable is significantly more dependent on a first target variable or on a second.

Finally, a novel method for structure discovery in a graphical model is proposed. Making use of the fact that zeros of a precision matrix encode conditional independencies, we develop a test that estimates and bounds an entry of the precision matrix. Existing methods for structure discovery typically make restrictive distributional (e.g. Gaussian) or sparsity assumptions that may not apply to the data sample of interest. We therefore derive a new test that applies results for U-statistics to the covariance matrix, which in turn implies a bound on the precision matrix.
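To make the relative similarity idea concrete, the following is a minimal sketch, not the construction used in the dissertation (which derives the asymptotic joint distribution of the two U-statistics in closed form): it computes unbiased U-statistic estimates of MMD² between each candidate model's sample and a reference validation set, and uses a simple resampling estimate of the variance of their difference to obtain a one-sided p-value. The Gaussian kernel, its bandwidth, the bootstrap step, and the simulated samples are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def rbf_kernel(a, b, bandwidth=1.0):
    # Gaussian RBF kernel matrix between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2_u(x, y, bandwidth=1.0):
    # Unbiased (U-statistic) estimate of squared MMD between samples x and y.
    kxx, kyy, kxy = (rbf_kernel(u, v, bandwidth) for u, v in ((x, x), (y, y), (x, y)))
    n, m = len(x), len(y)
    np.fill_diagonal(kxx, 0.0)          # drop diagonal terms for unbiasedness
    np.fill_diagonal(kyy, 0.0)
    return kxx.sum() / (n * (n - 1)) + kyy.sum() / (m * (m - 1)) - 2.0 * kxy.mean()

def relative_similarity_test(x, y, ref, bandwidth=1.0, n_boot=300, seed=0):
    # One-sided test of H0: model X is no closer to the reference set than model Y,
    # i.e. H0: MMD^2(x, ref) >= MMD^2(y, ref).
    rng = np.random.default_rng(seed)
    observed = mmd2_u(x, ref, bandwidth) - mmd2_u(y, ref, bandwidth)
    boot = []
    for _ in range(n_boot):
        xb, yb, rb = (s[rng.integers(0, len(s), len(s))] for s in (x, y, ref))
        boot.append(mmd2_u(xb, rb, bandwidth) - mmd2_u(yb, rb, bandwidth))
    se = np.std(boot, ddof=1)           # bootstrap estimate of the standard error
    return observed, stats.norm.cdf(observed / se)

# Hypothetical example: model X matches the reference better than model Y.
rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=(200, 2))
model_x = rng.normal(0.1, 1.0, size=(200, 2))
model_y = rng.normal(0.8, 1.0, size=(200, 2))
diff, p = relative_similarity_test(model_x, model_y, reference)
print(f"MMD^2 difference = {diff:.4f}, one-sided p-value = {p:.4f}")
```

A small p-value indicates that the first model's sample is significantly closer to the validation set under the chosen kernel.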
202

Dimensionality Reduction in High-Dimensional Profile Analysis Using Scores

Vikbladh, Jonathan January 2022
Profile analysis is a multivariate statistical method for comparing the mean vectors of different groups. It consists of three tests: the tests for parallelism, level and flatness. The results of each test give information about the behaviour of the groups and of the variables within the groups. The test statistics used when there are more than two groups are likelihood-ratio tests. However, these test statistics become indeterminate in the high-dimensional setting, that is, when there are more variables than observations. This thesis investigates a method that approaches the problem by reducing the dimensionality of the data using scores, that is, linear combinations of the variables. Three different ways of choosing the score are compared: the eigendecomposition and two variations of the non-negative matrix factorization. The methods are compared using simulations for five different types of mean parameter settings. The results show that the eigendecomposition is the best technique for choosing the score, and that using more scores only slightly improves the results. Moreover, the results for the parallelism and flatness tests are very good, but the results for the level hypothesis deviate from expectation.
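As an illustration of the score idea only, the sketch below reduces high-dimensional profiles to a single score defined by the leading eigenvector of the pooled within-group covariance, after which classical low-dimensional comparisons become well defined again. The group sizes, the mean shift, and the use of a plain two-sample t-test on the score (standing in for the parallelism/level/flatness likelihood-ratio tests) are assumptions made for the example, not the thesis's procedure.

```python
import numpy as np
from scipy import stats

def eigen_scores(groups, n_scores=1):
    # Leading eigenvectors of the pooled within-group covariance matrix,
    # used as weights for the score (a linear combination of the variables).
    centered = [g - g.mean(axis=0) for g in groups]
    dof = sum(len(g) for g in groups) - len(groups)
    pooled = sum(c.T @ c for c in centered) / dof
    _, eigvec = np.linalg.eigh(pooled)          # eigenvalues in ascending order
    return eigvec[:, ::-1][:, :n_scores]        # keep the top n_scores columns

rng = np.random.default_rng(1)
p, n1, n2 = 60, 15, 12                          # more variables than observations
group1 = rng.normal(0.0, 1.0, size=(n1, p))
group2 = rng.normal(0.3, 1.0, size=(n2, p))

weights = eigen_scores([group1, group2], n_scores=1)
score1, score2 = group1 @ weights, group2 @ weights
print(stats.ttest_ind(score1.ravel(), score2.ravel(), equal_var=True))
```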
203

Spatial Pattern of Yield Distributions: Implications for Crop Insurance

Annan, Francis 11 August 2012
Despite the potential benefits of larger datasets for crop insurance rating, pooling yields with similar distributions is not a common practice. The current USDA-RMA county insurance ratings do not consider information across state lines, a politically driven assumption that ignores a wealth of climatic and agronomic evidence that growing regions are not constrained by state boundaries. We test the appropriateness of this assumption and provide empirical grounds for the benefits of pooling datasets. We find evidence in favor of pooling across state lines, with poolable counties sometimes as far as 2,500 miles apart. An out-of-sample performance exercise suggests that our proposed pooling framework outperforms a no-pooling alternative and supports the hypothesis that economic losses should be expected from not adopting the pooling framework. Our findings have strong empirical and policy implications for the accurate modeling of yield distributions and for the rating of crop insurance products.
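The sketch below illustrates one simple way to check empirically whether yields from two counties are consistent with a common distribution before pooling them. The linear detrending, the Kolmogorov–Smirnov test, the 5% threshold, and the simulated yield series are assumptions for illustration and not the paper's actual rating procedure.

```python
import numpy as np
from scipy import stats

def detrend(yields):
    # Remove a linear technology trend so that only the yield risk remains.
    t = np.arange(len(yields))
    slope, intercept = np.polyfit(t, yields, deg=1)
    return yields - (intercept + slope * t)

rng = np.random.default_rng(7)
years = np.arange(30)
county_a = 150 + 0.8 * years + rng.normal(0, 12, years.size)   # hypothetical bu/acre series
county_b = 148 + 0.8 * years + rng.normal(0, 12, years.size)

stat, p_value = stats.ks_2samp(detrend(county_a), detrend(county_b))
poolable = p_value > 0.05        # fail to reject a common distribution of detrended yields
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}, pool the counties: {poolable}")
```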
204

Statistical quality assurance of IGUM : Statistical quality assurance and validation of IGUM in a steady and dynamic gas flow prior to proof of concept

Kornsäter, Elin, Kallenberg, Dagmar January 2022
To further support and optimise the production of diving tables for the Armed Forces of Sweden, a research team has developed a new machine called IGUM (Inert Gas UndersökningsMaskin), which measures how inert gas is taken up and exhaled. Because of the machine's new design, the goal of this thesis was to statistically validate its accuracy and verify its reliability.  In the first stage, a quality assurance of IGUM's linear-position conversion key was conducted in a steady, known gas flow. Data from 29 experiments were collected and analysed with ordinary least squares, hypothesis testing, analysis of variance, bootstrapping and Bayesian hierarchical modelling. Autocorrelation among the residuals was detected but, based on the bootstrap analysis, was concluded not to affect the results. The estimated conversion key was 1.276 ml per linear position and was statistically significant in all 29 experiments.  In the second stage, it was examined whether, and how well, IGUM could detect small additions of gas in a dynamic flow. The breathing machine ANSTI was used to simulate the sinusoidal pattern of a breathing human in 24 experiments, in each of which three additions of 30 ml of gas were manually added to the system. The results were analysed through sinusoidal regression, with three dummy variables representing the three gas additions in each experiment. To examine whether IGUM detects 30 ml for each input, the previously validated conversion key of 1.276 ml per linear position was used. An attempt was made to remove the seasonal trend in the data; this was not completely successful and could influence the estimates. The results show that IGUM can indeed detect these small gas additions, although the amount detected differs somewhat between dummies and experiments. This is most likely because not enough of the trend was removed, rather than because IGUM is not working properly.
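As a sketch of the first-stage analysis, the code below estimates a conversion key by ordinary least squares on simulated readings and attaches a case-resampling bootstrap confidence interval. The simulated linear positions and noise level are hypothetical; only the nominal value of 1.276 ml per linear position is taken from the abstract, and the thesis's full analysis (ANOVA, Bayesian hierarchical modelling) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
linear_position = np.arange(0.0, 200.0, 5.0)                 # hypothetical IGUM readings
volume_ml = 1.276 * linear_position + rng.normal(0, 2.0, linear_position.size)

def ols_slope(x, y):
    # Slope of the ordinary least-squares fit of y on x (intercept handled by centering).
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

key_hat = ols_slope(linear_position, volume_ml)
boot = np.array([ols_slope(linear_position[idx], volume_ml[idx])
                 for idx in [rng.integers(0, linear_position.size, linear_position.size)
                             for _ in range(2000)]])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"conversion key ~ {key_hat:.3f} ml/position, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```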
205

Towards a Human Genomic Coevolution Network

Savel, Daniel M. 04 June 2018
No description available.
206

A Monte Carlo Study of Several Alpha-Adjustment Procedures Used in Testing Multiple Hypotheses in Factorial Anova

An, Qian 20 July 2010
No description available.
207

Variance Change Point Detection under A Smoothly-changing Mean Trend with Application to Liver Procurement

Gao, Zhenguo 23 February 2018
Literature on change point analysis mostly requires a sudden change in the data distribution, either in a few parameters or in the distribution as a whole. We are interested in the scenario where the variance of the data makes a significant jump while the mean changes smoothly. The problem is motivated by a liver procurement experiment with organ surface temperature monitoring. Blindly applying existing change point methods to this example can yield erratic change point estimates, since the smoothly-changing mean violates the sudden-change assumption. In this dissertation, we propose a penalized weighted least squares approach with an iterative estimation procedure that naturally integrates variance change point detection and smooth mean function estimation. Given the variance components, the mean function is estimated by smoothing splines as the minimizer of the penalized weighted least squares. Given the mean function, we propose a likelihood ratio test statistic for identifying the variance change point. The null distribution of the test statistic is derived together with the rates of convergence of all the parameter estimates. Simulations show excellent performance of the proposed method, and the application analysis offers numerical support for non-invasive organ viability assessment by surface temperature monitoring.

The method above yields the variance change point of the temperature at a single point on the organ surface at a time. In practice, an organ is often transplanted as a whole or in part, so it is generally of more interest to study the variance change point for a chunk of the organ. With this motivation, we extend our method to a chunk of the organ surface. The variances now become functions on a 2D space of locations (longitude and latitude) and the mean is a function on a 3D space of location and time. We model the variance functions by thin-plate splines and the mean function by the tensor product of thin-plate splines and cubic splines. However, the additional dimensions incur serious computational problems, since the sample size, as the product of the number of locations and the number of sampling time points, becomes too large to run standard multi-dimensional spline models. To overcome this computational hurdle, we introduce a multi-stage subsampling strategy into our modified iterative algorithm. The strategy involves several down-sampling or subsampling steps guided by preliminary statistical measures. Extensive simulations show that the new method efficiently cuts down the computational cost and makes a practically unsolvable problem solvable within reasonable time and with satisfactory parameter estimates. Application of the new method to the liver surface temperature monitoring data shows its effectiveness in providing accurate status-change information for a portion of, or the whole, organ. / Ph. D.
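The following is a simplified, non-iterative sketch of the single-location problem described above; it is not the dissertation's penalized weighted least squares algorithm. The smooth mean is removed with a smoothing spline, and the residuals are then scanned with a Gaussian likelihood-ratio statistic for a single variance change point. The simulated signal, the noise-based spline smoothing level and the trimming of the candidate set are assumptions of the example.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)
noise_sd = np.where(t < 0.6, 0.1, 0.4)                    # variance jumps at t = 0.6
y = np.sin(2 * np.pi * t) + rng.normal(0.0, noise_sd)     # smoothly-changing mean underneath

# Rough noise variance from first differences sets the spline smoothing level.
noise_var = np.var(np.diff(y)) / 2.0
resid = y - UnivariateSpline(t, y, s=len(t) * noise_var)(t)

def lr_stat(r, k):
    # -2 log likelihood ratio for a single variance change after index k
    # (Gaussian residuals, mean already removed).
    n = len(r)
    return n * np.log(r.var()) - k * np.log(r[:k].var()) - (n - k) * np.log(r[k:].var())

candidates = range(20, len(resid) - 20)                   # keep both segments sizeable
k_hat = max(candidates, key=lambda k: lr_stat(resid, k))
print(f"estimated variance change point near t = {t[k_hat]:.3f}")
```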
208

Die kerk en die sorggewers van VIGS-weeskinders (The church and the caregivers of AIDS orphans)

Strydom, Marina 01 January 2002
Text in Afrikaans / Because of the demanding nature of caring for AIDS orphans, the caregivers often find themselves in a position where they themselves need care and support. The question arose as to how these caregivers can be supported. It became clear that the church, out of its social responsibility, can offer care and support to the caregivers. Caregivers at one institution that took part in the research journey indeed did not receive enough care and support from the church, and this lack of support has a direct influence on how they cope with the demands of caregiving. Caregivers at the other two participating institutions do receive enough support from church members, and this makes a great difference to how the stress of caregiving is experienced. The study takes a critical look at the ways in which the church is involved, and can become further involved, with the caregivers of AIDS orphans. / Philosophy, Practical & Systematic Theology / M.Th. (Praktiese Teologie)
209

Creating Systems and Applying Large-Scale Methods to Improve Student Remediation in Online Tutoring Systems in Real-time and at Scale

Selent, Douglas A 08 June 2017
"A common problem shared amongst online tutoring systems is the time-consuming nature of content creation. It has been estimated that an hour of online instruction can take up to 100-300 hours to create. Several systems have created tools to expedite content creation, such as the Cognitive Tutors Authoring Tool (CTAT) and the ASSISTments builder. Although these tools make content creation more efficient, they all still depend on the efforts of a content creator and/or past historical. These tools do not take full advantage of the power of the crowd. These issues and challenges faced by online tutoring systems provide an ideal environment to implement a solution using crowdsourcing. I created the PeerASSIST system to provide a solution to the challenges faced with tutoring content creation. PeerASSIST crowdsources the work students have done on problems inside the ASSISTments online tutoring system and redistributes that work as a form of tutoring to their peers, who are in need of assistance. Multi-objective multi-armed bandit algorithms are used to distribute student work, which balance exploring which work is good and exploiting the best currently known work. These policies are customized to run in a real-world environment with multiple asynchronous reward functions and an infinite number of actions. Inspired by major companies such as Google, Facebook, and Bing, PeerASSIST is also designed as a platform for simultaneous online experimentation in real-time and at scale. Currently over 600 teachers (grades K-12) are requiring students to show their work. Over 300,000 instances of student work have been collected from over 18,000 students across 28,000 problems. From the student work collected, 2,000 instances have been redistributed to over 550 students who needed help over the past few months. I conducted a randomized controlled experiment to evaluate the effectiveness of PeerASSIST on student performance. Other contributions include representing learning maps as Bayesian networks to model student performance, creating a machine-learning algorithm to derive student incorrect processes from their incorrect answer and the inputs of the problem, and applying Bayesian hypothesis testing to A/B experiments. We showed that learning maps can be simplified without practical loss of accuracy and that time series data is necessary to simplify learning maps if the static data is highly correlated. I also created several interventions to evaluate the effectiveness of the buggy messages generated from the machine-learned incorrect processes. The null results of these experiments demonstrate the difficulty of creating a successful tutoring and suggest that other methods of tutoring content creation (i.e. PeerASSIST) should be explored."
210

A PROFICIÊNCIA MATEMÁTICA DOS ALUNOS DO NÚCLEO REGIONAL DE EDUCAÇÃO DE PONTA GROSSA NO SAEP 2012: UMA ANÁLISE DOS DESCRITORES DO TRATAMENTO DA INFORMAÇÃO (The mathematical proficiency of students of the Núcleo Regional de Educação de Ponta Grossa in SAEP 2012: an analysis of the Treatment of Information descriptors)

Anjos, Luiz Fabiano dos 19 March 2015
External evaluations within an assessment system, at both the national and state level, collect data that give managers and the school community information about the performance of educational institutions and of each student. This work analyses the performance of third-year high-school students from public schools under the jurisdiction of the Núcleo Regional de Educação de Ponta Grossa in the external evaluation of the Education Assessment System of the State of Paraná (SAEP), in Mathematics, with respect to the structuring content Tratamento da Informação (Treatment of Information). The recommendations for teaching this content were examined in the official documents, and the textbooks used by the institutions were examined for how, and at what point, the Treatment of Information content is presented. The study was conducted as a bibliographic investigation with a qualitative and quantitative approach. As the research and statistical analysis tool, a hypothesis test was used to analyse the data in response to the initial question. The analysis of the data and the test corroborated the hypothesis that the adopted textbook does not significantly affect the average performance of the students in the structuring content analysed.
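As an illustration of the kind of hypothesis test described, the sketch below runs a two-sample Welch t-test on simulated proficiency scores for schools using two different textbooks. The score distributions, sample sizes and choice of test are assumptions and do not reproduce the study's data or exact test statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2015)
scores_textbook_a = rng.normal(265, 40, size=120)   # hypothetical SAEP-style scores
scores_textbook_b = rng.normal(262, 40, size=95)

t_stat, p_value = stats.ttest_ind(scores_textbook_a, scores_textbook_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value is consistent with the study's conclusion that the adopted
# textbook does not significantly affect average performance.
```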
