201

An Exercise to Introduce Power

Seier, Edith, Liu, Yali 01 March 2013 (has links)
In introductory statistics courses, the concept of power is usually presented in the context of testing hypotheses about the population mean. We instead propose an exercise that uses a binomial probability table to introduce the idea of power in the context of testing a population proportion.
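To make the exercise concrete, here is a minimal Python sketch — not code from the paper, and the choices of n, p0, p1, and alpha are arbitrary — that computes the power of an exact test of a population proportion from binomial tail probabilities, the same calculation a student would perform with a binomial table.

```python
# Illustrative sketch (hypothetical parameters): power of an exact binomial
# test of H0: p = 0.5 vs H1: p = 0.7 with n = 20 trials.
from scipy.stats import binom

n, p0, p1, alpha = 20, 0.5, 0.7, 0.05

# One-sided rejection region: reject H0 when X >= c, with c the smallest
# cutoff such that P(X >= c | p0) <= alpha. binom.sf(k, ...) gives
# P(X > k), so P(X >= c) is binom.sf(c - 1, ...).
c = min(k for k in range(n + 1) if binom.sf(k - 1, n, p0) <= alpha)

size = binom.sf(c - 1, n, p0)   # actual type I error of the discrete test
power = binom.sf(c - 1, n, p1)  # P(reject H0 | p = p1)

print(f"reject when X >= {c}: size = {size:.4f}, power = {power:.4f}")
```

For these values the rejection region is X >= 15, giving a size of about 0.021 and a power of about 0.42 — which also illustrates why exact binomial tests rarely achieve the nominal alpha.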
203

Détection binaire distribuée sous contraintes de communication / Distributed binary detection with communication constraints

Katz, Gil 06 January 2017 (has links)
In recent years, scientific interest in the various aspects of autonomous systems has been growing. From the self-driving car to the Internet of Things (IoT), it is clear that the ability of automated systems to make autonomous decisions in a timely manner is crucial. These systems will often operate under strict constraints on their resources. In this thesis, an information-theoretic approach is taken to this problem, in the hope that a fundamental understanding of the limitations and capabilities of such systems can help future engineers in designing them. Throughout this thesis, collaborative distributed binary decision problems are considered. Two statisticians are required to declare the correct probability measure of two jointly distributed memoryless processes, denoted by $\mathbf{X}^n=(X_1,\dots,X_n)$ and $\mathbf{Y}^n=(Y_1,\dots,Y_n)$, out of two possible probability measures on finite alphabets, namely $P_{XY}$ and $P_{\bar{X}\bar{Y}}$. The marginal samples given by $\mathbf{X}^n$ and $\mathbf{Y}^n$ are assumed to be available at different locations. The statisticians are allowed to exchange limited amounts of data over a perfect channel with a maximum-rate constraint. Throughout the thesis, the nature of this communication varies. Unidirectional communication is considered first: using its own observations, the receiver is required to identify the legitimacy of the sender by declaring the joint distribution of the process, and then, depending on that authentication, to generate an adequate reconstruction of the observations satisfying an average per-letter distortion. Bidirectional communication is subsequently considered, in a scenario that allows interactive exchanges between the participants.
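As a rough illustration of the underlying decision problem — centralized and unconstrained, so it omits the rate-limited communication that is the thesis's actual subject — the following sketch decides between two hypothetical joint distributions via the log-likelihood ratio.

```python
# Illustrative sketch (not the thesis's coded scheme): centralized binary
# hypothesis testing between two joint pmfs P_XY and P_XbarYbar on a binary
# alphabet via the log-likelihood ratio. The distributed, rate-constrained
# setting studied in the thesis adds quantization and communication steps
# that are omitted here.
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical joint pmfs over (X, Y) in {0,1}^2; rows = x, cols = y.
P = np.array([[0.4, 0.1], [0.1, 0.4]])      # P_XY: positively correlated
Q = np.array([[0.25, 0.25], [0.25, 0.25]])  # P_XbarYbar: independent

def sample(pmf, n):
    idx = rng.choice(4, size=n, p=pmf.ravel())
    return np.unravel_index(idx, (2, 2))

n = 200
x, y = sample(P, n)  # data truly drawn from P_XY

# Log-likelihood ratio sum_i log P(x_i,y_i)/Q(x_i,y_i); declare P_XY if > 0.
llr = np.log(P[x, y]).sum() - np.log(Q[x, y]).sum()
print("declare P_XY" if llr > 0 else "declare P_XbarYbar")
```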
204

Tests d’hypothèses statistiquement et algorithmiquement efficaces de similarité et de dépendance / Statistically and computationally efficient hypothesis tests for similarity and dependency

Bounliphone, Wacha 30 January 2017 (has links)
The dissertation presents novel statistically and computationally efficient hypothesis tests for relative similarity and dependency, and for precision-matrix estimation. The key methodology adopted in this thesis is the class of U-statistic estimators, which yield minimum-variance unbiased estimates of their parameters. The first part of the thesis focuses on relative similarity tests applied to the problem of model selection. Probabilistic generative models provide a powerful framework for representing data, but model selection in this generative setting can be challenging. To address this issue, we provide a novel non-parametric hypothesis test of relative similarity, testing whether a first candidate model generates a data sample significantly closer to a reference validation set than a second. The second part of the thesis develops a novel non-parametric statistical hypothesis test for relative dependency. Tests of dependence are important tools in statistical analysis, and several canonical tests for the existence of dependence have been developed in the literature. However, the question of whether a dependency exists at all is often secondary: determining whether one dependence is stronger than another is frequently what matters for decision making. We present a statistical test that determines whether one variable is significantly more dependent on a first target variable or on a second. Finally, a novel method for structure discovery in a graphical model is proposed. Making use of the fact that the zeros of a precision matrix encode conditional independencies, we develop a test that estimates and bounds an entry of the precision matrix. Structure-discovery methods in the literature typically make restrictive distributional (e.g. Gaussian) or sparsity assumptions that may not apply to a data sample of interest. We therefore derive a new test that applies results for U-statistics to the covariance matrix, which in turn implies a bound on the precision matrix.
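For a flavour of the first test, here is a simplified sketch of a relative-similarity comparison. One natural instantiation — assumed here, since the abstract names only U-statistics — measures closeness by the maximum mean discrepancy (MMD), whose square admits an unbiased U-statistic estimator; the data, kernel, and bandwidth below are arbitrary, and the thesis's joint asymptotic analysis of the two statistics is omitted.

```python
# Illustrative sketch (simplified): unbiased U-statistic estimates of the
# squared MMD between each candidate model's sample and a validation set.
# A smaller value suggests a relatively better model; the actual test also
# accounts for the variance of the difference, which is omitted here.
import numpy as np

def mmd2_u(x, y, sigma=1.0):
    """Unbiased U-statistic estimate of MMD^2 with a Gaussian kernel."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * sigma**2))
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    n, m = len(x), len(y)
    # Off-diagonal means for the within-sample terms (unbiasedness).
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(1)
val     = rng.normal(0.0, 1, (300, 2))   # reference validation set
model_a = rng.normal(0.1, 1, (300, 2))   # candidate model A (closer)
model_b = rng.normal(1.0, 1, (300, 2))   # candidate model B (farther)

print(mmd2_u(model_a, val), mmd2_u(model_b, val))
```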
205

Dimensionality Reduction in High-Dimensional Profile Analysis Using Scores

Vikbladh, Jonathan January 2022 (has links)
Profile analysis is a multivariate statistical method for comparing the mean vectors of different groups. It consists of three tests: the tests for parallelism, level, and flatness. The results of each test give information about the behaviour of the groups and of the variables within the groups. The test statistics used when there are more than two groups are likelihood-ratio tests. However, issues in the form of indeterminate test statistics occur in the high-dimensional setting, that is, when there are more variables than observations. This thesis investigates a method that approaches this problem by reducing the dimensionality of the data using scores, i.e. linear combinations of the variables. Three ways of choosing this score are compared: the eigendecomposition and two variations of the non-negative matrix factorization. The methods are compared using simulations for five different types of mean parameter settings. The results show that the eigendecomposition is the best technique for choosing the score, and that using more scores only slightly improves the results. Moreover, the results for the parallelism and flatness tests are very good, but the results for the level hypothesis deviate from expectation.
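The score idea can be sketched in a few lines. The sketch below uses synthetic data and only the eigendecomposition choice of score (the thesis's two non-negative matrix factorization variants, and the downstream likelihood-ratio tests themselves, are omitted): each p-dimensional profile is projected onto a leading eigenvector of the pooled covariance, so the comparison of mean vectors becomes low-dimensional even when p exceeds n.

```python
# Illustrative sketch (hypothetical data): reduce high-dimensional profiles
# to a single score via the eigendecomposition of the pooled covariance,
# making profile tests computable when p > n.
import numpy as np

rng = np.random.default_rng(2)
p, n1, n2 = 100, 20, 20                 # p variables, fewer observations
g1 = rng.normal(0.0, 1, (n1, p))        # group 1
g2 = rng.normal(0.3, 1, (n2, p))        # group 2, shifted mean profile

# Pooled covariance and its leading eigenvector.
pooled = ((n1 - 1) * np.cov(g1.T) + (n2 - 1) * np.cov(g2.T)) / (n1 + n2 - 2)
eigvals, eigvecs = np.linalg.eigh(pooled)
w = eigvecs[:, -1]                      # eigenvector of the largest eigenvalue

# Scores: one linear combination per observation; the p-dimensional
# comparison of mean vectors reduces to a univariate two-sample problem.
s1, s2 = g1 @ w, g2 @ w
print(s1.mean(), s2.mean())
```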
206

Spatial Pattern of Yield Distributions: Implications for Crop Insurance

Annan, Francis 11 August 2012 (has links)
Despite the potential benefits of larger datasets for crop insurance ratings, pooling yields with similar distributions is not a common practice. The current USDA-RMA county insurance ratings do not consider information across state lines, a politically driven restriction that ignores a wealth of climate and agronomic evidence suggesting that growing regions are not constrained by state boundaries. We test the appropriateness of this restriction and provide empirical grounds for the benefits of pooling datasets. We find evidence in favor of pooling across state lines, with poolable counties sometimes as far as 2,500 miles apart. An out-of-sample performance exercise suggests that our proposed pooling framework outperforms a no-pooling alternative and supports the hypothesis that economic losses should be expected from not adopting it. Our findings have strong empirical and policy implications for the accurate modeling of yield distributions and for the rating of crop insurance products.
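As a toy illustration of a poolability check — this is not the paper's actual framework, and the yield numbers are invented — one can test whether two counties' detrended yields are distributionally similar before pooling them:

```python
# Illustrative sketch (hypothetical yields): a two-sample Kolmogorov-Smirnov
# test of whether detrended yields from two counties could come from the
# same distribution, the kind of poolability question the paper formalizes.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
county_a = rng.normal(150, 20, 40)   # 40 years of detrended yields
county_b = rng.normal(152, 21, 40)   # a county across a state line

stat, pval = ks_2samp(county_a, county_b)
# Failing to reject suggests the two counties may be pooled for rating.
print(f"KS statistic = {stat:.3f}, p-value = {pval:.3f}")
```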
207

Statistical quality assurance of IGUM : Statistical quality assurance and validation of IGUM in a steady and dynamic gas flow prior to proof of concept

Kornsäter, Elin, Kallenberg, Dagmar January 2022 (has links)
To further support and optimise the production of diving tables for the Armed Forces of Sweden, a research team has developed a new machine called IGUM (Inert Gas UndersökningsMaskin), which measures how inert gas is taken up and exhaled. Because of the machine's new design, the goal of this thesis was to statistically validate its accuracy and verify its reliability. In the first stage, a quality assurance of IGUM's linear-position conversion key in a steady, known gas flow was conducted. Data were collected in 29 experiments and analysed with ordinary least squares, hypothesis testing, analysis of variance, bootstrapping, and Bayesian hierarchical modelling. Autocorrelation among the residuals was detected but, based on the bootstrap analysis, was concluded not to affect the results. The results showed an estimated conversion key of 1.276 ml per linear position, statistically significant across all 29 experiments. In the second stage, it was examined whether, and how well, IGUM could detect small additions of gas in a dynamic flow. The breathing machine ANSTI was used to simulate the sinusoidal pattern of a breathing human in 24 experiments, in each of which 3 additions of 30 ml of gas were manually added into the system. The results were analysed through sinusoidal regression, with three dummy variables representing the three gas additions in each experiment. To examine whether IGUM detects 30 ml for each input, the previously validated conversion key of 1.276 ml per linear position was used. An attempt was made to remove the seasonal trend in the data; this was not completely successful, which could influence the estimates. The results showed that IGUM can indeed detect these small gas additions, although the amount detected differed somewhat between dummies and experiments. This is most likely because the trend was not fully removed, rather than because IGUM is not working properly.
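A minimal sketch of the first-stage estimation follows, using synthetic data: the true key 1.276 is used only to generate fake measurements, and the through-the-origin regression form and noise level are assumptions, not details taken from the thesis.

```python
# Illustrative sketch (synthetic data): estimating a conversion key
# (ml per linear-position unit) by ordinary least squares through the
# origin, with a nonparametric bootstrap confidence interval.
import numpy as np

rng = np.random.default_rng(4)
position = rng.uniform(0, 400, 200)                 # linear-position readings
volume = 1.276 * position + rng.normal(0, 5, 200)   # measured gas volume (ml)

# OLS slope through the origin: volume ~ key * position.
key_hat = (position @ volume) / (position @ position)

# Bootstrap over (position, volume) pairs.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(position), len(position))
    p, v = position[idx], volume[idx]
    boot.append((p @ v) / (p @ p))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"key = {key_hat:.3f} ml/position, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```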
208

Towards a Human Genomic Coevolution Network

Savel, Daniel M. 04 June 2018 (has links)
No description available.
209

A Monte Carlo Study of Several Alpha-Adjustment Procedures Used in Testing Multiple Hypotheses in Factorial ANOVA

An, Qian 20 July 2010 (has links)
No description available.
210

Variance Change Point Detection under A Smoothly-changing Mean Trend with Application to Liver Procurement

Gao, Zhenguo 23 February 2018 (has links)
Literature on change point analysis mostly requires a sudden change in the data distribution, either in a few parameters or in the distribution as a whole. We are interested in the scenario in which the variance of the data makes a significant jump while the mean changes in a smooth fashion, motivated by a liver procurement experiment with organ surface temperature monitoring. Blindly applying existing change point analysis methods to this example can yield erratic change point estimates, since the smoothly-changing mean violates the sudden-change assumption. In this dissertation, we propose a penalized weighted least squares approach with an iterative estimation procedure that naturally integrates variance change point detection and smooth mean function estimation. Given the variance components, the mean function is estimated by smoothing splines as the minimizer of the penalized weighted least squares. Given the mean function, we propose a likelihood ratio test statistic for identifying the variance change point. The null distribution of the test statistic is derived, together with the rates of convergence of all the parameter estimates. Simulations show excellent performance of the proposed method, and the application analysis offers numerical support for non-invasive organ viability assessment by surface temperature monitoring. This method can only yield the variance change point of the temperature at a single point on the organ surface at a time. In practice, an organ is often transplanted as a whole or in part, so it is generally of more interest to study the variance change point for a chunk of the organ. With this motivation, we extend our method to study the variance change point for a chunk of the organ surface. The variances now become functions on a 2D space of locations (longitude and latitude), and the mean is a function on a 3D space of location and time. We model the variance functions by thin-plate splines and the mean function by the tensor product of thin-plate splines and cubic splines. However, the additional dimensions in these functions incur serious computational problems, since the sample size, as the product of the number of locations and the number of sampling time points, becomes too large to run standard multi-dimensional spline models. To overcome this computational hurdle, we introduce a multi-stage subsampling strategy into our modified iterative algorithm. The strategy involves several down-sampling or subsampling steps guided by preliminary statistical measures. Extensive simulations show that the new method can efficiently cut the computational cost and make a practically unsolvable problem solvable in reasonable time with satisfactory parameter estimates. Application of the new method to the liver surface temperature monitoring data shows its effectiveness in providing accurate status-change information for a portion of, or the whole, organ. / Ph. D. / Viability evaluation is the key issue in organ transplant operations: the donated organ must be viable at the time it is transplanted into the recipient. Viability can be assessed by analyzing temperature data monitored on the organ surface. In this dissertation, I have developed two new statistical methods to evaluate the viability status of a prepared organ from its surface temperature. The first method detects the change of viability status at a single spot on the organ surface. The second detects the change of viability condition for selected chunks of the organ. In practice, combining the two methods provides accurate viability status-change information for a portion of, or the whole, organ.
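To make the first method's two-step idea concrete, here is a minimal synthetic sketch: a smoothing-spline fit of the mean followed by a likelihood-ratio scan of the residuals for a single variance change point. The smoothing level and data are invented, and the dissertation's penalized weighted least squares iteration and null-distribution theory are omitted.

```python
# Illustrative sketch (synthetic data): one iteration of the idea above —
# estimate the smooth mean with a smoothing spline, then scan the residuals
# with a Gaussian likelihood-ratio statistic for a variance change point.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 400)
sigma = np.where(t < 0.6, 0.2, 0.6)           # variance jumps at t = 0.6
y = np.sin(4 * t) + rng.normal(0, sigma)      # smooth mean, abrupt variance

# Hand-picked smoothing level for this synthetic example.
resid = y - UnivariateSpline(t, y, s=60.0)(t)

def neg2loglik(r):
    # -2 log Gaussian likelihood at the MLE variance (constants cancel).
    return len(r) * np.log(r.var())

# Likelihood-ratio scan over candidate split points of the residuals.
full = neg2loglik(resid)
scores = [full - neg2loglik(resid[:k]) - neg2loglik(resid[k:])
          for k in range(10, len(resid) - 10)]
k_hat = int(np.argmax(scores)) + 10
print(f"estimated variance change point at t = {t[k_hat]:.3f}")
```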
