  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Equivalence testing for identity authentication using pulse waves from photoplethysmograph

Wu, Mengjiao January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Suzanne Dubnicka / Christopher Vahl / Photoplethysmograph sensors use a light-based technology to sense the rate of blood flow as controlled by the heart’s pumping action. This allows for a graphical display of a patient’s pulse wave form and the description of its key features. A person’s pulse wave has been proposed as a tool in a wide variety of applications. For example, it could be used to diagnose the cause of coldness felt in the extremities or to measure stress levels while performing certain tasks. It could also be applied to quantify the risk of heart disease in the general population. In the present work, we explore its use for identity authentication. First, we visualize the pulse waves from individual patients using functional boxplots, which assess the overall behavior and identify unusual observations. Functional boxplots are also shown to be helpful in preprocessing the data by shifting individual pulse waves to a proper starting point. We then employ functional analysis of variance (FANOVA) and permutation tests to demonstrate that the identities of a group of subjects can be differentiated and compared by their pulse wave forms. One of the primary tasks of the project is to confirm the identity of a person, i.e., we must decide if a given person is whom they claim to be. We use an equivalence test to determine whether the pulse wave of the person under verification and that of the actual person are close enough to be considered equivalent. A nonparametric bootstrap functional equivalence test is applied to evaluate equivalence by constructing point-wise confidence intervals for the metric of identity assurance. We also propose new testing procedures for authenticating identity, including the construction of the equivalence hypothesis and test statistics and the determination of the evaluation range and equivalence bands.
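The bootstrap equivalence idea above can be sketched in a few lines. This is a minimal illustration, not the author's actual procedure: it resamples curves with replacement, builds pointwise confidence intervals for the difference in mean curves, and declares equivalence only if the whole interval band sits inside a user-chosen band [-delta, delta]. The function name and the simple all-points criterion are assumptions for illustration.

```python
import numpy as np

def bootstrap_equivalence(curves_a, curves_b, delta=0.1, n_boot=1000,
                          alpha=0.05, rng=None):
    """Pointwise bootstrap equivalence check for two groups of curves.

    curves_a, curves_b: arrays of shape (n_curves, n_timepoints).
    Equivalence is declared only if, at every time point, the
    (1 - alpha) bootstrap confidence interval for the difference in
    mean curves lies inside the equivalence band [-delta, delta].
    """
    rng = np.random.default_rng(rng)
    a = np.asarray(curves_a, dtype=float)
    b = np.asarray(curves_b, dtype=float)
    diffs = np.empty((n_boot, a.shape[1]))
    for i in range(n_boot):
        ra = a[rng.integers(0, len(a), size=len(a))]   # resample curves
        rb = b[rng.integers(0, len(b), size=len(b))]   # with replacement
        diffs[i] = ra.mean(axis=0) - rb.mean(axis=0)
    lo = np.quantile(diffs, alpha / 2, axis=0)
    hi = np.quantile(diffs, 1 - alpha / 2, axis=0)
    equivalent = bool(np.all(lo > -delta) and np.all(hi < delta))
    return equivalent, lo, hi
```

In practice delta would be calibrated from within-person pulse-wave variability; here it is just a free parameter.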
2

Svertinių rodiklių agregavimo lygmens parinkimas / Choice of the sectoral aggregation level

Kačkina, Julija 08 September 2009 (has links)
In this work I summarize the problem of choosing between micro and macro linear forecasting models. Aggregation is understood as sectoral aggregation, and the models belong to the class of univariate linear regressions. I derive a criterion for choosing between macro and micro models, and a test of perfect aggregation for the cases of linear aggregation with fixed and with random weights. In the latter case I recommend testing perfect aggregation with a permutation test. The results are illustrated with an economic example: the Lithuanian average wage is modelled both with an aggregate model and within individual sectors of economic activity. The analysis shows that the models are equivalent. / This paper focuses on the choice between macro and micro models. I suggest a hypothesis testing procedure for in-sample model selection for such variables as the average wage. Empirical results show that the Lithuanian average wage should be predicted using the aggregate model.
3

On the statistical analysis of functional data arising from designed experiments

Sirski, Monica 10 April 2012 (has links)
We investigate various methods for testing whether two groups of curves are statistically significantly different, with the motivation to apply the techniques to the analysis of data arising from designed experiments. We propose a set of tests based on pairwise differences between individual curves. Our objective is to compare the power and robustness of a variety of tests, including a collection of permutation tests, a test based on the functional principal components scores, the adaptive Neyman test and the functional F test. We illustrate the application of these tests in the context of a designed 2^4 factorial experiment with a case study using data provided by NASA. We apply the methods for comparing curves to this factorial data by dividing the data into two groups by each effect (A, B, . . . , ABCD) in turn. We carry out a large simulation study investigating the power of the tests in detecting contamination, location, and shift effects on unimodal and monotone curves. We conclude that the permutation test using the mean of the pairwise differences in L1 norm has the best overall power performance and is a robust test statistic applicable in a wide variety of situations. The advantage of using a permutation test is that it is an exact, distribution-free test that performs well overall when applied to functional data. This test may be extended to more than two groups by constructing test statistics based on averages of pairwise differences between curves from the different groups and, as such, is an important building-block for larger experiments and more complex designs.
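The winning procedure above, a permutation test whose statistic is the mean of pairwise L1 differences between curves, can be sketched as follows. This is a hedged reading of the abstract, not the thesis code: the statistic here averages the L1 distance over all cross-group pairs, and group labels are shuffled to build the reference distribution.

```python
import numpy as np

def mean_pairwise_l1(group_a, group_b):
    """Mean L1 distance over all cross-group pairs of curves."""
    d = np.abs(group_a[:, None, :] - group_b[None, :, :])
    return d.sum(axis=2).mean()

def l1_permutation_test(group_a, group_b, n_perm=199, rng=None):
    """Permutation p-value for the difference between two groups of
    curves, each stored as rows of an (n_curves, n_timepoints) array."""
    rng = np.random.default_rng(rng)
    obs = mean_pairwise_l1(group_a, group_b)
    pooled = np.vstack([group_a, group_b])
    na = len(group_a)
    hits = 1                      # count the observed statistic itself
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if mean_pairwise_l1(pooled[idx[:na]], pooled[idx[na:]]) >= obs:
            hits += 1
    return hits / (n_perm + 1)
```

Because the p-value is computed by counting permuted statistics at least as extreme as the observed one, the test is exact and distribution-free, which is the property the abstract highlights.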
5

Computational Medical Image Analysis : With a Focus on Real-Time fMRI and Non-Parametric Statistics

Eklund, Anders January 2012 (has links)
Functional magnetic resonance imaging (fMRI) is a prime example of multi-disciplinary research. Without the beautiful physics of MRI, there would not be any images to look at in the first place. To obtain images of good quality, it is necessary to fully understand the concepts of the frequency domain. The analysis of fMRI data requires understanding of signal processing, statistics and knowledge about the anatomy and function of the human brain. The resulting brain activity maps are used by physicians, neurologists, psychologists and behaviourists, in order to plan surgery and to increase their understanding of how the brain works.

This thesis presents methods for real-time fMRI and non-parametric fMRI analysis. Real-time fMRI places high demands on the signal processing, as all the calculations have to be made in real-time in complex situations. Real-time fMRI can, for example, be used for interactive brain mapping. Another possibility is to change the stimulus that is given to the subject, in real-time, such that the brain and the computer can work together to solve a given task, yielding a brain computer interface (BCI). Non-parametric fMRI analysis, for example, concerns the problem of calculating significance thresholds and p-values for test statistics without a parametric null distribution.

Two BCIs are presented in this thesis. In the first BCI, the subject was able to balance a virtual inverted pendulum by thinking of activating the left or right hand or resting. In the second BCI, the subject in the MR scanner was able to communicate with a person outside the MR scanner, through a virtual keyboard.

A graphics processing unit (GPU) implementation of a random permutation test for single subject fMRI analysis is also presented. The random permutation test is used to calculate significance thresholds and p-values for fMRI analysis by canonical correlation analysis (CCA), and to investigate the correctness of standard parametric approaches. The random permutation test was verified by using 10 000 noise datasets and 1484 resting state fMRI datasets. The random permutation test is also used for a non-local CCA approach to fMRI analysis.
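The core of a permutation-derived significance threshold can be sketched without the GPU or CCA machinery. This toy version (names and the plain correlation statistic are assumptions, not the thesis implementation) permutes the design, records the maximum voxel statistic each time, and takes a quantile of that maximum distribution as a familywise-error threshold.

```python
import numpy as np

def voxel_stats(data, design):
    """Absolute correlation of every voxel time series with the design."""
    d = data - data.mean(axis=1, keepdims=True)
    g = design - design.mean()
    r = d @ g / (np.linalg.norm(d, axis=1) * np.linalg.norm(g))
    return np.abs(r)

def permutation_threshold(data, design, n_perm=300, alpha=0.05, rng=None):
    """Familywise-error significance threshold from the permutation
    distribution of the maximum voxel statistic.

    data: (n_voxels, n_timepoints); design: (n_timepoints,).
    Caveat: freely permuting fMRI time points ignores temporal
    autocorrelation; real analyses need whitening or block schemes.
    """
    rng = np.random.default_rng(rng)
    max_stats = np.array([
        voxel_stats(data, rng.permutation(design)).max()
        for _ in range(n_perm)
    ])
    return np.quantile(max_stats, 1 - alpha)
```

Using the maximum over voxels, rather than each voxel's own null distribution, is what gives control over the familywise error rate across the whole brain volume.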
6

Social Approaches to Disease Prediction

Mansouri, Mehrdad 25 November 2014 (has links)
Objective: This thesis focuses on the design and evaluation of a disease prediction system able to detect hidden and upcoming diseases of an individual. Unlike previous work, which has typically relied on precise medical examinations to extract symptoms and risk factors for computing the probability of occurrence of a disease, the proposed disease prediction system evaluates the risk of a disease from similar patterns of disease comorbidity in the population and the individual. Methods: We combine three machine learning algorithms to construct the prediction system: an item-based recommendation system, a Bayesian graphical model and a rule-based recommender. We also propose multiple similarity measures for the recommendation system, each useful in a particular condition. We finally show how the best values of the system's parameters can be derived by optimizing a cost function and the ROC curve. Results: A permutation test is designed to evaluate the accuracy of the prediction system. Results showed a considerable advantage of the proposed system compared to an item-based recommendation system, and improved predictions when the system is trained for each specific gender and race. Conclusion: The proposed system has been shown to be a competent method for accurately identifying potential diseases in patients with multiple diseases, based only on their disease records. The procedure also contains novel soft computing and machine learning ideas that can be used in prediction problems. The proposed system can accommodate more complex datasets that include disease timelines, disease networks and social networks, making it an even more capable platform for disease prediction. Hence, this thesis contributes to the improvement of the disease prediction field. / Graduate / 0800 / 0766 / 0984 / mehrdadmansouri@yahoo.com
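The item-based recommendation component can be illustrated with a small sketch, assuming a binary patient-by-disease comorbidity matrix and plain cosine similarity (one of several similarity measures the thesis proposes; the function name is hypothetical).

```python
import numpy as np

def disease_scores(records, patient):
    """Rank unseen diseases for a patient by item-item cosine similarity.

    records: binary (n_patients, n_diseases) comorbidity matrix.
    patient: binary vector of the patient's recorded diseases.
    """
    norms = np.linalg.norm(records, axis=0)
    norms[norms == 0] = 1.0                     # avoid division by zero
    sim = (records.T @ records) / np.outer(norms, norms)
    scores = sim @ patient        # total similarity to recorded diseases
    scores[patient.astype(bool)] = 0.0  # rank only diseases not yet seen
    return scores
```

Diseases that frequently co-occur in the population with those already in the patient's record receive the highest scores, which is the comorbidity-pattern idea the abstract describes.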
7

A permutation evaluation of the robustness of a high-dimensional test

Eckerdal, Nils January 2018 (has links)
The present thesis is a study of the robustness and performance of a test applicable in the high-dimensional context (p > n) whose components are unbiased statistics (U-statistics). This test (the U-test) has been shown to perform well under a variety of circumstances and can be adapted to any general linear hypothesis. However, the robustness of the test is largely unexplored. Here, a simulation study is performed, focusing particularly on violations of the assumptions the test is based on. For extended evaluation, the performance of the U-test is compared to that of its permutation counterpart. The simulations show that the U-test is robust, performing poorly only when the permutation test does so as well. It is also argued that the U-test does not necessarily rest on the assumptions originally imposed on it.
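The permutation counterpart used as a benchmark can be sketched generically. This is not the U-test itself: as a stand-in statistic it uses the squared Euclidean distance between the two sample mean vectors, which remains computable when the dimension p exceeds the total sample size n.

```python
import numpy as np

def highdim_perm_test(x, y, n_perm=499, rng=None):
    """Two-sample permutation test for equal mean vectors, valid
    even when the dimension p exceeds the sample size n.

    Statistic: squared Euclidean distance between the group means.
    """
    rng = np.random.default_rng(rng)
    stat = lambda a, b: float(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))
    obs = stat(x, y)
    pooled = np.vstack([x, y])
    n = len(x)
    hits = 1
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if stat(pooled[idx[:n]], pooled[idx[n:]]) >= obs:
            hits += 1
    return hits / (n_perm + 1)
```

Such a permutation test needs exchangeability of the pooled observations under the null rather than any parametric model, which is why it serves as a robustness benchmark.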
8

Determining Attribute Importance Using an Ensemble of Genetic Programs and Permutation Tests : Relevansbestämning av attribut med hjälp av genetiska program och permutationstester

Annica, Ivert January 2015 (has links)
When classifying high-dimensional data, a lot can be gained, in terms of both computational time and precision, by only considering the most important features. Many feature selection methods are based on the assumption that important features are highly correlated with their corresponding classes, but mainly uncorrelated with each other. Often, this assumption can help eliminate redundancies and produce good predictors using only a small subset of features. However, when the predictability depends on interactions between the features, such methods will fail to produce satisfactory results. Also, since the suitability of the selected features depends on the learning algorithm in which they will be used, correlation-based filter methods might not be optimal when using genetic programs as the final classifiers, as they fail to capture the possibly complex relationships that are expressible by the genetic programming rules. In this thesis a method that can find important features, both independently and dependently discriminative, is introduced. This method works by performing two different types of permutation tests that classify each of the features as either irrelevant, independently predictive or dependently predictive. The proposed method directly evaluates the suitability of the features with respect to the learning algorithm in question. Also, in contrast to computationally expensive wrapper methods that require several subsets of features to be evaluated, a feature classification can be obtained after only a single pass, although the time required equals the training time of the classifier. The evaluation shows that the attributes chosen by the permutation tests always yield a classifier at least as good as, and often better than, the one obtained when all attributes are used during training. The proposed method also fares well when compared to other attribute selection methods such as RELIEFF and CFS.
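The basic permutation idea behind the method, shuffling one attribute at a time to see how much the classifier relied on it, can be sketched as follows. This illustrates only the single-attribute scheme (one of the two test types the thesis uses), with a generic `predict` function standing in for a trained genetic program; all names are assumptions.

```python
import numpy as np

def permutation_importance(predict, X, y, n_perm=50, rng=None):
    """Mean accuracy drop when each attribute is shuffled in turn.

    Shuffling column j across examples destroys any association
    between attribute j and the class while preserving its marginal
    distribution; a large drop marks the attribute as relevant.
    """
    rng = np.random.default_rng(rng)
    base = np.mean(predict(X) == y)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_perm):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops[j] += base - np.mean(predict(Xp) == y)
    return drops / n_perm
```

Comparing each observed drop to its permutation distribution would then yield the irrelevant / independently predictive / dependently predictive classification the abstract describes.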
9

Comparing the Statistical Tests for Homogeneity of Variances.

Mu, Zhiqiang 15 August 2006 (has links) (PDF)
Testing the homogeneity of variances is an important problem in many applications, since statistical methods of frequent use, such as ANOVA, assume equal variances for two or more groups of data. However, testing the equality of variances is a difficult problem because many of the tests are not robust to non-normality. It is known that the kurtosis of the distribution of the source data can affect the performance of the tests for variance. We review the classical tests and their latest, more robust modifications, some other tests that have recently appeared in the literature, and use bootstrap and permutation techniques to test for equal variances. We compare the performance of these tests under different types of distributions, sample sizes and true ratios of variances of the populations. Monte Carlo methods are used in this study to calculate empirical powers and type I errors under different settings.
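A permutation approach of the kind the abstract mentions can be sketched minimally. This is an illustrative version, not the thesis procedure: each sample is centered at its own median (a Brown-Forsythe-style precaution, so unequal means do not masquerade as unequal spreads) before pooling and permuting, and the statistic is the absolute log variance ratio.

```python
import numpy as np

def perm_variance_test(x, y, n_perm=199, rng=None):
    """Permutation test of equal variances for two samples.

    Statistic: |log sample-variance ratio| of the median-centered
    samples; group labels are shuffled to build the null.
    """
    rng = np.random.default_rng(rng)
    a = x - np.median(x)
    b = y - np.median(y)
    stat = lambda u, v: abs(np.log(np.var(u, ddof=1) / np.var(v, ddof=1)))
    obs = stat(a, b)
    pooled = np.concatenate([a, b])
    n = len(a)
    hits = 1
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if stat(pooled[idx[:n]], pooled[idx[n:]]) >= obs:
            hits += 1
    return hits / (n_perm + 1)
```

Because the reference distribution is generated from the data themselves, the test's size is far less sensitive to kurtosis than classical normal-theory variance tests, which is the motivation the abstract gives for the resampling approach.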
10

STATISTICAL APPROACHES TO ANALYZE CENSORED DATA WITH MULTIPLE DETECTION LIMITS

ZHONG, WEI January 2005 (has links)
No description available.
