1 |
Foreground Removal in a Multi-Camera System. Mortensen, Daniel T. 01 December 2019 (has links)
Traditionally, whiteboards have been used to brainstorm, teach, and share ideas with others. However, distributing whiteboard content remotely can be challenging. To solve this problem, a multi-camera system was developed that can be scaled to broadcast an arbitrarily large writing surface while removing objects not related to the whiteboard content. Related research has previously addressed combining multiple images together, identifying and removing unrelated objects (also referred to as foreground) in a single image, and correcting for warping differences in camera frames. However, this is the first time the problem has been attempted with a multi-camera system.
The main components of this problem include stitching the input images together, identifying foreground material, and replacing the foreground information with the most recent background (desired) information. The problem can thus be subdivided into two main components: fusing multiple images into one cohesive frame, and detecting and removing foreground objects. For the first component, homographic transformations are used to create a mathematical mapping from each input image to the desired reference frame, and blending techniques are then applied to remove artifacts that remain after the perspective transform. For the second, statistical tests and modeling are used in conjunction with additional classification algorithms.
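As a rough illustration of the homography step (a generic sketch, not the thesis's actual pipeline), the following Python code uses OpenCV to estimate a perspective mapping between one camera's frame and a reference view and to warp the frame accordingly; the file names are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical inputs: one camera's frame and the reference whiteboard view.
input_frame = cv2.imread("camera_1.png")
reference_frame = cv2.imread("reference.png")

# Match ORB features between the two views.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(input_frame, None)
kp2, des2 = orb.detectAndCompute(reference_frame, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:50]

# Estimate the homography (RANSAC discards mismatched keypoints), then warp
# the input frame into the reference coordinate system.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
h, w = reference_frame.shape[:2]
warped = cv2.warpPerspective(input_frame, H, (w, h))
```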
|
2 |
Ανάλυση των σχέσεων εργασίας και χρόνος απασχόλησης / Analysis of employment relations and working time. Ντάρμα, Ελισάβετ. 16 June 2011 (has links)
Work is a subject of study in economic theory, business administration, and sociology. The theory of employment, the division of labour, work as a value, and job satisfaction are some of the questions addressed by these disciplines.
Using questionnaires and the statistical package SPSS, we analyse the working-time preferences of workers from various sectors.
First, we introduce the statistical measures used in the analysis and describe the method by which the questionnaire is analysed and the correlations between variables are assessed. The main methods used are the chi-square test and the Gamma correlation coefficient.
We then analyse the sample, presenting statistical tables, bar charts, and pie charts produced with SPSS, and report the strongest correlations between variables. Each occupation also has additional questions, which are presented separately from the main variables common to the whole sample. Finally, conclusions are drawn from the analysis of the sample, and the full questionnaire is presented in the appendix.
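For readers without SPSS, the two analyses named above are easy to reproduce; the sketch below runs a chi-square test of independence and computes the Goodman-Kruskal Gamma coefficient on a hypothetical sector-by-working-time-preference contingency table (the counts are made up for illustration).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: sector (rows) x preferred working-time
# arrangement (columns), as would be tabulated from the questionnaires.
table = np.array([[20, 15, 5],
                  [10, 25, 15],
                  [ 8, 12, 30]])
chi2, p, dof, expected = chi2_contingency(table)

def goodman_kruskal_gamma(table):
    """Goodman-Kruskal Gamma for an ordered contingency table:
    (concordant - discordant) / (concordant + discordant) pairs."""
    t = np.asarray(table, dtype=float)
    n_r, n_c = t.shape
    concordant = discordant = 0.0
    for i in range(n_r):
        for j in range(n_c):
            concordant += t[i, j] * t[i + 1:, j + 1:].sum()
            discordant += t[i, j] * t[i + 1:, :j].sum()
    return (concordant - discordant) / (concordant + discordant)

gamma = goodman_kruskal_gamma(table)
```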
|
3 |
Rank Regression in Order Restricted Randomized Designs. Gao, Jinguo. 25 September 2013 (has links)
No description available.
|
4 |
COMPOSITE NONPARAMETRIC TESTS IN HIGH DIMENSION. Villasante Tezanos, Alejandro G. 01 January 2019 (has links)
This dissertation focuses on the problem of making high-dimensional inference for two or more groups. High-dimensional means that both the sample size (n) and the dimension (p) tend to infinity, possibly at different rates. Classical approaches for group comparisons fail in the high-dimensional situation, in the sense that they have incorrect sizes and low power. Much has been done in recent years to overcome these problems; however, these recent works make restrictive assumptions about the number of treatments to be compared and/or the distribution of the data. This research aims to (1) propose and investigate refined small-sample approaches for high-dimensional data in the multi-group setting, (2) propose and study a fully nonparametric approach, and (3) conduct an extensive simulation comparison of the proposed methods with existing ones.
When treatment effects can meaningfully be formulated in terms of means, a semiparametric approach under equal and unequal covariance assumptions is investigated. Composites of F-type statistics are used to construct two tests. One test is a moderate-p version, in which the test statistic is centered by its asymptotic mean; the other is a large-p version, which applies an asymptotic-expansion-based finite-sample correction to the mean of the test statistic. These tests do not make any distributional assumptions and are therefore nonparametric in that sense. The theory for the tests requires only mild assumptions to regulate the dependence. Simulation results show that, for moderately small samples, the large-p version yields a substantial gain in size accuracy at a small cost in power.
In some situations mean-based inference is not appropriate, for example for ordinal or heavy-tailed data. For these situations, a high-dimensional fully nonparametric test is proposed. In the two-sample situation, a composite of Wilcoxon-Mann-Whitney type tests is investigated. The assumptions needed are weaker than those of the semiparametric approach. Numerical comparisons with the moderate-p version of the semiparametric approach show that the nonparametric test has very similar size but achieves superior power, especially for skewed data with some amount of dependence between variables.
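As an illustration of the variable-by-variable construction (a sketch under the assumption that the composite aggregates per-variable effects; not the dissertation's exact statistic), a composite Wilcoxon-Mann-Whitney measure can be formed by estimating the nonparametric relative effect separately for each of the p variables and averaging:

```python
import numpy as np
from scipy.stats import rankdata

def composite_wmw(X, Y):
    """Average of variable-wise Wilcoxon-Mann-Whitney relative effects.

    X: (n1, p) sample 1, Y: (n2, p) sample 2. For each variable, estimates
    P(X < Y) + 0.5 * P(X = Y) from pooled midranks; the value is 0.5 when
    neither group tends toward larger values.
    """
    n1, p = X.shape
    n2 = Y.shape[0]
    effects = np.empty(p)
    for j in range(p):
        pooled = np.concatenate([X[:, j], Y[:, j]])
        ranks = rankdata(pooled)                 # midranks handle ties
        r2 = ranks[n1:].mean()                   # mean pooled rank of sample 2
        effects[j] = (r2 - (n2 + 1) / 2) / n1    # relative-effect estimate
    return effects.mean()

# Example: two groups, p = 200 variables, small samples.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 200))
Y = rng.standard_normal((12, 200)) + 0.3   # shifted alternative
print(composite_wmw(X, Y))                 # > 0.5 indicates Y tends larger
```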
Finally, we conduct an extensive simulation to compare our proposed methods with other nonparametric tests and rank-transformation methods. A wide spectrum of simulation settings is considered, including a variety of heavy-tailed and skewed data distributions, homoscedastic and heteroscedastic covariance structures, various amounts of dependence, and different choices of the tuning (smoothing window) parameter for the asymptotic variance estimators. The fully nonparametric and the rank-transformation methods behave similarly in terms of type I and type II errors; however, the two approaches fundamentally differ in their hypotheses. Although there are no formal mathematical proofs for the rank transformations, they tend to provide immunity against the effects of outliers. From a theoretical standpoint, our nonparametric method essentially uses variable-by-variable ranking, which naturally arises from estimating the nonparametric effect of interest; as a result, it is invariant under any monotone marginal transformation. For a more practical comparison, real data from an electroencephalogram (EEG) experiment are analyzed.
|
5 |
ROBUST STATISTICAL METHODS FOR NON-NORMAL QUALITY ASSURANCE DATA ANALYSIS IN TRANSPORTATION PROJECTS. Uddin, Mohammad Moin. 01 January 2011 (has links)
The American Association of State Highway and Transportation Officials (AASHTO) and the Federal Highway Administration (FHWA) require the use of statistically based quality assurance (QA) specifications for construction materials. As a result, many state highway agencies (SHAs) have implemented QA specifications for highway construction. In these statistically based QA specifications, the quality characteristics of most construction materials are assumed to be normally distributed; however, the normality assumption can be violated in several ways: the data distribution may be skewed, exhibit excess kurtosis, or be bimodal. If the process shows evidence of a significant departure from normality, the quality measures calculated may be erroneous.
In this research study, an extended QA data analysis model is proposed that significantly improves the Type I error and power of the F-test and t-test and removes bias from Percent Within Limits (PWL)-based pay factor calculations. For the F-test, three alternative tests are proposed when the sampling distribution is non-normal: 1) Levene's test; 2) Brown and Forsythe's test; and 3) O'Brien's test. One alternative is proposed for the t-test: the nonparametric Wilcoxon-Mann-Whitney rank test. For PWL-based pay factor calculation when lot data suffer from non-normality, three schemes were investigated: 1) simple transformation methods, 2) the Clements method, and 3) a modified Box-Cox transformation using the golden-section search method.
The Monte Carlo simulation study revealed that both Levene's test and Brown and Forsythe's test are robust alternative tests of variances when the underlying population distribution is non-normal. Between the t-test and the Wilcoxon test, the t-test was found to be robust even when the population distribution was severely non-normal. Among the data transformations for PWL-based pay factors, the modified Box-Cox transformation using the golden-section search method was found to be the most effective in minimizing or removing pay bias. Field QA data were analyzed to validate the model, and Microsoft® Excel macro-based software was developed that can adjust pay consequences arising from non-normality.
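The variance tests and the transformation step are available off the shelf; below is a minimal sketch with simulated skewed lot data standing in for field QA data. In SciPy, scipy.stats.levene with center='mean' gives Levene's original test and center='median' gives the Brown-Forsythe variant, and the Box-Cox parameter can be chosen by a golden-section search on the profile log-likelihood.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
lot_a = rng.lognormal(mean=3.0, sigma=0.4, size=30)   # skewed QA lot data
lot_b = rng.lognormal(mean=3.1, sigma=0.6, size=30)

# Robust alternatives to the F-test for equality of variances:
# center='mean' is Levene's test, center='median' is Brown-Forsythe.
w_lev, p_lev = stats.levene(lot_a, lot_b, center='mean')
w_bf,  p_bf  = stats.levene(lot_a, lot_b, center='median')

# Box-Cox lambda chosen by golden-section search on the profile
# log-likelihood (data must be positive).
neg_llf = lambda lam: -stats.boxcox_llf(lam, lot_a)
res = optimize.minimize_scalar(neg_llf, bracket=(-2.0, 2.0), method='golden')
transformed = stats.boxcox(lot_a, lmbda=res.x)
```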
|
6 |
Méthodes statistiques pour la fouille de données dans les bases de données de génomique / Statistical methods for data mining in genomics databases (Gene Set Enrichment Analysis). Charmpi, Konstantina. 03 July 2015 (has links)
Our focus is on statistical testing methods that compare a given vector of numeric values, indexed by all genes in the human genome, to a given set of genes known, for instance, to be associated with a particular type of cancer. Among existing methods, Gene Set Enrichment Analysis (GSEA) is the most widely used. However, it has two drawbacks. First, the calculation of p-values is time-consuming and insufficiently precise. Second, it declares a large number of significant results, the majority of which are not biologically meaningful. Both issues are addressed here by two new statistical procedures, the Weighted and Doubly Weighted Kolmogorov-Smirnov (WKS and DWKS) tests. The two tests were applied to both simulated and real data, and their results compared with existing procedures. Our conclusion is that, beyond their mathematical and algorithmic advantages, the WKS and DWKS tests can in many cases be more informative than the classical GSEA test, and efficiently address the two issues that motivated their construction.
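For context, the classical GSEA statistic that the thesis critiques is a weighted Kolmogorov-Smirnov running sum over the ranked gene list. A minimal sketch of that baseline (not of the WKS/DWKS tests themselves), with a gene-label permutation p-value of the slow-but-simple kind whose cost motivates the thesis:

```python
import numpy as np

def enrichment_score(scores, in_set, weight=1.0):
    """GSEA-style weighted Kolmogorov-Smirnov running-sum statistic.

    scores: gene-level values sorted in decreasing order; in_set: boolean
    mask marking genes in the gene set. Returns the maximum deviation of
    the running sum (the enrichment score)."""
    w = np.abs(scores) ** weight
    hit = np.where(in_set, w, 0.0)
    hit = hit / hit.sum()                    # step up at genes in the set
    miss = np.where(in_set, 0.0, 1.0)
    miss = miss / miss.sum()                 # step down at genes outside it
    running = np.cumsum(hit - miss)
    return running[np.argmax(np.abs(running))]

def perm_pvalue(scores, in_set, n_perm=999, rng=np.random.default_rng(0)):
    """P-value by permuting gene labels; illustrative, not efficient."""
    obs = enrichment_score(scores, in_set)
    null = [enrichment_score(scores, rng.permutation(in_set))
            for _ in range(n_perm)]
    return (1 + sum(abs(e) >= abs(obs) for e in null)) / (n_perm + 1)
```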
|
7 |
Approches statistiques pour la détection de changements en IRM de diffusion : application au suivi longitudinal de pathologies neuro-dégénératives / Statistical approaches for change detection in diffusion MRI: application to the longitudinal follow-up of neuro-degenerative pathologies. Grigis, Antoine. 25 September 2012 (has links)
Diffusion MRI (dMRI) is a medical imaging modality of growing interest in neuroimaging research. It provides new in vivo information about the local microstructure of tissues. At each voxel of a dMRI acquisition, the distribution of the diffusion directions of water molecules is modelled by a diffusion tensor. The multivariate nature of these images requires the design of new, dedicated processing methods. The context of this thesis is the automatic analysis of intra-patient longitudinal changes, with application to the follow-up of neurodegenerative pathologies. Our research focused on the development of new models and statistical tests for the automatic detection of changes in temporal sequences of diffusion images. This thesis led to a better handling of the tensorial nature of second-order models (statistical tests on positive-definite matrices), an extension to higher-order models, and a finer definition of the neighborhoods on which the tests are conducted, in particular the design of statistical tests on fiber bundles.
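Tests on positive-definite matrices need a notion of distance between tensors that respects the geometry of the SPD manifold. One standard choice, shown here purely as an illustration of the setting rather than as the thesis's specific test statistic, is the log-Euclidean distance:

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(A, B):
    """Log-Euclidean distance between two symmetric positive-definite
    diffusion tensors: the Frobenius norm of the difference of their
    matrix logarithms. Unlike the plain Euclidean distance, it respects
    the SPD geometry (e.g., it diverges as a tensor degenerates)."""
    return np.linalg.norm(logm(A) - logm(B), ord='fro')

# Two example 3x3 diffusion tensors (eigenvalues in mm^2/s).
A = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # anisotropic (fiber-like)
B = np.diag([0.9e-3, 0.8e-3, 0.8e-3])   # nearly isotropic
print(log_euclidean_distance(A, B))
```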
|
8 |
Additional comparisons of randomization-test procedures for single-case multiple-baseline designs: Alternative effect types. Levin, Joel R., Ferron, John M., Gafurov, Boris S. 08 1900 (has links)
A number of randomization statistical procedures have been developed to analyze the results of single-case multiple-baseline intervention investigations. In a previous simulation study, comparisons of the various procedures revealed distinct differences in their ability to detect immediate abrupt intervention effects of moderate size, with some procedures (typically those with randomized intervention start points) exhibiting power that was both respectable and superior to that of other procedures (typically those with single fixed intervention start points). In Investigation 1 of the present follow-up simulation study, we found that when the same randomization-test procedures were applied to either delayed abrupt or immediate gradual intervention effects: (1) the power of all of the procedures was severely diminished; and (2) in contrast to the previous study's results, the single fixed intervention start-point procedures generally outperformed those with randomized intervention start points. In Investigation 2 we additionally demonstrated that if researchers are able to anticipate the specific alternative effect types, they can formulate adjusted versions of the original randomization-test procedures that recapture a substantial proportion of the lost power.
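A randomized-start-point procedure of the kind compared here can be sketched in a few lines: for a single AB series, the intervention start is drawn at random from a set of admissible points, and the p-value is the proportion of admissible start points whose phase-contrast statistic is at least as large as the one observed (multiple-baseline versions combine such statistics across cases). The series, effect size, and start points below are hypothetical.

```python
import numpy as np

def start_point_randomization_test(y, actual_start, candidate_starts):
    """Randomization test for a single-case AB design with a randomized
    intervention start point. y: outcome series; actual_start: index where
    the intervention actually began (chosen at random from candidate_starts
    before the experiment). The statistic is the B-phase mean minus the
    A-phase mean; the p-value is the proportion of admissible start points
    giving a statistic at least as extreme as the observed one."""
    def stat(s):
        return y[s:].mean() - y[:s].mean()
    obs = stat(actual_start)
    null = np.array([stat(s) for s in candidate_starts])
    return obs, (null >= obs).mean()

# Toy series: immediate abrupt effect of +2 starting at observation 10.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 10), rng.normal(2, 1, 10)])
obs, p = start_point_randomization_test(y, 10, range(5, 16))
```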
|
9 |
Tests statistiques pour l'analyse de trajectoires de particules : application à l'imagerie intracellulaire / Statistical tests for analysing particle trajectories: application to intracellular imaging. Briane, Vincent. 20 December 2017 (has links)
In this thesis, we are interested in quantifying the dynamics of intracellular particles, such as proteins or molecules, inside living cells. Inference on the modes of mobility of molecules is central in cell biology, since it reflects the interactions between the structures of the cell. We model particle trajectories with stochastic processes, as the interior of a living cell is a fluctuating environment. Diffusions are stochastic processes with continuous paths and can model a large range of intracellular movements. Biophysicists distinguish three main types of diffusion, namely Brownian motion, superdiffusion, and subdiffusion; these correspond to distinct biological scenarios. A particle evolving freely inside the cytosol or along the plasma membrane is modelled by Brownian motion; the particle does not travel in any particular direction and can take a very long time to reach a precise area of the cell. Active intracellular transport can overcome this limitation, making motion faster and direction-specific: particles are carried by molecular motors along microtubule filament networks, and their motion is modelled with superdiffusions.
Subdiffusion can be observed in two cases: i/ when the particle is confined in a microdomain, ii/ when the particle is hindered by molecular crowding and encounters dynamic or fixed obstacles. We develop a statistical test for classifying the observed trajectories into the three groups of interest, namely Brownian motion, superdiffusion, and subdiffusion. We also design an algorithm to detect changes of dynamics along a single trajectory, defining the change points as the times at which the particle switches from one diffusion type to another. Finally, we combine a clustering algorithm with our test procedure to identify microdomains, that is, zones where particles are confined. Molecular interactions of great importance for the functioning of the cell take place in such areas.
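A common first-pass version of this three-way classification (a heuristic sketch with arbitrary thresholds, not the thesis's formal test) fits the scaling exponent alpha of the mean squared displacement, MSD(t) ∝ t^alpha: alpha below 1 suggests subdiffusion, near 1 Brownian motion, above 1 superdiffusion.

```python
import numpy as np

def classify_trajectory(xy, dt, max_lag=10, lo=0.9, hi=1.1):
    """Classify a 2-D trajectory via the MSD scaling exponent alpha.

    xy: (n, 2) particle positions sampled every dt seconds. Fits
    log MSD(lag) against log(lag * dt) and thresholds the slope."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((xy[k:] - xy[:-k]) ** 2, axis=1))
                    for k in lags])
    alpha = np.polyfit(np.log(lags * dt), np.log(msd), 1)[0]
    if alpha < lo:
        return alpha, "subdiffusive"
    if alpha > hi:
        return alpha, "superdiffusive"
    return alpha, "Brownian"

# Toy Brownian trajectory: cumulative sum of Gaussian steps.
rng = np.random.default_rng(2)
steps = rng.normal(0, 0.1, size=(200, 2))
alpha, label = classify_trajectory(np.cumsum(steps, axis=0), dt=0.05)
```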
|