121 |
Estimating the discriminative power of time-varying features for EEG BMI. Mappus, Rudolph Louis, IV, 16 November 2009 (has links)
In this work, we present a set of methods aimed at improving the discriminative power of time-varying features of signals that contain noise. These methods use properties of noise signals as well as information-theoretic techniques to factor types of noise and support signal inference for electroencephalographic (EEG) based brain-machine interfaces (BMI). EEG data were collected over two studies addressing psychophysiological questions involving symmetry and mental rotation processing. The mental rotation study also tested the feasibility of using dissociations of mental rotation tasks, correlated with rotation angle, in a BMI. We demonstrate the feasibility of mental rotation for BMI by achieving bitrates and recognition accuracy comparable to state-of-the-art BMIs. We conclude that the feature selection methods introduced in this work, used to dissociate mental rotation tasks, produce bitrates and recognition rates comparable to current BMIs.
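The bitrate figures referred to above are conventionally computed with Wolpaw's information transfer rate formula. The sketch below is a generic illustration of that calculation; the class count, accuracy, and trial duration are hypothetical values, not numbers from the thesis.

```python
import math

def wolpaw_bits_per_trial(n_classes: int, accuracy: float) -> float:
    """Wolpaw information transfer rate, in bits per selection (trial)."""
    if accuracy <= 1.0 / n_classes:
        return 0.0  # chance-level or worse carries no information under this model
    bits = math.log2(n_classes) + accuracy * math.log2(accuracy)
    if accuracy < 1.0:
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1))
    return bits

# Hypothetical numbers for illustration: a 4-class mental-rotation BMI
# with 75% recognition accuracy and 4-second trials.
bits = wolpaw_bits_per_trial(n_classes=4, accuracy=0.75)
print(f"{bits:.2f} bits/trial, {bits * 60 / 4.0:.1f} bits/min")
```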
|
122 |
Stochastic modeling and simulation of biochemical reaction kinetics. Agarwal, Animesh, 21 September 2011 (has links)
Biochemical reactions make up most of the activity in a cell. There is inherent stochasticity in the kinetic behavior of biochemical reactions, which in turn governs the fate of various cellular processes. In this work, the precision of a dimensionality reduction method for stochastic modeling of biochemical reactions is evaluated. Further, a method of stochastic simulation of reaction kinetics is implemented for a specific biochemical network involved in the maintenance of long-term potentiation (LTP), the basic substrate for learning and memory formation. The dimensionality reduction method diverges significantly from a full stochastic model in predicting the variance of the fluctuations. The stochastic simulation method was applied to the LTP model to characterize the qualitative dependence of stochastic fluctuations on reaction volume and model parameters.
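The abstract does not name the simulation method; Gillespie's stochastic simulation algorithm (SSA) is the standard exact approach for reaction kinetics of this kind, so a minimal sketch of it is given below on a hypothetical birth-death system rather than the LTP network studied in the thesis.

```python
import random

def gillespie_ssa(x0, rates, stoichiometry, t_end):
    """Exact stochastic simulation of a well-mixed reaction system.
    rates: list of functions mapping state -> propensity
    stoichiometry: list of state changes, one per reaction."""
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        props = [a(x) for a in rates]
        total = sum(props)
        if total == 0:
            break  # no reaction can fire
        t += random.expovariate(total)       # time to the next reaction event
        r = random.uniform(0, total)         # choose which reaction fires
        cum = 0.0
        for change, a in zip(stoichiometry, props):
            cum += a
            if r <= cum:
                x += change
                break
        trajectory.append((t, x))
    return trajectory

# Hypothetical birth-death system: production at rate k1, degradation at rate k2*x.
k1, k2 = 10.0, 0.1
traj = gillespie_ssa(x0=0, rates=[lambda x: k1, lambda x: k2 * x],
                     stoichiometry=[+1, -1], t_end=100.0)
print(traj[-1])
```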
|
123 |
Feature extraction via dependence structure optimization / Požymių išskyrimas optimizuojant priklausomumo struktūrą. Daniušis, Povilas, 01 October 2012 (has links)
In many important real-world applications the initial representation of the data is inconvenient, or even prohibitive, for further analysis. For example, in image analysis, text analysis and computational genetics, high-dimensional, massive, structural, incomplete, and noisy data sets are common. Therefore feature extraction, the revelation of informative features from the raw data, is one of the fundamental machine learning problems. Efficient feature extraction helps to understand the data and the process that generates it, and reduces costs for future measurements and data analysis. Representing structured data as a compact set of informative numeric features allows well-studied machine learning techniques to be applied instead of developing new ones. The dissertation focuses on supervised and semi-supervised feature extraction methods which optimize the dependence structure of features. Dependence is measured using the kernel estimator of the Hilbert-Schmidt norm of the covariance operator (the HSIC measure). Two dependence structures are investigated: in the first case we seek features which maximize the dependence on the dependent variable, and in the second we additionally minimize the mutual dependence of the features. Linear and kernel formulations of HBFE and HSCA are provided. Using the Laplacian regularization framework we construct semi-supervised variants of HBFE and HSCA. The suggested algorithms were investigated experimentally using conventional and multi-label classification data... [to full text] / Many practically important machine learning problems require the ability to handle high-dimensional, structured, non-linear data; image, text, social and business network analysis and various bioinformatics problems are examples of such tasks. Feature extraction is therefore often the first step of data analysis, on which the success of the final result depends. The object of this dissertation is feature extraction algorithms based on the notion of dependence, defined through the kernel estimator of the Hilbert-Schmidt norm of the covariance operator (the HSIC measure). The proposed HBFE and HSCA algorithms built on this estimator can work with data of arbitrary structure, are formulated in terms of eigenvectors (which allows standard packages to be used for the optimization), and are applicable not only to supervised but also to semi-supervised learning samples; in the latter case the HBFE and HSCA modifications rely on Laplacian regularization. Experiments with classification and multi-label classification data show that the proposed algorithms improve classification performance compared with PCA or LDA.
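For reference, the commonly used biased empirical estimator of the HSIC measure mentioned above is HSIC(X, Y) ≈ tr(K H L H) / (n - 1)^2, where K and L are kernel matrices on the two samples and H is the centering matrix. A minimal NumPy sketch of this estimator (a generic illustration, not the dissertation's HBFE/HSCA code) follows.

```python
import numpy as np

def rbf_kernel(A, gamma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of A."""
    sq = np.sum(A**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * A @ A.T
    return np.exp(-gamma * d2)

def hsic(X, Y, gamma=1.0):
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    K = rbf_kernel(X, gamma)
    L = rbf_kernel(Y, gamma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
# Dependent pair (x, x^2) should score much higher than an independent pair.
print(hsic(x, x**2), hsic(x, rng.normal(size=(200, 1))))
```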
|
125 |
Single View Reconstruction for Human Face and Motion with Priors. Wang, Xianwang, 01 January 2010 (has links)
Single view reconstruction is fundamentally an under-constrained problem. We aim to develop new approaches to model the human face and motion with model priors that restrict the space of possible solutions. First, we develop a novel approach to recover 3D shape from a single view image under challenging conditions, such as large variations in illumination and pose. The problem is addressed by employing techniques of non-linear manifold embedding and alignment. Specifically, local image models for each patch of the facial image and local surface models for each patch of the 3D shape are learned using a non-linear dimensionality reduction technique, and the correspondences between these local models are then learned by a manifold alignment method. Local models remove the dependency on large training databases for human face modeling. By combining the local shapes, the global shape of a face can be reconstructed directly from a single linear system of equations via least squares.
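As a concrete illustration of the final step, combining overlapping local patch estimates into a global shape through one linear least-squares solve, the sketch below stacks hypothetical per-patch equations and solves them with NumPy; it is a schematic stand-in for the manifold machinery described above, not the thesis implementation.

```python
import numpy as np

def assemble_global_shape(n_vertices, patch_estimates):
    """Least-squares fusion of overlapping local patch estimates.
    patch_estimates: list of (vertex_indices, estimated_depths) pairs."""
    rows, b = [], []
    for idx, depths in patch_estimates:
        for j, d in zip(idx, depths):
            row = np.zeros(n_vertices)
            row[j] = 1.0          # each equation constrains one vertex depth
            rows.append(row)
            b.append(d)
    A = np.vstack(rows)
    z, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    return z  # overlapping estimates are averaged in the least-squares sense

# Hypothetical example: 5 vertices covered by two overlapping patches.
patches = [([0, 1, 2], [1.0, 1.2, 1.1]), ([2, 3, 4], [1.3, 0.9, 1.0])]
print(assemble_global_shape(5, patches))
```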
Unfortunately, this learning-based approach cannot be successfully applied to the problem of human motion modeling, due to the internal and external variations in single view, video-based, marker-less motion capture. Therefore, we introduce a new model-based approach for capturing human motion using a stream of depth images from a single depth sensor. While a depth sensor provides metric 3D information, using a single sensor, instead of a camera array, results in a view-dependent and incomplete measurement of object motion. We develop a novel two-stage template fitting algorithm that is invariant to subject size and viewpoint variations, and robust to occlusions. Starting from a known pose, our algorithm first estimates a body configuration through temporal registration, which is used to search the template motion database for a best match. The best-match body configuration and its corresponding surface mesh model are deformed to fit the input depth map, filling in the parts that are occluded in the input and compensating for differences in pose and body size between the input image and the template. Our approach does not require any markers, user interaction, or appearance-based tracking.
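A simplified view of the database search in the first stage, finding the template body configuration closest to an estimated pose, is a nearest-neighbour query over joint-angle vectors; the distance measure and database below are illustrative placeholders, not the algorithm's actual registration step.

```python
import numpy as np

def best_template_match(estimated_pose, template_db):
    """Return the index of the database pose closest to the estimate
    (Euclidean distance over joint-angle vectors)."""
    dists = np.linalg.norm(template_db - estimated_pose, axis=1)
    return int(np.argmin(dists)), float(dists.min())

rng = np.random.default_rng(1)
database = rng.uniform(-np.pi, np.pi, size=(1000, 30))   # 1000 hypothetical poses, 30 joint angles
query = database[42] + rng.normal(scale=0.05, size=30)    # noisy observation of pose 42
print(best_template_match(query, database))
```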
Experiments show that our approaches achieve good modeling results for the human face and motion, and are capable of dealing with a variety of challenges in single view reconstruction, e.g., occlusion.
|
126 |
Learning with Limited Supervision by Input and Output Coding. Zhang, Yi, 01 May 2012 (has links)
In many real-world applications of supervised learning, only a limited number of labeled examples are available because the cost of obtaining high-quality examples is high. Even with a relatively large number of labeled examples, the learning problem may still suffer from limited supervision as the complexity of the prediction function increases. Therefore, learning with limited supervision presents a major challenge to machine learning. With the goal of supervision reduction, this thesis studies the representation, discovery and incorporation of extra input and output information in learning.
Information about the input space can be encoded by regularization. We first design a semi-supervised learning method for text classification that encodes the correlation of words inferred from seemingly irrelevant unlabeled text. We then propose a multi-task learning framework with a matrix-normal penalty, which compactly encodes the covariance structure of the joint input space of multiple tasks. To capture structure information that is more general than covariance and correlation, we study a class of regularization penalties on model compressibility. Then we design the projection penalty, which encodes the structure information from a dimension reduction while controlling the risk of information loss.
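One way to make the projection penalty concrete (under the assumption that PCA is the dimension reduction, which the thesis does not require) is to penalize only the component of the weight vector that falls outside the learned subspace, so discarded directions remain usable but are charged a cost. The closed-form ridge-style solution below is a generic sketch of that idea, not the thesis formulation.

```python
import numpy as np

def projection_penalized_regression(X, y, P, lam=1.0):
    """Minimize ||Xw - y||^2 + lam * ||(I - P P^T) w||^2,
    where the columns of P span the retained subspace (P^T P = I)."""
    d = X.shape[1]
    Q = np.eye(d) - P @ P.T            # projector onto the discarded directions
    return np.linalg.solve(X.T @ X + lam * Q, X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=100)

# PCA subspace (top 3 principal directions) as the assumed dimension reduction.
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
P = Vt[:3].T
w = projection_penalized_regression(X, y, P, lam=10.0)
print(np.round(w, 2))
```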
Information about the output space can be exploited by error-correcting output codes. Using the composite likelihood view, we propose an improved pairwise coding for multi-label classification, which encodes pairwise label density (as opposed to label comparisons) and decodes using variational methods. We then investigate problem-dependent codes, where the encoding is learned from data instead of being predefined. We first propose a multi-label output code using canonical correlation analysis, in which predictability of the code is optimized. We then argue that both discriminability and predictability are critical for output coding, and propose a max-margin formulation that promotes both discriminative and predictable codes.
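Error-correcting output codes themselves can be demonstrated with an off-the-shelf multiclass wrapper: scikit-learn's OutputCodeClassifier assigns each class a binary codeword and trains one binary learner per bit. This is only a simpler analogue of the multi-label, learned-code schemes proposed in the thesis; the CCA-based and max-margin codes are not part of scikit-learn.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# code_size=2.0 -> each of the 10 classes gets a 20-bit code; one binary
# classifier is trained per bit and decoding picks the nearest codeword.
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=2000),
                            code_size=2.0, random_state=0)
ecoc.fit(X_tr, y_tr)
print("test accuracy:", ecoc.score(X_te, y_te))
```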
We empirically study our methods in a wide spectrum of applications, including document categorization, landmine detection, face recognition, brain signal classification, handwritten digit recognition, house price forecasting, music emotion prediction, medical decision, email analysis, gene function classification, outdoor scene recognition, and so forth. In all these applications, our proposed methods for encoding input and output information lead to significantly improved prediction performance.
|
127 |
Acquiring symbolic design optimization problem reformulation knowledge: On computable relationships between design syntax and semantics. Sarkar, Somwrita, January 2009 (has links)
Doctor of Philosophy (PhD) / This thesis presents a computational method for the inductive inference of explicit and implicit semantic design knowledge from the symbolic-mathematical syntax of design formulations, using an unsupervised pattern recognition and extraction approach. Existing research shows that AI- and machine-learning-based design computation approaches require either high levels of knowledge engineering or large training databases to acquire problem reformulation knowledge. The method presented in this thesis addresses these methodological limitations. The thesis develops, tests, and evaluates ways in which the method may be employed for design problem reformulation. The method is based on singular value decomposition (SVD), a linear-algebraic factorization method, combined with dimensionality reduction and similarity measurement through unsupervised clustering. The method calculates linear approximations of the associative patterns of symbol co-occurrences in a design problem representation to infer induced coupling strengths between variables, constraints and system components. Unsupervised clustering of these approximations is used to identify useful reformulations. These two components of the method automate a range of reformulation tasks that have traditionally required different solution algorithms. Example reformulation tasks that it performs include selection of linked design variables, parameters and constraints, design decomposition, modularity and integrative systems analysis, heuristically aiding design "case" identification, topology modeling and layout planning. The relationship between the syntax of design representation and the encoded semantic meaning is an open design theory research question. Based on the results of the method, the thesis presents a set of theoretical postulates on computable relationships between design syntax and semantics. The postulates relate the performance of the method to empirical findings and theoretical insights provided by cognitive neuroscience and cognitive science on how the human mind engages in symbol processing, and to the resulting capacities inherent in symbolic representational systems to encode "meaning". The performance of the method suggests that semantic "meaning" is a higher-order, global phenomenon that lies distributed in the design representation in explicit and implicit ways. A one-to-one local mapping between a design symbol and its meaning, the approach adopted by many AI and learning algorithms, may not be sufficient to capture and represent this meaning. By changing the theoretical standpoint on how a "symbol" is defined in design representations, it was possible to use a simple set of mathematical ideas to perform unsupervised inductive inference of knowledge in a knowledge-lean and training-lean manner, in a knowledge domain that traditionally relies on "giving" the system complex design domain and task knowledge to perform the same set of tasks.
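The core pipeline, forming a symbol co-occurrence matrix from a design formulation, taking a truncated SVD, and clustering the resulting low-dimensional variable representations, can be sketched in a few lines; the occurrence matrix below is a hypothetical stand-in for a parsed design formulation, not data from the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical design formulation: rows = equations/constraints, columns = design
# variables; an entry is 1 when the variable appears in that equation.
occurrence = np.array([[1, 1, 0, 0, 0],
                       [1, 1, 1, 0, 0],
                       [0, 1, 1, 0, 0],
                       [0, 0, 0, 1, 1],
                       [0, 0, 1, 1, 1],
                       [0, 0, 0, 1, 1]], dtype=float)

# Truncated SVD: keep the top-2 singular directions so each variable gets a
# low-dimensional embedding that reflects its co-occurrence pattern.
U, s, Vt = np.linalg.svd(occurrence, full_matrices=False)
variable_embedding = Vt[:2].T * s[:2]

# Unsupervised clustering of the embeddings suggests a decomposition of the
# problem into weakly coupled groups of variables.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(variable_embedding)
print("variable groups:", labels)      # roughly {v0, v1, v2} vs. {v3, v4}
```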
|
129 |
Τμηματοποίηση εικόνων υφής με χρήση πολυφασματικής ανάλυσης και ελάττωσης διαστάσεων / Texture image segmentation using multispectral analysis and dimensionality reduction. Θεοδωρακόπουλος, Ηλίας, 16 June 2010 (has links)
Texture segmentation is the process of partitioning an image into multiple segments (regions) based on their texture, with many applications in computer vision, image retrieval, robotics, satellite imagery, etc. The objective of this thesis is to investigate the ability of non-linear dimensionality reduction algorithms, and especially the Laplacian Eigenmaps (LE) algorithm, to produce an efficient representation of data derived from multi-spectral image analysis using Gabor filters, for solving the texture segmentation problem. For this purpose, we introduce a new supervised texture segmentation algorithm, which exploits a low-dimensional representation of the feature vectors and well-known clustering methods, such as Fuzzy C-means and K-means, to produce the final segmentation. The effectiveness of this method was compared to that of similar methods proposed in the literature, which use the initial high-dimensional representation of the feature vectors. Experiments were performed on the Brodatz texture database. For evaluation, the Rand index was used as a similarity measure between each segmentation and the corresponding ground-truth segmentation.
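Assuming the pipeline matches the description above (a Gabor filter bank for per-pixel texture features, Laplacian Eigenmaps for non-linear dimensionality reduction, and K-means for the final labels), a compact sketch using skimage's Gabor filters and scikit-learn's SpectralEmbedding (an implementation of Laplacian Eigenmaps) could look like the following; the test image is synthetic, and the sketch omits the supervised step and the Rand-index evaluation.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.manifold import SpectralEmbedding   # Laplacian Eigenmaps
from sklearn.cluster import KMeans

# Synthetic two-texture image: horizontal stripes on the left, vertical on the right.
rows, cols = np.mgrid[0:64, 0:128]
image = np.where(cols < 64, np.sin(rows * 0.8), np.sin(cols * 0.8)) \
        + 0.1 * np.random.default_rng(0).normal(size=(64, 128))

# Gabor filter bank: magnitude responses at a few orientations/frequencies per pixel.
responses = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    for freq in (0.1, 0.25):
        real, imag = gabor(image, frequency=freq, theta=theta)
        responses.append(np.sqrt(real ** 2 + imag ** 2))
features = np.stack(responses, axis=-1).reshape(-1, len(responses))

# Laplacian Eigenmaps on a pixel subsample (the eigen-problem is costly on full
# images), followed by K-means on the low-dimensional embedding.
idx = np.random.default_rng(0).choice(len(features), 1500, replace=False)
embedding = SpectralEmbedding(n_components=3).fit_transform(features[idx])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(np.bincount(labels))   # two clusters, roughly one per texture
```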
|
130 |
Développement d'outils statistiques pour l'analyse de données transcriptomiques par les réseaux de co-expression de gènes / A systemic approach to statistical analysis of transcriptomic data through co-expression network analysis. Brunet, Anne-Claire, 17 June 2016 (has links)
New biotechnologies now make it possible to collect a large variety and volume of biological data (genomic, proteomic, metagenomic...), opening up new avenues of research into biological processes. In this thesis we are specifically interested in transcriptomic data, which characterize the activity or expression level of several tens of thousands of genes in a given cell. The aim was to propose statistical tools suited to analysing these high-dimensional data (n << p), collected on samples whose size is very small relative to the very large number of variables (here, gene expression). The first part of the thesis presents supervised learning methods, such as Breiman's random forests and penalized regression models, used in the high-dimensional setting to select the genes (expression variables) most relevant to the pathology under study. We discuss the limits of these methods for selecting genes that are relevant not only statistically but also biologically, in particular when the selection takes place within groups of strongly correlated variables, that is, within groups of co-expressed genes. Classical learning methods assume that each gene can act in isolation in the model, which is not very realistic in practice: an observable biological trait results from a set of reactions within a complex system in which genes interact with one another, and genes involved in the same biological function tend to be co-expressed (correlated expression). In a second part we therefore turn to gene co-expression networks, in which two genes are linked if they are co-expressed. More precisely, we seek to identify communities of genes on these networks, that is, groups of co-expressed genes, and then to select the communities most relevant to the pathology under study, together with the "key genes" of these communities. This favours biological interpretation, since a community of co-expressed genes can often be associated with a biological function. We propose an original and efficient approach that treats simultaneously the problem of modeling the gene co-expression network and the problem of detecting communities on the network, and we demonstrate its performance by comparing it with existing, popular methods for gene co-expression network analysis (WGCNA and spectral methods). Finally, through the analysis of a real data set, we show in the last part of the thesis that the proposed approach yields biologically convincing results that are easier to interpret and more robust than those obtained with classical supervised learning methods.
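To make the pipeline concrete, the sketch below builds a toy co-expression network by thresholding a gene-gene correlation matrix and extracts communities with a standard modularity-based algorithm from networkx. It is a generic two-step illustration; the thesis proposes its own approach that models the network and detects communities jointly, and compares against WGCNA and spectral methods.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
# Hypothetical expression matrix: 60 samples x 40 genes, with two co-expressed modules
# driven by two latent factors plus noise.
latent = rng.normal(size=(60, 2))
expr = np.hstack([latent[:, [0]] + 0.3 * rng.normal(size=(60, 20)),
                  latent[:, [1]] + 0.3 * rng.normal(size=(60, 20))])

corr = np.corrcoef(expr, rowvar=False)                  # gene-gene correlations (40 x 40)
adjacency = (np.abs(corr) > 0.6) & ~np.eye(40, dtype=bool)   # threshold, drop self-loops

graph = nx.from_numpy_array(adjacency.astype(int))
communities = greedy_modularity_communities(graph)
print([sorted(c) for c in communities])                 # expected: genes 0-19 vs. genes 20-39
```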
|