31
Mathematical Modeling of Gas Transport Across Cell Membrane: Forward and Inverse Problems. Bocchinfuso, Alberto, 26 May 2023 (has links)
No description available.
32
Acquisitions d'IRM de diffusion à haute résolution spatiale : nouvelles perspectives grâce au débruitage spatialement adaptatif et angulaire / High spatial resolution diffusion MRI acquisitions: new perspectives through spatially adaptive and angular denoising. St-Jean, Samuel, January 2015 (has links)
The early 2000s saw the human genome mapped after 13 years of research. The challenge of the coming century lies in building the human connectome, which consists of mapping the connections of the brain using diffusion magnetic resonance imaging (MRI). This technique makes it possible to study the white matter of the brain in a completely non-invasive way. Although the challenge is monumental, the resolution of an MRI image sits at the macroscopic scale and is roughly 1000 times coarser than the axons that must be mapped. To help alleviate this problem, this thesis proposes a new denoising technique designed specifically for diffusion imaging. The Non Local Spatial and Angular Matching (NLSAM) algorithm builds on the principles of block matching and dictionary learning to exploit the redundancy of diffusion MRI data. A thresholding over angular neighbours is also carried out through sparse coding, where the l2-norm reconstruction error is bounded by the local variance of the noise. The algorithm is also designed to handle the bias of Rician and noncentral Chi noise, since MRI images contain non-Gaussian noise. This makes it possible to acquire diffusion MRI data at a higher spatial resolution than is currently available in clinical settings. This work thus opens the way to a better type of acquisition, which could help reveal new anatomical details that are not discernible at the spatial resolution currently used by the diffusion MRI community. It could also eventually contribute to identifying new biomarkers for understanding degenerative diseases such as multiple sclerosis, Alzheimer's disease and Parkinson's disease.
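For illustration only, a minimal sketch of the kind of noise-bounded sparse coding that NLSAM builds on: greedy orthogonal matching pursuit that stops once the l2 residual falls below a bound derived from the local noise level. The function name, the form of the bound, and the toy data are assumptions, not the actual NLSAM implementation.

```python
import numpy as np

def omp_bounded(D, y, sigma, max_atoms=None):
    """Greedy sparse coding of signal y over dictionary D (columns = atoms),
    stopping when the l2 residual drops below a noise-derived bound.
    Generic sketch, not the NLSAM implementation."""
    n, K = D.shape
    max_atoms = max_atoms or K
    bound = np.sqrt(n) * sigma          # assumed form of the noise bound
    support, residual = [], y.copy()
    coeffs = np.zeros(K)
    while np.linalg.norm(residual) > bound and len(support) < max_atoms:
        # pick the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in support:
            break
        support.append(k)
        # least-squares fit on the current support
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = y - D[:, support] @ sol
    return coeffs

# toy usage on a random normalized dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
y = 2.0 * D[:, 3] + 0.05 * rng.standard_normal(64)
x = omp_bounded(D, y, sigma=0.05)
```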
33
Provable alternating minimization for non-convex learning problems. Netrapalli, Praneeth Kumar, 17 September 2014 (has links)
Alternating minimization (AltMin) is a generic term for a widely popular approach in non-convex learning: often, it is possible to partition the variables into two (or more) sets so that the problem is convex/tractable in one set if the other is held fixed (and vice versa). This allows for alternating between optimally updating one set of variables and then the other. AltMin methods typically do not have associated global consistency guarantees, even though they are empirically observed to perform better than methods (e.g. those based on convex optimization) that do have guarantees. In this thesis, we obtain rigorous performance guarantees for AltMin in three statistical learning settings: low-rank matrix completion, phase retrieval, and learning sparsely-used dictionaries. The overarching theme behind our results consists of two parts: (i) devising new initialization procedures (as opposed to initializing randomly, as is typical), and (ii) establishing exponential local convergence from this initialization. Our work shows that the pursuit of statistical guarantees can yield algorithmic improvements (initialization in our case) that perform better in practice.
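To make the AltMin template concrete for one of these settings, here is a minimal alternating least squares sketch for low-rank matrix completion. It uses a random initialization and a small ridge term for numerical stability; the thesis's provable variant instead relies on a careful spectral initialization, which this sketch does not reproduce.

```python
import numpy as np

def altmin_complete(M, mask, rank, iters=50, reg=1e-6):
    """Alternating least squares for low-rank matrix completion.
    M: matrix with observed entries, mask: boolean array of observed positions.
    Generic sketch with random initialization."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = reg * np.eye(rank)
    for _ in range(iters):
        # update each row of U with V fixed, then each row of V with U fixed
        for i in range(m):
            obs = mask[i]
            U[i] = np.linalg.solve(V[obs].T @ V[obs] + I, V[obs].T @ M[i, obs])
        for j in range(n):
            obs = mask[:, j]
            V[j] = np.linalg.solve(U[obs].T @ U[obs] + I, U[obs].T @ M[obs, j])
    return U @ V.T
```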
34
Sparse coding for machine learning, image processing and computer vision / Représentations parcimonieuses en apprentissage statistique, traitement d'image et vision par ordinateur. Mairal, Julien, 30 November 2010 (has links)
We study in this thesis a particular machine learning approach to representing signals, which consists of modelling data as linear combinations of a few elements from a learned dictionary. It can be viewed as an extension of the classical wavelet framework, whose goal is to design such dictionaries (often orthonormal bases) adapted to natural signals. An important success of dictionary learning methods has been their ability to model natural image patches and the performance of the image denoising algorithms it has yielded. We address several open questions related to this framework: How to efficiently optimize the dictionary? How can the model be enriched by adding a structure to the dictionary? Can current image processing tools based on this method be further improved? How should one learn the dictionary when it is used for a task other than signal reconstruction? How can it be used for solving computer vision problems? We answer these questions with a multidisciplinary approach, using tools from statistical machine learning, convex and stochastic optimization, image and signal processing, and computer vision, but also optimization on graphs.
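As background for readers unfamiliar with the sparse decomposition step underlying this framework, a minimal sketch of Lasso-style sparse coding with ISTA, a basic proximal-gradient solver. The thesis develops far more efficient and elaborate algorithms; this is only an illustrative baseline with assumed parameter choices.

```python
import numpy as np

def ista_sparse_code(D, y, lam, iters=200):
    """Solve min_x 0.5*||y - D x||^2 + lam*||x||_1 by iterative soft-thresholding.
    Step size is set from the Lipschitz constant of the smooth part's gradient."""
    L = np.linalg.norm(D, ord=2) ** 2          # spectral norm squared of D
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ x - y)               # gradient of the quadratic term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```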
35
Représentations parcimonieuses et apprentissage de dictionnaires pour la classification et le clustering de séries temporelles / Time warp invariant sparse coding and dictionary learning for time series classification and clustering. Varasteh Yazdi, Saeed, 15 November 2018 (has links)
L'apprentissage de dictionnaires à partir de données temporelles est un problème fondamental pour l’extraction de caractéristiques temporelles latentes, la révélation de primitives saillantes et la représentation de données temporelles complexes. Cette thèse porte sur l’apprentissage de dictionnaires pour la représentation parcimonieuse de séries temporelles. On s’intéresse à l’apprentissage de représentations pour la reconstruction, la classification et le clustering de séries temporelles sous des transformations de distortions temporelles. Nous proposons de nouveaux modèles invariants aux distortions temporelles.La première partie du travail porte sur l’apprentissage de dictionnaire pour des tâches de reconstruction et de classification de séries temporelles. Nous avons proposé un modèle TWI-OMP (Time-Warp Invariant Orthogonal Matching Pursuit) invariant aux distorsions temporelles, basé sur un opérateur de maximisation du cosinus entre des séries temporelles. Nous avons ensuite introduit le concept d’atomes jumelés (sibling atomes) et avons proposé une approche d’apprentissage de dictionnaires TWI-kSVD étendant la méthode kSVD à des séries temporelles.Dans la seconde partie du travail, nous nous sommes intéressés à l’apprentissage de dictionnaires pour le clustering de séries temporelles. Nous avons proposé une formalisation du problème et une solution TWI-DLCLUST par descente de gradient.Les modèles proposés sont évalués au travers plusieurs jeux de données publiques et réelles puis comparés aux approches majeures de l’état de l’art. Les expériences conduites et les résultats obtenus montrent l’intérêt des modèles d’apprentissage de représentations proposés pour la classification et le clustering de séries temporelles. / Learning dictionary for sparse representing time series is an important issue to extract latent temporal features, reveal salient primitives and sparsely represent complex temporal data. This thesis addresses the sparse coding and dictionary learning problem for time series classification and clustering under time warp. For that, we propose a time warp invariant sparse coding and dictionary learning framework where both input samples and atoms define time series of different lengths that involve varying delays.In the first part, we formalize an L0 sparse coding problem and propose a time warp invariant orthogonal matching pursuit based on a new cosine maximization time warp operator. For the dictionary learning stage, a non linear time warp invariant kSVD (TWI-kSVD) is proposed. Thanks to a rotation transformation between each atom and its sibling atoms, a singular value decomposition is used to jointly approximate the coefficients and update the dictionary, similar to the standard kSVD. In the second part, a time warp invariant dictionary learning for time series clustering is formalized and a gradient descent solution is proposed.The proposed methods are confronted to major shift invariant, convolved and kernel dictionary learning methods on several public and real temporal data. The conducted experiments show the potential of the proposed frameworks to efficiently sparse represent, classify and cluster time series under time warp.
36
Nonparametric Bayesian Dictionary Learning and Count and Mixture Modeling. Zhou, Mingyuan, January 2013 (has links)
Analyzing the ever-increasing data of unprecedented scale, dimensionality, diversity, and complexity poses considerable challenges to conventional approaches of statistical modeling. Bayesian nonparametrics constitute a promising research direction, in that such techniques can fit the data with a model that can grow with complexity to match the data. In this dissertation we consider nonparametric Bayesian modeling with completely random measures, a family of pure-jump stochastic processes with nonnegative increments. In particular, we study dictionary learning for sparse image representation using the beta process and the dependent hierarchical beta process, and we present the negative binomial process, a novel nonparametric Bayesian prior that unites the seemingly disjoint problems of count and mixture modeling. We show a wide variety of successful applications of our nonparametric Bayesian latent variable models to real problems in science and engineering, including count modeling, text analysis, image processing, compressive sensing, and computer vision.
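As a rough illustration of the beta-process prior used here for dictionary-element activation, the following sketch draws a binary atom-usage matrix from a common finite approximation of the beta-Bernoulli process; posterior inference, which is the substance of the dissertation, is not attempted, and the parameterization shown is one standard textbook choice.

```python
import numpy as np

def beta_bernoulli_draw(N, K, a=1.0, b=1.0, seed=0):
    """Draw a binary usage matrix Z (N samples x K atoms) from a finite
    approximation of the beta-Bernoulli process: pi_k ~ Beta(a/K, b(K-1)/K),
    z_nk ~ Bernoulli(pi_k). As K grows, few atoms are active per sample."""
    rng = np.random.default_rng(seed)
    pi = rng.beta(a / K, b * (K - 1) / K, size=K)
    Z = rng.random((N, K)) < pi                # broadcast Bernoulli draws
    return Z, pi
```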
37
Panic Detection in Human Crowds using Sparse Coding. Kumar, Abhishek, 21 August 2012 (has links)
Recently, the surveillance of human activities has drawn a lot of attention from the research community, and camera-based surveillance is increasingly assisted by computers. Cameras are used extensively for monitoring human activities; however, placing cameras and transmitting visual data is not the end of a surveillance system. Surveillance needs to detect abnormal or unwanted activities, and such abnormal activities are very infrequent compared to regular activities. At present, surveillance is done manually: operators watch a set of surveillance video screens to discover abnormal events, which is expensive and prone to error.
This limitation can be effectively removed if an automated anomaly detection system is designed. With powerful computers now available, computer vision is seen as a promising aid for surveillance. A computer-vision-aided anomaly detection system selects only those video frames that contain an anomaly, so that only the selected frames require manual verification.
Panic is a type of anomaly in a human crowd that appears when a group of people start to move faster than their usual speed. Such situations can arise from a frightening event near a crowd, such as a fight, robbery, or riot. A variety of computer vision algorithms have been developed to detect panic in human crowds; however, most of them are computationally expensive and hence too slow to run in real time.
Dictionary learning is a robust tool for modelling a behaviour as a linear combination of dictionary elements. A few panic detection algorithms have shown high accuracy using dictionary learning; however, the dictionary learning approach is computationally expensive. Orthogonal matching pursuit (OMP) is an inexpensive way to model a behaviour using dictionary elements, and in this research OMP is used to design a panic detection algorithm. The proposed algorithm has been tested on two datasets, and the results are comparable to state-of-the-art algorithms.
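A simplified sketch of this idea: encode a frame's motion feature vector with OMP against a dictionary learned from normal crowd behaviour, and flag panic when the reconstruction error is high. The feature extraction, dictionary training, sparsity level, and threshold are all assumed details rather than the thesis's exact pipeline.

```python
import numpy as np

def omp_fixed_k(D, y, k):
    """Orthogonal matching pursuit with a fixed sparsity level k."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    return support, residual

def panic_score(D, feature, k=5):
    """Anomaly score = relative reconstruction error of a frame's feature
    vector under a dictionary of normal behaviour; a high score suggests panic."""
    _, residual = omp_fixed_k(D, feature, k)
    return np.linalg.norm(residual) / (np.linalg.norm(feature) + 1e-12)
```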
38
On sparse representations and new meta-learning paradigms for representation learning. Mehta, Nishant A., 27 August 2014 (has links)
Given the "right" representation, learning is easy. This thesis studies representation learning and meta-learning, with a special focus on sparse representations. Meta-learning is fundamental to machine learning, and it translates to learning to learn itself. The presentation unfolds in two parts. In the first part, we establish learning theoretic results for learning sparse representations. The second part introduces new multi-task and meta-learning paradigms for representation learning.
On the sparse representations front, our main pursuits are generalization error bounds to support a supervised dictionary learning model for Lasso-style sparse coding. Such predictive sparse coding algorithms have been applied with much success in the literature; even more common have been applications of unsupervised sparse coding followed by supervised linear hypothesis learning. We present two generalization error bounds for predictive sparse coding, handling the overcomplete setting (more original dimensions than learned features) and the infinite-dimensional setting. Our analysis led to a fundamental stability result for the Lasso that shows the stability of the solution vector to design matrix perturbations. We also introduce and analyze new multi-task models for (unsupervised) sparse coding and predictive sparse coding, allowing for one dictionary per task but with sharing between the tasks' dictionaries.
The second part introduces new meta-learning paradigms to realize unprecedented types of learning guarantees for meta-learning. Specifically sought are guarantees on a meta-learner's performance on new tasks encountered in an environment of tasks. Nearly all previous work produced bounds on the expected risk, whereas we produce tail bounds on the risk, thereby providing performance guarantees on the risk for a single new task drawn from the environment. The new paradigms include minimax multi-task learning (minimax MTL) and sample variance penalized meta-learning (SVP-ML). Regarding minimax MTL, we provide a high probability learning guarantee on its performance on individual tasks encountered in the future, the first of its kind. We also present two continua of meta-learning formulations, each interpolating between classical multi-task learning and minimax multi-task learning. The idea of SVP-ML is to minimize the task average of the training tasks' empirical risks plus a penalty on their sample variance. Controlling this sample variance can potentially yield a faster rate of decrease for upper bounds on the expected risk of new tasks, while also yielding high probability guarantees on the meta-learner's average performance over a draw of new test tasks. An algorithm is presented for SVP-ML with feature selection representations, as well as a quite natural convex relaxation of the SVP-ML objective.
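To make the SVP-ML objective concrete, a minimal sketch of evaluating it over a set of training-task empirical risks; the penalty weight and the toy numbers are assumptions, and the learning algorithm that actually minimizes this objective is described in the thesis.

```python
import numpy as np

def svp_ml_objective(task_risks, lam=0.1):
    """Sample-variance-penalized meta-learning objective (evaluation only):
    mean of the tasks' empirical risks plus lam times their sample variance."""
    r = np.asarray(task_risks, dtype=float)
    return r.mean() + lam * r.var(ddof=1)

# e.g. empirical risks of 4 training tasks; one outlier task inflates the penalty
print(svp_ml_objective([0.12, 0.15, 0.10, 0.40]))
```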
40
Apprentissage avec la parcimonie et sur des données incertaines par la programmation DC et DCA / Learning with sparsity and uncertainty by Difference of Convex functions optimization. Vo, Xuan Thanh, 15 October 2015 (has links)
In this thesis, we focus on developing optimization approaches for solving some classes of learning problems involving sparsity and robust optimization under data uncertainty. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as powerful tools in optimization. The thesis is composed of two parts: the first part concerns sparsity while the second part deals with data uncertainty. In the first part, a unified DC approximation approach to optimization problems involving the zero-norm in the objective is thoroughly studied on both theoretical and computational aspects. We consider a common DC approximation of the zero-norm that includes all standard sparsity-inducing penalty functions, and develop general DCA schemes that cover all standard algorithms in the field. Next, the thesis turns to the nonnegative matrix factorization (NMF) problem. We investigate the structure of the considered problem and provide appropriate DCA-based algorithms. To enhance the performance of NMF, sparse NMF formulations are proposed. Continuing this topic, we study the dictionary learning problem, where sparse representation plays a crucial role. In the second part, we exploit robust optimization techniques to deal with data uncertainty for two important problems in machine learning: feature selection in linear Support Vector Machines and clustering. In this context, each individual data point is uncertain but varies within a bounded uncertainty set. Different models (box/spherical/ellipsoidal) of uncertain data are studied, and DCA-based algorithms are developed to solve the robust problems.
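As a small illustration of how a DCA scheme for a zero-norm approximation can look in practice, the sketch below handles a least-squares data term with a capped-l1 surrogate of the zero-norm: each outer iteration linearizes the concave part of the DC decomposition and solves the resulting weighted Lasso subproblem with a few ISTA steps. The decomposition shown is one common choice and the parameters are assumptions, not the thesis's general schemes.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dca_capped_l1(A, b, lam=0.1, theta=0.05, outer=20, inner=100):
    """DCA sketch for min_x 0.5*||Ax - b||^2 + lam * sum_i min(|x_i|, theta)/theta.
    The capped-l1 term is written as a difference of convex functions; each outer
    step linearizes the concave part and solves the convex subproblem with ISTA."""
    L = np.linalg.norm(A, ord=2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    c = lam / theta                              # weight on the l1 part of the surrogate
    for _ in range(outer):
        # subgradient of the concave part h(x) = c * sum_i max(|x_i| - theta, 0)
        y = np.where(np.abs(x) > theta, np.sign(x), 0.0) * c
        for _ in range(inner):                   # ISTA on the weighted Lasso subproblem
            grad = A.T @ (A @ x - b) - y
            x = soft_threshold(x - grad / L, c / L)
    return x
```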