1

On Learning from Collective Data

Xiong, Liang 01 December 2013 (has links)
In many machine learning problems and application domains, the data are naturally organized by groups. For example, a video sequence is a group of images, an image is a group of patches, a document is a group of paragraphs/words, and a community is a group of people. We call these collective data. In this thesis, we study how and what we can learn from collective data. Usually, machine learning focuses on individual objects, each of which is described by a feature vector and studied as a point in some metric space. When approaching collective data, researchers often reduce the groups to vectors to which traditional methods can be applied. We, on the other hand, develop machine learning methods that respect the collective nature of the data and learn from them directly. We take several different approaches to this learning problem. When a group consists of unordered discrete data points, it can naturally be characterized by its sufficient statistic, the histogram. For this case we develop efficient methods, based on matrix and tensor factorization, that address outliers and temporal effects in the data. To learn from groups that contain multi-dimensional real-valued vectors, we develop both generative methods based on hierarchical probabilistic models and discriminative methods using group kernels based on new divergence estimators. With these tools, we can accomplish tasks such as classification, regression, clustering, anomaly detection, and dimensionality reduction on collective data. We further consider the practical side of the divergence-based algorithms. To reduce their time and space requirements, we evaluate methods that effectively reduce the size of the groups with little impact on accuracy. We also propose the conditional divergence, along with an efficient estimator, to correct sampling biases that may be present in the data. Finally, we develop methods to learn when some divergences are missing, whether because of insufficient computational resources or extreme sampling biases. In addition to designing new learning methods, we use them to aid the scientific discovery process. In our collaborations with astronomers and physicists, we see that the new techniques can indeed help scientists make the most of their data.
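The group kernels mentioned in this abstract can be illustrated with a small sketch. The following is a minimal, assumed example, not the thesis code: the function names, the symmetrized k-NN Kullback-Leibler estimator, and the Gaussian-style exponential kernel are illustrative choices for how pairwise divergences between groups of vectors might be turned into a kernel matrix for classification or clustering.

```python
# Minimal sketch (assumed, not the thesis code) of a k-NN based estimator of the
# Kullback-Leibler divergence D(p || q) between two groups of d-dimensional
# vectors, and of a group kernel built from pairwise divergences.
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(X, Y, k=3):
    """Estimate D(p || q) from samples X ~ p and Y ~ q (rows are vectors)."""
    n, d = X.shape
    m, _ = Y.shape
    # Distance to the k-th nearest neighbour of each x within X (excluding x itself)
    # and within Y.
    rho = cKDTree(X).query(X, k=k + 1)[0][:, -1]   # k+1 because x is its own 0-NN
    nu  = cKDTree(Y).query(X, k=k)[0][:, -1]
    eps = 1e-12                                    # guard against zero distances
    return d * np.mean(np.log((nu + eps) / (rho + eps))) + np.log(m / (n - 1.0))

def divergence_kernel(groups, sigma=1.0, k=3):
    """Turn symmetrized pairwise divergences between groups into a kernel matrix."""
    G = len(groups)
    D = np.zeros((G, G))
    for i in range(G):
        for j in range(i + 1, G):
            d_ij = 0.5 * (knn_kl_divergence(groups[i], groups[j], k) +
                          knn_kl_divergence(groups[j], groups[i], k))
            D[i, j] = D[j, i] = max(d_ij, 0.0)     # clip small negative estimates
    return np.exp(-D / sigma)                      # Gaussian-style group kernel
```

The resulting kernel matrix can be fed to any kernel method (an SVM, kernel regression, spectral clustering) so that each group, rather than each vector, is treated as one object.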
2

Dirty statistical models

Jalali, Ali, 1982- 11 July 2012 (has links)
In fields across science and engineering, we are increasingly faced with problems where the number of variables or features to be estimated is much larger than the number of observations. Under such high-dimensional scaling, any hope of statistically consistent estimation requires leveraging potential structure in the problem, such as sparsity, low-rank structure or block sparsity. However, data may deviate significantly from any one such statistical model. The motivating question of this thesis is: can we simultaneously leverage more than one such structural model to obtain consistency in a larger class of problems, and with fewer samples, than is possible with any single model? Our approach combines structures via simple linear superposition, a technique we term dirty models. The idea is very simple: while any one structure might not capture the data, a superposition of structural classes might. A dirty model thus searches for a parameter that can be decomposed into a number of simpler structures, such as (a) sparse plus block-sparse, (b) sparse plus low-rank, and (c) low-rank plus block-sparse. In this thesis, we propose dirty-model-based algorithms for problems such as multi-task learning, graph clustering, and time-series analysis with latent factors. We analyze these algorithms in terms of the number of observations needed to estimate the variables. These algorithms are based on convex optimization and can be relatively slow, so we also provide a class of low-complexity greedy algorithms that not only solve these optimization problems faster but also come with guarantees on the solution. Beyond the theoretical results, in each case we provide experiments illustrating the power of dirty models.
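As an illustration of the superposition idea, the sketch below implements one instance, sparse plus low-rank, by alternating closed-form proximal steps. It is an assumed, minimal example: the function names and the simple alternating scheme are illustrative and do not reproduce the convex programs or greedy algorithms analyzed in the thesis.

```python
# Minimal sketch (assumed) of a "sparse plus low-rank" dirty model: decompose an
# observed matrix M into L + S by alternating proximal steps on
#   0.5*||M - L - S||_F^2 + mu*||L||_* + lam*||S||_1 .
import numpy as np

def soft_threshold(X, t):
    """Entrywise shrinkage: proximal operator of t*||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svd_threshold(X, t):
    """Singular value shrinkage: proximal operator of t*||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def sparse_plus_low_rank(M, lam=0.1, mu=1.0, n_iter=200):
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svd_threshold(M - S, mu)     # best low-rank piece given the sparse piece
        S = soft_threshold(M - L, lam)   # best sparse piece given the low-rank piece
    return L, S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L_true = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))    # rank 5
    S_true = rng.standard_normal((50, 50)) * (rng.random((50, 50)) < 0.05)  # 5% spikes
    L_hat, S_hat = sparse_plus_low_rank(L_true + S_true)
```

Each block update has a closed-form proximal solution, which is what makes the superposition of an entrywise-sparse piece and a low-rank piece computationally convenient.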
3

Low rank decomposition, completion problems and applications: low rank decomposition of Hankel matrices and tensors

Harmouch, Jouhayna 19 December 2018 (has links)
We study the decomposition of a multivariate Hankel matrix as a sum of Hankel matrices of small rank, in correlation with the decomposition of its symbol σ as a sum of polynomial-exponential series. We present a new algorithm to compute the low rank decomposition of the Hankel operator and the decomposition of its symbol, exploiting the properties of the associated Artinian Gorenstein quotient algebra. A basis of this algebra is computed from the Singular Value Decomposition of a sub-matrix of the Hankel matrix. The frequencies and the weights are deduced from the generalized eigenvectors of pencils of shifted sub-matrices. An explicit formula for the weights in terms of these eigenvectors avoids solving a Vandermonde system. This new method is a multivariate generalization of the so-called Pencil method for solving Prony-type decomposition problems. We analyse its numerical behaviour in the presence of noisy input moments, and describe a rescaling technique which improves the numerical quality of the reconstruction for frequencies of high amplitude. We also present a new Newton iteration, which converges locally to the closest multivariate Hankel matrix of low rank, and show its impact for correcting errors on the input moments. We then study the decomposition of a multi-symmetric tensor T as a sum of powers of products of linear forms, in correlation with the decomposition of its dual as a weighted sum of evaluations. We use the properties of the associated Artinian Gorenstein algebra to compute the decomposition of its dual, which is defined via a formal power series τ. We use the low rank decomposition of the Hankel operator associated with the symbol τ into a sum of indecomposable operators of low rank. A basis of the algebra is chosen such that multiplication by some of the variables is possible. We compute the sub-coordinates of the evaluation points and their weights using the eigen-structure of the multiplication matrices. The new algorithm that we propose works well for small rank. We give a generalized theoretical treatment of the method in n-dimensional space, and numerical examples of the decomposition of a multi-linear tensor of rank 3 and of a multi-symmetric tensor of rank 3 in dimension 3. Finally, we study the completion problem for a low rank Hankel matrix as a minimization problem, relaxed to minimizing the nuclear norm of the Hankel matrix. We adapt the SVT algorithm to the Hankel case and compute the linear operator that describes the constraints of the problem, together with its adjoint. We illustrate the utility of the decomposition algorithm in applications such as the LDA model and the ODF model.
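The pencil idea behind this decomposition can be sketched in the univariate case. The example below is an assumed illustration only: the thesis works in the multivariate setting and recovers the weights directly from the generalized eigenvectors, whereas this sketch solves a small Vandermonde least-squares system instead, and the function names are hypothetical.

```python
# Minimal univariate sketch (assumed) of the pencil method: recover frequencies
# f_j and weights w_j of a symbol sigma_k = sum_j w_j * f_j**k from Hankel
# matrices built on the moments sigma_0, sigma_1, ...
import numpy as np
from scipy.linalg import eig, hankel

def prony_pencil(moments, rank):
    """Recover (frequencies, weights) of a rank-`rank` exponential sum from moments."""
    n = len(moments)
    H = hankel(moments[: n // 2 + 1], moments[n // 2 :])   # full Hankel matrix H[i, j] = sigma_{i+j}
    H0 = H[:rank, :rank]                                   # unshifted sub-matrix
    H1 = H[1 : rank + 1, :rank]                            # sub-matrix shifted by one row
    freqs = eig(H1, H0, right=False)                       # generalized eigenvalues of the pencil (H1, H0)
    # Weights from a small Vandermonde least-squares fit to the moments
    # (the thesis instead reads them off the generalized eigenvectors).
    V = np.vander(freqs, N=n, increasing=True).T           # V[k, j] = freqs[j]**k
    weights = np.linalg.lstsq(V, moments, rcond=None)[0]   # imaginary parts are negligible for real data
    return freqs, weights

if __name__ == "__main__":
    f_true, w_true = np.array([0.9, 0.5, -0.3]), np.array([2.0, 1.0, 0.5])
    moments = np.array([np.sum(w_true * f_true**k) for k in range(12)])
    f_hat, w_hat = prony_pencil(moments, rank=3)
```

The key structural fact is that for an exponential sum the shifted Hankel sub-matrix equals the unshifted one multiplied by a diagonal matrix of frequencies in the Vandermonde basis, so the frequencies appear as generalized eigenvalues of the pencil.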
