About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Adaptive sparse coding and dictionary selection

Yaghoobi Vaighan, Mehrdad January 2010 (has links)
Sparse coding is the approximation/representation of signals with the minimum number of coefficients, using an overcomplete set of elementary functions. Such approximations/representations have found numerous applications in source separation, denoising, coding and compressed sensing. This thesis investigates the adaptation of the sparse approximation framework to the coding problem of signals. Open problems are the selection of appropriate models and their orders, coefficient quantization and the sparse approximation method. Some of these questions are addressed in this thesis and novel methods are developed. Because almost all recent communication and storage systems are digital, an easy method to compute quantized sparse approximations is introduced in the first part. The model selection problem is investigated next. The linear model can be adapted to better fit a given signal class; it can also be designed based on some a priori information about the model. Two novel dictionary selection methods are presented separately in the second part of the thesis. The proposed model adaptation algorithm, called Dictionary Learning with the Majorization Method (DLMM), is much more general than current methods. This generality allows it to be used with different constraints on the model. In particular, two important cases are considered in this thesis for the first time: Parsimonious Dictionary Learning (PDL) and Compressible Dictionary Learning (CDL). When the generative model order is not given, PDL not only adapts the dictionary to the given class of signals, but also reduces the model order redundancies. When a fast dictionary is needed, the CDL framework helps us find a dictionary which is adapted to the given signal class without greatly increasing the computation cost. Sometimes a priori information about the linear generative model is given in the form of a parametric function. Parametric Dictionary Design (PDD) generates a suitable dictionary for sparse coding using the parametric function. Basically, PDD finds a parametric dictionary with minimal dictionary coherence, which has been shown to be suitable for sparse approximation and exact sparse recovery. Theoretical analyses are accompanied by experiments that validate them. This research was primarily aimed at audio applications, as audio can be shown to have sparse structure. Therefore, most of the experiments are done using audio signals.
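The dictionary learning setting described above can be illustrated with a toy alternating scheme. The sketch below is generic (greedy sparse coding plus a normalized gradient step on the dictionary), not the thesis's DLMM algorithm; all dimensions and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(Y, D, k):
    """Greedy sparse coding: per signal, keep the k atoms with the largest
    correlations, then least-squares refit on that support."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        support = np.argsort(-np.abs(D.T @ Y[:, j]))[:k]
        sol, *_ = np.linalg.lstsq(D[:, support], Y[:, j], rcond=None)
        X[support, j] = sol
    return X

def dict_update(Y, D, X, step=0.1):
    """One gradient step on ||Y - D X||_F^2 followed by column
    normalization (a crude stand-in for a majorization-based update)."""
    D = D + step * (Y - D @ X) @ X.T
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)

# Synthetic data: 50 signals of dimension 8, 20-atom overcomplete dictionary.
Y = rng.standard_normal((8, 50))
D = rng.standard_normal((8, 20))
D /= np.linalg.norm(D, axis=0)

for _ in range(30):
    X = sparse_code(Y, D, k=3)
    D = dict_update(Y, D, X)

X = sparse_code(Y, D, k=3)          # final codes for the learned dictionary
print(np.linalg.norm(Y - D @ X))    # approximation residual
```

Alternating between a sparse coding step and a dictionary update is the common skeleton of dictionary learning methods; constraint variants such as PDL and CDL would change the dictionary update step.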
2

An Equivalence Between Sparse Approximation and Support Vector Machines

Girosi, Federico 01 May 1997 (has links)
In the first part of this paper we show a similarity between the Structural Risk Minimization (SRM) principle (Vapnik, 1982) and the idea of Sparse Approximation, as defined in (Chen, Donoho and Saunders, 1995) and (Olshausen and Field, 1996). We then focus on two specific (approximate) implementations of SRM and Sparse Approximation which have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and they require the solution of the same quadratic programming problem.
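The correspondence the abstract describes can be written down schematically. The notation below (f the target function, φ_i the dictionary elements, a_i the coefficients, ε the regularization weight) is assumed for illustration rather than taken from the paper:

```latex
% Sparse Approximation via Basis Pursuit De-Noising:
\min_{a}\; \frac{1}{2}\,\Big\| f - \sum_{i} a_i \varphi_i \Big\|^2
           + \epsilon \sum_{i} |a_i|

% Support Vector Machine regression minimizes an epsilon-insensitive loss
% plus a smoothness (kernel) norm; the paper's claim is that, for suitable
% kernels and parameters, both problems reduce to the same quadratic
% program and therefore yield the same solution.
```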
3

Implementation of the locally competitive algorithm on a field programmable analog array

Balavoine, Aurèle 17 November 2009 (has links)
Sparse approximation is an important class of optimization problems in signal and image processing applications. This thesis presents an analog solution to this problem, based on the Locally Competitive Algorithm (LCA). A Hopfield-network-like analog system, operating on sub-threshold currents, is proposed as a solution. The results of implementing the circuit components on the RASP2.8a chip, a Field Programmable Analog Array, are presented.
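The LCA referenced above can be simulated numerically. Below is a minimal sketch of its standard continuous-time dynamics integrated with Euler steps (soft-threshold activation, lateral inhibition through the dictionary Gram matrix), using an arbitrary random dictionary rather than the thesis's analog hardware; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(u, lam):
    """LCA activation: internal states below the threshold stay inactive."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Random unit-norm dictionary and a 2-sparse ground-truth code.
Phi = rng.standard_normal((10, 30))
Phi /= np.linalg.norm(Phi, axis=0)
a_true = np.zeros(30)
a_true[[3, 17]] = [1.0, -0.8]
y = Phi @ a_true

# Euler integration of the LCA dynamics: tau * du/dt = b - u - (G - I) a.
b = Phi.T @ y                 # feed-forward drive
G = Phi.T @ Phi               # lateral inhibition between correlated atoms
u = np.zeros(30)
lam, tau, dt = 0.1, 1.0, 0.01
for _ in range(5000):
    a = soft_threshold(u, lam)
    u += (dt / tau) * (b - u - (G - np.eye(30)) @ a)

a = soft_threshold(u, lam)    # steady-state sparse code
print(np.linalg.norm(y - Phi @ a))
```

At steady state the active nodes suppress correlated competitors, which is what makes the network's fixed point a sparse code rather than a dense least-squares fit.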
4

Sparse Methods for Hyperspectral Unmixing and Image Fusion

Bieniarz, Jakub 02 March 2016 (has links)
In recent years, the substantial increase in the number of spectral channels in optical remote sensing sensors has allowed more detailed spectroscopic analysis of objects on the Earth's surface. Modern hyperspectral sensors are able to sample the sunlight reflected from a target on the ground with hundreds of adjacent narrow spectral channels. However, the increased spectral resolution comes at the price of a lower spatial resolution; e.g., the forthcoming German hyperspectral sensor Environmental Mapping and Analysis Program (EnMAP) will have 244 spectral channels and a ground pixel size as large as 30 m x 30 m. The main aim of this thesis is to deal with the problem of reduced spatial resolution in hyperspectral sensors. This is addressed first as an unmixing problem, i.e., extraction and quantification of the spectra of pure materials mixed in a single pixel, and second as a resolution enhancement problem based on fusion of multispectral and hyperspectral imagery. This thesis proposes novel methods for hyperspectral unmixing using sparse approximation techniques and external spectral dictionaries which, unlike traditional least-squares-based methods, do not require a pure-material spectrum selection step and are thus able to simultaneously estimate the underlying active materials along with their respective abundances. However, previous work has shown that these methods suffer from some drawbacks, mainly intra-dictionary coherence. To improve the performance of sparse spectral unmixing, the use of a derivative transformation and a novel two-step group unmixing algorithm are proposed. Additionally, the spatial homogeneity of abundance vectors is exploited by introducing a multi-look model for spectral unmixing. Based on the above findings, a new method for fusion of hyperspectral images with higher-spatial-resolution multispectral images is proposed. The algorithm exploits the spectral information of the hyperspectral image and the spatial information of the multispectral image by means of sparse spectral unmixing to form a new hyperspectral image with high spatial and spectral resolution. The introduced method is robust when applied to highly mixed scenarios, as it relies on external spectral dictionaries. Both the proposed sparse spectral unmixing algorithms and the resolution enhancement approach are evaluated quantitatively and qualitatively. Algorithms developed in this thesis are significantly faster than state-of-the-art methods and yield better or similar results.
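The core sparse unmixing idea (fitting a pixel's spectrum as a nonnegative sparse combination of library spectra) can be sketched with a generic iterative shrinkage solver. The dictionary and parameters below are synthetic stand-ins, not the thesis's algorithms:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spectral library: 40 nonnegative spectra over 100 bands.
A = np.abs(rng.standard_normal((100, 40)))
A /= np.linalg.norm(A, axis=0)

# One pixel mixing two materials (abundances 0.7 and 0.3) plus noise.
x_true = np.zeros(40)
x_true[[5, 21]] = [0.7, 0.3]
y = A @ x_true + 0.001 * rng.standard_normal(100)

# Sparse unmixing: minimize 0.5*||y - A x||^2 + lam*||x||_1  s.t.  x >= 0,
# solved with projected iterative shrinkage (ISTA) steps.
lam = 0.005
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(40)
for _ in range(5000):
    x = x + step * (A.T @ (y - A @ x))   # gradient step on the data term
    x = np.maximum(x - step * lam, 0.0)  # shrinkage + nonnegativity

print(np.linalg.norm(y - A @ x))         # data-fit residual
```

Because the solver never needs a list of pure pixels, material selection and abundance estimation happen jointly, which is the advantage the abstract attributes to dictionary-based unmixing.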
5

Application of AAK theory for sparse approximation

Pototskaia, Vlada 16 October 2017 (has links)
No description available.
6

Configurable analog hardware for neuromorphic Bayesian inference and least-squares solutions

Shapero, Samuel Andre 10 January 2013 (has links)
Sparse approximation is a Bayesian inference program with a wide range of signal processing applications, such as the Compressed Sensing recovery used in medical imaging. Previous sparse coding implementations relied on digital algorithms whose power consumption and performance scale poorly with problem size, rendering them unsuitable for portable applications and a bottleneck in high-speed applications. A novel analog architecture implementing the Locally Competitive Algorithm (LCA) was designed and programmed onto a Field Programmable Analog Array (FPAA), using floating-gate transistors to set the analog parameters. A network of 6 coefficients was demonstrated to converge to similar values as a digital sparse approximation algorithm, but with better power and performance scaling. A rate-encoded spiking algorithm was then developed and shown to converge to similar values as the LCA. A second novel architecture was designed and programmed on an FPAA, implementing the spiking version of the LCA with integrate-and-fire neurons. A network of 18 neurons converged on similar values as a digital sparse approximation algorithm, with even better performance and power efficiency than the non-spiking network. Novel algorithms were created to increase floating-gate programming speed by more than two orders of magnitude and to reduce programming error from device mismatch. A new FPAA chip was designed and tested which allowed for rapid interfacing and additional improvements in accuracy. Finally, a neuromorphic chip was designed, containing 400 integrate-and-fire neurons and capable of converging on a sparse approximation solution in 10 microseconds, over 1000 times faster than the best digital solution.
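The rate-encoded integrate-and-fire building block mentioned above can be illustrated in a few lines. This is a minimal leaky integrate-and-fire simulation with arbitrary constants (not the thesis's circuit model), showing that a stronger input drive yields a higher firing rate:

```python
def lif_rate(drive, threshold=1.0, leak=1.0, dt=0.001, T=5.0):
    """Leaky integrate-and-fire neuron with a constant input current.
    Returns the firing rate, which encodes the drive strength."""
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (drive - leak * v)   # leaky membrane integration (Euler)
        if v >= threshold:             # fire and reset
            spikes += 1
            v = 0.0
    return spikes / T

print(lif_rate(3.0), lif_rate(1.5))    # higher drive -> higher rate
```

In a spiking LCA, each node's reconstruction coefficient is read out as such a firing rate, so the analog membrane dynamics replace the explicit multiply-accumulate steps of a digital solver.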
7

Separation of parameterized and delayed sources: application to spectroscopic and multispectral data

Mortada, Hassan 13 December 2018 (has links)
This work is motivated by photoelectron spectroscopy and the study of galaxy kinematics, where the data correspond respectively to a temporal sequence of spectra and to a multispectral image. The objective is to estimate the characteristics (amplitude, spectral position and shape parameter) of the peaks embedded in the spectra, as well as their evolution within the data. In the considered applications this evolution is slow, since two neighboring spectra are often very similar: this a priori knowledge is taken into account in the developed methods. The inverse problem is approached as a delayed source separation problem, where the spectra and the peaks are associated respectively with the mixtures and the sources. State-of-the-art methods are inadequate because they assume decorrelation or independence of the sources, which is not the case here. We take advantage of the knowledge about the sources to model them with a parametric function. We first propose an alternating least squares method: the shape parameters are estimated with the Levenberg-Marquardt algorithm, while the amplitudes and positions are estimated with an algorithm inspired by Orthogonal Matching Pursuit. A second method introduces a regularization term to account for the slow evolution of the positions; a new joint sparse approximation algorithm is then proposed. Finally, a third method constrains the evolution of the amplitudes, positions and shape parameters with B-spline functions, in order to guarantee a slow evolution consistent with the physics of the observed phenomena. The B-spline control points are estimated with a non-linear least squares algorithm. Results on synthetic and real data show that the proposed methods are more effective than state-of-the-art methods and as effective as a Bayesian method adapted to the problem, while being significantly faster.
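The Orthogonal Matching Pursuit step that inspires the amplitude/position estimation can be sketched with a dictionary of shifted peak atoms. This is generic OMP over hypothetical Gaussian peak shapes, not the thesis's parametric algorithm; peak widths and positions are illustrative:

```python
import numpy as np

def omp(y, D, k):
    """Orthogonal Matching Pursuit: repeatedly pick the atom most correlated
    with the residual, then least-squares refit on the selected support."""
    support, residual = [], y.copy()
    coeffs = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Dictionary of Gaussian peak atoms, one per candidate spectral position.
t = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.0, 1.0, 50)
D = np.stack([np.exp(-0.5 * ((t - c) / 0.02) ** 2) for c in centers], axis=1)
D /= np.linalg.norm(D, axis=0)

y = 2.0 * D[:, 10] + 1.0 * D[:, 35]   # a spectrum with two peaks
x = omp(y, D, k=2)
print(np.flatnonzero(x))               # recovered peak positions
```

Selecting an atom here amounts to estimating a peak position, and the least-squares refit gives the amplitudes, which is the division of labor the abstract describes.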
8

Restoration and separation of piecewise polynomial signals. Application to Atomic Force Microscopy

Duan, Junbo 15 November 2010 (has links)
This thesis addresses several inverse problems arising in sparse signal processing. The main contributions include the design of algorithms dedicated to the restoration and separation of sparse signals, and their application to force curve approximation in Atomic Force Microscopy (AFM), where the notion of sparsity is related to the number of discontinuity points in the signal (jumps, changes of slope, changes of curvature). From a methodological viewpoint, sub-optimal algorithms are proposed for the sparse approximation problem based on the l0 pseudo-norm: the Single Best Replacement (SBR) algorithm is an iterative "forward-backward" algorithm inspired by existing Bernoulli-Gaussian signal restoration algorithms, and the Continuation Single Best Replacement (CSBR) algorithm is an extension providing approximations at various sparsity levels. We also address the problem of separating sparse sources from delayed mixtures. The proposed algorithm is based on first applying CSBR to every mixture, followed by a matching procedure that pairs the peaks occurring across the different mixtures. Atomic Force Microscopy is a recent technology enabling the measurement of interaction forces between nano-objects. Force curve analysis relies on piecewise parametric models. We propose an algorithm that detects the regions of interest (the pieces) where each model holds and then estimates the physical parameters (elasticity, adhesion force, topography, etc.) in each region by least-squares optimization. We finally propose an alternative approach in which a force curve is modeled as a mixture of delayed sparse sources. The source signals and the delays are then sought across a large number of mixtures, since a force-volume image contains as many mixtures as pixels.
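The single-best-replacement idea (flip the one atom whose inclusion or removal most decreases an l0-penalized cost) can be sketched on a piecewise-constant toy signal with a step-atom dictionary. The cost function, dictionary and parameters below are illustrative, not the thesis's SBR implementation:

```python
import numpy as np

def sbr(y, D, lam, max_iter=50):
    """Single-best-replacement sketch for the l0-penalized cost
    J(S) = ||y - D_S x_S||^2 + lam*|S|: at each iteration, try flipping
    every atom in or out of the support and keep the best single flip."""
    n = D.shape[1]

    def cost(S):
        if not S.any():
            return float(y @ y)
        x, *_ = np.linalg.lstsq(D[:, S], y, rcond=None)
        r = y - D[:, S] @ x
        return float(r @ r) + lam * int(S.sum())

    S = np.zeros(n, dtype=bool)
    best = cost(S)
    for _ in range(max_iter):
        candidates = []
        for j in range(n):
            T = S.copy()
            T[j] = not T[j]
            candidates.append((cost(T), j))
        c, j = min(candidates)
        if c >= best:
            break                      # no single flip improves the cost
        best = c
        S[j] = not S[j]
    return S

# Piecewise-constant toy signal and a dictionary of (normalized) step atoms.
m = 60
D = np.tril(np.ones((m, m)))           # atom j: unit step starting at sample j
D /= np.linalg.norm(D, axis=0)
y = np.concatenate([np.zeros(20), 2.0 * np.ones(25), 0.5 * np.ones(15)])

S = sbr(y, D, lam=0.05)
print(np.flatnonzero(S))               # detected discontinuity positions
```

Because removals are tested alongside insertions, the algorithm can undo an early greedy mistake, which is the "forward-backward" behavior the abstract attributes to SBR.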
