21

Regularization for Sparseness and Smoothness : Applications in System Identification and Signal Processing

Ohlsson, Henrik January 2010 (has links)
In system identification, the Akaike Information Criterion (AIC) is a well-known method for balancing model fit against model complexity, and regularization can likewise be seen as putting a price on model complexity. In statistics and machine learning, regularization has gained popularity through modeling methods such as Support Vector Machines (SVM), ridge regression and the lasso. Regularization also shows up implicitly in Bayesian approaches to modeling, where it can be associated with the prior knowledge. Regularization has had a great impact on many applications, not least in clinical imaging. In breast cancer imaging, for example, the number of sensors is physically restricted, which leads to long scan times; regularization and sparsity can be used to reduce them. In Magnetic Resonance Imaging (MRI), the number of scans is physically limited, and regularization plays an important role in obtaining high-resolution images. Regularization shows up in a variety of situations and is a well-known technique for handling ill-posed problems and controlling overfitting. We focus on the use of regularization to obtain sparseness and smoothness, and discuss novel developments relevant to system identification and signal processing. In regularization for sparsity, a quantity is forced to contain elements equal to zero, i.e., to be sparse. The quantity could, for example, be the regression parameter vector of a linear regression model, in which case regularization yields a tool for variable selection. Sparsity has had a huge impact on neighboring disciplines, such as machine learning and signal processing, but rather limited effect on system identification. One of the major contributions of this thesis is therefore its new developments in system identification using sparsity. In particular, a novel method for the estimation of segmented ARX models using regularization for sparsity is presented. A technique for piecewise-affine system identification is also elaborated, along with several novel applications in signal processing. Another property that regularization can impose is smoothness. Requiring the relation between regressors and predictions to be a smooth function is a way to control overfitting. We are particularly interested in regression problems whose regressors are constrained to limited regions of the regressor space, e.g., a manifold. For this type of system we develop a new regression technique, Weight Determination by Manifold Regularization (WDMR). WDMR is inspired by applications in biology and developments in manifold learning, and uses regularization for smoothness to obtain smooth estimates. The use of regularization for smoothness in linear system identification is also discussed. The thesis also presents a real-time functional Magnetic Resonance Imaging (fMRI) bio-feedback setup, which has served as a proof of concept and as the foundation for several real-time fMRI studies.
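
As a concrete illustration of the sparsity mechanism described above (an l1 penalty on the regression parameter vector forcing elements to exactly zero), the following sketch solves a lasso problem with proximal-gradient (ISTA) iterations. It is a minimal example on synthetic data, not code from the thesis; the problem sizes, penalty weight lam and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]           # only 3 of 20 regressors matter
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 5.0                                  # l1 weight: larger -> sparser estimate
step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1/L, L = Lipschitz const. of the gradient

beta = np.zeros(p)
for _ in range(500):                       # ISTA: gradient step + soft-thresholding
    grad = X.T @ (X @ beta - y)
    z = beta - step * grad
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print(np.round(beta, 2))                   # most entries land exactly at zero
```

The exact zeros produced by the soft-thresholding step are what make the l1 penalty act as a variable-selection tool.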
22

Application of Compressive Sensing and Belief Propagation for Channel Occupancy Detection in Cognitive Radio Networks

Sadiq, Sadiq Jafar 25 August 2011 (has links)
Wide-band spectrum sensing is an approach for finding spectrum holes within a wide-band signal with less complexity and delay than conventional approaches. In this thesis, we propose four different algorithms for detecting holes in a wide-band spectrum and for finding the sparsity level of compressive signals. The first algorithm estimates the spectrum efficiently and uses this estimate to find the holes. The second detects the spectrum holes by reconstructing channel energies instead of the spectrum itself: the signal is fed into a bank of filters, and the energies of the filter outputs serve as the compressed measurements from which the signal energy is reconstructed. The third algorithm employs two information-theoretic methods to find the sparsity level of a compressive signal, and the last employs belief propagation for the same purpose.
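
For reference, the recovery problem underlying such algorithms is commonly written as follows; this is the standard compressive-sensing formulation in our own notation, not an excerpt from the thesis.

```latex
% y = \Phi x: compressed measurements of the (frequency-)sparse wide-band signal x
\hat{x} = \arg\min_{x}\, \|x\|_1
\quad \text{s.t.} \quad \|y - \Phi x\|_2 \le \epsilon,
\qquad
\hat{E}_i = \sum_{k \in \mathcal{B}_i} |\hat{x}_k|^2,
\qquad
\text{channel } i \text{ declared a hole} \iff \hat{E}_i < \tau,
```

where B_i indexes the frequency bins of channel i and tau is a detection threshold.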
24

Seismic data processing with curvelets: a multiscale and nonlinear approach.

Herrmann, Felix J., Wang, Deli, Hennenfent, Gilles, Moghaddam, Peyman P. January 2007 (has links)
In this abstract, we present a nonlinear curvelet-based sparsity promoting formulation of a seismic processing flow, consisting of the following steps: seismic data regularization and the restoration of migration amplitudes. We show that the curvelet’s wavefront detection capability and invariance under the migration-demigration operator lead to a formulation that is stable under noise and missing data.
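
The sparsity-promoting recovery step in such a flow is typically posed as a basis-pursuit-denoise problem; the notation below is ours and only sketches the general form:

```latex
% y: acquired data, R: restriction to the observed traces, C^H: curvelet synthesis
\tilde{x} = \arg\min_{x}\, \|x\|_1
\quad \text{s.t.} \quad \|y - R\,C^{H} x\|_2 \le \sigma,
\qquad
\tilde{d} = C^{H}\tilde{x},
```

so that the data are regularized by finding the sparsest curvelet coefficient vector consistent with the observations, exploiting the wavefront-detection property mentioned above.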
25

Deblurring with Framelets in the Sparse Analysis Setting

Danniels, Travis 23 December 2013 (has links)
In this thesis, algorithms for blind and non-blind motion deblurring of digital images are proposed. The non-blind algorithm is based on a convex program consisting of a data fitting term and a sparsity-promoting regularization term. The data fitting term is the squared l_2 norm of the residual between the blurred image and the latent image convolved with a known blur kernel. The regularization term is the l_1 norm of the latent image under a wavelet frame (framelet) decomposition. This convex program is solved with the first-order primal-dual algorithm proposed by Chambolle and Pock. The proposed blind deblurring algorithm is based on the work of Cai, Ji, Liu, and Shen. It works by embedding the proposed non-blind algorithm in an alternating minimization scheme and imposing additional constraints in order to deal with the challenging non-convex nature of the blind deblurring problem. Numerical experiments are performed on artificially and naturally blurred images, and both proposed algorithms are found to be competitive with recent deblurring methods.
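
In our notation (the thesis may use different symbols), the non-blind convex program reads

```latex
\min_{x}\; \tfrac{1}{2}\,\|k \ast x - b\|_2^2 \;+\; \lambda\,\|W x\|_1,
```

with b the blurred image, k the known blur kernel and W the framelet analysis operator. The Chambolle-Pock iteration for the generic problem min_x f(Kx) + g(x) is

```latex
y^{n+1} = \operatorname{prox}_{\sigma f^{*}}\!\left(y^{n} + \sigma K \bar{x}^{n}\right),
\qquad
x^{n+1} = \operatorname{prox}_{\tau g}\!\left(x^{n} - \tau K^{*} y^{n+1}\right),
\qquad
\bar{x}^{n+1} = x^{n+1} + \theta\,(x^{n+1} - x^{n}),
```

which converges for theta = 1 and sigma tau ||K||^2 <= 1.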
26

Model-based analysis of stability in networks of neurons

Panas, Dagmara January 2017 (has links)
Neurons, the building blocks of the brain, are an astonishingly capable type of cell. Collectively they can store, manipulate and retrieve biologically important information, allowing animals to learn and adapt to environmental changes. This universal adaptability is widely believed to be due to plasticity: the readiness of neurons to adjust their intrinsic properties and the strengths of their connections to other cells. It is through such modifications that associations between neurons can be made, giving rise to memory representations; for example, linking a neuron responding to the smell of pancakes with neurons encoding sweet taste and general gustatory pleasure. However, this malleability inherent to neuronal cells poses a dilemma from the point of view of stability: how is the brain able to maintain stable operation while in a state of constant flux? First of all, won't purely technical problems arise, akin to short-circuiting or runaway activity? And second, if neurons are so easily plastic and changeable, how can they provide a reliable description of the environment? Of course, evidence abounds to testify to the robustness of brains, both from everyday experience and from scientific experiments. How does this robustness come about? Firstly, many feedback control mechanisms are in place to ensure that neurons do not enter wild regimes of behaviour. These mechanisms are collectively known as homeostatic plasticity, since they ensure functional homeostasis through plastic changes. One well-known example is synaptic scaling, a type of plasticity ensuring that a single neuron does not get overexcited by its inputs: whenever learning occurs and connections between cells get strengthened, all the neuron's inputs are subsequently downscaled to maintain a stable level of net incoming signals. And secondly, as hinted at by other researchers and directly explored in this work, networks of neurons exhibit a property present in many complex systems called sloppiness: they produce very similar behaviour under a wide range of parameters. This principle appears to operate on many scales and is highly useful (perhaps even unavoidable), as it permits variation between individuals and robustness to mutations and developmental perturbations: since there are many combinations of parameters resulting in similar operational behaviour, a disturbance of a single parameter, or even several, need not lead to dysfunction. It is also this same property that permits networks of neurons to flexibly reorganize and learn without becoming unstable. As an illustrative example, consider encountering maple syrup for the first time and associating it with pancakes; thanks to sloppiness, this new link can be added without causing the network to fire excessively. As has been found in previous experimental studies, consistent multi-neuron activity patterns arise across organisms, despite interindividual differences in the firing profiles of single cells and in the precise values of connection strengths. Such activity patterns, as has furthermore been shown, can be maintained despite pharmacological perturbation, as neurons compensate for the perturbed parameters by adjusting others; however, not all pharmacological perturbations can be thus amended.
In the present work, it is for the first time directly demonstrated that groups of neurons are, as a rule, sloppy; their collective parameter space is mapped to reveal the sensitive and insensitive parameter combinations; and it is shown that the majority of spontaneous fluctuations over time primarily affect the insensitive parameters. To demonstrate the above, hippocampal neurons of the rat were grown in culture over multi-electrode arrays and recorded from for several days. Subsequently, statistical models were fit to the activity patterns of groups of neurons to obtain a mathematically tractable description of their collective behaviour at each time point. These models provide robust fits to the data and allow for a principled sensitivity analysis using information-theoretic tools. This analysis revealed that groups of neurons tend to be governed by a few leader units. Furthermore, it appears that it was the stability of these key neurons and their connections that ensured the stability of collective firing patterns across time. The remaining units, in turn, were free to undergo plastic changes without risking destabilizing the collective behaviour. Together with what has been observed by other researchers, the findings of the present work suggest that the impressively adaptable yet robust functioning of the brain is made possible by the interplay between feedback control of a few crucial properties of neurons and the generally sloppy design of networks. It has, in fact, been hypothesised that any complex system subject to evolution is bound to rely on such a design: in order to cope with natural selection under changing environmental circumstances, it would be difficult for a system to rely on tightly controlled parameters. It might be, therefore, that all life is just, by nature, sloppy.
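
Sloppiness is commonly quantified through the eigenvalue spectrum of the Fisher information matrix or cost Hessian: the eigenvalues spread over many decades, giving a few stiff parameter combinations and many sloppy ones. The sketch below illustrates the computation on a toy sum-of-exponentials model, a textbook sloppy system; it is not the neuronal model fitted in the thesis, and all numbers are arbitrary.

```python
import numpy as np

t = np.linspace(0.0, 5.0, 50)
theta0 = np.array([1.0, 0.3, 1.0, 3.0])     # [A1, k1, A2, k2], the "true" parameters

def model(theta):
    A1, k1, A2, k2 = theta
    return A1 * np.exp(-k1 * t) + A2 * np.exp(-k2 * t)

y_obs = model(theta0)

def cost(theta):                            # least-squares cost, minimized at theta0
    r = model(theta) - y_obs
    return 0.5 * r @ r

# numerical Hessian of the cost at the optimum (central differences)
h, p = 1e-3, len(theta0)
H = np.zeros((p, p))
for i in range(p):
    for j in range(p):
        e_i = np.eye(p)[i] * h
        e_j = np.eye(p)[j] * h
        H[i, j] = (cost(theta0 + e_i + e_j) - cost(theta0 + e_i - e_j)
                   - cost(theta0 - e_i + e_j) + cost(theta0 - e_i - e_j)) / (4 * h * h)

eigvals = np.linalg.eigvalsh(H)
print(eigvals)   # spans several orders of magnitude: few stiff, many sloppy directions
```

Directions with large eigenvalues are the tightly controlled "leader" combinations; directions with tiny eigenvalues can fluctuate freely without changing the collective behaviour.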
27

Représentations parcimonieuses pour les signaux multivariés / Sparse representations for multivariate signals

Barthelemy, Quentin 13 May 2013 (has links)
In this thesis, we study approximation and learning methods that provide sparse representations. These methods make it possible to analyze highly redundant databases using learned dictionaries of atoms. Being adapted to the data under study, learned dictionaries give better representation quality than classical dictionaries whose atoms are defined analytically. We consider in particular multivariate signals arising from the simultaneous acquisition of several quantities, such as EEG signals or 2D and 3D motion signals. We extend sparse representation methods to the multivariate model in order to take into account the interactions between the components acquired simultaneously. This model is more flexible than the usual multichannel model, which imposes a rank-1 hypothesis. We study models of invariant representations: invariance to temporal shift, invariance to rotation, and so on. By adding extra degrees of freedom, each kernel is potentially replicated into a family of atoms, translated to every sample, rotated to every orientation, etc. A dictionary of invariant kernels thus generates a highly redundant dictionary of atoms, ideal for representing the redundant data under study. All these invariances require methods adapted to the corresponding models. Shift-invariance in time is an essential property for the study of temporal signals with natural temporal variability. In the 2D and 3D rotation-invariant case, we observe that the non-oriented approach outperforms the oriented one, even when the data are not rotated. Indeed, the non-oriented model detects the invariants of the data and ensures robustness to rotation when the data are rotated. We also observe the reproducibility of sparse decompositions on a learned dictionary. This generative property is explained by the fact that dictionary learning is a generalization of K-means. Moreover, our representations possess numerous invariances, which is ideal for classification. We therefore study how to perform classification adapted to the shift-invariance model, using pooling functions that are consistent under translation.
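
The remark that dictionary learning generalizes K-means is visible in the standard alternation between sparse coding and dictionary update: constraining each signal to a single atom with unit coefficient recovers exactly the K-means assignment and centroid steps. The sketch below shows this alternation in the basic single-variate setting, using orthogonal matching pursuit for the coding step and the method of optimal directions (MOD) for the update; it is a generic illustration, not the multivariate, shift- or rotation-invariant algorithms of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, N, s = 20, 40, 500, 3                     # signal dim., atoms, signals, sparsity

# synthetic training signals, each a combination of s atoms of a hidden dictionary
D_true = rng.standard_normal((n, K))
D_true /= np.linalg.norm(D_true, axis=0)
A_true = np.zeros((K, N))
for j in range(N):
    A_true[rng.choice(K, s, replace=False), j] = rng.standard_normal(s)
X = D_true @ A_true

def omp(D, x, s):
    """Orthogonal matching pursuit: greedily select s atoms, refit by least squares."""
    support, r = [], x.copy()
    for _ in range(s):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        r = x - D[:, support] @ coef
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a

D = rng.standard_normal((n, K))                  # random initial dictionary
D /= np.linalg.norm(D, axis=0)
for _ in range(30):
    A = np.column_stack([omp(D, X[:, j], s) for j in range(N)])   # sparse coding step
    D = X @ np.linalg.pinv(A)                                     # MOD dictionary update
    D /= np.linalg.norm(D, axis=0) + 1e-12                        # renormalize atoms
```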
28

Multiscale local polynomial transforms in smoothing and density estimation

Amghar, Mohamed 22 December 2017 (has links)
A major challenge in multiscale nonlinear estimation methods, such as wavelet thresholding, is the extension of these methods to settings where the observations are irregular and non-equidistant. When applying such techniques to data smoothing or density estimation, it is crucial to work in a function space that imposes a certain degree of regularity. We therefore follow a different approach, using the so-called lifting scheme. In order to combine regularity with good numerical conditioning, we adopt a scheme similar to the Laplacian pyramid, which can be regarded as a slightly redundant wavelet transform. Whereas the classical lifting scheme relies on interpolation as its basic operation, this scheme allows the use of smoothing, for instance with local polynomials, and the kernel of the smoothing operation is chosen in a multiscale manner. The first chapter of this project develops the multiscale local polynomial transform, which combines the advantages of local polynomial smoothing with the sparsity of a multiscale decomposition. The contribution of this part is twofold. First, it focuses on the bandwidths used throughout the transform; these bandwidths act as user-controlled scales in a multiscale analysis, which is of particular interest in the case of non-equidistant data. This part presents both an optimal likelihood-based bandwidth selection and a fast heuristic approach. The second contribution combines local polynomial smoothing with orthogonal prefilters in order to reduce the variance of the reconstruction. The second chapter addresses density estimation through the multiscale local polynomial transform, proposing a more advanced reconstruction, called weighted reconstruction, to control the propagation of the variance. The last chapter extends the multiscale local polynomial transform to the bivariate case, and enumerates several advantages of this transform (sparsity, no triangulations) compared with the classical two-dimensional wavelet transform.
29

Generalised Bayesian matrix factorisation models

Mohamed, Shakir January 2011 (has links)
Factor analysis and related models for probabilistic matrix factorisation are of central importance to the unsupervised analysis of data, with a colourful history more than a century long. Probabilistic models for matrix factorisation allow us to explore the underlying structure in data, and have relevance in a vast number of application areas including collaborative filtering, source separation, missing data imputation, gene expression analysis, information retrieval, computational finance and computer vision, amongst others. This thesis develops generalisations of matrix factorisation models that advance our understanding and enhance the applicability of this important class of models. The generalisation of models for matrix factorisation focuses on three concerns: widening the applicability of latent variable models to the diverse types of data that are currently available; considering alternative structural forms in the underlying representations that are inferred; and including higher order data structures into the matrix factorisation framework. These three issues reflect the reality of modern data analysis, and we develop new models that allow for a principled exploration and use of data in these settings. We place emphasis on Bayesian approaches to learning and the advantages that come with the Bayesian methodology. Our point of departure is a generalisation of latent variable models to members of the exponential family of distributions. This generalisation allows for the analysis of data that may be real-valued, binary, counts, non-negative or a heterogeneous set of these data types. The model unifies various existing models and constructs for unsupervised settings, the complementary framework to the generalised linear models in regression. Moving to structural considerations, we develop Bayesian methods for learning sparse latent representations. We define ideas of weakly and strongly sparse vectors and investigate the classes of prior distributions that give rise to these forms of sparsity, namely the scale-mixture of Gaussians and the spike-and-slab distribution. Based on these sparsity favouring priors, we develop and compare methods for sparse matrix factorisation and present the first comparison of these sparse learning approaches. As a second structural consideration, we develop models with the ability to generate correlated binary vectors. Moment-matching is used to allow binary data with specified correlation to be generated, based on dichotomisation of the Gaussian distribution. We then develop a novel and simple method for binary PCA based on Gaussian dichotomisation. The third generalisation considers the extension of matrix factorisation models to multi-dimensional arrays of data that are increasingly prevalent. We develop the first Bayesian model for non-negative tensor factorisation and explore the relationship between this model and the previously described models for matrix factorisation.
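
The two families of sparsity-favouring priors referred to above have standard forms, written here in our own notation as a reminder rather than as an excerpt from the thesis: the spike-and-slab prior yields exact zeros ("strong" sparsity), while the scale-mixture of Gaussians yields heavy-tailed shrinkage ("weak" sparsity).

```latex
% spike-and-slab: each weight is zero with probability 1 - \pi
p(w_j) = \pi\,\mathcal{N}(w_j \mid 0, \sigma^2) + (1 - \pi)\,\delta_0(w_j),
\qquad
% scale-mixture of Gaussians: a Gaussian with a random variance \gamma_j
p(w_j) = \int \mathcal{N}(w_j \mid 0, \gamma_j)\, p(\gamma_j)\, d\gamma_j .
```

Different choices of the mixing density p(gamma_j) recover familiar cases such as the Laplace or Student-t prior.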
30

Interactions entre rang et parcimonie en estimation pénalisée, et détection d'objets structurés / Interactions between rank and sparsity in penalized estimation, and detection of structured objects

Savalle, Pierre-André 21 October 2014 (has links)
This thesis is organized in two independent parts. The first part focuses on convex matrix estimation problems in which rank and sparsity are taken into account simultaneously. In the context of graphs with community structure, a common assumption is that the underlying adjacency matrix is block-diagonal in an appropriate basis. However, such graphs are usually far from complete, so their adjacency matrices are also inherently sparse, which suggests that combining the sparsity hypothesis with the low-rank hypothesis may model such objects more accurately. To this end, we propose and analyze a convex penalty that promotes low rank and high sparsity at the same time. Although the low-rank hypothesis reduces overfitting by decreasing the modeling capacity of a matrix model, the opposite may be desirable when enough data are available. We study such an example in the context of localized multiple kernel learning, which extends multiple kernel learning by allowing each kernel to select different support vectors. In this framework, multiple kernel learning corresponds to a rank-one estimator, while higher-rank estimators have been observed to improve generalization performance. We propose a novel family of large-margin methods for this problem that, unlike previous methods, are both convex and theoretically grounded.
The second part of the thesis is about the detection of objects or signals exhibiting combinatorial structure, and we present two such problems. First, we consider detection in the statistical hypothesis-testing sense, in models where anomalous signals correspond to correlated values at different sensors. In most existing work, detection procedures are provided with a full sample from all the sensors; the experimenter, however, may have the capacity to make targeted measurements in an online and adaptive manner, and we investigate such adaptive sensing procedures. Finally, we consider the task of identifying and localizing objects in images, an important problem in computer vision where hand-crafted features are usually used. Following recent successes in learning ad hoc representations for similar problems, we integrate deformable part models with high-dimensional features from convolutional neural networks, and show that this significantly decreases the error rates of existing part-based models.
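
A convex penalty promoting low rank and sparsity simultaneously is typically built as a weighted combination of the element-wise l1 norm and the nuclear norm; the form below is an illustrative template in our notation, not necessarily the exact penalty analyzed in the thesis.

```latex
\hat{A} = \arg\min_{A}\; \ell(A) \;+\; \lambda\left(\gamma\,\|A\|_1 + (1-\gamma)\,\|A\|_{*}\right),
\qquad
\|A\|_1 = \sum_{i,j}|A_{ij}|,
\quad
\|A\|_{*} = \sum_{k}\sigma_k(A),
```

where \ell is a convex loss, sigma_k(A) are the singular values of A, and gamma trades off sparsity against low rank.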
