111

Classification in high dimensional feature spaces / by H.O. van Dyk

Van Dyk, Hendrik Oostewald January 2009 (has links)
In this dissertation we developed theoretical models to analyse Gaussian and multinomial distributions. The analysis focuses on classification in high dimensional feature spaces and provides a basis for dealing with issues such as data sparsity and feature selection for Gaussian and multinomial distributions, two frequently used models in high dimensional applications. A Naïve Bayesian philosophy is followed to deal with issues associated with the curse of dimensionality. The core treatment of the Gaussian and multinomial models consists of finding analytical expressions for classification error performance. Exact analytical expressions were found for calculating the error rates of binary class systems with Gaussian features of arbitrary dimensionality, using any type of quadratic decision boundary (except for degenerate paraboloidal boundaries). Similarly, computationally inexpensive (and approximate) analytical error rate expressions were derived for classifiers with multinomial models. Additional issues with regard to the curse of dimensionality that are specific to multinomial models (feature sparsity) were addressed and tested on a text-based language identification problem covering all eleven official languages of South Africa. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2009.
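As a pointer to what such closed-form error expressions look like, the sketch below works out the simplest special case (not the dissertation's general quadratic-boundary result): two equiprobable Gaussian classes with a shared covariance, where the optimal boundary is linear and the Bayes error reduces to Φ(−Δ/2), Δ being the Mahalanobis distance between the class means. All parameter values are illustrative; the closed form is verified by Monte Carlo.

```python
import numpy as np
from math import erf, sqrt

def gaussian_bayes_error(mu0, mu1, cov):
    """Exact Bayes error for two equiprobable Gaussians with shared covariance:
    Phi(-delta/2), where delta is the Mahalanobis distance between the means."""
    d = np.asarray(mu1, float) - np.asarray(mu0, float)
    delta = sqrt(d @ np.linalg.solve(cov, d))
    return 0.5 * (1.0 - erf(delta / (2.0 * sqrt(2.0))))

rng = np.random.default_rng(0)
dim = 10                                   # arbitrary dimensionality
mu0, mu1, cov = np.zeros(dim), np.full(dim, 0.5), np.eye(dim)

# Monte Carlo check: classify with the optimal (here linear) decision rule.
n = 200_000
x0 = rng.multivariate_normal(mu0, cov, n)
x1 = rng.multivariate_normal(mu1, cov, n)
w = np.linalg.solve(cov, mu1 - mu0)        # discriminant direction
t = w @ (mu0 + mu1) / 2                    # midpoint threshold
mc_err = 0.5 * ((x0 @ w > t).mean() + (x1 @ w <= t).mean())
print(gaussian_bayes_error(mu0, mu1, cov), mc_err)   # the two should agree
```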
112

Channel estimation techniques applied to massive MIMO systems using sparsity and statistics approaches

Araújo, Daniel Costa 29 September 2016 (has links)
ARAÚJO, D. C. Channel estimation techniques applied to massive MIMO systems using sparsity and statistics approaches. 2016. 124 f. Tese (Doutorado em Engenharia de Teleinformática)–Centro de Tecnologia, Universidade Federal do Ceará, Fortaleza, 2016. / Massive MIMO has the potential to greatly increase system spectral efficiency by employing many individually steerable antenna elements at the base station (BS). This potential can only be achieved if the BS has sufficient channel state information (CSI); how to acquire it depends on the duplexing mode employed by the communication system. Frequency division duplexing (FDD) is currently the most widely used mode in wireless communication systems, but the pilot overhead needed to estimate the channel grows with the number of antennas, which poses a major challenge to implementing massive MIMO with an FDD protocol. To enable the two to operate together, this thesis tackles the channel estimation problem with methods that exploit a compressed version of the massive MIMO channel while keeping high estimation accuracy. Two approaches are used to achieve such compression: sparsity and second-order statistics. For the sparsity-based techniques, a compressive sensing (CS) framework is used to extract a sparse representation of the channel, investigated first for a flat channel and then for a frequency-selective one. In the flat case, we show that the Cramér-Rao lower bound (CRLB) for the problem is a function of the pilot sequences and leads to pilot designs that form a Grassmannian matrix. In the frequency-selective case, a novel estimator combining CS and tensor analysis is derived: it uses the measurements from the pilot subcarriers to estimate a sparse tensor representation of the channel and, assuming a Tucker3 model, maps the estimated sparse tensor to a full one that describes the spatial-frequency channel response. The thesis also investigates the problem of updating the sparse basis that arises when the user is moving, proposing an algorithm that tracks the arrival and departure directions using very few pilots. Besides the sparsity-based techniques, the thesis investigates channel estimation with a statistical approach, proposing a new hybrid beamforming (HB) architecture that spatially multiplexes the pilot sequences and reduces the overhead. More specifically, the new solution creates a set of beams that is calculated jointly with the channel estimator and the pilot power allocation under the minimum mean square error (MMSE) criterion. We show that this enhances estimation performance in low signal-to-noise ratio (SNR) scenarios.
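To make the sparsity idea concrete, here is a minimal sketch assuming a simple narrowband uplink model (a generic compressive sensing baseline, not the thesis's Grassmannian or tensor-based estimators): a channel with few propagation paths is approximately sparse in the DFT (angular) domain, so it can be recovered from far fewer pilot measurements than antennas using orthogonal matching pursuit.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_paths, n_pilot = 64, 3, 20        # antennas, paths, pilot measurements

# Ground truth: few paths -> sparse coefficients in a unitary DFT dictionary.
F = np.fft.fft(np.eye(n_ant)) / np.sqrt(n_ant)
x_true = np.zeros(n_ant, complex)
idx = rng.choice(n_ant, n_paths, replace=False)
x_true[idx] = rng.standard_normal(n_paths) + 1j * rng.standard_normal(n_paths)
h = F @ x_true                             # antenna-domain channel

# Compressed pilot observations y = P h + noise, with n_pilot << n_ant.
P = (rng.standard_normal((n_pilot, n_ant))
     + 1j * rng.standard_normal((n_pilot, n_ant))) / np.sqrt(2 * n_pilot)
y = P @ h + 0.01 * (rng.standard_normal(n_pilot) + 1j * rng.standard_normal(n_pilot))

# Orthogonal matching pursuit on the effective dictionary A = P F.
A, residual, support = P @ F, y.copy(), []
for _ in range(n_paths):
    support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

h_hat = F[:, support] @ coef
print("relative error:", np.linalg.norm(h_hat - h) / np.linalg.norm(h))
```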
113

Desconvolução não-supervisionada baseada em esparsidade / Unsupervised deconvolution based on sparsity

Fernandes, Tales Gouveia January 2016 (has links)
Advisor: Prof. Dr. Ricardo Suyama / Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia da Informação, 2016. / This work analyzes the problem of unsupervised signal deconvolution, focusing on the sparse character of the signals involved. Unsupervised deconvolution resembles, in many respects, the problem of blind source separation, which consists essentially of estimating signals from observed versions that are mixtures of the original signals, referred to simply as sources. To perform unsupervised deconvolution, it is necessary to exploit characteristics of the signals and/or the system to help solve the problem. One such characteristic, used in this work, is sparsity: the notion that all the information in a signal and/or system is concentrated in a small number of values, which carry the actual information one wishes to analyze about the signal or the system. In this context, there are criteria that establish sufficient conditions on the signals and/or systems involved to guarantee their deconvolution, and the recovery algorithms rely on these criteria, based on the sparse character of the signals. The work compares the convergence of the algorithms in a few specific scenarios, each defining the signal and the system used. Finally, the simulation results give a good picture of the behavior of the different algorithms and of their viability for the problem of deconvolving sparse signals.
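As a sketch of how sparsity drives deconvolution, the toy example below solves the simpler non-blind problem (the known-filter case, unlike the unsupervised setting studied in the dissertation) with the iterative soft-thresholding algorithm (ISTA): a gradient step on the data-fit term followed by the L1 proximal step that enforces sparsity. Signal sizes and the penalty weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 256, 8

# Sparse source (a few spikes) observed through a known smoothing filter.
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)
h = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)        # blur kernel
H = np.column_stack([np.convolve(np.eye(n)[:, i], h, mode="same") for i in range(n)])
x = H @ s_true + 0.01 * rng.standard_normal(n)

# ISTA for  min_s 0.5*||x - H s||^2 + lam*||s||_1
lam = 0.05
L = np.linalg.norm(H, 2) ** 2                           # Lipschitz constant
s_hat = np.zeros(n)
for _ in range(500):
    z = s_hat - H.T @ (H @ s_hat - x) / L               # gradient step
    s_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("true support:     ", np.sort(np.nonzero(s_true)[0]))
print("recovered support:", np.sort(np.nonzero(np.abs(s_hat) > 0.5)[0]))
```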
114

O Método Primal Dual Barreira Logarítmica aplicado ao problema de fluxo de carga ótimo / Optimal power flow by a Logarithmic-Barrier Primal-Dual method

Alessandra Macedo de Souza 18 February 1998 (has links)
In this work, an interior-point algorithm is presented for the solution of the optimal power flow (OPF) problem. The approach proposed here is the logarithmic-barrier primal-dual method: the inequality constraints of the OPF problem are transformed into equalities through slack variables, which are incorporated into the objective function via the logarithmic barrier function. The sparsity of the Lagrangian matrix is exploited, and the factorization process is carried out by elements rather than by submatrices. Numerical results of tests on systems with 3, 14, 30, and 118 buses are presented to demonstrate the efficiency of the method.
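A minimal numerical sketch of the log-barrier mechanics on a toy convex problem (nothing power-flow-specific; the problem, values, and iteration counts are illustrative): each inequality gets a slack that enters the objective through a logarithmic barrier, the barrier subproblem is solved by Newton's method, and the barrier weight μ is driven toward zero.

```python
import numpy as np

# Toy problem:  min (x1-2)^2 + (x2-1)^2   s.t.  x1 + x2 <= 2,  x1 >= 0,  x2 >= 0.
# Each inequality g_i(x) <= 0 gets a slack s_i = -g_i(x) > 0 that enters the
# objective through the logarithmic barrier  -mu * sum(log s_i).
J = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])    # rows: dg_i/dx

def newton_step(x, mu):
    s = -np.array([x[0] + x[1] - 2.0, -x[0], -x[1]])     # slacks (must stay > 0)
    grad = 2.0 * (x - np.array([2.0, 1.0])) + mu * (J.T @ (1.0 / s))
    hess = 2.0 * np.eye(2) + mu * (J.T @ np.diag(1.0 / s**2) @ J)
    return x - np.linalg.solve(hess, grad)

x, mu = np.array([0.5, 0.5]), 1.0        # strictly feasible start
for _ in range(25):                       # outer loop: shrink the barrier weight
    for _ in range(20):                   # inner loop: Newton on the barrier problem
        x = newton_step(x, mu)
    mu *= 0.5                             # gentle decrease keeps Newton stable
print(x)                                  # -> approx [1.5, 0.5], the constrained optimum
```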
115

3D Knowledge-based Segmentation Using Sparse Hierarchical Models: Contribution and Applications in Medical Imaging / Segmentation d'images 3D avec des modèles hiérarchiques et parcimonieux : applications et contributions en imagerie médicale

Essafi, Salma 12 May 2010 (has links)
This thesis is devoted to three-dimensional shape analysis and to the segmentation of human skeletal muscle in the context of myopathies and their treatment; the goal is a computer-aided diagnosis system dedicated to the human skeletal muscle. We study the local and global structural characteristics of muscles and devise methods for muscle segmentation, the consistent localization of positions in the anatomy, and navigation within muscle data across patients. Currently, diagnosis and follow-up examinations during therapy of myopathies are typically performed by biopsy, which has several disadvantages: it is invasive, covers only a small muscle region, is mainly restricted to diagnostic purposes, and is not suitable for follow-up evaluation. To enable non-invasive imaging modalities such as MRI to serve as a virtual biopsy, we develop the following methods. First, a Sparse Shape Model: a novel approach to modeling shape variation that encodes sparsity, exploits geometric redundancy, and accounts for the different degrees of local variation and image support in the data. It learns an annotated statistical model of shapes and local textures from T1-weighted MRI of the calf and derives a reduced representation from which the muscle anatomy can be reconstructed on a test example, making it possible to model and localize muscles that exhibit sparsely distributed salient imaging features and heterogeneous shape variability. Second, we extend the shape representation of 3D structures using diffusion wavelets; unlike state-of-the-art methods, the learning phase optimizes the wavelet coefficients as well as their number and positions, and the resulting model can represent shape variation, exploit continuous inter-dependencies of arbitrary topology in the shape data, handle hierarchies in the search space, and encode complex geometric and photometric dependencies of the structure of interest. We then explore several approaches for the shape-model search and for appearance representation based on boosting techniques and canonical correlation analysis. Finally, we present a robust diffusion-wavelet technique that integrates our two shape-modeling approaches into an enhanced sparse wavelet-based method. The approaches are validated on two medical imaging data sets that exhibit the properties they target: T1-weighted MRI data of full calf muscles and computed tomography data of the left heart ventricle.
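The thesis's models are considerably richer (sparse texture support, diffusion wavelets), but the following sketch of a plain linear statistical shape model conveys the basic mechanism they build on: learn a mean shape and a few modes of variation from training landmark sets, then reconstruct an unseen shape from a low-dimensional code. The synthetic ellipse data stands in for aligned anatomical landmarks and is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_shapes, n_landmarks = 40, 30

# Synthetic training set: ellipse contours with varying radii, standing in
# for aligned anatomical landmark sets (x-coords then y-coords per shape).
t = np.linspace(0, 2 * np.pi, n_landmarks, endpoint=False)
shapes = np.stack([
    np.concatenate([(1.0 + 0.3 * rng.standard_normal()) * np.cos(t),
                    (0.6 + 0.2 * rng.standard_normal()) * np.sin(t)])
    + 0.01 * rng.standard_normal(2 * n_landmarks)
    for _ in range(n_shapes)])

# Statistical shape model: mean shape plus principal modes of variation.
mean = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
modes = Vt[:2]                            # two modes suffice for this family

# Reconstruct an unseen shape from its 2-dimensional code.
new = np.concatenate([1.25 * np.cos(t), 0.5 * np.sin(t)])
code = modes @ (new - mean)
recon = mean + modes.T @ code
print("relative reconstruction error:",
      np.linalg.norm(recon - new) / np.linalg.norm(new))
```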
116

Optimální metody výměny řídkých dat v senzorové síti / Optimal methods for sparse data exchange in sensor networks

Valová, Alena January 2017 (has links)
This thesis focuses on object tracking by a decentralized sensor network using fusion-center-based and consensus-based distributed particle filters. The model includes clutter as well as missed detections of the object. The approach exploits the sparsity of the global likelihood function: with an appropriate sparse approximation and a suitable dictionary selection, communication requirements in the decentralized sensor network can be reduced significantly. The thesis presents a design of methods for exchanging sparse data in the sensor network, together with a comparison of the proposed methods in terms of accuracy and energy requirements.
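A stripped-down sketch of the fusion idea (omitting clutter, missed detections, and the sparse dictionary compression, all of which the thesis handles): several range sensors track a 1-D object with a bootstrap particle filter, and the global log-likelihood of each particle is the sum of per-sensor terms, which is exactly the quantity a consensus scheme would let every node compute by local averaging.

```python
import numpy as np

rng = np.random.default_rng(4)
n_steps, n_particles = 30, 500
sensor_pos = np.array([0.0, 10.0, 20.0, 30.0])
sigma_q, sigma_r = 0.5, 1.0                        # process / measurement noise

x_true = 5.0
particles = rng.normal(5.0, 2.0, n_particles)
for _ in range(n_steps):
    x_true += 1.0 + sigma_q * rng.standard_normal()
    z = np.abs(sensor_pos - x_true) + sigma_r * rng.standard_normal(len(sensor_pos))

    particles += 1.0 + sigma_q * rng.standard_normal(n_particles)  # propagate
    # Global log-likelihood = sum over sensors; a consensus-based network
    # obtains this sum distributedly via average consensus.
    loglik = sum(-0.5 * ((z[i] - np.abs(sensor_pos[i] - particles)) / sigma_r) ** 2
                 for i in range(len(sensor_pos)))
    w = np.exp(loglik - loglik.max())
    w /= w.sum()
    estimate = particles @ w

    cum = np.cumsum(w)                             # systematic resampling
    cum[-1] = 1.0
    u = (rng.random() + np.arange(n_particles)) / n_particles
    particles = particles[np.searchsorted(cum, u)]

print("final truth vs estimate:", round(x_true, 2), round(estimate, 2))
```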
117

Restaurace audiosignálů založená na řídkých reprezentacích / Audio restoration based on sparse signal representations

Záviška, Pavel January 2017 (has links)
This Master's thesis deals with the problem of audio clipping and the application of the sparse-representations model to the task of declipping. First, the general theory of clipping is described, followed by a brief overview of existing methods and of the theory of sparse signal representations in bases and, more generally, frames. Subsequently, two declipping methods based on sparse representations are introduced: the first uses a generic proximal algorithm for convex optimization, the second the Douglas-Rachford algorithm. Both methods were implemented in the Matlab environment, and the results are evaluated in terms of SNR, PEMO-Q, and subjective listening tests.
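The sketch below mirrors the second approach in spirit: Douglas-Rachford iterations alternating between (i) projection onto the set of signals consistent with the clipped observation and (ii) soft thresholding in an orthonormal DCT domain. It is a Python toy on a synthetic DCT-sparse signal, not the thesis's Matlab implementation; the clipping threshold, λ, and iteration count are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(5)
n, theta = 512, 0.6                        # length, clipping threshold

# Signal that is exactly sparse in the DCT domain, then hard-clipped.
c_true = np.zeros(n)
c_true[[10, 37]] = [8.0, 6.0]
x = idct(c_true, norm="ortho")
y = np.clip(x, -theta, theta)
reliable = np.abs(y) < theta               # samples the clipper left untouched

def proj_consistent(z):
    """Project onto signals consistent with the clipped observation."""
    out = z.copy()
    out[reliable] = y[reliable]
    hi, lo = ~reliable & (y >= theta), ~reliable & (y <= -theta)
    out[hi] = np.maximum(z[hi], theta)     # clipped high -> at least +theta
    out[lo] = np.minimum(z[lo], -theta)
    return out

def prox_l1_dct(z, lam):
    """Soft threshold in the orthonormal DCT domain (L1 proximal step)."""
    c = dct(z, norm="ortho")
    return idct(np.sign(c) * np.maximum(np.abs(c) - lam, 0.0), norm="ortho")

z, lam = y.copy(), 0.05                    # Douglas-Rachford iterations
for _ in range(500):
    xk = proj_consistent(z)
    z = z + prox_l1_dct(2 * xk - z, lam) - xk
x_hat = proj_consistent(z)

snr = lambda ref, est: 10 * np.log10(np.sum(ref**2) / np.sum((ref - est)**2))
print(f"SNR clipped: {snr(x, y):.1f} dB   SNR declipped: {snr(x, x_hat):.1f} dB")
```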
118

Apprentissage par noyaux multiples : application à la classification automatique des images biomédicales microscopiques / Multiple kernel learning: contribution to the automatic classification of microscopic medical images

Zribi, Abir 17 March 2016 (has links)
This thesis arises in the context of computer-aided analysis for subcellular protein localization in microscopic images. The aim is to design and develop an automatic classification system able to identify the cellular compartment in which a protein of interest exerts its biological activity. To overcome the difficulty of discerning the cellular compartments present in microscopic images, the systems described in the literature extract several descriptors and feed them to a combination of classifiers. In this thesis, we propose a different classification scheme that better meets the requirements of genericity and flexibility needed to treat various image datasets. To provide a rich characterization of the microscopic images, we propose a new representation system encompassing multiple visual descriptors drawn from the main feature-extraction approaches: local, frequency-domain, global, and region-based. We then formulate the problem of feature fusion and selection as a kernel selection problem: based on multiple kernel learning (MKL), feature selection and fusion are handled simultaneously. Extensive experiments on widely used, well-known datasets show that the proposed classification platform is simpler, more generic, and often more effective than the other approaches in the literature. To deepen our study of multiple kernel learning, we define a new MKL formalism carried out in two steps. This contribution consists in proposing three regularization terms for the problem of learning the weights of a linear combination of kernels, a problem reformulated as a large-margin classification problem in the space of pairs. The first regularization term enforces a sparse selection of kernels; the other two account for the similarity between kernels via a correlation-based measure. Experiments on various biomedical image datasets show that the proposed formalism achieves results of the same order as the reference methods while using fewer kernel functions.
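A small sketch of the kernel-combination idea (a heuristic stand-in that weights base kernels by centered kernel-target alignment rather than solving the thesis's large-margin MKL problem; the data and parameters are synthetic): an informative descriptor "view" should receive a large weight and a noise view a small one.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
y = np.where(np.arange(n) < n // 2, 1.0, -1.0)
X1 = rng.standard_normal((n, 5)) + y[:, None] * np.array([1, 0, 0, 0, 0.0])
X2 = rng.standard_normal((n, 5))                  # uninformative "view"

def rbf(X, gamma=0.2):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def alignment(K, y):
    """Centered kernel-target alignment between K and the label kernel yy^T."""
    H = np.eye(len(y)) - 1.0 / len(y)
    Kc = H @ K @ H
    Ky = np.outer(y, y)
    return np.sum(Kc * Ky) / (np.linalg.norm(Kc) * np.linalg.norm(Ky))

kernels = [rbf(X1), rbf(X2)]
w = np.maximum([alignment(K, y) for K in kernels], 0.0)
w /= w.sum()                                      # sparse-ish convex weights
K = sum(wi * Ki for wi, Ki in zip(w, kernels))

# Kernel ridge classifier on the combined kernel (train-set fit, for brevity).
alpha = np.linalg.solve(K + 1e-2 * np.eye(n), y)
print("kernel weights:", np.round(w, 3),
      " training accuracy:", ((K @ alpha > 0) == (y > 0)).mean())
```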
119

Performances et méthodes pour l'échantillonnage comprimé : robustesse à la méconnaissance du dictionnaire et optimisation du noyau d'échantillonnage / Performance and methods for sparse sampling: robustness to basis mismatch and kernel optimization

Bernhardt, Stéphanie 05 December 2016 (has links)
In this thesis, we are interested in two low-rate sampling schemes that challenge Shannon's theory: the sampling of finite-rate-of-innovation (FRI) signals and compressed sensing. It has recently been shown that, using an appropriate sampling kernel, FRI signals can be sampled perfectly even though they are not bandlimited. In the presence of noise, reconstruction is achieved by a model-based estimation procedure. We consider, first, the estimation of the amplitudes and delays of a finite stream of Dirac pulses filtered by an arbitrary kernel and, second, the estimation of a finite stream of arbitrary pulses filtered by a Sum-of-Sincs (SoS) kernel; in both scenarios we derive the Bayesian Cramér-Rao bound (BCRB) for the parameters of interest. The SoS kernel is attractive because it is fully configurable by a vector of complex-valued weights and satisfies the conditions required for reconstruction. In the first scenario, based on the Bayesian Fisher information of the amplitude and delay parameters and on convex optimization tools, we propose a new sampling kernel minimizing the BCRB on the delays; in the second, we propose a family of kernels maximizing the Bayesian Fisher information, i.e., the total amount of information about each parameter carried by the measurements, with the advantage that the family can be user-adjusted to favor either of the estimated parameters. Compressed sensing, for its part, allows a signal to be sampled below the Shannon rate if the measurement vector can be approximated as a linear combination of a few vectors extracted from a redundant dictionary. Unfortunately, in realistic conditions the dictionary (or basis) is often imperfectly known, i.e., corrupted by a basis mismatch (BM) error. Dictionary-based estimation, built on the same principles, estimates continuous-valued parameters by matching them to a discrete grid partitioning the parameter space; since the true parameters generally do not lie on the grid, an estimation error remains even at high signal-to-noise ratio (SNR). This is the off-grid (OG) problem. We study the consequences of the BM and OG error models in terms of Bayesian performance and show that a bias is introduced even with perfect support estimation and at high SNR, so that the Bayesian mean square error (BMSE) of popular sparse-based estimators saturates, considerably limiting their practical viability. The BCRB is derived for the compressed sensing model with unstructured BM and OG errors: although the two problems share a very close formalism, they are not equivalent in terms of performance. We also give the expected Cramér-Rao bound (ECRB) for a small grid error and study analytical expressions of the BMSE on the estimation of the grid error at high SNR; the latter are confirmed in practice in the context of line-spectrum (frequency) estimation for several popular sparse-reconstruction algorithms. Finally, we propose two new estimation schemes, the Bias-Correction Estimator (BiCE) and the Off-Grid Error Correction (OGEC) estimator, which correct the model error induced by the BM and OG errors respectively, and we derive their theoretical biases and variances to characterize their statistical efficiency. Both are essentially based on an oblique projection of the measurement vector and act as post-processing layers that reduce the estimation bias of any preliminary sparse-based estimator. In the challenging context of sampling non-bandlimited impulsive signals, we show that these two estimators considerably mitigate the effect of model error on estimation performance. Both schemes are (i) generic, since they can be associated with any sparse-based estimator in the literature, (ii) fast, since their computational cost remains low compared to that of the sparse estimators, and (iii) endowed with good statistical properties.
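A self-contained illustration of the off-grid saturation the thesis analyzes (a toy frequency-estimation setup, not the BiCE/OGEC estimators themselves): the best on-grid atom of a DFT-like dictionary is selected by matched filtering, and the estimation error floors at the grid offset no matter how high the SNR.

```python
import numpy as np

rng = np.random.default_rng(7)
n, grid_size = 64, 128
t = np.arange(n)
f_true = 0.205                                   # deliberately off the grid
grid = np.linspace(0, 0.5, grid_size, endpoint=False)
A = np.exp(2j * np.pi * np.outer(t, grid)) / np.sqrt(n)   # on-grid dictionary

def on_grid_estimate(snr_db):
    x = np.exp(2j * np.pi * f_true * t) / np.sqrt(n)
    sigma = 10 ** (-snr_db / 20)
    noise = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    k = np.argmax(np.abs(A.conj().T @ (x + noise)))       # matched-filter pick
    return grid[k]

for snr in [0, 20, 40, 60]:
    err = np.mean([abs(on_grid_estimate(snr) - f_true) for _ in range(200)])
    print(f"SNR {snr:2d} dB: mean |f_hat - f_true| = {err:.5f}")
# The error saturates near the distance from f_true to the nearest grid point
# (~0.0019 here) however high the SNR -- the bias a post-correction step removes.
```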
120

Essays on Inference in Linear Mixed Models

Kramlinger, Peter 28 April 2020 (has links)
No description available.
