1

Contributions to Large Covariance and Inverse Covariance Matrices Estimation

Kang, Xiaoning 25 August 2016 (has links)
Estimation of the covariance matrix and its inverse is of great importance in multivariate statistics, with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and the large number of parameters, especially in high-dimensional cases. In this thesis, I develop several approaches for estimating large covariance and inverse covariance matrices with different applications. In Chapter 2, I consider the estimation of time-varying covariance matrices in the analysis of multivariate financial data. An order-invariant Cholesky-log-GARCH model is developed for estimating the time-varying covariance matrices based on the modified Cholesky decomposition. This decomposition provides a statistically interpretable parametrization of the covariance matrix. The key idea of the proposed model is to consider an ensemble estimate of the covariance matrix based on multiple permutations of the variables. Chapter 3 investigates the sparse estimation of the inverse covariance matrix for high-dimensional data. This problem has attracted wide attention, since zero entries in the inverse covariance matrix imply conditional independence among variables. I propose an order-invariant sparse estimator based on the modified Cholesky decomposition. The proposed estimator is obtained by assembling a set of estimates from multiple permutations of the variables. Hard thresholding is imposed on the ensemble Cholesky factor to encourage sparsity in the estimated inverse covariance matrix. The proposed method is able to capture the correct sparse structure of the inverse covariance matrix. Chapter 4 focuses on the sparse estimation of a large covariance matrix. The traditional estimation approach is known to perform poorly in high dimensions.
I propose a positive-definite estimator for the covariance matrix using the modified Cholesky decomposition. Such a decomposition provides the flexibility to obtain a set of covariance matrix estimates. The proposed method considers an ensemble estimator as the "center" of these available estimates with respect to the Frobenius norm. The proposed estimator is not only guaranteed to be positive definite, but is also able to capture the underlying sparse structure of the true matrix. / Ph. D.
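The ensemble idea running through Chapters 2–4 can be sketched in a few lines: for each random permutation of the variables, the modified Cholesky decomposition yields a precision-matrix estimate from sequential regressions, and the estimates are mapped back to the original ordering and averaged before hard thresholding. The sketch below is an illustrative reconstruction under simplifying assumptions (plain least-squares regressions, population variances, a fixed threshold), not the thesis's exact estimator.

```python
import numpy as np

def mcd_precision(X):
    """Inverse-covariance estimate via the modified Cholesky decomposition:
    regress each variable on its predecessors, giving Omega = T' D^{-1} T."""
    n, p = X.shape
    T = np.eye(p)
    d = np.empty(p)
    d[0] = X[:, 0].var()
    for j in range(1, p):
        Z = X[:, :j]
        # least-squares regression of X_j on X_1, ..., X_{j-1}
        phi, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
        T[j, :j] = -phi
        d[j] = (X[:, j] - Z @ phi).var()  # innovation variance
    return T.T @ np.diag(1.0 / d) @ T

def ensemble_mcd_precision(X, n_perm=30, thresh=0.0, seed=0):
    """Average MCD estimates over random variable orderings, then hard-threshold."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    acc = np.zeros((p, p))
    for _ in range(n_perm):
        perm = rng.permutation(p)
        inv = np.argsort(perm)
        est = mcd_precision(X[:, perm])
        acc += est[np.ix_(inv, inv)]  # map back to the original ordering
    omega = acc / n_perm
    omega[np.abs(omega) < thresh] = 0.0  # hard thresholding for sparsity
    return omega
```

Because each permuted estimate is positive definite and the average of positive-definite matrices is positive definite, the (unthresholded) ensemble inherits the constraint automatically, which is the point of averaging before thresholding.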
2

Estimation and optimal input design in sparse models

Parsa, Javad January 2023 (has links)
Sparse parameter estimation is an important aspect of system identification, as it allows for reducing the order of a model, and also some models in system identification inherently exhibit sparsity in their parameters. The accuracy of the estimated sparse model depends directly on the performance of the sparse estimation methods. It is well known that the accuracy of a sparse estimation method relies on the correlations between the regressors of the model being estimated. Mutual coherence represents the maximum of these correlations. When the parameter vector is known to be sparse, accurate estimation requires a low mutual coherence. However, in system identification, a major challenge arises from the construction of the regressor based on time series data, which often leads to a high mutual coherence. This conflict hinders accurate sparse estimation. To address this issue, the first part of this thesis introduces novel methods that reduce mutual coherence through linear coordinate transformations. These methods can be integrated with any sparse estimation techniques. Our numerical studies demonstrate significant improvements in performance compared to state-of-the-art sparse estimation algorithms. In the second part of the thesis, we shift our focus to optimal input design in system identification, which aims to achieve maximum accuracy in a model based on specific criteria. The original optimal input design techniques lack coherence constraints between the input sequences, often resulting in high mutual coherence and, consequently, increased sparse estimation errors for sparse models. Therefore, the second part of the thesis concentrates on designing optimal input for sparse models. We formulate the proposed methods and propose numerical algorithms using alternating minimization. 
Additionally, we compare the performance of our proposed methods with state-of-the-art input design algorithms, and we provide theoretical analysis of the proposed methods in both parts of the thesis.
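The central quantity above, mutual coherence, is the largest absolute correlation between distinct (normalized) columns of the regressor matrix. A minimal sketch of the definition, together with a toy FIR-style regressor built from lagged time-series data, illustrates why such regressors tend to have high coherence; the `toeplitz_regressor` helper and the ramp input are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns of A."""
    G = A / np.linalg.norm(A, axis=0)  # column-normalize
    C = np.abs(G.T @ G)                # correlations between columns
    np.fill_diagonal(C, 0.0)           # ignore a column's correlation with itself
    return C.max()

def toeplitz_regressor(u, order):
    """Lagged-window regressor for an FIR model y(t) = sum_k theta_k u(t-k):
    row t is [u(t+order-1), ..., u(t)], so columns are shifted copies of u."""
    N = len(u)
    return np.array([u[t:t + order][::-1] for t in range(N - order)])
```

Shifted segments of a slowly varying input are nearly identical, so the resulting columns are almost collinear and the coherence approaches 1, which is exactly the conflict with sparse estimation that the coordinate transformations of the first part are designed to relieve.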
3

Sparse estimation of multipath biases for GNSS

Lesouple, Julien 15 March 2019 (has links)
The evolution of electronic technologies (miniaturization, decreasing costs) has made GNSS (global navigation satellite systems) increasingly accessible and thus used in everyday life, for instance through a smartphone or through receivers available commercially at reasonable prices (low-cost receivers). These receivers provide the user with several pieces of information, such as position and velocity, as well as measurements of the propagation times between the receiver and the visible satellites, among others. They have therefore become very popular with users who want to evaluate positioning techniques without developing all the necessary hardware. GNSS signals are affected by many error sources between the moment they are emitted and the moment they are processed by the receiver to compute the measurements. It is therefore necessary to compensate for each of these errors in order to provide the user with the most accurate position possible. One error source that has received a lot of interest is the reflection of the various signals on obstacles in the scene surrounding the user, referred to as multipath. The aim of this thesis is to propose algorithms that limit the effect of multipath on GNSS measurements. The first idea developed in this thesis is to assume that these multipath signals give rise to sparse additive biases. This sparsity assumption allows the biases to be estimated using efficient methods such as the LASSO. Several variants have been developed around this assumption, constraining the number of satellites unaffected by multipath to be nonzero.
The second idea explored in this thesis is a technique for estimating GNSS measurement errors from a reference solution, which assumes that the multipath-induced errors can be modelled by Gaussian mixtures or hidden Markov models. Two positioning methods adapted to these models are studied for GNSS navigation.
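The sparse-bias estimation step can be sketched with a basic proximal-gradient (ISTA) solver for the LASSO objective 0.5·||y − Ab||² + λ·||b||₁. This is a generic reconstruction of the idea, not the thesis's algorithm, and the measurement setup in the test is a hypothetical placeholder rather than a GNSS observation model.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, y, lam, n_iter=500):
    """Minimize 0.5 * ||y - A b||^2 + lam * ||b||_1 by iterative
    shrinkage-thresholding (ISTA): a gradient step on the quadratic term
    followed by soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    b = np.zeros(A.shape[1])
    for _ in range(n_iter):
        b = soft_threshold(b + A.T @ (y - A @ b) / L, lam / L)
    return b
```

With the identity as measurement matrix the solver reduces to entrywise soft thresholding of the residuals, which shows directly how small (multipath-free) residuals are zeroed out while large biases survive shrinkage.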
4

Statistical methods for imaging data, imaging genetics and sparse estimation in linear mixed models

Opoku, Eugene A. 21 October 2021 (has links)
This thesis presents research focused on developing statistical methods, with emphasis on techniques for the analysis of data in imaging studies and on sparse estimation for applications to high-dimensional data. The first contribution addresses the pixel/voxel-labeling problem for spatial hidden Markov models in image analysis. We formulate a Gaussian spatial mixture model with a Potts model used as a prior for mixture allocations of the latent states in the model. Jointly estimating the model parameters, the discrete state variables and the number of states (number of mixture components) is recognized as a difficult combinatorial optimization problem. To overcome drawbacks associated with local algorithms, we implement and compare iterated conditional modes (ICM), simulated annealing (SA) and hybrid ICM with ant colony system (ACS-ICM) optimization for pixel labeling, parameter estimation and mixture component estimation. In the second contribution, we develop the ACS-ICM algorithm for spatiotemporal modeling of combined MEG/EEG data for computing estimates of the neural source activity. We consider a Bayesian finite spatial mixture model with a Potts model as a spatial prior and implement ACS-ICM for simultaneous point estimation and model selection for the number of mixture components. Our approach is evaluated using simulation studies and an application examining the visual response to scrambled faces. In addition, we develop a nonparametric bootstrap for interval estimation to account for uncertainty in the point estimates. In the third contribution, we present sparse estimation strategies in the linear mixed model (LMM) for longitudinal data. We address the problem of estimating the fixed effects parameters of the LMM when the model is sparse and the predictors are correlated. We propose and derive the asymptotic properties of the pretest and shrinkage estimation strategies.
Simulation studies are performed to compare the numerical performance of the Lasso and adaptive Lasso estimators with the pretest and shrinkage ridge estimators. The methodology is evaluated through an application to high-dimensional data examining effective brain connectivity and genetics. In the fourth and final contribution, we conduct an imaging genetics study to explore how effective brain connectivity in the default mode network (DMN) may be related to genetics within the context of Alzheimer’s disease. We develop an analysis of longitudinal resting-state functional magnetic resonance imaging (rs-fMRI) and genetic data obtained from a sample of 111 subjects with a total of 319 rs-fMRI scans from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. A dynamic causal model (DCM) is fit to the rs-fMRI scans to estimate effective brain connectivity within the DMN, which is related to a set of single nucleotide polymorphisms (SNPs) contained in an empirical disease-constrained set. We relate longitudinal effective brain connectivity estimated using spectral DCM to SNPs using both linear mixed effects (LME) models and function-on-scalar regression (FSR). / Graduate
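The ICM step of the first contribution can be illustrated on a toy image: each sweep reassigns every pixel to the label minimizing a Gaussian data term plus a Potts smoothness term over its 4-neighbors, converging to a local mode of the posterior. This is a minimal sketch under simplifying assumptions (known means, shared noise level, nearest-mean initialization), not the thesis's ACS-ICM hybrid.

```python
import numpy as np

def icm_labels(img, means, beta=1.0, sigma=1.0, n_sweep=5):
    """Iterated conditional modes for a Gaussian mixture with a Potts prior:
    each sweep sets every pixel to the label minimizing its local energy."""
    H, W = img.shape
    K = len(means)
    # initialize each pixel at the nearest mixture mean
    labels = np.abs(img[..., None] - np.asarray(means)).argmin(-1)
    for _ in range(n_sweep):
        for i in range(H):
            for j in range(W):
                best, best_e = labels[i, j], np.inf
                for k in range(K):
                    # Gaussian data term for label k
                    e = (img[i, j] - means[k]) ** 2 / (2 * sigma ** 2)
                    # Potts prior: reward agreement with 4-neighbors
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            e -= beta * (labels[ni, nj] == k)
                    if e < best_e:
                        best, best_e = k, e
                labels[i, j] = best
    return labels
```

ICM is greedy, which is exactly the "local algorithm" drawback the thesis addresses: SA and ACS-ICM trade extra computation for a chance to escape such local modes.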
5

Scalable Sensor Network Field Reconstruction with Robust Basis Pursuit

Schmidt, Aurora C. 01 May 2013 (has links)
We study a scalable approach to information fusion for large sensor networks. The algorithm, field inversion by consensus and compressed sensing (FICCS), is a distributed method for detection, localization, and estimation of a propagating field generated by an unknown number of point sources. The approach combines results in the areas of distributed average consensus and compressed sensing to form low dimensional linear projections of all sensor readings throughout the network, allowing each node to reconstruct a global estimate of the field. Compressed sensing is applied to continuous source localization by quantizing the potential locations of sources, transforming the model of sensor observations into a finite discretized linear model. We study the effects of structured modeling errors induced by spatial quantization and the robustness of ℓ1 penalty methods for field inversion. We develop a perturbations method to analyze the effects of spatial quantization error in compressed sensing and provide a model-robust version of noise-aware basis pursuit with an upper bound on the sparse reconstruction error. Numerical simulations illustrate system design considerations by measuring the performance of decentralized field reconstruction and the detection of point phenomena, comparing trade-offs of quantization parameters, and studying various sparse estimators. The method is extended to time-varying systems using a recursive sparse estimator that incorporates priors into ℓ1 penalized least squares. This thesis presents the advantages of inter-sensor measurement mixing as a means of efficiently spreading information throughout a network, while identifying sparse estimation as an enabling technology for scalable distributed field reconstruction systems.
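The discretization idea above — quantize the candidate source locations so the observation model becomes a finite linear system, then solve a sparse inverse problem — can be sketched as follows. Orthogonal matching pursuit stands in here for the ℓ1 methods studied in the thesis, and the inverse-distance propagation model is a hypothetical placeholder, not the thesis's field model.

```python
import numpy as np

def field_dictionary(sensors, grid, eps=0.1):
    """Discretized observation model: column g is the field that a unit point
    source at grid location g induces at every sensor (hypothetical
    inverse-distance decay, regularized by eps near zero distance)."""
    d = np.linalg.norm(sensors[:, None, :] - grid[None, :, :], axis=-1)
    return 1.0 / (d + eps)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily build a k-sparse x with y ≈ A x
    by repeatedly adding the column most correlated with the residual and
    re-fitting by least squares on the selected support."""
    resid, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ coef
    x[support] = coef
    return x
```

In the FICCS setting the rows of `A` would additionally be mixed by the consensus projections before inversion; the spatial-quantization error analyzed in the thesis appears here as the mismatch between a true source location and its nearest grid point.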
6

Structural health monitoring using statistical learning methods: application to tubular structures

Mountassir, Mahjoub El 30 April 2019 (has links)
To ensure better working conditions of civil and engineering structures, inspections must be made on a regular basis. However, these inspections can be labor-intensive and costly. In this context, structural health monitoring (SHM) systems using permanently attached transducers have been proposed to ensure continuous damage diagnosis of these structures. In SHM, damage detection is generally based on a comparison between healthy-state signals and current signals. Nevertheless, the environmental and operational conditions (EOCs) in which the structure operates severely affect the collected signals; if these effects are not taken into account, they can result in false indications of damage (false alarms). In this thesis, classical machine learning methods used for damage detection are applied to the case of pipelines, and the effects of several measurement parameters on the robustness of these methods are investigated. Two approaches are then proposed for damage diagnosis, depending on the database of reference signals. If this database contains a sufficiently large variation of the EOCs, a sparse estimate of the current signal is computed and the estimation error is used as a damage indicator. Otherwise, if the database covers only a limited range of EOCs, moving-window principal component analysis (PCA) can be applied to update the model of the healthy state, provided that the EOCs vary slowly and continuously. In both approaches, damage localization is ensured using a sliding window over the signal from the damaged state.
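The moving-window PCA approach can be sketched as follows: the healthy state is modeled by the top principal components of a window of recent signals, the reconstruction error of a new signal serves as the damage indicator, and signals judged healthy refresh the window so that slow EOC drift is tracked. This is an illustrative reconstruction under simplified assumptions (fixed threshold, full refit at every step), not the thesis's implementation.

```python
import numpy as np

class MovingWindowPCA:
    """Moving-window PCA damage detector: flag a signal whose reconstruction
    error over the healthy subspace exceeds a threshold; absorb healthy
    signals into the window so slow environmental drift is tracked."""

    def __init__(self, n_components, window, threshold):
        self.q, self.window, self.threshold = n_components, window, threshold
        self.buffer = []

    def _fit(self):
        X = np.asarray(self.buffer)
        self.mean = X.mean(0)
        # principal directions = top right-singular vectors of centered data
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.components = Vt[: self.q]

    def score(self, s):
        """Residual norm after projecting s onto the healthy subspace."""
        c = s - self.mean
        return np.linalg.norm(c - self.components.T @ (self.components @ c))

    def update(self, s):
        """Return True if s looks damaged; healthy signals refresh the window."""
        if len(self.buffer) < self.window:  # still collecting the baseline
            self.buffer.append(s)
            self._fit()
            return False
        if self.score(s) > self.threshold:
            return True
        self.buffer = self.buffer[1:] + [s]  # slide the window, refit
        self._fit()
        return False
```

The key design choice mirrors the abstract: only signals classified as healthy update the model, so a genuine defect cannot be absorbed into the baseline, while slow EOC variation is.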