131

Decomposition methods of NMR signals of complex mixtures: models and applications

Toumi, Ichrak 28 October 2013 (has links)
The objective of this work was to test blind source separation (BSS) methods for separating the complex NMR spectra of mixtures into the simpler spectra of their pure compounds. In a first part, known methods, namely JADE and NNSC, were applied in the DOSY framework, and an application to CPMG data was also demonstrated. In a second part, we focused on developing an efficient algorithm, "beta-SNMF", which was shown to outperform NNSC for beta less than or equal to 2. Since, in the literature, the choice of beta has been adapted to the statistical assumptions on the additive noise, a statistical study of DOSY NMR noise was carried out to obtain a more complete picture of the NMR data under study.
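As a rough illustration of the beta-divergence family that an algorithm like beta-SNMF builds on, the sketch below implements plain beta-NMF with the standard multiplicative updates (beta=2 is Euclidean, beta=1 Kullback-Leibler, beta=0 Itakura-Saito). It is not the thesis's beta-SNMF; matrix names and the toy mixture are illustrative only.

```python
import numpy as np

def beta_nmf(V, rank, beta=1.0, n_iter=200, eps=1e-12, seed=0):
    """Factor V ~ W @ H (all nonnegative) by multiplicative updates
    minimizing the beta-divergence between V and W @ H."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1))
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T)
    return W, H

# Toy mixture: 8 observed spectra as nonnegative combinations of 3
# "pure compound" spectra; H recovers the pure spectra up to scaling.
rng = np.random.default_rng(1)
S = rng.random((3, 300))   # pure spectra
A = rng.random((8, 3))     # mixing coefficients
V = A @ S                  # observed mixture spectra
W, H = beta_nmf(V, rank=3, beta=1.0)
```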
132

Hybrid dynamical system identification: geometry, sparsity and nonlinearities

Le, Van Luong 04 October 2013 (has links)
In automatic control, obtaining a model of the system is the cornerstone of procedures such as controller design, fault detection, and prediction. This thesis deals with the identification of a class of complex systems: hybrid dynamical systems, which involve the interaction of continuous and discrete behaviors. The goal is to build a model from experimental measurements of the system inputs and outputs. A new approach for the identification of linear hybrid systems, based on the geometric properties of hybrid systems in the parameter space, is proposed. A new algorithm is then proposed to compute the sparsest solutions of underdetermined systems of linear equations; this improves an identification approach based on sparsifying the error vector. In addition, new approaches based on kernel models are proposed for the identification of nonlinear hybrid systems and piecewise smooth systems.
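The abstract does not specify the thesis's sparse-recovery algorithm, so the sketch below shows the standard baseline it would be compared against: the l1 relaxation (basis pursuit) of the sparsest-solution problem, solved as a linear program with scipy. All names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 subject to A x = b, via the LP reformulation
    x = u - v with u, v >= 0 (an l1 surrogate for the sparsest solution)."""
    m, n = A.shape
    c = np.ones(2 * n)                # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])         # enforces A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50))         # underdetermined: 20 equations, 50 unknowns
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]
x_hat = basis_pursuit(A, A @ x_true)  # typically recovers the 3-sparse solution
```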
133

Learning with sparsity and uncertainty by Difference of Convex functions optimization

Vo, Xuan Thanh 15 October 2015 (has links)
In this thesis, we focus on developing optimization approaches for solving some classes of learning problems involving sparsity and/or data uncertainty. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as powerful optimization tools. The thesis is composed of two parts: the first part concerns sparsity, while the second deals with data uncertainty. In the first part, a unified DC approximation approach to optimization problems involving the zero-norm in the objective is studied thoroughly, on both theoretical and computational grounds. We consider a common DC approximation of the zero-norm that includes all standard sparsity-inducing penalty functions, and develop general DCA schemes that cover all standard algorithms in the field. Next, the thesis turns to the nonnegative matrix factorization (NMF) problem: we investigate its structure and provide appropriate DCA-based algorithms, and sparse NMF formulations are proposed to enhance performance. Continuing this topic, we study the dictionary learning problem, where sparse representation plays a crucial role. In the second part, we exploit robust optimization techniques to deal with data uncertainty in two important machine learning problems: feature selection in linear Support Vector Machines (SVM) and clustering. In this setting, each data point is uncertain but varies within a bounded uncertainty set; different uncertainty models (box, spherical, ellipsoidal) are studied, and DCA-based algorithms are developed to solve the resulting robust problems.
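To make the DCA idea concrete, here is a minimal sketch for one common DC approximation of the zero-norm, the capped-l1 penalty sum(min(|x_i|/theta, 1)), written as g - h with g = 0.5||Ax-b||^2 + (lam/theta)||x||_1 and h = (lam/theta) sum(max(|x_i| - theta, 0)). This is only one instance of the family the thesis covers, with assumed parameter names, and the convex subproblem is solved by plain ISTA.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dca_capped_l1(A, b, lam=0.5, theta=0.1, outer=20, inner=200):
    """DCA for min 0.5||Ax - b||^2 + lam * sum(min(|x_i|/theta, 1))."""
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the LS gradient
    for _ in range(outer):
        # Linearize the concave part: y is a subgradient of h at x.
        y = (lam / theta) * np.sign(x) * (np.abs(x) > theta)
        # Convex subproblem min g(x) - <y, x>, solved by ISTA.
        for _ in range(inner):
            grad = A.T @ (A @ x - b) - y
            x = soft(x - grad / L, lam / (theta * L))
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x0 = np.zeros(100); x0[[5, 50]] = [1.0, -1.0]
x_hat = dca_capped_l1(A, A @ x0)
```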
134

Sparsity techniques in static electric power systems

Simeão, Sandra Fiorelli de Almeida Penteado 27 September 2001 (has links)
In this work, a broad survey of sparsity techniques related to static electric power systems was carried out. From a computational point of view, such techniques aim to increase the efficiency of the electric network solution, reducing memory, storage, and processing-time requirements beyond the solution itself. To that end, an extensive bibliographic review was compiled, providing historical context and a broad view of the theoretical development. Comparative tests on 14-, 30-, 57-, and 118-bus systems, covering implementations of three of the most widely used techniques, pointed to bi-factorization as having the best average performance; for small systems, sparse symmetric Gaussian elimination gave the best results. This work provides conceptual and methodological groundwork for practitioners and researchers in the area.
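Bi-factorization itself is not available in scipy, so the sketch below illustrates the same computational pattern these techniques exploit: factor a sparse admittance-style matrix once with a fill-reducing ordering, then reuse the factors across many right-hand sides. The matrix structure here is purely illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Sparse, symmetric, diagonally dominated matrix standing in for a nodal
# admittance / gain matrix of a 118-bus network (structure is synthetic).
n = 118
rng = np.random.default_rng(0)
rows = rng.integers(0, n, size=4 * n)
cols = rng.integers(0, n, size=4 * n)
vals = rng.random(4 * n)
Y = sp.coo_matrix((vals, (rows, cols)), shape=(n, n))
Y = (Y + Y.T).tocsc() + 10 * sp.eye(n, format="csc")

# Factor once with a fill-reducing column ordering; repeated solves then
# cost only a forward/backward substitution each, the core saving that
# sparsity techniques target in static network analysis.
lu = splu(Y, permc_spec="COLAMD")
for _ in range(5):
    b = rng.random(n)
    x = lu.solve(b)
```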
135

A concentration inequality based statistical methodology for inference on covariance matrices and operators

Kashlak, Adam B. January 2017 (has links)
In the modern era of high and infinite dimensional data, classical statistical methodology is often rendered inefficient and ineffective when confronted with such big data problems as arise in genomics, medical imaging, speech analysis, and many other areas of research. Many problems manifest when the practitioner is required to take into account the covariance structure of the data during his or her analysis, which takes on the form of either a high dimensional low rank matrix or a finite dimensional representation of an infinite dimensional operator acting on some underlying function space. Thus, novel methodology is required to estimate, analyze, and make inferences concerning such covariances. In this manuscript, we propose using tools from the concentration of measure literature, a theory that arose in the latter half of the 20th century from connections between geometry, probability, and functional analysis, to construct rigorous descriptive and inferential statistical methodology for covariance matrices and operators. A variety of concentration inequalities are considered, which allow for the construction of nonasymptotic dimension-free confidence sets for the unknown matrices and operators. Given such confidence sets, a wide range of estimation and inferential procedures can be, and are, subsequently developed.

For high dimensional data, we propose a method to search a concentration-inequality-based confidence set using a binary search algorithm for the estimation of large sparse covariance matrices. Both sub-Gaussian and sub-exponential concentration inequalities are considered and applied to both simulated data and to a set of gene expression data from a study of small round blue-cell tumours. For infinite dimensional data, also referred to as functional data, we use a celebrated result, Talagrand's concentration inequality, in the Banach space setting to construct confidence sets for covariance operators. From these confidence sets, three different inferential techniques emerge: the first is a k-sample test for equality of covariance operators; the second is a functional data classifier, which makes its decisions based on the covariance structure of the data; the third is a functional data clustering algorithm, which incorporates the concentration-inequality-based confidence sets into the framework of an expectation-maximization algorithm. These techniques are applied to simulated data and to speech samples from a set of spoken phoneme data.

Lastly, we take a closer look at a key tool used in the construction of concentration-based confidence sets: Rademacher symmetrization. The symmetrization inequality, which arises in the probability-in-Banach-spaces literature, is shown to be connected with optimal transport theory and specifically the Wasserstein distance. This insight is used to improve the symmetrization inequality, resulting in tighter concentration bounds to be used in the construction of nonasymptotic confidence sets. A variety of other applications are considered, including tests for data symmetry and tightening inequalities in Banach spaces. An R package for inference on covariance operators is briefly discussed in an appendix chapter.
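The following is a Monte Carlo caricature of the Rademacher symmetrization idea mentioned above, not Kashlak's actual nonasymptotic bound: the deviation of the sample covariance is mimicked by the norm of the randomly signed sum (1/n) sum_i eps_i x_i x_i^T, and a quantile of that surrogate (with the heuristic factor 2 from the symmetrization inequality, which formally bounds expectations) is used as a confidence-set radius. Function and variable names are assumptions.

```python
import numpy as np

def symmetrized_radius(X, n_mc=500, level=0.95, seed=0):
    """Monte Carlo surrogate for a confidence radius around the sample
    covariance: quantile of ||(1/n) sum_i eps_i x_i x_i^T||_F over random
    sign vectors eps, scaled by the symmetrization factor 2."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    rng = np.random.default_rng(seed)
    norms = np.empty(n_mc)
    for k in range(n_mc):
        eps = rng.choice([-1.0, 1.0], size=n)
        S = (Xc * eps[:, None]).T @ Xc / n   # (1/n) sum_i eps_i x_i x_i^T
        norms[k] = np.linalg.norm(S, ord="fro")
    return 2 * np.quantile(norms, level)

X = np.random.default_rng(1).normal(size=(200, 10))
r = symmetrized_radius(X)   # radius of {S : ||S - Cov_hat||_F <= r}
```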
136

Achieving shrinkage in a time-varying parameter model framework

Bitto, Angela, Frühwirth-Schnatter, Sylvia January 2019 (has links) (PDF)
Shrinkage for time-varying parameter (TVP) models is investigated within a Bayesian framework, with the aim of automatically reducing time-varying parameters to static ones if the model is overfitting. This is achieved by placing the double gamma shrinkage prior on the process variances. An efficient Markov chain Monte Carlo scheme is developed, exploiting boosting based on the ancillarity-sufficiency interweaving strategy. The method is applicable to TVP models for univariate as well as multivariate time series. Applications include a TVP generalized Phillips curve for EU area inflation modeling and a multivariate TVP Cholesky stochastic volatility model for jointly modeling the returns from the DAX-30 index.
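The paper's MCMC machinery (double gamma prior, interweaving) is beyond a snippet, but the underlying TVP regression is just a state-space model. The sketch below, under assumed variable names, runs a scalar Kalman filter for y_t = x_t * beta_t + eps_t with a random-walk coefficient; driving the process variance toward zero flattens the filtered path to a static coefficient, which is exactly the reduction the shrinkage prior automates.

```python
import numpy as np

def kalman_tvp(y, x, sigma2_eps=1.0, sigma2_eta=0.01, beta0=0.0, P0=10.0):
    """Kalman filter for y_t = x_t * beta_t + eps_t, beta_t = beta_{t-1} + eta_t."""
    T = len(y)
    beta, P = beta0, P0
    betas = np.empty(T)
    for t in range(T):
        P = P + sigma2_eta                    # predict state variance
        S = x[t] * P * x[t] + sigma2_eps      # innovation variance
        K = P * x[t] / S                      # Kalman gain
        beta = beta + K * (y[t] - x[t] * beta)
        P = (1 - K * x[t]) * P
        betas[t] = beta
    return betas

rng = np.random.default_rng(0)
T = 300
x = rng.normal(size=T)
beta_true = np.cumsum(rng.normal(scale=0.05, size=T))  # slowly drifting coefficient
y = x * beta_true + rng.normal(size=T)
betas = kalman_tvp(y, x, sigma2_eta=0.05 ** 2)         # sigma2_eta ~ 0 => static fit
```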
137

X-ray CT image reconstruction from few projections

Wang, Han 24 October 2011 (has links)
To improve the safety (lower dose) and the productivity (faster acquisition) of an X-ray CT system, we want to reconstruct a high quality image from a small number of projections. Classical reconstruction algorithms generally fail in this setting: the reconstruction procedure is unstable and suffers from artifacts. The "Compressed Sensing" (CS) approach assumes that the unknown image is in some sense "sparse" or "compressible", and reconstructs it through a nonlinear optimization problem (TV/L1 minimization) that promotes sparsity. Using the pixel/voxel as the representation basis, applying the CS framework in CT usually requires a "sparsifying" transform, combined with the "X-ray projector" applied to the pixel image. In this thesis, we have adapted a "CT-friendly" radial basis of the Gaussian family, called "blob", to the CS-CT framework. It has better space-frequency localization properties than the pixel, and many operations, such as the X-ray transform, can be evaluated analytically and are highly parallelizable (e.g., on a GPU platform). Compared to the classical Kaiser-Bessel blob, the new basis has a multiscale structure: an image is the sum of dilated and translated radial Mexican hat functions. Typical medical objects are compressible in this basis, so the separate sparse representation system used in ordinary CS algorithms is no longer needed. Simulations (2D) show that the existing TV/L1 algorithms are more efficient and the reconstructions have better visual quality than with the equivalent approach based on the pixel/wavelet basis. The new approach has also been validated on experimental data (2D), where we observed that the number of projections can in general be reduced by up to 50% without compromising image quality.
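A minimal sketch of the representation itself: sampling a radial Mexican hat (Laplacian-of-Gaussian profile) on a pixel grid and writing an image as a sum of translated and dilated blobs. The normalization and the specific profile are assumptions for illustration; the thesis's blob additionally admits an analytic X-ray transform, which is not shown here.

```python
import numpy as np

def mexican_hat_2d(shape, center, sigma):
    """Radial Mexican hat sampled on a pixel grid (illustrative scaling)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    r2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    u = r2 / (2 * sigma ** 2)
    return (1 - u) * np.exp(-u)

# An "image" as a sum of translated and dilated radial blobs: the
# multiscale structure the CS-CT reconstruction works in.
shape = (64, 64)
img = (1.0 * mexican_hat_2d(shape, (20, 20), sigma=2.0)
       + 0.5 * mexican_hat_2d(shape, (40, 44), sigma=4.0)
       - 0.3 * mexican_hat_2d(shape, (32, 16), sigma=8.0))
```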
138

Radiofrequency receiver based on compressive sampling for feature extraction in cognitive radio applications

Marnat, Marguerite 29 November 2018 (has links)
This thesis deals with the design of radiofrequency receivers based on compressive sampling for parametric estimation in cognitive radio. Compressive sampling is a paradigm shift in analog-to-digital conversion that bypasses the Nyquist sampling frequency. In this work, estimation is carried out directly on the compressed samples, given the prohibitive cost of reconstructing the input signal. First, the receiver architecture is considered, in particular the choice of the mixing codes of the Modulated Wideband Converter (MWC). A high-level analysis of the properties of the sensing matrix (coherence, to reduce the number of measurements, and isometry, for noise robustness) is carried out and validated on a simulation platform. Finally, parametric estimation from the compressed samples is addressed through the Cramér-Rao lower bound on the variance of unbiased estimators. A closed-form expression of the Fisher matrix is established under certain assumptions, making it possible to separate the effects of compression and of diversity creation. The influence of the compressive acquisition process on the estimation bounds, in particular coupling between parameters and spectral leakage, is illustrated with an example.
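Two of the quantities discussed above are simple to compute for a toy sensing matrix. The sketch below builds a flat matrix of pseudo-random +/-1 mixing codes (a simplification of the MWC's structured sensing operator, not its actual model), evaluates its mutual coherence, and forms the Cramér-Rao bound for a linear Gaussian model restricted to a known support, where the Fisher matrix is (1/sigma^2) A_s^T A_s.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 32, 128
# Pseudo-random +/-1 mixing codes, column-normalized.
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

# Mutual coherence: largest normalized inner product between distinct columns.
G = A.T @ A
norms = np.sqrt(np.diag(G))
C = np.abs(G) / np.outer(norms, norms)
np.fill_diagonal(C, 0.0)
mu = C.max()

# CRB for y = A_s x + w, w ~ N(0, sigma^2 I), on a known support s:
# CRB = sigma^2 (A_s^T A_s)^{-1} = inverse Fisher matrix.
sigma2 = 0.1
support = [3, 40, 77]
As = A[:, support]
crb = sigma2 * np.linalg.inv(As.T @ As)   # diagonal: per-parameter variance floors
```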
139

Power system state estimation: computer program for analysis and updating of measurement set qualitative characteristics

Moreira, Eduardo Marmo 23 October 2006 (has links)
To obtain safe operation of a power system (PS), a reliable state estimation (SE) is essential, since the real-time control and operation actions of the PS are based on the database obtained through the SE process. The first requirement for a successful SE process is a reliable measurement placement plan, that is, one that guarantees system observability and the absence of both critical measurements and critical sets. However, considering that measurements can be lost during operation, reducing measurement redundancy, a reliable measurement placement plan is a necessary but not sufficient condition for successful SE. This dissertation presents a computer program that allows for a reliable SE even when measurements are lost. The proposed software performs, with fast execution times, observability analysis and restoration, identification of critical measurements and critical sets, and the updating of these measurement set qualitative characteristics after the loss of measurements. As theoretical background, two algorithms for the analysis of measurement set qualitative characteristics, both based on the triangular factorization of the Jacobian matrix, were used, together with sparsity techniques and software development techniques. To demonstrate the efficiency of the proposed software, several tests were performed using the IEEE 6-, 14-, and 30-bus systems and the 121-bus ELETROSUL system.
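The defining criterion for a critical measurement is easy to state: removing it from the measurement Jacobian makes the system unobservable (the rank drops). The thesis detects this efficiently via triangular factorization; the sketch below uses a plain dense rank test instead, purely to illustrate the criterion on a toy DC-model Jacobian. All data are invented.

```python
import numpy as np

def critical_measurements(H, tol=1e-9):
    """Indices of measurements whose removal drops the rank of the
    Jacobian H, i.e. measurements the network cannot lose."""
    full_rank = np.linalg.matrix_rank(H, tol=tol)
    crit = []
    for i in range(H.shape[0]):
        Hi = np.delete(H, i, axis=0)
        if np.linalg.matrix_rank(Hi, tol=tol) < full_rank:
            crit.append(i)
    return crit

# Toy Jacobian: rows = measurements, columns = state variables.
# Row 2 is the sum of rows 0 and 1, so only row 3 carries
# irreplaceable information and comes out as critical.
H = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0],
              [1.0,  0.0, -1.0],
              [0.0,  0.0,  1.0]])
print(critical_measurements(H))   # -> [3]
```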
140

Statistical and numerical optimization for speckle blind structured illumination microscopy

Liu, Penghuan 25 May 2018 (has links)
Conventional structured illumination microscopy (SIM) can surpass the diffraction-limited resolution of optical microscopy by illuminating the object with a set of perfectly known harmonic patterns. However, controlling the illumination patterns is a difficult task; worse, strong distortions of the light grid can be induced by the sample within the investigated volume, which may give rise to strong artifacts in SIM reconstructed images. Recently, blind-SIM strategies were proposed, where images are acquired through unknown, non-harmonic speckle illumination patterns, which are much easier to generate in practice. The super-resolution capacity of such approaches has been observed, although it was not well understood theoretically. This thesis presents two new reconstruction methods for SIM with unknown speckle patterns (blind-speckle-SIM): a joint reconstruction approach and a marginal reconstruction approach.

In the joint approach, we estimate the object and the speckle patterns together, using a basis pursuit denoising (BPDN) model with lp,q-norm regularization, where p >= 1 and 0 < q <= 1; the lp,q-norm encodes the sparsity assumption on the object. In the marginal approach, we reconstruct only the object, and the unknown speckle patterns are treated as nuisance parameters. Our contribution is twofold. First, a theoretical analysis demonstrates that, using the second-order statistics of the data, blind-speckle-SIM yields a super-resolution factor of two, provided that the support of the speckle spectral density equals the frequency support of the microscope point spread function. Then, the numerical implementation is addressed: to reduce the computational burden and memory requirements of the marginal approach, a patch-based marginal estimator is proposed, whose key idea is to neglect the correlation between pixels belonging to different patches. Simulation results and experiments with real data demonstrate the super-resolution capacity of our methods. Moreover, the proposed methods apply not only to 2D super-resolution problems with thin samples but also to 3D imaging problems with thick samples.
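As a purely conceptual illustration of the "second-order statistics" ingredient, the sketch below computes pixelwise mean and variance over a stack of speckle-illuminated frames. The variance image involves the squared object-speckle product, whose spectrum is roughly twice as wide as the PSF passband, which is the intuition behind the factor-two result above; this is not the thesis's estimator, and the random stack is a stand-in for acquired data.

```python
import numpy as np

def second_order_images(frames):
    """Pixelwise mean and variance over a stack of frames
    y_k = PSF * (rho * s_k) + noise acquired under different
    speckle patterns s_k; the variance carries the second-order
    information exploited by blind-speckle-SIM."""
    mean_img = frames.mean(axis=0)
    var_img = frames.var(axis=0)
    return mean_img, var_img

rng = np.random.default_rng(0)
stack = rng.random((100, 64, 64))   # 100 frames, stand-in for real acquisitions
mean_img, var_img = second_order_images(stack)
```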
