1

Structured low rank approaches for exponential recovery - application to MRI

Balachandrasekaran, Arvind 01 December 2018 (has links)
Recovering a linear combination of exponential signals characterized by their parameters is highly significant in many MR imaging applications such as parameter mapping and spectroscopy. The parameters carry useful clinical information and can act as biomarkers for various cardiovascular and neurological disorders. However, their accurate estimation requires a large number of high spatial resolution images, resulting in long scan times. One way to reduce scan time is to acquire undersampled measurements. The recovery of images is usually posed as an optimization problem, regularized by functions enforcing sparsity, smoothness, or low-rank structure. Recently, structured matrix priors have gained prominence in many MRI applications because of their superior performance over the aforementioned conventional priors. However, none of them is designed to exploit the smooth exponential structure of the 3D dataset. In this thesis, we exploit the exponential structure of the signal at every pixel location and the spatial smoothness of the parameters to derive a 3D annihilation relation in the Fourier domain. This relation translates into the product of a Hankel/Toeplitz structured matrix, formed from the k-t samples, and a vector of filter coefficients. We show that this matrix has a low-rank structure, which is exploited to recover the images from undersampled measurements. We demonstrate the proposed method on the problem of MR parameter mapping. We compare the algorithm with state-of-the-art methods and observe that the proposed reconstructions and parameter maps have fewer artifacts and errors. We extend the structured low-rank framework to correct field inhomogeneity artifacts in MR images. We introduce novel approaches for field map compensation for data acquired using Cartesian and non-Cartesian trajectories.
We adopt the time segmentation approach and reformulate the artifact correction problem as the recovery of a time series of images from undersampled measurements. Upon recovery, the first image of the series corresponds to the distortion-free image. With this reformulation, we can assume that the signal at every pixel follows an exponential characterized by the field map and the damping constant R2*. We exploit the smooth exponential structure of the 3D dataset to derive a low-rank structured matrix prior, similar to the parameter mapping case. We demonstrate the algorithm on a spherical MR phantom and on human data and show that the artifacts are greatly reduced compared to the uncorrected images. Finally, we develop a structured matrix recovery framework to accelerate breath-held cardiac MRI. We model the cardiac image data as a 3D piecewise constant function. We assume that the zeros of a 3D trigonometric polynomial coincide with the edges of the image data, resulting in a Fourier domain annihilation relation. This relation can be compactly expressed in terms of a structured low-rank matrix. We exploit this low-rank property to recover the cardiac images from undersampled measurements. We demonstrate the superiority of the proposed technique over conventional sparsity and smoothness based methods. Though the model assumed here is not exponential, the proposed algorithm is closely related to the one developed for parameter mapping. The direct implementation of these algorithms has a high memory demand and computational complexity due to the formation and storage of a large multi-fold Toeplitz matrix. To date, the practical utility of such algorithms on high dimensional datasets has been limited for these reasons. We address these issues by introducing novel Fourier domain approximations which result in a fast and memory efficient algorithm for the above-mentioned applications. Such approximations allow us to work with large datasets efficiently and eliminate the need to store the Toeplitz matrix. We note that the algorithm developed for exponential recovery is general enough to be applied to applications beyond MRI.
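The low-rank property at the heart of this framework is easy to check numerically: a Hankel matrix built from samples of a sum of r exponentials has rank r. A minimal sketch (the rates and sizes below are illustrative, not taken from the thesis):

```python
import numpy as np
from scipy.linalg import hankel

# Signal: sum of r damped complex exponentials sampled at n points.
n, r = 64, 3
t = np.arange(n)
rates = np.array([-0.05 + 0.3j, -0.02 + 1.1j, -0.08 - 0.7j])
x = sum(np.exp(rate * t) for rate in rates)

# Hankel matrix whose entries are H[i, j] = x[i + j].
H = hankel(x[: n // 2], x[n // 2 - 1 :])

# Its numerical rank equals the number of exponentials.
s = np.linalg.svd(H, compute_uv=False)
num_rank = int(np.sum(s > 1e-8 * s[0]))
print(num_rank)  # 3
```

Each exponential contributes one rank-1 term exp(rate·i)·exp(rate·j) to H, which is why undersampled recovery can be posed as structured low-rank matrix completion.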
2

Kernel Matrix Rank Structures with Applications

Mikhail Lepilov (12469881) 27 April 2022 (has links)
<p>Many kernel matrices from differential equations or data science applications possess low or approximately low off-diagonal rank for certain key matrix subblocks; such matrices are referred to as rank-structured. Operations on rank-structured matrices, like factorization and linear system solution, can be greatly accelerated by converting them into hierarchical matrix forms, such as the hierarchically semiseparable (HSS) matrix form. The dominant cost of this conversion process, called HSS construction, is the low-rank approximation of certain matrix blocks. Low-rank approximation is also a required step in many other contexts throughout numerical linear algebra. In this work, a proxy point low-rank approximation method is detailed for general analytic kernel matrices, in both one and several dimensions. A new accuracy analysis for this approximation is also provided, as well as numerical evidence of its accuracy. The extension of this method to kernels in several dimensions is novel, and its new accuracy analysis makes it a convenient choice over existing proxy point methods. Finally, a new HSS construction algorithm using this method for certain Cauchy and Toeplitz matrices is given, which is asymptotically faster than existing methods. Numerical evidence for the accuracy and efficacy of the new construction algorithm is also provided.</p>
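The off-diagonal low-rank phenomenon the abstract relies on can be observed directly for the Cauchy kernel: when source and target points are well separated, the corresponding kernel block has rapidly decaying singular values. A small illustration (the point clusters and tolerance are arbitrary choices, not the thesis's test cases):

```python
import numpy as np

# Two well-separated point clusters.
x = np.linspace(0.0, 1.0, 200)       # "sources"
y = np.linspace(3.0, 4.0, 200)       # "targets", separated from x
K = 1.0 / (x[:, None] - y[None, :])  # Cauchy kernel block, 200 x 200

# Epsilon-rank: number of singular values above a relative tolerance.
s = np.linalg.svd(K, compute_uv=False)
eps_rank = int(np.sum(s > 1e-10 * s[0]))
print(eps_rank)  # far below the full dimension of 200
```

The separable expansion of 1/(x - y) around the cluster centers converges geometrically for separated clusters, so the epsilon-rank grows only logarithmically in the tolerance; this is what HSS construction and proxy point compression exploit.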
3

Strategies for Sparsity-based Time-Frequency Analyses

Zhang, Shuimei, 0000-0001-8477-5417 January 2021 (has links)
Nonstationary signals are widely observed in many real-world applications, e.g., radar, sonar, radio astronomy, communications, acoustics, and vibration. Joint time-frequency (TF) domain representations provide a time-varying spectrum for their analysis, discrimination, and classification. Nonstationary signals commonly exhibit sparse occupancy in the TF domain. In this dissertation, we incorporate such sparsity to enable robust TF analysis in impaired observing environments. In practice, missing data samples frequently occur during signal reception for various reasons, e.g., propagation fading, measurement obstruction, removal of impulsive noise or narrowband interference, and intentional undersampling. Missing samples in the time domain become missing entries in the instantaneous autocorrelation function (IAF) and induce artifacts in the TF representation (TFR). Compared to random missing samples, a more realistic and more challenging problem is the existence of burst missing samples. Unlike random missing samples, which cause artifacts to spread uniformly over the entire TF domain, the artifacts due to burst missing samples are highly localized around the true instantaneous frequencies, rendering TF analysis extremely challenging and many existing methods ineffective. In this dissertation, our objective is to develop novel signal processing techniques that offer effective TF analysis in the presence of burst missing samples. We propose two mutually related methods that recover missing entries in the IAF and reconstruct high-fidelity TFRs, approaching full-data results with negligible performance loss. In the first method, an IAF slice along time or lag is converted to a Hankel matrix, and its missing entries are recovered via atomic norm minimization. The second method generalizes this approach to reduce the effects of TF cross-terms. It considers an IAF patch, which is reformulated as a low-rank block Hankel matrix, and an annihilating filter-based approach is used to interpolate the IAF and recover the missing entries. Both methods are insensitive to signal magnitude differences. Furthermore, we develop a novel machine learning-based approach that offers cross-term-free TFRs with effective auto-term preservation. The superiority and usefulness of the proposed methods are demonstrated using simulated and real-world signals. / Electrical and Computer Engineering
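The dissertation recovers the missing entries via atomic norm minimization and an annihilating-filter approach; as a much simpler stand-in that illustrates the same structured low-rank idea on a burst gap, one can alternate a rank-r truncation with the Hankel and data-consistency constraints (a Cadzow-style iteration; the signal, gap location, and iteration count below are all illustrative):

```python
import numpy as np
from scipy.linalg import hankel

n, r = 64, 2
t = np.arange(n)
x_true = np.exp(1j * 0.9 * t) + 0.7 * np.exp(1j * 2.1 * t)

# A burst of consecutive missing samples.
mask = np.ones(n, dtype=bool)
mask[20:28] = False
x = np.where(mask, x_true, 0.0)

p = n // 2
for _ in range(200):
    # Project onto rank-r matrices.
    H = hankel(x[:p], x[p - 1:])
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    H = (U[:, :r] * s[:r]) @ Vh[:r]
    # Project back onto Hankel structure (average the anti-diagonals)...
    y = np.zeros(n, dtype=complex)
    cnt = np.zeros(n)
    for i in range(p):
        for j in range(n - p + 1):
            y[i + j] += H[i, j]
            cnt[i + j] += 1
    y /= cnt
    # ...and re-impose the known samples.
    x = np.where(mask, x_true, y)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(err)  # shrinks well below the initial gap error
```

This sketch only conveys why a rank-2 Hankel structure pins down the samples inside the burst; the methods in the dissertation are more sophisticated and come with recovery guarantees this iteration lacks.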
4

Structured matrix nearness problems : theory and algorithms

Borsdorf, Ruediger January 2012 (has links)
In many areas of science one often has a given matrix, representing for example a measured data set, and is required to find a matrix that is closest in a suitable norm to it and that additionally possesses a structure inherited from the model used or coming from the application. We call these problems structured matrix nearness problems. We look at three different groups of these problems that come from real applications, analyze the properties of the corresponding matrix structure, and propose algorithms to solve them efficiently. The first part of this thesis concerns the nearness problem of finding the nearest k-factor correlation matrix C(X) = diag(I_n - XX^T) + XX^T to a given symmetric matrix, subject to natural nonlinear constraints on the elements of the n x k matrix X, where distance is measured in the Frobenius norm. Such problems arise, for example, when one is investigating factor models of collateralized debt obligations (CDOs) or multivariate time series. We examine several algorithms for solving the nearness problem that differ in whether or not they can take account of the nonlinear constraints and in their convergence properties. Our numerical experiments show that the performance of the methods depends strongly on the problem, but that, among our tested methods, the spectral projected gradient method is the clear winner. In the second part we look at two two-sided optimization problems where the matrix of unknowns Y ∈ ℝ^{n x p} lies in the Stiefel manifold. These two problems come from an application in atomic chemistry where one is looking for atomic orbitals with prescribed occupation numbers. We analyze these two problems, propose an analytic optimal solution of the first, and show that an optimal solution of the second problem can be found by solving a convex quadratic programming problem with box constraints and p unknowns. We prove that the latter problem can be solved by the active-set method in at most 2p iterations. Subsequently, we analyze the set of optimal solutions C = {Y ∈ ℝ^{n x p} : Y^T Y = I_p, Y^T N Y = D} of the first problem for N symmetric and D diagonal, and find that a slight modification of it is a Riemannian manifold. We derive the geometric objects required to make an optimization over this manifold possible. We propose an augmented Lagrangian-based algorithm that uses these geometric tools and allows us to optimize an arbitrary smooth function over C. This algorithm can be used to select a particular solution out of the set C by posing a new optimization problem. We compare it numerically with a similar algorithm that, however, does not apply these geometric tools, and find that our algorithm yields better performance. The third part is devoted to low-rank nearness problems in the Q-norm, where the matrix of interest additionally has linear structure, meaning it lies in the set spanned by s predefined matrices U₁, ..., U_s ∈ {0,1}^{n x p}. These problems are often associated with model reduction, for example in speech encoding, filter design, or latent semantic indexing. We investigate three approaches that support any linear structure and examine further the geometric reformulation by Schuermans et al. (2003). We improve their algorithm in terms of reliability by applying the augmented Lagrangian method, and show in our numerical tests that the resulting algorithm yields better performance than other existing methods.
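The k-factor correlation matrix objective can be sketched with a plain projected gradient iteration. The thesis finds the spectral projected gradient method to be the clear winner; the unaccelerated version below only illustrates the structure C(X) = diag(I_n - XX^T) + XX^T and the row-norm constraint on X, with all problem sizes and step sizes chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 20, 3

# A symmetric target with unit diagonal (a noisy correlation-like matrix).
B = rng.standard_normal((n, n))
A = (B + B.T) / (2 * np.sqrt(n))
np.fill_diagonal(A, 1.0)

def C_of(X):
    C = X @ X.T
    np.fill_diagonal(C, 1.0)  # diag(I - XX^T) + XX^T has unit diagonal
    return C

def objective(X):
    D = C_of(X) - A
    return np.sum(D * D)      # squared Frobenius distance

def project_rows(X):
    # Enforce the natural constraint: each row of X has norm at most 1.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.maximum(norms, 1.0)

X = project_rows(0.1 * rng.standard_normal((n, k)))
f_prev = objective(X)
step = 1e-3
for _ in range(1000):
    E = X @ X.T - A
    np.fill_diagonal(E, 0.0)          # diagonal terms cancel in C(X) - A
    X = project_rows(X - step * 4 * E @ X)  # gradient of the off-diagonal misfit
f_final = objective(X)
print(f_final < f_prev)  # True: the objective decreases
```

With a sufficiently small fixed step the projected gradient iteration decreases the objective monotonically; the spectral step-length and nonmonotone line search studied in the thesis make this far more efficient in practice.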
5

Méthodes par blocs adaptées aux matrices structurées et au calcul du pseudo-inverse / Block methods adapted to structured matrices and calculation of the pseudo-inverse

Archid, Atika 27 April 2013 (has links)
Nous nous intéressons dans cette thèse à l'étude de certaines méthodes numériques de type Krylov dans le cas symplectique, en utilisant la technique de blocs. Ces méthodes, contrairement aux méthodes classiques, permettent à la matrice réduite de conserver la structure hamiltonienne, anti-hamiltonienne ou encore symplectique d'une matrice donnée. Parmi ces méthodes, nous nous sommes intéressés à la méthode d'Arnoldi symplectique par blocs, que nous appelons aussi bloc J-Arnoldi. Notre but essentiel est d'étudier cette méthode de façon théorique et numérique, sur la nouvelle structure du K-module libre ℝ²nx²s avec K = ℝ²sx²s, où s ≪ n désigne la taille des blocs utilisés. Un deuxième objectif est de chercher une approximation de l'opérateur exp(A)V ; nous étudions en particulier le cas où A est une matrice réelle hamiltonienne et anti-symétrique de taille 2n x 2n et V une matrice rectangulaire ortho-symplectique de taille 2n x 2s, sur le sous-espace de Krylov par blocs Km(A,V) = blockspan {V, AV, ..., Am-1V}, en conservant la structure de la matrice V. Cette approximation permet de résoudre plusieurs problèmes issus des équations aux dérivées partielles (EDP) et des systèmes d'équations différentielles ordinaires (EDO). Nous présentons également une méthode de Lanczos symplectique par blocs, que nous nommons bloc J-Lanczos. Cette méthode permet de réduire une matrice structurée sous la forme J-tridiagonale par blocs. Nous proposons des algorithmes basés sur deux types de normalisation : la factorisation SR et la factorisation Rj R. Dans une dernière partie, nous proposons un algorithme, utilisant la technique de blocs, qui généralise la méthode de Greville afin de déterminer de manière itérative, bloc de lignes par bloc de lignes, la pseudo-inverse de Moore-Penrose d'une matrice rectangulaire. Pour toutes ces méthodes, nous proposons des exemples numériques qui montrent l'efficacité de nos approches.
/ We study, in this thesis, some numerical block Krylov subspace methods. These methods preserve the geometric properties of the reduced matrix (Hamiltonian, skew-Hamiltonian, or symplectic). Among these methods, we focus on the block symplectic Arnoldi method, namely the block J-Arnoldi algorithm. Our main goal is to study this method, theoretically and numerically, using ℝ²nx²s as a free module over (ℝ²sx²s, +, x), with s ≪ n the block size. A second aim is to study the approximation of exp(A)V, where A is a real Hamiltonian and skew-symmetric matrix of size 2n x 2n and V a rectangular matrix of size 2n x 2s, on the block Krylov subspace Km(A, V) = blockspan {V, AV, ..., Am-1V}, while preserving the structure of the initial matrix. This approximation is required in many applications; for example, it is important for solving systems of ordinary differential equations (ODEs) or time-dependent partial differential equations (PDEs). We also present a block symplectic structure-preserving Lanczos method, namely the block J-Lanczos algorithm. Our approach is based on a block J-tridiagonalization procedure for a structured matrix. We propose algorithms based on two normalization methods: the SR factorization and the Rj R factorization. In the last part, we propose a generalization of Greville's method for iteratively computing the Moore-Penrose inverse of a rectangular real matrix; our purpose is to give a block version of Greville's method. All methods are accompanied by numerical examples.
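The block Arnoldi recursion underlying block J-Arnoldi can be sketched in its ordinary Euclidean form. This is only an analogue: the symplectic version in the thesis replaces the QR normalization by an SR-type factorization and the Euclidean inner product by the J-inner product. The sketch shows the block Krylov relation A V_m = V_{m+1} H that any Arnoldi-type process satisfies:

```python
import numpy as np

def block_arnoldi(A, V, m):
    """Classical block Arnoldi: an orthonormal basis of the block Krylov
    subspace blockspan{V, AV, ..., A^{m-1}V} and the block Hessenberg matrix H."""
    n, s = V.shape
    Q, _ = np.linalg.qr(V)
    Vs = [Q]
    H = np.zeros((s * (m + 1), s * m))
    for j in range(m):
        W = A @ Vs[j]
        for i in range(j + 1):               # orthogonalize against earlier blocks
            Hij = Vs[i].T @ W
            H[i * s:(i + 1) * s, j * s:(j + 1) * s] = Hij
            W -= Vs[i] @ Hij
        Q, R = np.linalg.qr(W)               # normalize the new block
        H[(j + 1) * s:(j + 2) * s, j * s:(j + 1) * s] = R
        Vs.append(Q)
    return np.hstack(Vs), H

rng = np.random.default_rng(2)
n, s, m = 40, 2, 5
A = rng.standard_normal((n, n))
V = rng.standard_normal((n, s))
Vm1, H = block_arnoldi(A, V, m)
Vm = Vm1[:, : s * m]
# Block Arnoldi relation: A V_m = V_{m+1} H.
print(np.allclose(A @ Vm, Vm1 @ H))  # True
```

In the symplectic setting the same recursion produces J-orthogonal blocks and a reduced matrix that keeps the Hamiltonian or skew-Hamiltonian structure, which is the point of the thesis.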
6

Sur des méthodes préservant les structures d'une classe de matrices structurées / On structure-preserving methods of a class of structured matrices

Ben Kahla, Haithem 14 December 2017 (has links)
Les méthodes d'algèbre linéaire classiques, pour le calcul de valeurs et vecteurs propres d'une matrice, ou d'approximations de rang inférieur (low-rank approximations) d'une solution, etc., ne tiennent pas compte des structures des matrices. Ces dernières sont généralement détruites durant le procédé de calcul. Des méthodes alternatives préservant ces structures font l'objet d'un intérêt important de la part de la communauté. Cette thèse constitue une contribution dans ce domaine. La décomposition SR peut être calculée via l'algorithme de Gram-Schmidt symplectique. Comme dans le cas classique, une perte d'orthogonalité peut se produire. Pour y remédier, nous avons proposé deux algorithmes, RSGSi et RMSGSi, qui consistent à ré-orthogonaliser deux fois les vecteurs à calculer. La perte de la J-orthogonalité s'en trouve améliorée de manière très significative. L'étude directe de la propagation des erreurs d'arrondi dans les algorithmes de Gram-Schmidt symplectique est très difficile à effectuer. Nous avons réussi à contourner cette difficulté et à donner des majorations de la perte de J-orthogonalité et de l'erreur de factorisation. Une autre façon de calculer la décomposition SR est basée sur les transformations de Householder symplectiques. Un choix optimal a abouti à l'algorithme SROSH. Cependant, ce dernier peut être sujet à une instabilité numérique. Nous avons proposé une nouvelle version modifiée, SRMSH, qui a l'avantage d'être aussi stable que possible. Une étude approfondie a été faite, présentant les différentes versions : SRMSH et SRMSH2. Dans le but de construire un algorithme SR d'une complexité d'ordre O(n³), où 2n est la taille de la matrice, une réduction (appropriée) de la matrice à une forme condensée (forme J-Hessenberg), via des similarités adéquates, est cruciale. Cette réduction peut être effectuée via l'algorithme JHESS. Nous avons montré qu'il est possible de réduire une matrice sous la forme J-Hessenberg en se basant exclusivement sur les transformations de Householder symplectiques. Le nouvel algorithme, appelé JHSH, est basé sur une adaptation de l'algorithme SRSH. Nous avons réussi à proposer deux nouvelles variantes, aussi stables que possible : JHMSH et JHMSH2. Nous avons constaté que ces algorithmes se comportent d'une manière similaire à l'algorithme JHESS. Une caractéristique importante de tous ces algorithmes est qu'ils peuvent rencontrer un breakdown fatal ou un "near breakdown" rendant impossible la suite des calculs, ou débouchant sur une instabilité numérique, privant le résultat final de toute signification. Ce phénomène n'a pas d'équivalent dans le cas euclidien. Nous avons réussi à élaborer une stratégie très efficace pour "guérir" le breakdown fatal et traiter le near breakdown. Les nouveaux algorithmes intégrant cette stratégie sont désignés par MJHESS, MJHSH, JHM²SH et JHM²SH2. Ces stratégies ont ensuite été intégrées dans la version implicite de l'algorithme SR, lui permettant de surmonter les difficultés rencontrées lors du fatal breakdown ou du near breakdown. Rappelons que, sans ces stratégies, l'algorithme SR s'arrête. Finalement, dans un autre cadre de matrices structurées, nous avons présenté un algorithme robuste, via FFT et matrice de Hankel, basé sur le calcul approché du plus grand commun diviseur (PGCD) de deux polynômes, pour résoudre le problème de la déconvolution d'images. Plus précisément, nous avons conçu un algorithme pour le calcul du PGCD de deux polynômes bivariés. La nouvelle approche est basée sur un algorithme rapide, de complexité quadratique O(n²), pour le calcul du PGCD de polynômes unidimensionnels. La complexité de notre algorithme est O(n²log(n)), où la taille des images floues est n x n. Les résultats expérimentaux avec des images synthétiquement floues illustrent l'efficacité de notre approche.
/ The classical linear algebra methods for calculating eigenvalues and eigenvectors of a matrix, or lower-rank approximations of a solution, etc., do not consider the structures of matrices. Such structures are usually destroyed in the numerical process. Alternative structure-preserving methods are the subject of important interest in the community. This thesis establishes a contribution in this field. The SR decomposition is usually implemented via the symplectic Gram-Schmidt algorithm. As in the classical case, a loss of orthogonality can occur. To remedy this, we have proposed two algorithms, RSGSi and RMSGSi, where the reorthogonalization of a current set of vectors against the previously computed set is performed twice. The loss of J-orthogonality is significantly improved. A direct rounding error analysis of the symplectic Gram-Schmidt algorithm is very hard to accomplish. We managed to get around this difficulty and give error bounds on the loss of J-orthogonality and on the factorization. Another way to implement the SR decomposition is based on symplectic Householder transformations. An optimal choice of free parameters provided an optimal version of the algorithm, SROSH. However, the latter may be subject to numerical instability. We have proposed a new modified version, SRMSH, which has the advantage of being numerically more stable. A detailed study leads to two new, numerically more stable variants: SRMSH and SRMSH2. In order to build an SR algorithm of complexity O(n³), where 2n is the size of the matrix, a reduction to a condensed matrix form (upper J-Hessenberg form) via adequate similarities is crucial. This reduction may be handled via the algorithm JHESS. We have shown that it is possible to reduce a general matrix to upper J-Hessenberg form based only on symplectic Householder transformations. The new algorithm, called JHSH, is based on an adaptation of the SRSH algorithm. We are led to two new variant algorithms, JHMSH and JHMSH2, which are significantly more stable numerically. We found that these algorithms behave quite similarly to the JHESS algorithm. The main drawback of all these algorithms (JHESS, JHMSH, JHMSH2) is that they may encounter fatal breakdowns or suffer from a severe form of near-breakdown, causing a brutal stop of the computations or leading to serious numerical instability. This phenomenon has no equivalent in the Euclidean case. We sketch out a very efficient strategy for curing fatal breakdowns and treating near-breakdowns. The new algorithms incorporating this modification are referred to as MJHESS, MJHSH, JHM²SH and JHM²SH2. These strategies were then incorporated into the implicit version of the SR algorithm to overcome the difficulties caused by fatal breakdowns or near-breakdowns. We recall that, without these strategies, the SR algorithm breaks down. Finally, and in another framework of structured matrices, we present a robust algorithm, via the FFT and a Hankel matrix, based on computing approximate greatest common divisors (GCDs) of polynomials, for solving the problem of blind image deconvolution. Specifically, we design a specialized algorithm for computing the GCD of bivariate polynomials. The new algorithm is based on a fast GCD algorithm for univariate polynomials of quadratic complexity O(n²) flops. The complexity of our algorithm is O(n²log(n)), where the size of the blurred images is n x n. Experimental results with synthetically blurred images are included to illustrate the effectiveness of our approach.
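The "re-orthogonalize twice" remedy behind RSGSi/RMSGSi can be illustrated in the ordinary Euclidean setting (the thesis works with the J-inner product and J-orthogonality, and with SR rather than QR, but the mechanism is the same in spirit; the ill-conditioned test matrix below is an arbitrary example):

```python
import numpy as np

def mgs(A, reorth=False):
    """Modified Gram-Schmidt QR; optionally orthogonalize each vector a
    second time against the previous ones ("twice is enough")."""
    n, k = A.shape
    Q = A.astype(float).copy()
    for j in range(k):
        for _ in range(2 if reorth else 1):
            for i in range(j):
                Q[:, j] -= (Q[:, i] @ Q[:, j]) * Q[:, i]
        Q[:, j] /= np.linalg.norm(Q[:, j])
    return Q

# Ill-conditioned columns expose the loss of orthogonality.
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((60, 30)))
V, _ = np.linalg.qr(rng.standard_normal((30, 30)))
A = U @ np.diag(np.logspace(0, -10, 30)) @ V.T  # condition number ~1e10

loss = lambda Q: np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1]))
l1 = loss(mgs(A))               # grows like eps * cond(A)
l2 = loss(mgs(A, reorth=True))  # restored to near machine precision
print(l2 < l1)  # True
```

In the symplectic case the quantity being controlled is the loss of J-orthogonality ||J - Q^T J Q|| rather than ||I - Q^T Q||, and the error bounds of the thesis quantify exactly this effect.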
