About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

On the use of low-rank arithmetic to reduce the complexity of parallel sparse linear solvers based on direct factorization techniques

Pichon, Grégoire, 29 November 2018
Solving sparse linear systems is a problem that arises in many scientific applications, and sparse direct solvers are a time-consuming and key kernel for those applications and for more advanced solvers such as hybrid direct-iterative solvers. For those reasons, optimizing their performance on modern architectures is critical. However, memory requirements and time-to-solution limit the use of direct methods for very large matrices. For other approaches, such as iterative methods, general black-box preconditioners that can ensure fast convergence for a wide range of problems are still missing. In the first part of this thesis, we present two approaches using a Block Low-Rank (BLR) compression technique to reduce the memory footprint and/or the time-to-solution of a supernodal sparse direct solver. This flat, non-hierarchical compression method takes advantage of the low-rank property of the blocks appearing during the factorization of sparse linear systems. The proposed solver can be used either as a direct solver at a lower precision or as a very robust preconditioner. The first approach, called Minimal Memory, illustrates the maximum memory gain that can be obtained with BLR compression, while the second approach, called Just-In-Time, mainly focuses on reducing the computational complexity and thus the time-to-solution. In the second part, we present a reordering strategy that increases the block granularity to better exploit locality on multicore architectures and to provide larger tasks to GPUs. This strategy relies on the block-symbolic factorization to refine the ordering produced by tools such as Metis or Scotch, but it does not change the number of operations required to solve the problem. Building on this approach, we propose in the third part of this manuscript a new low-rank clustering technique designed to cluster unknowns within a separator to obtain the BLR partition, and we demonstrate its advantages with respect to widely used clustering strategies. Both the reordering and clustering strategies were designed for the flat BLR representation, but they are also a first step toward hierarchical formats. In the last part of this thesis, we investigate a modified nested dissection strategy that aligns separators with respect to their parent to obtain more regular data structures.
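A minimal sketch of the core BLR operation (replacing a dense block by a truncated-SVD factorization at a prescribed tolerance) is given below; this is not the solver developed in the thesis, and the kernel, block size, and tolerance are illustrative assumptions.

```python
import numpy as np

def compress_block(A, tol=1e-8):
    """Truncated-SVD compression of a dense block: A ~= U @ V.

    Keeps the smallest rank k such that singular values below
    tol * s_max are dropped. Returns U (m, k) and V (k, n).
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))  # numerical rank at tol
    return U[:, :k] * s[:k], Vt[:k, :]

# A smooth kernel evaluated on two well-separated point clusters gives
# a numerically low-rank block, an illustrative stand-in for an
# off-diagonal block arising in a sparse factorization.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(5.0, 6.0, 200)
A = 1.0 / np.abs(x[:, None] - y[None, :])

U, V = compress_block(A)
print(f"rank {U.shape[1]}, storage {(U.size + V.size) / A.size:.1%} of dense")
print("relative error:", np.linalg.norm(A - U @ V) / np.linalg.norm(A))
```

In a BLR factorization, updates and solves are then performed directly on such (U, V) pairs, which is where the memory (Minimal Memory) and operation-count (Just-In-Time) savings come from.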
12

Low-Rank and Sparse Decomposition for Hyperspectral Image Enhancement and Clustering

Tian, Long, 03 May 2019
In this dissertation, new algorithms are developed to enhance hyperspectral imaging (HSI) analysis. A tensor data format is applied to sparse and low-rank decomposition of hyperspectral datasets, which can improve classification and detection performance, and a multi-view learning technique is applied to hyperspectral image clustering. Furthermore, a kernel version of the multi-view learning technique is proposed, which further improves clustering performance. Most low-rank and sparse decomposition algorithms for HSI analysis are based on a matrix data format. Because HSI has a high spectral dimension, a tensor-based extended low-rank and sparse decomposition (TELRSD) is proposed in this dissertation, giving better HSI classification performance with the low-rank tensor part and better HSI detection performance with the sparse tensor part. With this tensor-based method, HSI is processed in a 3D data format, and the relationships between spectral bands and pixels remain intact during the decomposition process. The proposed algorithm is compared with other state-of-the-art methods, and the experimental results show that TELRSD performs best among all compared algorithms. HSI clustering is an unsupervised task that aims to group pixels without labeled information. Low-rank sparse subspace clustering (LRSSC) is among the most popular algorithms for this task. A spatial-spectral multi-view low-rank sparse subspace clustering (SSMLC) algorithm is proposed in this dissertation, which extends LRSSC with the multi-view learning technique. In this algorithm, spectral and spatial views are created to generate a multi-view HSI dataset, where spectral partitioning, morphological component analysis (MCA), and principal component analysis (PCA) are applied to create the other views. Furthermore, a kernel version of SSMLC (k-SSMLC) is also investigated. The performance of SSMLC and k-SSMLC is compared with sparse subspace clustering (SSC), low-rank sparse subspace clustering (LRSSC), and spectral-spatial sparse subspace clustering (S4C); the results show that SSMLC improves on LRSSC and that k-SSMLC performs best. Spectral clustering has been shown to be equivalent to a non-negative matrix factorization (NMF) problem, so NMF can be applied to the clustering problem. To capture local and nonlinear features in the data, orthogonal NMF (ONMF), graph-regularized NMF (GNMF), and kernel NMF (k-NMF) have been proposed for better clustering performance. A non-linear orthogonal graph NMF (k-OGNMF) combines kernel, orthogonality, and graph constraints in NMF, improving clustering performance further. In the HSI domain, kernel multi-view orthogonal graph NMF (k-MOGNMF) is applied to subspace clustering, where k-OGNMF is extended with the multi-view algorithm, giving better performance and computational efficiency.
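TELRSD itself operates on tensors; the sketch below only illustrates the matrix-level low-rank plus sparse model it builds on, using simple alternating proximal steps (the thresholds and toy data are illustrative assumptions, and this heuristic alternation is not the dissertation's algorithm).

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def soft(M, tau):
    """Entrywise soft thresholding: prox of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lowrank_plus_sparse(X, tau_l=20.0, tau_s=4.0, n_iter=50):
    """Alternate prox steps so that X ~= L (low rank) + S (sparse)."""
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - S, tau_l)   # background (e.g., the HSI scene)
        S = soft(X - L, tau_s)  # sparse residual (e.g., anomalies)
    return L, S

# Toy data: a rank-2 background plus a few large spikes.
rng = np.random.default_rng(0)
L0 = rng.normal(size=(80, 2)) @ rng.normal(size=(2, 60))
S0 = np.zeros((80, 60))
S0[rng.integers(0, 80, 40), rng.integers(0, 60, 40)] = 10.0
L, S = lowrank_plus_sparse(L0 + S0)
print("low-rank part error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```

In the tensor setting of the dissertation, the matrix SVD steps are replaced by decompositions on the 3D data cube, which is what preserves the spectral-spatial structure during the split.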
13

Computing with functions in two dimensions

Townsend, Alex, January 2014
New numerical methods are proposed for computing with smooth scalar and vector valued functions of two variables defined on rectangular domains. Functions are approximated to essentially machine precision by an iterative variant of Gaussian elimination that constructs near-optimal low rank approximations. Operations such as integration, differentiation, and function evaluation are particularly efficient. Explicit convergence rates are shown for the singular values of differentiable and separately analytic functions, and examples are given to demonstrate some paradoxical features of low rank approximation theory. Analogues of QR, LU, and Cholesky factorizations are introduced for matrices that are continuous in one or both directions, deriving a continuous linear algebra. New notions of triangular structures are proposed and the convergence of the infinite series associated with these factorizations is proved under certain smoothness assumptions. A robust numerical bivariate rootfinder is developed for computing the common zeros of two smooth functions via a resultant method. Using several specialized techniques the algorithm can accurately find the simple common zeros of two functions with polynomial approximants of high degree (≥ 1,000). Lastly, low rank ideas are extended to linear partial differential equations (PDEs) with variable coefficients defined on rectangles. When these ideas are used in conjunction with a new one-dimensional spectral method the resulting solver is spectrally accurate and efficient, requiring O(n²) operations for rank 1 partial differential operators, O(n³) for rank 2, and O(n⁴) for rank ≥ 3 to compute an n × n matrix of bivariate Chebyshev expansion coefficients for the PDE solution. The algorithms in this thesis are realized in a software package called Chebfun2, which is an integrated two-dimensional component of Chebfun.
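The constructor idea (an iterative, pivoted Gaussian elimination building rank-1 corrections) can be sketched on a sampled grid. The discrete version below is only an illustrative approximation of the continuous algorithm behind Chebfun2.

```python
import numpy as np

def ge_low_rank(F, tol=1e-13, max_rank=50):
    """Gaussian elimination with complete pivoting on samples of a
    bivariate function, accumulating rank-1 terms so that F ~= C @ R."""
    E = F.copy()
    cols, rows = [], []
    scale = np.abs(F).max()
    for _ in range(max_rank):
        i, j = np.unravel_index(np.abs(E).argmax(), E.shape)  # pivot
        if abs(E[i, j]) <= tol * scale:
            break
        cols.append(E[:, j] / E[i, j])
        rows.append(E[i, :].copy())
        E = E - np.outer(cols[-1], rows[-1])  # rank-1 correction
    return np.array(cols).T, np.array(rows)

# Sample a smooth bivariate function on an equispaced grid.
x = np.linspace(-1, 1, 300)
y = np.linspace(-1, 1, 300)
F = np.cos(x[:, None] * y[None, :]) + np.exp(-((x[:, None] - y[None, :]) ** 2))
C, R = ge_low_rank(F)
print("numerical rank:", C.shape[1])
print("max error:", np.abs(F - C @ R).max())
```

Chebfun2 performs this elimination on the function itself, adaptively resolving each column and row with one-dimensional Chebyshev technology rather than a fixed grid, which is what makes the approximation near-optimal and essentially of machine precision.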
14

Large Scale Matrix Completion and Recommender Systems

Amadeo, Lily, 04 September 2015
"The goal of this thesis is to extend the theory and practice of matrix completion algorithms, and how they can be utilized, improved, and scaled up to handle large data sets. Matrix completion involves predicting missing entries in real-world data matrices using the modeling assumption that the fully observed matrix is low-rank. Low-rank matrices appear across a broad selection of domains, and such a modeling assumption is similar in spirit to Principal Component Analysis. Our focus is on large scale problems, where the matrices have millions of rows and columns. In this thesis we provide new analysis for the convergence rates of matrix completion techniques using convex nuclear norm relaxation. In addition, we validate these results on both synthetic data and data from two real-world domains (recommender systems and Internet tomography). The results we obtain show that with an empirical, data-inspired understanding of various parameters in the algorithm, this matrix completion problem can be solved more efficiently than some previous theory suggests, and therefore can be extended to much larger problems with greater ease. "
15

Exploiting Data Sparsity In Covariance Matrix Computations on Heterogeneous Systems

Charara, Ali, 24 May 2018
Covariance matrices are ubiquitous in computational sciences, typically describing the correlation of elements of large multivariate spatial data sets. For example, covariance matrices are employed in climate/weather modeling for maximum likelihood estimation to improve prediction, as well as in computational ground-based astronomy to enhance the observed image quality by filtering out noise produced by the adaptive optics instruments and atmospheric turbulence. These covariance matrices are dense, symmetric, positive-definite, and often data-sparse, and therefore hierarchically low-rank. This thesis investigates the performance limit of dense matrix computations (e.g., Cholesky factorization) on covariance matrix problems as the number of unknowns grows, in the context of the aforementioned applications. We employ recursive formulations of some of the basic linear algebra subroutines (BLAS) to accelerate the covariance matrix computation further, while reducing data traffic across the memory subsystem layers. However, dealing with large data sets (i.e., covariance matrices whose size runs into the billions) can rapidly become prohibitive in memory footprint and algorithmic complexity. Most importantly, this thesis investigates the tile low-rank (TLR) data format, a new compressed data structure and layout, which is valuable in exploiting data sparsity by approximating the operator. The TLR compressed data structure allows approximating the original problem up to a user-defined numerical accuracy. This comes at the expense of dealing with tasks of much lower arithmetic intensity than traditional dense computations. In fact, this thesis consolidates the two trends of dense and data-sparse linear algebra for HPC. Not only does the thesis leverage recursive formulations for dense Cholesky-based matrix algorithms, but it also implements a novel TLR Cholesky factorization using batched linear algebra operations to increase hardware occupancy and reduce API overhead. The reported performance of the dense and TLR Cholesky factorizations shows many-fold speedups against state-of-the-art implementations on various systems equipped with GPUs. Additionally, the TLR implementation gives the user the flexibility to select the desired accuracy. This trade-off between performance and accuracy is currently a well-established leading trend in the convergence of the third and fourth paradigms, i.e., HPC and Big Data, when moving forward with the exascale software roadmap.
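The TLR layout itself is simple to sketch: dense diagonal tiles, truncated-SVD factor pairs off the diagonal. The snippet below shows only this storage format on a toy exponential covariance; the batched, GPU-resident compute kernels that make TLR fast in the thesis are well beyond its scope, and the tile size and tolerance are illustrative.

```python
import numpy as np

def to_tlr(A, nb=64, tol=1e-6):
    """Compress a matrix into tile low-rank (TLR) form: dense tiles on
    the diagonal, (U, V) truncated-SVD pairs elsewhere."""
    n = A.shape[0]
    tiles = {}
    for i in range(0, n, nb):
        for j in range(0, n, nb):
            T = A[i:i+nb, j:j+nb]
            if i == j:
                tiles[(i, j)] = ('dense', T.copy())
            else:
                U, s, Vt = np.linalg.svd(T, full_matrices=False)
                k = max(1, int(np.sum(s > tol * s[0])))  # tile rank
                tiles[(i, j)] = ('lr', U[:, :k] * s[:k], Vt[:k, :])
    return tiles

# Exponential covariance of 1D points: dense, SPD, and data-sparse.
x = np.sort(np.random.default_rng(2).random(512))
A = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)
tiles = to_tlr(A)
entries = sum(t[1].size if t[0] == 'dense' else t[1].size + t[2].size
              for t in tiles.values())
print(f"TLR storage: {entries / A.size:.1%} of dense")
```

A TLR Cholesky factorization then schedules many small, similar operations on these tiles, which is what motivates the batched linear algebra approach described above.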
16

Robust low-rank tensor approximations using group sparsity

Han, Xu, 21 January 2019
In recent decades, tensor decompositions have gained popularity in several application domains. Most existing tensor decomposition methods require an estimate of the tensor rank in a preprocessing step to guarantee good decomposition results. Unfortunately, learning the exact rank of the tensor can be difficult in some cases, such as for low signal-to-noise ratios. The objective of this thesis is to compute the best low-rank tensor approximation through a joint estimation of the rank and the loading matrices from the noisy tensor. Based on the low-rank property and an overestimation of the loading matrices or the core tensor, this joint estimation problem is solved by promoting group sparsity of the overestimated loading matrices and/or the core tensor. More particularly, three new methods are proposed to achieve efficient low-rank estimation for three different tensor decomposition models, namely the Canonical Polyadic Decomposition (CPD), the Block Term Decomposition (BTD), and the Multilinear Tensor Decomposition (MTD). All the proposed methods consist of two steps: the first step estimates the rank, and the second step uses the estimated rank to compute the loading matrices accurately. Numerical simulations on noisy tensors and results on real data show the effectiveness of the proposed methods compared to state-of-the-art approaches.
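A toy version of the two-step idea for the CPD case is sketched below: over-estimate the number of components, promote group (column) sparsity on the loading matrices with a proximal shrinkage inside an ALS-like loop, and read the rank off the surviving columns. The penalty value and update scheme are illustrative assumptions, not the thesis's algorithms.

```python
import numpy as np

def cpd_rank_estimate(T, R_over=10, lam=0.5, n_iter=200, tol=1e-3):
    """Estimate the CP rank of a 3-way tensor via group sparsity on an
    over-parameterized set of loading matrices (heuristic sketch)."""
    rng = np.random.default_rng(0)
    A = [rng.normal(size=(d, R_over)) for d in T.shape]

    def unfold(X, mode):
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    def khatri_rao(U, V):  # columnwise Kronecker product
        return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

    for _ in range(n_iter):
        for mode in range(3):
            others = [A[m] for m in range(3) if m != mode]
            K = khatri_rao(others[0], others[1])
            A[mode] = unfold(T, mode) @ np.linalg.pinv(K).T  # LS update
            norms = np.linalg.norm(A[mode], axis=0)
            shrink = np.maximum(norms - lam, 0.0) / np.maximum(norms, 1e-12)
            A[mode] *= shrink  # group soft threshold shrinks spurious columns
    norms = np.linalg.norm(A[0], axis=0)
    return int(np.sum(norms > tol * norms.max()))

rng = np.random.default_rng(3)
T = np.einsum('ir,jr,kr->ijk', rng.normal(size=(20, 3)),
              rng.normal(size=(18, 3)), rng.normal(size=(16, 3)))
T += 0.01 * rng.normal(size=T.shape)
print("estimated rank:", cpd_rank_estimate(T))  # expect 3
```

Once the rank is fixed, the second step of each proposed method refits the loading matrices at that rank, free of the shrinkage bias introduced by the penalty.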
17

Novel adaptive reconstruction schemes for accelerated myocardial perfusion magnetic resonance imaging

Lingala, Sajan Goud, 01 December 2013
Coronary artery disease (CAD) is one of the leading causes of death in the world. In the United States alone, it is estimated that approximately every 25 seconds a new CAD event occurs, and approximately every minute someone dies of one. Detecting CAD in its early stages is critical to reducing mortality rates. Magnetic resonance imaging of myocardial perfusion (MR-MPI) has received significant attention over the last decade due to its ability to provide a unique view of the microcirculatory blood flow in the myocardial tissue through the coronary vascular network. The ability of MR-MPI to detect changes in microcirculation during the early stages of ischemic events makes it a useful tool for identifying myocardial tissue that is alive but at risk of dying. However, this technique is not yet fully established clinically due to fundamental limitations imposed by MRI device physics. The limitations of current MRI schemes often make it challenging to simultaneously achieve high spatio-temporal resolution, sufficient spatial coverage, and good image quality in myocardial perfusion MRI. Furthermore, the acquisitions are typically set up to acquire images during breath holding, which often results in motion artifacts due to improper breath-hold patterns. This dissertation develops novel image reconstruction methods, in conjunction with non-Cartesian sampling, for the reconstruction of dynamic MRI data from highly accelerated, under-sampled Fourier measurements. The reconstruction methods are based on adaptive signal models that represent the dynamic data using few model coefficients. Three novel adaptive reconstruction methods are developed and validated: (a) low-rank and sparsity based modeling, (b) blind compressed sensing, and (c) motion-compensated compressed sensing. The developed methods are applicable to a wide range of dynamic imaging problems. In the context of MR-MPI, this dissertation shows that the developed methods can enable free-breathing myocardial perfusion MRI acquisitions with high spatio-temporal resolution (< 2 mm × 2 mm, 1 heartbeat) and slice coverage (up to 8 slices).
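A toy instance of the low-rank model (a) is sketched below for Cartesian sampling: alternate singular-value thresholding on the Casorati matrix (pixels × frames) with data consistency on the acquired k-space samples. The dissertation's methods use non-Cartesian sampling and richer adaptive models; the threshold and sampling pattern here are illustrative assumptions.

```python
import numpy as np

def lowrank_dmri_recon(kdata, mask, tau=1.0, n_iter=100):
    """Reconstruct a dynamic image series from undersampled Cartesian
    k-space by low-rank (Casorati) thresholding + data consistency."""
    ny, nx, nt = kdata.shape
    X = np.zeros((ny, nx, nt), dtype=complex)
    for _ in range(n_iter):
        C = X.reshape(ny * nx, nt)            # Casorati matrix
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        C = U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)
        X = C.reshape(ny, nx, nt)
        K = np.fft.fft2(X, axes=(0, 1))
        K[mask] = kdata[mask]                 # keep acquired samples
        X = np.fft.ifft2(K, axes=(0, 1))
    return X

# Toy dynamic series: static background plus one temporal modulation
# (Casorati rank 2), sampled at 35% in k-space.
rng = np.random.default_rng(4)
ny = nx = 32; nt = 20
t = np.linspace(0, 1, nt)
X0 = rng.random((ny, nx))[..., None] + \
     rng.random((ny, nx))[..., None] * np.sin(2 * np.pi * t)
mask = rng.random((ny, nx, nt)) < 0.35
kdata = np.fft.fft2(X0, axes=(0, 1)) * mask
X = lowrank_dmri_recon(kdata, mask)
print("NRMSE:", np.linalg.norm(X - X0) / np.linalg.norm(X0))
```

The blind compressed sensing and motion-compensated models replace this fixed low-rank penalty with dictionaries and deformations learned from the data itself.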
18

Bayesian learning methods for neural coding

Park, Mi Jung, 27 January 2014
A primary goal in systems neuroscience is to understand how neural spike responses encode information about the external world. A popular approach to this problem is to build an explicit probabilistic model that characterizes the encoding relationship in terms of a cascade of stages: (1) linear dimensionality reduction of a high-dimensional stimulus space using a bank of filters or receptive fields (RFs); (2) a nonlinear function from filter outputs to spike rate; and (3) a stochastic spiking process with recurrent feedback. These models have described single- and multi-neuron spike responses in a wide variety of brain areas. This dissertation addresses Bayesian methods to efficiently estimate the linear and nonlinear stages of the cascade encoding model. In the first part, the dissertation describes a novel Bayesian receptive field estimator based on a hierarchical prior that flexibly incorporates knowledge about the shapes of neural receptive fields. This estimator achieves error rates several times lower than existing methods, and can be applied to a variety of other neural inference problems such as extracting structure in fMRI data. The dissertation also presents active learning frameworks, developed for receptive field estimation with a hierarchical prior, for use in real-time neurophysiology experiments. In addition, the dissertation describes a novel low-rank model for high-dimensional receptive fields, combined with a hierarchical prior, for more efficient receptive field estimation. In the second part, the dissertation describes new models for neural nonlinearities using Gaussian processes (GPs), together with Bayesian active learning algorithms that rapidly estimate neural nonlinearities in closed-loop neurophysiology experiments. The dissertation also presents several stimulus selection criteria and compares their performance in neural nonlinearity estimation. Furthermore, the dissertation presents a variation of these models that includes an additional latent Gaussian noise source to infer the degree of over-dispersion in neural spike responses. The proposed model successfully captures various mean-variance relationships in neural spike responses and achieves higher prediction accuracy than previous models.
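The baseline behind such estimators is the linear-Gaussian MAP estimate, which for an isotropic prior reduces to ridge regression; the dissertation's hierarchical priors instead learn the shape of the prior covariance (smoothness, locality, low rank) from the data. A sketch of the isotropic baseline, with illustrative parameters:

```python
import numpy as np

def map_rf(S, r, alpha=1.0, sigma2=1.0):
    """MAP receptive field for the linear-Gaussian model
    r = S w + noise, with prior w ~ N(0, I / alpha).

    Posterior mean: (S^T S / sigma2 + alpha I)^{-1} S^T r / sigma2.
    """
    d = S.shape[1]
    return np.linalg.solve(S.T @ S / sigma2 + alpha * np.eye(d),
                           S.T @ r / sigma2)

# Toy experiment: a smooth Gaussian-bump RF and white-noise stimuli.
rng = np.random.default_rng(5)
d, n = 100, 400
w_true = np.exp(-0.5 * ((np.arange(d) - 50) / 6.0) ** 2)
S = rng.normal(size=(n, d))
r = S @ w_true + 0.5 * rng.normal(size=n)
w_hat = map_rf(S, r, alpha=2.0, sigma2=0.25)
print("correlation with true RF:", np.corrcoef(w_hat, w_true)[0, 1])
```

Replacing the fixed `alpha * np.eye(d)` with a learned, structured covariance, and parameterizing it with a low-rank form for high-dimensional RFs, is what distinguishes the hierarchical estimators described above.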
19

Mobile localization : approach and applications

Rallapalli, Swati, 09 February 2015
Localization is critical to a number of wireless network applications, and in many situations GPS is not suitable. This dissertation (i) develops novel localization schemes for wireless networks by explicitly incorporating mobility information and (ii) applies localization to physical analytics, i.e., understanding shoppers' behavior within retail spaces by leveraging inertial sensors, Wi-Fi, and vision enabled by smart glasses. More specifically, we first focus on multi-hop mobile networks, analyze real mobility traces, and observe that they exhibit temporal stability and low-rank structure. Motivated by these observations, we develop novel localization algorithms that effectively capture, and also adapt to, different degrees of these properties. Using extensive simulations and testbed experiments, we demonstrate the accuracy and robustness of our new schemes. Second, we focus on localizing a single mobile node that may not be connected with multiple nodes (e.g., without network connectivity or connected only to an access point). We propose trajectory-based localization using Wi-Fi or magnetic field measurements and show that these measurements have the potential to uniquely identify a trajectory. We then develop a novel approach that leverages multi-level wavelet coefficients to first identify the trajectory and then localize to a point on it, and we show that this approach is highly accurate and power efficient using indoor and outdoor experiments. Finally, localization is a critical step in enabling many applications; an important one is physical analytics, which has the potential to provide deep insight into shoppers' interests and activities, and therefore better advertisements, recommendations, and a better shopping experience. To enable physical analytics, we build the ThirdEye system, which first achieves zero-effort localization by leveraging emerging devices like Google Glass: its AutoLayout component fuses video, Wi-Fi, and inertial sensor data to simultaneously localize shoppers while constructing and updating the product layout in a virtual coordinate space. Further, ThirdEye comprises a range of schemes that use a combination of vision and inertial sensing to study mobile users' behavior while shopping, namely walking, dwelling, gazing, and reaching out. We show the effectiveness of ThirdEye through an evaluation in two large retail stores in the United States.
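A toy version of the trajectory-identification step is sketched below: compute multi-level Haar wavelet coefficients of a measured signal trace and match it against candidate trajectories by nearest neighbor. The hand-rolled transform, synthetic RSSI profiles, and matching rule are all illustrative assumptions, not the dissertation's scheme.

```python
import numpy as np

def haar_coeffs(x, levels=4):
    """Multi-level Haar wavelet coefficients of a 1-D signal whose
    length is divisible by 2**levels."""
    out, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        pairs = a.reshape(-1, 2)
        out.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))  # details
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)          # approx
    out.append(a)
    return np.concatenate(out[::-1])

def identify_trajectory(measured, database, levels=4):
    """Nearest-neighbor trajectory match in wavelet-coefficient space."""
    q = haar_coeffs(measured, levels)
    dists = [np.linalg.norm(q - haar_coeffs(c, levels)) for c in database]
    return int(np.argmin(dists))

# Synthetic Wi-Fi RSSI profiles along three corridors; match a noisy
# reading taken along corridor 1.
rng = np.random.default_rng(6)
t = np.linspace(0, 1, 64)
db = [10.0 * np.sin(2 * np.pi * f * t) - 60.0 for f in (1, 2, 3)]
reading = db[1] + rng.normal(scale=1.0, size=64)
print("matched trajectory:", identify_trajectory(reading, db))  # 1
```

Matching on coarse-level coefficients first and refining only among surviving candidates is one way such a scheme can stay power efficient on a mobile device.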
20

Unifying Low-Rank Models for Visual Learning

Cabral, Ricardo da Silveira, 01 February 2015
Many problems in signal processing, machine learning and computer vision can be solved by learning low rank models from data. In computer vision, problems such as rigid structure from motion have been formulated as an optimization over subspaces with fixed rank. These hard-rank constraints have traditionally been imposed by a factorization that parameterizes subspaces as a product of two matrices of fixed rank. Whilst factorization approaches lead to efficient and kernelizable optimization algorithms, they have been shown to be NP-hard in the presence of missing data. Inspired by recent work in compressed sensing, hard-rank constraints have been replaced by soft-rank constraints, such as the nuclear norm regularizer. Vis-a-vis hard-rank approaches, soft-rank models are convex even in the presence of missing data: but how is convex optimization solving an NP-hard problem? This thesis addresses this question by analyzing the relationship between hard and soft rank constraints in the unsupervised factorization with missing data problem. Moreover, we extend soft rank models to weakly supervised and fully supervised learning problems in computer vision. There are four main contributions of our work: (1) The analysis of a new unified low-rank model for matrix factorization with missing data. Our model subsumes soft and hard-rank approaches and merges advantages from previous formulations, such as efficient algorithms and kernelization. It also provides justifications on the choice of algorithms and regions that guarantee convergence to global minima. (2) A deterministic "rank continuation" strategy for the NP-hard unsupervised factorization with missing data problem, which is highly competitive with the state-of-the-art and often achieves globally optimal solutions. In preliminary work, we show that this optimization strategy is applicable to other NP-hard problems which are typically relaxed to convex semidefinite programs (e.g., MAX-CUT, the quadratic assignment problem). (3) A new soft-rank fully supervised robust regression model. This convex model is able to deal with noise, outliers and missing data in the input variables. (4) A new soft-rank model for weakly supervised image classification and localization. Unlike existing multiple-instance approaches for this problem, our model is convex.
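A sketch of the hard-rank side of this picture is given below: alternating least squares on the observed entries, warm-started rank by rank in the spirit of rank continuation. It is a simplified illustration under stated assumptions, not the deterministic strategy of the thesis.

```python
import numpy as np

def als_missing(M, mask, rank, n_iter=100, reg=1e-3, UV=None):
    """Alternating least squares for M ~= U @ V.T on observed entries."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U, V = UV if UV is not None else (rng.normal(size=(m, rank)),
                                      rng.normal(size=(n, rank)))
    I = reg * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):
            o = mask[i]
            U[i] = np.linalg.solve(V[o].T @ V[o] + I, V[o].T @ M[i, o])
        for j in range(n):
            o = mask[:, j]
            V[j] = np.linalg.solve(U[o].T @ U[o] + I, U[o].T @ M[o, j])
    return U, V

def rank_continuation(M, mask, max_rank):
    """Solve at rank 1, then warm-start each higher rank from the
    previous solution plus a small random column (illustrative)."""
    rng = np.random.default_rng(1)
    U = V = None
    for r in range(1, max_rank + 1):
        if U is not None:
            U = np.hstack([U, 0.01 * rng.normal(size=(M.shape[0], 1))])
            V = np.hstack([V, 0.01 * rng.normal(size=(M.shape[1], 1))])
            U, V = als_missing(M, mask, r, UV=(U, V))
        else:
            U, V = als_missing(M, mask, r)
    return U, V

rng = np.random.default_rng(2)
M = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 50))   # rank 4
mask = rng.random(M.shape) < 0.5                          # observe 50%
U, V = rank_continuation(M, mask, max_rank=4)
print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```

The thesis's analysis concerns when such hard-rank iterations, suitably initialized (e.g., from a soft-rank nuclear-norm solution), reach the global minimum despite the problem being NP-hard in general.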
