31

Limitations of Classical Tomographic Reconstructions from Restricted Measurements and Enhancing with Physically Constrained Machine Learning

January 2020 (has links)
abstract: This work is concerned with how best to reconstruct images from limited angle tomographic measurements. An introduction to tomography and to limited angle tomography is provided, along with a brief overview of the many fields to which this work may contribute. The traditional tomographic image reconstruction approach involves Fourier domain representations. The classic Filtered Back Projection algorithm is discussed and used for comparison throughout the work. Bayesian statistics and information entropy considerations are described. The Maximum Entropy reconstruction method is derived and its performance in limited angular measurement scenarios is examined. Many new approaches become available once the reconstruction problem is placed in the algebraic form Ax = b, in which the measurement geometry and instrument response are defined by the matrix A, the measured object by the column vector x, and the resulting measurements by b. With full angular coverage it is straightforward to invert A. However, for the limited angle measurement scenarios of interest in this work, the inversion is highly underconstrained and has an infinite number of possible solutions x consistent with the measurements b in a high dimensional space. The algebraic formulation therefore requires high-performing regularization approaches, which add constraints based on prior information about what is being measured. These constraints, beyond the measurement matrix A, are added with the goal of selecting the best image from this vast uncertainty space. It is well established within this work that developing satisfactory regularization techniques is all but impossible except for the simplest pathological cases; there is a need to capture the "character" of the objects being measured. The novel result of this effort is a reconstruction approach that matches whatever reconstruction method has proven best for the types of objects being measured when full angular coverage is available. However, when confronted with limited angle tomographic situations, or early in a series of measurements, the approach relies on a prior understanding of the "character" of the objects measured. This understanding is learned from examples by a parallel Deep Neural Network. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
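
A minimal numerical sketch of the algebraic formulation described above, assuming an extreme two-angle toy geometry (row and column sums of an 8×8 image) and a Tikhonov-regularized solve; the sizes, object, and regularization weight are illustrative assumptions, not the dissertation's setup:

```python
import numpy as np

# Toy algebraic formulation Ax = b: an 8x8 image measured by row and
# column sums only (an extreme limited-angle case: two projection angles).
n = 8
N = n * n
rows = np.zeros((n, N))
cols = np.zeros((n, N))
for i in range(n):
    rows[i, i * n:(i + 1) * n] = 1.0   # horizontal ray sums
    cols[i, i::n] = 1.0                # vertical ray sums
A = np.vstack([rows, cols])            # 16 measurements, 64 unknowns

x_true = np.zeros(N)
x_true[[18, 19, 26, 27]] = 1.0         # a small square object
b = A @ x_true

# The system is underdetermined (rank(A) < N), so plain inversion fails;
# Tikhonov regularization selects one solution from the infinite set
# consistent with b by adding a constraint beyond the matrix A.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ b)
print("residual:", np.linalg.norm(A @ x_hat - b))
```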
32

Efficient Techniques of Sparse Signal Analysis for Enhanced Recovery of Information in Biomedical Engineering and Geosciences

Sana, Furrukh 11 1900 (has links)
Sparse signals are abundant among both natural and man-made signals. Sparsity implies that the signal essentially resides in a low-dimensional subspace. The sparsity of the signal can be exploited to improve its recovery from limited and noisy observations. Traditional estimation algorithms generally lack the ability to take advantage of signal sparsity. This dissertation considers several problems in the areas of biomedical engineering and geosciences with the aim of enhancing the recovery of information by exploiting the underlying sparsity in the problem. The objective is to overcome fundamental bottlenecks, both in terms of estimation accuracy and required computational resources. In the first part of the dissertation, we present a high precision technique for the monitoring of human respiratory movements by exploiting the sparsity of wireless ultra-wideband signals. The proposed technique provides a novel methodology for overcoming the Nyquist sampling constraint and enables robust performance in the presence of noise and interference. We also present a comprehensive framework for the important problem of extracting fetal electrocardiogram (ECG) signals from abdominal ECG recordings of pregnant women. The multiple measurement vectors approach utilized for this purpose provides an efficient mechanism for exploiting the common structure of ECG signals, when represented in sparse transform domains, and allows leveraging information from multiple ECG electrodes under a joint estimation formulation. In the second part of the dissertation, we adopt sparse signal processing principles for improved information recovery in large-scale subsurface reservoir characterization problems. We propose multiple new algorithms for sparse representation of subsurface geological structures, incorporation of useful prior information into the estimation process, and reduction of the computational complexity of the problem. The techniques presented here enable significantly enhanced imaging of the subsurface earth and result in substantial savings in convergence time, leading to optimized placement of oil wells. This dissertation demonstrates through detailed experimental analysis that the sparse estimation approach not only enables enhanced information recovery in a variety of application areas, but also greatly helps in reducing the computational complexity associated with these problems.
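
To illustrate the general principle the abstract relies on, here is a minimal sketch of sparse recovery from limited noisy observations via the iterative soft-thresholding algorithm (ISTA) for the lasso; the problem sizes and the `soft_threshold` helper are illustrative assumptions, not the dissertation's algorithms:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
m, n, k = 60, 200, 5                 # 60 noisy measurements of a 5-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# ISTA: proximal gradient descent on 0.5*||Ax - b||^2 + lam*||x||_1;
# the l1 penalty exploits sparsity to recover x from m << n observations.
lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```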
33

Convex matrix sparsity for demixing with an application to graphical model structure estimation

Vinyes, Marina 27 November 2018 (has links)
The goal of machine learning is to learn a model from data that will make accurate predictions on data it has not seen before. In order to obtain a model that generalizes to new data and avoids overfitting, we need to restrain the model. These restrictions usually encode a priori knowledge of the structure of the model. Early approaches used regularization: first ridge regression, and later the Lasso to induce sparsity in the solution. Sparsity, also known as parsimony, has emerged as a fundamental concept in machine learning. Parsimonious models are appealing since they provide more interpretability and better generalization (avoiding overfitting) through a reduced number of parameters. Beyond general sparsity, models are in many cases constrained structurally so that they have a simple representation in terms of some fundamental elements, consisting for example of a collection of specific vectors, matrices or tensors. These fundamental elements are called atoms. In this context, atomic norms provide a general framework for estimating these sorts of models.
The goal of this thesis is to use the framework of convex sparsity provided by atomic norms to study a form of matrix sparsity. First, we develop an efficient algorithm based on Frank-Wolfe methods that is particularly adapted to solving problems regularized with an atomic norm. Then, we focus on the structure estimation of Gaussian graphical models, where the structure of the graph is encoded in the precision matrix, and study the case with unobserved variables. We propose a convex formulation with an algorithmic approach and provide a theoretical result stating necessary conditions for recovering the desired structure. Finally, we consider the problem of demixing a signal into two or more components via the minimization of a sum of norms or gauges, each encoding a structural prior on the corresponding component to recover. In particular, we provide general exact recovery guarantees in the noiseless setting based on incoherence measures.
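
As an illustration of the atomic-norm framework, the following is a minimal Frank-Wolfe sketch for a trace-norm-constrained least-squares problem, whose atoms are rank-one matrices; this is a generic textbook instance, not the algorithm developed in the thesis, and the sizes and constraint radius are assumptions:

```python
import numpy as np

# Frank-Wolfe for  min_X 0.5*||X - Y||_F^2  s.t.  ||X||_* <= tau.
# The atoms of the trace-norm ball are rank-one matrices tau*u*v^T, so the
# linear minimization oracle reduces to one top singular pair per iteration.
rng = np.random.default_rng(1)
Y = rng.standard_normal((30, 30))
tau = 5.0
X = np.zeros_like(Y)
for t in range(200):
    G = X - Y                              # gradient of the smooth objective
    U, s, Vt = np.linalg.svd(-G)           # top singular pair of -gradient
    S = tau * np.outer(U[:, 0], Vt[0, :])  # best atom: minimizes <G, S> over the ball
    gamma = 2.0 / (t + 2.0)                # standard Frank-Wolfe step size
    X = (1 - gamma) * X + gamma * S
print("trace norm of X:", np.linalg.svd(X, compute_uv=False).sum())
```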
34

Recent developments in curvelet-based seismic processing

Herrmann, Felix J. January 2007 (has links)
No description available.
35

Incorporating Sparse Attention Mechanism into Transformer for Object Detection in Images

Duc Dao, Cuong January 2022 (has links)
DEtection TRansformer, DETR, introduces an innovative design for object detection based on softmax attention. However, the softmax operation produces dense attention patterns, i.e., all entries in the attention matrix receive a non-zero weight, regardless of their relevance for detection. In this work, we explore several alternatives to softmax to incorporate sparsity into the architecture of DETR. Specifically, we replace softmax with sparse transformations from the α-entmax family: sparsemax and entmax-1.5, which induce a set amount of sparsity, and α-entmax, which treats sparsity as a learnable parameter of each attention head. In addition to evaluating the effect on detection performance, we examine the resulting attention maps from the perspective of explainability. To this end, we introduce three evaluation metrics to quantify the sparsity, complementing the qualitative observations. Although our experimental results on the COCO detection dataset do not show an increase in detection performance, we find that learnable sparsity provides more flexibility to the model and produces more explicative attention maps. To the best of our knowledge, we are the first to introduce learnable sparsity into the architecture of transformer-based object detectors.
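
For reference, a minimal numpy sketch of sparsemax (Martins & Astudillo, 2016), the simplest member of the α-entmax family named above; the example scores are illustrative:

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of the logits onto the probability
    simplex. Unlike softmax, it can assign exactly zero to low-scoring
    entries, yielding sparse attention patterns."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum     # entries kept in the support
    k_max = k[support][-1]
    tau = (cumsum[k_max - 1] - 1.0) / k_max # threshold shared by the support
    return np.maximum(z - tau, 0.0)

scores = np.array([2.0, 1.1, 0.9, -1.0])
print("softmax  :", np.exp(scores) / np.exp(scores).sum())  # all entries nonzero
print("sparsemax:", sparsemax(scores))                      # tail entries exactly 0
```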
36

Sparse and orthogonal singular value decomposition

Khatavkar, Rohan January 1900 (has links)
Master of Science / Department of Statistics / Kun Chen / The singular value decomposition (SVD) is a commonly used matrix factorization technique in statistics, and it is very effective in revealing many low-dimensional structures in a noisy data matrix or a coefficient matrix of a statistical model. In particular, it is often desirable to obtain a sparse SVD, i.e., one in which only a few singular values are nonzero and their corresponding left and right singular vectors are also sparse. However, in several existing methods for sparse SVD estimation, the exact orthogonality among the singular vectors is often sacrificed due to the difficulty of incorporating the non-convex orthogonality constraint in sparse estimation. Imposing orthogonality in addition to sparsity, albeit difficult, can be critical in restricting and guiding the search of the sparsity pattern and facilitating model interpretation. Combining the ideas of penalized regression and Bregman iterative methods, we propose two methods that strive to achieve the dual goal of sparse and orthogonal SVD estimation, in the general framework of high dimensional multivariate regression. We set up simulation studies to demonstrate the efficacy of the proposed methods.
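
As a simplified illustration of the sparse-SVD idea, the sketch below alternates soft-thresholding of the left and right singular vectors for a single rank-one layer; it does not enforce the cross-layer orthogonality the proposed Bregman-based methods target, and the function names and penalty levels are assumptions:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_rank1_svd(X, lam_u=0.1, lam_v=0.1, iters=100):
    """Alternating soft-thresholding for a rank-one sparse SVD of X:
    penalized power iterations that shrink small loadings to exact zero."""
    u = X[:, 0] / np.linalg.norm(X[:, 0])
    for _ in range(iters):
        v = soft(X.T @ u, lam_v)
        if np.linalg.norm(v) == 0: break
        v /= np.linalg.norm(v)
        u = soft(X @ v, lam_u)
        if np.linalg.norm(u) == 0: break
        u /= np.linalg.norm(u)
    return u, u @ X @ v, v

rng = np.random.default_rng(2)
u0 = np.zeros(40); u0[:4] = 1.0       # sparse true singular vectors
v0 = np.zeros(30); v0[:3] = 1.0
X = np.outer(u0, v0) + 0.1 * rng.standard_normal((40, 30))
u, d, v = sparse_rank1_svd(X)
print("leading value %.2f; nonzeros in u, v: %d, %d"
      % (d, (u != 0).sum(), (v != 0).sum()))
```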
37

Non-chordal patterns associated with the positive definite completion problem / Estiaan Murrell Klem

Klem, Estiaan Murrell January 2015 (has links)
A partial matrix is a matrix for which some entries are specified and some unspecified. In general, completion problems ask whether a given partial matrix may be completed to a matrix where all the entries are specified, such that this completion admits a specific structure. The positive definite completion problem asks whether a partial Hermitian matrix admits a completion such that the completed matrix is positive semidefinite. The minimum solution criterion is that every fully specified principal submatrix is nonnegative. Then the set of partial Hermitian matrices which admit a positive semidefinite completion forms a convex cone, and its dual cone can be identified as the set of positive semidefinite Hermitian matrices with zeros in the entries that correspond to non-edges in the graph G. Furthermore, the set of partial Hermitian matrices with non-negative fully specified principal minors also forms a convex cone, and its dual cone can be identified as the set of positive semidefinite Hermitian matrices which can be written as the sum of rank one matrices with underlying graph G. Consequently, the problem reduces to determining when these cones are equal. Indeed, we find that this happens if and only if the underlying graph is chordal. It then follows that the extreme rays of the cone of positive semidefinite Hermitian matrices with zeros in the entries that correspond to non-edges in the graph G are generated by rank one matrices. The question that arises is what happens if the underlying graph is not chordal; in particular, what can be said about the extreme rays of the cone of positive semidefinite matrices with some non-chordal pattern. This gives rise to the notion of the sparsity order of a graph G, that is, the maximum rank of matrices lying on extreme rays of the cone of positive semidefinite Hermitian matrices with zeros in the entries that correspond to non-edges in the graph G. We will see that those graphs having sparsity order less than or equal to 2 can be fully characterized. Moreover, one can determine in polynomial time whether a graph has sparsity order less than or equal to 2, using a clique-sum decomposition. We also show that one can determine whether a graph has sparsity order less than or equal to 2 by considering the characteristic polynomial of the adjacency matrix of certain forbidden induced subgraphs and comparing it with the characteristic polynomial of principal submatrices of appropriate size. / MSc (Mathematics), North-West University, Potchefstroom Campus, 2015
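
A small worked instance of the chordal case discussed above: for the 3×3 pattern with the (1,3) entry unspecified, the underlying graph is a path (hence chordal), so a PSD completion exists whenever the fully specified 2×2 principal submatrices are PSD. Taking the missing entry to be the product of the specified off-diagonal entries is one valid completion; the numeric values are illustrative:

```python
import numpy as np

# 3x3 partial Hermitian matrix with unit diagonal, specified entries a and c,
# and the (1,3) entry unspecified. |a| <= 1 and |c| <= 1 make every fully
# specified principal submatrix PSD, so the chordal theory guarantees a
# PSD completion.
a, c = 0.8, -0.6
x = a * c                     # one valid choice for the missing entry
M = np.array([[1.0, a, x],
              [a, 1.0, c],
              [x, c, 1.0]])
# With this choice, det(M) factors as (1 - a^2)(1 - c^2) >= 0, and the 2x2
# leading minors are 1 - a^2 and 1 - x^2, so M is positive semidefinite.
print("eigenvalues:", np.linalg.eigvalsh(M))  # all >= 0 (up to round-off)
```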
38

Sparse distance metric learning

Choy, Tze Leung January 2014 (has links)
A good distance metric can improve the accuracy of a nearest neighbour classifier. Xing et al. (2002) proposed distance metric learning to find a linear transformation of the data so that observations of different classes are better separated. For high-dimensional problems where many uninformative variables are present, it is attractive to select a sparse distance metric, both to increase predictive accuracy and to aid interpretation of the result. In this thesis, we investigate three different types of sparsity assumption for distance metric learning and show that sparse recovery is possible under each type of sparsity assumption with an appropriate choice of L1-type penalty. We show that a lasso penalty promotes learning a transformation matrix with many zero entries, a group lasso penalty recovers a transformation matrix with zero rows/columns, and a trace norm penalty allows us to learn a low rank transformation matrix. The regularization allows us to consider a large number of covariates, and we apply the technique to an expanded set of basis functions, called a rule ensemble, to allow for a more flexible fit. Finally, we illustrate an application of the metric learning problem via a document retrieval example and discuss how similarity-based information can be applied to learn a classifier.
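
Each of the three penalties compared above has a closed-form proximal operator, sketched below in numpy; the function names and the test matrix are illustrative, not from the thesis:

```python
import numpy as np

def prox_lasso(M, t):
    """Elementwise soft-thresholding: promotes many zero entries."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def prox_group_lasso_rows(M, t):
    """Row-wise shrinkage: zeroes out whole rows (drops covariates)."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return M * scale

def prox_trace_norm(M, t):
    """Singular-value soft-thresholding: yields a low-rank transformation."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

L = np.random.default_rng(3).standard_normal((5, 5))
for prox in (prox_lasso, prox_group_lasso_rows, prox_trace_norm):
    print(prox.__name__, "-> first row:", np.round(prox(L, 0.8), 2)[0].tolist())
```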
39

Optimizing Optimization: Scalable Convex Programming with Proximal Operators

Wytock, Matt 01 March 2016 (has links)
Convex optimization has developed a wide variety of useful tools critical to many applications in machine learning. However, unlike linear and quadratic programming, general convex solvers have not yet reached sufficient maturity to fully decouple the convex programming model from the numerical algorithms required for implementation. Especially as datasets grow in size, there is a significant gap in speed and scalability between general solvers and specialized algorithms. This thesis addresses this gap with a new model for convex programming based on an intermediate representation of convex problems as a sum of functions with efficient proximal operators. This representation serves two purposes: 1) many problems can be expressed in terms of functions with simple proximal operators, and 2) the proximal operator form serves as a general interface to any specialized algorithm that can incorporate additional ℓ2-regularization. On a single CPU core, numerical results demonstrate that the prox-affine form results in significantly faster algorithms than existing general solvers based on conic forms. In addition, splitting problems into separable sums is attractive from the perspective of distributing solver work amongst multiple cores and machines. We apply large-scale convex programming to several problems arising from building the next-generation, information-enabled electrical grid. In these problems (as is common in many domains) large, high-dimensional datasets present opportunities for novel data-driven solutions. We present approaches based on convex models for several problems: probabilistic forecasting of electricity generation and demand, preventing failures in microgrids and source separation for whole-home energy disaggregation.
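
A minimal instance of the sum-of-functions-with-efficient-proximal-operators idea described above: proximal gradient descent on a smooth least-squares term plus a nonnegativity indicator, whose proximal operator is a cheap projection. This is a generic sketch under assumed problem sizes, not the solver developed in the thesis:

```python
import numpy as np

# Minimize f(x) + g(x) with smooth f(x) = 0.5*||Ax - b||^2 and
# g = indicator of the nonnegative orthant; prox of g is a projection.
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)

step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad f
x = np.zeros(20)
for _ in range(300):
    grad = A.T @ (A @ x - b)             # gradient step on the smooth part
    x = np.maximum(x - step * grad, 0.0) # prox step: project onto x >= 0
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```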
