241 |
Verarbeitung von Sparse-Matrizen in Kompaktspeicherform KLZ/KZU. Meyer, A., Pester, M. 30 October 1998 (has links) (PDF)
The paper describes a storage scheme for sparse symmetric or nonsymmetric matrices which has been developed and used for many years at the Technical University of Chemnitz. An overview of existing library subroutines using such matrices is included.
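As a rough illustration of the compressed-row style of compact storage such schemes build on (the KLZ/KZU formats themselves are not detailed in the abstract, so the layout below is only a generic sketch):

```python
# Generic compressed-row storage sketch: keep only the nonzero values together
# with their column indices and per-row offsets, instead of a full dense array.
# This is an illustrative stand-in, not the actual KLZ/KZU layout.
import numpy as np

def to_compressed_rows(dense):
    """Return (values, col_index, row_ptr) describing the nonzeros of a matrix."""
    values, col_index, row_ptr = [], [], [0]
    for row in dense:
        for j, a in enumerate(row):
            if a != 0.0:
                values.append(a)
                col_index.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_index), np.array(row_ptr)

def matvec(values, col_index, row_ptr, x):
    """Matrix-vector product computed directly from the compressed arrays."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_index[k]]
    return y

A = np.array([[ 4.0, 0.0, -1.0],
              [ 0.0, 2.0,  0.0],
              [-1.0, 0.0,  3.0]])
vals, cols, ptr = to_compressed_rows(A)
print(matvec(vals, cols, ptr, np.array([1.0, 1.0, 1.0])))  # same as A @ [1, 1, 1]
```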
|
242 |
Sparse Merkle Trees: Definitions and Space-Time Trade-Offs with Applications for Balloon. Östersjö, Rasmus January 2016 (has links)
This dissertation proposes an efficient representation of a sparse Merkle tree (SMT): an authenticated data structure that supports logarithmic insertion, removal, and look-up in a verifiable manner. The proposal is general in the sense that it can be implemented using a variety of underlying non-authenticated data structures, and it allows trading time for space through an abstract model of caching strategies. Both theoretical evaluations and performance results from a proof-of-concept implementation are provided, and the proposed SMT is applied to another authenticated data structure referred to as Balloon. The resulting Balloon preserves its expected-case efficiency and improves on worst-case behaviour.
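A minimal sketch of the sparse-Merkle-tree idea, assuming a fixed-depth binary tree in which empty subtrees hash to precomputed defaults; it illustrates why only the non-empty leaves need to be stored, and is not the representation or caching model proposed in the thesis:

```python
# Hypothetical sketch: a fixed-depth binary Merkle tree over all 2**DEPTH keys,
# where empty subtrees hash to precomputed defaults so only the non-empty
# leaves are stored explicitly.
import hashlib

DEPTH = 16  # keys are DEPTH-bit integers

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# default[d] = hash of an empty subtree of height d
default = [H(b"")]
for _ in range(DEPTH):
    default.append(H(default[-1] + default[-1]))

def root(leaves, height=DEPTH, prefix=0):
    """Hash of the subtree of given height whose keys start with `prefix`."""
    keys = {k for k in leaves if k >> height == prefix}
    if not keys:
        return default[height]
    if height == 0:
        return H(leaves[prefix])
    left = root(leaves, height - 1, prefix << 1)
    right = root(leaves, height - 1, (prefix << 1) | 1)
    return H(left + right)

leaves = {3: b"value-a", 40000: b"value-b"}   # sparse: only 2 of 65536 leaves set
r1 = root(leaves)
leaves[3] = b"value-c"                        # update is a fresh root computation here;
r2 = root(leaves)                             # cached interior nodes would make it O(log n)
print(r1.hex() != r2.hex())                   # True: the root authenticates the stored content
```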
|
243 |
Methods for solving discontinuous-Galerkin finite element equations with application to neutron transport. Murphy, Steven 26 August 2015 (has links) (PDF)
We consider high-order discontinuous-Galerkin finite element methods for partial differential equations, with a focus on the neutron transport equation. We begin by examining a method for preprocessing block-sparse matrices, of the type that arise from discontinuous-Galerkin methods, prior to factorisation by a multifrontal solver. Numerical experiments on large two- and three-dimensional matrices show that this preprocessing method achieves a significant reduction in fill-in, when compared to methods that fail to exploit block structures. A discontinuous-Galerkin finite element method for the neutron transport equation is derived that employs high-order finite elements in both space and angle. Parallel Krylov-subspace solvers are considered for both source problems and $k_{eff}$-eigenvalue problems. An a posteriori error estimator is derived and implemented as part of an h-adaptive mesh refinement algorithm for neutron transport $k_{eff}$-eigenvalue problems. This algorithm employs a projection-based error splitting in order to balance the computational requirements between the spatial and angular parts of the computational domain. An hp-adaptive algorithm is presented and results are collected that demonstrate greatly improved efficiency compared to the h-adaptive algorithm, both in terms of reduced computational expense and enhanced accuracy. Computed eigenvalues and effectivities are presented for a variety of challenging industrial benchmarks. Accurate error estimation (with effectivities of 1) is demonstrated for a collection of problems with inhomogeneous, irregularly shaped spatial domains as well as multiple energy groups. Numerical results are presented showing that the hp-refinement algorithm can achieve exponential convergence with respect to the number of degrees of freedom in the finite element space.
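A hedged sketch of the block-aware idea of ordering whole blocks rather than individual rows before factorisation; the uniform block size and the reverse Cuthill-McKee ordering used here are illustrative assumptions, not the preprocessing method developed in the thesis:

```python
# Condense a block-sparse matrix to its block connectivity pattern, order the
# blocks, and expand back to a row permutation so that rows belonging to one
# block stay contiguous. Illustrative only; a multifrontal solver would apply
# its own fill-reducing ordering on top of such a block grouping.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def block_ordering(A, block_size):
    """Row permutation of A obtained by permuting equally sized blocks."""
    n_blocks = A.shape[0] // block_size
    pattern = (np.abs(A).reshape(n_blocks, block_size, n_blocks, block_size)
                        .sum(axis=(1, 3)) > 0)
    B = sp.csr_matrix(pattern.astype(np.int8))      # block connectivity graph
    block_perm = reverse_cuthill_mckee(B, symmetric_mode=True)
    return np.concatenate([np.arange(block_size) + b * block_size
                           for b in block_perm])

A = sp.random(120, 120, density=0.05, random_state=0).toarray()
A = A + A.T                                          # symmetric sparsity pattern
perm = block_ordering(A, block_size=4)
print(perm[:8])                                      # leading rows of the reordered matrix
```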
|
244 |
A Systematic Approach for Obtaining Performance on Matrix-Like Operations. Veras, Richard Michael 01 August 2017 (has links)
Scientific Computation plays a critical role in the scientific process because it allows us to ask complex queries and test predictions that would otherwise be infeasible to perform experimentally. Because of its power, Scientific Computing has helped drive advances in many fields, ranging from Engineering and Physics to Biology and Sociology to Economics and Drug Development, and even to Machine Learning and Artificial Intelligence. Common among these domains is the desire for timely computational results, so a considerable amount of human expert effort is spent on obtaining performance for these scientific codes. However, this is no easy task, because each of these domains presents its own unique set of challenges to software developers, such as domain-specific operations, structurally complex data and ever-growing datasets. Compounding these problems are the myriad constantly changing, complex and unique hardware platforms that an expert must target. Unfortunately, an expert is typically forced to reproduce their effort across multiple problem domains and hardware platforms. In this thesis, we demonstrate the automatic generation of expert-level high-performance scientific codes for Dense Linear Algebra (DLA), Structured Mesh (Stencil), Sparse Linear Algebra and Graph Analytics. In particular, this thesis seeks to address the issue of obtaining performance on many complex platforms for a class of matrix-like operations that span many scientific, engineering and social fields. We do this by automating a method used for obtaining high performance in DLA and extending it to structured, sparse and scale-free domains. We argue that it is the use of the underlying structure found in the data from these domains that enables this process. Thus, obtaining performance for most operations does not occur in isolation from the data being operated on, but instead depends significantly on the structure of the data.
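As a hand-written stand-in for the kind of structure-exploiting kernel such a generator would emit for the DLA case (the thesis automates this derivation; the blocking factor below is an arbitrary illustrative choice):

```python
# Cache-blocked matrix multiply: the loop structure keeps working sets small,
# which is the sort of transformation an expert (or a code generator) applies
# to obtain performance. Python stand-in for illustration only.
import numpy as np

def blocked_matmul(A, B, nb=64):
    """C = A @ B computed block by block so that working sets fit in cache."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for i in range(0, m, nb):
        for j in range(0, n, nb):
            for p in range(0, k, nb):
                C[i:i+nb, j:j+nb] += A[i:i+nb, p:p+nb] @ B[p:p+nb, j:j+nb]
    return C

A = np.random.rand(200, 150)
B = np.random.rand(150, 100)
print(np.allclose(blocked_matmul(A, B), A @ B))  # True
```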
|
245 |
Compartimentation et transfert de contaminants dans les milieux souterrains : interaction entre transport physique, réactivité chimique et activité biologique / Compartimentalization and contaminant transfer in underground media : interaction between transport processes, chemical reactivity and biological activity. Babey, Tristan 08 December 2016 (has links)
Modelling of contaminant transfer in the subsurface classically relies on a detailed representation of transport processes (groundwater flow controlled by geological structures) coupled to chemical and biological reactivity (immobilization, degradation). Calibration of such detailed models is, however, often limited by the small amount of available data on subsurface structures and characteristics. In this thesis, we develop an alternative approach of parsimonious models based on simple graphs of interconnected compartments, taken as generalized multiple interacting continua (MINC) and multiple rate mass transfer (MRMT) models. We show that this approach is well suited to systems where diffusion-like processes dominate over advection, for instance in soils or in highly heterogeneous aquifers such as fractured aquifers. Homogenization induced by diffusion reduces concentration gradients, speeds up mixing between chemical species and makes residence time distributions excellent proxies for reactivity. Indeed, simplified structures calibrated solely from transit time information prove to provide consistent estimations of non-linear reactivity (e.g. sorption and precipitation/dissolution). Finally, we show how these models can be calibrated automatically against tracer-test observations and biodegradation reactions. Two important advantages of these parsimonious approaches are their simplicity of development and application. They help identify the major controls on exchanges between advective and diffusive zones or between inert and reactive zones, and they make it possible to extrapolate reactive processes to larger scales. The use of isotopic fractionation data is proposed to help discriminate between structure-induced effects and reactivity.
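A minimal sketch of a multi-rate mass transfer (MRMT)-type compartment model of the kind generalized in the thesis: one mobile zone exchanging solute with several immobile compartments at first-order rates. The parameter values are purely illustrative, not calibrated to any data from this work:

```python
# MRMT-style compartment model: a mobile zone flushed by advection exchanges
# solute with immobile compartments at different first-order rates, producing
# the characteristic late-time tailing of breakthrough curves.
import numpy as np
from scipy.integrate import solve_ivp

alphas = np.array([0.5, 0.05, 0.005])   # exchange rates (1/day) of immobile zones
betas = np.array([0.3, 0.3, 0.4])       # capacity ratios (immobile/mobile volume)
flush = 0.2                             # flushing rate of the mobile zone (1/day)

def rhs(t, c):
    cm, cim = c[0], c[1:]
    exchange = alphas * (cm - cim)                   # mobile <-> immobile transfer
    dcm = -flush * cm - np.sum(betas * exchange)     # advective flushing + exchange
    dcim = exchange                                  # immobile zones only exchange
    return np.concatenate(([dcm], dcim))

c0 = np.concatenate(([1.0], np.zeros(3)))            # pulse injected in the mobile zone
sol = solve_ivp(rhs, (0.0, 200.0), c0, t_eval=np.linspace(0, 200, 5))
print(sol.y[0])   # slow late-time tailing caused by the slowest immobile zone
```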
|
246 |
Model-based clustering based on sparse finite Gaussian mixtures. Malsiner-Walli, Gertraud, Frühwirth-Schnatter, Sylvia, Grün, Bettina January 2016 (has links) (PDF)
In the framework of Bayesian model-based clustering based on a finite mixture of Gaussian distributions, we present a joint approach to estimate the number of mixture components and identify cluster-relevant variables simultaneously as well as to obtain an identified model. Our approach consists in specifying sparse hierarchical priors on the mixture weights and component means. In a deliberately overfitting mixture model the sparse prior on the weights empties superfluous components during MCMC. A straightforward estimator for the true number of components is given by the most frequent number of non-empty components visited during MCMC sampling. Specifying a shrinkage prior, namely the normal gamma prior, on the component means leads to improved parameter estimates as well as identification of cluster-relevant variables. After estimating the mixture model using MCMC methods based on data augmentation and Gibbs sampling, an identified model is obtained by relabeling the MCMC output in the point process representation of the draws. This is performed using K-centroids cluster analysis based on the Mahalanobis distance. We evaluate our proposed strategy in a simulation setup with artificial data and by applying it to benchmark data sets. (authors' abstract)
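A small sketch of the component-number estimator described above, run on simulated allocation vectors standing in for real MCMC output:

```python
# The number of mixture components is estimated as the most frequent number of
# *non-empty* components across the retained MCMC draws. The draws below are
# simulated stand-ins, not output of the paper's sampler.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
K, n, n_draws = 10, 200, 1000   # deliberately overfitting mixture with K potential components

# Stand-in allocation vectors: most draws occupy 3 components, occasionally a
# 4th component temporarily picks up observations.
draws = []
for _ in range(n_draws):
    k_active = 3 if rng.random() < 0.9 else 4
    draws.append(rng.integers(0, k_active, size=n))

def estimate_k(allocation_draws):
    counts = Counter(len(np.unique(z)) for z in allocation_draws)
    return counts.most_common(1)[0][0]

print(estimate_k(draws))   # 3, the most frequently visited number of non-empty components
```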
|
247 |
Sparse Representations and Nonlinear Image Processing for Inverse Imaging Solutions. Ram, Sundaresh January 2017 (has links)
This work applies sparse representations and nonlinear image processing to two inverse imaging problems. The first problem involves image restoration, where the aim is to reconstruct an unknown high-quality image from a low-quality observed image. Sparse representations of images have drawn a considerable amount of interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. The standard sparse representation, however, does not consider the intrinsic geometric structure present in the data, thereby leading to sub-optimal results. Using the concept that a signal is block sparse in a given basis, i.e., that the non-zero elements occur in clusters of varying sizes, we present a novel and efficient algorithm for learning a sparse representation of natural images, called graph regularized block sparse dictionary (GRBSD) learning. We apply the proposed method to two image restoration applications: 1) single-image super-resolution, where we propose a local regression model that uses dictionaries learned with the GRBSD algorithm to super-resolve a low-resolution image without any external training images, and 2) image inpainting, where we use the GRBSD algorithm to learn a multiscale dictionary that generates visually plausible pixels to fill missing regions in an image. Experimental results validate the performance of the GRBSD learning algorithm for single-image super-resolution and image inpainting applications. The second problem addressed in this work involves image enhancement for the detection and segmentation of objects in images. We exploit the concept that even though data from various imaging modalities have high dimensionality, the data are sufficiently well described by low-dimensional geometrical structures. To facilitate the extraction of objects having such structure, we have developed general structure enhancement methods that can be used to detect and segment various curvilinear structures in images across different applications. We use the proposed methods to detect and segment objects of different size and shape in three applications: 1) segmentation of the lamina cribrosa microstructure in the eye from second-harmonic generation microscopy images, 2) detection and segmentation of primary cilia in confocal microscopy images, and 3) detection and segmentation of vehicles in wide-area aerial imagery. Quantitative and qualitative results show that the proposed methods provide improved detection and segmentation accuracy and computational efficiency compared to other recent algorithms.
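A hedged illustration of the block-sparsity notion that GRBSD builds on (not the GRBSD algorithm itself): a group soft-thresholding step that zeroes out whole blocks of coefficients at once rather than individual entries:

```python
# Proximal operator of the group-lasso penalty: each block of coefficients is
# shrunk toward zero as a unit, so blocks with small energy vanish entirely.
# This only illustrates block sparsity, not the dictionary learning itself.
import numpy as np

def block_soft_threshold(x, blocks, tau):
    """Shrink each block of coefficients; blocks with small norm are zeroed out."""
    out = np.zeros_like(x)
    for idx in blocks:
        norm = np.linalg.norm(x[idx])
        if norm > tau:
            out[idx] = (1.0 - tau / norm) * x[idx]
    return out

x = np.array([0.1, -0.2, 0.05, 3.0, 2.5, -2.8, 0.3, 0.1])
blocks = [slice(0, 3), slice(3, 6), slice(6, 8)]   # coefficients occur in clusters
print(block_soft_threshold(x, blocks, tau=1.0))    # only the middle block survives
```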
|
248 |
A Fast MLP-based Learning Method and its Application to Mine Countermeasure Missions. Shao, Hang January 2012 (has links)
In this research, a novel machine learning method is designed and applied to Mine Countermeasure Missions. Like some kernel methods, the proposed approach seeks to compute a linear model in a higher-dimensional feature space. However, no kernel is used and the feature mapping is explicit: computation can be done directly in the accessible feature space. In the proposed approach, the feature projection is implemented by constructing a large hidden layer, which departs from the traditional view that a Multi-Layer Perceptron is funnel-shaped, with the hidden layer acting as a feature extractor.
The proposed approach is a general method that can be applied to various problems. It improves on the performance of neural-network-based methods and on the learning speed of support vector machines. The classification speed of the proposed approach is also faster than that of kernel machines on the mine countermeasure mission task.
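A sketch of the explicit feature-mapping idea, assuming a wide random hidden layer followed by a closed-form ridge solve for the linear readout; this is in the spirit of the approach rather than its exact training rule:

```python
# Explicit feature map via a large random hidden layer, then a linear model
# fitted on those features in closed form (no kernel trick). Illustrative
# assumption: random untrained hidden weights and a ridge readout.
import numpy as np

rng = np.random.default_rng(1)
n, d, hidden = 300, 5, 500

X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=n))   # nonlinear toy labels

W = rng.normal(size=(d, hidden))            # random, untrained hidden weights
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)                      # explicit high-dimensional feature map

lam = 1e-2
beta = np.linalg.solve(H.T @ H + lam * np.eye(hidden), H.T @ y)   # ridge output weights

pred = np.sign(H @ beta)
print((pred == y).mean())                   # training accuracy of the linear readout
```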
|
249 |
Contribution to dimension reduction techniques : application to object tracking / Contribution aux techniques de la réduction de dimension : application au suivi d'objet. Lu, Weizhi 16 July 2014 (links)
This thesis studies three popular dimension reduction techniques, namely compressed sensing, random projection and sparse representation, and brings significant improvements to each of them. In compressed sensing, constructing a sensing matrix with both good performance and a hardware-friendly structure has been a significant challenge. In this thesis, we explicitly propose the optimal zero-one binary matrix by searching for the best Restricted Isometry Property. In practice, an efficient greedy algorithm is developed to construct the optimal binary matrix of arbitrary size. Moreover, we also study another interesting problem for compressed sensing, namely the performance of sensing matrices at high compression rates. For the first time, the performance floor of random Bernoulli matrices over increasing compression rates is observed and effectively estimated. Random projection is mainly used for classification, where the construction of the random projection matrix is also critical in terms of both performance and complexity. This thesis presents the sparsest random projection matrix proposed so far, which is proved to achieve better feature selection performance than denser random matrices. The theoretical result is confirmed by extensive experiments.
As a novel technique for feature or sample selection, sparse representation has recently been widely applied in the area of image processing. In this thesis, we mainly focus on its applications to visual object tracking. To reduce the computational load related to sparse representation, a simple but efficient scheme is proposed for tracking a single object. Subsequently, the potential of sparse representation for multi-object tracking is investigated.
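A hedged sketch of a very sparse random projection in the spirit of Achlioptas/Li-type constructions (not the specific optimal matrices proposed in the thesis), showing how pairwise distances are approximately preserved after projection:

```python
# Very sparse random projection: most entries of the projection matrix are
# zero and the rest are +/-1 (up to scaling), so projecting data needs only
# additions and subtractions. Illustrative construction, not the thesis' own.
import numpy as np

rng = np.random.default_rng(2)

def sparse_projection_matrix(d, k, s):
    """d x k matrix with entries +sqrt(s), 0, -sqrt(s) w.p. 1/(2s), 1-1/s, 1/(2s)."""
    signs = rng.choice([1.0, 0.0, -1.0], size=(d, k),
                       p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return np.sqrt(s) * signs

d, k, s = 1000, 50, 100          # s controls sparsity: about 1% of entries are nonzero
R = sparse_projection_matrix(d, k, s)
X = rng.normal(size=(20, d))

Y = X @ R / np.sqrt(k)           # reduced 50-dimensional representation
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(orig, proj)                # pairwise distances are approximately preserved
```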
|
250 |
Numerical linear algebra problems in structural analysis. Kannan, Ramaseshan January 2014 (has links)
A range of numerical linear algebra problems that arise in finite element-based structural analysis are considered. These problems were encountered when implementing the finite element method in the software package Oasys GSA. We present novel solutions to these problems in the form of a new method for error detection, algorithms with superior numerical efficiency and algorithms with scalable performance on parallel computers. The solutions and their corresponding software implementations have been integrated into GSA's program code, and we present results that demonstrate the use of these implementations by engineers to solve real-world structural analysis problems.
|