31

Large Scale Implementation Of The Block Lanczos Algorithm

Srikanth, Cherukupally 03 1900 (has links)
Large sparse matrices arise in many applications, especially in two major problems of cryptography: factoring integers and computing discrete logarithms. We focus attention on such matrices, called sieve matrices, generated after the sieving stage of integer factoring algorithms. In this context we need to solve large sparse systems of equations Bx = 0, where B is a sieve matrix. Traditional Gaussian elimination, with its cubic run time, is not efficient for handling such matrices. Better suited are the quadratic-runtime algorithms based on the Block Lanczos (BL) or Wiedemann techniques; of these two, BL is the better choice for large integer factoring. We carry out an efficient implementation of the Block Lanczos algorithm for finding vectors in the null space of the sieve matrix. We report test results of our implementation for matrices of sizes up to 10^6. We plan to use this implementation in our ongoing projects on factoring the large RSA challenge integers of 640 bits (RSA-640) and beyond, so it is useful to exploit possible parallelism. We propose a scheme for parallelizing certain steps of the Block Lanczos method, taking advantage of structural properties of the sieve matrix. Since the matrices arising in the integer factoring context are quite large, we also discuss some techniques used to reduce the size of the sieve matrix. Finally, we consider the last stage of the NFS algorithm, finding square roots of large algebraic numbers, and outline a sketch of our algorithm.
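The core computational kernel of Block Lanczos over GF(2) is the repeated multiplication of the sparse sieve matrix by a block of vectors packed into machine words. The sketch below is a minimal illustration of that kernel, not the thesis implementation; the row-list matrix layout and all names are assumptions for the example.

```python
# Minimal sketch: block sparse matrix-vector product over GF(2).
# rows[i] holds the column indices of the nonzero entries of row i of B;
# x[j] is an integer bitmask holding entry j of all vectors in the block,
# so addition over GF(2) becomes a bitwise XOR of packed words.

def block_spmv_gf2(rows, x):
    """Return y = B x over GF(2) for a block of bit-packed vectors."""
    y = []
    for cols in rows:
        acc = 0
        for j in cols:
            acc ^= x[j]          # GF(2) addition of packed vectors
        y.append(acc)
    return y

# Tiny example: a 3x3 matrix applied to a block holding e0 and e1.
B = [[0, 2], [1], [0, 1, 2]]     # nonzero column indices per row
X = [0b01, 0b10, 0b00]           # vector 0 = e0, vector 1 = e1
print(block_spmv_gf2(B, X))      # [1, 2, 3]
```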
32

Mathematical analysis of a dynamical system for sparse recovery

Balavoine, Aurele 22 May 2014 (has links)
This thesis presents the mathematical analysis of a continuous-time system for sparse signal recovery. Sparse recovery arises in Compressed Sensing (CS), where signals of large dimension must be recovered from a small number of linear measurements, and can be accomplished by solving a complex optimization program. While many solvers have been proposed and analyzed for such programs in the digital domain, their high complexity currently prevents their use in real-time applications. In contrast, a continuous-time neural network implemented in analog VLSI could lead to significant gains in both time and power consumption. The contributions of this thesis are threefold. First, convergence results are presented for neural networks that solve a large class of nonsmooth optimization programs. These results extend previous analyses by allowing the interconnection matrix to be singular and the activation function to have many constant regions and to grow unbounded. The exponential convergence of the networks is demonstrated and an analytic expression for the convergence speed is given. Second, these results are specialized to the L1-minimization problem, the best-known approach to sparse recovery. The analysis relies on standard techniques in CS and proves that the network takes an efficient path toward the solution for parameter ranges that match results obtained for digital solvers. Third, the convergence rate and accuracy of both the continuous-time system and its discrete-time equivalent are derived for the case where the underlying sparse signal is time-varying and the measurements are streaming. Such a study is of great interest for practical applications that must operate in real time, when the data stream at high rates or the computational resources are limited. In conclusion, while existing analyses concentrated on discrete-time algorithms for the recovery of static signals, this thesis provides convergence rate and accuracy results for the recovery of static signals using a continuous-time solver, and for the recovery of time-varying signals with both discrete-time and continuous-time solvers.
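The discrete-time counterpart of such a network for L1-minimization is, in its simplest form, an iterative soft-thresholding scheme. The sketch below shows ISTA for min_x 0.5*||y - Ax||^2 + lam*||x||_1; it is illustrative only, and the matrix A, measurement y, and parameters lam and step are hypothetical stand-ins, not quantities from the thesis.

```python
# Illustrative ISTA sketch (assumed setup, not the thesis code).
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrinks each entry toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / 8.0     # random CS measurement matrix
x_true = np.zeros(256); x_true[[5, 70, 200]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, lam=0.01)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # should recover the support
```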
33

Methods for solving discontinuous-Galerkin finite element equations with application to neutron transport

Murphy, Steven 26 August 2015 (has links)
We consider high order discontinuous-Galerkin finite element methods for partial differential equations, with a focus on the neutron transport equation. We begin by examining a method for preprocessing the block-sparse matrices that arise from discontinuous-Galerkin methods, prior to factorisation by a multifrontal solver. Numerical experiments on large two- and three-dimensional matrices show that this preprocessing achieves a significant reduction in fill-in compared to methods that fail to exploit block structures. A discontinuous-Galerkin finite element method for the neutron transport equation is then derived that employs high order finite elements in both space and angle. Parallel Krylov subspace based solvers are considered for both source problems and k_eff-eigenvalue problems. An a-posteriori error estimator is derived and implemented as part of an h-adaptive mesh refinement algorithm for neutron transport k_eff-eigenvalue problems. This algorithm employs a projection-based error splitting in order to balance the computational requirements between the spatial and angular parts of the computational domain. An hp-adaptive algorithm is presented, with results demonstrating greatly improved efficiency compared to the h-adaptive algorithm, both in terms of reduced computational expense and enhanced accuracy. Computed eigenvalues and effectivities are presented for a variety of challenging industrial benchmarks. Accurate error estimation (with effectivities of 1) is demonstrated for a collection of problems with inhomogeneous, irregularly shaped spatial domains as well as multiple energy groups. Numerical results show that the hp-refinement algorithm can achieve exponential convergence with respect to the number of degrees of freedom in the finite element space.
34

On the numerical solution of large-scale sparse discrete-time Riccati equations

Benner, Peter, Faßbender, Heike 04 March 2010 (has links)
The numerical solution of Stein equations (also known as discrete Lyapunov equations) is the primary step in Newton's method for the solution of discrete-time algebraic Riccati equations (DARE). Here we present a low-rank Smith method as well as a low-rank alternating direction implicit (ADI) iteration to compute low-rank approximations to solutions of the Stein equations arising in this context. Numerical results are given to verify the efficiency and accuracy of the proposed algorithms.
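For intuition, a low-rank Smith iteration for the Stein equation X - A X A^T = B B^T accumulates the terms of the series X = sum_k A^k B B^T (A^T)^k in factored form. The sketch below is a simplified illustration under the assumption that A is dense and has spectral radius below one; it is not the authors' code, which targets large sparse problems.

```python
# Simplified low-rank Smith sketch (assumes dense A, spectral radius < 1).
import numpy as np

def low_rank_smith(A, B, tol=1e-10, max_terms=200):
    blocks, V = [], B.copy()
    for _ in range(max_terms):
        blocks.append(V)
        V = A @ V                        # next term A^k B of the series
        if np.linalg.norm(V) < tol:      # series has numerically converged
            break
    return np.hstack(blocks)             # X ~ Z Z^T with Z = [B, AB, ...]

rng = np.random.default_rng(1)
n = 300
A = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # spectral radius ~ 0.5
B = rng.standard_normal((n, 2))
Z = low_rank_smith(A, B)
X = Z @ Z.T
print(Z.shape, np.linalg.norm(X - A @ X @ A.T - B @ B.T))  # tiny residual
```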
35

Bridging the Gap Between H-Matrices and Sparse Direct Methods for the Solution of Large Linear Systems

Falco, Aurélien 24 June 2019 (has links)
Many physical phenomena may be studied through modeling and numerical simulation, commonplace in scientific applications. To be tractable on a computer, appropriate discretization techniques must be considered, which often lead to a set of linear equations whose features depend on the discretization technique. Among them, the Finite Element Method usually leads to sparse linear systems, whereas the Boundary Element Method leads to dense linear systems. The size of the resulting systems depends on the domain where the studied physical phenomenon develops, and tends to grow ever larger as the performance of computing facilities increases. For the sake of numerical robustness, solution techniques based on factorizing the matrix associated with the linear system are the methods of choice when they are affordable. In that respect, hierarchical methods based on low-rank compression have allowed a drastic reduction of the computational requirements for the solution of dense linear systems over the last two decades. For sparse linear systems, their application remains a challenge that has been studied by both the hierarchical matrix community and the sparse matrix community. On the one hand, the hierarchical matrix community has mostly taken advantage of the sparsity of the problem through nested dissection. While this approach benefits from the resulting hierarchical structure, it is not as efficient as sparse solvers at exploiting zeros and structurally separating zeros from non-zeros. On the other hand, sparse factorization is organized so as to lead to a sequence of smaller dense operations, enticing sparse solvers to exploit compression techniques from hierarchical methods in order to reduce the computational cost of these elementary operations. Nonetheless, the globally hierarchical structure may be lost if the compression of hierarchical methods is used only locally on dense submatrices. We review the main techniques employed by these two communities, trying to highlight their common properties and their respective limits, with a special emphasis on studies that aim to bridge the gap between them. With these observations in mind, we propose a class of hierarchical algorithms based on the symbolic analysis of the structure of the factors of a sparse matrix. These algorithms rely on symbolic information to cluster the unknowns and to construct a hierarchical structure coherent with the non-zero pattern of the matrix. The resulting hierarchical matrix relies on low-rank compression to reduce both the memory consumption of large submatrices and the time to solution of the solver. We also compare several ordering techniques based on geometrical or topological properties. Finally, we open the discussion to a coupling between the Finite Element Method and the Boundary Element Method in a unified computational framework.
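The property these hierarchical methods exploit can be demonstrated in a few lines: off-diagonal blocks of matrices arising from smooth kernels are numerically low-rank and can be compressed by a truncated SVD. The kernel and sizes below are illustrative assumptions, not taken from the thesis.

```python
# Compressing an off-diagonal block of a smooth kernel matrix by SVD.
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n)
K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))   # smooth kernel matrix
block = K[:200, 200:]                                # off-diagonal block

U, s, Vt = np.linalg.svd(block, full_matrices=False)
rank = int(np.sum(s > 1e-8 * s[0]))                  # numerical rank
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
print(rank, np.linalg.norm(block - approx) / np.linalg.norm(block))
# rank << 200 at relative error near 1e-8: storing the two thin factors
# costs far less memory than the dense block.
```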
36

Memory and performance issues in parallel multifrontal factorizations and triangular solutions with sparse right-hand sides

Rouet, François-Henry 17 October 2012 (has links)
We consider the solution of very large sparse systems of linear equations on parallel architectures. In this context, memory is often a bottleneck that prevents or limits the use of direct solvers, especially those based on the multifrontal method. This work focuses on memory and performance issues in the two memory- and computation-intensive phases of direct methods: the numerical factorization and the solution phase. 
In the first part, we focus on the triangular solution phase with multiple sparse right-hand sides, which appear in numerous applications. We especially emphasize the computation of entries of the inverse, where both the right-hand sides and the solution are sparse. We first present several storage schemes that enable a significant compression of the solution space, in both sequential and parallel contexts. We then show that the way the right-hand sides are partitioned into blocks strongly influences performance, and we consider two different settings: the out-of-core case, where the aim is to reduce the number of accesses to the factors stored on disk, and the in-core case, where the aim is to reduce the computational cost. Finally, we show how to enhance parallel efficiency. In the second part, we consider the parallel multifrontal factorization. We show that controlling the active memory specific to the multifrontal method is critical, and that commonly used mapping techniques usually fail to do so: they cannot achieve high memory scalability, i.e., they dramatically increase the amount of memory needed by the factorization as the number of processors increases. We propose a class of "memory-aware" mapping and scheduling algorithms that aim at maximizing performance while enforcing a user-given memory constraint, and that provide robust memory estimates before the factorization. These techniques exposed performance issues in the parallel dense kernels used at each step of the factorization, and we have proposed several algorithmic improvements. The ideas presented throughout this study have been implemented within the MUMPS (MUltifrontal Massively Parallel Solver) solver and experimented on large matrices (up to a few tens of millions of unknowns) and massively parallel architectures (up to a few thousand cores). They have been shown to improve the performance and the robustness of the code, and will be available in a future release. Some of the ideas presented in the first part have also been implemented within the PDSLin (Parallel Domain decomposition Schur complement based Linear solver) solver.
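As a small illustration of the sparse right-hand-side use case, selected entries of A^{-1} can be computed by factorizing A once and then solving against canonical vectors e_j. The sketch below uses scipy's SuperLU as a stand-in for the multifrontal factorization in MUMPS; the test matrix and requested entries are hypothetical, and scipy's interface takes dense right-hand sides, so it does not exploit their sparsity as the thesis does.

```python
# Selected entries of the inverse via one factorization and sparse RHS.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1000
# A sparse test matrix: shifted 1D Laplacian.
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
lu = splu(A)                     # factorize once (the expensive phase)

wanted = [(10, 10), (500, 499), (999, 0)]   # requested entries of A^{-1}
for i, j in wanted:
    e_j = np.zeros(n); e_j[j] = 1.0         # right-hand side e_j
    x = lu.solve(e_j)                       # one forward/backward solve
    print(f"(A^-1)[{i},{j}] = {x[i]:.6e}")
# Grouping right-hand sides into blocks, as studied in the thesis,
# amortizes accesses to the factors across many such solves.
```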
37

Methods for describing chemical structures of arbitrary dimensionality with density functional theory under periodic boundary conditions

Burow, Asbjörn Manfred 28 November 2011 (has links)
This work contributes to the field of theoretical chemistry and is aimed at the development of efficient methods for computing the electron density and the ground-state energy of molecular and periodic systems. It is based on Kohn-Sham density functional theory (Kohn-Sham DFT) and local basis functions. In this scope, molecular systems and periodic systems of any dimensionality (e.g., bulk crystals, thin films, and polymers) are treated on an equal footing, using methods that are easy to implement, numerically accurate, and highly efficient. To this end, the author has extended established molecular simulation methods to periodic boundary conditions using novel techniques, and has combined these methods into a complete DFT method. Among them, the innovative approach for the RI (resolution of identity) method applied to the Coulomb term represents the key technology of this work. As a striking feature, this approach operates exclusively in real space. Although the RI method is the chief ingredient, further methods are required to achieve overall efficiency in storage and time. One of these compresses the density and Kohn-Sham matrices; moreover, the numerical integration of the exchange-correlation term has been improved by applying an adaptive numerical integration scheme. The methods presented in this thesis are combined into the prototype of an RI-DFT program, with which single-point energies at the gamma point can be calculated for closed-shell systems. 
Calculations have been performed and the results are used to assess the accuracy and efficiency achieved. This program forms the foundation of an efficient and competitive DFT code that is numerically accurate and treats molecules and periodic systems on an equal footing.
38

Ultrasonic guided wave imaging via sparse reconstruction

Levine, Ross M. 22 May 2014 (has links)
Structural health monitoring (SHM) is concerned with the continuous, long-term assessment of structural integrity. One commonly investigated SHM technique uses guided ultrasonic waves, which travel through the structure and interact with damage. Measured signals are then analyzed in software for detection, estimation, and characterization of damage. One common configuration for such a system uses a spatially distributed array of fixed piezoelectric transducers, which is inexpensive and can cover large areas. Typically, one or more sets of prerecorded baseline signals are measured when the structure is in a known state, with imaging methods operating on differences between follow-up measurements and these baselines. Presented here is a new class of spatially distributed array algorithms for SHM that rely on sparse reconstruction. For this problem, damage over a region of interest (ROI) is considered to be sparse. Two different techniques are demonstrated. The first uses an a priori assumption of scattering behavior to generate a redundant dictionary in which each column corresponds to a pixel in the ROI. The second extends this concept by using multidimensional models for each pixel, with each pixel corresponding to a "block" in the dictionary matrix; this method does not require advance knowledge of scattering behavior. The analysis and experimental results presented demonstrate the validity of the sparsity assumption. Experiments show that images generated with sparse methods are superior to those created with delay-and-sum methods, and the techniques are shown to be tolerant of propagation model mismatch. The block-sparse method described here also allows the extraction of scattering patterns, which can be used for damage characterization.
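The block-sparse formulation can be sketched as a group-lasso recovery, where each pixel owns a block of dictionary columns and blocks are shrunk as units. The code below is an illustrative sketch, not the thesis implementation; the dictionary, group layout, and parameters are assumptions for the example.

```python
# Block-sparse recovery via proximal gradient with a group prox
# (illustrative group-lasso sketch, not the thesis code).
import numpy as np

def group_soft_threshold(v, t, groups):
    # Shrinks each block of coefficients toward zero as a unit.
    out = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * v[g]
    return out

def block_ista(A, y, lam, groups, n_iter=300):
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = group_soft_threshold(x + step * A.T @ (y - A @ x),
                                 step * lam, groups)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 200)) / 9.0
groups = [range(4 * p, 4 * p + 4) for p in range(50)]  # 50 pixels, 4 dims
x_true = np.zeros(200); x_true[8:12] = rng.standard_normal(4)  # pixel 2
x_hat = block_ista(A, A @ x_true, lam=0.02, groups=groups)
print([p for p, g in enumerate(groups) if np.linalg.norm(x_hat[g]) > 0.1])
```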
39

Resequencing Techniques for Solving Large Sparse Systems

IVAN FABIO MOTA DE MENEZES 26 July 2002 (has links)
This work presents resequencing techniques for minimizing the bandwidth, profile, and wavefront of finite element meshes. A unified approach relating a finite element mesh, its associated graphs, and the corresponding matrices is proposed. The geometrical information available from conventional finite element programs is also used in order to improve the heuristic algorithms. Following these ideas, the algorithms are classified as topological, geometric, hybrid, or spectral. A Finite Element Graph (FEG) is defined as a nodal graph (G), a dual graph (G) or a communication graph (G.) associated with a generic finite element mesh. The most widely used topological algorithms, such as Reverse-Cuthill-McKee (RCM), Collins, Gibbs-Poole-Stockmeyer (GPS), Gibbs-King (GK), Snay, and Sloan, are investigated in detail. In particular, the Collins algorithm is extended to handle nonconnected components in the associated graph, and the ordering it provides is reversed to further reduce the profile of the corresponding matrices. This new version is called Modified Reverse Collins (MRCollins). A purely geometrical algorithm, called Coordinate Based Bandwidth and Profile Reduction (CBBPR), is presented. A new hybrid reordering algorithm (HybWP) for wavefront and profile reduction is proposed. 
The Laplacian matrix [L(G), L(G) or L(G.)], used for the study of spectral properties of an FEG, is constructed from the usual vertex and edge connectivities of a graph. An automatic algorithm, based on spectral properties of FEGs, is proposed to reorder the nodes and/or elements of the associated finite element meshes. The new algorithm, called Spectral FEG Resequencing (SFR), uses global information of the graph; it does not depend on the choice of a pseudoperipheral vertex; and it does not use any kind of level structure of the graph. A new spectral algorithm for finding pseudoperipheral vertices in graphs is also proposed. The algorithms presented herein are computationally implemented and tested on several numerical examples. Finally, conclusions are drawn and directions for future work are given.
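As a quick demonstration of topological resequencing, the Reverse-Cuthill-McKee algorithm investigated above is available in scipy; the sketch below scrambles a grid-like Laplacian and recovers a small bandwidth. The test matrix is an assumption for illustration, not one of the thesis examples.

```python
# Bandwidth reduction with Reverse-Cuthill-McKee (via scipy).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(M):
    coo = M.tocoo()
    return int(np.max(np.abs(coo.row - coo.col)))

n = 400                                   # a 20x20 grid-like Laplacian
G = sp.diags([4.0, -1.0, -1.0, -1.0, -1.0], [0, 1, -1, 20, -20],
             shape=(n, n), format="csr")
rng = np.random.default_rng(3)
p = rng.permutation(n)                    # scramble the natural ordering
A = G[p][:, p].tocsr()
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]
print(bandwidth(A), "->", bandwidth(B))   # large -> back near the grid value
```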
40

Improving multifrontal solvers by means of algebraic Block Low-Rank representations

Weisbecker, Clément 28 October 2013 (has links)
We consider the solution of large sparse linear systems by means of direct factorization based on a multifrontal approach. 
Although numerically robust and easy to use (they only need algebraic information: the input matrix A and a right-hand side b, although they can also exploit preprocessing strategies based on geometric information), direct factorization methods are computationally intensive in terms of both memory and operations, which limits their scope on very large problems (matrices with up to a few hundred million equations). This work focuses on exploiting low-rank approximations within multifrontal direct methods to reduce both the memory footprint and the operation count, in sequential and distributed-memory environments, on a wide class of problems. We first survey the low-rank formats that have been developed to represent dense matrices efficiently and have been widely used to design fast solvers for partial differential equations, integral equations, and eigenvalue problems. These formats are hierarchical (H-matrices and Hierarchically Semiseparable matrices are the most common) and have been shown, both theoretically and practically, to substantially decrease the memory and operation requirements of linear algebra computations. However, they impose many structural constraints which can limit their scope and efficiency, especially in the context of general-purpose multifrontal solvers. We propose a flat format called Block Low-Rank (BLR), based on a natural blocking of the matrices, and explain why it provides all the flexibility needed by a general-purpose multifrontal solver in terms of numerical pivoting for stability and of parallelism. We compare the BLR format with other formats and show that BLR compromises little of the memory and operation improvements achieved through low-rank approximations. A stability study shows that the approximations are well controlled by an explicit numerical parameter called the low-rank threshold, which is critical for solving sparse linear systems accurately. We then detail how Block Low-Rank factorizations can be efficiently implemented within multifrontal solvers, and propose several BLR factorization algorithms that allow for different types of gains. The proposed algorithms have been implemented within the MUMPS (MUltifrontal Massively Parallel Solver) solver. We first report experiments on standard problems based on partial differential equations to analyse the main features of our BLR algorithms and to show the potential and flexibility of the approach; a comparison with a Hierarchically SemiSeparable code is also given. Block Low-Rank formats are then tested on large problems (up to a hundred million unknowns) from several industrial applications. We finally illustrate the use of our approach as a preconditioner for the Conjugate Gradient method.
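In contrast with the hierarchical example given earlier, the BLR idea can be illustrated in a simplified flat form: partition a dense frontal matrix into a regular grid of blocks and compress each off-diagonal block independently, with the rank of every block set by an explicit low-rank threshold. The sketch below is a toy illustration, not the MUMPS implementation; the kernel matrix, block size, and threshold are assumptions.

```python
# Toy BLR compression: flat blocking with a per-block low-rank threshold.
import numpy as np

def blr_compress_ratio(F, block, threshold):
    n = F.shape[0]
    stored, dense = 0, n * n
    for i in range(0, n, block):
        for j in range(0, n, block):
            sub = F[i:i+block, j:j+block]
            if i == j:
                stored += sub.size            # keep diagonal blocks dense
                continue
            s = np.linalg.svd(sub, compute_uv=False)
            r = int(np.sum(s > threshold * s[0]))  # rank from threshold
            stored += r * (sub.shape[0] + sub.shape[1])
    return stored / dense                     # memory vs. dense storage

x = np.linspace(0.0, 1.0, 512)
F = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))   # smooth test "front"
print(blr_compress_ratio(F, block=64, threshold=1e-8))  # well below 1.0
```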
