1

APPROXIMATION OF MATRIX FUNCTIONS BY QUADRATURE RULES BASED ON THE LANCZOS OR ARNOLDI PROCESSES

Eshghi, Nasim 26 October 2020 (has links)
No description available.
2

A C++ Matrix library for computing the Gateaux derivative of the Fermi-Dirac operator

Samuelsson, William January 2023 (has links)
The Fermi-Dirac operator is computed through recursive polynomial expansions, using the SP2 and SP2 Acc algorithms. The Gateaux derivative is computed for both schemes by mapping the zeroth- and first-order matrices onto a block upper triangular matrix. This is implemented in Python, using NumPy arrays to store full matrices, and in C++, by first constructing a basic matrix library whose matrices serve as blocks in a subsequently created block upper triangular matrix library that directly references only two blocks of a 2 x 2 block matrix. Computations of the Fermi-Dirac operator were carried out on artificially constructed Hamiltonians to verify the correctness of the implementations, as well as on real-life examples of Fock matrices resulting from Ergo calculations on water clusters (http://ergoscf.org/). It was found that the relative error in the first-order response of the density matrix did not differ between SP2 Acc and SP2.
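As a rough, minimal sketch of the kind of recursion the abstract refers to, the following Python/NumPy function implements a plain SP2-style purification (a trace-correcting recursive polynomial expansion). The random test Hamiltonian, the occupation count and the convergence tolerance are illustrative assumptions and not taken from the thesis; the accelerated SP2 Acc variant and the Gateaux-derivative machinery are omitted.

import numpy as np

def sp2_density(H, n_occ, eps_min, eps_max, max_iter=100, tol=1e-10):
    """Approximate the zero-temperature Fermi-Dirac operator (density matrix)
    of H by an SP2-style recursive polynomial expansion."""
    n = H.shape[0]
    # Map the spectrum of H into [0, 1], reversed, so that occupied (low-energy)
    # states are pushed towards eigenvalue 1 by the recursion.
    X = (eps_max * np.eye(n) - H) / (eps_max - eps_min)
    for _ in range(max_iter):
        X2 = X @ X
        # Pick X^2 or 2X - X^2 so that trace(X) is steered towards n_occ.
        X_new = X2 if np.trace(X) > n_occ else 2 * X - X2
        if np.linalg.norm(X_new - X, 'fro') < tol:
            return X_new
        X = X_new
    return X

# Illustrative use on a small random symmetric "Hamiltonian" (an assumption).
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
H = (B + B.T) / 2
eigs = np.linalg.eigvalsh(H)
D = sp2_density(H, n_occ=4, eps_min=eigs[0], eps_max=eigs[-1])
print(np.trace(D))  # close to n_occ = 4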
3

Dense and sparse parallel linear algebra algorithms on graphics processing units

Lamas Daviña, Alejandro 13 November 2018 (has links)
One line of development followed in the field of supercomputing is the use of special-purpose processors to speed up certain types of computations. In this thesis we study the use of graphics processing units as computational accelerators and apply it to the field of linear algebra. In particular, we work with the SLEPc library to solve large-scale eigenvalue problems and to apply matrix functions in scientific applications. SLEPc is a parallel library based on the MPI standard, developed with the premise of being scalable, i.e. of allowing larger problems to be solved by increasing the number of processing units. We address the linear eigenvalue problem, Ax = lambda x in its standard form, using iterative techniques, in particular Krylov methods, with which we compute a small portion of the eigenvalue spectrum. This type of algorithm is based on generating a subspace of reduced size m onto which the problem of large dimension n is projected, with m << n. Once the problem has been projected, it is solved by direct methods, which provide approximations to the eigenvalues of the original problem. The operations used in the expansion of the subspace depend on whether the desired eigenvalues lie in the exterior or in the interior of the spectrum. When searching for exterior eigenvalues, the expansion is done by matrix-vector multiplications. We perform this operation on the GPU, either by using libraries or by creating functions that exploit the structure of the matrix. For eigenvalues in the interior of the spectrum, the expansion requires solving linear systems of equations. In this thesis we implement several algorithms, running on the GPU, for solving linear systems of equations in the specific case of matrices with a block-tridiagonal structure. In the computation of matrix functions we have to distinguish between the direct evaluation of a matrix function, f(A), and the action of a matrix function on a vector, f(A)b. The first case involves a dense computation that limits the size of the problem. The second allows us to work with large sparse matrices, and to solve it we again make use of Krylov methods. The expansion of the subspace is done by matrix-vector multiplications, and GPUs are used in the same way as when solving eigenvalue problems. In this case the projected problem starts at size m but grows by m at each restart of the method. The projected problem is solved by applying a matrix function directly. We have implemented several algorithms to compute the matrix square root and the matrix exponential, in which the use of GPUs speeds up the computation. / Lamas Daviña, A. (2018). Dense and sparse parallel linear algebra algorithms on graphics processing units [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/112425
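To make the projection idea in this abstract concrete, here is a minimal, CPU-only sketch of an unrestarted Arnoldi iteration in Python/SciPy: the subspace is expanded with one matrix-vector product per step, and the small projected problem is handed to a dense eigensolver. It illustrates the general technique only; it does not reproduce SLEPc's restarted solvers or the GPU kernels developed in the thesis, and the test matrix is an arbitrary assumption.

import numpy as np
import scipy.sparse as sp

def arnoldi_ritz(A, m, seed=0):
    """m steps of Arnoldi; the eigenvalues of the projected m x m matrix
    (Ritz values) approximate exterior eigenvalues of A."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    v = rng.standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]                 # subspace expansion: one SpMV per step
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    # Projected small problem solved by a dense direct method.
    return np.linalg.eigvals(H[:m, :m])

# Example: sparse tridiagonal matrix; the largest Ritz values approach
# the largest eigenvalues of A (which lie just below 4).
n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
ritz = np.sort(arnoldi_ritz(A, m=60).real)
print(ritz[-3:])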
4

Rational Krylov decompositions : theory and applications

Berljafa, Mario January 2017 (has links)
Numerical methods based on rational Krylov spaces have become an indispensable tool of scientific computing. In this thesis we study rational Krylov spaces by considering rational Krylov decompositions; matrix relations which, under certain conditions, are associated with these spaces. We investigate the algebraic properties of such decompositions and present an implicit Q theorem for rational Krylov spaces. We derive standard and harmonic Ritz extraction strategies for approximating the eigenpairs of a matrix and for approximating the action of a matrix function onto a vector. While these topics have been considered previously, our approach does not require the last pole to be infinite, which makes the extraction procedure computationally more efficient. Typically, the computationally most expensive component of the rational Arnoldi algorithm for computing a rational Krylov basis is the solution of a large linear system of equations at each iteration. We explore the option of solving several linear systems simultaneously, thus constructing the rational Krylov basis in parallel. If this is not done carefully, the basis being orthogonalized may become poorly conditioned, leading to numerical instabilities in the orthogonalization process. We introduce the new concept of continuation pairs, which gives rise to a near-optimal parallelization strategy that allows us to control the growth of the condition number of this non-orthogonal basis. As a consequence we obtain a more accurate and reliable parallel rational Arnoldi algorithm. The computational benefits are illustrated using our high performance C++ implementation. We develop an iterative algorithm for solving nonlinear rational least squares problems. The difficulty is in finding the poles of a rational function. For this purpose, at each iteration a rational Krylov decomposition is constructed and a modified linear problem is solved in order to relocate the poles to new ones. Our numerical results indicate that the algorithm, called RKFIT, is well suited for model order reduction of linear time-invariant dynamical systems and for optimisation problems related to exponential integration. Furthermore, we derive a strategy for the degree reduction of the approximant obtained by RKFIT. The rational function obtained by RKFIT is represented with the aid of a scalar rational Krylov decomposition and an additional coefficient vector. A function represented in this form is called an RKFUN. We develop efficient methods for the evaluation, pole and root finding, and for performing basic arithmetic operations with RKFUNs. Lastly, we discuss RKToolbox, a rational Krylov toolbox for MATLAB, which implements all our algorithms and is freely available from http://rktoolbox.org. RKToolbox also features an extensive guide and a growing number of examples. In particular, most of our numerical experiments are easily reproducible by downloading the toolbox and running the corresponding example files in MATLAB.
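For orientation only, the following Python/NumPy sketch builds an orthonormal basis of a rational Krylov space: one shifted linear solve per pole followed by Gram-Schmidt orthogonalization, which is the per-iteration cost the abstract refers to. The matrix, starting vector and poles are arbitrary assumptions, and the sketch does not use or mimic the RKToolbox API or the parallel continuation-pair strategy developed in the thesis.

import numpy as np

def rational_krylov_basis(A, b, poles):
    """Orthonormal basis of span{b, (A - xi_1 I)^{-1} b, (A - xi_2 I)^{-1} v_1, ...}.
    Each step requires one shifted linear solve, the dominant cost in practice."""
    n = A.shape[0]
    I = np.eye(n)
    V = np.empty((n, len(poles) + 1))
    V[:, 0] = b / np.linalg.norm(b)
    for j, xi in enumerate(poles):
        w = np.linalg.solve(A - xi * I, V[:, j])   # shifted system (A - xi*I) w = v_j
        for i in range(j + 1):                     # Gram-Schmidt against earlier vectors
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

# Small example: an SPD matrix and a few negative real poles (assumptions).
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
V = rational_krylov_basis(A, b, poles=[-1.0, -10.0, -100.0])
print(np.allclose(V.T @ V, np.eye(4)))  # True: the basis is orthonormal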
5

Randomized Diagonal Estimation / Randomiserad Diagonalestimering

Popp, Niclas Joshua January 2023 (has links)
Implicit diagonal estimation is a long-standing problem concerned with approximating the diagonal of a matrix that can only be accessed through matrix-vector products. It is of interest in various fields of application, such as network science, materials science and machine learning. This thesis provides a comprehensive review of randomized algorithms for implicit diagonal estimation and introduces various enhancements as well as extensions to matrix functions. Three novel diagonal estimators are presented. The first method employs low-rank Nyström approximations. The second approach is based on shifts, forming a generalization of current deflation-based techniques. Additionally, we introduce a method for adaptively determining the number of test vectors, thereby removing the need for prior knowledge about the matrix. Moreover, the median-of-means principle is incorporated into diagonal estimation. Beyond that, we combine diagonal estimation methods with approaches for approximating the action of matrix functions using polynomial approximations and Krylov subspaces. This enables us to present implicit methods for estimating the diagonal of matrix functions. We provide first-of-their-kind theoretical results for the convergence of these estimators. Subsequently, we present a deflation-based diagonal estimator for monotone functions of normal matrices with improved convergence properties. To validate the effectiveness and practical applicability of our methods, we conduct numerical experiments in real-world scenarios, including estimating subgraph centralities in a protein interaction network, approximating uncertainty in ordinary least squares, and randomized Jacobi preconditioning.
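As background for the estimators discussed above, a minimal Python/NumPy sketch of the classical matrix-free baseline (a Bekas-Kurbel-Saad / Hutchinson-type estimator with Rademacher test vectors) is given below; it is not one of the new estimators introduced in the thesis, and the test matrix and sample size are illustrative assumptions.

import numpy as np

def randomized_diag(matvec, n, num_vectors=200, seed=0):
    """Estimate diag(A) using only products v -> A v with random Rademacher
    test vectors (elementwise ratio sum(v * Av) / sum(v * v))."""
    rng = np.random.default_rng(seed)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(num_vectors):
        v = rng.choice([-1.0, 1.0], size=n)
        num += v * matvec(v)    # elementwise v * (A v)
        den += v * v            # equals num_vectors for Rademacher vectors
    return num / den

# Example with an explicit matrix; in the implicit setting only matvec is available.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 500))
est = randomized_diag(lambda v: A @ v, n=500, num_vectors=500)
print(np.max(np.abs(est - np.diag(A))))  # error decreases as num_vectors grows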
6

Krylov subspace methods for approximating functions of symmetric positive definite matrices with applications to applied statistics and anomalous diffusion

Simpson, Daniel Peter January 2008 (has links)
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^{-α/2}b, where A ∈ R^{n×n} is a large, sparse symmetric positive definite matrix and b ∈ R^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^{-T}z, with x = A^{-1/2}z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form u = A^{-α/2}b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^{-α/2} and A is sparse. In this thesis we compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new and novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^{-α/2}b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
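A bare-bones Python/SciPy sketch of the (unrestarted) Lanczos approximation f(A)b ≈ ||b|| V_m f(T_m) e_1 discussed in the abstract is given below, with f(t) = t^{-1/2} as in the GMRF sampling application. The SPD test matrix is an assumption, and reorthogonalization, restarting and stopping criteria (central topics of the thesis) are deliberately omitted.

import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_fAb(A, b, m, f):
    """Approximate f(A) b as ||b|| * V_m f(T_m) e_1 using m Lanczos steps
    (A symmetric positive definite, no reorthogonalization)."""
    n = A.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    # Apply f to the small tridiagonal T_m via its eigendecomposition.
    theta, S = eigh_tridiagonal(alpha, beta)
    fT_e1 = S @ (f(theta) * S[0, :])
    return np.linalg.norm(b) * (V @ fT_e1)

# Example: approximate A^{-1/2} b for a small SPD "precision" matrix (an assumption).
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)
b = rng.standard_normal(200)
x = lanczos_fAb(A, b, m=30, f=lambda t: t ** -0.5)
w, Q = np.linalg.eigh(A)
ref = Q @ (w ** -0.5 * (Q.T @ b))       # dense reference for A^{-1/2} b
print(np.linalg.norm(x - ref) / np.linalg.norm(ref))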
7

Transformation and approximation of rational Krylov spaces with an application to 2.5-dimensional direct current resistivity modeling

Stein, Saskia 17 April 2021 (has links)
This thesis investigates to what extent existing methods for the approximation of rational Krylov spaces are suitable for computing matrix functions. As a model problem, a 2.5D formulation of a direct current geoelectrics (DC resistivity) problem is formulated with finite elements and then solved by means of matrix functions on rational Krylov subspaces. A further part is devoted to the comparison of two methods for transforming existing rational Krylov spaces; in both variants the underlying poles are exchanged without any explicit matrix inversion being required. The foundations, previously spread over several publications, are collected here in a unified presentation and missing connections are filled in. Both methods are in principle suitable for approximating rational Krylov spaces, which is demonstrated by several examples, and the model problem is used to illustrate the limitations of the methods.
