1

Fast, Sparse Matrix Factorization and Matrix Algebra via Random Sampling for Integral Equation Formulations in Electromagnetics

Wilkerson, Owen Tanner 01 January 2019 (has links)
Many systems designed by electrical & computer engineers rely on electromagnetic (EM) signals to transmit, receive, and extract either information or energy. In many cases, these systems are large and complex. Their accurate, cost-effective design requires high-fidelity computer modeling of the underlying EM field/material interaction problem in order to find a design with acceptable system performance. This modeling is accomplished by projecting the governing Maxwell equations onto finite-dimensional subspaces, which results in a large matrix equation representation (Zx = b) of the EM problem. In the case of integral equation-based formulations of EM problems, the M-by-N system matrix, Z, is generally dense. For this reason, when treating large problems, it is necessary to use compression methods to store and manipulate Z. One such sparse representation is provided by so-called H^2 matrices. At low-to-moderate frequencies, H^2 matrices provide a controllably accurate data-sparse representation of Z. The scale at which problems in EM are considered "large" is continuously being redefined upward. This growth of problem scale is happening not only in EM but across all other sub-fields of computational science as well. The pursuit of increasingly large problems is unwavering in all these sub-fields, and it has long outpaced the rate of advancement in processing and storage capabilities in computing. As a result, computational science communities now face the limitations of the standard linear algebraic methods that have been relied upon for decades to run quickly and efficiently on modern computing hardware. This common set of algorithms can produce reliable results quickly and efficiently only for small to mid-sized matrices that fit into the memory of the host computer. The drive to pursue larger problems has therefore begun to outpace the reasonable capabilities of these common numerical algorithms; the deterministic numerical linear algebra algorithms that have carried matrix computation this far have proven inadequate for many problems of current interest. This has computational science communities focusing on improvements in their mathematical and software approaches in order to push further advancement. Randomized numerical linear algebra (RandNLA) is an emerging area that both academia and industry believe to be a strong candidate for overcoming the limitations faced when solving massive and computationally expensive problems. This thesis presents results of recent work that uses a random sampling method (RSM) to implement algebraic operations involving multiple H^2 matrices. Significantly, this work is done in a manner that is non-invasive to an existing H^2 code base for filling and factoring H^2 matrices. The work presented thus expands the existing code's capabilities with minimal impact on existing (and well-tested) applications. In addition to this work with randomized H^2 algebra, improvements in sparse factorization methods for the compressed H^2 data structure are also presented. The reported developments in filling and factoring H^2 data structures assist in, and allow for, the further pursuit of large and complex problems in computational EM (CEM) within simulation code bases that utilize the H^2 data structure.
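The abstract stays at a high level, so purely as an illustration: the core RandNLA building block it alludes to (approximating a matrix block from a few random matrix-vector samples) can be sketched as follows. This is a generic Gaussian randomized range finder in the spirit of Halko, Martinsson, and Tropp, not the thesis's H^2-specific sampling scheme; the function names, the kernel used as a stand-in for a far-field block of Z, and the target rank are illustrative assumptions.

```python
import numpy as np

def randomized_low_rank(matvec, matvec_T, n_rows, n_cols, k, oversample=10, rng=None):
    """Approximate a dense block A, accessed only via products A @ X and A.T @ Y,
    by a rank-(k + oversample) factorization A ~ Q @ B using Gaussian random sampling."""
    rng = np.random.default_rng(rng)
    # Probe the range of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((n_cols, k + oversample))
    Y = matvec(Omega)                  # Y = A @ Omega, shape (n_rows, k + oversample)
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the sampled range
    B = matvec_T(Q).T                  # B = Q.T @ A, shape (k + oversample, n_cols)
    return Q, B

# Usage sketch: compress a smooth, numerically low-rank interaction block
# (a far-field-like kernel on well-separated point sets).
n = 400
x, y = np.linspace(0, 1, n), np.linspace(2, 3, n)
A = 1.0 / np.abs(x[:, None] - y[None, :])
Q, B = randomized_low_rank(lambda X: A @ X, lambda Y: A.T @ Y, n, n, k=20)
print(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))   # small relative error expected
```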
2

Randomized Algorithms for Preconditioner Selection with Applications to Kernel Regression

DiPaolo, Conner 01 January 2019 (has links)
The task of choosing a preconditioner M to use when solving a linear system Ax = b with iterative methods is often tedious, and most approaches remain ad hoc. This thesis presents a randomized algorithm that makes this chore less painful through the use of randomized trace estimation. In particular, we show that the preconditioner stability ||I - M^{-1}A||_F, known to forecast preconditioner quality, can be computed via sketching methods in the time it takes to run a constant number of iterations of conjugate gradients. This is in spite of folklore suggesting the quantity is impractical to compute, and of a proof we give that the quantity cannot be approximated in a useful amount of time by any deterministic algorithm. Using our estimator, we provide a method that can provably select a quality preconditioner among n candidates using floating-point operations commensurate with running about n log(n) steps of the conjugate gradients algorithm. In the absence of such a preconditioner among the candidates, our method can advise the practitioner to use no preconditioner at all. The algorithm is extremely easy to implement and trivially parallelizable, and along the way we provide theoretical improvements to the literature on trace estimation. In empirical experiments, we show that the selection method can be quite helpful. For example, it allows us to create, to the best of our knowledge, the first preconditioning method for kernel regression that never uses more iterations than the non-preconditioned analog in standard settings.
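Purely as an illustrative sketch of the kind of estimator the abstract describes: the squared stability ||I - M^{-1}A||_F^2 equals the trace of (I - M^{-1}A)^T (I - M^{-1}A), so it can be estimated Hutchinson-style using only products with A and solves with M. The code below is a plain version of that idea under assumed interfaces (A_matvec, M_solve); it is not the thesis's exact algorithm or its candidate-selection procedure, and the Jacobi preconditioner in the usage example is just a stand-in.

```python
import numpy as np

def estimate_precond_stability(A_matvec, M_solve, n, n_samples=30, rng=None):
    """Hutchinson-style estimate of ||I - M^{-1} A||_F^2 using only matrix-vector
    products with A and solves with M. Relies on
    E[ || (I - M^{-1}A) z ||^2 ] = ||I - M^{-1}A||_F^2 for random z with E[z z^T] = I."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)     # Rademacher probe vector
        w = z - M_solve(A_matvec(z))            # w = (I - M^{-1} A) z
        total += w @ w                          # one sample of the squared norm
    return total / n_samples

# Usage sketch: compare a Jacobi (diagonal) preconditioner against no preconditioner.
n = 500
rng = np.random.default_rng(0)
A = np.diag(np.linspace(1, 100, n)) + 0.1 * rng.standard_normal((n, n))
A = (A + A.T) / 2                               # symmetric test matrix
d = np.diag(A)
stab_jacobi = estimate_precond_stability(lambda v: A @ v, lambda v: v / d, n)
stab_none = estimate_precond_stability(lambda v: A @ v, lambda v: v, n)
print(stab_jacobi, stab_none)                   # the smaller value forecasts the better choice
```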
3

Randomized Diagonal Estimation / Randomiserad Diagonalestimering

Popp, Niclas Joshua January 2023 (has links)
Implicit diagonal estimation is a long-standing problem concerned with approximating the diagonal of a matrix that can only be accessed through matrix-vector products. It is of interest in various fields of application, such as network science, materials science, and machine learning. This thesis provides a comprehensive review of randomized algorithms for implicit diagonal estimation and introduces various enhancements as well as extensions to matrix functions. Three novel diagonal estimators are presented. The first method employs low-rank Nyström approximations. The second approach is based on shifts, forming a generalization of current deflation-based techniques. Additionally, we introduce a method for adaptively determining the number of test vectors, thereby removing the need for prior knowledge about the matrix. Moreover, the median-of-means principle is incorporated into diagonal estimation. Beyond that, we combine diagonal estimation methods with approaches for approximating the action of matrix functions using polynomial approximations and Krylov subspaces. This enables us to present implicit methods for estimating the diagonal of matrix functions. We provide first-of-their-kind theoretical results for the convergence of these estimators. Subsequently, we present a deflation-based diagonal estimator for monotone functions of normal matrices with improved convergence properties. To validate the effectiveness and practical applicability of our methods, we conduct numerical experiments on real-world scenarios, including estimating subgraph centralities in a protein interaction network, approximating uncertainty in ordinary least squares, and randomized Jacobi preconditioning.
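For context, here is a minimal sketch of the baseline this line of work builds on: the classic Bekas-Kokiopoulou-Saad estimator, which recovers diag(A) from matrix-vector products with random probe vectors. This is the standard estimator rather than the Nyström-, shift-, or median-of-means-based variants the thesis introduces; the function name and the matrix-function example f(A) = A^2 are illustrative assumptions.

```python
import numpy as np

def estimate_diagonal(matvec, n, n_samples=100, rng=None):
    """Classic randomized diagonal estimator:
    diag(A) ~ (sum_k z_k * (A z_k)) / (sum_k z_k * z_k), using only matvecs with A."""
    rng = np.random.default_rng(rng)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        num += z * matvec(z)                  # elementwise z * (A z)
        den += z * z                          # equals 1 per entry for Rademacher probes
    return num / den

# Usage sketch: estimate the diagonal of an implicit matrix function, here f(A) = A @ A,
# accessed only through matrix-vector products.
n = 300
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)) / np.sqrt(n)
diag_est = estimate_diagonal(lambda v: A @ (A @ v), n, n_samples=500)
diag_true = np.diag(A @ A)
print(np.linalg.norm(diag_est - diag_true) / np.linalg.norm(diag_true))
```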
