About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Randomized Diagonal Estimation / Randomiserad Diagonalestimering

Popp, Niclas Joshua January 2023 (has links)
Implicit diagonal estimation is a long-standing problem concerned with approximating the diagonal of a matrix that can only be accessed through matrix-vector products. It is of interest in various fields of application, such as network science, materials science and machine learning. This thesis provides a comprehensive review of randomized algorithms for implicit diagonal estimation and introduces various enhancements as well as extensions to matrix functions. Three novel diagonal estimators are presented. The first method employs low-rank Nyström approximations. The second approach is based on shifts, forming a generalization of current deflation-based techniques. Additionally, we introduce a method for adaptively determining the number of test vectors, thereby removing the need for prior knowledge about the matrix. Moreover, the median-of-means principle is incorporated into diagonal estimation. Beyond this, we combine diagonal estimation methods with approaches for approximating the action of matrix functions using polynomial approximations and Krylov subspaces. This enables us to present implicit methods for estimating the diagonal of matrix functions, together with first-of-their-kind theoretical results for the convergence of these estimators. Subsequently, we present a deflation-based diagonal estimator for monotone functions of normal matrices with improved convergence properties. To validate the effectiveness and practical applicability of our methods, we conduct numerical experiments in real-world scenarios, including estimating subgraph centralities in a protein interaction network, approximating uncertainty in ordinary least squares, and randomized Jacobi preconditioning.
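As a point of reference for the methods surveyed here, the classical Monte Carlo diagonal estimator recovers diag(A) from matrix-vector products alone. A minimal NumPy sketch (ours, not the thesis's implementation; names and parameters are illustrative):

```python
import numpy as np

def estimate_diagonal(matvec, n, num_probes=100, rng=None):
    """Estimate diag(A) using only matrix-vector products v -> A @ v.

    Monte Carlo estimator: diag(A) ~ sum_k v_k * (A v_k) / sum_k v_k * v_k,
    with elementwise products and random Rademacher probe vectors v_k.
    """
    rng = np.random.default_rng(rng)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        num += v * matvec(v)                  # elementwise v * (A v)
        den += v * v                          # all ones for Rademacher probes
    return num / den

# Example: access a dense test matrix only through products.
A = np.random.default_rng(0).standard_normal((500, 500))
d = estimate_diagonal(lambda v: A @ v, 500, num_probes=200)
print(np.linalg.norm(d - np.diag(A)) / np.linalg.norm(np.diag(A)))
```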
22

Algorithmes de mise à l'échelle et méthodes tropicales en analyse numérique matricielle / Scaling algorithms and tropical methods in numerical matrix analysis

Sharify, Meisam 01 September 2011 (has links) (PDF)
Tropical algebra can be considered a relatively new field of mathematics. It appears in several domains such as optimization, synchronization of production and transportation, discrete event systems, optimal control, operations research, etc. The first part of this manuscript is devoted to the study of applications of tropical algebra to numerical matrix analysis. We first consider the classical problem of estimating the roots of a univariate polynomial. We prove several new bounds on the absolute values of the roots of a polynomial by exploiting tropical methods. These results are particularly useful when considering polynomials whose coefficients have different orders of magnitude. We then examine the problem of computing the eigenvalues of a matrix polynomial. Here, we introduce a general scaling technique, based on tropical algebra, which applies in particular to the companion form. This scaling relies on the construction of an auxiliary tropical polynomial function that depends only on the norms of the matrices. The roots (the points of non-differentiability) of this tropical polynomial provide an a priori estimate of the absolute values of the eigenvalues. This is justified in particular by a new result showing that, under certain assumptions on the conditioning, there exists a group of eigenvalues bounded in norm, with the order of magnitude of these bounds given by the largest root of the auxiliary tropical polynomial. A similar result holds for a group of small eigenvalues. We show experimentally that this scaling improves numerical stability, in particular in situations where the data have different orders of magnitude. We also study the problem of computing the tropical eigenvalues (the points of non-differentiability of the characteristic polynomial) of a tropical matrix polynomial. From a combinatorial point of view, this problem is equivalent to computing the value of a maximum-weight matching in a bipartite graph whose edges are weighted by convex piecewise-linear functions. We have developed an algorithm that computes these tropical eigenvalues in polynomial time.
In the second part of this thesis, we address the solution of very large optimal assignment problems, for which classical sequential algorithms are not efficient. We propose a new approach that exploits the connection between the optimal assignment problem and the entropy maximization problem. This approach leads to a preprocessing algorithm for the optimal assignment problem, based on an iterative method that eliminates the entries that do not belong to any optimal assignment. We consider two iterative variants of the preprocessing algorithm, one using the Sinkhorn method and the other using Newton's method. This preprocessing reduces the initial problem to one that is much smaller in terms of memory requirements. We also introduce a new iterative method based on a modification of the Sinkhorn algorithm, in which a deformation parameter is slowly increased. We prove that this iterative method (the deformed Sinkhorn iteration) converges to a matrix whose nonzero entries are exactly those belonging to the optimal permutations. An estimate of the convergence rate is also presented.
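For illustration, the tropical roots used above as a priori estimates of root and eigenvalue magnitudes can be read off the upper Newton polygon of the coefficient moduli. A small NumPy sketch (ours, not the thesis code):

```python
import numpy as np

def tropical_roots(coeffs):
    """Tropical roots of p(x) = sum_i coeffs[i] * x**i.

    They are the points of non-differentiability of x -> max_i (log|a_i| + i*x)
    and are read off the upper Newton polygon of the points (i, log|a_i|),
    each with a multiplicity equal to the length of its polygon segment.
    """
    pts = [(i, np.log(abs(c))) for i, c in enumerate(coeffs) if c != 0]
    hull = []                      # upper convex hull via a monotone scan
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop hull[-1] if it lies on or below the segment hull[-2] -> p.
            if (y2 - y1) * (p[0] - x1) <= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    roots = []
    for (i, li), (j, lj) in zip(hull, hull[1:]):
        alpha = np.exp((li - lj) / (j - i))   # tropical root for this segment
        roots.extend([alpha] * (j - i))       # multiplicity j - i
    return np.array(roots)

# Coefficients of very different magnitudes: p(x) = 1e-8 + x + 1e8 x^2;
# both roots have modulus 1e-8, matched by the tropical estimates.
print(tropical_roots([1e-8, 1.0, 1e8]))
```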
23

Matrix Algebra for Quantum Chemistry

Rubensson, Emanuel H. January 2008 (has links)
This thesis concerns methods of reduced complexity for electronic structure calculations. When quantum chemistry methods are applied to large systems, it is important to use computer resources optimally and to store only data and perform only operations that contribute to the overall accuracy. At the same time, precarious approximations could jeopardize the reliability of the whole calculation.

In this thesis, the self-consistent field method is seen as a sequence of rotations of the occupied subspace. Errors coming from computational approximations are characterized as erroneous rotations of this subspace. This viewpoint is optimal in the sense that the occupied subspace uniquely defines the electron density. Errors should be measured by their impact on the overall accuracy instead of by their constituent parts. With this point of view, a mathematical framework for control of errors in Hartree-Fock/Kohn-Sham calculations is proposed. A unifying framework is of particular importance when computational approximations are introduced to efficiently handle large systems.

An important operation in Hartree-Fock/Kohn-Sham calculations is the calculation of the density matrix for a given Fock/Kohn-Sham matrix. In this thesis, density matrix purification is used to compute the density matrix with time and memory usage increasing only linearly with system size. The forward error of purification is analyzed and schemes to control it are proposed. The presented purification methods are coupled with effective methods, also proposed in this thesis, to compute interior eigenvalues of the Fock/Kohn-Sham matrix. New methods for inverse factorizations of Hermitian positive definite matrices, usable for congruence transformations of the Fock/Kohn-Sham and density matrices, are suggested as well.

Most of the methods above have been implemented in the Ergo quantum chemistry program. This program uses a hierarchic sparse matrix library, also presented in this thesis, which is parallelized for shared-memory computer architectures. It is demonstrated that the Ergo program is able to perform linear-scaling Hartree-Fock calculations.
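To make the purification idea concrete, here is a dense NumPy sketch (ours) of one standard scheme, trace-correcting second-order spectral projection; the thesis develops forward-error control and sparse, linear-scaling variants of iterations of this kind:

```python
import numpy as np

def sp2_density_matrix(F, n_occ, tol=1e-8, max_iter=100):
    """Density matrix from a Fock/Kohn-Sham matrix F via purification.

    The density matrix is the spectral projector onto the n_occ lowest
    eigenstates of F.  Dense for clarity; linear scaling requires sparse
    matrices plus truncation-error control.
    """
    n = F.shape[0]
    # Gershgorin bounds on the spectrum of F (cheap and always valid).
    r = np.sum(np.abs(F), axis=1) - np.abs(np.diag(F))
    lo = np.min(np.diag(F) - r)
    hi = np.max(np.diag(F) + r)
    # Affine map: spectrum into [0, 1], occupied (low) eigenvalues near 1.
    X = (hi * np.eye(n) - F) / (hi - lo)
    for _ in range(max_iter):
        X2 = X @ X
        if np.linalg.norm(X - X2, 'fro') < tol:   # idempotent => projector
            break
        # Pick the polynomial that drives trace(X) toward n_occ:
        # x -> x^2 lowers the occupation, x -> 2x - x^2 raises it.
        X = X2 if np.trace(X) > n_occ else 2 * X - X2
    return X
```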
24

Numerical Quality and High Performance in Interval Linear Algebra on Multi-Core Processors / Algèbre linéaire d'intervalles - Qualité Numérique et Hautes Performances sur Processeurs Multi-Cœurs

Theveny, Philippe 31 October 2014 (has links)
This work aims at determining suitable scopes for several algorithms of interval matrix multiplication. First, we quantify the numerical quality. Former error analyses of interval matrix products establish bounds on the radius overestimation while neglecting the roundoff error. We discuss several possible measures for interval approximations, then bound the roundoff error and compare this bound experimentally with the global error distribution on several random data sets. This approach highlights the relative importance of the roundoff and arithmetic errors depending on the value and homogeneity of the relative accuracies of the inputs, on the matrix dimension, and on the working precision. It also leads to a new algorithm that is cheaper yet as accurate as previous ones under well-identified conditions. Second, we exploit the parallelism of linear algebra. Previous implementations use calls to BLAS routines on numerical matrices. We show that this may lead to wrong interval results and also restricts the scalability of the performance as the core count increases. To overcome these problems, we implement a blocked version with OpenMP threads executing block kernels with vector instructions. The timings on a machine with four octo-core processors show that this implementation scales better than the BLAS-based one and that the costs of numerical and interval matrix products are comparable.
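A minimal NumPy sketch (ours) of the midpoint-radius representation underlying such implementations: an interval matrix product then reduces to three ordinary, BLAS-friendly matrix products. A rigorous enclosure must also account for floating-point roundoff in the midpoint product (e.g., via directed rounding), which this simplified version omits:

```python
import numpy as np

def interval_matmul(Am, Ar, Bm, Br):
    """Enclosure of the product [Am +- Ar] @ [Bm +- Br] (midpoint-radius).

    Rump-style formula using three ordinary matrix products; directed
    rounding for the midpoint product is omitted in this sketch.
    """
    Cm = Am @ Bm
    Cr = np.abs(Am) @ Br + Ar @ (np.abs(Bm) + Br)
    return Cm, Cr

# Interval matrices whose inputs have heterogeneous relative precision.
rng = np.random.default_rng(1)
Am, Bm = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
Ar, Br = 1e-6 * np.abs(Am), 1e-3 * np.abs(Bm)
Cm, Cr = interval_matmul(Am, Ar, Bm, Br)
print(np.median(Cr / np.abs(Cm)))   # typical relative radius of the result
```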
25

Myson Burch Thesis

Myson C Burch (16637289) 08 August 2023 (has links)
With the completion of the Human Genome Project and many additional efforts since, there is an abundance of genetic data that can be leveraged to revolutionize healthcare. There are now significant efforts to develop state-of-the-art techniques that reveal insights about connections between genetics and complex diseases such as diabetes, heart disease, or common psychiatric conditions that depend on multiple genes interacting with environmental factors. These methods help pave the way towards diagnosis, cure, and ultimately prediction and prevention of complex disorders. As a part of this effort, we address high-dimensional genomics-related questions through mathematical modeling, statistical methodologies, combinatorics and scalable algorithms. More specifically, we develop innovative techniques at the intersection of technology and the life sciences, using biobank-scale data from genome-wide association studies (GWAS) and machine learning in an effort to better understand human health and disease.

The underlying principle behind GWAS is a test for association between genotyped variants for each individual and the trait of interest. GWAS have been extensively used to estimate the signed effects of trait-associated alleles and to map genes to disorders; over the past decade about 10,000 strong associations between genetic variants and one (or more) complex traits have been reported. One of the key challenges in GWAS is population stratification, which can lead to spurious genotype-trait associations. Our work proposes a simple clustering-based approach that corrects for stratification better than existing methods. This method takes linkage disequilibrium (LD) into account while computing the distance between the individuals in a sample. Our approach, called CluStrat, performs Agglomerative Hierarchical Clustering (AHC) using a regularized Mahalanobis distance-based GRM, which captures the population-level covariance (LD) matrix for the available genotype data.

Linear mixed models (LMMs) have been a popular and powerful method for conducting GWAS in the presence of population structure, but they are computationally expensive relative to simpler techniques. We implement matrix sketching in LMMs (MaSk-LMM) to mitigate the more expensive computations. Matrix sketching is an approximation technique in which random projections compress the original dataset into one that is significantly smaller yet still preserves some of the properties of the original up to a guaranteed approximation ratio. This technique naturally applies to problems in genetics, where a large biobank can be treated as a matrix whose rows represent samples and whose columns represent SNPs. These matrices are very large due to the number of individuals and markers in biobanks and can benefit from matrix sketching. Our approach tackles the bottleneck of LMMs directly by sketching the samples of the genotype matrix as well as the markers during the computation of the relatedness or kinship matrix (GRM).

Predictive analytics have been used to improve healthcare by reinforcing decision-making, enhancing patient outcomes, and providing relief for the healthcare system. The prevalence of complex diseases varies greatly around the world. Understanding the basis of this prevalence difference can help disentangle the interaction among the different factors causing complex disorders and identify groups of people who may be at greater risk of developing certain disorders. This could become the basis for implementing early intervention strategies for populations at higher risk, with significant benefits for public health.

This dissertation broadens our understanding of empirical population genetics. It proposes a data-driven perspective on a variety of problems in genetics, such as confounding factors in genetic structure. It highlights current computational barriers in open problems in genetics and provides robust, scalable and efficient methods to ease the analysis of genotype data.
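To illustrate the sketching idea on the kinship computation described above, here is a simplified NumPy sketch (ours, not the MaSk-LMM implementation); the Gaussian projection and its dimensions are illustrative assumptions:

```python
import numpy as np

def sketched_grm(X, k, rng=None):
    """Approximate GRM of a standardized genotype matrix X (samples x SNPs).

    Exact GRM = X @ X.T / m.  A Gaussian sketch S of shape (m, k), k << m,
    preserves pairwise inner products up to Johnson-Lindenstrauss-type
    distortion, so the compressed product costs a fraction of the exact one.
    """
    n, m = X.shape
    rng = np.random.default_rng(rng)
    S = rng.standard_normal((m, k)) / np.sqrt(k)   # random projection
    Y = X @ S                                      # compressed genotypes
    return Y @ Y.T / m

# Toy comparison against the exact GRM.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5000))
G = X @ X.T / 5000
G_sk = sketched_grm(X, k=500, rng=1)
print(np.linalg.norm(G - G_sk) / np.linalg.norm(G))   # ~ a few percent
```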
26

Ghosts and machines : regularized variational methods for interactive simulations of multibodies with dry frictional contacts

Lacoursière, Claude January 2007 (has links)
A time-discrete formulation of the variational principle of mechanics is used to provide a consistent theoretical framework for the construction and analysis of low order integration methods. These are applied to mechanical systems subject to mixed constraints and dry frictional contacts and impacts---machines. The framework includes physics-motivated constraint regularization and stabilization schemes, realized by adding potential energy and Rayleigh dissipation terms in the Lagrangian formulation used throughout. These terms explicitly depend on the value of the Lagrange multipliers enforcing constraints. Having finite energy, the multipliers are thus massless ghost particles. The main numerical stepping method produced with the framework is called SPOOK.

Variational integrators preserve physical invariants globally---exactly in some cases, approximately but within fixed global bounds in others. This makes it possible to produce realistic physical trajectories even with low order methods. Such methods are needed in the solution of nonsmooth problems such as dry frictional contacts and, in addition, they are computationally inexpensive. The combination of strong stability, low order, and the global preservation of invariants allows for large integration time steps without losing accuracy on the important and visible physical quantities. SPOOK is thus well suited for interactive simulations, such as those commonly used in virtual environment applications, because it is fast, stable, and faithful to the physics.

New results include a stable discretization of highly oscillatory terms of constraint regularization; a linearly stable constraint stabilization scheme based on ghost potential and Rayleigh dissipation terms; a single-step, strictly dissipative, approximate impact model; a quasi-linear complementarity formulation of dry friction that is isotropic and solvable for any nonnegative value of the friction coefficients; an analysis of a splitting scheme to solve frictional contact complementarity problems; and a stable, quaternion-based rigid body stepping scheme along with a stable linear approximation thereof. SPOOK includes all these elements. It is linearly implicit and linearly stable, and it requires the solution of either one linear system of equations or one mixed linear complementarity problem per regular time step, and two of the same when an impact condition is detected. The changes in energy caused by constraints, impacts, and dry friction are all shown to be strictly dissipative in comparison with the free system. Since all regularization and stabilization parameters are introduced in the physics, they map directly onto physical properties and thus allow modeling of a variety of phenomena, such as constraint compliance.

Tutorial material is included for continuous and discrete-time analytic mechanics, quaternion algebra, complementarity problems, rigid body dynamics, constraint kinematics, and special topics in numerical linear algebra needed in the solution of the stepping equations of SPOOK.

The qualitative and quantitative aspects of SPOOK are demonstrated by comparison with a variety of standard techniques on well-known test cases, which are analyzed in detail. SPOOK compares favorably in all these examples. In particular, it handles ill-posed and degenerate problems seamlessly and systematically. An implementation suitable for large-scale performance and accuracy testing is left for future work.
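For illustration only, the following NumPy sketch shows a regularized, linearly implicit constraint step of the same general shape: one saddle-point solve per regular time step, with a compliance parameter eps and a dissipative stabilization term. It mimics the structure described above but is not the thesis's exact SPOOK discretization:

```python
import numpy as np

def regularized_step(q, v, h, M, force, g, G, eps=1e-8, damp=1.0):
    """One linearly implicit step for a system with holonomic constraints.

    Illustrative only: eps plays the role of constraint compliance and
    damp of the ghost/Rayleigh dissipation; the (eps / h**2) block keeps
    the saddle-point system solvable even for degenerate constraints.
    """
    n, m = len(v), len(g(q))
    Gq = G(q)
    K = np.block([
        [M,               -Gq.T],
        [Gq, (eps / h**2) * np.eye(m)],
    ])
    rhs = np.concatenate([
        M @ v + h * force(q, v),
        -(damp / h) * g(q),        # stabilization: push drift back to g = 0
    ])
    sol = np.linalg.solve(K, rhs)
    v_new = sol[:n]
    return q + h * v_new, v_new

# Planar pendulum of length 1: q = (x, y), constraint g = (|q|^2 - 1) / 2.
M = np.eye(2)
g = lambda q: np.array([(q @ q - 1.0) / 2.0])
G = lambda q: q.reshape(1, 2)
force = lambda q, v: np.array([0.0, -9.81])
q, v = np.array([1.0, 0.0]), np.zeros(2)
for _ in range(1000):
    q, v = regularized_step(q, v, 1e-3, M, force, g, G)
print(q @ q)   # remains close to 1
```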
28

Preconditioned Newton methods for ill-posed problems / Vorkonditionierte Newton-Verfahren für schlecht gestellte Probleme

Langer, Stefan 21 June 2007 (has links)
No description available.
29

On the Efficient Utilization of Dense Nonlocal Adjacency Information in Graph Neural Networks

Bünger, Dominik 14 December 2021 (has links)
Over the past few years, graph learning---the subdomain of machine learning on graph data---has taken big leaps forward through the development of specialized Graph Neural Networks (GNNs) with mathematical foundations in spectral graph theory. In addition to natural graph data, these methods can be applied to non-graph data sets by constructing a graph artificially, using a predefined notion of adjacency between samples. The state of the art is to connect each sample to only a small number of neighbors, in order to simultaneously mimic the sparse structure of natural graphs, play into the strengths of existing GNN methods, and avoid the quadratic scaling in the number of nodes that would make the approach infeasible for large problem sizes.

In this thesis, we shed light on the alternative construction of kernel-based fully connected graphs, in which the connections of each sample explicitly quantify its similarity to all other samples. The graph therefore contains a quadratic number of edges, encoding both local and non-local neighborhood information. Although this approach is well studied in other settings, including the solution of partial differential equations, it is typically dismissed in machine learning because of its dense adjacency matrices. We therefore dedicate a large portion of this work to showcasing numerical techniques for fast evaluations, especially eigenvalue computations, in important special cases where samples are described by low-dimensional feature vectors (e.g., three-dimensional point clouds) or by a small set of categorical attributes. We then investigate how this dense adjacency information can be utilized in graph learning settings. In particular, we propose a transductive learning method: a version of a Graph Convolutional Network (GCN) tailored to the spectral and spatial properties of dense graphs. We furthermore outline the application of kernel-based adjacency matrices to speeding up the successful PointNet++ architecture. Throughout this work, we evaluate our methods in extensive numerical experiments. In addition to the empirical accuracy of the neural network tasks, we focus on competitive runtimes in order to decrease the computational and energy cost of our methods.
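As a small illustration, a Gaussian-kernel dense adjacency and one normalized graph-convolution layer (our sketch; the adjacency is formed explicitly here, whereas the thesis evaluates such products matrix-free to avoid the quadratic memory cost):

```python
import numpy as np

def gaussian_adjacency(X, sigma):
    """Dense kernel adjacency: W[i, j] = exp(-||x_i - x_j||^2 / sigma^2).

    Every sample is connected to every other one, so W encodes both local
    and non-local similarity.
    """
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-np.maximum(D2, 0.0) / sigma**2)

def gcn_layer(W, H, Theta):
    """One graph-convolution layer: H' = relu(D^{-1/2} W D^{-1/2} H Theta)."""
    d = W.sum(axis=1)
    A_hat = W / np.sqrt(np.outer(d, d))       # symmetric normalization
    return np.maximum(A_hat @ H @ Theta, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))             # e.g., a small 3-D point cloud
W = gaussian_adjacency(X, sigma=1.0)
H1 = gcn_layer(W, X, rng.standard_normal((3, 16)))
```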
30

The condition number of Vandermonde matrices and its application to the stability analysis of a subspace method / Die Konditionzahl von Vandermondematrizen und ihre Anwendung für die Stabilitätsanalyse einer Unterraummethode

Nagel, Dominik 19 March 2021 (has links)
This thesis consists of two main parts. First, the condition number of rectangular Vandermonde matrices with nodes on the complex unit circle is studied. For the first time, quantitative bounds on the extreme singular values are proven in the multivariate setting and in the case where the nodes of the Vandermonde matrix form clusters. In the second part, an optimized presentation of the deterministic stability analysis of the subspace method ESPRIT is given, and the results from the first part are applied.
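The clustering effect studied here is easy to observe numerically. A small NumPy sketch (ours) comparing well-separated and clustered nodes on the unit circle:

```python
import numpy as np

def vandermonde_unit_circle(thetas, n_rows):
    """Rectangular Vandermonde matrix V[l, j] = exp(2*pi*1j*l*thetas[j]).

    The nodes z_j = exp(2*pi*1j*thetas[j]) lie on the complex unit circle;
    the matrix has n_rows rows and one column per node.
    """
    l = np.arange(n_rows)[:, None]
    return np.exp(2j * np.pi * l * np.asarray(thetas)[None, :])

def condition_number(V):
    s = np.linalg.svd(V, compute_uv=False)   # extreme singular values
    return s[0] / s[-1]

n = 64
separated = [0.1, 0.35, 0.6, 0.85]           # nodes spread out
clustered = [0.1, 0.1 + 1e-3, 0.6, 0.85]     # two nodes form a cluster
print(condition_number(vandermonde_unit_circle(separated, n)))  # moderate
print(condition_number(vandermonde_unit_circle(clustered, n)))  # large
```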
