  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Novel Model Reduction Techniques for Control of Machine Tools

Benner, Peter, Bonin, Thomas, Faßbender, Heike, Saak, Jens, Soppa, Andreas, Zaeh, Michael 13 November 2009 (has links) (PDF)
Computational methods for reducing the complexity of Finite Element (FE) models in structural dynamics are usually based on modal analysis. Classical approaches such as modal truncation, static condensation (Craig-Bampton, Guyan), and component mode synthesis (CMS) are available in many CAE tools such as ANSYS. In other disciplines, different techniques for Model Order Reduction (MOR) have been developed over the past two decades. Krylov subspace methods are one possible choice and often lead to much smaller models than modal truncation methods given the same prescribed tolerance threshold. They have become available to ANSYS users through the tool mor4ansys. A disadvantage is that neither modal truncation nor CMS nor Krylov subspace methods preserve properties important to control design. System-theoretic methods like balanced truncation approximation (BTA), on the other hand, are directed towards reduced-order models for use in closed-loop control. So far, these methods have been considered too expensive for large-scale structural models. We show that recent algorithmic advances lead to MOR methods that are applicable to FE models in structural dynamics and that can easily be integrated into CAE software. We demonstrate the efficiency of the proposed MOR method based on BTA using a control system whose plant is the FE model of a machine tool.
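To make the balanced truncation approximation (BTA) mentioned in this abstract concrete, here is a minimal square-root BTA sketch in Python/NumPy/SciPy. A small random stable system stands in for the large FE machine-tool model; all matrices, dimensions, and the reduced order are invented for illustration, not taken from the thesis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

rng = np.random.default_rng(0)
n, r = 10, 4                       # full and reduced order (toy sizes)
S = rng.standard_normal((n, n))
A = (S - S.T) - np.diag(np.arange(1.0, n + 1))  # stable: symmetric part < 0
b = rng.standard_normal((n, 1))
c = rng.standard_normal((1, n))

# Gramians: A P + P A^T + b b^T = 0 and A^T Q + Q A + c^T c = 0
P = solve_continuous_lyapunov(A, -b @ b.T)
Q = solve_continuous_lyapunov(A.T, -c.T @ c)

def sqrt_factor(M):
    """Square-root factor L of a symmetric PSD matrix, L @ L.T == M."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V * np.sqrt(np.clip(w, 0.0, None))

Lp, Lq = sqrt_factor(P), sqrt_factor(Q)
U, hsv, Vt = svd(Lq.T @ Lp)                      # Hankel singular values
T = Lp @ Vt[:r].T / np.sqrt(hsv[:r])             # right projection
Tinv = (U[:, :r] / np.sqrt(hsv[:r])).T @ Lq.T    # left inverse of T
Ar, br, cr = Tinv @ A @ T, Tinv @ b, c @ T       # reduced-order model

# Pointwise check against the H-infinity error bound 2 * sum(truncated HSVs)
s = 1j * 2.0
G = (c @ np.linalg.solve(s * np.eye(n) - A, b)).item()
Gr = (cr @ np.linalg.solve(s * np.eye(r) - Ar, br)).item()
err = abs(G - Gr)
bound = 2 * hsv[r:].sum()
```

The last lines check the classical BTA guarantee that the frequency-response error never exceeds twice the sum of the truncated Hankel singular values.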
12

Krylov subspace type methods for the computation of non-negative or sparse solutions of ill-posed problems

Pasha, Mirjeta 10 April 2020 (has links)
No description available.
13

Crouzeix's Conjecture and the GMRES Algorithm

Luo, Sarah McBride 13 July 2011 (has links) (PDF)
This thesis explores the connection between Crouzeix's conjecture and the convergence of the GMRES algorithm. GMRES is a popular iterative method for solving linear systems and is one of the many Krylov methods. Despite its popularity, the convergence of GMRES is not completely understood. While the spectrum can in some cases be a good indicator of convergence, it has been shown that in general, the spectrum does not provide sufficient information to fully explain the behavior of GMRES iterations. Other sets associated with a matrix that can also help predict convergence are the pseudospectrum and the numerical range. This work focuses on convergence bounds obtained by considering the latter. In particular, it focuses on the application of Crouzeix's conjecture, which relates the norm of a matrix polynomial to the size of that polynomial over the numerical range, to describing GMRES convergence.
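The numerical range (field of values) central to Crouzeix's conjecture can be traced out numerically: for each angle θ, the largest eigenpair of the Hermitian part of e^{iθ}A yields a boundary point of W(A). A small self-contained sketch (the 2×2 test matrix is an arbitrary example, for which W(A) is known to be an ellipse with foci at the eigenvalues):

```python
import numpy as np

def numerical_range_boundary(A, npts=180):
    """Boundary points of the numerical range W(A) via rotated Hermitian parts."""
    pts = []
    for theta in np.linspace(0.0, 2 * np.pi, npts, endpoint=False):
        # Hermitian part of e^{i theta} A; its top eigenvector supports W(A)
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
        w, V = np.linalg.eigh(H)
        x = V[:, -1]                      # eigenvector of the largest eigenvalue
        pts.append(x.conj() @ A @ x)      # boundary point of W(A)
    return np.array(pts)

# Example: W([[1, 2], [0, 3]]) is an ellipse with foci 1 and 3
A = np.array([[1.0, 2.0], [0.0, 3.0]], dtype=complex)
bnd = numerical_range_boundary(A)
```

For this matrix the rightmost point of W(A) is 2 + sqrt(2) and the ellipse's half-height is 1, which the sampled boundary reproduces; the spectrum {1, 3} lies strictly inside, illustrating why W(A) carries more convergence information than the eigenvalues alone.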
14

Regularization Methods for Ill-posed Problems

Neuman, Arthur James, III 15 June 2010 (has links)
No description available.
15

Krylov Subspace Methods with Fixed Memory Requirements: Nearly Hermitian Linear Systems and Subspace Recycling

Soodhalter, Kirk McLane January 2012 (has links)
Krylov subspace iterative methods provide an effective tool for reducing the solution of large linear systems to a size for which a direct solver may be applied. However, the problems of limited storage and speed are still a concern. Therefore, in this dissertation work, we present iterative Krylov subspace algorithms for non-Hermitian systems which do have fixed memory requirements and have favorable convergence characteristics. This dissertation describes three projects. The first project concerns short-term recurrence Krylov subspace methods for nearly-Hermitian linear systems. In 2008, Beckermann and Reichel introduced a short-term recurrence progressive GMRES algorithm for nearly-Hermitian linear systems. However, we have found this method to be unstable. We document the instabilities and introduce a different fixed-memory algorithm to treat nearly-Hermitian problems. We present numerical experiments demonstrating that the performance of this algorithm is competitive. The other two projects involve extending a strategy called Krylov subspace recycling, introduced by Parks and colleagues in 2005. This method requires more overhead than other subspace augmentation methods but offers the ability to recycle subspace information between cycles for a single linear system and recycle information between related linear systems. In the first project, we extend subspace recycling to the block Krylov subspace setting. A block Krylov subspace is a generalization of Krylov subspace where a single starting vector is replaced with a block of linearly independent starting vectors. We then apply our method to a sequence of matrices arising in a Newton iteration applied to fluid density functional theory and present some numerical experiments. In the second project, we extend the methods of subspace recycling to a family of linear systems differing only by multiples of the identity. 
These problems arise in the theory of quantum chromodynamics, a theory of the behavior of subatomic particles. We wish to build on the class of Krylov methods which allow the simultaneous solution of all shifted linear systems while generating only one subspace. However, the mechanics of subspace recycling complicates this situation and interferes with our ability to simultaneously solve all systems using these techniques. Therefore, we introduce an algorithm which avoids this complication and present some numerical experiments demonstrating its effectiveness. / Mathematics
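The shifted-systems setting in the last paragraph rests on a basic fact: Krylov subspaces are shift-invariant, K_m(A, b) = K_m(A + σI, b), so a single Arnoldi basis serves every shift and only the Hessenberg matrix changes by σ on its diagonal. A small demonstration (random matrix, shift, and sizes chosen arbitrarily):

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi process: A @ V[:, :m] = V @ H with H an (m+1) x m Hessenberg."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
n, m, sigma = 30, 8, 2.5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

V, H = arnoldi(A, b, m)
Vs, Hs = arnoldi(A + sigma * np.eye(n), b, m)
# Shift invariance: identical basis; Hessenberg shifted by sigma on the diagonal
```

This invariance is what makes it attractive to solve all shifted QCD systems from one subspace; as the abstract notes, it is exactly this structure that subspace recycling disturbs.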
16

On the vector epsilon algorithm for solving linear systems of equations

Graves-Morris, Peter R., Salam, A. 12 May 2009 (has links)
The four vector extrapolation methods, minimal polynomial extrapolation, reduced rank extrapolation, modified minimal polynomial extrapolation and the topological epsilon algorithm, when applied to linearly generated vector sequences, are Krylov subspace methods, and it is known that they are equivalent to some well-known conjugate gradient type methods. However, the vector ε-algorithm is an extrapolation method, older than the four extrapolation methods above, and no similar results are known for it. In this paper, a determinantal formula for the vector ε-algorithm is given. Then it is shown that, when applied to a linearly generated vector sequence, the algorithm is also a Krylov subspace method, and that for a class of matrices the method is equivalent to a preconditioned Lanczos method. A new determinantal formula for CGS is given, and an algebraic comparison between the vector ε-algorithm for linear systems and CGS is also given.
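The vector ε-algorithm in this abstract is Wynn's recursion with the Samelson vector inverse v^{-1} = v / ⟨v, v⟩. A hedged sketch applied to a linearly generated sequence x_{k+1} = M x_k + c (toy dimensions and matrices invented here); for such a sequence the even column ε_{2n} recovers the limit exactly in exact arithmetic:

```python
import numpy as np

def vinv(v):
    """Samelson (pseudo-)inverse of a vector: v^{-1} = v / <v, v>."""
    return v / (v @ v)

def vector_epsilon(seq, col):
    """Wynn's vector epsilon algorithm; returns the entries of column `col`."""
    e_prev = [np.zeros_like(s) for s in seq]   # column -1: zeros
    e_curr = list(seq)                          # column 0: the sequence itself
    for _ in range(col):
        e_next = [e_prev[k + 1] + vinv(e_curr[k + 1] - e_curr[k])
                  for k in range(len(e_curr) - 1)]
        e_prev, e_curr = e_curr, e_next
    return e_curr

# Linearly generated sequence x_{k+1} = M x_k + c, limit x* = (I - M)^{-1} c
rng = np.random.default_rng(2)
n = 3
M = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)   # contraction (very likely)
c = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - M, c)

seq = [np.zeros(n)]
for _ in range(2 * n + 2):
    seq.append(M @ seq[-1] + c)

accel = vector_epsilon(seq, 2 * n)[0]   # epsilon_{2n}^{(0)}
raw_err = np.linalg.norm(seq[-1] - x_star)
acc_err = np.linalg.norm(accel - x_star)
```

The accelerated value is far closer to the fixed point than the last raw iterate, which is the Krylov-subspace behavior the paper characterizes.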
17

Exponential Integrators for the Incompressible Navier-Stokes Equations

Newman, Christopher K. 05 November 2003 (has links)
We provide an algorithm and analysis of a high order projection scheme for time integration of the incompressible Navier-Stokes equations (NSE). The method is based on a projection onto the subspace of divergence-free (incompressible) functions interleaved with a Krylov-based exponential time integration (KBEI). These time integration methods provide a high order accurate, stable approach with many of the advantages of explicit methods, and can reduce the computational resources required by conventional methods. The method is scalable in the sense that the computational costs grow linearly with problem size. Exponential integrators, used typically to solve systems of ODEs, require the product of the exponential of the Jacobian with a vector. For large systems, this product can be approximated efficiently by Krylov subspace methods. However, in contrast to explicit methods, KBEIs are not subject to a stability restriction on the time step. While implicit methods require a solution of a linear system with the Jacobian, KBEIs only require matrix-vector products with the Jacobian. Furthermore, these methods are based on linearization, so there is no nonlinear system solve at each time step. Differential-algebraic equations (DAEs) are ordinary differential equations (ODEs) subject to algebraic constraints. The discretized NSE constitute a system of DAEs, where the incompressibility condition is the algebraic constraint. Exponential integrators can be extended to DAEs with linear constraints imposed via a projection onto the constraint manifold. This results in a projected ODE that is integrated by a KBEI. In this approach, the Krylov subspace satisfies the constraint, hence the solution at the advanced time step automatically satisfies the constraint as well. For the NSE, the projection onto the constraint is typically achieved by a projection induced by the L2 inner product. We examine this L2 projection and an H1 projection induced by the H1 semi-inner product.
The H1 projection has an advantage over the L2 projection in that it retains tangential Dirichlet boundary conditions for the flow. Both the H1 and L2 projections are solutions to saddle point problems that are efficiently solved by a preconditioned Uzawa algorithm. / Ph. D.
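The Krylov approximation of exp(A)v at the heart of a KBEI can be sketched in a few lines: run Arnoldi to get V_m and H_m, then use exp(A)v ≈ ||v|| V_m exp(H_m) e_1, so only the small m×m exponential is formed. The matrix below is an arbitrary symmetric toy stand-in for a Navier-Stokes Jacobian:

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(A, v, m):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown: invariant subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # exp(A) v ~= beta * V_m exp(H_m) e_1 : only an m x m exponential needed
    return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]

rng = np.random.default_rng(3)
n, m = 40, 25
B = rng.standard_normal((n, n))
A = -(B @ B.T) / n                      # symmetric, modest norm (toy Jacobian)
v = rng.standard_normal(n)
approx = krylov_expv(A, v, m)
exact = expm(A) @ v
```

Only matrix-vector products with A are used, which is exactly the property the abstract contrasts with implicit methods.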
18

Interpolation Methods for the Model Reduction of Bilinear Systems

Flagg, Garret Michael 31 May 2012 (has links)
Bilinear systems are a class of nonlinear dynamical systems that arise in a variety of applications. In order to obtain a sufficiently accurate representation of the underlying physical phenomenon, these models frequently have state-spaces of very large dimension, resulting in the need for model reduction. In this work, we introduce two new methods for the model reduction of bilinear systems in an interpolation framework. Our first approach is to construct reduced models that satisfy multipoint interpolation constraints defined on the Volterra kernels of the full model. We show that this approach can be used to develop an asymptotically optimal solution to the H_2 model reduction problem for bilinear systems. In our second approach, we construct a solution to a bilinear system realization problem posed in terms of constructing a bilinear realization whose kth-order transfer functions satisfy interpolation conditions in k complex variables. The solution to this realization problem can be used to construct a bilinear system realization directly from sampling data on the kth-order transfer functions, without requiring the formation of the realization matrices for the full bilinear system. / Ph. D.
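A bilinear system has the state-space form x' = Ax + (Nx)u + bu, y = cx: the input multiplies the state, which is what makes the dynamics nonlinear. One useful sanity check, used in the hedged sketch below (all matrices are random toys, not the thesis's examples), is that for a constant input u ≡ u0 the bilinear system coincides with the linear system x' = (A + u0 N)x + b u0, whose solution is known in closed form:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

rng = np.random.default_rng(4)
n = 5
A = -3.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))  # stable toy system
N = 0.5 * rng.standard_normal((n, n))                     # bilinear coupling
b = rng.standard_normal(n)
u0, t_end = 0.7, 1.0

def bilinear_rhs(t, x):
    u = u0                              # constant input for this check
    return A @ x + (N @ x) * u + b * u

sol = solve_ivp(bilinear_rhs, (0.0, t_end), np.zeros(n),
                rtol=1e-10, atol=1e-12)
x_num = sol.y[:, -1]

# For constant u, the bilinear system is x' = (A + u N) x + b u, so
# x(t) = (A + u N)^{-1} (exp((A + u N) t) - I) b u  from x(0) = 0.
Au = A + u0 * N
x_exact = np.linalg.solve(Au, (expm(Au * t_end) - np.eye(n)) @ b) * u0
```

For time-varying inputs this reduction no longer holds, and the response is instead organized by the Volterra kernels that the interpolation conditions in this thesis target.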
19

Recycling Techniques for Sequences of Linear Systems and Eigenproblems

Carr, Arielle Katherine Grim 09 July 2021 (has links)
Sequences of matrices arise in many applications in science and engineering. In this thesis we consider matrices that are closely related (or closely related in groups), and we take advantage of the small differences between them to efficiently solve sequences of linear systems and eigenproblems. Recycling techniques, such as recycling preconditioners or subspaces, are popular approaches for reducing computational cost. In this thesis, we introduce two novel approaches for recycling previously computed information for a subsequent system or eigenproblem, and demonstrate good results for sequences arising in several applications. Preconditioners are often essential for fast convergence of iterative methods. However, computing a good preconditioner can be very expensive, and when solving a sequence of linear systems, we want to avoid computing a new preconditioner too often. Instead, we can recycle a previously computed preconditioner, for which we have good convergence behavior of the preconditioned system. We propose an update technique we call the sparse approximate map, or SAM update, that approximately maps one matrix to another matrix in our sequence. SAM updates are very cheap to compute and apply, preserve good convergence properties of a previously computed preconditioner, and help to amortize the cost of that preconditioner over many linear solves. When solving a sequence of eigenproblems, we can reduce the computational cost of constructing the Krylov space starting with a single vector by warm-starting the eigensolver with a subspace instead. We propose an algorithm to warm-start the Krylov-Schur method using a previously computed approximate invariant subspace. We first compute the approximate Krylov decomposition for a matrix with minimal residual, and use this space to warm-start the eigensolver. We account for the residual matrix when expanding, truncating, and deflating the decomposition and show that the norm of the residual monotonically decreases. 
This method is effective in reducing the total number of matrix-vector products, and computes an approximate invariant subspace that is as accurate as the one computed with standard Krylov-Schur. In applications where the matrix-vector products require an implicit linear solve, we incorporate Krylov subspace recycling. Finally, in many applications, sequences of matrices take the special form of the sum of the identity matrix, a very low-rank matrix, and a small-in-norm matrix. We consider convergence rates for GMRES applied to these matrices by identifying the sources of sensitivity. / Doctor of Philosophy / Problems in science and engineering often require the solution to many linear systems, or a sequence of systems, that model the behavior of physical phenomena. In order to construct highly accurate mathematical models to describe this behavior, the resulting matrices can be very large, and therefore the linear system can be very expensive to solve. To efficiently solve a sequence of large linear systems, we often use iterative methods, which can require preconditioning techniques to achieve fast convergence. The preconditioners themselves can be very expensive to compute. So, we propose a cheap update technique that approximately maps one matrix to another in the sequence for which we already have a good preconditioner. We then combine the preconditioner and the map and use the updated preconditioner for the current system. Sequences of eigenvalue problems also arise in many scientific applications, such as those modeling disk brake squeal in a motor vehicle. To accurately represent this physical system, large eigenvalue problems must be solved. The behavior of certain eigenvalues can reveal instability in the physical system but to identify these eigenvalues, we must solve a sequence of very large eigenproblems. 
The eigensolvers used to solve eigenproblems generally begin with a single vector, and instead, we propose starting the method with several vectors, or a subspace. This allows us to reduce the total number of iterations required by the eigensolver while still producing an accurate solution. We demonstrate good results for both of these approaches using sequences of linear systems and eigenvalue problems arising in several real-world applications. Finally, in many applications, sequences of matrices take the special form of the sum of the identity matrix, a very low-rank matrix, and a small-in-norm matrix. We examine the convergence behavior of the iterative method GMRES when solving such a sequence of linear systems.
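The sparse approximate map (SAM) update described above can be sketched as a column-by-column sparse least-squares problem, in the spirit of SPAI-type methods: find N with a prescribed sparsity pattern minimizing ||A_k N - A_0||_F, so that a good preconditioner for A_0 can be reused for A_k. The pattern choice, matrices, and drift below are illustrative assumptions, not the thesis's test cases:

```python
import numpy as np

def sam_update(Ak, A0, pattern):
    """Sparse approximate map: N with sparsity `pattern` minimizing
    ||Ak @ N - A0||_F, solved one column at a time."""
    n = A0.shape[0]
    N = np.zeros((n, n))
    for j in range(n):
        rows = np.nonzero(pattern[:, j])[0]      # allowed nonzeros in column j
        coeffs, *_ = np.linalg.lstsq(Ak[:, rows], A0[:, j], rcond=None)
        N[rows, j] = coeffs
    return N

rng = np.random.default_rng(5)
n = 30
A0 = 4.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))
Ak = A0 + 0.05 * rng.standard_normal((n, n))     # small drift along the sequence
pattern = np.abs(A0) > 0.25                      # hypothetical sparsity pattern
N = sam_update(Ak, A0, pattern)

# Mapping should bring Ak closer to A0, so a preconditioner built for A0
# stays effective for Ak @ N.
before = np.linalg.norm(Ak - A0)
after = np.linalg.norm(Ak @ N - A0)
```

Because the identity is feasible whenever the pattern contains the diagonal, the mapped distance can never exceed the unmapped one; the map itself is cheap since each column is a small independent least-squares solve.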
20

Recycling Preconditioners for Sequences of Linear Systems and Matrix Reordering

Li, Ming 09 December 2015 (has links)
In science and engineering, many applications require the solution of a sequence of linear systems. There are many ways to solve linear systems, and we always look for methods that are faster and/or require less storage. In this dissertation, we focus on solving these systems with Krylov subspace methods and on how to obtain effective preconditioners inexpensively. We first present an application for electronic structure calculation. A sequence of slowly changing linear systems is produced in the simulation, in which the linear systems change by rank-one updates. Properties of the system matrix are analyzed. We use Krylov subspace methods to solve these linear systems, but Krylov subspace methods need a preconditioner to be efficient and robust. This raises the problem of computing a sequence of preconditioners corresponding to the sequence of linear systems. We use recycling preconditioners; that is, we update and reuse an existing preconditioner. We investigate and analyze several preconditioners, such as ILU(0), ILUTP, domain decomposition preconditioners, and inexact matrix-vector products with inner-outer iterations. Recycling a preconditioner produces cumulative updates to the preconditioner. To reduce the cost of applying the preconditioner, we propose approaches to truncate the cumulative preconditioner update, which is a low-rank matrix. Two approaches are developed. The first is to truncate the low-rank matrix using the best approximation given by the singular value decomposition (SVD). This is effective if many singular values are close to zero. If not, based on the ideas underlying GCROT and recycling, we use information from an Arnoldi recurrence to determine which directions to keep. We investigate and analyze the properties of both approaches, and prove that both truncation approaches work well under suitable conditions. We apply our truncation approaches to two applications.
One is the Quantum Monte Carlo (QMC) method and the other is a nonlinear second order partial differential equation (PDE). For the QMC method, we test both truncation approaches and analyze their results. For the PDE problem, we discretize the equations with a finite difference method, solve the nonlinear problem by Newton's method with a line search, and use Krylov subspace methods to solve the linear system in every nonlinear iteration. The preconditioner is updated by Broyden-type rank-one updates, and we finally truncate the preconditioner updates using the SVD. We demonstrate that the truncation is effective. In the last chapter, we develop a matrix reordering algorithm that improves the diagonal dominance of Slater matrices in the QMC method. If we reorder the entire Slater matrix, we call it global reordering, and the cost is O(N^3), which is expensive. As the change is geometrically localized and impacts only one row and a modest number of columns, we propose a local reordering of a submatrix of the Slater matrix. The submatrix has small dimension, independent of the size of the Slater matrix, and hence the local reordering has constant cost (with respect to the size of the Slater matrix). / Ph. D.
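The SVD-based truncation of the cumulative preconditioner update described above relies on the Eckart-Young theorem: keeping the top r singular triplets gives the best rank-r approximation, with spectral-norm error equal to the first discarded singular value. A minimal sketch (the rank-one corrections and their decaying magnitudes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, r = 50, 10, 6
# Cumulative update: a sum of k rank-one (Broyden-type) corrections whose
# magnitudes decay, so most of the update lives in a few directions.
U_vecs = rng.standard_normal((n, k)) * (0.5 ** np.arange(k))
V_vecs = rng.standard_normal((n, k))
E = U_vecs @ V_vecs.T                  # cumulative update, rank <= k

U, s, Vt = np.linalg.svd(E, full_matrices=False)
E_r = (U[:, :r] * s[:r]) @ Vt[:r]      # best rank-r approximation (Eckart-Young)

err = np.linalg.norm(E - E_r, 2)       # equals s[r], the first discarded value
```

Applying the truncated update costs O(nr) per vector instead of O(nk), which is how the cost of the recycled preconditioner is kept bounded over many solves.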
