Minimal and orthogonal residual methods and their generalizations for solving linear operator equations

Ernst, Oliver G., 10 December 2009
This thesis is concerned with the solution of linear operator equations by projection methods known as minimal residual (MR) and orthogonal residual (OR) methods. We begin with a rather abstract framework of approximation by orthogonal and oblique projection in Hilbert space. When these approximation schemes are applied to sequences of nested spaces, with a simple requirement relating trial and test spaces in the case of the OR method, one can derive at this rather general level the basic relations that have been proved in the literature for many specific Krylov subspace methods for solving linear systems of equations. The crucial quantities with which we describe the behavior of these methods are angles between subspaces. By replacing the given inner product with one that is basis-dependent, one can also incorporate methods based on non-orthogonal bases, such as those built on the non-Hermitian Lanczos process for solving linear systems. In fact, one can show that any reasonable approximation method based on a nested sequence of approximation spaces can be interpreted as an MR or OR method in this way. When these abstract approximation techniques are applied to the solution of linear operator equations, there are three generic algorithmic formulations, which we identify with some algorithms in the literature. Specializing further to Krylov trial and test spaces, we recover the well-known Krylov subspace methods. Moreover, we show that our general framework also covers in a natural way many recent generalizations of Krylov subspace methods, which employ techniques such as augmentation, deflation, restarts, and truncation. We conclude with a chapter on error and residual bounds, deriving some old and new results based on the angles framework. This work provides a natural and consistent framework for the sometimes confusing plethora of methods of Krylov subspace type introduced in the last 50 years.
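
To make the two defining conditions concrete, here is a brief sketch in generic Krylov-subspace notation (the symbols V_m, W_m, x_0, and r_0 are standard placeholders assumed here; the thesis works at a more general Hilbert-space level and its own notation may differ). Given a linear system Ax = b, an initial guess x_0 with residual r_0 = b - Ax_0, and nested trial spaces V_m and test spaces W_m, the m-th approximation x_m in x_0 + V_m is characterized by

\[
  \text{MR:}\quad \|b - A x_m\| \;=\; \min_{v \in \mathcal{V}_m} \|b - A(x_0 + v)\|,
\]
\[
  \text{OR:}\quad b - A x_m \;\perp\; \mathcal{W}_m .
\]

In the Krylov special case the trial space is

\[
  \mathcal{V}_m \;=\; \mathcal{K}_m(A, r_0) \;=\; \operatorname{span}\{r_0,\, A r_0,\, \dots,\, A^{m-1} r_0\},
\]

and, with the Euclidean inner product, the MR condition yields GMRES-type methods while the OR condition with W_m = V_m yields FOM-type methods, matching the Krylov specializations the abstract refers to.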