11

A Parallel Implicit Adaptive Mesh Refinement Algorithm for Predicting Unsteady Fully-compressible Reactive Flows

Northrup, Scott Andrew 13 August 2014 (has links)
A new parallel implicit adaptive mesh refinement (AMR) algorithm is developed for the prediction of unsteady behaviour of laminar flames. The scheme is applied to the solution of the system of partial-differential equations governing time-dependent, two- and three-dimensional, compressible laminar flows for reactive thermally perfect gaseous mixtures. A high-resolution finite-volume spatial discretization procedure is used to solve the conservation form of these equations on body-fitted multi-block hexahedral meshes. A local preconditioning technique is used to remove numerical stiffness and maintain solution accuracy for low-Mach-number, nearly incompressible flows. A flexible block-based octree data structure has been developed and is used to facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. The data structure also enables an efficient and scalable parallel implementation via domain decomposition. The parallel implicit formulation makes use of a dual-time-stepping-like approach with an implicit second-order backward discretization of the physical time, in which a Jacobian-free inexact Newton method with a preconditioned generalized minimal residual (GMRES) algorithm is used to solve the system of nonlinear algebraic equations arising from the temporal and spatial discretization procedures. An additive Schwarz global preconditioner is used in conjunction with block incomplete-LU-type local preconditioners for each sub-domain. The Schwarz preconditioning and block-based data structure readily allow efficient and scalable parallel implementations of the implicit AMR approach on distributed-memory multi-processor architectures. The scheme was applied to solutions of steady and unsteady laminar diffusion and premixed methane-air combustion and was found to accurately predict key flame characteristics. For a premixed flame under terrestrial gravity, the scheme accurately predicted the frequency of the natural buoyancy-induced oscillations. The performance of the proposed parallel implicit algorithm was assessed by comparisons to more conventional solution procedures and was found to significantly reduce the computational time required to achieve a solution in all cases investigated.
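To make the Jacobian-free Newton-Krylov idea described above concrete, the sketch below shows one inexact Newton correction in which the Jacobian-vector product is approximated by a finite difference of residual evaluations and restarted GMRES solves the resulting linear system. This is an illustrative sketch only, not the thesis code: the residual function, the perturbation size eps, and the restart/iteration limits are assumptions, and the dual time stepping, low-Mach preconditioning, and Schwarz/ILU preconditioning of the actual algorithm are omitted.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(residual, u, eps=1.0e-7):
    """One inexact Newton correction for residual(u) = 0 (hypothetical residual function)."""
    r = residual(u)
    n = u.size

    def jacvec(v):
        # Finite-difference approximation of the Jacobian-vector product J(u) @ v,
        # so only residual evaluations are needed (Jacobian-free).
        return (residual(u + eps * v) - r) / eps

    J = LinearOperator((n, n), matvec=jacvec)
    # Inexact Newton: the correction is solved only approximately by restarted GMRES.
    du, info = gmres(J, -r, restart=30, maxiter=200)
    return u + du, info
```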
12

Time domain model reduction by moment matching

Eid, Rudy. January 2009 (has links)
Also: Dissertation, Technische Universität München, 2009.
13

Structure preserving order reduction of large scale second order models

Salimbahrami, Seyed Behnam. Unknown Date (has links)
Dissertation, Technische Universität München, 2005.
14

Krylov subspace methods in finite precision: a unified approach

Zemke, Jens-Peter Max. Unknown Date (has links) (PDF)
Dissertation, Technische Universität Hamburg, 2003.
15

Inexact Solves in Interpolatory Model Reduction

Wyatt, Sarah A. 27 May 2009 (has links)
Dynamical systems are mathematical models characterized by a set of differential or difference equations. Due to the increasing demand for more accuracy, the number of equations involved may reach the order of thousands and even millions. With so many equations, it often becomes computationally cumbersome to work with these large-scale dynamical systems. Model reduction aims to replace the original system with a reduced system of significantly smaller dimension which will still describe the important dynamics of the large-scale model. Interpolation is one method used to obtain the reduced order model. This requires that the reduced order model interpolates the full order model at selected interpolation points. Reduced order models are obtained through the Krylov reduction process, which involves solving a sequence of linear systems. The Iterative Rational Krylov Algorithm (IRKA) iterates this Krylov reduction process to obtain an optimal H₂ reduced model. Especially in the large-scale setting, these linear systems often require employing inexact solves. The aim of this thesis is to investigate the impact of inexact solves on interpolatory model reduction. We considered preconditioning the linear systems, varying the stopping tolerances, employing GMRES and BiCG as the inexact solvers, and using different initial shift selections. For just one step of Krylov reduction, we verified theoretical properties of the interpolation error. Also, we found a linear improvement in the subspace angles between the inexact and exact subspaces provided that a good shift selection was used. For a poor shift selection, these angles often remained of the same order regardless of how accurately the linear systems were solved. These patterns were reflected in H₂ and H∞ errors between the inexact and exact subspaces, since these errors improved linearly with a good shift selection and were typically of the same order with a poor shift. We found that the shift selection also influenced the overall model reduction error between the full model and inexact model as these error norms were often several orders larger when a poor shift selection was used. For a given shift selection, the overall model reduction error typically remained of the same order for tolerances smaller than 1 × 10⁻³, which suggests that larger tolerances for the inexact solver may be used without necessarily augmenting the model reduction error. With preconditioned linear systems as well as BiCG, we found smaller errors between the inexact and exact models while the order of the overall model reduction error remained the same. With IRKA, we observed similar patterns as with just one step of Krylov reduction. However, we also found additional benefits associated with using an initial guess in the inexact solve and by varying the tolerance of the inexact solve. / Master of Science
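As a rough illustration of the Krylov reduction process that IRKA iterates, the sketch below runs a bare-bones IRKA on a SISO system (A, b, c) with E = I and exact dense solves; the shifted linear solves inside the loop are exactly the systems that become inexact GMRES or BiCG solves in the setting studied here. Shift initialization, handling of complex-conjugate pairs, and convergence checks are omitted, and the dense np.linalg.solve calls are an assumption for readability.

```python
import numpy as np

def irka_sketch(A, b, c, shifts, iters=20):
    """Minimal IRKA sketch (SISO, E = I, exact solves); b and c are 1-D arrays."""
    n = A.shape[0]
    for _ in range(iters):
        # Shifted solves: in the large-scale setting these are replaced by
        # inexact iterative solves (e.g. GMRES or BiCG).
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in shifts])
        W = np.column_stack([np.linalg.solve((s * np.eye(n) - A).conj().T, c) for s in shifts])
        V, _ = np.linalg.qr(V)
        W, _ = np.linalg.qr(W)
        # Oblique (Petrov-Galerkin) projection of A onto the two subspaces.
        Ar = np.linalg.solve(W.conj().T @ V, W.conj().T @ A @ V)
        # New interpolation points: mirror images of the reduced poles.
        shifts = -np.linalg.eigvals(Ar)
    return Ar, V, W, shifts
```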
16

Parallel Sparse Linear Algebra for Homotopy Methods

Driver, Maria Sosonkina Jr. 19 September 1997 (has links)
Globally convergent homotopy methods are used to solve difficult nonlinear systems of equations by tracking the zero curve of a homotopy map. Homotopy curve tracking involves solving a sequence of linear systems, which often vary greatly in difficulty. In this research, a popular iterative solution tool, GMRES(k), is adapted to deal with the sequence of such systems. The proposed adaptive strategy of GMRES(k) allows tuning of the restart parameter k based on the GMRES convergence rate for the given problem. Adaptive GMRES(k) is shown to be superior to several other iterative techniques on analog circuit simulation problems and on postbuckling structural analysis problems. Developing parallel techniques for robust but expensive sequential computations, such as globally convergent homotopy methods, is important. The design of these techniques encompasses the functionality of the iterative method (adaptive GMRES(k)) implemented sequentially and is based on the results of a parallel performance analysis of several implementations. An implementation of adaptive GMRES(k) with Householder reflections in its orthogonalization phase is developed. It is shown that the efficiency of linear system solution by the adaptive GMRES(k) algorithm depends on the change in problem difficulty when the problem is scaled. In contrast, a standard GMRES(k) implementation using Householder reflections maintains a constant efficiency with increase in problem size and number of processors, as concluded analytically and experimentally. The supporting numerical results are obtained on three distributed memory homogeneous parallel architectures: CRAY T3E, Intel Paragon, and IBM SP2. / Ph. D.
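The sketch below illustrates the adaptive-restart idea in a simplified form: run one restart cycle of GMRES(k) at a time and enlarge k whenever a cycle makes too little progress. The growth rule (increase k by 5 when a cycle reduces the residual by less than ten percent) and the cap k_max are illustrative assumptions, not the dissertation's actual tuning heuristic.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def adaptive_gmres(A, b, k0=10, k_max=50, tol=1e-8, max_cycles=200):
    """Restarted GMRES whose restart parameter k grows when progress stalls (A: sparse matrix)."""
    x = np.zeros(b.shape[0])
    k = k0
    bnorm = np.linalg.norm(b)
    for _ in range(max_cycles):
        r_old = np.linalg.norm(b - A @ x)
        if r_old <= tol * bnorm:
            break
        # One restart cycle of GMRES(k), warm-started from the current iterate.
        x, _ = gmres(A, b, x0=x, restart=k, maxiter=1)
        r_new = np.linalg.norm(b - A @ x)
        if r_new > 0.9 * r_old and k < k_max:
            k = min(k + 5, k_max)  # slow cycle: enlarge the restart parameter
    return x, k
```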
17

An interpolation-based approach to the weighted H2 model reduction problem

Anic, Branimir 10 October 2008 (has links)
Dynamical systems and their numerical simulation are very important for investigating physical and technical problems. The more accuracy is desired, the more equations are needed to reach that level of accuracy. This leads to large-scale dynamical systems. The problem is that computations become infeasible due to the limitations on time and/or memory in large-scale settings. Another important issue is numerical ill-conditioning. These are the main reasons for the need for model reduction, i.e. replacing the original system by a reduced system of much smaller dimension. One then uses the reduced models in order to simulate or control processes. The main goal of this thesis is to investigate an interpolation-based approach to the weighted-H2 model reduction problem. However, we will first discuss the regular (unweighted) H2 model reduction problem. We will revisit the interpolation conditions for H2-optimality, also known as the Meier-Luenberger conditions, and discuss how to obtain an optimal reduced-order system via projection. After having introduced the H2-norm and the unweighted-H2 model reduction problem, we will introduce the weighted-H2 model reduction problem. We will first derive a new error expression for the weighted-H2 model reduction problem. This error expression illustrates the significance of interpolation at the mirror images of the reduced system poles and the original system poles, as in the unweighted case. In the weighted case, however, the expression shows that interpolation at the mirror images of the poles of the weighting system is also significant. Finally, based on the new weighted-H2 error expression, we will propose an iteratively corrected interpolation-based algorithm for the weighted-H2 model reduction problem. Moreover, we will present new optimality conditions for the weighted-H2 approximation. These conditions take the form of structured orthogonality conditions similar to those for the unweighted case, which were derived by Antoulas, Beattie and Gugercin. We present several numerical examples to illustrate the effectiveness of the proposed approach and compare it with the frequency-weighted balanced truncation method. We observe that, for virtually all of our numerical examples, the proposed method outperforms the frequency-weighted balanced truncation method. / Master of Science
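For reference, the unweighted first-order H2-optimality (Meier-Luenberger) conditions mentioned above can be stated, for a SISO reduced model with simple poles, as Hermite interpolation at the mirror images of the reduced poles; the notation below (H for the full transfer function, \hat{H} for the reduced one) is ours.

```latex
% Meier--Luenberger conditions for an H2-optimal SISO reduced model \hat{H}(s)
% with simple poles \hat{\lambda}_1, \dots, \hat{\lambda}_r:
\hat{H}(-\hat{\lambda}_k) = H(-\hat{\lambda}_k),
\qquad
\hat{H}'(-\hat{\lambda}_k) = H'(-\hat{\lambda}_k),
\qquad k = 1, \dots, r.
```

That is, the reduced transfer function interpolates the full one, together with its first derivative, at the mirror images of the reduced-order poles.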
18

Méthodes tangentielles pour les réductions de modèles et applications / Tangential methods for model reductions and applications

Kaouane, Yassine 31 December 2018 (has links)
Large-scale simulations play a crucial role in the study of a great variety of complex physical phenomena, often leading to overwhelming demands on computational resources. Managing these demands constitutes the main motivation for model reduction: produce simpler reduced-order models which allow for faster and cheaper simulation while accurately approximating the behaviour of the original model. The presence of multiple-input, multiple-output (MIMO) systems makes the reduction process even more challenging. In this thesis we are interested in methods for reducing large-scale models using projection onto tangential Krylov subspaces. We focus on the development of techniques based on tangential interpolation. These present an effective and interesting alternative to balanced truncation, which is considered a reference method in the field, especially for the reduction of linear time-invariant systems. Finally, special attention is devoted to the development of new efficient algorithms and to their application to practical problems.
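For context, the tangential interpolation conditions that such Krylov projections enforce for a MIMO transfer function H(s) and its reduced-order approximation \hat{H}(s) can be written as below; the interpolation points and tangential directions are problem-dependent choices, and the notation here is ours.

```latex
% Right and left tangential interpolation at points \sigma_i, \mu_j
% along direction vectors b_i (right) and c_j (left):
H(\sigma_i)\, b_i = \hat{H}(\sigma_i)\, b_i,
\qquad
c_j^{*}\, H(\mu_j) = c_j^{*}\, \hat{H}(\mu_j).
```

Matching the full transfer function only along selected directions, rather than in every input-output channel, is what keeps the projection subspaces small for MIMO systems.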
19

Recycling Techniques for Sequences of Linear Systems and Eigenproblems

Carr, Arielle Katherine Grim 09 July 2021 (has links)
Sequences of matrices arise in many applications in science and engineering. In this thesis we consider matrices that are closely related (or closely related in groups), and we take advantage of the small differences between them to efficiently solve sequences of linear systems and eigenproblems. Recycling techniques, such as recycling preconditioners or subspaces, are popular approaches for reducing computational cost. In this thesis, we introduce two novel approaches for recycling previously computed information for a subsequent system or eigenproblem, and demonstrate good results for sequences arising in several applications. Preconditioners are often essential for fast convergence of iterative methods. However, computing a good preconditioner can be very expensive, and when solving a sequence of linear systems, we want to avoid computing a new preconditioner too often. Instead, we can recycle a previously computed preconditioner, for which we have good convergence behavior of the preconditioned system. We propose an update technique we call the sparse approximate map, or SAM update, that approximately maps one matrix to another matrix in our sequence. SAM updates are very cheap to compute and apply, preserve good convergence properties of a previously computed preconditioner, and help to amortize the cost of that preconditioner over many linear solves. When solving a sequence of eigenproblems, we can reduce the computational cost of constructing the Krylov space starting with a single vector by warm-starting the eigensolver with a subspace instead. We propose an algorithm to warm-start the Krylov-Schur method using a previously computed approximate invariant subspace. We first compute the approximate Krylov decomposition for a matrix with minimal residual, and use this space to warm-start the eigensolver. We account for the residual matrix when expanding, truncating, and deflating the decomposition and show that the norm of the residual monotonically decreases. This method is effective in reducing the total number of matrix-vector products, and computes an approximate invariant subspace that is as accurate as the one computed with standard Krylov-Schur. In applications where the matrix-vector products require an implicit linear solve, we incorporate Krylov subspace recycling. Finally, in many applications, sequences of matrices take the special form of the sum of the identity matrix, a very low-rank matrix, and a small-in-norm matrix. We consider convergence rates for GMRES applied to these matrices by identifying the sources of sensitivity. / Doctor of Philosophy / Problems in science and engineering often require the solution to many linear systems, or a sequence of systems, that model the behavior of physical phenomena. In order to construct highly accurate mathematical models to describe this behavior, the resulting matrices can be very large, and therefore the linear system can be very expensive to solve. To efficiently solve a sequence of large linear systems, we often use iterative methods, which can require preconditioning techniques to achieve fast convergence. The preconditioners themselves can be very expensive to compute. So, we propose a cheap update technique that approximately maps one matrix to another in the sequence for which we already have a good preconditioner. We then combine the preconditioner and the map and use the updated preconditioner for the current system. 
Sequences of eigenvalue problems also arise in many scientific applications, such as those modeling disk brake squeal in a motor vehicle. To accurately represent this physical system, large eigenvalue problems must be solved. The behavior of certain eigenvalues can reveal instability in the physical system, but to identify these eigenvalues, we must solve a sequence of very large eigenproblems. The eigensolvers used to solve eigenproblems generally begin with a single vector; instead, we propose starting the method with several vectors, or a subspace. This allows us to reduce the total number of iterations required by the eigensolver while still producing an accurate solution. We demonstrate good results for both of these approaches using sequences of linear systems and eigenvalue problems arising in several real-world applications. Finally, in many applications, sequences of matrices take the special form of the sum of the identity matrix, a very low-rank matrix, and a small-in-norm matrix. We examine the convergence behavior of the iterative method GMRES when applied to such a sequence of matrices.
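A minimal sketch of the sparse approximate map (SAM) idea described above is given below: compute a sparse N so that A_k N ≈ A_0 by solving a small least-squares problem per column, and then reuse the old preconditioner P_0 for A_0 through the composition N P_0^{-1}. The sparsity pattern chosen here (the pattern of A_k itself) and the dense per-column least-squares solves are illustrative assumptions; the thesis's implementation details may differ.

```python
import numpy as np
import scipy.sparse as sp

def sam_update(A0, Ak):
    """Sparse approximate map N with Ak @ N ~ A0, built column by column."""
    A0 = sp.csc_matrix(A0)
    Ak = sp.csc_matrix(Ak)
    n = Ak.shape[1]
    cols = []
    for j in range(n):
        # Assumed sparsity pattern for column j of N: the pattern of column j of Ak.
        rows = Ak[:, [j]].nonzero()[0]
        # Small least-squares problem: min || Ak[:, rows] @ x - A0[:, j] ||_2
        x, *_ = np.linalg.lstsq(Ak[:, rows].toarray(),
                                A0[:, [j]].toarray().ravel(), rcond=None)
        col = np.zeros(n)
        col[rows] = x
        cols.append(col)
    return sp.csc_matrix(np.column_stack(cols))
```

The resulting map is then composed with the previously computed preconditioner for A_0, so that a single expensive preconditioner can be amortized over many linear solves in the sequence.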
20

Iterative Methods for Computing Eigenvalues and Exponentials of Large Matrices

Zhang, Ping 01 January 2009 (has links)
In this dissertation, we study iterative methods for computing eigenvalues and exponentials of large matrices. These types of computational problems arise in a large number of applications, including mathematical models in economics and in physical and biological processes. Although numerical methods for computing eigenvalues and matrix exponentials have been well studied in the literature, there is a lack of analysis of inexact iterative methods for eigenvalue computation and of certain variants of the Krylov subspace methods for approximating matrix exponentials. In this work, we propose an inexact inverse subspace iteration method that generalizes the inexact inverse iteration for computing multiple and clustered eigenvalues of a generalized eigenvalue problem. Compared with other methods, the inexact inverse subspace iteration method is generally more robust. Convergence analysis shows that the linear convergence rate of the exact case is preserved. The second part of the work presents an inverse Lanczos method to approximate the product of a matrix exponential and a vector. This is proposed to allow the use of a larger time step in a time-propagation scheme for solving linear initial value problems. Error analysis is given for the inverse Lanczos method, the standard Lanczos method, and the shift-and-invert Lanczos method. The analysis demonstrates different behaviors of these variants and helps in choosing which variant to use in practice.
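The standard Lanczos approximation of the action of a matrix exponential, which the inverse and shift-and-invert variants discussed above modify by running the same recurrence with an inverted or shifted-and-inverted operator in place of A, can be sketched as follows for a symmetric matrix: build an orthonormal Krylov basis V_m with tridiagonal projection T_m and take ||v|| V_m exp(t T_m) e_1. The basis size m is an assumption, and breakdown handling and reorthogonalization are omitted.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_expm_action(A, v, t, m=30):
    """Approximate exp(t*A) @ v for symmetric A via an m-step Lanczos recurrence."""
    n = v.size
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    # Tridiagonal projection T_m = V_m^T A V_m and a small dense exponential.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.norm(v) * (V @ expm(t * T)[:, 0])
```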
