  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Finite difference methods for solving mildly nonlinear elliptic partial differential equations

El-Nakla, Jehad A. H. January 1987 (has links)
This thesis is concerned with the solution of large systems of linear algebraic equations in which the coefficient matrix is sparse. Such systems occur in the numerical solution of elliptic partial differential equations by finite-difference methods. The thesis investigates the applicability of several well-known iterative methods, usually used to solve linear PDE systems, to a set of four mildly nonlinear test problems. In Chapter 4 we study the basic iterative and semi-iterative methods for linear systems. In particular, we derive and apply the CS, SOR and SSOR methods, and the SSOR method extrapolated by the Chebyshev acceleration strategy. In Chapter 5, three ways of accelerating the SOR method are described, together with their application to the test problems. The Newton-SOR and SOR-Newton methods are also derived and applied to the same problems. In Chapter 6, the Alternating Direction Implicit methods are described. Two versions are studied in detail, namely the Peaceman-Rachford and Douglas-Rachford methods; they are applied to the test problems for cycles of 1, 2 and 3 parameters. In Chapter 7, the conjugate gradient method and the conjugate gradient acceleration procedure are described, together with some preconditioning techniques. An approximate LU-decomposition algorithm (the ALUBOT algorithm) is also given and applied in conjunction with the Picard and Newton methods. Chapter 8 contains the final conclusions.
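The SOR iteration central to Chapters 4 and 5 can be sketched on the simplest model problem. The following is a minimal, generic illustration (not the thesis's own code): SOR applied to the three-point finite-difference discretization of -u'' = f on (0, 1) with zero boundary values, i.e., the linear case for clarity. The grid size, relaxation factor omega and tolerance are arbitrary choices.

```python
import numpy as np

def sor_poisson_1d(f, n, omega=1.8, tol=1e-10, max_iter=20000):
    """Solve -u'' = f on (0,1) with u(0) = u(1) = 0 by SOR sweeps
    over the standard three-point finite-difference system."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)      # interior grid points
    b = h * h * f(x)                  # scaled right-hand side
    u = np.zeros(n)
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            gs = 0.5 * (left + right + b[i])       # Gauss-Seidel value
            new = (1 - omega) * u[i] + omega * gs  # SOR relaxation
            diff = max(diff, abs(new - u[i]))
            u[i] = new
        if diff < tol:
            break
    return x, u

x, u = sor_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x), n=63)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

With f(x) = pi^2 sin(pi x) the exact solution is sin(pi x), so the maximum error reflects the O(h^2) accuracy of the discretization rather than the iteration.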
2

Puasono lygties sprendimas naudojantis šaltinio apibendrintomis hiperbolinės funkcijomis / Solving Poisson's equation using generalized hyperbolic functions of the source

Brenčys, Liutauras 04 August 2011 (has links)
An algorithm is constructed for solving Poisson's equation via the potentials of "balls". With this method, the problem of solving Poisson's equation is reduced to solving a system of linear algebraic equations. A program implementing the method was written and tested in the mathematical package MATHCAD. The solutions obtained were compared with those obtained analytically, and their accuracy was assessed. The method can be used to compute real physical potentials, that is, the potential produced by real charges.
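The reduction of Poisson's equation to a system of linear algebraic equations, which this abstract describes, can be illustrated in one dimension. This is a generic sketch (not the thesis's MATHCAD program): the three-point discretization of -u'' = f turns the boundary-value problem into the linear system A u = h^2 f, which is solved directly and compared with the analytic solution, as the abstract does for its own method.

```python
import numpy as np

# Discretize -u'' = f on (0,1) with zero boundary values; the PDE
# problem becomes the linear algebraic system A u = h^2 f, where A
# is the tridiagonal matrix with 2 on the diagonal and -1 off it.
n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
f = np.pi**2 * np.sin(np.pi * x)     # chosen so that u(x) = sin(pi x)
u = np.linalg.solve(A, h * h * f)

# Accuracy against the analytic solution, as in the abstract
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The maximum deviation from the analytic solution is of order h^2, the discretization accuracy of the three-point scheme.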
3

Aproximace maticemi malé hodnosti a jejich aplikace / Approximations by low-rank matrices and their applications

Outrata, Michal January 2018 (has links)
Consider the problem of solving a large system of linear algebraic equations, using the Krylov subspace methods. In order to find the solution efficiently, the system often needs to be preconditioned, i.e., transformed prior to the iterative scheme. A feature of the system that often enables fast solution with efficient preconditioners is the structural sparsity of the corresponding matrix. A recent development brought another and a slightly different phenomenon called the data sparsity. In contrast to the classical (structural) sparsity, the data sparsity refers to an uneven distribution of extractable information inside the matrix. In practice, the data sparsity of a matrix typically means that its blocks can be successfully approximated by matrices of low rank. Naturally, this may significantly change the character of the numerical computations involving the matrix. The thesis focuses on finding ways to construct Cholesky-based preconditioners for the conjugate gradient method to solve systems with symmetric and positive definite matrices, exploiting a combination of the data and structural sparsity. Methods to exploit the data sparsity are evolving very fast, influencing not only iterative solvers but direct solvers as well. Hierarchical schemes based on the data sparsity concepts can be derived...
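The data sparsity this abstract describes, blocks that are well approximated by low-rank matrices, can be shown in a few lines. This is a minimal, generic illustration (not code from the thesis): a block of a kernel matrix generated by two well-separated point sets is numerically low-rank, so a truncated SVD captures it with a handful of terms. The kernel 1/|x - y|, the point sets, and the rank are arbitrary choices.

```python
import numpy as np

def truncate_to_rank(B, k):
    """Best rank-k approximation of block B (Eckart-Young theorem),
    returned as factors U_k, V_k with B ~ U_k @ V_k."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]

# A smooth kernel evaluated on well-separated point sets yields a
# numerically low-rank block: its singular values decay rapidly.
xs = np.linspace(0.0, 1.0, 200)
ys = np.linspace(3.0, 4.0, 200)          # well separated from xs
B = 1.0 / np.abs(xs[:, None] - ys[None, :])
Uk, Vk = truncate_to_rank(B, 5)
rel_err = np.linalg.norm(B - Uk @ Vk) / np.linalg.norm(B)
```

Storing the two factors takes 2 * 200 * 5 = 2000 numbers instead of the 40000 entries of the full block, which is the saving that hierarchical (data-sparse) schemes exploit block by block.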
