1

On Updating Preconditioners for the Iterative Solution of Linear Systems

Guerrero Flores, Danny Joel 02 July 2018
The main topic of this thesis is updating preconditioners for solving large sparse linear systems Ax=b with Krylov iterative methods. Two types of problems are considered. The first is the iterative solution of non-singular, non-symmetric linear systems in which the coefficient matrix A has a low-rank skew-symmetric part or can be well approximated by a matrix with a low-rank skew-symmetric part. Such systems arise from the discretization of PDEs with certain Neumann boundary conditions, the discretization of integral equations, and path-following methods; examples include the Bratu problem and Love's integral equation. The second type consists of least squares (LS) problems solved through the equivalent system of normal equations; more precisely, we consider modified and rank-deficient LS problems. A modified LS problem is one in which the set of linear relations is updated with new information, a new variable is added or, conversely, some information or a variable is removed from the set. Rank-deficient LS problems have a coefficient matrix without full rank, which makes it difficult to compute an incomplete factorization of the normal equations. LS problems arise in many large-scale applications in science and engineering, for instance neural networks, linear programming, exploration seismology, and image processing. Incomplete LU factorizations, or incomplete Cholesky factorizations when the matrix is symmetric positive definite, are the preconditioners most commonly used with iterative methods.

The main contribution of this thesis is a technique for updating preconditioners by bordering: an approximate decomposition of an equivalent augmented linear system is computed and used as a preconditioner for the original problem. The theoretical study and the numerical experiments presented in this thesis show the performance of the proposed preconditioning technique and its competitiveness compared with other methods available in the literature for computing preconditioners for the problems studied.

Guerrero Flores, DJ. (2018). On Updating Preconditioners for the Iterative Solution of Linear Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/104923
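The bordering idea described above can be illustrated with a small generic example; what follows is only a sketch of the general principle under assumed data, not the construction developed in the thesis. For a coefficient matrix split as $A = M + U V^{T}$, where $U V^{T}$ is a low-rank (here skew-symmetric) term, the bordered system \[ \begin{bmatrix} M & U \\ V^{T} & -I \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix} \] is equivalent to $(M + U V^{T})\,x = b$ with $y = V^{T} x$, so a factorization built around $M$ can be reused to precondition the original system. The Python/SciPy sketch below follows this pattern with an ILU factorization of $M$ and a Woodbury-style low-rank correction; the matrices, sizes, and tolerances are invented for the illustration.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative data: M is an easy-to-factor sparse matrix, U V^T a rank-2
# skew-symmetric perturbation (U V^T = u1 u2^T - u2 u1^T).
n, r = 1000, 2
M = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal(n), rng.standard_normal(n)
U = np.column_stack([u1, u2])
V = np.column_stack([u2, -u1])

A = sp.csc_matrix(M + sp.csc_matrix(U) @ sp.csc_matrix(V).T)
b = rng.standard_normal(n)

ilu = spla.spilu(M, drop_tol=1e-4)          # incomplete factorization of the "easy" part M
M_inv_U = np.column_stack([ilu.solve(U[:, j]) for j in range(r)])
C = np.eye(r) + V.T @ M_inv_U               # small r-by-r capacitance matrix

def apply_prec(x):
    # Approximate (M + U V^T)^{-1} x using ILU(M) and the Woodbury identity.
    y = ilu.solve(x)
    return y - M_inv_U @ np.linalg.solve(C, V.T @ y)

P = spla.LinearOperator((n, n), matvec=apply_prec)
x, info = spla.gmres(A, b, M=P)
print("converged:", info == 0, " residual norm:", np.linalg.norm(A @ x - b))

With a tridiagonal $M$ the incomplete factors are essentially exact, so the preconditioned GMRES iteration converges in a handful of steps; for a realistic $M$ the quality of the preconditioner depends on the drop tolerance and on how well the low-rank term is captured.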
2

Algorithms in data mining using matrix and tensor methods

Savas, Berkant January 2008
In many fields of science, engineering, and economics, large amounts of data are stored, and these data need to be analyzed in order to extract information for various purposes. Data mining is a general concept covering the different tools used for this kind of analysis, and the development of mathematical models and efficient algorithms is of key importance. In this thesis we discuss algorithms for the reduced rank regression problem and algorithms for computing the best multilinear rank approximation of tensors.

The first two papers deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically, the problem is \[ \min_{\operatorname{rank}(X) = k} \det\,(B - X A)(B - X A)^{T}, \] where $A$ and $B$ are given matrices and we want to find the $X$, subject to the rank condition, that minimizes the determinant. This problem is not properly stated, since it involves implicit assumptions on $A$ and $B$ ensuring that $(B - X A)(B - X A)^{T}$ is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix, where the volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify the properties of the input and output signals that cause a singular objective matrix.

Classification problems occur in many applications, where the task is to determine the label or class of an unknown object. The third paper concerns the classification of handwritten digits in the context of tensors, i.e. multidimensional data arrays. Tensor and multilinear algebra attracts more and more attention because of the multidimensional structure of the data collected in various applications. Two classification algorithms are given, based on the higher order singular value decomposition (HOSVD). The main algorithm performs a 98--99\% data reduction using the HOSVD prior to the construction of the class models. The models are computed as sets of orthonormal bases spanning the dominant subspaces of the different classes, and an unknown digit is expressed as a linear combination of the basis vectors. The resulting algorithm achieves a 5\% classification error with a fairly low amount of computation.

The remaining two papers discuss computational methods for the best multilinear rank approximation problem \[ \min_{\mathcal{B}} \| \mathcal{A} - \mathcal{B} \|, \] where $\mathcal{A}$ is a given tensor and we seek the best low multilinear rank approximation tensor $\mathcal{B}$. This is a generalization of the best low rank matrix approximation problem. It is well known that for matrices the solution is obtained by truncating the singular value decomposition (SVD) of the matrix, but for tensors in general the truncated HOSVD does not give an optimal approximation. For example, a third order tensor $\mathcal{B} \in \mathbb{R}^{I \times J \times K}$ with $\operatorname{rank}(\mathcal{B}) = (r_1, r_2, r_3)$ can be written as the product \[ \mathcal{B} = (X, Y, Z) \cdot \mathcal{C}, \qquad b_{ijk} = \sum_{\lambda,\mu,\nu} x_{i\lambda}\, y_{j\mu}\, z_{k\nu}\, c_{\lambda\mu\nu}, \] where $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times r_3}$ and $X \in \mathbb{R}^{I \times r_1}$, $Y \in \mathbb{R}^{J \times r_2}$, and $Z \in \mathbb{R}^{K \times r_3}$ are matrices of full column rank. Since it is no restriction to assume that $X$, $Y$, and $Z$ have orthonormal columns, and due to these constraints, the approximation problem can be considered as a nonlinear optimization problem defined on a product of Grassmann manifolds.
We introduce novel techniques for multilinear algebraic manipulations that enable both theoretical analysis and algorithmic implementation. These techniques are used to solve the approximation problem with Newton and quasi-Newton methods specifically adapted to operate on products of Grassmann manifolds. The presented algorithms are suited for small, large, and sparse problems and, when applied to difficult problems, they clearly outperform alternating least squares methods, which are standard in the field.
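The truncated HOSVD used throughout the later papers can be sketched in a few lines of NumPy; this is a generic illustration of the decomposition itself, not the classification or Grassmann-based optimization code from the thesis, and the tensor sizes and ranks below are invented for the example.

import numpy as np

def unfold(T, mode):
    # Mode-n matricization: move the given mode to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    # Factor matrices with orthonormal columns (dominant left singular vectors
    # of each unfolding) and the core C such that T ~ (X, Y, Z) . C.
    factors = []
    for mode, r in enumerate(ranks):
        Umat, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(Umat[:, :r])
    C = T
    for mode, F in enumerate(factors):
        C = np.moveaxis(np.tensordot(F.T, C, axes=(1, mode)), 0, mode)
    return factors, C

# Build a tensor of exact multilinear rank (5, 6, 7), then recover it.
rng = np.random.default_rng(0)
X0, Y0, Z0 = (rng.standard_normal(s) for s in [(20, 5), (30, 6), (40, 7)])
C0 = rng.standard_normal((5, 6, 7))
A = np.einsum("il,jm,kn,lmn->ijk", X0, Y0, Z0, C0)

(X, Y, Z), C = truncated_hosvd(A, (5, 6, 7))
B = np.einsum("il,jm,kn,lmn->ijk", X, Y, Z, C)
print("relative error:", np.linalg.norm(A - B) / np.linalg.norm(A))   # ~ machine precision

When the multilinear rank is truncated below the exact rank, the result is generally a good but not optimal approximation, which is precisely the gap the Grassmann-based Newton and quasi-Newton algorithms discussed above are designed to close.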
