41

Preconditioned Newton methods for ill-posed problems / Vorkonditionierte Newton-Verfahren für schlecht gestellte Probleme

Langer, Stefan 21 June 2007 (has links)
No description available.
42

Algorithms in data mining using matrix and tensor methods

Savas, Berkant January 2008 (has links)
In many fields of science, engineering, and economics, large amounts of data are stored, and there is a need to analyze these data in order to extract information for various purposes. Data mining is a general concept involving different tools for performing this kind of analysis. The development of mathematical models and efficient algorithms is of key importance. In this thesis we discuss algorithms for the reduced rank regression problem and algorithms for the computation of the best multilinear rank approximation of tensors.

The first two papers deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically, the problem is
\[ \min_{\operatorname{rank}(X) = k} \det\bigl[(B - X A)(B - X A)^{\mathsf{T}}\bigr], \]
where $A$ and $B$ are given matrices and we want to find the $X$, under the rank constraint, that minimizes the determinant. This problem is not properly stated, since it involves the implicit assumption on $A$ and $B$ that $(B - X A)(B - X A)^{\mathsf{T}}$ is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix, where the volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify properties of the input and output signals that cause a singular objective matrix.

Classification problems occur in many applications, where the task is to determine the label or class of an unknown object. The third paper concerns the classification of handwritten digits in the context of tensors, or multidimensional data arrays. Tensor and multilinear algebra is an area attracting more and more attention because of the multidimensional structure of the data collected in various applications. Two classification algorithms are given, based on the higher order singular value decomposition (HOSVD). The main algorithm performs a data reduction of 98--99\% using the HOSVD prior to the construction of the class models. The models are computed as a set of orthonormal bases spanning the dominant subspaces of the different classes, and an unknown digit is expressed as a linear combination of the basis vectors. The resulting algorithm achieves a classification error of about 5\% with a fairly low amount of computation.

The remaining two papers discuss computational methods for the best multilinear rank approximation problem
\[ \min_{\mathcal{B}} \| \mathcal{A} - \mathcal{B}\|, \]
where $\mathcal{A}$ is a given tensor and we seek the best low multilinear rank approximation tensor $\mathcal{B}$. This is a generalization of the best low rank matrix approximation problem. It is well known that for matrices the solution is given by truncating the singular values in the singular value decomposition (SVD) of the matrix, but for tensors in general the truncated HOSVD does not give an optimal approximation. For example, a third order tensor $\mathcal{B} \in \mathbb{R}^{I \times J \times K}$ with multilinear rank $(r_1,r_2,r_3)$ can be written as the product
\[ \mathcal{B} = (X, Y, Z) \cdot \mathcal{C}, \qquad b_{ijk}=\sum_{\lambda,\mu,\nu} x_{i\lambda}\, y_{j\mu}\, z_{k\nu}\, c_{\lambda\mu\nu}, \]
where $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times r_3}$ and $X \in \mathbb{R}^{I \times r_1}$, $Y \in \mathbb{R}^{J \times r_2}$, and $Z \in \mathbb{R}^{K \times r_3}$ are matrices of full column rank. Since it is no restriction to assume that $X$, $Y$, and $Z$ have orthonormal columns, the approximation problem can be considered as a nonlinear optimization problem defined on a product of Grassmann manifolds.
We introduce novel techniques for multilinear algebraic manipulation that enable both theoretical analysis and algorithmic implementation. These techniques are used to solve the approximation problem with Newton and quasi-Newton methods specifically adapted to operate on products of Grassmann manifolds. The presented algorithms are suited for small, large, and sparse problems, and when applied to difficult problems they clearly outperform alternating least squares methods, which are standard in the field.
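
To make the multilinear notation above concrete, the following is a minimal NumPy sketch (not code from the thesis) of the truncated HOSVD of a third-order tensor: each factor matrix is taken from the dominant left singular vectors of a mode-n unfolding, and the approximation is assembled with the multilinear product $\mathcal{B} = (X, Y, Z) \cdot \mathcal{C}$. The function names and the toy tensor are illustrative assumptions; as noted above, this truncated HOSVD is in general not the optimal approximation that the Grassmann-based methods compute.

import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(A, ranks):
    """Return factor matrices (X, Y, Z) and core C with A approximately (X, Y, Z) . C."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(A, mode), full_matrices=False)
        factors.append(U[:, :r])  # dominant left singular vectors of the unfolding
    X, Y, Z = factors
    # Core tensor C = (X^T, Y^T, Z^T) . A
    C = np.einsum('ijk,ia,jb,kc->abc', A, X, Y, Z)
    return X, Y, Z, C

def multilinear_product(C, X, Y, Z):
    """b_{ijk} = sum_{lam,mu,nu} x_{i,lam} y_{j,mu} z_{k,nu} c_{lam,mu,nu}."""
    return np.einsum('abc,ia,jb,kc->ijk', C, X, Y, Z)

A = np.random.rand(20, 30, 40)                 # toy data tensor
X, Y, Z, C = truncated_hosvd(A, (5, 5, 5))
B = multilinear_product(C, X, Y, Z)
print('relative error:', np.linalg.norm(A - B) / np.linalg.norm(A))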
43

Βελτιωμένες αλγοριθμικές τεχνικές επίλυσης συστημάτων μη γραμμικών εξισώσεων / Improved algorithmic techniques for solving systems of nonlinear equations

Μαλιχουτσάκη, Ελευθερία 22 December 2009 (has links)
In this thesis we deal with the problem of solving systems of nonlinear algebraic and/or transcendental equations, and in particular with improved algorithmic techniques for solving such systems. Nonlinear systems arise in many domains of science, such as Mechanics, Medicine, Chemistry, Robotics, Economics, etc. There are several methods for solving systems of nonlinear equations. Among them, Newton's method is the best known, because of its quadratic convergence when a good initial guess is available and the Jacobian matrix is nonsingular. Newton's method also has some disadvantages, such as only local convergence, the need to compute the Jacobian matrix, and the exact solution of a linear system at each iteration. In this master's thesis we analyze Newton's method and categorize methods that address these drawbacks, e.g. Quasi-Newton and Inexact Newton methods. More recent methods described in this thesis include the MRV method and two new Newton-type methods without direct function evaluations, suitable for problems with inaccurate function values or high computational cost. Finally, we present the basic principles of Interval Analysis and the Interval Newton method.
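
As a minimal illustration of the classical iteration discussed above, here is a generic NumPy sketch (not code from the thesis) of Newton's method for a system F(x) = 0: a linear system J(x) dx = -F(x) is solved exactly at every step, which is precisely the cost that the Quasi-Newton and Inexact Newton variants try to reduce. The example system and tolerances are illustrative assumptions.

import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0; quadratic convergence needs a good x0 and a nonsingular Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        # Solve the linear system J(x) dx = -F(x) exactly at each iteration.
        dx = np.linalg.solve(J(x), -Fx)
        x = x + dx
    return x

# Example: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
print(newton_system(F, J, x0=[2.0, 0.5]))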
44

Multikanálová dekonvoluce obrazů / Multichannel Image Deconvolution

Bradáč, Pavel January 2009 (has links)
This master's thesis deals with image restoration using deconvolution. The first part of the thesis explains the terms underlying deconvolution theory, such as two-dimensional signals, the distortion model, noise, and convolution. The second part deals with deconvolution methods based on the Bayesian approach, which rests on probability principles. The third part focuses on the Alternating Minimization Algorithm for Multichannel Blind Deconvolution. Finally, this algorithm is implemented in Matlab using the NAG C Library, followed by a comparison of different optimization methods (simplex, steepest descent, quasi-Newton), regularization forms (Tikhonov, Total Variation), and other parameters used by the deconvolution algorithm.
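
For orientation only, the following NumPy sketch shows a much simpler single-channel, non-blind deconvolution with Tikhonov regularization solved in the Fourier domain; it is not the alternating minimization algorithm of the thesis and does not use Matlab or the NAG C Library, but it illustrates how a regularization term stabilizes inversion of the blur. The kernel, noise level, and regularization weight lam are illustrative assumptions.

import numpy as np

def tikhonov_deconvolve(y, h, lam=1e-2):
    """Restore image y blurred by kernel h (periodic blur model): argmin_x ||h*x - y||^2 + lam ||x||^2."""
    H = np.fft.fft2(h, s=y.shape)               # transfer function of the blur
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H)**2 + lam)   # closed-form minimizer, frequency by frequency
    return np.real(np.fft.ifft2(X))

# Toy example: blur a random image with a 5x5 box kernel, add noise, restore.
rng = np.random.default_rng(0)
x_true = rng.random((64, 64))
h = np.ones((5, 5)) / 25.0
y = np.real(np.fft.ifft2(np.fft.fft2(h, s=x_true.shape) * np.fft.fft2(x_true)))
y += 0.01 * rng.standard_normal(y.shape)
x_rec = tikhonov_deconvolve(y, h, lam=1e-2)
print('relative restoration error:', np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))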
