1 |
Studies on Optimization Methods for Nonlinear Semidefinite Programming Problems / 非線形半正定値計画問題に対する最適化手法の研究. Yamakawa, Yuya, 23 March 2015.
Kyoto University / 0048 / New system, course doctorate / Doctor of Informatics / Degree No. 甲第19122号 / 情博第568号 / 新制||情||100 (University Library) / 32073 / Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University / Examiners: Prof. 山下 信雄 (chair), Prof. 太田 快人, Prof. 永持 仁 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
2 |
Development of a nonlinear equations solver with superlinear convergence at regular singularities. Alabdallah, Suleiman, 10 October 2014.
In this thesis we present a new type of line search for Newton's method, based on range-space interpolation as suggested by Wedin et al. [LW84]. The resulting stabilized Newton algorithm is shown, theoretically and practically, to be efficient in the case of nonsingular roots. Moreover, it is observed to maintain a superlinear rate of convergence at simple singularities, whereas Newton's method without a line search is known to converge only linearly from almost all points near a singular root. In view of applications to complementarity problems we also consider systems whose Jacobian is not differentiable but only semismooth. Again, our stabilized and accelerated Newton method achieves superlinearity at simple singularities.
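The linear-versus-superlinear contrast described in this abstract can be reproduced on a toy problem. The sketch below (Python; the test function, the doubled step, and all names are illustrative assumptions, not the range-space interpolation line search developed in the thesis) applies plain Newton to a scalar equation with a simple singular root, where the error is roughly halved per step, and compares it with the classical doubled Newton step for a root of multiplicity two, which converges quadratically.

```python
# Illustrative sketch only: plain Newton at the simple singular root of
# F(x) = x^2 (1 + x), where F'(0) = 0, converges linearly with ratio ~1/2;
# the classical doubled step for a double root restores fast convergence.
# This is NOT the stabilized line-search algorithm of the thesis.

def newton(F, dF, x0, steps, step_scale=1.0):
    """Scaled Newton iteration; returns the list of iterates."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - step_scale * F(x) / dF(x))
    return xs

F  = lambda x: x * x * (1.0 + x)        # root x* = 0 has multiplicity 2
dF = lambda x: 2.0 * x + 3.0 * x * x

plain   = newton(F, dF, x0=0.5, steps=8)                   # linear, ratio ~0.5
doubled = newton(F, dF, x0=0.5, steps=8, step_scale=2.0)   # quadratic

for k in range(1, 9):
    print(f"step {k}:  plain |x_k| = {abs(plain[k]):.3e}   "
          f"doubled |x_k| = {abs(doubled[k]):.3e}")
```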
3 |
Rational Krylov Methods for Operator Functions. Güttel, Stefan, 26 March 2010.
We present a unified and self-contained treatment of rational Krylov methods for approximating the product of a function of a linear operator with a vector. With the help of general rational Krylov decompositions we reveal the connections between seemingly different approximation methods, such as the Rayleigh–Ritz or shift-and-invert method, and derive new methods, for example a restarted rational Krylov method and a related method based on rational interpolation in prescribed nodes. Various theorems known for polynomial Krylov spaces are generalized to the rational Krylov case. Computational issues, such as the computation of so-called matrix Rayleigh quotients or parallel variants of rational Arnoldi algorithms, are discussed. We also present novel estimates for the error arising from inexact linear system solves and the approximation error of the Rayleigh–Ritz method. Rational Krylov methods involve several parameters and we discuss their optimal choice by considering the underlying rational approximation problems. In particular, we present different classes of optimal parameters and collect formulas for the associated convergence rates. Often the parameters leading to best convergence rates are not optimal in terms of computation time required by the resulting rational Krylov method. We explain this observation and present new approaches for computing parameters that are preferable for computations. We give a heuristic explanation of superlinear convergence effects observed with the Rayleigh–Ritz method, utilizing a new theory of the convergence of rational Ritz values. All theoretical results are tested and illustrated by numerical examples. Numerous links to the historical and recent literature are included.
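As a concrete, hedged illustration of the Rayleigh–Ritz extraction mentioned in this abstract, the following NumPy/SciPy sketch approximates exp(A)b from a shift-and-invert (single-pole rational) Krylov space; the shift sigma, the space dimension m, and the diagonal test matrix are arbitrary assumptions chosen for illustration, not parameter choices from the thesis.

```python
# Minimal sketch (assumed setup, not the thesis implementation):
# approximate exp(A) b via a shift-and-invert rational Krylov space,
# i.e. an Arnoldi process run with (A - sigma*I)^{-1}, followed by the
# Rayleigh-Ritz extraction  f(A) b  ~=  V f(V^T A V) V^T b.
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

def shift_and_invert_krylov_expm(A, b, sigma, m):
    """Rayleigh-Ritz approximation of expm(A) @ b from the rational
    Krylov space span{b, (A - sigma I)^-1 b, ..., (A - sigma I)^-(m-1) b}."""
    n = b.size
    lu = lu_factor(A - sigma * np.eye(n))       # factor the single pole once
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):                      # Arnoldi with (A - sigma I)^-1
        w = lu_solve(lu, V[:, j])
        for i in range(j + 1):                  # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    Am = V.T @ A @ V                            # "matrix Rayleigh quotient"
    return V @ (expm(Am) @ (V.T @ b))

# Small diagonal test problem (assumed data): spectrum of A in [-100, -1],
# pole sigma = 10 placed on the positive real axis, away from the spectrum.
rng = np.random.default_rng(0)
A = -np.diag(np.linspace(1.0, 100.0, 200))
b = rng.standard_normal(200)
approx = shift_and_invert_krylov_expm(A, b, sigma=10.0, m=15)
exact = expm(A) @ b
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```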