1 |
A coordinate gradient descent method for structured nonsmooth optimization / Yun, Sangwoon, January 2007 (has links)
Thesis (Ph. D.)--University of Washington, 2007. / Vita. Includes bibliographical references (p. 125-133).
|
2 |
On optimality conditions, minimizing and stationary sequences for nonsmooth functions. / CUHK electronic theses & dissertations collection, January 1997 (has links)
Xinbao Li. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (p. 62-64). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012]. System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web.
|
3 |
Nonsmooth analysis and optimization. January 1993 (has links)
Huang Liren. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 96). / Abstract --- p.1 / Introduction --- p.2 / References --- p.5 / Chapter Chapter 1. --- Some elementary results in nonsmooth analysis and optimization --- p.6 / Chapter 1. --- Some properties for "lim sup" and "lim inf" --- p.6 / Chapter 2. --- The directional derivative of the sup-type function --- p.8 / Chapter 3. --- Some results in nonsmooth analysis and optimization --- p.12 / References --- p.19 / Chapter Chapter 2. --- On generalized second-order derivatives and Taylor expansions in nonsmooth optimization --- p.20 / Chapter 1. --- Introduction --- p.20 / Chapter 2. --- Dini directional derivatives, Clarke's directional derivatives and generalized second-order directional derivatives --- p.20 / Chapter 3. --- On Cominetti and Correa's conjecture --- p.28 / Chapter 4. --- Generalized second-order Taylor expansion --- p.36 / Chapter 5. --- Detailed proof of Theorem 2.4.2 --- p.40 / Chapter 6. --- Corollaries of Theorem 2.4.2 and Theorem 2.4.3 --- p.43 / Chapter 7. --- Some applications in optimization --- p.46 / References --- p.51 / Chapter Chapter 3. --- Second-order necessary and sufficient conditions in nonsmooth optimization --- p.53 / Chapter 1. --- Introduction --- p.53 / Chapter 2. --- Second-order necessary and sufficient conditions without constraints --- p.56 / Chapter 3. --- Second-order necessary conditions with constraints --- p.66 / Chapter 4. --- Sufficient conditions theorem with constraints --- p.77 / References --- p.87 / Appendix --- p.89 / References --- p.96
|
4 |
Sensor-based motion planning via nonsmooth analysis / Rusaw, Shawn, January 2002 (has links)
In this thesis we present a novel approach to sensor-based motion planning developed using the mathematical tools provided by the field of nonsmooth analysis. The work is based on a broad body of background material developed using the tools of differential topology (smooth analysis), which is limited to simple cases such as a point or circular robot. Nonsmooth analysis is required to extend this background work to the case of a polygonal robot moving amidst polygonal obstacles. We present a detailed nonsmooth analysis of the distance function for arbitrary configuration spaces and use this analysis to develop a planner for a rotating and translating polygonal mobile robot. Using the tools of nonsmooth analysis, we then describe a one-dimensional nonsmooth roadmap of the robot's freespace called the Nonsmooth Critical Set + Nonsmooth Generalised Voronoi Graph (NCrit+NGVG), where the robot is equidistant to a number of obstacles, in a critical configuration, or passing between two obstacles. We then use the related field of nonsmooth control theory to develop several provably stable control laws for following and exploring the nonsmooth roadmap. Finally, we implement a motion planner in simulation and on a real polygonal mobile robot, thus verifying the utility and practicality of the nonsmooth roadmap.
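The nonsmoothness at the heart of this approach is easy to see in the distance function itself: the clearance of a polygonal robot is a pointwise minimum over many smooth vertex-to-edge distances, and a minimum is not differentiable where the closest feature pair changes. The sketch below is a hypothetical illustration of that structure (the polygons and offsets are made up for the example; it is not code from the thesis).

    # Hypothetical sketch (not from the thesis): the clearance between a translating
    # polygonal robot and a polygonal obstacle is a minimum of smooth vertex-to-edge
    # distances, hence only piecewise smooth in the configuration parameters.
    import numpy as np

    def point_segment_distance(p, a, b):
        """Euclidean distance from point p to segment [a, b]."""
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def clearance(robot, obstacle):
        """Min distance between two non-intersecting polygons given as vertex arrays."""
        def one_way(P, Q):
            return min(point_segment_distance(p, Q[i], Q[(i + 1) % len(Q)])
                       for p in P for i in range(len(Q)))
        return min(one_way(robot, obstacle), one_way(obstacle, robot))

    # A unit-square robot translated along the x-axis past a triangular obstacle:
    # as the closest vertex/edge pair switches, kinks appear in the clearance.
    square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    triangle = np.array([[3.0, -1.0], [4.0, 2.0], [5.0, -1.0]])
    for x in np.linspace(0.0, 1.5, 7):
        print(f"offset {x:.2f}: clearance {clearance(square + [x, 0.0], triangle):.3f}")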
|
5 |
Conformal reduction of boundary problems for harmonic functions in a plane domain with strong singularities on the boundary / Grudsky, Serguey; Tarkhanov, Nikolai, January 2012 (has links)
We consider the Dirichlet, Neumann and Zaremba problems for harmonic functions in a bounded plane domain with nonsmooth boundary. The boundary curve belongs to one of the following three classes: sectorial curves, logarithmic spirals and spirals of power type. To study the problem we apply a familiar method of Vekua-Muskhelishvili, which consists in using a conformal mapping of the unit disk onto the domain to pull back the problem to a boundary problem for harmonic functions in the disk. The latter is in turn reduced to a Toeplitz operator equation on the unit circle whose symbol bears discontinuities of the second kind. We develop a constructive invertibility theory for Toeplitz operators and thus derive solvability conditions as well as explicit formulas for solutions.
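For a smooth boundary the pull-back idea is elementary to reproduce numerically: transport the Dirichlet data to the unit circle through the conformal map and extend it harmonically by its Fourier series. The sketch below is only a generic illustration of that step, under the assumption that phi is a conformal map of the disk (z + 0.3*z**2 is univalent for the chosen coefficient) and with made-up boundary data g; the singular domains of the paper require the constructive Toeplitz machinery instead.

    # Generic illustration of the Vekua-Muskhelishvili pull-back (not the paper's
    # Toeplitz theory): solve the Dirichlet problem on phi(unit disk) by Fourier series.
    import numpy as np

    phi = lambda z: z + 0.3 * z**2            # assumed conformal map: unit disk -> domain
    g = lambda w: w.real**2                    # assumed Dirichlet data on the image boundary

    N = 512
    theta = 2.0 * np.pi * np.arange(N) / N
    h = g(phi(np.exp(1j * theta)))             # boundary data pulled back to the unit circle
    c = np.fft.fft(h) / N                      # Fourier coefficients c_k of h
    k = np.fft.fftfreq(N, d=1.0 / N)           # integer frequencies ..., -2, -1, 0, 1, 2, ...

    def u_disk(z):
        """Harmonic extension of h into the disk: sum_k c_k r^|k| e^{ik theta}."""
        r, t = np.abs(z), np.angle(z)
        return np.real(np.sum(c * (r ** np.abs(k)) * np.exp(1j * k * t)))

    # u_disk(phi^{-1}(w)) is then harmonic on the image domain with boundary values g,
    # since harmonicity is invariant under conformal maps in the plane.
    # Sanity check: the value at the centre equals the mean of the boundary data.
    print(u_disk(0.0), h.mean())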
|
6 |
Non-linear analogues of Lagrange functions in constrained optimization / Giri, Jason, January 2005 (has links)
"This thesis investigates several non-linear analogues of Lagrange functions in the hope of answering the question 'Is it possible to generalise Lagrange functions such that they may be applied to a range of nonconvex objective problems?' The answer to this question is found to be yes for a particular class of optimization problems. Furthermore the thesis asserts that in derivative free optimization the general schema which is most theoretically and practically appealing involves the reformulation of both objective and constraint functions, whilst the least practically successful approach for everything but the most simple convex case is the augmented Lagrangian approach." / Doctor of Philosophy
|
7 |
Non-linear analogues of Lagrange functions in constrained optimization / Giri, Jason. University of Ballarat, January 2005 (has links)
"This thesis investigates several non-linear analogues of Lagrange functions in the hope of answering the question 'Is it possible to generalise Lagrange functions such that they may be applied to a range of nonconvex objective problems?' The answer to this question is found to be yes for a particular class of optimization problems. Furthermore the thesis asserts that in derivative free optimization the general schema which is most theoretically and practically appealing involves the reformulation of both objective and constraint functions, whilst the least practically successful approach for everything but the most simple convex case is the augmented Lagrangian approach." / Doctor of Philosophy
|
8 |
Sensitivity Analysis of Convex Relaxations for Nonsmooth Global Optimization / Yuan, Yingwei, January 2020 (has links)
Nonsmoothness appears in various applications in chemical engineering, including multi-stream heat exchangers, nonsmooth flash calculations, and process integration. In terms of numerical approaches, convex/concave relaxations of static and dynamic systems may also exhibit nonsmoothness. These relaxations are used in deterministic methods for global optimization. This thesis presents several new theoretical results for nonsmooth sensitivity analysis, with an emphasis on convex relaxations.
Firstly, the "compass difference" and established ODE results by Pang and Stewart are used to describe a correct subgradient for a nonsmooth dynamic system with two parameters. This sensitivity information can be computed using standard ODE solvers.
Next, this thesis also uses the compass difference to obtain a subgradient for the Tsoukalas-Mitsos convex relaxations of composite functions of two variables.
Lastly, this thesis develops a new general subgradient result for Tsoukalas-Mitsos convex relaxations of composite functions. This result places no limit on the dimension of the input variables and yields the whole subdifferential of the Tsoukalas-Mitsos convex relaxations. Compared to Tsoukalas and Mitsos' previous subdifferential results, it also does not require solving an additional dual optimization problem. The new subgradient results are extended to obtain directional derivatives for Tsoukalas-Mitsos convex relaxations. Both the subgradient and directional-derivative results are computationally tractable: the subgradients can be calculated by both the vector forward and reverse modes of automatic differentiation (AD). A proof-of-concept implementation in Matlab is discussed. / Thesis / Master of Applied Science (MASc)
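A small, generic illustration of the objects involved: the textbook McCormick convex relaxation of a bilinear term w = x*y on a box is the pointwise maximum of two affine underestimators, and the gradient of any active affine piece is a valid subgradient of the relaxation. The sketch below uses that standard relaxation only as an example (it is not the Tsoukalas-Mitsos scheme analysed in the thesis, and the evaluation point and box are made up).

    # Generic illustration (not the Tsoukalas-Mitsos relaxations of the thesis):
    # the McCormick convex envelope of w = x*y on [xL,xU] x [yL,yU] is
    #   max( xL*y + yL*x - xL*yL,  xU*y + yU*x - xU*yU ),
    # a max of affine functions, so the gradient of an active piece is a subgradient.
    import numpy as np

    def mccormick_underestimator(x, y, xL, xU, yL, yU):
        """Convex underestimator of x*y on the box, plus one subgradient wrt (x, y)."""
        pieces = [
            (xL * y + yL * x - xL * yL, np.array([yL, xL])),   # (value, gradient)
            (xU * y + yU * x - xU * yU, np.array([yU, xU])),
        ]
        value, subgrad = max(pieces, key=lambda p: p[0])        # active affine piece
        return value, subgrad

    v, g = mccormick_underestimator(0.5, -0.25, xL=-1.0, xU=1.0, yL=-1.0, yU=1.0)
    print("relaxation value:", v, "  true product:", 0.5 * -0.25, "  subgradient:", g)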
|
9 |
A Nonsmooth Nonconvex Descent Algorithm / Mankau, Jan Peter, 17 January 2017 (has links) (PDF)
In many applications nonsmooth nonconvex energy functions, which are Lipschitz continuous, appear quite naturally. Contact mechanics with friction is a classic example. A second example is the 1-Laplace operator and its eigenfunctions.
In this work we give an algorithm such that, for every locally Lipschitz continuous function f, every accumulation point of a sequence produced by the algorithm is a critical point of f in the sense of Clarke. Here f is defined on a reflexive Banach space X such that X and its dual space X' are strictly convex and Clarkson's inequalities hold. (For example, Sobolev spaces and every closed subspace equipped with the Sobolev norm satisfy these assumptions for p > 1.) The algorithm is designed primarily to solve variational problems or their high-dimensional discretizations, but it can be applied to a variety of locally Lipschitz functions.
In elastic contact mechanics the strain energy is often smooth and nonconvex on a suitable domain, while the contact and friction energies are nonsmooth and supported on a subspace of substantially smaller dimension, since points in the interior of the bodies affect only the strain energy. For such elastic contact problems we suggest a specialization of our algorithm which treats the smooth part with Newton-like methods. If the gradient of the entire energy function is semismooth close to the minimizer, we can even prove superlinear convergence of this specialization.
We test the algorithm and its specialization on a couple of benchmark problems. Moreover, we apply the algorithm to the 1-Laplace minimization problem restricted to finite-dimensional subspaces of piecewise affine, continuous functions.
The algorithm developed here uses ideas of the bundle trust region method by Schramm and a new generalization of the concept of gradients on a set. The basic idea behind these gradients on sets is to find a stable descent direction, that is, a descent direction on an entire neighborhood of an iteration point. In this way we avoid oscillation of the gradients and very small descent steps (in the smooth as well as the nonsmooth case). It turns out that the norm-smallest element of the gradient on a set provides such a stable descent direction.
To our knowledge, the algorithm presented here is the first that can treat locally Lipschitz continuous functions in this generality. In particular, nonsmooth nonconvex functions on such large finite-dimensional Banach spaces have not been studied before. We will show that the algorithm is very robust and often faster than common algorithms. Furthermore, we will see that with this algorithm it is possible, for the first time, to reliably compute the first eigenfunctions of the 1-Laplace operator up to discretization errors.
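The "norm-smallest element of the gradient on a set" idea has a simple finite-dimensional analogue: sample gradients of f in a small neighbourhood of the iterate, take the minimum-norm element of their convex hull, and step against it. The sketch below is a gradient-sampling-style illustration of that idea only, under invented parameters and a toy test function; it is not the dissertation's algorithm.

    # Generic illustration of a "norm-smallest gradient over a neighbourhood"
    # descent direction (gradient-sampling flavour, not the dissertation's method).
    import numpy as np
    from scipy.optimize import minimize

    def min_norm_element(G):
        """Minimum-norm point of the convex hull of the rows of G (a small QP)."""
        m = G.shape[0]
        obj = lambda lam: np.sum((lam @ G) ** 2)
        cons = [{"type": "eq", "fun": lambda lam: np.sum(lam) - 1.0}]
        res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0.0, 1.0)] * m,
                       constraints=cons, method="SLSQP")
        return res.x @ G

    def stabilized_descent_direction(f, x, eps=1e-2, samples=8, fd=1e-7, rng=None):
        """Sample finite-difference gradients of f near x and return the negative
        minimum-norm element of their convex hull as a descent direction."""
        rng = np.random.default_rng(0) if rng is None else rng
        pts = x + eps * rng.uniform(-1.0, 1.0, size=(samples, x.size))
        grads = np.array([[(f(p + fd * e) - f(p)) / fd for e in np.eye(x.size)]
                          for p in pts])
        return -min_norm_element(grads)

    # Toy nonsmooth function f(x) = |x0| + 2*|x1|, evaluated next to the kink in x1:
    # the sampled gradients are (1, +2) and (1, -2); their min-norm convex combination
    # is typically close to (1, 0), so the step does not zig-zag across the kink.
    f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
    print("descent direction:", stabilized_descent_direction(f, np.array([1.0, 1e-3])))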
|
10 |
A Nonsmooth Nonconvex Descent Algorithm / Mankau, Jan Peter, 09 December 2016 (has links)
In many applications nonsmooth nonconvex energy functions, which are Lipschitz continuous, appear quite naturally. Contact mechanics with friction is a classic example. A second example is the 1-Laplace operator and its eigenfunctions.
In this work we give an algorithm such that, for every locally Lipschitz continuous function f, every accumulation point of a sequence produced by the algorithm is a critical point of f in the sense of Clarke. Here f is defined on a reflexive Banach space X such that X and its dual space X' are strictly convex and Clarkson's inequalities hold. (For example, Sobolev spaces and every closed subspace equipped with the Sobolev norm satisfy these assumptions for p > 1.) The algorithm is designed primarily to solve variational problems or their high-dimensional discretizations, but it can be applied to a variety of locally Lipschitz functions.
In elastic contact mechanics the strain energy is often smooth and nonconvex on a suitable domain, while the contact and friction energies are nonsmooth and supported on a subspace of substantially smaller dimension, since points in the interior of the bodies affect only the strain energy. For such elastic contact problems we suggest a specialization of our algorithm which treats the smooth part with Newton-like methods. If the gradient of the entire energy function is semismooth close to the minimizer, we can even prove superlinear convergence of this specialization.
We test the algorithm and its specialization on a couple of benchmark problems. Moreover, we apply the algorithm to the 1-Laplace minimization problem restricted to finite-dimensional subspaces of piecewise affine, continuous functions.
The algorithm developed here uses ideas of the bundle trust region method by Schramm and a new generalization of the concept of gradients on a set. The basic idea behind these gradients on sets is to find a stable descent direction, that is, a descent direction on an entire neighborhood of an iteration point. In this way we avoid oscillation of the gradients and very small descent steps (in the smooth as well as the nonsmooth case). It turns out that the norm-smallest element of the gradient on a set provides such a stable descent direction.
To our knowledge, the algorithm presented here is the first that can treat locally Lipschitz continuous functions in this generality. In particular, nonsmooth nonconvex functions on such large finite-dimensional Banach spaces have not been studied before. We will show that the algorithm is very robust and often faster than common algorithms. Furthermore, we will see that with this algorithm it is possible, for the first time, to reliably compute the first eigenfunctions of the 1-Laplace operator up to discretization errors.
|