11 |
Sur les dérivées généralisées, les conditions d'optimalité et l'unicité des solutions en optimisation non lisse / On generalized derivatives, optimality conditions and uniqueness of solutions in nonsmooth optimization
Le Thanh, Tung 13 August 2011 (has links)
En optimisation les conditions d’optimalité jouent un rôle primordial pour détecter les solutions optimales et leur étude occupe une place significative dans la recherche actuelle. Afin d’exprimer adéquatement des conditions d’optimalité, les chercheurs ont introduit diverses notions de dérivées généralisées non seulement pour des fonctions non lisses, mais aussi pour des fonctions à valeurs ensemblistes, dites applications multivoques ou multifonctions. Cette thèse porte sur l’application des deux nouveaux concepts de dérivées généralisées : les ensembles variationnels de Khanh-Tuan et les approximations de Jourani-Thibault, aux problèmes d’optimisation multiobjectif et aux problèmes d’équilibre vectoriel. L’enjeu principal est d’obtenir des conditions d’optimalité du premier et du second ordre pour les problèmes ayant des données multivoques ou univoques non lisses et pas forcément continues, et des conditions assurant l’unicité des solutions dans les problèmes d’équilibre vectoriel. / Optimality conditions for nonsmooth optimization have become one of the most important topics in the study of optimization-related problems. Various notions of generalized derivatives have been introduced to establish optimality conditions. Besides establishing optimality conditions, generalized derivatives are also an important tool for studying the local uniqueness of solutions. During the last three decades, these topics have been developed, generalized and applied to many fields of mathematics by many authors all over the world. The purpose of this thesis is to investigate the above topics. It consists of five chapters. In Chapter 1, we develop elements of calculus of variational sets for set-valued mappings, which were recently introduced in Khanh and Tuan (2008). Most of the usual calculus rules, from chain and sum rules to rules for unions, intersections, products and other operations on mappings, are established. As applications, we provide a direct employment of sum rules in establishing an explicit formula for a variational set of the solution map to a parametrized variational inequality in terms of variational sets of the data. Furthermore, chain rules and sum or product rules are also used to prove optimality conditions for weak solutions of some vector optimization problems. In Chapter 2, we propose notions of higher-order outer and inner radial derivatives of set-valued maps and obtain main calculus rules. Some direct applications of these rules in proving optimality conditions for particular optimization problems are provided. Then, we establish higher-order necessary and sufficient optimality conditions for a general set-valued vector optimization problem with inequality constraints. Chapter 3 is devoted to using first- and second-order approximations, which were introduced by Jourani and Thibault (1993) and Allali and Amaroq (1997), as generalized derivatives, to establish both necessary and sufficient optimality conditions for various kinds of solutions to nonsmooth vector equilibrium problems with functional constraints. Our first-order conditions are shown to be applicable in many cases where existing ones cannot be applied. The second-order conditions are new. In Chapter 4, we consider nonsmooth multi-objective fractional programming on normed spaces. Using first- and second-order approximations as generalized derivatives, first- and second-order optimality conditions are established. For the sufficient conditions, no convexity is needed. 
Our results can be applied even in infinite-dimensional cases involving infinitely discontinuous maps. In Chapter 5, we establish sufficient conditions for the local uniqueness of solutions to nonsmooth strong and weak vector equilibrium problems. Also, by using approximations, our results remain valid even in cases where the maps involved in the problems suffer infinite discontinuity at the considered point.
|
12 |
Análise não suave e aplicações em otimização
Costa, Tiago Mendonça de. January 2011 (has links)
Orientador: Geraldo Nunes Silva / Banca: Luis Antônio Fernandes de Oliveira / Banca: Yurilev Chalco Cano / Resumo: Neste trabalho, estamos interessados em apresentar uma abordagem relacionando a análise não suave com a otimização. Primeiramente, é realizado um estudo sobre conceitos da análise não suave, como cones normais, cone tangente de Bouligand, subdiferenciais proximal, estrita, limite e de Clarke. Com esses conceitos exibimos uma série de resultados, por exemplo, uma caracterização para funções de Lipschitz, subdiferenciais da soma, produto e máximo de funções semicontínuas inferiormente, e uma versão não suave dos multiplicadores de Lagrange, i.e., condições de primeira ordem para otimalidade de problemas de otimização não suaves. Também é feito um estudo sobre as condições de segunda ordem para otimalidade em problemas de otimização não suaves e, para isso, foi necessária a apresentação de outros conceitos e propriedades, como os de Hessiana generalizada, Jacobiana aproximada e Hessiana aproximada. Após a apresentação desses resultados, é feita uma análise sobre dois teoremas que fornecem, com abordagens distintas, condições suficientes de segunda ordem para problemas de otimização não suaves, e este trabalho é finalizado com a apresentação de um resultado que é considerado uma "unificação" desses dois teoremas / Abstract: In this work we are interested in presenting an approach relating nonsmooth analysis to optimization. First we study concepts of nonsmooth analysis such as the normal cone, Bouligand's tangent cone, the proximal, strict and limiting subdifferentials, as well as Clarke's subdifferential. We then exhibit a series of results, for example, a characterization of Lipschitz functions, subdifferential sum, product and maximum rules for lower semicontinuous functions, and a nonsmooth version of Lagrange's multiplier rule, that is, first-order necessary conditions of optimality for nonsmooth optimization problems. We also study second-order optimality conditions for nonsmooth optimization problems; to do that, it was necessary to present further concepts and properties concerning the generalized Hessian, the approximate Jacobian and the approximate Hessian. After presenting these concepts and results, an analysis of two theorems that provide, with different approaches, second-order conditions of optimality for nonsmooth problems is made. Finally, this dissertation is completed with the exposition of a result that is considered a "unification" of these two theorems / Mestre
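As a point of reference, the nonsmooth Lagrange multiplier rule mentioned above has the following generic Clarke-subdifferential form (a standard statement, recalled here for orientation and not quoted from the dissertation): if x^* is a local minimizer of f subject to g_i(x) <= 0, i = 1,...,m, with f and g_i locally Lipschitz and a suitable constraint qualification holding at x^*, then there exist multipliers lambda_i >= 0 such that

    0 \in \partial_C f(x^*) + \sum_{i=1}^{m} \lambda_i \, \partial_C g_i(x^*), \qquad \lambda_i \, g_i(x^*) = 0, \quad i = 1, \dots, m,

where \partial_C denotes Clarke's subdifferential; for continuously differentiable data this collapses to the classical Karush-Kuhn-Tucker conditions.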
|
13 |
Método de direções interiores ao epígrafo - IED para otimização não diferenciável e não convexa via Dualidade Lagrangeana: estratégias para minimização da Lagrangeana aumentada
Franco, Hernando José Rocha 08 June 2018 (has links)
A teoria clássica de otimização presume a existência de certas condições, por exemplo, que as funções envolvidas em um problema desta natureza sejam pelo menos uma vez continuamente diferenciáveis. Entretanto, em muitas aplicações práticas que requerem o emprego de métodos de otimização, essa característica não se encontra presente. Problemas de otimização não diferenciáveis são considerados mais difíceis de lidar. Nesta classe, aqueles que envolvem funções não convexas são ainda mais complexos. O Interior Epigraph Directions (IED) é um método de otimização que se baseia na teoria da Dualidade Lagrangeana e se aplica à resolução de problemas não diferenciáveis, não convexos e com restrições. Neste estudo, apresentamos duas novas versões para o referido método a partir de implementações computacionais de outros algoritmos. A primeira versão, denominada IED+NFDNA, recebeu a incorporação de uma implementação do algoritmo Nonsmooth Feasible Direction Nonconvex Algorithm (NFDNA). Esta versão, ao ser aplicada em experimentos numéricos com problemas teste da literatura, apresentou desempenho satisfatório quando comparada ao IED original e a outros solvers de otimização. Com o objetivo de aperfeiçoar mais o método, reduzindo sua dependência de parâmetros iniciais e também do cálculo de subgradientes, uma segunda versão, IED+GA, foi desenvolvida com a utilização de algoritmos genéticos. Além da resolução de problemas teste, o IED+GA obteve bons resultados quando aplicado a problemas de engenharia. / The classical theory of optimization assumes the existence of certain conditions, for example, that the functions involved in a problem of this nature are at least once continuously differentiable. However, in many practical applications that require the use of optimization methods, this characteristic is not present. Non-differentiable optimization problems are considered more difficult to deal with. In this class, those involving nonconvex functions are even more complex. Interior Epigraph Directions (IED) is an optimization method that is based on Lagrangean duality theory and applies to the solution of non-differentiable, non-convex, constrained problems. In this study, we present two new versions of this method built from computational implementations of other algorithms. The first version, called IED+NFDNA, incorporates an implementation of the Nonsmooth Feasible Direction Nonconvex Algorithm (NFDNA). When applied in numerical experiments with test problems from the literature, this version showed satisfactory performance compared to the original IED and to other optimization solvers. A second version, IED+GA, was developed using genetic algorithms in order to further improve the method, reducing its dependence on initial parameters and also on the calculation of subgradients. In addition to solving test problems, IED+GA achieved good results when applied to engineering problems.
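For context, a standard form of the augmented Lagrangian minimized in such dual schemes, written here for an equality-constrained model min f(x) subject to h(x) = 0 purely as an illustration (the precise dual scheme used by IED may differ), is

    L_c(x, \lambda) = f(x) + \lambda^{\top} h(x) + \frac{c}{2} \, \| h(x) \|^2,

with the multiplier update \lambda \leftarrow \lambda + c \, h(x) after each approximate minimization in x. The two variants described above differ precisely in how this inner nonsmooth, nonconvex minimization is carried out: via NFDNA feasible directions (IED+NFDNA) or via a genetic algorithm (IED+GA).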
|
14 |
Eigenschaften pseudo-regulärer Funktionen und einige Anwendungen auf Optimierungsaufgaben
Fúsek, Peter 26 February 1999 (has links)
In PostScript format / PostScript
|
15 |
Mathematical analysis of a dynamical system for sparse recovery
Balavoine, Aurele 22 May 2014 (has links)
This thesis presents the mathematical analysis of a continuous-time system for sparse signal recovery. Sparse recovery arises in Compressed Sensing (CS), where signals of large dimension must be recovered from a small number of linear measurements, and can be accomplished by solving a complex optimization program. While many solvers have been proposed and analyzed to solve such programs digitally, their high complexity currently prevents their use in real-time applications. By contrast, a continuous-time neural network implemented in analog VLSI could lead to significant gains in both time and power consumption. The contributions of this thesis are threefold. First, convergence results for neural networks that solve a large class of nonsmooth optimization programs are presented. These results extend previous analysis by allowing the interconnection matrix to be singular and the activation function to have many constant regions and grow unbounded. The exponential convergence rate of the networks is demonstrated and an analytic expression for the convergence speed is given. Second, these results are specialized to the L1-minimization problem, which is the best-known approach to solving the sparse recovery problem. The analysis relies on standard techniques in CS and proves that the network takes an efficient path toward the solution for parameters that match results obtained for digital solvers. Third, the convergence rate and accuracy of both the continuous-time system and its discrete-time equivalent are derived in the case where the underlying sparse signal is time-varying and the measurements are streaming. Such a study is of great interest for practical applications that need to operate in real time, when the data are streaming at high rates or the computational resources are limited. In conclusion, while existing analyses concentrated on discrete-time algorithms for the recovery of static signals, this thesis provides convergence rate and accuracy results for the recovery of static signals using a continuous-time solver, and for the recovery of time-varying signals with both a discrete-time and a continuous-time solver.
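To make the continuous-time idea concrete, the following is a minimal sketch of one such dynamical system, a locally-competitive-style network for min_a 0.5*||y - Phi a||^2 + lam*||a||_1, simulated here by forward Euler. The dynamics, the soft-threshold activation and all parameter values are illustrative assumptions and not the exact system analyzed in the thesis.

    import numpy as np

    def soft_threshold(u, lam):
        # soft-thresholding activation a = T_lam(u)
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    def lca_recover(Phi, y, lam=0.02, tau=1.0, dt=0.01, steps=20000):
        """Forward-Euler simulation of LCA-style dynamics
        tau * du/dt = -u + Phi^T (y - Phi a) + a,  with a = T_lam(u)."""
        u = np.zeros(Phi.shape[1])
        for _ in range(steps):
            a = soft_threshold(u, lam)
            u += (dt / tau) * (-u + Phi.T @ (y - Phi @ a) + a)
        return soft_threshold(u, lam)

    # toy usage: recover a 5-sparse vector from 60 random measurements
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((60, 200)) / np.sqrt(60)
    x = np.zeros(200)
    x[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
    x_hat = lca_recover(Phi, Phi @ x)

In analog hardware the state u would evolve in continuous time; the Euler loop above only mimics that evolution, and lam trades off sparsity against data fidelity.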
|
16 |
Eine spezielle Klasse von Zwei-Ebenen-Optimierungsaufgaben
Lohse, Sebastian 17 March 2011 (has links) (PDF)
The dissertation investigates bilevel (two-level) optimization problems with a special structure. For the so-called pessimistic solution approach, the topics of interest are existence results for solutions, the vertex property of a solution, a regularization technique, optimality conditions and, for the linear case, a method for computing a globally pessimistic solution. For the optimistic solution approach, a generalization of the solution concept is given first, followed by considerations on the complexity of the problem and on optimality conditions, as well as a descent method and a branch-and-bound method for the linear case. The thesis closes with an application example and numerical test computations.
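For orientation, the generic optimistic and pessimistic bilevel models behind this terminology (stated in a standard textbook form, not in the special structure treated in the thesis) are

    \min_{x \in X} \; \inf_{y \in \Psi(x)} F(x, y) \quad \text{(optimistic)}, \qquad
    \min_{x \in X} \; \sup_{y \in \Psi(x)} F(x, y) \quad \text{(pessimistic)},

where \Psi(x) = \operatorname{argmin}_y \{ f(x, y) : g(x, y) \le 0 \} is the lower-level solution set. The pessimistic value hedges against the worst lower-level reaction, which is why already the existence of solutions becomes delicate.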
|
17 |
A Nonsmooth Nonconvex Descent Algorithm
Mankau, Jan Peter 17 January 2017 (has links) (PDF)
In many applications nonsmooth nonconvex energy functions, which are Lipschitz continuous, appear quite naturally. Contact mechanics with friction is a classic example. A second example is the 1-Laplace operator and its eigenfunctions.
In this work we give an algorithm such that, for every locally Lipschitz continuous function f, every accumulation point of a sequence produced by this algorithm is a critical point of f in the sense of Clarke. Here f is defined on a reflexive Banach space X such that X and its dual space X' are strictly convex and Clarkson's inequalities hold. (E.g. Sobolev spaces and every closed subspace equipped with the Sobolev norm satisfy these assumptions for p>1.) This algorithm is designed primarily to solve variational problems or their high-dimensional discretizations, but it can be applied to a variety of locally Lipschitz functions.
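Criticality in the sense of Clarke, as used above, is the standard notion for locally Lipschitz functions (recalled here for convenience, not quoted from the thesis): with the generalized directional derivative

    f^{\circ}(x; v) = \limsup_{y \to x, \ t \downarrow 0} \frac{f(y + t v) - f(y)}{t}, \qquad
    \partial_C f(x) = \{ \xi \in X^{*} : \langle \xi, v \rangle \le f^{\circ}(x; v) \ \text{for all } v \in X \},

a point x is critical when 0 \in \partial_C f(x); for continuously differentiable f this reduces to \nabla f(x) = 0.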
In elastic contact mechanics the strain energy is often smooth and nonconvex on a suitable domain, while the contact and friction energies are nonsmooth and are supported on a subspace of substantially smaller dimension than that of the strain energy, since points in the interior of the bodies affect only the strain energy. For such elastic contact problems we suggest a specialization of our algorithm which treats the smooth part with Newton-like methods. If the gradient of the entire energy function is semismooth close to the minimizer, we can even prove superlinear convergence of this specialization.
We test the algorithm and its specialization on several benchmark problems. Moreover, we apply the algorithm to the 1-Laplace minimization problem restricted to finite-dimensional subspaces of piecewise affine, continuous functions.
The algorithm developed here uses ideas of the bundle trust region method by Schramm and a new generalization of the concept of gradients on a set. The basic idea behind these gradients on sets is that we want to find a stable descent direction, that is, a descent direction on an entire neighborhood of an iteration point. This way we avoid oscillations of the gradients and very small descent steps (in the smooth as well as in the nonsmooth case). It turns out that the norm-smallest element of the gradient on a set provides a stable descent direction.
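The norm-smallest-element idea can be illustrated by a small finite-dimensional sketch: given (sub)gradients g_1, ..., g_k collected on a neighborhood of the current iterate, one computes the minimum-norm point of their convex hull and uses its negative as a descent direction. The Frank-Wolfe solver and all names below are illustrative assumptions; the thesis works in Banach spaces with its own gradient-on-a-set construction.

    import numpy as np

    def min_norm_point(G, iters=500):
        """Frank-Wolfe iteration for the minimum-norm point of conv{g_1, ..., g_k}.
        G is a (k, n) array whose rows are (sub)gradients sampled near the iterate."""
        k = G.shape[0]
        lam = np.full(k, 1.0 / k)          # start at the barycenter
        Q = G @ G.T                        # Gram matrix of the gradients
        for _ in range(iters):
            grad = 2.0 * Q @ lam           # gradient of ||G^T lam||^2 w.r.t. lam
            i = int(np.argmin(grad))       # best simplex vertex for the linear model
            d = -lam.copy()
            d[i] += 1.0                    # direction e_i - lam
            Qd = Q @ d
            denom = d @ Qd
            if denom <= 1e-16:
                break                      # numerically flat: current lam is (near) optimal
            step = np.clip(-(lam @ Qd) / denom, 0.0, 1.0)  # exact line search on the quadratic
            lam = lam + step * d
        return G.T @ lam                   # approximate minimum-norm (sub)gradient

If the returned vector is close to zero, the iterate is treated as approximately Clarke-stationary; otherwise its negative is a descent direction valid on the whole sampled neighborhood, which is what stabilizes the step sizes.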
The algorithm we present here is, to our knowledge, the first algorithm which can treat locally Lipschitz continuous functions in this generality. In particular, large finite-dimensional Banach spaces haven't been studied for nonsmooth nonconvex functions so far. We will show that the algorithm is very robust and often faster than common algorithms. Furthermore, we will see that with this algorithm it is possible, for the first time, to compute reliably the first eigenfunctions of the 1-Laplace operator up to discretization errors. / In vielen Anwendungen tauchen nichtglatte, nichtkonvexe, Lipschitz-stetige Energiefunktionen in natuerlicher Weise auf. Ein klassisches Beispiel bildet die Kontaktmechanik mit Reibung. Ein weiteres Beispiel ist der 1-Laplace-Operator und seine Eigenfunktionen.
In dieser Dissertation werden wir ein Abstiegsverfahren angeben, so dass fuer jede lokal Lipschitz-stetige Funktion f jeder Haeufungspunkt einer durch dieses Verfahren erzeugten Folge ein kritischer Punkt von f im Sinne von Clarke ist. Hier ist f auf einem reflexiven, strikt konvexen Banachraum definiert, fuer den der Dualraum ebenfalls strikt konvex ist und die Clarksonschen Ungleichungen gelten. (Z.B. Sobolevraeume und jeder abgeschlossene Unterraum, mit der Sobolevnorm versehen, erfuellen diese Bedingungen fuer p>1.) Dieser Algorithmus ist primaer entwickelt worden, um Variationsprobleme bzw. deren hochdimensionale Diskretisierungen zu loesen. Er kann aber auch fuer eine Vielzahl anderer lokal Lipschitz-stetiger Funktionen eingesetzt werden.
In der elastischen Kontaktmechanik ist die Spannungsenergie oft glatt und nichtkonvex auf einem geeigneten Definitionsbereich, waehrend der Kontakt und die Reibung durch nichtglatte Funktionen modelliert werden, deren Traeger ein Unterraum mit wesentlich kleinerer Dimension ist, da alle Punkte im Inneren des Koerpers nur die Spannungsenergie beeinflussen. Fuer solche elastischen Kontaktprobleme schlagen wir eine Spezialisierung unseres Algorithmus vor, die den glatten Teil mit Newton-aehnlichen Methoden behandelt. Falls der Gradient der gesamten Energiefunktion semiglatt in der Naehe der Minimalstelle ist, koennen wir sogar beweisen, dass der Algorithmus superlinear konvergiert.
Wir testen den Algorithmus und seine Spezialisierung an mehreren Benchmark-Problemen. Ausserdem wenden wir den Algorithmus auf das 1-Laplace-Minimierungsproblem an, eingeschraenkt auf einen endlichdimensionalen Unterraum der stueckweise affinen, stetigen Funktionen.
Der hier entwickelte Algorithmus verwendet Ideen des Bundle-Trust-Region-Verfahrens von Schramm und eine neu entwickelte Verallgemeinerung von Gradienten auf Mengen. Die zentrale Idee hinter den Gradienten auf Mengen ist, dass wir stabile Abstiegsrichtungen auf einer ganzen Umgebung der Iterationspunkte finden wollen. Auf diese Weise vermeiden wir das Oszillieren der Gradienten und sehr kleine Abstiegsschritte (im glatten wie im nichtglatten Fall). Es stellt sich heraus, dass das normkleinste Element dieses Gradienten auf der Umgebung eine stabile Abstiegsrichtung bestimmt.
Soweit es uns bekannt ist, koennen die hier entwickelten Algorithmen zum ersten Mal lokal Lipschitz-stetige Funktionen in dieser Allgemeinheit behandeln. Insbesondere wurden nichtglatte, nichtkonvexe Funktionen auf derart hochdimensionalen Banachraeumen bis jetzt nicht behandelt. Wir werden zeigen, dass unser Algorithmus sehr robust und oft schneller als uebliche Algorithmen ist. Des Weiteren werden wir sehen, dass es mit diesem Algorithmus zum ersten Mal moeglich ist, die erste Eigenfunktion des 1-Laplace-Operators zuverlaessig bis auf Diskretisierungsfehler zu bestimmen.
|
20 |
Optimal control of a semi-discrete Cahn–Hilliard–Navier–Stokes system with variable fluid densities
Keil, Tobias 29 October 2021 (has links)
Die vorliegende Doktorarbeit befasst sich mit der optimalen Steuerung von einem Cahn–Hilliard–Navier–Stokes-System mit variablen Flüssigkeitsdichten. Dabei konzentriert sie sich auf das Doppelhindernispotential, was zu einem optimalen Steuerungsproblem einer Gruppe von gekoppelten Systemen, welche eine Variationsungleichung vierter Ordnung sowie eine Navier–Stokes-Gleichung beinhalten, führt. Eine geeignete Zeitdiskretisierung wird präsentiert und zugehörige Energieabschätzungen werden bewiesen. Die Existenz von Lösungen zum primalen System und von optimalen Steuerungen für das ursprüngliche Problem sowie für eine Gruppe von regularisierten Problemen wird etabliert. Die Optimalitätsbedingungen erster Ordnung für die regularisierten Probleme werden hergeleitet. Mittels eines Grenzübergangs in Bezug auf den Regularisierungsparameter werden Stationaritätsbedingungen für das ursprüngliche Problem etabliert, welche einer Form von C-Stationarität im Funktionenraum entsprechen.
Weiterhin wird ein numerischer Lösungsalgorithmus für das Steuerungsproblem basierend auf einer Strafmethode entwickelt, welche die Moreau–Yosida-artigen Approximationen des Doppelhindernispotentials einschließt. In diesem Zusammenhang wird ein dual-gewichteter Residuenansatz für zielorientierte adaptive finite Elemente präsentiert, welcher auf dem Konzept der C-Stationarität beruht. Die numerische Realisierung des adaptiven Konzepts und entsprechende numerische Testergebnisse werden beschrieben.
Die Lipschitzstetigkeit des Steuerungs-Zustandsoperators des zugehörigen instantanen Steuerungsproblems wird bewiesen und dessen Richtungsableitung wird charakterisiert. Starke Stationaritätsbedingungen für dieses Problem werden durch die Anwendung einer Technik von Mignot und Puel hergeleitet. Basierend auf der primalen Form der Bouligand-Ableitung wird ein impliziter numerischer Löser entwickelt, dessen Implementierung erläutert und anhand von numerischen Resultaten illustriert wird. / This thesis is concerned with the optimal control of a Cahn–Hilliard–Navier–Stokes system with variable fluid densities. It focuses on the double-obstacle potential, which yields an optimal control problem for a family of coupled systems consisting, in each time instant, of a fourth-order variational inequality and the Navier–Stokes equation. A suitable time-discretization is presented and associated energy estimates are proven. The existence of solutions to the primal system and of optimal controls is established for the original problem as well as for a family of regularized problems. The consistency of these approximations is shown and first-order optimality conditions for the regularized problems are derived. Through a limit process with respect to the regularization parameter, a stationarity system for the original problem is established, which corresponds to a function space version of ε-almost C-stationarity.
Moreover, a numerical solution algorithm for the optimal control problem is developed based on a penalization method involving the Moreau–Yosida type approximations of the double-obstacle potential. A dual-weighted residual approach for goal-oriented adaptive finite elements is presented, which is based on the concept of C-stationarity. The overall error representation depends on dual weighted primal residuals and vice versa, supplemented by additional terms corresponding to the complementarity mismatch. The numerical realization of the adaptive concept is described and a report on numerical tests is provided.
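For orientation, a common Moreau–Yosida type relaxation of the double-obstacle potential (the indicator function of the bound |\varphi| \le 1 on the order parameter \varphi) with penalty parameter \gamma > 0 reads, in a generic form whose exact scaling may differ from the one used in the thesis,

    \psi_{\gamma}(\varphi) = \frac{\gamma}{2} \int_{\Omega} \big( \max(0, \varphi - 1) \big)^{2} \, dx
                           + \frac{\gamma}{2} \int_{\Omega} \big( \min(0, \varphi + 1) \big)^{2} \, dx,

which is continuously differentiable, penalizes violations of |\varphi| \le 1, and recovers the obstacle constraint as \gamma \to \infty. This is what makes the penalized optimal control problems amenable to standard first-order optimality conditions before passing to the limit.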
The Lipschitz continuity of the control-to-state operator of the corresponding instantaneous control problem is verified and its directional derivative is characterized. Strong stationarity conditions for the instantaneous control problem are derived. Utilizing the primal notion of B-differentiability, a bundle-free implicit programming method is developed. Details on the numerical implementation are given and numerical results are included.
|