31

Optimisation Problems with Sparsity Terms: Theory and Algorithms / Optimierungsprobleme mit Dünnbesetzten Termen: Theorie und Algorithmen

Raharja, Andreas Budi January 2021
The present thesis deals with optimisation problems with sparsity terms, either in the constraints, which leads to cardinality-constrained problems, or in the objective function, which in turn leads to sparse optimisation problems. One of the primary aims of this work is to extend the so-called sequential optimality conditions to these two classes of problems. In recent years sequential optimality conditions have become increasingly popular in the realm of standard nonlinear programming. In contrast to the better-known Karush-Kuhn-Tucker conditions, they are genuine optimality conditions in the sense that every local minimiser satisfies them without any further assumption. Lately they have also been extended to mathematical programmes with complementarity constraints. At around the same time it was shown that optimisation problems with sparsity terms can be reformulated into problems whose structures are similar to those of mathematical programmes with complementarity constraints. These recent developments are the impetus for the present work. Rather than working with the aforementioned reformulations, which involve an artificial variable, we first look directly at the problems themselves and derive sequential optimality conditions that are independent of any artificial variable. Afterwards we derive the weakest constraint qualifications associated with these conditions, which relate them to Karush-Kuhn-Tucker-type conditions. Another, equally important aim of this work is then to examine the practicability of the derived sequential optimality conditions. The previously mentioned reformulations open up the possibility of adapting methods that have proven successful for mathematical programmes with complementarity constraints. We show that the safeguarded augmented Lagrangian method and some regularisation methods may generate a point satisfying the derived conditions. / Die vorliegende Arbeit beschäftigt sich mit Optimierungsproblemen mit dünnbesetzten Termen, und zwar entweder in der Restriktionsmenge, was zu kardinalitätsrestringierten Problemen führt, oder in der Zielfunktion, was zu Optimierungsproblemen mit dünnbesetzten Lösungen führt. Die Herleitung der sogenannten sequentiellen Optimalitätsbedingungen für diese Problemklassen ist eines der Hauptziele dieser Arbeit. Im Bereich der nichtlinearen Optimierung gibt es in jüngster Zeit immer mehr Interesse an diesen Bedingungen. Im Gegensatz zu der besser bekannten Karush-Kuhn-Tucker-Bedingung sind diese Bedingungen echte Optimalitätsbedingungen: Sie sind in jedem lokalen Minimum ohne weitere Voraussetzung erfüllt. Vor Kurzem wurden solche Bedingungen auch für mathematische Programme mit Komplementaritätsbedingungen hergeleitet. Etwa zur gleichen Zeit wurde auch gezeigt, dass sich Optimierungsprobleme mit dünnbesetzten Termen als Probleme umformulieren lassen, die ähnliche Strukturen wie mathematische Programme mit Komplementaritätsbedingungen besitzen. Diese jüngsten Entwicklungen motivieren die vorliegende Arbeit. Hier betrachten wir zunächst die ursprünglichen Probleme direkt, anstatt mit den Umformulierungen, die eine künstliche Variable enthalten, zu arbeiten. Dies ermöglicht es uns, Optimalitätsbedingungen zu gewinnen, die von künstlichen Variablen unabhängig sind. Danach leiten wir die entsprechenden schwächsten Constraint Qualifications her, die diese Bedingungen mit Karush-Kuhn-Tucker-ähnlichen Bedingungen verknüpfen.
Als ein weiteres Hauptziel der Arbeit werden wir dann untersuchen, ob die gerade hergeleiteten Bedingungen eine praktische Bedeutung haben. Die vor Kurzem eingeführten Umformulierungen bieten die Möglichkeit, die für mathematische Programme mit Komplementaritätsbedingungen gut funktionierenden Methoden auch hier anzuwenden. Wir werden zeigen, dass die safeguarded augmented Lagrangian-Methode und einige Regularisierungsmethoden theoretisch in der Lage sind, einen Punkt zu generieren, der den hergeleiteten Bedingungen genügt.
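For orientation, the following is a minimal sketch (our notation, not taken from the thesis) of the kind of reformulation referred to above, in the spirit of the relaxed complementarity-type reformulation of cardinality constraints known from the literature; the auxiliary vector \(y\) plays the role of the artificial variable mentioned in the abstract:

\[
\min_{x \in \mathbb{R}^n} f(x) \ \ \text{s.t.} \ \ g(x) \le 0, \ \|x\|_0 \le \kappa
\qquad\leadsto\qquad
\begin{aligned}
\min_{x,\,y \in \mathbb{R}^n} \ & f(x)\\
\text{s.t. } & g(x) \le 0, \quad e^{\top} y \ge n - \kappa,\\
& x_i\, y_i = 0, \quad 0 \le y_i \le 1, \qquad i = 1, \dots, n,
\end{aligned}
\]

where \(\|x\|_0\) counts the nonzero components of \(x\) and \(e = (1,\dots,1)^{\top}\); the conditions \(x_i y_i = 0\) are what give the reformulated problem its complementarity-like structure.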
32

Numerical methods for solving open-loop non zero-sum differential Nash games / Numerische Methoden zur Lösung von Open-Loop-Nicht-Nullsummen-Differential-Nash-Spielen

Calà Campana, Francesca January 2021
This thesis is devoted to a theoretical and numerical investigation of methods to solve open-loop non zero-sum differential Nash games. These problems arise in many applications, e.g., biology, economics, and physics, where competition between different agents appears. In this case, the goal of each agent conflicts with those of the others, and a competition game can be interpreted as a coupled optimization problem for which, in general, an optimal solution does not exist. In fact, an optimal strategy for one player may be unsatisfactory for the others. For this reason, a solution of a game is sought as an equilibrium, and among the solution concepts proposed in the literature, that of Nash equilibrium (NE) is the focus of this thesis. The building blocks of the resulting differential Nash games are a dynamical model with different control functions associated with different players that pursue non-cooperative objectives. In particular, the focus of this thesis is on differential models having linear or bilinear state-strategy structures. In this framework, in the first chapter, some well-known results are recalled, especially for non-cooperative linear-quadratic differential Nash games. Then, a bilinear Nash game is formulated and analysed. The main achievement in this chapter is Theorem 1.4.2 concerning the existence of Nash equilibria for non-cooperative differential bilinear games. This result is obtained assuming a sufficiently small time horizon T, and an estimate of T is provided in Lemma 1.4.8 using specific properties of the regularized Nikaido-Isoda function. In Chapter 2, in order to solve a bilinear Nash game, a semi-smooth Newton (SSN) scheme combined with a relaxation method is investigated, where the choice of an SSN scheme is motivated by the presence of constraints on the players' actions that make the problem non-smooth. The resulting method is proved to be locally convergent in Theorem 2.1, and an estimate on the relaxation parameter is obtained that relates the relaxation factor to the time horizon of a Nash equilibrium and to the other parameters of the game. For the bilinear Nash game, a Nash bargaining problem is also introduced and discussed, aiming at determining an improvement of all players' objectives with respect to the Nash equilibrium. A characterization of a bargaining solution is given in Theorem 2.2.1, and a numerical scheme based on this result is presented that allows one to compute this solution on the Pareto frontier. Results of numerical experiments based on a quantum model of two spin particles and on a population dynamics model with two competing species are presented that successfully validate the proposed algorithms. In Chapter 3, a functional formulation of the classical homicidal chauffeur (HC) Nash game is introduced and a new numerical framework for its solution in a time-optimal formulation is discussed. This methodology combines a Hamiltonian-based scheme with proximal penalty, to determine the time horizon where the game takes place, with a Lagrangian optimal control approach and relaxation to solve the Nash game at a fixed end-time. The resulting numerical optimization scheme has a bilevel structure, which aims at decoupling the computation of the end-time from the solution of the pursuit-evasion game. Several numerical experiments are performed to show the ability of the proposed algorithm to solve the HC game. Focusing on the case where a collision may occur, the time for this event is determined.
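For reference, a standard form of the Nikaido-Isoda function mentioned above (our notation; the thesis may use a different normalisation): for an \(N\)-player game with strategies \(u = (u_1, \dots, u_N)\) and cost functionals \(J_\nu\),

\[
\Psi(u, v) \;=\; \sum_{\nu=1}^{N} \bigl[\, J_\nu(u_\nu, u_{-\nu}) - J_\nu(v_\nu, u_{-\nu}) \,\bigr],
\]

so that \(u^*\) is a Nash equilibrium exactly when \(\Psi(u^*, v) \le 0\) for all admissible \(v\), i.e. \(\sup_v \Psi(u^*, v) = 0\); the regularised variant subtracts a term proportional to \(\|v - u\|^2\) to make the inner maximisation well posed.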
The last part of this thesis deals with the analysis of a novel sequential quadratic Hamiltonian (SQH) scheme for solving open-loop differential Nash games. This method is formulated in the framework of Pontryagin's maximum principle (PMP) and represents an efficient and robust extension of the successive approximations strategy to the realm of Nash games. In the SQH method, the Hamilton-Pontryagin functions are augmented by a quadratic penalty term, and the Nikaido-Isoda function is used as a selection criterion. The key idea of this SQH scheme is that the PMP characterization of Nash games leads to a finite-dimensional Nash game for any fixed time. A class of problems for which this finite-dimensional game admits a unique solution is identified, and for this class of games theoretical results are presented that prove the well-posedness of the proposed scheme. In particular, Proposition 4.2.1 is proved, showing that the selection criterion based on the Nikaido-Isoda function is fulfilled. A comparison of the computational performance of the SQH scheme and of the SSN-relaxation method previously discussed is given. Applications to linear-quadratic Nash games and variants with control constraints, weighted L1 costs of the players' actions and tracking objectives are presented that corroborate the theoretical statements. / Diese Dissertation handelt von einer theoretischen und numerischen Untersuchung von Methoden zur Lösung von Open-Loop-Nicht-Nullsummen-Differential-Nash-Spielen. Diese Probleme treten in vielen Anwendungen auf, z. B. in Biologie, Wirtschaft und Physik, in denen Konkurrenz zwischen verschiedenen Agenten auftritt. In diesem Fall steht das Ziel jedes Agenten im Gegensatz zu denen der anderen, und ein Wettbewerbsspiel kann als gekoppeltes Optimierungsproblem interpretiert werden. Im Allgemeinen gibt es keine optimale Lösung für ein solches Spiel. Tatsächlich kann eine optimale Strategie für einen Spieler für die anderen unbefriedigend sein. Aus diesem Grund wird ein Gleichgewicht eines Spiels als Lösung gesucht, und unter den in der Literatur vorgeschlagenen Lösungskonzepten steht das Nash-Gleichgewicht (NE) im Mittelpunkt dieser Arbeit. ...
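As a sketch of the augmentation step described above (an assumption about the general form, modelled on SQH schemes for optimal control; the precise functionals are defined in the thesis): for each player \(\nu\) with Hamilton-Pontryagin function \(H_\nu(t, x, u_1, \dots, u_N, p_\nu)\) and current iterate \(u^k\), the augmented function

\[
H_{\nu,\varepsilon}(t, x, u, p_\nu) \;=\; H_\nu(t, x, u, p_\nu) \;-\; \varepsilon \,\|u_\nu - u_\nu^{k}\|^2, \qquad \varepsilon > 0,
\]

is optimised pointwise in time with respect to \(u_\nu\) (maximised or minimised according to the sign convention of the maximum principle), and \(\varepsilon\) is adapted until the resulting update passes the selection criterion based on the Nikaido-Isoda function.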
33

Proximal Methods for Nonconvex Composite Optimization Problems / Proximal-Verfahren für nichtkonvexe zusammengesetzte Optimierungsprobleme

Lechner, Theresa January 2022
Optimization problems with composite functions deal with the minimization of the sum of a smooth function and a convex nonsmooth function. In this thesis several numerical methods for solving such problems in finite-dimensional spaces are discussed, which are based on proximity operators. After some basic results from convex and nonsmooth analysis are summarized, a first-order method, the proximal gradient method, is presented and its convergence properties are discussed in detail. Known results from the literature are summarized and supplemented by additional ones. The main part of the thesis is the derivation of two methods which, in addition, make use of second-order information and are based on proximal Newton and proximal quasi-Newton methods, respectively. The difference between the two methods is that the first one uses a classical line search, while the second one uses a regularization parameter instead. Both techniques have the advantage that, in contrast to many similar methods, global convergence to stationary points can be proved in the respective detailed convergence analysis without any restrictive preconditions. Furthermore, comprehensive results show the local convergence properties as well as convergence rates of these algorithms, which are based on rather weak assumptions. A method for the solution of the arising proximal subproblems is also investigated. In addition, the thesis contains an extensive collection of application examples and a detailed discussion of the related numerical results. / In Optimierungsproblemen mit zusammengesetzten Funktionen wird die Summe aus einer glatten und einer konvexen, nicht glatten Funktion minimiert. Die vorliegende Arbeit behandelt mehrere numerische Verfahren zur Lösung solcher Probleme in endlich-dimensionalen Räumen, welche auf Proximity-Operatoren basieren. Nach der Zusammenfassung einiger grundlegender Resultate aus der konvexen und nichtglatten Analysis wird ein Verfahren erster Ordnung, das Proximal-Gradienten-Verfahren, vorgestellt und dessen Konvergenzeigenschaften ausführlich behandelt. Bekannte Resultate aus der Literatur werden dabei zusammengefasst und durch weitere Ergebnisse ergänzt. Im Anschluss werden im Hauptteil der Arbeit zwei Verfahren hergeleitet, die zusätzlich Informationen zweiter Ordnung nutzen und auf Proximal-Newton- beziehungsweise Proximal-Quasi-Newton-Verfahren beruhen. Der Unterschied zwischen beiden Verfahren liegt darin, dass bei ersterem eine klassische Schrittweitensuche verwendet wird, während das zweite stattdessen einen Regularisierungsparameter nutzt. Beide Techniken führen dazu, dass im Gegensatz zu vielen verwandten Verfahren in der jeweils ausführlichen Konvergenzanalyse die globale Konvergenz zu stationären Punkten ohne weitere einschränkende Voraussetzungen bewiesen werden kann. Ferner zeigen umfassende Resultate die lokalen Konvergenzeigenschaften sowie Konvergenzraten der Algorithmen auf, welche auf lediglich schwachen Annahmen beruhen. Ein Verfahren zur Lösung auftretender Proximal-Teilprobleme ist ebenfalls Bestandteil dieser Arbeit. Die Dissertation beinhaltet zudem eine umfangreiche Sammlung von Anwendungsbeispielen und zugehörigen numerischen Ergebnissen.
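To make the first-order building block above concrete, here is a minimal sketch of a proximal gradient iteration for the l1-regularised least-squares model problem (illustrative code, not taken from the thesis; a fixed step size 1/L is assumed):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, max_iter=500, tol=1e-8):
    """Minimise 0.5*||A x - b||^2 + lam*||x||_1 with constant step size 1/L,
    where L = ||A||_2^2 is a Lipschitz constant of the smooth part's gradient."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)                        # gradient of the smooth term
        x_new = soft_threshold(x - grad / L, lam / L)   # proximal (shrinkage) step
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Small usage example with synthetic sparse data
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
x_hat = proximal_gradient(A, b, lam=0.1)
```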
34

On coverings and reduced residues in combinatorial number theory / Über Abdeckungen und prime Restklassen in kombinatorischer Zahlentheorie

Stumpf, Pascal January 2022
Our starting point is the Jacobsthal function \(j(m)\), defined for each positive integer \(m\) as the smallest number such that every \(j(m)\) consecutive integers contain at least one integer relatively prime to \(m\). It has turned out that improving on upper bounds for \(j(m)\) would also lead to advances in understanding the distribution of prime numbers among arithmetic progressions. If \(P_r\) denotes the product of the first \(r\) prime numbers, then a conjecture of Montgomery states that \(j(P_r)\) can be bounded from above by \(r (\log r)^2\) up to some constant factor. However, the until now very promising sieve methods seem to have reached a limit here, and the main goal of this work is to develop other combinatorial methods in the hope of coming a bit closer to proving the conjecture of Montgomery. Alongside, we solve a problem of Recamán about the maximum possible length among arithmetic progressions in the least (positive) reduced residue system modulo \(m\). Lastly, we turn towards three additive representation functions as introduced by Erdős, Sárközy and Sós, who studied their surprisingly different monotonicity behavior. By an alternative approach, we answer a question of Sárközy and demonstrate that another conjecture does not hold. / Der Startpunkt dieser Arbeit ist die Jacobsthal-Funktion \(j(m)\), die für jede natürliche Zahl \(m\) als die kleinste Zahl definiert ist, so dass je \(j(m)\) aufeinanderfolgende ganze Zahlen mindestens eine zu \(m\) teilerfremde Zahl enthalten. Es hat sich herausgestellt, dass Verbesserungen oberer Abschätzungen für \(j(m)\) gleichzeitig zu Fortschritten im Verständnis der Verteilung der Primzahlen in arithmetischen Folgen führen. Bezeichnet \(P_r\) das Produkt der ersten \(r\) Primzahlen, dann besagt eine Vermutung von Montgomery, dass \(j(P_r)\) bis auf einen konstanten Faktor durch \(r (\log r)^2\) von oben abgeschätzt werden kann. Allerdings scheinen die hier bisher sehr vielversprechenden Siebmethoden eine Grenze erreicht zu haben, und das Hauptziel dieser Arbeit ist es, andere kombinatorische Methoden zu entwickeln, in der Hoffnung, einem Beweis der Vermutung von Montgomery ein wenig näher zu kommen. Auf diesem Weg lösen wir nebenbei ein Problem von Recamán über die maximal mögliche Länge unter den arithmetischen Folgen im kleinsten (positiven) primen Restklassensystem modulo \(m\). Außerdem wenden wir uns am Ende drei additiven Darstellungsfunktionen zu, wie sie von Erdős, Sárközy und Sós eingeführt wurden, die deren überraschend unterschiedliches Monotonieverhalten untersucht haben. Mit einem alternativen Ansatz beantworten wir hier eine Frage von Sárközy und zeigen auf, dass eine andere Vermutung nicht bestehen kann.
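To illustrate the definition just given, here is a naive brute-force evaluation of \(j(m)\) for small moduli (an illustrative sketch only, not code from the thesis; it exploits that coprimality to \(m\) is periodic with period \(m\)):

```python
from math import gcd

def jacobsthal(m):
    """Smallest j such that every run of j consecutive integers contains
    an integer coprime to m (computed directly from the definition)."""
    best = 0
    for start in range(1, m + 1):       # scanning one full period suffices
        run = 0
        n = start
        while gcd(n, m) != 1:            # length of the non-coprime run at 'start'
            run += 1
            n += 1
        best = max(best, run + 1)        # a window one longer must contain a coprime
    return best

# Known small values for the primorials 2, 6, 30: j(2)=2, j(6)=4, j(30)=6
print([jacobsthal(m) for m in (2, 6, 30)])
```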
35

Efficient time step parallelization of full multigrid techniques

Weickert, J., Steidten, T. 30 October 1998
This paper deals with parallelization methods for time-dependent problems where the time steps are distributed among the processors. A full multigrid technique serves as the solution algorithm, so information from the preceding time step and from the coarser grid is necessary to compute the solution at each new grid level. If the usual extrapolation formula is applied to process this information, the parallelization is not very efficient. We develop another extrapolation technique that yields a much higher parallel efficiency. Test examples show that no essential loss of accuracy appears, so that the method presented here is well applicable.
36

Local inequalities for anisotropic finite elements and their application to convection-diffusion problems

Apel, Thomas, Lube, Gert 30 October 1998
The paper gives an overview of local inequalities for anisotropic simplicial Lagrangian finite elements. The main original contributions are the estimates for higher derivatives of the interpolation error, the formulation of the assumptions on admissible anisotropic finite elements in terms of geometrical conditions in the three-dimensional case, and an anisotropic variant of the inverse inequality. An application of anisotropic meshes in the context of a stabilized Galerkin method for a convection-diffusion problem is given.
37

Navier-Stokes equations as a differential-algebraic system

Weickert, J. 30 October 1998
The nonsteady Navier-Stokes equations represent a differential-algebraic system of strangeness index one after any spatial discretization. Since such systems are hard to treat in their original form, most approaches use some kind of index reduction. When carrying out this index reduction, it is important to take care of the manifolds contained in the differential-algebraic equation (DAE). We investigate, for several discretization schemes for the Navier-Stokes equations, how the manifolds are taken into account, and we propose a variant of solving these equations along the lines of the theoretically best index reduction. Applying this technique, the error of the time discretization depends only on the method applied for solving the DAE.
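For context, a common form of the spatially semi-discretized incompressible Navier-Stokes equations as a DAE (a generic sketch in our notation; the discretization schemes considered in the paper may differ):

\[
M\,\dot{u}(t) + N\bigl(u(t)\bigr)\,u(t) + K\,u(t) + B\,p(t) = f(t), \qquad B^{\top} u(t) = 0,
\]

with mass matrix \(M\), discrete diffusion \(K\), convection \(N(u)\) and discrete gradient \(B\); the pressure \(p\) acts as a multiplier for the algebraic incompressibility constraint, and it is this constraint that raises the index of the system.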
38

A note on anisotropic interpolation error estimates for isoparametric quadrilateral finite elements

Apel, Th. 30 October 1998
Anisotropic local interpolation error estimates are derived for quadrilateral and hexahedral Lagrangian finite elements with straight edges. These elements are allowed to have diameters with different asymptotic behaviour in different space directions. The case of affine elements (parallelepipeds) with arbitrarily high degree of the shape functions is considered first. Then, a careful examination of the multi-linear map leads to estimates for certain classes of more general, isoparametric elements. As an application, the Galerkin finite element method for a reaction-diffusion problem in a polygonal domain is considered. The boundary layers are resolved using anisotropic trapezoidal elements.
39

Two-point boundary value problems with piecewise constant coefficients: weak solution and exact discretization

Windisch, G. 30 October 1998
For two-point boundary value problems in weak formulation with piecewise constant coefficients and piecewise continuous right-hand side functions we derive a representation of the weak solution by local Green's functions. We then use it to generate exact three-point discretizations by Galerkin's method on essentially arbitrary grids. The coarsest possible grid is the set of points at which the piecewise constant coefficients and the right-hand side functions are discontinuous. This grid can be refined to resolve solution properties such as boundary and interior layers much more accurately. The proper basis functions for the Galerkin method are entirely defined by the local Green's functions. The exact discretizations are of completely exponentially fitted type and stable. The system matrices of the resulting tridiagonal systems of linear equations are in any case irreducible M-matrices with uniformly bounded norms of their inverses.
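A sketch of the model problem class described above (our notation; homogeneous Dirichlet conditions are assumed only for illustration):

\[
-\bigl(a(x)\,u'(x)\bigr)' = f(x) \ \text{ in } (0,1), \qquad u(0) = u(1) = 0,
\]

with \(a\) piecewise constant, \(a(x) \ge a_0 > 0\), and \(f\) piecewise continuous; the weak formulation seeks \(u \in H^1_0(0,1)\) with \(\int_0^1 a\,u'\,v'\,dx = \int_0^1 f\,v\,dx\) for all \(v \in H^1_0(0,1)\), and the local Green's functions of this operator define both the solution representation and the exact Galerkin basis.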
40

Variable preconditioning procedures for elliptic problems

Jung, M., Nepomnyaschikh, S. V. 30 October 1998
For solving systems of grid equations approximating elliptic boundary value problems, a method of constructing variable preconditioning procedures is presented. The main purpose is to discuss how an efficient preconditioning iterative procedure can be constructed in the case of elliptic problems with disproportional coefficients, e.g. equations with a large coefficient in the reaction term (or a small diffusion coefficient). The optimality of the suggested technique is based on fictitious space and multilevel decomposition methods. Using an additive form of the preconditioners, we introduce factors into the preconditioners to optimize the corresponding convergence rate. The optimization with respect to these factors is used at each step of the iterative process. The application of this technique to two-level $p$-hierarchical preconditioners and domain decomposition methods is also considered.
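In formula form, the additive construction with optimised factors described above can be sketched as (our notation, a generic illustration rather than the authors' precise definition):

\[
C_k^{-1} \;=\; \sum_{j=1}^{m} \alpha_j^{(k)}\, C_j^{-1},
\]

where the components \(C_j^{-1}\) come from the fictitious space and multilevel decomposition, and the scalar factors \(\alpha_j^{(k)} > 0\) are re-optimised at every step \(k\) of the iteration to improve the convergence rate, which is what makes the preconditioner variable from step to step.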
