21

About a deficit in low order convergence rates on the example of autoconvolution

Bürger, Steven, Hofmann, Bernd 18 December 2013
We revisit in L2-spaces the autoconvolution equation x ∗ x = y, with solutions that are real-valued or complex-valued functions x(t) defined on a finite real interval, say t ∈ [0,1]. Operator equations of this quadratic type occur in the physics of spectra, in optics, and in stochastics, often as part of a more complex task. Because of their weak nonlinearity, deautoconvolution problems are often wrongly regarded as easy and therefore receive little attention. In this paper, we use the example of autoconvolution to point out a deficit in low order convergence rates for regularized solutions of nonlinear ill-posed operator equations F(x) = y with solutions x† in a Hilbert space setting. For the real-valued version of the deautoconvolution problem, which is locally ill-posed everywhere, the classical convergence rate theory developed for the Tikhonov regularization of nonlinear ill-posed problems reaches its limits if standard source conditions based on the range of F′(x†)∗ fail. On the other hand, convergence rate results based on Hölder source conditions with small Hölder exponent, on logarithmic source conditions, or on the method of approximate source conditions are not applicable, since they require qualified nonlinearity conditions that, according to current knowledge, cannot be verified for the autoconvolution case. We also discuss the complex-valued version of autoconvolution with full data on [0,2] and see that ill-posedness must be expected if unbounded amplitude functions are admissible. As a new detail, we present situations of local well-posedness when the domain of the autoconvolution operator is restricted to complex L2-functions with a fixed and uniformly bounded modulus function.
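To make the deautoconvolution setting concrete, the following sketch discretises the autoconvolution operator on [0,1] with a rectangle rule and recovers x from exact synthetic data by a Gauss-Newton iteration on the Tikhonov functional. All names, parameters, and the discretisation are illustrative assumptions, not taken from the paper; the data are generated by the same forward model, so this only illustrates the operator, not the ill-posedness analysis.

```python
import numpy as np
from scipy.linalg import toeplitz

def autoconv(x, h):
    """Rectangle-rule discretisation of (x * x)(t) = int_0^t x(s) x(t-s) ds on [0,1]."""
    n = x.size
    return h * np.convolve(x, x)[:n]

def deautoconv_tikhonov(y, h, alpha=1e-6, iters=30):
    """Gauss-Newton iteration for ||F(x) - y||^2 + alpha ||x||^2,
    with F the discrete autoconvolution (a toy sketch, not the paper's method)."""
    n = y.size
    x = np.full(n, 0.5)                       # positive initial guess
    for _ in range(iters):
        r = autoconv(x, h) - y
        # Frechet derivative: F'(x)v = 2h * (x conv v), a lower-triangular Toeplitz matrix
        J = 2 * h * toeplitz(x, np.zeros(n))
        g = J.T @ r + alpha * x               # gradient of the Tikhonov functional
        H = J.T @ J + alpha * np.eye(n)       # Gauss-Newton Hessian
        x = x - np.linalg.solve(H, g)
    return x

n = 64
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
x_true = 1.0 + t                              # smooth positive solution
y = autoconv(x_true, h)                       # exact synthetic data
x_rec = deautoconv_tikhonov(y, h)
```

Note the sign ambiguity of the real-valued problem (x and -x give the same data); the positive initial guess steers the iteration toward the positive solution.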
22

Keller-Segel-type models and kinetic equations for interacting particles: long-time asymptotic analysis

Hoffmann, Franca Karoline Olga January 2017
This thesis consists of three parts: the first and second parts focus on long-time asymptotics of macroscopic and kinetic models respectively, while the third part connects these regimes using different scaling approaches. (1) Keller–Segel-type aggregation-diffusion equations: We study a Keller–Segel-type model with non-linear power-law diffusion and non-local particle interaction: Does the system admit equilibria? If yes, are they unique? Which solutions converge to them? Can we determine an explicit rate of convergence? To answer these questions, we make use of the special gradient-flow structure of the equation and its associated free energy functional, whose overall convexity properties are not known. Special cases of this family of models have been investigated in previous works, and this part of the thesis represents a contribution towards a complete characterisation of the asymptotic behaviour of solutions. (2) Hypocoercivity techniques for a fibre lay-down model: We show existence and uniqueness of a stationary state for a kinetic Fokker-Planck equation modelling the fibre lay-down process in non-woven textile production. Further, we prove convergence to equilibrium with an explicit rate. This part of the thesis extends previous work which considered the case of a stationary conveyor belt. Once the movement of the belt is added, the global equilibrium state is no longer known explicitly, and a more general hypocoercivity estimate is needed. Although we focus here on a particular application, this approach can be used for any equation with a similar structure, as long as it can be understood as a suitable perturbation of a system for which the global Gibbs state is known. (3) Scaling approaches for collective animal behaviour models: We study the multi-scale aspects of self-organised biological aggregations using various scaling techniques. Few previous studies investigate how the dynamics of the initial models are preserved via these scalings.
Firstly, we consider two scaling approaches (parabolic and grazing collision limits) that can be used to reduce a class of non-local kinetic 1D and 2D models to simpler models existing in the literature. Secondly, we investigate how some of the kinetic spatio-temporal patterns are preserved via these scalings using asymptotic preserving numerical methods.
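Purely as a toy illustration of the aggregation-diffusion model class studied in part (1) — not any scheme from the thesis — the sketch below advances a 1D equation of the form d_t rho = d_xx(rho^m) + d_x(rho d_x(W*rho)), with power-law diffusion and a smooth attractive interaction kernel W, by an explicit finite-difference step on a periodic grid. The kernel, exponents, and step sizes are all illustrative assumptions.

```python
import numpy as np

def aggregation_diffusion_step(rho, dx, dt, m=2.0):
    """One explicit step of d_t rho = d_xx(rho^m) + d_x(rho d_x(W*rho))
    on a periodic grid, with a smooth attractive Gaussian kernel W (toy scheme)."""
    n = rho.size
    x = np.arange(n) * dx
    # periodic distance and attractive interaction kernel W(r) = -exp(-r^2/0.02)
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, n * dx - d)
    W = -np.exp(-d**2 / 0.02)
    V = dx * W @ rho                                        # potential W * rho
    dV = (np.roll(V, -1) - np.roll(V, 1)) / (2 * dx)        # central gradient of V
    flux = rho * dV
    rm = rho**m
    lap = (np.roll(rm, -1) - 2 * rm + np.roll(rm, 1)) / dx**2   # diffusion of rho^m
    div = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)     # divergence of the flux
    return rho + dt * (lap + div)

n = 64
dx = 1.0 / n
x = np.arange(n) * dx
rho = 1.0 + 0.5 * np.cos(2 * np.pi * x)   # positive initial mass distribution
mass0 = rho.sum() * dx
for _ in range(200):
    rho = aggregation_diffusion_step(rho, dx, dt=1e-5)
```

Because all spatial differences are written with periodic rolls, the scheme conserves the total mass exactly (up to roundoff), mirroring the conservation property of the continuous gradient flow.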
24

The impact of a curious type of smoothness conditions on convergence rates in l1-regularization

Bot, Radu Ioan, Hofmann, Bernd January 2013
Tikhonov-type regularization of linear and nonlinear ill-posed problems in abstract spaces under sparsity constraints has gained considerable attention in recent years. Since, under some weak assumptions, all regularized solutions are sparse if the l1-norm is used as the penalty term, l1-regularization has been studied by numerous authors, although the non-reflexivity of the Banach space l1 and the fact that this penalty functional is not strictly convex lead to serious difficulties. We consider the case in which the sparsity assumption is narrowly missed: the solutions may have an infinite number of nonzero but fast-decaying components. For that case we formulate and prove convergence rate results for the l1-regularization of nonlinear operator equations. In this context, we outline the situations of Hölder rates and of an exponential decay of the solution components.
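A standard way to compute such l1-regularized solutions numerically is the iterative soft-thresholding algorithm (ISTA) for the Tikhonov-type functional ||Ax - y||^2/2 + alpha*||x||_1. The sketch below is generic and not the paper's setting: A is an illustrative smoothing operator, and the exact solution has nonzero but geometrically decaying components, mimicking the "narrowly missed" sparsity assumption.

```python
import numpy as np

def soft(v, tau):
    """Soft-thresholding, the proximal map of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, alpha, steps=5000):
    """ISTA for min_x 0.5*||Ax - y||^2 + alpha*||x||_1
    (a textbook sketch, not the paper's analysis)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part's gradient
    t = 1.0 / L                          # step size
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft(x - t * (A.T @ (A @ x - y)), t * alpha)
    return x

n = 50
# illustrative smoothing (mildly ill-conditioned) operator
A = np.array([[np.exp(-abs(i - j) / 5.0) for j in range(n)] for i in range(n)]) / n
x_true = 0.9 ** np.arange(n)             # nonzero but fast-decaying components
y = A @ x_true
alpha = 1e-5
x_alpha = ista(A, y, alpha)
```

ISTA decreases the objective monotonically, so the regularized solution always improves on the zero initial guess; for the fast-decaying x_true above, the threshold sets the smallest components to zero while retaining the dominant ones.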
25

Adaptive Discontinuous Petrov-Galerkin Finite-Element-Methods

Hellwig, Friederike 12 June 2019
The thesis "Adaptive Discontinuous Petrov-Galerkin Finite-Element-Methods" proves optimal convergence rates for four lowest-order discontinuous Petrov-Galerkin (dPG) methods for the Poisson model problem, given a sufficiently fine initial triangulation, by establishing their equivalence to two other non-standard classes of finite element methods: the reduced mixed methods and the generalized Least-Squares methods.
The first class uses a mixed system of equations with first-order conforming Courant and nonconforming Crouzeix-Raviart functions. The second class generalizes the standard Least-Squares formulation by a midpoint quadrature rule and weight functions. The thesis generalizes a result on the primal discontinuous Petrov-Galerkin method from [Carstensen, Bringmann, Hellwig, Wriggers 2018] by characterizing all four dPG methods simultaneously as particular instances of these two classes. It establishes alternative reliable and efficient error estimators for both methods. A main accomplishment of the thesis is the proof of optimal convergence rates of the adaptive schemes in the axiomatic framework of [Carstensen, Feischl, Page, Praetorius 2014]. The optimal convergence rates of the four dPG methods then follow as special cases from this rate-optimality. Numerical experiments confirm the optimal convergence rates of both classes of methods for different choices of parameters. Moreover, they complement the theory by a thorough comparison of both methods with each other and with their equivalent dPG schemes.
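The thesis's adaptive dPG methods operate in 2D; purely to illustrate the solve-estimate-mark-refine loop with Dörfler marking that underlies such rate-optimality proofs (and echoing the midpoint quadrature in the load), here is a generic 1D adaptive P1 finite element solver for -u'' = f with homogeneous Dirichlet conditions. The estimator and all parameters are illustrative assumptions, not the thesis's.

```python
import numpy as np

def solve_p1(nodes, f):
    """P1 FEM for -u'' = f on (0,1), u(0) = u(1) = 0, midpoint quadrature for the load."""
    h = np.diff(nodes)
    n = nodes.size
    A = np.zeros((n, n)); b = np.zeros(n)
    for k in range(n - 1):
        A[k:k+2, k:k+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h[k]
        fm = f(0.5 * (nodes[k] + nodes[k+1]))   # midpoint rule on element k
        b[k] += fm * h[k] / 2.0
        b[k+1] += fm * h[k] / 2.0
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(A[1:-1, 1:-1], b[1:-1])   # enforce boundary values
    return u

def estimate(nodes, f):
    """Residual-type indicator eta_K ~ h_K * ||f||_L2(K); for P1, u_h'' = 0 elementwise."""
    h = np.diff(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    return h * np.abs(f(mids)) * np.sqrt(h)

def dorfler_refine(nodes, eta, theta=0.5):
    """Mark a minimal set with sum eta^2 >= theta * total, then bisect marked elements."""
    order = np.argsort(eta**2)[::-1]
    total = np.sum(eta**2); acc = 0.0; marked = []
    for i in order:
        marked.append(i); acc += eta[i]**2
        if acc >= theta * total:
            break
    ma = np.array(marked)
    mids = 0.5 * (nodes[ma] + nodes[ma + 1])
    return np.sort(np.concatenate([nodes, mids]))

f = lambda x: np.where(x < 0.5, 1.0, 50.0)   # rough right-hand side drives local refinement
nodes = np.linspace(0.0, 1.0, 5)
eta_history = []
for _ in range(10):
    u = solve_p1(nodes, f)
    eta = estimate(nodes, f)
    eta_history.append(np.sqrt(np.sum(eta**2)))
    nodes = dorfler_refine(nodes, eta)
```

The large right-hand side on (0.5, 1) concentrates the refinement there; rate-optimality results such as those in the thesis certify that this kind of loop, with a reliable and efficient estimator, converges at the best possible rate.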
26

Accelerated algorithms for temporal difference learning methods

Rankawat, Anushree 12 1900
The central idea of this thesis is to understand the notion of acceleration in stochastic approximation algorithms. Specifically, we attempt to answer the question: how does acceleration naturally show up in stochastic approximation algorithms? We adopt a dynamical systems approach and propose new accelerated methods for temporal difference (TD) learning with linear function approximation: Polyak TD(0) and Nesterov TD(0).
In contrast to earlier works, our methods do not rely on viewing TD methods as gradient descent methods. We study the interplay between acceleration, stability, and convergence of the proposed accelerated methods in continuous time. To establish the convergence of the underlying dynamical system, we analyze continuous-time models of the proposed accelerated stochastic approximation methods by deriving a conservation law in a dilated coordinate system. We show that the underlying dynamical system of the proposed algorithms converges at an accelerated rate. This framework also provides recommendations for the choice of the damping parameters needed to obtain this convergent behavior. Finally, we discretize these convergent ODEs using two different discretization schemes, explicit Euler and symplectic Euler, and analyze their performance on small linear prediction tasks.
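To give a concrete feel for momentum in TD learning, the sketch below runs the expected (noise-free, ODE-limit) TD(0) update with linear function approximation on a small fixed Markov reward process, adding Polyak-style heavy-ball momentum. This is a generic toy, not the thesis's Polyak TD(0) or Nesterov TD(0) derivation; the chain, features, and parameters are illustrative assumptions.

```python
import numpy as np

# small fixed Markov reward process and linear features (toy example)
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])
r = np.array([1.0, 0.0, 2.0])
gamma = 0.9
Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])                 # 3 states, 2 features

# stationary distribution of P (left Perron eigenvector)
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
D = np.diag(pi)

# expected TD(0) direction g(w) = b - M w, and the TD fixed point
M = Phi.T @ D @ (np.eye(3) - gamma * P) @ Phi
b = Phi.T @ D @ r
w_star = np.linalg.solve(M, b)

def heavy_ball_td(lr=0.1, beta=0.9, steps=20000):
    """Expected-update TD(0) with heavy-ball momentum: a toy sketch of the
    continuous-time viewpoint, not the thesis's accelerated algorithms."""
    w = np.zeros(2)
    v = np.zeros(2)
    for _ in range(steps):
        v = beta * v + lr * (b - M @ w)   # momentum on the mean TD direction
        w = w + v
    return w

w = heavy_ball_td()
```

In the sampled (stochastic) version, b - M w is replaced by the single-transition TD error delta_t * phi(s_t); the deterministic iteration above corresponds to the mean dynamics whose stability the dynamical systems analysis addresses.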
