
A pastoral theological examination of inner healing

Velthuysen, Daniel Nicholas January 1990 (has links)
Doing a survey of the ministry of inner healing, one is arrested by three salient features: its pragmatic and correlative development, its lay orientation, and the inconsistent and naïve theoretical explanation of the phenomenon. Inner healing, or, as it was first known, the healing of the memories, appears to have its roots with Agnes Sanford during the 1940s (Sandford 1982: 3-4). Over a period of time and through a series of events, Sanford experienced what she termed a healing of memories. After some reflection on her experiences, she began to teach her views at the School of Pastoral Care started by her husband in 1958, at Camps Farthest Out (CFO), and at numerous churches and conferences.

Algorithmes d'optimisation en grande dimension : applications à la résolution de problèmes inverses / Large scale optimization algorithms : applications to solution of inverse problems

Repetti, Audrey 29 June 2015 (has links)
Une approche efficace pour la résolution de problèmes inverses consiste à définir le signal (ou l'image) recherché(e) par minimisation d'un critère pénalisé. Ce dernier s'écrit souvent sous la forme d'une somme de fonctions composées avec des opérateurs linéaires. En pratique, ces fonctions peuvent n'être ni convexes ni différentiables. De plus, les problèmes auxquels on doit faire face sont souvent de grande dimension. L'objectif de cette thèse est de concevoir de nouvelles méthodes pour résoudre de tels problèmes de minimisation, tout en accordant une attention particulière aux coûts de calculs ainsi qu'aux résultats théoriques de convergence. Une première idée pour construire des algorithmes rapides d'optimisation est d'employer une stratégie de préconditionnement, la métrique sous-jacente étant adaptée à chaque itération. Nous appliquons cette technique à l'algorithme explicite-implicite et proposons une méthode, fondée sur le principe de majoration-minimisation, afin de choisir automatiquement les matrices de préconditionnement. L'analyse de la convergence de cet algorithme repose sur l'inégalité de Kurdyka-Łojasiewicz. Une seconde stratégie consiste à découper les données traitées en différents blocs de dimension réduite. Cette approche nous permet de contrôler à la fois le nombre d'opérations s'effectuant à chaque itération de l'algorithme, ainsi que les besoins en mémoire, lors de son implémentation. Nous proposons ainsi des méthodes alternées par bloc dans les contextes de l'optimisation non convexe et convexe. Dans le cadre non convexe, une version alternée par bloc de l'algorithme explicite-implicite préconditionné est proposée. Les blocs sont alors mis à jour suivant une règle déterministe acyclique. Lorsque des hypothèses supplémentaires de convexité peuvent être faites, nous obtenons divers algorithmes proximaux primaux-duaux alternés, permettant l'usage d'une règle aléatoire arbitraire de balayage des blocs. L'analyse théorique de ces algorithmes stochastiques d'optimisation convexe se base sur la théorie des opérateurs monotones. Un élément clé permettant de résoudre des problèmes d'optimisation de grande dimension réside dans la possibilité de mettre en œuvre en parallèle certaines étapes de calculs. Cette parallélisation est possible pour les algorithmes proximaux primaux-duaux alternés par bloc que nous proposons: les variables primales, ainsi que celles duales, peuvent être mises à jour en parallèle, de manière tout à fait flexible. A partir de ces résultats, nous déduisons de nouvelles méthodes distribuées, où les calculs sont répartis sur différents agents communiquant entre eux suivant une topologie d'hypergraphe. Finalement, nos contributions méthodologiques sont validées sur différentes applications en traitement du signal et des images. Nous nous intéressons dans un premier temps à divers problèmes d'optimisation faisant intervenir des critères non convexes, en particulier en restauration d'images lorsque l'image originale est dégradée par un bruit gaussien dépendant du signal, en démélange spectral, en reconstruction de phase en tomographie, et en déconvolution aveugle pour la reconstruction de signaux sismiques parcimonieux.
Puis, dans un second temps, nous abordons des problèmes convexes intervenant dans la reconstruction de maillages 3D et dans l'optimisation de requêtes pour la gestion de bases de données / An efficient approach for solving an inverse problem is to define the recovered signal/image as a minimizer of a penalized criterion which is often split into a sum of simpler functions composed with linear operators. In the situations of practical interest, these functions may be neither convex nor smooth. In addition, large scale optimization problems often have to be faced. This thesis is devoted to the design of new methods to solve such difficult minimization problems, while paying attention to computational issues and theoretical convergence properties. A first idea to build fast minimization algorithms is to make use of a preconditioning strategy by adapting, at each iteration, the underlying metric. We incorporate this technique in the forward-backward algorithm and provide an automatic method for choosing the preconditioning matrices, based on a majorization-minimization principle. The convergence proofs rely on the Kurdyka-Łojasiewicz inequality. A second strategy consists of splitting the involved data into different blocks of reduced dimension. This approach allows us to control the number of operations performed at each iteration of the algorithms, as well as the required memory. For this purpose, block alternating methods are developed in the context of both non-convex and convex optimization problems. In the non-convex case, a block alternating version of the preconditioned forward-backward algorithm is proposed, where the blocks are updated according to an acyclic deterministic rule. When additional convexity assumptions can be made, various alternating proximal primal-dual algorithms are obtained by using an arbitrary random sweeping rule. The theoretical analysis of these stochastic convex optimization algorithms is grounded in the theory of monotone operators. A key ingredient in the solution of high dimensional optimization problems lies in the possibility of performing some of the computation steps in a parallel manner. This parallelization is made possible in the proposed block alternating primal-dual methods, where the primal variables, as well as the dual ones, can be updated in a quite flexible way. As an offshoot of these results, new distributed algorithms are derived, where the computations are spread over a set of agents connected through a general hypergraph topology. Finally, our methodological contributions are validated on a number of applications in signal and image processing. First, we focus on optimization problems involving non-convex criteria, in particular image restoration when the original image is corrupted by signal-dependent Gaussian noise, spectral unmixing, phase reconstruction in tomography, and blind deconvolution in seismic sparse signal reconstruction. Then, we address convex minimization problems arising in the context of 3D mesh denoising and in query optimization for database management.
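
The preconditioning idea described in this abstract can be illustrated on a toy convex problem. The Python sketch below runs a variable-metric (diagonally preconditioned) forward-backward iteration on a small LASSO instance, with the metric obtained from a simple majorization of the Hessian. It is only an illustration of the majorization-minimization principle under these assumptions, not the thesis's algorithm (which treats non-convex criteria, block updates and Kurdyka-Łojasiewicz-based convergence analysis), and the problem data are hypothetical.

    import numpy as np

    def mm_preconditioned_forward_backward(A, b, lam, iters=300):
        """Toy variable-metric forward-backward iteration for
        min_x 0.5*||A x - b||^2 + lam*||x||_1,
        with a diagonal metric D >= A^T A chosen by a majorization argument."""
        AtA, Atb = A.T @ A, A.T @ b
        d = np.abs(AtA).sum(axis=1)          # diagonal majorant: D = diag(d) >= A^T A
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = AtA @ x - Atb             # gradient of the smooth term
            y = x - grad / d                 # forward (gradient) step in the metric D
            # backward step: prox of lam*||.||_1 in the metric D is an
            # elementwise soft-thresholding with per-coordinate thresholds lam/d_i
            x = np.sign(y) * np.maximum(np.abs(y) - lam / d, 0.0)
        return x

    # Hypothetical small problem.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 80))
    x_true = rng.standard_normal(80) * (rng.random(80) < 0.1)
    b = A @ x_true + 0.01 * rng.standard_normal(30)
    x_hat = mm_preconditioned_forward_backward(A, b, lam=0.1)

With a diagonal metric, the proximity operator of the l1 term remains an elementwise soft-thresholding, which is what keeps this kind of preconditioning inexpensive.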

Aux sources de l'originaire dans le désir d'enseigner : approche clinique et projective / Analyzing the sources of origin in the desire to teach : clinical and projective approach

Cayot Decharte, Angélique 30 November 2017 (has links)
L'objet de ma recherche porte sur le désir d'enseigner qui interroge le choix d'un métier et la rencontre actuelle entre l'adulte et l'enfant, soit dans l'enseignement ordinaire soit pour les enseignants spécialisés qui choisissent de se confronter aux troubles graves des apprentissages et au handicap. Il s'agit d'explorer les origines du désir pris dans le discours spontané des enseignants appuyé sur des entretiens cliniques de recherche et dans un autre registre saisir la dynamique pulsionnelle, les choix identificatoires, les processus d'idéalisation et les mouvements fantasmatiques qui peuvent se déployer à travers l'utilisation des méthodes projectives. Pris dans la « métaphore vive » du mythe de Pygmalion déconstruit par Kaës (1973) à partir du fantasme de fustigation décrit par Freud en 1919, enseigner appelle un scénario fantasmatique mobile où les emplacements et les identifications entre l'enfant et l'adulte sont permutables, réversibles dans un retournement des positions actives et passives du sujet. Le désir d'enseigner s'inscrit dans une dimension narcissique patente actualisant le sentiment de continuité d'existence et la quête d'une image de soi idéale mais dans le même temps, se joue aussi la quête de la différence, ancrée dans l'infantile et le sexuel (Chabert, 2011) convoquant inévitablement la différence des sexes et la bisexualité psychique. Enfin, dans la réciprocité du désir d'apprendre et de savoir, enseigner à l'enfant, c'est aller à la rencontre de l'Inconnu posé par Rosolato (1978) comme une relation fondamentale entre le désir et l'idéal qui interroge l'inconnu en soi, le mystère de nos origines convoquant les fantasmes originaires rattachés aux questions de la naissance, de la mort et de la scène primitive. Le Rorschach et le TAT, ces autres objets énigmatiques qui font émerger des « arrêts » sur image et l'insistance de la répétition à certaines planches, mobilisent, dévoilent des enjeux fantasmatiques dans un écho singulier avec ce qui se joue dans la rencontre entre l'adulte et l'enfant à l'âge œdipien, celui de la latence et de l'adolescence. / This research focuses on the desire to teach that motivates the choices of a career and of the mutual encounter between the adult and the child, either in an ordinary school or a special educational setting for learning difficulty or severe disability needs. It aims at exploring the origins of the desire of teachers, on the one hand, through clinical research interviews, and on the other hand, by focusing on the underlying dynamics of drives, identifications and fantasies through projective testing. Caught up in the "rule of metaphor" of the Pygmalion myth, which was revisited by Kaës (1973) based on Freud's child beating fantasy concept (1919), teaching addresses mobile fantasized scenarios where identifications between the adult and the child are intertwined and reversible, turning around active and passive positions of the subject. The desire to teach is part of an evident narcissistic domain, restoring the sense of continuity of being and of an ideal self-image. However, the quest for differentiation, rooted in infantile psychosexuality (Chabert, 2011), is also played out, inevitably summoning gender differences and psychic bisexuality. 
Lastly, given the reciprocity between the desire to impart and to learn, teaching a child means encountering the Unknown, which Rosolato (1978) defined as the fundamental relationship between desire and the ideal; a relationship that questions the unknown in oneself, the mystery of one's origins, and evokes the primal fantasies that relate to questions of birth, death and the primal scene. Enigmatic tools, such as the Rorschach and the TAT, produce fixated images and insistent repetitions on specific cards which summon fantasies echoing those at play in the encounter between the adult and the child of oedipal, latency or adolescent age.

Varieties and Clones of Relational Structures

Grabowski, Jens-Uwe 07 June 2002 (has links)
We present an axiomatization of relational varieties, i.e., classes of relational structures closed under formation of products and retracts, by a certain class of first-order sentences. We apply this result to categorically equivalent algebras and primal algebras. We consider the relational varieties generated by structures with minimal clone, rigid structures and two-element structures.
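For readers unfamiliar with the terminology, the two closure conditions can be stated as follows (these are the standard definitions, not reproduced from the thesis): a class $\mathcal{K}$ of similar relational structures is a relational variety when

    \[
    (A_i)_{i\in I} \subseteq \mathcal{K} \;\Longrightarrow\; \prod_{i\in I} A_i \in \mathcal{K}
    \qquad\text{(closure under products),}
    \]
    \[
    B \in \mathcal{K},\ \exists\, e\colon A \to B,\ r\colon B \to A
    \text{ homomorphisms with } r \circ e = \mathrm{id}_A
    \;\Longrightarrow\; A \in \mathcal{K}
    \qquad\text{(closure under retracts).}
    \]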

Link Dependent Origin-Destination Matrix Estimation : Nonsmooth Convex Optimisation with Bluetooth-Inferred Trajectories / Estimation de Matrices Origine-Destination-Lien : optimisation convexe et non lisse avec inférence de trajectoires Bluetooth

Michau, Gabriel 21 July 2016 (has links)
L’estimation des matrices origine-destination (OD) est un sujet de recherche important depuis les années 1950. En effet, ces tableaux à deux entrées recensent la demande de transport d'une zone géographique donnée et sont de ce fait un élément clé de l'ingénierie du trafic. Historiquement, les seules données disponibles pour leur estimation par les statistiques étaient les comptages de véhicules par les boucles magnétiques. Ce travail s'inscrit alors dans le contexte de l'installation à Brisbane de plus de 600 détecteurs Bluetooth qui ont la capacité de détecter et d'identifier les appareils électroniques équipés de cette technologie. Dans un premier temps, il explore la possibilité offerte par ces détecteurs pour les applications en ingénierie du transport en caractérisant ces données et leurs bruits. Ce projet aboutit, à l'issue de cette étude, à une méthode de reconstruction des trajectoires des véhicules équipés du Bluetooth à partir de ces seules données. Dans un second temps, en partant de l'hypothèse que l'accès à des échantillons importants de trajectoires va se démocratiser, cette thèse propose d'étendre la notion de matrice OD à celle de matrice OD par lien afin de combiner la description de la demande avec celle de l'utilisation du réseau. Reposant sur les derniers outils méthodologiques développés en optimisation convexe, nous proposons une méthode d'estimation de ces matrices à partir des trajectoires inférées par Bluetooth et des comptages routiers. A partir de peu d'hypothèses, il est possible d'inférer ces nouvelles matrices pour l'ensemble des utilisateurs d'un réseau routier (indépendamment de leur équipement en nouvelles technologies). Ce travail se distingue ainsi des méthodes traditionnelles d'estimation qui reposaient sur des étapes successives et indépendantes d'inférence et de modélisation. / Origin-destination (OD) matrix estimation has been a critical problem in the transportation field since the fifties. An OD matrix is a two-way table recording the zone-to-zone traffic of a geographic area. This traffic description tool is therefore paramount for traffic engineering applications. Traditionally, OD matrix estimation has solely been based on traffic counts collected by networks of magnetic loops. This thesis takes place in the context of over 600 Bluetooth detectors installed in the City of Brisbane. These detectors permit in-car Bluetooth device detection and thus vehicle identification. This manuscript first explores the potential of Bluetooth detectors for transport engineering applications by characterising the data, their noises and biases. This leads to a new methodology for Bluetooth-equipped vehicle trajectory reconstruction. In a second step, based on the idea that probe trajectories will become more and more available by means of new technologies, this thesis proposes to extend the concept of the OD matrix to that of the link-dependent origin-destination (LOD) matrix, which describes simultaneously both the traffic demand and the usage of the network. The LOD matrix estimation problem is formulated as a minimisation problem based on probe trajectories and traffic counts, and is then solved thanks to the latest advances in nonsmooth convex optimisation. This thesis demonstrates that, with few hypotheses, it is possible to retrieve the LOD matrix for the whole set of users in a road network. It is thus different from traditional OD matrix estimation approaches, which relied on successive, independent steps of modelling and statistical inference.
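
To make the notion of a link-dependent OD (LOD) matrix concrete, one generic way to pose its estimation as a nonsmooth convex program is sketched below. This is an illustrative formulation under simplifying assumptions (a known penetration-rate scaling $\alpha$ and a simple l1 data-fit to the probe flows), not necessarily the exact model developed in the thesis:

    \[
    \hat{X} \in \operatorname*{arg\,min}_{X \in \mathbb{R}_+^{L \times K}}\;
    \frac{1}{2}\Big\| \sum_{k=1}^{K} X_{\cdot k} - y \Big\|_2^2
    \;+\; \lambda\, \big\| X - \alpha\, X^{\mathrm{BT}} \big\|_1 ,
    \]

where $X_{lk}$ is the flow of OD pair $k$ on link $l$, $y$ gathers the link counts, $X^{\mathrm{BT}}$ are the Bluetooth-inferred probe flows, and $\alpha$ an estimated penetration rate. The quadratic term fits the traffic counts, while the nonsmooth penalty keeps the solution close to the observed trajectories, so proximal splitting methods apply directly.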

Proximal Splitting Methods in Nonsmooth Convex Optimization

Hendrich, Christopher 25 July 2014 (has links) (PDF)
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we are also making use of monotone operator theory as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notations and preliminary results in convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem with some given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach affects the primal optimization problem directly by applying a single regularization to it and is capable of using variable smoothing parameters which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they are providing a complete splitting philosophy in that the resolvents, which arise in the iterative process, are only taken separately from each maximally monotone operator occurring in the problem description. We firstly analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy by reformulating the corresponding inclusion problem reasonably such that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis deals with different numerical experiments where we intend to compare our methods against algorithms from the literature. The problems which arise in this part are manifold and they reflect the importance of this field of research as convex optimization problems appear in lots of applications of interest.
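
As a concrete illustration of the proximal splitting machinery discussed in this abstract, the Python sketch below runs plain Douglas–Rachford splitting on the toy problem min_x 0.5*||x - b||^2 + lam*||x||_1. It is only one classical building block of this family, not the primal-dual variants developed in the thesis, and the function name and data are illustrative.

    import numpy as np

    def douglas_rachford_l1_denoise(b, lam, tau=1.0, iters=200):
        """Douglas-Rachford splitting for min_x 0.5*||x - b||^2 + lam*||x||_1:
        each step only calls the two proximity operators separately."""
        prox_f = lambda z: (z + tau * b) / (1.0 + tau)      # prox of 0.5*||.-b||^2
        prox_g = lambda z: np.sign(z) * np.maximum(np.abs(z) - tau * lam, 0.0)  # prox of lam*||.||_1
        z = np.zeros_like(b)
        for _ in range(iters):
            x = prox_f(z)                 # reflected resolvent steps
            y = prox_g(2.0 * x - z)
            z = z + y - x                 # governing sequence update
        return prox_f(z)                  # x-iterate converges to a minimizer

    # Hypothetical usage: soft-thresholded denoising of a small vector.
    b = np.array([0.1, -2.0, 3.0, 0.05])
    x_hat = douglas_rachford_l1_denoise(b, lam=0.5)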

Planejamento da expansão de sistemas de transmissão usando técnicas especializadas de programação inteira mista / Transmission network expansion planning via efficient mixed-integer linear programming techniques

Vanderlinde, Jeferson Back [UNESP] 06 September 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Neste trabalho, consideram-se a análise teórica e a implementação computacional dos algoritmos Primal Simplex Canalizado (PSC) e Dual Simplex Canalizado (DSC) especializados. Esses algoritmos foram incorporados em um algoritmo Branch and Bound (B&B) de modo a resolver o problema de Planejamento da Expansão de Sistemas de Transmissão (PEST). Neste caso, o problema PEST foi modelado usando os chamados modelo de Transportes e modelo Linear Disjuntivo (LD), o que produz um problema de Programação Linear Inteiro Misto (PLIM). O algoritmo PSC é utilizado na resolução do problema de Programação Linear (PL) inicial após desconsiderar a restrição de integralidade do problema PLIM original. Juntamente com o algoritmo PSC, foi implementada uma estratégia para reduzir o número de variáveis artificiais adicionadas ao PL, consequentemente reduzindo o número de iterações do algoritmo PSC. O algoritmo DSC é utilizado na reotimização eficiente dos subproblemas gerados pelo algoritmo B&B, através do quadro ótimo do PL inicial, excluindo, assim, a necessidade da resolução completa de cada subproblema e, consequentemente, reduzindo o consumo de processamento e memória. Nesta pesquisa, é apresentada uma nova proposta de otimização, e, consequentemente, a implementação computacional usando a linguagem de programação FORTRAN que opera independentemente de qualquer solver. / In this research, the theoretical analysis and computational implementation of the specialized primal simplex algorithm (PSA) and dual simplex algorithm (DSA) for bounded variables are considered. These algorithms have been incorporated into a Branch and Bound (B&B) algorithm to solve the Transmission Network Expansion Planning (TNEP) problem. In this case, the TNEP problem is modeled using the transportation model and the linear disjunctive model (DM), which produces a mixed-integer linear programming (MILP) problem. After relaxing the integrality of the investment variables of the original MILP problem, the PSA is used to solve the initial linear programming (LP) problem. A strategy has also been implemented in the PSA to reduce the number of artificial variables added to the LP problem, and consequently the number of PSA iterations. Starting from the optimal tableau of the initial LP, the DSA efficiently reoptimizes the subproblems generated by the B&B algorithm, removing the need to solve each subproblem from scratch and thereby reducing CPU time and memory consumption. This research presents the implementation of the proposed approach in the FORTRAN programming language, operating independently of any commercial solver.
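
The overall scheme — a branch-and-bound search whose nodes are LP relaxations — can be sketched generically as follows. The Python sketch below uses scipy's linprog for each relaxation instead of the thesis's specialized bounded-variable primal/dual simplex with warm-started reoptimization in FORTRAN, and the small expansion-planning instance at the end is hypothetical.

    import numpy as np
    from scipy.optimize import linprog

    def branch_and_bound(c, A_ub, b_ub, bounds, int_vars, tol=1e-6):
        """Minimize c @ x s.t. A_ub @ x <= b_ub, variable bounds, and x[i] integer
        for i in int_vars. Each node's LP relaxation is solved from scratch here;
        the thesis instead reoptimizes nodes with a specialized dual simplex."""
        best_x, best_val = None, np.inf
        stack = [bounds]                            # each node = list of (lo, hi) bounds
        while stack:
            node_bounds = stack.pop()
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=node_bounds, method="highs")
            if not res.success or res.fun >= best_val - tol:
                continue                            # infeasible node or pruned by bound
            frac = [(i, res.x[i]) for i in int_vars
                    if abs(res.x[i] - np.round(res.x[i])) > tol]
            if not frac:                            # integer feasible: update incumbent
                best_x, best_val = res.x, res.fun
                continue
            i, xi = frac[0]                         # branch on the first fractional variable
            lo, hi = node_bounds[i]
            down = list(node_bounds); down[i] = (lo, np.floor(xi))
            up = list(node_bounds); up[i] = (np.ceil(xi), hi)
            stack += [down, up]
        return best_x, best_val

    # Toy instance (hypothetical data): pick integer numbers of candidate lines n1, n2
    # with capacities 40 and 60 MW to carry at least 100 MW at minimum investment cost.
    c = np.array([3.0, 5.0])                        # cost per line
    A_ub = np.array([[-40.0, -60.0]])               # 40*n1 + 60*n2 >= 100
    b_ub = np.array([-100.0])
    x, val = branch_and_bound(c, A_ub, b_ub, bounds=[(0, 3), (0, 3)], int_vars=[0, 1])
    # expect x = [1, 1] with cost 8

In the thesis, the gain comes precisely from replacing the from-scratch LP solves in the loop above with dual-simplex reoptimizations started from the parent node's optimal tableau.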

Decentralized Algorithms for Wasserstein Barycenters

Dvinskikh, Darina 29 October 2021 (has links)
In dieser Arbeit beschäftigen wir uns mit dem Wasserstein-Baryzentrumproblem diskreter Wahrscheinlichkeitsmaße sowie mit dem Populations-Wasserstein-Baryzentrumproblem, gegeben durch einen Fréchet-Mittelwert, von der rechnerischen und der statistischen Seite. Der statistische Fokus liegt auf der Schätzung der Stichprobengröße von Maßen zur Berechnung einer Annäherung des Fréchet-Mittelwerts (Baryzentrums) der Wahrscheinlichkeitsmaße mit einer bestimmten Genauigkeit. Für die empirische Risikominimierung (ERM) wird auch die Frage der Regularisierung untersucht, zusammen mit dem Vorschlag einer neuen Regularisierung, die zu besseren Komplexitätsgrenzen im Vergleich zur quadratischen Regularisierung führt. Der Rechenfokus liegt auf der Entwicklung dezentraler Algorithmen zur Berechnung von Wasserstein-Baryzentren: duale Algorithmen und Sattelpunktalgorithmen. Die Motivation für duale Optimierungsmethoden sind geschlossene Formen für die duale Formulierung entropie-regularisierter Wasserstein-Distanzen und ihrer Ableitungen, während die primale Formulierung nur in einigen Fällen einen Ausdruck in geschlossener Form hat, z.B. für Gaußsche Maße. Außerdem kann das duale Orakel, das den Gradienten der dualen Darstellung der entropie-regularisierten Wasserstein-Distanz zurückgibt, kostengünstiger berechnet werden als das primale Orakel, das den Gradienten der (entropie-regularisierten) Wasserstein-Distanz zurückgibt. Die Anzahl der dualen Orakelrufe ist in diesem Fall ebenfalls geringer, nämlich die Quadratwurzel der Anzahl der primalen Orakelrufe. Im Gegensatz zur primalen Zielfunktion hat die duale Zielfunktion einen Lipschitz-stetigen Gradienten aufgrund der starken Konvexität der regularisierten Wasserstein-Distanzen. Außerdem untersuchen wir die Sattelpunktformulierung des (nicht regularisierten) Wasserstein-Baryzentrums, die zu einem bilinearen Sattelpunktproblem führt. Dieser Ansatz ermöglicht es uns ebenfalls, optimale Komplexitätsgrenzen zu erhalten, und kann einfach auf dezentrale Weise umgesetzt werden. / In this thesis, we consider the Wasserstein barycenter problem of discrete probability measures, as well as the population Wasserstein barycenter problem given by a Fréchet mean, from the computational and statistical sides. The statistical focus is estimating the sample size of measures needed to calculate an approximation of a Fréchet mean (barycenter) of probability distributions with a given precision. For empirical risk minimization approaches, the question of regularization is also studied, and a new regularization is proposed that yields better complexity bounds than the quadratic one. The computational focus is developing decentralized algorithms for calculating Wasserstein barycenters: dual algorithms and saddle-point algorithms. The motivation for dual approaches is the closed forms available for the dual formulation of entropy-regularized Wasserstein distances and their derivatives, whereas the primal formulation has a closed-form expression only in some cases, e.g., for Gaussian measures. Moreover, the dual oracle, which returns the gradient of the dual representation of the entropy-regularized Wasserstein distance, can be computed more cheaply than the primal oracle returning the gradient of the (entropy-regularized) Wasserstein distance. The number of dual oracle calls is also smaller in this case, namely the square root of the number of primal oracle calls. Furthermore, in contrast to the primal objective, the dual objective has a Lipschitz continuous gradient due to the strong convexity of regularized Wasserstein distances. Moreover, we study the saddle-point formulation of the non-regularized Wasserstein barycenter problem, which leads to a bilinear saddle-point problem. This approach also allows us to obtain optimal complexity bounds, and it can easily be implemented in a decentralized setup.
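
As a point of reference for the entropy-regularized setting described above, the Python sketch below computes a centralized regularized Wasserstein barycenter by Sinkhorn-type iterative Bregman projections. It does not reproduce the decentralized dual or saddle-point algorithms developed in the thesis, and the function name and 1D example data are illustrative.

    import numpy as np

    def sinkhorn_barycenter(P, C, gamma=0.05, weights=None, iters=500):
        """Entropy-regularized Wasserstein barycenter of the columns of P (each a
        probability vector on the same grid), via iterative Bregman projections."""
        n, m = P.shape
        w = np.full(m, 1.0 / m) if weights is None else weights
        K = np.exp(-C / gamma)                   # Gibbs kernel of the ground cost
        V = np.ones((n, m))
        for _ in range(iters):
            U = P / (K @ V)                      # scalings matching the input marginals
            q = np.prod((K.T @ U) ** w, axis=1)  # weighted geometric mean = barycenter
            V = q[:, None] / (K.T @ U)           # scalings matching the barycenter marginal
        return q

    # Toy 1D example on a regular grid (illustrative data only).
    x = np.linspace(0.0, 1.0, 100)
    C = (x[:, None] - x[None, :]) ** 2           # squared-distance ground cost
    p1 = np.exp(-(x - 0.25) ** 2 / 0.002); p1 /= p1.sum()
    p2 = np.exp(-(x - 0.75) ** 2 / 0.002); p2 /= p2.sum()
    bar = sinkhorn_barycenter(np.stack([p1, p2], axis=1), C)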
