1. Apprentissage dans les jeux à champ moyen / Learning in Mean Field Games

Hadikhanloo, Saeed, 29 January 2018
Mean Field Games (MFG) are a class of differential games in which each agent is infinitesimal and interacts with a huge population of other agents. In this thesis, we raise the question of the actual formation of the MFG equilibrium. Indeed, the game being quite involved, it is unrealistic to assume that the agents can actually compute the equilibrium configuration. This seems to indicate that, if the equilibrium configuration arises, it is because the agents have learned how to play the game. Hence the main question is to find learning procedures in mean field games and to investigate whether they converge to an equilibrium. We drew inspiration from learning schemes in static games and tried to apply them to our dynamical MFG model. We focus in particular on the application of fictitious play and online mirror descent to different types of mean field games: potential, monotone, or discrete.
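To make the static-game learning schemes mentioned in the abstract concrete, here is a minimal sketch of fictitious play in a two-player bimatrix game, not the mean-field setting of the thesis; the payoff matrices, iteration count, and the matching-pennies example are illustrative assumptions.

```python
import numpy as np

def fictitious_play(A, B, n_iter=5000, rng=None):
    """Fictitious play in a two-player bimatrix game.

    A[i, j]: payoff of player 1 for actions (i, j); B[i, j]: payoff of player 2.
    At each round, each player best-responds to the empirical distribution
    of the opponent's past actions.  Returns the empirical mixed strategies.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    counts1 = np.zeros(m)  # how often player 1 has played each action
    counts2 = np.zeros(n)  # how often player 2 has played each action
    # arbitrary initial actions
    counts1[rng.integers(m)] += 1
    counts2[rng.integers(n)] += 1
    for _ in range(n_iter):
        emp1 = counts1 / counts1.sum()      # empirical strategy of player 1
        emp2 = counts2 / counts2.sum()      # empirical strategy of player 2
        counts1[np.argmax(A @ emp2)] += 1   # best response of player 1
        counts2[np.argmax(emp1 @ B)] += 1   # best response of player 2
    return counts1 / counts1.sum(), counts2 / counts2.sum()

if __name__ == "__main__":
    # Matching pennies: the empirical strategies should approach (1/2, 1/2).
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    B = -A
    x, y = fictitious_play(A, B, n_iter=20000)
    print("player 1:", x, "player 2:", y)
```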
2. Numerical splitting methods for nonsmooth convex optimization problems

Bitterlich, Sandy, 11 December 2023
In this thesis, we develop and investigate numerical methods for solving nonsmooth convex optimization problems in real Hilbert spaces. We construct algorithms that handle the terms in the objective function and the constraints of the minimization problems separately, which makes each iteration simpler to compute. In the first part of the thesis, we extend the well-known AMA method of Tseng to the Proximal AMA algorithm by introducing variable metrics in the subproblems of the primal-dual algorithm. For a special choice of metrics, the subproblems become proximal steps. Thus, for objectives arising in many important applications, such as signal and image processing, machine learning, or statistics, the iteration consists of closed-form expressions that are easy to evaluate. Later in the thesis, we deepen the investigation of this algorithm by studying an associated dynamical system; an explicit time discretization of this system recovers Proximal AMA. We show the existence and uniqueness of strong global solutions of the dynamical system and prove that its trajectories converge to the primal-dual solution of the considered optimization problem. In the last part of the thesis, we minimize a sum of finitely many nonsmooth convex functions (each possibly composed with a linear operator) over a nonempty, closed and convex set by smoothing these functions. We consider a stochastic algorithm that takes gradient steps on the smoothed functions (these are proximal steps if we smooth by the Moreau envelope) and uses a mirror map to "mirror" the iterates onto the feasible set. In numerical applications, we compare these methods to similar algorithms and discuss their advantages and practical usability.
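To illustrate why proximal steps come in closed form for common signal-processing and machine-learning objectives, the sketch below shows the l1-norm proximal operator (soft-thresholding) inside a plain proximal-gradient loop for a LASSO-type problem; this is not the Proximal AMA algorithm of the thesis, and all problem data are made-up placeholders.

```python
import numpy as np

def prox_l1(v, tau):
    """Proximal operator of tau*||.||_1, i.e. soft-thresholding (closed form)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_lasso(A, b, lam, step, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal-gradient steps."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                   # gradient of the smooth part
        x = prox_l1(x - step * grad, step * lam)   # closed-form proximal step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20)
    x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L, L the Lipschitz constant
    x_hat = proximal_gradient_lasso(A, b, lam=0.1, step=step)
    print(np.round(x_hat[:5], 3))
```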
3. Learning Robust Support Vector Machine Classifiers With Uncertain Observations

Bhadra, Sahely, 03 1900
The central theme of the thesis is the study of linear and nonlinear SVM formulations in the presence of uncertain observations. The main contribution of this thesis is to derive robust classifiers from partial knowledge of the underlying uncertainty. In the case of linear classification, a new bounding scheme based on the Bernstein inequality is proposed, which models interval-valued uncertainty in a less conservative fashion and hence is expected to generalize better than existing methods. Next, the potential of partial information, such as bounds on second-order moments together with support information, is explored; bounds on second-order moments make the resulting classifiers robust to moment-estimation errors. Uncertainty in the dataset leads to uncertainty in the kernel matrices. A novel distribution-free large deviation inequality is proposed which handles uncertainty in kernels through copositive programming in a chance-constraint setting. Although such formulations are NP-hard, in several cases of interest the problem reduces to a convex program. However, the independence assumption underlying this inequality is restrictive and may not always define a valid uncertain kernel. To alleviate this problem, an affine-set-based alternative is proposed, and using a robust optimization framework the resulting problem is posed as a minimax problem. In both the chance-constrained and robust-optimization formulations (for nonlinear SVMs), procedures in the style of the mirror descent algorithm (MDA) are applied.
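As a rough illustration of the mirror-descent-style procedures referred to above, the sketch below runs entropic mirror descent on the probability simplex for a toy quadratic objective; the objective, step size, and averaging scheme are illustrative choices and not the thesis's MDA-like solvers for the SVM formulations.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, step, n_iter=1000):
    """Mirror descent on the probability simplex with the entropy mirror map.

    grad: callable returning a (sub)gradient of the objective at x.
    The multiplicative update x <- x * exp(-step * grad) / Z is the
    closed-form mirror step for the negative-entropy mirror map.
    """
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, n_iter + 1):
        g = grad(x)
        x = x * np.exp(-step * g)
        x /= x.sum()             # renormalize onto the simplex
        avg += (x - avg) / t     # running average of the iterates
    return avg

if __name__ == "__main__":
    # Toy objective: f(x) = 0.5 * ||x - c||^2 restricted to the simplex.
    c = np.array([0.7, 0.2, 0.1])
    grad = lambda x: x - c
    x0 = np.ones(3) / 3
    print(np.round(entropic_mirror_descent(grad, x0, step=0.5), 3))
```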
4. Stochastic approximation and least-squares regression, with applications to machine learning / Approximation stochastique et régression par moindres carrés : applications en apprentissage automatique

Flammarion, Nicolas, 24 July 2017
Many problems in machine learning are naturally cast as the minimization of a smooth function defined on a Euclidean space. For supervised learning, this includes least-squares regression and logistic regression. While small problems are efficiently solved by classical optimization algorithms, large-scale problems are typically solved with first-order techniques based on gradient descent. In this manuscript, we consider the particular case of the quadratic loss. In the first part, we are interested in its minimization when its gradients are only accessible through a stochastic oracle. In the second part, we consider two applications of the quadratic loss in machine learning: clustering and estimation with shape constraints. The first main contribution is a unified framework for optimizing non-strongly convex quadratic functions, which encompasses accelerated gradient descent and averaged gradient descent. This new framework suggests an alternative algorithm that exhibits the positive behavior of both averaging and acceleration. The second main contribution aims at obtaining the optimal prediction error rates for least-squares regression, both in terms of the dependence on the noise of the problem and of forgetting the initial conditions. Our new algorithm rests upon averaged accelerated gradient descent. The third main contribution deals with the minimization of composite objective functions composed of the expectation of quadratic functions and a convex regularizer. We extend earlier results on least-squares regression to any regularizer and any geometry represented by a Bregman divergence. As a fourth contribution, we consider the discriminative clustering framework. We propose its first theoretical analysis, a novel sparse extension, a natural extension to the multi-label scenario, and an efficient iterative algorithm with better running-time complexity than existing methods. The fifth main contribution deals with the seriation problem. We propose a statistical approach to this problem where the matrix is observed with noise, and we study the corresponding minimax rate of estimation. We also suggest a computationally efficient estimator whose performance is studied both theoretically and experimentally.
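As a loose illustration of the iterate-averaging idea the abstract builds on (and not the averaged accelerated algorithm of the thesis itself), the sketch below applies constant-step-size stochastic gradient descent with Polyak-Ruppert averaging to a synthetic least-squares problem; the data, step size, and epoch count are made-up placeholders.

```python
import numpy as np

def averaged_sgd_least_squares(X, y, step, n_epochs=1):
    """Constant-step-size SGD with iterate averaging for least-squares regression.

    Each stochastic gradient uses a single sample; the returned estimate is
    the running (Polyak-Ruppert) average of the iterates.
    """
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)
    t = 0
    for _ in range(n_epochs):
        for i in np.random.permutation(n):
            t += 1
            grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5 * (x_i.w - y_i)^2
            w = w - step * grad
            w_avg += (w - w_avg) / t          # running average of the iterates
    return w_avg

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((2000, 5))
    w_star = np.array([1.0, -1.0, 0.5, 0.0, 2.0])
    y = X @ w_star + 0.1 * rng.standard_normal(2000)
    w_hat = averaged_sgd_least_squares(X, y, step=0.05, n_epochs=5)
    print(np.round(w_hat, 2))
```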
