1. Interval-based possibility theory : conditioning and probability/possibility transformations / Théorie des possibilités à intervalles : conditionnement et transformations probabilités/possibilités

Levray, Amélie, 08 December 2017.
This thesis contributes to the development of efficient formalisms for handling uncertain information. Existing formalisms such as probability theory and possibility theory are among the best-known and most widely used settings for representing such information. Extensions and generalizations (e.g. imprecise probability theory, interval-based possibility theory) have been proposed to handle incomplete and ill-known knowledge and to reason with the knowledge of a group of experts. We are particularly interested in reasoning tasks within these theories, such as conditioning. The contributions of this thesis are divided into two parts. In the first part, we tackle conditioning in the interval-based possibilistic framework and in the set-valued possibilistic framework. The purpose is to develop a conditioning machinery for interval-based possibilistic logic. Conditioning in the standard possibilistic setting differs depending on whether a qualitative or a quantitative scale is considered, and our work deals with both definitions of possibilistic conditioning. This leads us to investigate a new extension of possibilistic logic, defined as set-valued possibilistic logic, and its conditioning machinery in the qualitative possibilistic setting. These results, especially in terms of complexity, lead us to study transformations, more precisely from probability theory to possibility theory. The second part of our contributions deals with probability-possibility transformation procedures. Indeed, we analyze the properties of reasoning tasks such as conditioning and marginalization under these transformations. We also tackle transformations from imprecise probability theory to possibility theory, with a particular interest in MAP inference.
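For readers unfamiliar with the starting point that the thesis generalizes to interval-based and set-valued settings, the two standard conditioning rules of precise possibility theory (not reproduced from the thesis itself, but the usual textbook definitions) can be stated as follows. In the quantitative (product-based) setting, for $\Pi(\phi) > 0$:

$$\pi(\omega \mid \phi) = \begin{cases} \pi(\omega)/\Pi(\phi) & \text{if } \omega \models \phi \\ 0 & \text{otherwise,} \end{cases}$$

while in the qualitative (min-based) setting:

$$\pi(\omega \mid \phi) = \begin{cases} 1 & \text{if } \omega \models \phi \text{ and } \pi(\omega) = \Pi(\phi) \\ \pi(\omega) & \text{if } \omega \models \phi \text{ and } \pi(\omega) < \Pi(\phi) \\ 0 & \text{otherwise,} \end{cases}$$

where $\Pi(\phi) = \max_{\omega \models \phi} \pi(\omega)$. The thesis studies how such rules extend when each $\pi(\omega)$ is only known to lie in an interval or a set of values.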
2. Quelques applications de l’optimisation numérique aux problèmes d’inférence et d’apprentissage / Few applications of numerical optimization in inference and learning

Kannan, Hariprasad, 28 September 2018.
Numerical optimization and machine learning have had a fruitful relationship, from the perspective of both theory and application. In this thesis, we present an application-oriented take on some inference and learning problems. Linear programming relaxations are central to maximum a posteriori (MAP) inference in discrete Markov Random Fields (MRFs). In particular, inference in higher-order MRFs presents challenges in terms of efficiency, scalability and solution quality. In this thesis, we study the benefit of using Newton methods to efficiently optimize the Lagrangian dual of a smooth version of the problem. We investigate their ability to achieve superior convergence behavior and to better handle the ill-conditioned nature of the formulation, as compared to first-order methods. We show that it is indeed possible to obtain an efficient trust-region Newton method, which uses the true Hessian, for a broad range of MAP inference problems. Given the specific opportunities and challenges in the MAP inference formulation, we present details concerning (i) efficient computation of the Hessian and Hessian-vector products, (ii) a strategy to damp the Newton step that aids efficient and correct optimization, and (iii) steps to improve the efficiency of the conjugate gradient method through a truncation rule and a pre-conditioner. We also demonstrate through numerical experiments how a quasi-Newton method can be a good choice for MAP inference in large graphs. MAP inference based on a smooth formulation could greatly benefit from efficient sum-product computation, which is required for computing the gradient and the Hessian. We show a way to perform sum-product computation for trees with sparse clique potentials; this result could readily be used by other algorithms as well. We show results demonstrating the usefulness of our approach on higher-order MRFs, and we discuss potential research topics regarding tightening the LP relaxation and parallel algorithms for MAP inference.

Unsupervised learning is an important topic in machine learning, and it could potentially help with high-dimensional problems like inference in graphical models. We present a general framework for unsupervised learning based on optimal transport and sparse regularization. Optimal transport presents interesting challenges from an optimization point of view, with its simplex constraints on the rows and columns of the transport plan. We show one way to formulate efficient optimization problems inspired by optimal transport: imposing only one set of the simplex constraints, and imposing structure on the transport plan through sparse regularization. We show how unsupervised learning algorithms like exemplar clustering, center-based clustering and kernel PCA fit into this framework through different forms of regularization. We especially demonstrate a promising approach to the pre-image problem in kernel PCA: several methods have been proposed over the years, which generally assume certain types of kernels, have too many hyper-parameters, or make restrictive approximations of the underlying geometry, whereas our method is more general, has only one hyper-parameter to tune, and enjoys some interesting geometric properties. From an optimization point of view, we show how to compute the gradient of a smooth version of the Schatten p-norm and how it can be used within a majorization-minimization scheme. Finally, we present results from our various experiments.
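The thesis's actual MAP-inference solver is not reproduced in this listing; as a rough, self-contained sketch of the kind of machinery the abstract describes (a damped, truncated Newton step that only needs Hessian-vector products and a CG truncation rule), the following NumPy snippet may help fix ideas. The function names (`newton_cg_step`, `hvp`) and the toy objective are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def newton_cg_step(grad, hvp, x, cg_tol=0.1, cg_maxiter=50, damping=1e-4):
    """One truncated-Newton step: approximately solve (H + damping*I) d = -g with CG.

    grad(x)   -> gradient vector at x
    hvp(x, v) -> Hessian-vector product H(x) @ v (the Hessian is never formed)
    CG stops once the residual is small relative to ||g|| (an inexact-Newton
    truncation rule); `damping` regularizes a near-singular Hessian.
    """
    g = grad(x)
    d = np.zeros_like(x)
    r = -g.copy()                      # residual of (H + damping*I) d = -g at d = 0
    p = r.copy()
    rs = r @ r
    stop = cg_tol * np.linalg.norm(g)
    for _ in range(cg_maxiter):
        if np.sqrt(rs) <= stop:        # truncation rule
            break
        Hp = hvp(x, p) + damping * p
        alpha = rs / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

# Toy illustration on a smooth strictly convex function:
#   f(x) = sum(softplus(x_i)) + 0.5 * ||x - b||^2
b = np.linspace(-2.0, 2.0, 5)

def grad(x):
    return 1.0 / (1.0 + np.exp(-x)) + (x - b)

def hvp(x, v):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s) * v + v        # diagonal Hessian times v

x = np.zeros_like(b)
for _ in range(20):
    x = x + newton_cg_step(grad, hvp, x)  # a line search / trust region would go here
print(x, np.linalg.norm(grad(x)))
```

In the thesis, steps of this flavor are applied to the smoothed Lagrangian dual of the MAP LP relaxation, where Hessian-vector products come from sum-product computations rather than the toy closed forms used above.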
