
Generalized Maximum Entropy, Convexity and Machine Learning

This thesis identiļ¬es and extends techniques that can be linked to the principle
of maximum entropy (maxent) and applied to parameter estimation in machine
learning and statistics. Entropy functions based on deformed logarithms are used
to construct Bregman divergences, and together these represent a generalization
of relative entropy. The framework is analyzed using convex analysis to charac-
terize generalized forms of exponential family distributions. Various connections
to the existing machine learning literature are discussed and the techniques are
applied to the problem of non-negative matrix factorization (NMF).
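As a minimal illustrative sketch (not code from the thesis itself), the central object in the abstract, the Bregman divergence generated by a strictly convex function phi, is D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>. Choosing phi(t) = t log t - t yields the generalized Kullback-Leibler divergence often minimized in NMF, while phi(t) = t^2/2 yields squared Euclidean distance; the function names below are this sketch's own, not the thesis's.

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman divergence D_phi(x, y), summed elementwise over arrays."""
    return np.sum(phi(x) - phi(y) - grad_phi(y) * (x - y))

# phi(t) = t*log(t) - t generates the generalized KL divergence,
# the loss commonly used in non-negative matrix factorization.
phi_kl = lambda t: t * np.log(t) - t
grad_kl = np.log

# phi(t) = t**2 / 2 generates squared Euclidean distance.
phi_sq = lambda t: t ** 2 / 2
grad_sq = lambda t: t

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.5, 2.0, 2.5])

d_kl = bregman(phi_kl, grad_kl, x, y)  # equals sum(x*log(x/y) - x + y)
d_sq = bregman(phi_sq, grad_sq, x, y)  # equals 0.5 * ||x - y||**2
```

Swapping in other convex generators, such as entropies built from deformed logarithms, changes the divergence being minimized without changing this formula, which is the sense in which the construction generalizes relative entropy.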

Identifier: oai:union.ndltd.org:ADTP/233142
Date: January 2008
Creators: Sears, Timothy Dean, tim.sears@biogreenoil.com
Publisher: The Australian National University. Research School of Information Sciences and Engineering
Source Sets: Australasian Digital Theses Program
Language: English
Detected Language: English
Rights: http://www.anu.edu.au/legal/copyrit.html, Copyright Timothy Dean Sears
