
Convex Large Margin Training - Unsupervised, Semi-supervised, and Robust Support Vector Machines

Support vector machines (SVMs) have been a dominant machine learning technique for more than a decade. The intuitive principle behind SVM training is to find the maximum margin separating hyperplane for a given set of binary labeled training data. To date, SVMs have primarily been applied to supervised learning problems, where target class labels are provided with the data. Developing unsupervised extensions of SVMs, where no class labels are given, turns out to be a challenging problem. In this dissertation, I propose a principled approach to unsupervised and semi-supervised SVM training by formulating convex relaxations of the natural training criterion: find a (constrained) labeling that would yield an optimal SVM classifier on the resulting labeled training data. This relaxation yields a semidefinite program (SDP) that can be solved in polynomial time.
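
As a sketch of what this relaxation looks like (under simplifying assumptions, and not necessarily in the thesis's exact form), the natural unsupervised criterion optimizes the SVM objective over labelings as well as classifiers; omitting the offset term, it reads

\[
\min_{y \in \{-1,+1\}^n} \;\; \min_{w,\, \xi \ge 0} \;\; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^n \xi_i
\quad \text{s.t.} \quad y_i\, w^\top \phi(x_i) \ge 1 - \xi_i, \;\; i = 1, \dots, n.
\]

Passing to the SVM dual and replacing the combinatorial label matrix \(y y^\top\) with a relaxed variable \(M\) yields a problem that is convex in \(M\):

\[
\min_{M \succeq 0,\; \mathrm{diag}(M) = e,\; e^\top M e \le \varepsilon^2} \;\;
\max_{0 \le \alpha \le C e} \;\; e^\top \alpha - \frac{1}{2}\, \alpha^\top (K \circ M)\, \alpha,
\]

where \(K\) is the kernel matrix, \(\circ\) denotes the elementwise product, and \(e^\top M e \le \varepsilon^2\) is a class-balance constraint relaxing \((e^\top y)^2 \le \varepsilon^2\). Since the inner maximum is a pointwise supremum of functions linear in \(M\), the outer problem is convex in \(M\), and an epigraph reformulation via the Schur complement turns it into the SDP mentioned above.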
The resulting training procedures can be applied to two-class and multi-class problems, and ultimately to the multivariate case, achieving high-quality results in each case.

In addition to unsupervised training, I also consider the problem of reducing the outlier sensitivity of standard supervised SVM training. Here I show that a similar convex relaxation can be applied to improve the robustness of SVMs by explicitly suppressing outliers in the training process. The proposed approach achieves results superior to standard SVMs in the presence of outliers.
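
To make the unsupervised relaxation above concrete, here is a minimal sketch in Python using cvxpy. It is an illustrative toy rather than the thesis's implementation: the function name, parameter names, the choice of a 2-norm soft margin (quadratic slack penalty, so the dual has no upper bound on alpha and admits a simple Schur-complement epigraph), and the eigenvector rounding at the end are all assumptions of this sketch.

```python
# A minimal sketch of an SDP relaxation for unsupervised SVM training
# (max-margin clustering). Illustrative assumptions: 2-norm soft margin,
# no offset term, and sign-of-leading-eigenvector rounding.
import numpy as np
import cvxpy as cp

def max_margin_clustering_sdp(K, C=1.0, eps=1.0):
    """Relax the label matrix y y^T to M >= 0 with diag(M) = 1 and minimize
    the (rescaled) dual SVM value max_{a>=0} 2 e^T a - a^T (K o M + I/C) a.

    K   : (n, n) positive semidefinite kernel matrix of the unlabeled points.
    C   : soft-margin trade-off (enters the dual as the I/C regularizer).
    eps : class-balance slack, relaxing (1^T y)^2 <= eps^2.
    """
    n = K.shape[0]
    e = np.ones(n)
    M = cp.Variable((n, n), symmetric=True)  # relaxation of y y^T
    mu = cp.Variable(n)                      # multipliers for alpha >= 0
    delta = cp.Variable()                    # epigraph of the dual value

    G = cp.multiply(K, M) + np.eye(n) / C    # K o M + I/C (2-norm soft margin)
    v = cp.reshape(e + mu, (n, 1))
    # Schur complement: the block being PSD encodes
    #   delta >= (e + mu)^T G^{-1} (e + mu)
    #          = max_{alpha >= 0} 2 e^T alpha - alpha^T G alpha,
    # i.e. delta upper-bounds the (rescaled) dual SVM value for this M.
    block = cp.bmat([[G, v],
                     [v.T, cp.reshape(delta, (1, 1))]])
    block = 0.5 * (block + block.T)          # symmetrize for the PSD constraint
    constraints = [
        block >> 0,
        mu >= 0,
        M >> 0,
        cp.diag(M) == 1,
        cp.sum(M) <= eps**2,                 # class balance: e^T M e <= eps^2
    ]
    cp.Problem(cp.Minimize(delta), constraints).solve(solver=cp.SCS)

    # Round the relaxed M back to binary labels via its leading eigenvector.
    _, vecs = np.linalg.eigh(M.value)
    return np.sign(vecs[:, -1])
```

On a small two-cluster toy problem (for instance, a linear kernel K = X @ X.T for points drawn from two well-separated Gaussians), the recovered sign vector typically splits the clusters. Note the O(n^2) matrix variable: this sketch only scales to modest n, which is consistent with SDP relaxations of this kind being polynomial-time but expensive.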

Identifier: oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:OWTU.10012/3076
Date: January 2007
Creators: Xu, Linli
Source Sets: Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language: English
Type: Thesis or Dissertation
Format: 1645797 bytes, application/pdf
