Fast Online Training of L1 Support Vector Machines

This thesis proposes a novel experimental environment, non-linear stochastic gradient descent (NL-SGD), as well as a novel online learning algorithm (OL SVM), for solving the classic nonlinear soft-margin L1 Support Vector Machine (SVM) problem with a Stochastic Gradient Descent (SGD) algorithm. The NL-SGD implementation uses a distinctive scheme for random sampling and for computing the alpha coefficients. The developed code achieves competitive accuracy and speed compared with the Direct L2 SVM solutions obtained by the Minimal Norm SVM (MN-SVM) and Non-Negative Iterative Single Data Algorithm (NN-ISDA) software. The latter two algorithms have shown excellent performance on large datasets, which is why NL-SGD and OL SVM are compared against them. All experiments were run under strict double (nested) cross-validation, and the results are reported in terms of accuracy and CPU time. OL SVM has been implemented in MATLAB and is compared with the classic Sequential Minimal Optimization (SMO) algorithm implemented in MATLAB's solver, fitcsvm. The OL SVM experiments use k-fold cross-validation, with results reported as % error and % speedup in CPU time.
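
To make the general technique concrete, below is a minimal sketch of a kernelized, Pegasos-style SGD solver for the nonlinear soft-margin L1 SVM, which minimizes (lambda/2)*||f||^2 + (1/m)*sum_i max(0, 1 - y_i f(x_i)). This is an illustration of the standard approach only, not the thesis's NL-SGD or OL SVM code; the RBF kernel, the function names, and all parameter values are assumptions for the example.

    # Hedged sketch: kernelized Pegasos-style SGD for the soft-margin L1 SVM.
    # NOT the thesis's NL-SGD implementation; kernel choice and parameters
    # are illustrative assumptions.

    import numpy as np

    def rbf_kernel(X, x, gamma=0.5):
        """RBF kernel values between each row of X and a single point x."""
        return np.exp(-gamma * np.sum((X - x) ** 2, axis=1))

    def kernel_pegasos(X, y, lam=0.01, T=10_000, gamma=0.5, seed=0):
        """Kernelized Pegasos: one uniformly sampled point per iteration.

        X : (m, d) training inputs, y : (m,) labels in {-1, +1}.
        Returns alpha, counting how often each point violated the margin
        when sampled; the decision function is
            f(x) = (1 / (lam * T)) * sum_j alpha[j] * y[j] * K(x_j, x).
        """
        rng = np.random.default_rng(seed)
        m = X.shape[0]
        alpha = np.zeros(m)
        for t in range(1, T + 1):
            i = rng.integers(m)                 # uniform random sample
            k = rbf_kernel(X, X[i], gamma)      # kernel row for x_i
            margin = y[i] * (alpha * y) @ k / (lam * t)
            if margin < 1.0:                    # hinge-loss subgradient step
                alpha[i] += 1.0
        return alpha

    def predict(alpha, X_train, y_train, X_test, lam, T, gamma=0.5):
        """Sign of the kernel expansion at each test point."""
        scores = [(alpha * y_train) @ rbf_kernel(X_train, x, gamma) / (lam * T)
                  for x in X_test]
        return np.sign(scores)

The per-iteration cost is dominated by one kernel row against the training set, which is why sampling strategy and alpha bookkeeping, the aspects the thesis focuses on, matter for speed on large datasets.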

Identifier: oai:union.ndltd.org:vcu.edu/oai:scholarscompass.vcu.edu:etd-5368
Date: 01 January 2016
Creators: Melki, Gabriella A
Publisher: VCU Scholars Compass
Source Sets: Virginia Commonwealth University
Detected Language: English
Type: text
Format: application/pdf
Source: Theses and Dissertations
Rights: © The Author