Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. Sequential Minimal Optimization (SMO) is a decomposition-based algorithm that breaks this large QP problem into a series of smallest possible QP problems. However, it still costs O(n²) computation time. In our SVM implementation, we can train on huge data sets in a distributed manner: the dataset is broken into chunks, and the Message Passing Interface (MPI) distributes each chunk to a different machine, where SVM training proceeds within that chunk. In addition, we moved the kernel calculation in SVM classification to a graphics processing unit (GPU), which can create concurrent threads with essentially zero scheduling overhead. In this thesis, we take advantage of this GPU architecture to improve the classification performance of SVM.
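As a rough illustration of the GPU mapping the abstract describes (not the thesis implementation), the following minimal CUDA sketch evaluates an RBF kernel against every support vector during classification, launching one lightweight thread per support vector; the kernel name and all parameters (rbf_kernel, n_sv, dim, gamma) are illustrative assumptions.

    // Minimal sketch, assuming a dense, row-major support-vector matrix
    // and an RBF kernel; names and layout are hypothetical.
    #include <cuda_runtime.h>
    #include <math.h>

    // One thread per support vector: thread i computes
    // alpha_i * y_i * exp(-gamma * ||sv_i - x||^2).
    __global__ void rbf_kernel(const float *sv,      // support vectors, n_sv x dim
                               const float *x,       // one test sample, length dim
                               const float *alpha_y, // alpha_i * y_i per support vector
                               float *out,           // per-SV contribution to decision value
                               int n_sv, int dim, float gamma)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n_sv) return;
        float dist2 = 0.0f;
        for (int d = 0; d < dim; ++d) {
            float diff = sv[i * dim + d] - x[d];
            dist2 += diff * diff;
        }
        out[i] = alpha_y[i] * expf(-gamma * dist2);
    }

The host would then sum the entries of out (directly or with a parallel reduction) and add the bias term to obtain the SVM decision value; because GPU threads are created with essentially no scheduling overhead, the one-thread-per-support-vector mapping is a natural fit for this kernel-evaluation step.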
Identifier | oai:union.ndltd.org:uno.edu/oai:scholarworks.uno.edu:td-1972
Date | 06 August 2009
Creators | Zhang, Hang
Publisher | ScholarWorks@UNO
Source Sets | University of New Orleans
Detected Language | English
Type | text
Format | application/pdf
Source | University of New Orleans Theses and Dissertations