
Parameter learning and support vector reduction in support vector regression

The selection and learning of kernel functions is an important but rarely studied problem in the field of support vector learning, even though the kernel function of a support vector regression model strongly influences its performance. The kernel function maps the dataset from the original data space into a feature space, so problems that are intractable in the low-dimensional input space may become solvable in the higher-dimensional feature space induced by the kernel.
This paper makes two main contributions. First, we introduce the gradient descent method to the learning of kernel functions. Using gradient descent, we derive learning rules for the parameters that govern the shape and distribution of the kernel functions, so better kernels can be obtained by training these parameters with respect to the risk minimization principle. Second, to reduce the number of support vectors, we apply the orthogonal least squares method: by selecting the most representative support vectors, the less important ones can be removed from the support vector regression model.
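The abstract does not reproduce the thesis's analytic learning rules, so the sketch below is only a minimal stand-in for the first contribution: it tunes the width gamma of an RBF kernel for scikit-learn's SVR by gradient descent on a validation loss, with the gradient approximated by central differences rather than the author's derived rules. The sinc data, the hyperparameters C and epsilon, the learning rate, and the step count are all illustrative assumptions, not settings from the thesis.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR

# Synthetic 1-D regression data (noisy sinc), split into train/validation.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(200)
X_tr, X_val, y_tr, y_val = X[:150], X[150:], y[:150], y[150:]

def val_loss(gamma):
    """Validation MSE of an RBF-kernel SVR as a function of the kernel width."""
    model = SVR(kernel="rbf", gamma=gamma, C=1.0, epsilon=0.05)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_val, model.predict(X_val))

# Gradient descent on gamma; the gradient is approximated by central
# differences here, standing in for the thesis's analytic learning rules.
gamma, lr, h = 1.0, 0.5, 1e-3
for step in range(30):
    grad = (val_loss(gamma + h) - val_loss(gamma - h)) / (2.0 * h)
    gamma = max(gamma - lr * grad, 1e-6)  # keep the kernel width positive

print(f"learned gamma = {gamma:.4f}, validation MSE = {val_loss(gamma):.5f}")
```

Treating the validation loss as the risk surrogate keeps the sketch short; the thesis itself trains the parameters against the risk minimization principle, which may use a different objective.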
Experimental results show that our approach derives better kernel functions than competing methods and achieves better generalization ability, while also effectively reducing the number of support vectors.
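As a companion sketch for the second contribution, the abstract does not spell out how orthogonal least squares is applied to rank support vectors, so the code below follows one plausible reading: the standard OLS forward-selection scheme of Chen, Cowan, and Grant (1991), where kernel columns centered at candidate support vectors are greedily chosen by their error reduction ratio and the rest are dropped. The data, kernel width, and number of vectors retained are hypothetical.

```python
import numpy as np

def ols_select(K, y, n_select, tol=1e-12):
    """Greedy orthogonal least squares forward selection.

    At each step, pick the kernel column whose orthogonalized component
    explains the largest share of the target energy (its error reduction
    ratio). K is an (n, m) matrix of candidate regressors, y an (n,)
    target. Returns indices of selected columns, most important first."""
    selected, Q = [], []
    for _ in range(n_select):
        best_i, best_err, best_w = None, -np.inf, None
        for i in range(K.shape[1]):
            if i in selected:
                continue
            w = K[:, i].astype(float).copy()
            for q in Q:                      # modified Gram-Schmidt step
                w -= (q @ w) / (q @ q) * q
            denom = w @ w
            if denom < tol:                  # column already spanned
                continue
            err = (w @ y) ** 2 / (denom * (y @ y))  # error reduction ratio
            if err > best_err:
                best_i, best_err, best_w = i, err, w
        if best_i is None:
            break
        selected.append(best_i)
        Q.append(best_w)
    return selected

# Demo: rank RBF kernel columns built at candidate support-vector locations,
# then keep only the ten most representative ones.
rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sinc(X).ravel()
centers = X[:50]                             # candidate support vectors
K = np.exp(-1.0 * (X - centers.T) ** 2)      # (200, 50) kernel matrix, gamma = 1
keep = ols_select(K, y, n_select=10)
print("indices of the 10 most representative support vectors:", keep)
```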

Identifier: oai:union.ndltd.org:NSYSU/oai:NSYSU:etd-0721106-215207
Date: 21 July 2006
Creators: Yang, Chih-cheng
Contributors: Shie-jue Lee, Chung-ming Kuo, Tsung-chuan Huang, Chih-hung Wu, Chen-sen Ouyang
Publisher: NSYSU
Source Sets: NSYSU Electronic Thesis and Dissertation Archive
Language: Cholon
Detected Language: English
Type: text
Format: application/pdf
Source: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0721106-215207
Rights: not_available, Copyright information available at source archive
