1. Support Vector Machine and Application in Seizure Prediction. Qiu, Simeng, 04 1900.
Machine learning (ML) is now used in a wide range of areas, from engineering to business. In this paper, we first present several kernel-based machine learning methods for solving classification, regression, and clustering problems; these methods perform well but also have limitations. We give examples for each method and analyze its advantages and disadvantages in different scenarios. We then focus on one of the most popular classification methods, the Support Vector Machine (SVM).
We introduce the basic theory of SVMs, their advantages, and the scenarios in which they are suited to classification problems. We also explain a convenient approach to solving the SVM training problem called Sequential Minimal Optimization (SMO). Moreover, the one-class SVM can be viewed in a different way as Support Vector Data Description (SVDD), a well-known non-linear model; SVDD can be solved by combining a Gaussian RBF kernel with SMO. Finally, we compare the behavior and performance of the SVM-SMO and SVM-SVDD implementations.
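As a rough illustration of the SVDD idea described above, the sketch below fits a one-class SVM with a Gaussian RBF kernel using scikit-learn, whose libsvm backend uses an SMO-type solver; the synthetic data and the gamma and nu values are assumptions made only for this example, not settings from the thesis.

```python
# A minimal sketch of a one-class SVM (SVDD-style boundary) with a Gaussian RBF kernel.
# Assumptions: synthetic data and hyperparameters chosen only for illustration.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(200, 2))     # "normal" data
X_test = np.vstack([rng.normal(0.0, 1.0, (10, 2)),          # more normal points
                    rng.uniform(4.0, 6.0, (5, 2))])         # obvious outliers

# nu bounds the fraction of training points allowed outside the description;
# gamma is the width parameter of the Gaussian RBF kernel.
svdd = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05)
svdd.fit(X_train)

pred = svdd.predict(X_test)      # +1 = inside the description, -1 = outlier
print("predictions:", pred)
print("number of support vectors:", len(svdd.support_))
```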
For the application part, we use SVMs for seizure forecasting in canine epilepsy, comparing the results of several methods, including random forests, extremely randomized trees, and SVMs, on the binary task of classifying preictal (pre-seizure) and interictal (between-seizure) data. We conclude that the SVM gives the best performance.
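The kind of comparison described above can be sketched as follows; the synthetic features, classifier settings, and ROC AUC metric are assumptions for illustration and do not reproduce the canine-epilepsy experiments.

```python
# A hedged sketch of comparing random forest, extremely randomized trees, and SVM
# on a binary (preictal vs. interictal) classification task. The data here are
# synthetic stand-ins, not EEG features from the thesis.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)  # class 1 ~ "preictal"

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "extra trees": ExtraTreesClassifier(n_estimators=200, random_state=0),
    "SVM (RBF)": SVC(kernel="rbf", C=1.0, gamma="scale"),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:15s} mean ROC AUC = {auc.mean():.3f}")
```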
2. Estimation of Parameters in Support Vector Regression. Chan, Yi-Chao, 21 July 2006.
The selection and modification of kernel functions is a very important problem in the field of support vector learning, since the kernel function has a great influence on the performance of a support vector machine. The kernel function maps the dataset from the original data space into a feature space, so problems that cannot be solved in the low-dimensional space may become solvable in a higher dimension through this transformation. In this thesis, we adopt the FCM (fuzzy c-means) clustering algorithm to group data patterns into clusters, and then use a statistical approach to calculate the standard deviation of each pattern with respect to the other patterns in the same cluster. In this way we can properly estimate the distribution of the data patterns and assign a suitable standard deviation to each pattern; this standard deviation plays the role of the width (variance) of a radial basis function. The original data patterns, together with the per-pattern variances, are then used for support vector learning. Experimental results show that our approach derives better kernel functions than other methods and achieves better learning and generalization ability.
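A minimal sketch of the per-pattern width estimation is given below. Since fuzzy c-means is not available in scikit-learn, plain k-means is substituted for FCM purely for illustration; the synthetic data and cluster count are assumptions, not values from the thesis.

```python
# A sketch of the idea above: cluster the data, then assign each pattern a
# standard deviation computed from the other patterns in its cluster, to be used
# as the width of a per-pattern RBF. The thesis uses fuzzy c-means (FCM);
# plain k-means is substituted here only because FCM is not in scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.2, random_state=0)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

sigma = np.empty(len(X))
for i, (x_i, c) in enumerate(zip(X, labels)):
    members = X[labels == c]
    others = members[~np.all(members == x_i, axis=1)]   # exclude the pattern itself
    # root-mean-square distance of this pattern to the rest of its cluster
    sigma[i] = np.sqrt(np.mean(np.sum((others - x_i) ** 2, axis=1)))

print("per-pattern RBF widths: min %.3f, mean %.3f, max %.3f"
      % (sigma.min(), sigma.mean(), sigma.max()))
```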
3. SVM-Based Robust Template Design for Cellular Neural Networks Implementing an Arbitrary Boolean Function. Teng, Wei-chih, 27 June 2005.
In this thesis, the geometric margin is used for the first time as the robustness indicator of an uncoupled cellular neural network implementing a given Boolean function. First, robust template design for uncoupled cellular neural networks implementing linearly separable Boolean functions by support vector machines is proposed. A fast sequential minimal optimization algorithm is presented to find maximal margin classifiers, which in turn determine the robust templates. Some general properties of robust templates are investigated. An improved CFC algorithm implementing an arbitrarily given Boolean function is proposed. Two illustrative examples are provided to demonstrate the validity of the proposed method.
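The connection between maximal-margin classifiers and robust templates can be illustrated with the sketch below. The choice of the majority-of-9 Boolean function, the +1/-1 input coding, and the hard-margin approximation via a large C are assumptions made for this example, not the templates or examples from the thesis.

```python
# A sketch of robust template design via a maximal-margin classifier.
# Assumption: the linearly separable Boolean function used here is the majority
# function on a 3x3 neighborhood (not necessarily an example from the thesis);
# inputs are coded in {-1, +1} as usual for cellular neural networks.
import itertools
import numpy as np
from sklearn.svm import SVC

# All 512 binary patterns of a 3x3 neighborhood, coded in {-1, +1}.
patterns = np.array(list(itertools.product([-1, 1], repeat=9)), dtype=float)
labels = np.where(patterns.sum(axis=1) > 0, 1, -1)   # majority-of-9 output

# A very large C approximates the hard-margin (maximal geometric margin) classifier.
clf = SVC(kernel="linear", C=1e6)
clf.fit(patterns, labels)

template = clf.coef_.reshape(3, 3)   # plays the role of the feedforward template
bias = clf.intercept_[0]             # plays the role of the cell bias/threshold
print("template (B):\n", np.round(template, 3))
print("bias (z): %.3f" % bias)
print("training accuracy:", (clf.predict(patterns) == labels).mean())
```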
4. Duality, Derivative-Based Training Methods and Hyperparameter Optimization for Support Vector Machines. Strasdat, Nico, 18 October 2023.
In this thesis we consider the application of Fenchel's duality theory and gradient-based methods for the training and hyperparameter optimization of Support Vector Machines. We show that the dualization of convex training problems is possible theoretically in a rather general formulation. For training problems following a special structure (for instance, standard training problems) we find that the resulting optimality conditions can be interpreted concretely. This approach immediately leads to the well-known notion of support vectors and a formulation of the Representer Theorem. The proposed theory is applied to several examples such that dual formulations of training problems and associated optimality conditions can be derived straightforwardly. Furthermore, we consider different formulations of the primal training problem which are equivalent under certain conditions. We also argue that the relation of the corresponding solutions to the solution of the dual training problem is not always intuitive. Based on the previous findings, we consider the application of customized optimization methods to the primal and dual training problems. A particular realization of Newton's method is derived which could be used to solve the primal training problem accurately. Moreover, we introduce a general convergence framework covering different types of decomposition methods for the solution of the dual training problem. In doing so, we are able to generalize well-known convergence results for the SMO method. Additionally, a discussion of the complexity of the SMO method and a motivation for a shrinking strategy reducing the computational effort is provided. In a last theoretical part, we consider the problem of hyperparameter optimization. We argue that this problem can be handled efficiently by means of gradient-based methods if the training problems are formulated appropriately. Finally, we evaluate the theoretical results concerning the training and hyperparameter optimization approaches practically by means of several example training problems.
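As one small illustration of solving the primal training problem accurately, the sketch below runs a generalized Newton iteration on the squared-hinge (L2-loss) primal objective; this is an assumed textbook-style formulation, not the particular realization of Newton's method derived in the thesis.

```python
# A minimal numpy sketch of a (generalized) Newton method on the primal SVM
# training problem with squared hinge loss:
#   f(w) = 0.5 * ||w||^2 + C * sum_i max(0, 1 - y_i * x_i.w)^2
# This only illustrates solving the primal directly; it is not the specific
# method derived in the thesis.
import numpy as np

def primal_newton_l2svm(X, y, C=1.0, tol=1e-8, max_iter=50):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_iter):
        margins = 1.0 - y * (X @ w)
        sv = margins > 0                                  # active (violating) points
        grad = w - 2.0 * C * X[sv].T @ (y[sv] * margins[sv])
        if np.linalg.norm(grad) < tol:
            break
        H = np.eye(d) + 2.0 * C * X[sv].T @ X[sv]         # generalized Hessian
        w = w - np.linalg.solve(H, grad)                  # full Newton step
    return w

# Tiny synthetic check (the last feature is a constant column acting as the bias).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] = 1.0
y = np.where(X[:, 0] + 0.5 * X[:, 1] - 0.2 > 0, 1.0, -1.0)
w = primal_newton_l2svm(X, y, C=10.0)
print("weights:", np.round(w, 3))
print("training accuracy:", float(np.mean(np.sign(X @ w) == y)))
```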
5. Distributed Support Vector Machine With Graphics Processing Units. Zhang, Hang, 06 August 2009.
Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. Sequential Minimal Optimization (SMO) is a decomposition-based algorithm that breaks this large QP problem into a series of smallest-possible QP subproblems, but it still costs O(n²) computation time. In our SVM implementation, we can train on huge data sets in a distributed manner: the dataset is broken into chunks, and the Message Passing Interface (MPI) is used to distribute each chunk to a different machine, with SVM training performed within each chunk. In addition, we moved the kernel computation in SVM classification to a graphics processing unit (GPU), which can create concurrent threads with essentially zero scheduling overhead. In this thesis, we take advantage of this GPU architecture to improve the classification performance of the SVM.
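A toy sketch of the chunked, MPI-distributed training idea follows. Pooling the per-chunk support vectors into a final model is a simplification assumed here for illustration; it is not the thesis implementation, and the GPU kernel-evaluation stage is omitted.

```python
# A toy sketch of the distributed-chunk idea: the root splits the dataset into
# chunks, MPI scatters one chunk per process, each process trains a local SVM,
# and the root gathers the resulting support vectors. Run with e.g. `mpiexec -n 4`.
# This only illustrates the scheme described above, not the thesis implementation.
import numpy as np
from mpi4py import MPI
from sklearn.svm import SVC
from sklearn.datasets import make_classification

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
    chunks = list(zip(np.array_split(X, size), np.array_split(y, size)))
else:
    chunks = None

X_local, y_local = comm.scatter(chunks, root=0)          # one chunk per process

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_local, y_local)
local_svs = (X_local[clf.support_], y_local[clf.support_])

gathered = comm.gather(local_svs, root=0)                # pool support vectors
if rank == 0:
    X_sv = np.vstack([g[0] for g in gathered])
    y_sv = np.concatenate([g[1] for g in gathered])
    final = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_sv, y_sv)
    print("pooled support vectors:", len(X_sv),
          "| final model support vectors:", len(final.support_))
```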