41 |
Practical aspects of kernel smoothing for binary regression and density estimation. Signorini, David F. January 1998
Thesis (PhD)--Open University. BLDSC no. DX205389.
|
42 |
Nearest hypersphere classification : a comparison with other classification techniques. Van der Westhuizen, Cornelius Stephanus. 12 1900
Thesis (MCom)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: Classification is a widely used statistical procedure to classify objects into two or more
classes according to some rule which is based on the input variables. Examples of such
techniques are Linear and Quadratic Discriminant Analysis (LDA and QDA). However,
classification of objects with these methods can get complicated when the number of input variables in the data becomes too large relative to the number of observations (n ≪ p), when the assumption of normality is no
longer met or when classes are not linearly separable. Vapnik et al. (1995) introduced the
Support Vector Machine (SVM), a kernel-based technique, which can perform classification
in cases where LDA and QDA are not valid. SVM makes use of an optimal separating
hyperplane and a kernel function to derive a rule which can be used for classifying objects.
Another kernel-based technique was proposed by Tax and Duin (1999) where a hypersphere
is used for domain description of a single class. The idea of a hypersphere for a single class
can be easily extended to classification when dealing with multiple classes by just classifying
objects to the nearest hypersphere.
Although the theory of hyperspheres is well developed, not much research has gone into
using hyperspheres for classification and the performance thereof compared to other
classification techniques. In this thesis we will give an overview of Nearest Hypersphere
Classification (NHC) as well as provide further insight regarding the performance of NHC
compared to other classification techniques (LDA, QDA and SVM) under different
simulation configurations.
We begin with a literature study, where the theory of the classification techniques LDA,
QDA, SVM and NHC will be dealt with. In the discussion of each technique, applications in
the statistical software R will also be provided. An extensive simulation study is carried out
to compare the performance of LDA, QDA, SVM and NHC for the two-class case. Various
data scenarios will be considered in the simulation study, giving further insight into which classification technique performs better under the different data scenarios.
Finally, the thesis ends with a comparison of these techniques on real-world data. / AFRIKAANSE OPSOMMING: Classification is a statistical method used to classify objects into two or more classes based on a rule built on the independent variables. Examples of these methods include Linear and Quadratic Discriminant Analysis (LDA and QDA). However, when the number of independent variables in a data set becomes too large, when the assumption of normality no longer holds, or when the classes are no longer linearly separable, the application of methods such as LDA and QDA becomes too difficult. Vapnik et al. (1995) introduced a kernel-based method, the Support Vector Machine (SVM), which can be used for classification in situations where methods such as LDA and QDA fail. The SVM makes use of an optimal separating hyperplane and a kernel function to derive a rule that can be used to classify objects. Another kernel-based technique was proposed by Tax and Duin (1999), where a hypersphere is used to construct a domain description for a data set with only one class. This idea of a single class described by a hypersphere can easily be extended to a multi-class classification problem by simply classifying objects to the nearest hypersphere. Although the theory of hyperspheres is well developed, not much research has been done on the use of hyperspheres for classification, and little attention has been given to their performance compared with other classification techniques. In this thesis we give an overview of Nearest Hypersphere Classification (NHC) as well as further insight into the performance of NHC compared with other classification techniques (LDA, QDA and SVM) under certain simulation configurations. We begin with a literature study in which the theory of the classification techniques LDA, QDA, SVM and NHC is treated. For each technique, applications in the statistical software R are also shown. A comprehensive simulation study is carried out to compare the performance of LDA, QDA, SVM and NHC in the two-class case. A variety of data scenarios is also investigated to give further insight into when which technique performs best. The thesis concludes by applying these techniques to real-world data sets.
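As a rough illustration of the nearest-hypersphere idea described in this record, the sketch below fits one hypersphere per class and assigns a test point to the class whose sphere boundary is nearest. It is a simplified stand-in, not the thesis's method: each sphere is centred at the class mean in an RBF feature space (via the kernel trick) rather than obtained by solving the SVDD dual of Tax and Duin, and the class name, gamma and radius quantile q are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of A and the rows of B.
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

class NearestHypersphere:
    """Toy nearest-hypersphere classifier: one sphere per class in RBF feature space.

    The sphere centre is the class mean in feature space (not the SVDD solution);
    the radius is the q-quantile of the training distances to that centre.
    """
    def __init__(self, gamma=1.0, q=0.95):
        self.gamma, self.q = gamma, q

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.X_ = {c: X[y == c] for c in self.classes_}
        # Radius per class: covers roughly a fraction q of that class's training points.
        self.r_ = {c: np.quantile(self._dist_to_centre(Xc, Xc), self.q)
                   for c, Xc in self.X_.items()}
        return self

    def _dist_to_centre(self, Z, Xc):
        # ||phi(z) - mean_i phi(x_i)|| computed with the kernel trick.
        Kzz = np.ones(len(Z))                      # k(z, z) = 1 for the RBF kernel
        Kzx = rbf_kernel(Z, Xc, self.gamma)
        Kxx = rbf_kernel(Xc, Xc, self.gamma)
        d2 = Kzz - 2 * Kzx.mean(1) + Kxx.mean()
        return np.sqrt(np.maximum(d2, 0))

    def predict(self, Z):
        # Assign each point to the class whose sphere boundary is nearest (negative = inside).
        scores = np.column_stack(
            [self._dist_to_centre(Z, self.X_[c]) - self.r_[c] for c in self.classes_])
        return self.classes_[scores.argmin(1)]
```

A call such as NearestHypersphere(gamma=0.5).fit(X_train, y_train).predict(X_test) mimics the two-class comparisons described above, although the thesis's own R implementations will differ.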
|
43 |
Nonparametric density estimation for univariate and bivariate distributions with applications in discriminant analysis for the bivariate case. Haug, Mark. January 2010
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Statistics.
|
44 |
Sparse learning under regularization framework. / 正則化框架下的稀疏學習 / CUHK electronic theses & dissertations collection / Zheng ze hua kuang jia xia de xi shu xue xi. January 2011
Regularization is a dominant theme in machine learning and statistics due to its prominent ability in providing an intuitive and principled tool for learning from high-dimensional data. As large-scale learning applications become popular, developing efficient algorithms and parsimonious models becomes promising and necessary for these applications. Aiming at solving large-scale learning problems, this thesis tackles key research problems ranging from feature selection to learning with unlabeled data and learning data similarity representation. More specifically, we focus on problems in three areas: online learning, semi-supervised learning, and multiple kernel learning. / The first part of this thesis develops a novel online learning framework to solve group lasso and multi-task feature selection. To the best of our knowledge, the proposed online learning framework is the first framework for the corresponding models. The main advantages of the online learning algorithms are that (1) they can work on applications where training data appear sequentially, so the training procedure can be started at any time; (2) they can handle data of any size with any number of features. The efficiency of the algorithms is attained because we derive closed-form solutions to update the weights of the corresponding models. At each iteration, the online learning algorithms need only O(d) time complexity and memory cost for group lasso, and O(d × Q) for multi-task feature selection, where d is the number of dimensions and Q is the number of tasks. Moreover, we provide theoretical analysis of the average regret of the online learning algorithms, which also guarantees their convergence rate. In addition, we extend the online learning framework to solve several related models which yield sparser solutions. / The second part of this thesis addresses a general scenario of semi-supervised learning for the binary classification problem, where the unlabeled data may be a mixture of data relevant and irrelevant to the target binary classification task. Without specifying the relatedness in the unlabeled data, we develop a novel maximum margin classifier, named the tri-class support vector machine (3C-SVM), to seek an inductive rule that can separate these data into three categories: −1, +1, or 0. This is achieved by adopting a novel min loss function and following the maximum entropy principle. For the implementation, we approximate the problem and solve it by a standard concave-convex procedure (CCCP). The approach is very efficient and makes it possible to handle large-scale datasets. / The third part of this thesis focuses on multiple kernel learning (MKL) to address the insufficiency of the L1-MKL and Lp-MKL models. We propose a generalized MKL (GMKL) model by introducing an elastic-net-type constraint on the kernel weights. More specifically, it is an MKL model with a constraint on a linear combination of the L1-norm and the square of the L2-norm of the kernel weights, used to seek the optimal kernel combination weights. Previous MKL problems based on the L1-norm or the L2-norm constraint can therefore be regarded as special cases. Moreover, our GMKL enjoys a favorable sparsity property in its solution and also facilitates the grouping effect. In addition, the optimization of our GMKL is a convex optimization problem, where a local solution is the globally optimal solution.
We further derive the level method to efficiently solve the optimization problem. / Yang, Haiqin. / Advisers: Kuo Chin Irwin King; Michael Rung Tsong Lyu. / Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 152-173). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
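The closed-form O(d) per-iteration updates for online group lasso mentioned in this record can be given a concrete flavour with a generic online proximal (FOBOS-style) sketch: a gradient step on one example's loss followed by the closed-form group soft-thresholding prox. This is an illustrative sketch, not the thesis's algorithm; the function name, the squared loss and the fixed step size are assumptions made here.

```python
import numpy as np

def fobos_group_lasso(stream, groups, dim, lam=0.1, eta=0.01):
    """Online learning with a group-lasso penalty (FOBOS-style sketch).

    Each step takes a gradient step on one example's squared loss and then applies
    the closed-form group soft-thresholding prox, so the per-step cost is O(d).
    `groups` is a list of index arrays partitioning range(dim).
    """
    w = np.zeros(dim)
    for x, y in stream:                      # examples arrive one at a time
        w -= eta * (w @ x - y) * x           # gradient step on 0.5 * (w.x - y)^2
        for g in groups:                     # prox of eta * lam * sum_g ||w_g||_2
            norm = np.linalg.norm(w[g])
            w[g] = 0.0 if norm == 0 else max(0.0, 1 - eta * lam / norm) * w[g]
    return w

# Toy usage: two groups of five features, only the first group is truly active.
rng = np.random.default_rng(0)
groups = [np.arange(0, 5), np.arange(5, 10)]
w_true = np.concatenate([rng.normal(size=5), np.zeros(5)])
data = [(x, x @ w_true + 0.01 * rng.normal()) for x in rng.normal(size=(500, 10))]
w_hat = fobos_group_lasso(iter(data), groups, dim=10, lam=0.05, eta=0.05)
print(np.round(w_hat, 2))                    # the second block should be shrunk toward zero
```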
|
45 |
Road Sign Recognition based on Invariant Features using Support Vector Machine. Gilani, Syed Hassan. January 2007
Over the last two decades researchers have been working on developing systems that can assist drivers in the best way possible and make driving safe. Computer vision has played a crucial part in the design of these systems. With the introduction of vision techniques, various autonomous and robust real-time traffic automation systems have been designed, such as traffic monitoring, traffic-related parameter estimation and intelligent vehicles. Among these, automatic detection and recognition of road signs has become an interesting research topic: such a system can alert drivers to signs they do not recognize before passing them. The aim of this research project is to present an intelligent road sign recognition system based on a state-of-the-art technique, the Support Vector Machine. The project is an extension of the work done at the ITS research platform at Dalarna University [25]. The focus of this research work is on the recognition of the road signs under analysis. When classifying an image, its location, size and orientation in the image plane are irrelevant features, and one way to remove this ambiguity is to extract features that are invariant under these transformations. These invariant features are then used in a Support Vector Machine for classification. The Support Vector Machine is a supervised learning machine that solves problems in higher dimensions with the help of kernel functions and is best known for classification problems.
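The pipeline sketched in this abstract — extract features that are invariant to location, size and orientation, then classify them with a Support Vector Machine — can be illustrated with a minimal example built on Hu moments, which are invariant under exactly those transformations. The tooling (OpenCV for the moments, scikit-learn for the SVM), the function names and the SVM parameters are assumptions for the sketch; the project's actual features and classifier configuration may differ.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def invariant_features(gray):
    """Hu-moment features: invariant to translation, scale and rotation of the sign."""
    m = cv2.moments(gray.astype(np.float32))        # raw, central and normalised moments
    hu = cv2.HuMoments(m).ravel()                   # the seven Hu invariants
    # Log-scale them, since the raw invariants span many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def train_sign_classifier(images, labels):
    """images: single-channel arrays of already detected/cropped sign regions."""
    X = np.array([invariant_features(img) for img in images])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # kernel SVM on the invariant features
    clf.fit(X, labels)
    return clf
```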
|
46 |
Variance analysis for kernel smoothing of a varying-coefficient model with longitudinal data / Chen, Jinsong. January 2003 (PDF)
Thesis (M.S.)--University of North Carolina at Wilmington, 2003. / Includes bibliographical references (leaf : [101]).
|
47 |
Acoustic impulse detection algorithms for application in gunshot localization. Van der Merwe, J. F. January 2012
M. Tech. Electrical Engineering. / Attempts to find computationally efficient ways to identify and extract gunshot impulses from signals. Areas of study include Generalised Cross Correlation (GCC), sidelobe minimisation utilising Least Squares (LS) techniques, as well as training algorithms using a Reproducing Kernel Hilbert Space (RKHS) approach. It also incorporates Support Vector Machines (SVM) to train a network to recognise gunshot impulses. By combining these individual research areas, better solutions are obtainable.
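Of the techniques listed in this record, Generalised Cross Correlation is the most self-contained to illustrate. The sketch below is a textbook GCC with PHAT weighting for estimating the arrival-time difference of an impulse between two microphones, a basic building block for localisation; it is not the dissertation's algorithm, and the toy signal and parameter values are assumptions.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Generalised Cross Correlation with PHAT weighting: delay of `sig` relative to `ref`."""
    n = len(sig) + len(ref)                      # zero-pad to avoid circular wrap-around
    G = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    G /= np.abs(G) + 1e-15                       # PHAT: keep the phase, discard the magnitude
    cc = np.fft.irfft(G, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs, cc                        # delay in seconds, correlation curve

# Toy usage (hypothetical): an impulse arriving 100 samples (~2.08 ms) later at the second mic.
fs = 48_000
t = np.arange(fs) / fs
impulse = np.exp(-5_000 * t) * (t < 0.01)
mic2 = np.roll(impulse, 100)
tau, _ = gcc_phat(mic2, impulse, fs)
print(f"estimated delay: {tau * 1e3:.2f} ms")    # about 2.08 ms
```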
|
48 |
On modelling using radial basis function networks with structure determined by support vector regression. Choy, Kin-yee, 蔡建怡. January 2004
Mechanical Engineering / Master of Philosophy
|
49 |
Kernel methods in steganalysis. Pevný, Tomáš. January 2008
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Computer Science, 2008. / Includes bibliographical references.
|
50 |
Investigation of kernels for the reproducing kernel particle method. Shanmugam, Bala Priyadarshini. January 2009 (PDF)
Thesis (M.S.)--University of Alabama at Birmingham, 2009. / Description based on contents viewed June 2, 2009; title from PDF t.p. Includes bibliographical references (p. 71-76).
|