1.
Abstract kernel operators. January 1987
by Zhang Xiao-dong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1987. / Bibliography: leaves 88-92.
2.
Kernel based methods for sequence comparison. January 2011
Yeung, Hau Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (p. 59-63). / Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.7
Chapter 2 --- Work Flows and Kernel Methods --- p.9
Chapter 2.1 --- Work Flows --- p.9
Chapter 2.2 --- Frequency Vector --- p.11
Chapter 2.3 --- Motivation for Kernel Based Distance --- p.12
Chapter 2.3.1 --- Similarity between sequences --- p.13
Chapter 2.3.2 --- Distance between sequences --- p.14
Chapter 2.4 --- Kernels for DNA Sequence --- p.15
Chapter 2.4.1 --- Kernels based on evolution model --- p.15
Chapter 2.4.2 --- Kernels based on empirical data --- p.17
Chapter 2.5 --- Kernels for Peptide Sequence --- p.18
Chapter 3 --- Dataset for DNA Sequence and Results --- p.25
Chapter 3.1 --- Dataset and Goal --- p.25
Chapter 3.1.1 --- Mitochondrial DNA dataset --- p.26
Chapter 3.1.2 --- 18S ribosomal RNA --- p.28
Chapter 3.2 --- Results --- p.28
Chapter 4 --- Dataset for Peptide Sequence and Results --- p.35
Chapter 4.1 --- Dataset and Goal --- p.36
Chapter 4.2 --- Classification and Evaluation Methods --- p.39
Chapter 4.2.1 --- Partition of training and testing datasets --- p.39
Chapter 4.2.2 --- Classification methods --- p.40
Chapter 4.3 --- Results --- p.45
Chapter 4.3.1 --- KNN performs better than the FDSM --- p.45
Chapter 4.3.2 --- BLOSUM62 performs best and window length not important --- p.46
Chapter 4.3.3 --- Distance formula (2.4) performs better --- p.49
Chapter 5 --- Discussion --- p.51
Chapter 5.1 --- Sequence Length and Window Length --- p.51
Chapter 5.2 --- Possible Kernels --- p.52
Chapter 5.3 --- Distance Formulae --- p.53
Chapter 5.4 --- Protein Structural Problem --- p.54
Chapter 6 --- Appendix --- p.55
Chapter 6.1 --- Kernel for Peptide Sequences --- p.55
Bibliography --- p.59
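The contents above point to k-mer frequency vectors (Chapter 2.2) and kernel-induced distances between sequences (Chapter 2.3.2). A minimal sketch of that pipeline, assuming plain overlapping k-mer counts and the generic kernel-induced distance sqrt(K(x,x) - 2K(x,y) + K(y,y)); the helper names are hypothetical and the formula is not necessarily the thesis's distance formula (2.4):

```python
import numpy as np
from itertools import product

def kmer_freq(seq, k=3, alphabet="ACGT"):
    """Frequency vector of overlapping k-mers in a DNA sequence."""
    index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        m = seq[i:i + k]
        if m in index:  # skip k-mers containing ambiguous bases
            v[index[m]] += 1.0
    total = v.sum()
    return v / total if total > 0 else v

def kernel_distance(x, y, kernel=np.dot):
    """Distance induced by a kernel: sqrt(K(x,x) - 2 K(x,y) + K(y,y))."""
    return np.sqrt(max(kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y), 0.0))

d = kernel_distance(kmer_freq("ACGTACGTAC"), kmer_freq("ACGTTTGCAA"))
```

Swapping `np.dot` for a kernel built from an evolution model or empirical substitution data (Chapters 2.4.1-2.4.2) changes the induced distance without changing the work flow.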
3.
Feature extraction via kernel weighted discriminant analysis methods / Dai, Guang. January 2007
Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2007. / Includes bibliographical references (leaves 83-90). Also available in electronic version.
4.
Kernel correlation as an affinity measure in point-sampled vision problems / Tsin, Yanghai. September 2003
Thesis (Ph. D.)--Carnegie Mellon University, 2003. / "September 2003." Includes bibliographical references.
5.
Scalable kernel methods for machine learning / Kulis, Brian Joseph. 09 October 2012
Machine learning techniques are now essential for a diverse set of applications in computer vision, natural language processing, software analysis, and many other domains. As more applications emerge and the amount of data continues to grow, there is a need for increasingly powerful and scalable techniques. Kernel methods, which generalize linear learning methods to non-linear ones, have become a cornerstone for much of the recent work in machine learning and have been used successfully for many core machine learning tasks such as clustering, classification, and regression. Despite the recent popularity of kernel methods, a number of issues must be tackled in order for them to succeed on large-scale data. First, kernel methods typically require memory that grows quadratically in the number of data objects, making it difficult to scale to large data sets. Second, kernel methods depend on an appropriate kernel function--an implicit mapping to a high-dimensional space--which is difficult to choose because it depends on the data. Third, in the context of data clustering, kernel methods have not been demonstrated to be practical for real-world clustering problems. This thesis explores these questions, offers some novel solutions to them, and applies the results to a number of challenging applications in computer vision and other domains.
We explore two broad fundamental problems in kernel methods. First, we introduce a scalable framework for learning kernel functions based on incorporating prior knowledge from the data. This framework scales to very large data sets of millions of objects, can be used for a variety of complex data, and outperforms several existing techniques. In the transductive setting, the method can be used to learn low-rank kernels, whose memory requirements are linear in the number of data points. We also explore extensions of this framework and applications to image search problems, such as object recognition, human body pose estimation, and 3-D reconstruction. As a second problem, we explore the use of kernel methods for clustering. We show a mathematical equivalence between several graph cut objective functions and the weighted kernel k-means objective. This equivalence leads to the first eigenvector-free algorithm for weighted graph cuts, which is thousands of times faster than existing state-of-the-art techniques while using significantly less memory. We benchmark this algorithm against existing methods, apply it to image segmentation, and explore extensions to semi-supervised clustering.
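The weighted kernel k-means objective that the abstract ties to graph cuts can be sketched as follows. This is a minimal Lloyd-style illustration on a precomputed kernel matrix, with hypothetical names and a linear kernel in the usage lines; it is not the thesis's eigenvector-free multilevel algorithm:

```python
import numpy as np

def weighted_kernel_kmeans(K, w, k, n_iter=100, seed=0):
    """Weighted kernel k-means on a precomputed n x n kernel matrix K
    with per-point weights w. Illustrative sketch only."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)  # empty clusters stay at +inf
        for c in range(k):
            mask = labels == c
            if not mask.any():
                continue
            wc = w[mask]
            sc = wc.sum()
            # ||phi(x_i) - m_c||^2 = K_ii - 2 sum_j w_j K_ij / s_c
            #                        + sum_{j,l} w_j w_l K_jl / s_c^2
            cross = K[:, mask] @ wc / sc
            within = wc @ K[np.ix_(mask, mask)] @ wc / sc ** 2
            dist[:, c] = diag - 2.0 * cross + within
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

X = np.random.default_rng(1).normal(size=(200, 2))
K = X @ X.T  # linear kernel, purely for demonstration
labels = weighted_kernel_kmeans(K, np.ones(200), k=3)
```

Note that the dense kernel matrix here has exactly the quadratic memory footprint the abstract identifies as the first obstacle; the low-rank kernels it describes are one way around it.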
6.
Applications of reproducing kernels in Hilbert spaces / Mumford, Michael Leslie. 05 1900
No description available.
7.
Kernel estimators : testing and bandwidth selection in models of unknown smoothness / Kotlyarova, Yulia. January 2005
Semiparametric and nonparametric estimators are becoming indispensable tools in applied econometrics. Many of these estimators depend on the choice of a smoothing bandwidth and kernel function. The optimality of these parameters is determined by the unobservable smoothness of the model, that is, by the differentiability of the distribution functions of the random variables in the model. In this thesis we consider two estimators of this class: the smoothed maximum score estimator for binary choice models and the kernel density estimator.
We present theoretical results on the asymptotic distribution of the estimators under various smoothness assumptions and derive the limiting joint distributions for estimators with different combinations of bandwidths and kernel functions. Using these nontrivial joint distributions, we suggest a new way of improving the accuracy and robustness of the estimators by considering a linear combination of estimators with different smoothing parameters. The weights in the combination minimize an estimate of the mean squared error. Monte Carlo simulations confirm the suitability of this method for both smooth and non-smooth models.
For the original and smoothed maximum score estimators, a formal procedure is introduced to test for the equivalence of the maximum likelihood estimators and these semiparametric estimators, which converge to the true value at slower rates. The test allows one to identify heteroskedastic misspecifications in logit/probit models. The method has been applied to analyze the decision of married women to join the labour force.
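As a rough illustration of the combination idea: several Gaussian kernel density estimates with deliberately different bandwidths are formed and then combined linearly. The helper name and the equal weights below are placeholders; in the thesis the weights are chosen to minimize an estimate of the mean squared error:

```python
import numpy as np

def gaussian_kde(grid, data, h):
    """Gaussian kernel density estimate of `data`, evaluated on `grid`."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
data = rng.normal(size=500)
grid = np.linspace(-4.0, 4.0, 201)

# Deliberately different smoothing levels, undersmoothed to oversmoothed.
bandwidths = [0.1, 0.3, 0.9]
estimates = np.stack([gaussian_kde(grid, data, h) for h in bandwidths])

# Placeholder weights: a plain average, not the estimated-MSE-minimizing
# weights the thesis derives.
weights = np.full(len(bandwidths), 1.0 / len(bandwidths))
combined = weights @ estimates
```

The point of the construction is that no single bandwidth need be right for a model of unknown smoothness; the combination hedges across them.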
8.
Scalable kernel methods for machine learning / Kulis, Brian Joseph. 2008
Thesis (Ph. D.)--University of Texas at Austin, 2008. / Vita. Includes bibliographical references.
9.
Assessing the influence of observations on the generalization performance of the Kernel Fisher Discriminant Classifier / Lamont, Morné Michael Connell. January 2008
Dissertation (PhD)--University of Stellenbosch, 2008. / Bibliography. Also available via the Internet.
10.
Learning with kernel based regularization schemes / Xiao, Quanwu. January 2009
Thesis (Ph.D.)--City University of Hong Kong, 2009. / "Submitted to Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves [73]-81).