A distributed kernel summation framework for machine learning and scientific applications
Lee, Dong Ryeol
11 May 2012
The class of computational problems I consider in
this thesis shares the common trait of requiring
consideration of pairs (or higher-order tuples)
of data points. I focus on kernel summation
operations, which are ubiquitous in many data
mining and scientific algorithms.
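Concretely, a kernel summation computes, for every query point x_q, a weighted sum f(x_q) = sum_r w_r K(x_q, x_r) over all reference points. The following minimal Python sketch (my illustration, not code from the thesis; the Gaussian kernel and all names are chosen here for exposition) shows the naive O(|Q| * |R|) computation that fast algorithms aim to beat:

    import numpy as np

    def gaussian_kernel(sq_dists, h):
        # K(x, y) = exp(-||x - y||^2 / (2 h^2)), evaluated from squared distances.
        return np.exp(-sq_dists / (2.0 * h * h))

    def naive_kernel_sum(queries, references, weights, h):
        # Exact O(|Q| * |R|) kernel summation: f(q) = sum_r w_r K(q, r).
        out = np.zeros(len(queries))
        for i, q in enumerate(queries):
            sq_dists = np.sum((references - q) ** 2, axis=1)
            out[i] = weights @ gaussian_kernel(sq_dists, h)
        return out

    # With uniform weights 1/N this is, up to normalization, kernel density estimation.
    rng = np.random.default_rng(0)
    refs = rng.normal(size=(1000, 3))
    densities = naive_kernel_sum(refs, refs, np.full(1000, 1.0 / 1000), h=0.5)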
In machine learning, kernel summations appear in
popular kernel methods, which can model nonlinear
structure in data. Kernel methods include many
non-parametric methods such as kernel density
estimation, kernel regression, Gaussian process
regression, kernel PCA, and kernel support vector
machines (SVMs). In computational physics,
kernel summations occur inside the classical
N-body problem, which simulates the positions of
a set of celestial bodies or atoms.
This thesis attempts to marry, for the first
time, the best relevant techniques from parallel
computing, where kernel summations arise in low
dimensions, with the best general-dimension
algorithms from the machine learning literature.
I provide a unified, efficient parallel
kernel summation framework that can utilize:
(1) various types of deterministic and
probabilistic approximations suitable for both
low- and high-dimensional problems with large
numbers of data points; (2) any multi-dimensional
binary tree for indexing the data (see the sketch
after this list), with both distributed-memory (MPI)
and shared-memory (OpenMP/Intel TBB) parallelism;
and (3) a dynamic load-balancing scheme that
adjusts for work imbalances during the computation.
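To make the tree-based pruning idea concrete, here is a minimal serial, single-tree sketch, assuming a Gaussian kernel and nonnegative weights (my illustration; the framework itself uses more general dual-tree and distributed algorithms, and all names here are hypothetical). A kd-tree over the reference points lets the algorithm replace all kernel evaluations against a node by one bound-based approximation whenever the kernel is nearly constant over that node's bounding box:

    import numpy as np

    class KdNode:
        # A node of a simple kd-tree over the reference points.
        def __init__(self, points, weights, depth=0, leaf_size=32):
            self.lo, self.hi = points.min(axis=0), points.max(axis=0)  # bounding box
            self.weight_sum = weights.sum()
            self.left = self.right = None
            if len(points) <= leaf_size:                 # leaf: keep the points
                self.points, self.weights = points, weights
            else:                                        # split along a cycling dimension
                d = depth % points.shape[1]
                order = np.argsort(points[:, d])
                points, weights = points[order], weights[order]
                mid = len(points) // 2
                self.left = KdNode(points[:mid], weights[:mid], depth + 1, leaf_size)
                self.right = KdNode(points[mid:], weights[mid:], depth + 1, leaf_size)

    def tree_kernel_sum(q, node, h, eps):
        # Approximates sum_r w_r exp(-||q - r||^2 / (2 h^2)) with absolute error
        # at most eps times the total weight (weights assumed nonnegative).
        d_min = np.linalg.norm(np.maximum(node.lo - q, 0) + np.maximum(q - node.hi, 0))
        d_max = np.linalg.norm(np.maximum(np.abs(q - node.lo), np.abs(q - node.hi)))
        k_max = np.exp(-d_min ** 2 / (2 * h * h))
        k_min = np.exp(-d_max ** 2 / (2 * h * h))
        if k_max - k_min <= 2 * eps:                     # prune: kernel ~ constant on node
            return node.weight_sum * 0.5 * (k_max + k_min)
        if node.left is None:                            # leaf: compute exactly
            sq = np.sum((node.points - q) ** 2, axis=1)
            return float(node.weights @ np.exp(-sq / (2 * h * h)))
        return tree_kernel_sum(q, node.left, h, eps) + tree_kernel_sum(q, node.right, h, eps)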
I first summarize my previous research on
serial kernel summation algorithms. This work
builds on Greengard and Rokhlin's earlier work on
fast multipole methods for approximating the
potential sums of many particles.
The contributions of this part of the thesis
include: (1) a reinterpretation of Greengard and
Rokhlin's work for the computer science
community; (2) an extension of the algorithms to
a larger class of approximation strategies,
namely probabilistic error bounds via Monte Carlo
techniques (sketched below); (3) the multibody
series expansion, a generalization of the theory
of fast multipole methods to handle interactions
of more than two entities; and (4) the first O(N)
proof for batch approximate kernel summation,
using a notion of intrinsic dimensionality.
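A hedged sketch of the Monte Carlo idea in contribution (2) (my simplification, assuming a CLT-style normal approximation; the thesis's actual estimator and bounds may differ): sample a subset of the reference points, and accept the scaled sample mean only when a confidence interval for the per-point mean kernel value is tight enough.

    import numpy as np

    def mc_kernel_sum(q, references, h, eps=0.01, m=100, seed=0):
        # Estimate sum_r exp(-||q - r||^2 / (2 h^2)) from m sampled reference
        # points. Accept the scaled sample mean only if a CLT-based 95%
        # confidence interval for the per-point mean is within +/- eps;
        # otherwise return None so the caller can fall back to exact or
        # tree-based evaluation.
        rng = np.random.default_rng(seed)
        n = len(references)
        idx = rng.choice(n, size=m, replace=True)
        sq = np.sum((references[idx] - q) ** 2, axis=1)
        vals = np.exp(-sq / (2.0 * h * h))
        half_width = 1.96 * vals.std(ddof=1) / np.sqrt(m)  # 1.96: 95% normal quantile
        if half_width <= eps:
            return n * vals.mean()    # error <= eps * n with probability ~0.95
        return None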
I then turn to parallelizing the kernel
summations and to scaling two other
kernel methods: Gaussian process regression
(kernel matrix inversion) and kernel PCA (kernel
matrix eigendecomposition).
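One standard way to see why fast kernel summation helps here (my framing, not necessarily the thesis's exact approach): iterative solvers and eigensolvers touch the kernel matrix K only through matrix-vector products, and each product K v is a batch of kernel summations with weights v. The sketch below solves the Gaussian process linear system (K + noise * I) alpha = y by conjugate gradient, with a naive dense matvec standing in where a tree-accelerated summation would be substituted; all names are hypothetical.

    import numpy as np

    def kernel_matvec(points, v, h):
        # One product K @ v is a batch of kernel summations with weights v;
        # this naive O(N^2) version is what a fast framework would replace.
        sq = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq / (2.0 * h * h)) @ v

    def gp_solve(points, y, h, noise=1e-2, iters=100, tol=1e-8):
        # Solve (K + noise * I) alpha = y by conjugate gradient, touching the
        # kernel matrix only through matrix-vector products.
        alpha = np.zeros_like(y)
        r = y - (kernel_matvec(points, alpha, h) + noise * alpha)
        p, rs = r.copy(), r @ r
        for _ in range(iters):
            Kp = kernel_matvec(points, p, h) + noise * p
            a = rs / (p @ Kp)
            alpha += a * p
            r -= a * Kp
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return alpha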
The software artifact of this thesis has
contributed to an open-source machine learning
package called MLPACK, first demonstrated at
NIPS 2008 and subsequently at the NIPS 2011 Big
Learning Workshop. Completing a portion of this
thesis involved high-performance computing
resources at XSEDE (eXtreme Science and
Engineering Discovery Environment) and NERSC
(National Energy Research Scientific Computing
Center).