1. Nonlinear Dimensionality Reduction by Manifold Unfolding
Khajehpour Tadavani, Pooyan (18 September 2013)
Every second, an enormous volume of data is gathered from various sources and stored in huge data banks. Most of the time, monitoring a data source requires several parallel measurements, which together form a high-dimensional sample vector. Due to the curse of dimensionality, applying machine learning methods to such high-dimensional data can be difficult. The essential task of dimensionality reduction is to faithfully represent a given set of high-dimensional data samples with a few variables. The goal of this thesis is to develop and propose new techniques for handling high-dimensional data, addressing contemporary demands in machine learning applications.
Most prominent nonlinear dimensionality reduction methods do not explicitly provide a way to handle out-of-sample points. The starting point of this thesis is a nonlinear technique, called Embedding by Affine Transformations (EAT), which also reduces the dimensionality of out-of-sample data. In this method, a convex optimization problem is solved to estimate a transformation between the high-dimensional input space and the low-dimensional embedding space. To the best of our knowledge, EAT is the only distance-preserving method for nonlinear dimensionality reduction capable of handling out-of-sample data.
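The abstract does not spell out the EAT formulation, but the general flavor of learning a distance-preserving map via convex optimization can be sketched: fit a positive semidefinite matrix M = WᵀW so that distances under M match target distances on selected pairs, then recover a linear map W that also embeds new points. The objective and all names below are illustrative assumptions, not the method from the thesis.

```python
# Hedged sketch: fit a PSD matrix M = W^T W so that distances under M match
# target distances on given pairs, then embed out-of-sample points with W.
# The least-squares objective is illustrative, not the EAT formulation.
import numpy as np
import cvxpy as cp

def fit_distance_preserving_map(X, pairs, targets, d):
    """X: (n, D) inputs; pairs: list of (i, j); targets: desired distances."""
    D = X.shape[1]
    M = cp.Variable((D, D), PSD=True)
    # quad_form is affine in M for a fixed vector, so the problem stays convex.
    residuals = [cp.quad_form(X[i] - X[j], M) - t**2
                 for (i, j), t in zip(pairs, targets)]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(cp.hstack(residuals))))
    prob.solve()
    # Factor M and keep the top-d directions: W maps R^D -> R^d.
    lam, V = np.linalg.eigh(M.value)
    idx = np.argsort(lam)[::-1][:d]
    W = (V[:, idx] * np.sqrt(np.clip(lam[idx], 0, None))).T
    return W

# Out-of-sample embedding is then simply: y_new = W @ x_new
```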
The second method we propose is TesseraMap, a scalable extension of EAT. Conceptually, TesseraMap partitions the underlying manifold of the data into a set of tesserae and then unfolds it by constructing a tessellation in a low-dimensional subspace of the embedding space. Crucially, the desired tessellation is obtained by solving a small semidefinite program; the method can therefore handle tens of thousands of data points in a short time.
The final outcome of this thesis is a novel dimensionality reduction method called Isometric Patch Alignment (IPA). Intuitively speaking, IPA first considers a number of overlapping flat patches that cover the underlying manifold of the high-dimensional input data. It then rearranges the patches and stitches neighboring patches together along their overlapping parts. We prove that stitching two neighboring patches aligns them; thereby, IPA unfolds the underlying manifold of the data. Although IPA and TesseraMap take similar approaches, IPA is more scalable: it embeds one million data points in only a few minutes. More importantly, unlike EAT and TesseraMap, which unfold the underlying manifold by stretching it, IPA constructs the unfolded manifold through patch alignment. We show that this approach is advantageous in many cases. In addition, compared to other well-known dimensionality reduction methods, IPA has several important characteristics: for example, it is noise tolerant, it handles non-uniform samples, and it can embed non-convex manifolds properly.
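The core stitching step can be illustrated with a standard tool: aligning two patch embeddings on their shared points via an orthogonal Procrustes fit. The sketch below is a generic illustration of patch stitching under that assumption, not the alignment procedure proved correct in the thesis.

```python
# Hedged sketch: stitch patch B onto patch A using their shared points.
# This is generic Procrustes stitching, not the IPA algorithm itself.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def stitch_patches(A, B, shared_a, shared_b):
    """A, B: (n_a, d) and (n_b, d) low-dimensional patch embeddings.
    shared_a, shared_b: index arrays of the overlapping points."""
    Sa, Sb = A[shared_a], B[shared_b]
    ca, cb = Sa.mean(axis=0), Sb.mean(axis=0)
    # Best rotation taking the centered overlap of B onto that of A.
    R, _ = orthogonal_procrustes(Sb - cb, Sa - ca)
    B_aligned = (B - cb) @ R + ca
    return B_aligned  # B now agrees with A on the overlap (up to noise)
```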
In addition to these three dimensionality reduction methods, we propose a method for subspace clustering called Low-dimensional Localized Clustering (LDLC). In subspace clustering, the data is partitioned into clusters such that the points of each cluster lie close to a low-dimensional subspace. The unique property of LDLC is that it produces localized clusters on the underlying manifold of the data. Through several experiments, we show that this property is an asset in many machine learning tasks. The method can also be used for local dimensionality reduction. Moreover, LDLC is a suitable tool for forming the tesserae in TesseraMap and for creating the patches in IPA.
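LDLC's defining locality property is not reproduced here, but the basic subspace-clustering loop it relates to can be sketched: alternately assign each point to the nearest low-dimensional flat and refit each flat by PCA. This is the classical k-subspaces scheme, shown only as background; LDLC itself additionally enforces locality on the manifold.

```python
# Hedged sketch: classical k-subspaces clustering (background for LDLC).
# Assign points to the nearest d-dimensional flat, refit flats by PCA, repeat.
import numpy as np

def k_subspaces(X, k, d, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(X))
    for _ in range(iters):
        flats = []
        for c in range(k):
            P = X[labels == c]
            if len(P) <= d:                 # degenerate cluster: reseed it
                P = X[rng.choice(len(X), d + 1, replace=False)]
            mu = P.mean(axis=0)
            # Top-d principal directions span the cluster's flat.
            _, _, Vt = np.linalg.svd(P - mu, full_matrices=False)
            flats.append((mu, Vt[:d]))
        # Distance of each point to each flat = norm of the residual
        # after projecting onto the flat's basis.
        dists = np.stack([
            np.linalg.norm((X - mu) - (X - mu) @ V.T @ V, axis=1)
            for mu, V in flats], axis=1)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```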
2. Subjective Mapping
Wilkinson, Dana (January 2007)
There are a variety of domains where it is desirable to learn a representation of an environment defined by a stream of sensori-motor experience. This dissertation introduces and formalizes subjective mapping, a novel approach to this problem. A learned representation is subjective if it is constructed almost entirely from the experience stream, minimizing the requirement of additional domain-specific information (which is often not readily obtainable). In many cases the observational data may be too plentiful to be feasibly stored. In these cases, a primary feature of a learned representation is that it be compact: summarizing information in a way that alleviates storage demands. Consequently, the first key insight of the subjective mapping approach is to phrase the problem as a variation of the well-studied problem of dimensionality reduction. The second insight is that knowing the effects of actions is critical to the usefulness of a representation. Therefore, enforcing that actions have a consistent and succinct form in the learned representation is also a key requirement.
This dissertation presents a new framework, action respecting embedding (ARE), which builds on a recent effective dimensionality reduction algorithm called maximum variance unfolding, in order to solve the newly introduced subjective mapping problem. The resulting learned representations are shown to be useful for reasoning, planning and localization tasks. At the heart of the new algorithm lies a semidefinite program, leading to questions about ARE's ability to handle sufficiently large input sizes. The final contribution of this dissertation is to provide a divide-and-conquer algorithm as a first step to addressing this issue.
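ARE's foundation, maximum variance unfolding, is well documented: maximize the total variance of an embedding while exactly preserving local distances, via a semidefinite program over the Gram matrix. The sketch below is a minimal MVU implementation under those standard assumptions; ARE adds action-consistency constraints on top of a program of this kind, which are not shown here.

```python
# Hedged sketch: plain maximum variance unfolding (MVU), the SDP that ARE
# builds on. ARE's additional action-consistency constraints are omitted.
import numpy as np
import cvxpy as cp
from sklearn.neighbors import kneighbors_graph

def mvu(X, n_neighbors=5, d=2):
    n = len(X)
    G = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    G = np.maximum(G, G.T)                      # symmetrize the kNN graph
    K = cp.Variable((n, n), PSD=True)           # Gram matrix of the embedding
    cons = [cp.sum(K) == 0]                     # center the embedding
    for i, j in zip(*np.nonzero(G)):
        if i < j:                               # preserve local distances
            dij2 = np.sum((X[i] - X[j]) ** 2)
            cons.append(K[i, i] - 2 * K[i, j] + K[j, j] == dij2)
    cp.Problem(cp.Maximize(cp.trace(K)), cons).solve()
    lam, V = np.linalg.eigh(K.value)
    idx = np.argsort(lam)[::-1][:d]             # top-d spectral coordinates
    return V[:, idx] * np.sqrt(np.clip(lam[idx], 0, None))
```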
3. Acoustic Feature Transformation Combining Average and Maximum Classification Error Minimization Criteria
Takeda, Kazuya; Kitaoka, Norihide; Sakai, Makoto (01 July 2010)
No description available.
4. Assessing and quantifying clusteredness: The OPTICS Cordillera
Rusch, Thomas; Hornik, Kurt; Mair, Patrick
Data representations in low dimensions, such as the results of unsupervised dimensionality reduction methods, are often interpreted visually to find clusters of observations. To identify clusters, the result must be appreciably clustered; this property of a result may be called "clusteredness". When judged visually, the appreciation of clusteredness is highly subjective. In this paper we suggest an objective way to assess clusteredness in data representations. We provide a definition of clusteredness that captures important aspects of a clustered appearance, characterize these aspects, and define the extremes rigorously. Building on this characterization, we suggest an index to assess the degree of clusteredness, coined the OPTICS Cordillera. It makes only weak assumptions and is a property of the result, invariant to different partitionings or cluster assignments. We provide bounds and a normalization for the index, and prove that it represents the aspects of clusteredness. Our index is parsimonious with respect to mandatory parameters but also flexible, allowing optional parameters to be tuned. The index can be used as a descriptive goodness-of-clusteredness statistic or to compare different results. For illustration we use a data set of handwritten digits that is represented very differently in two dimensions by various popular dimensionality reduction results. Empirically, observers had a hard time visually judging the clusteredness of these representations, but our index provides a clear and easy characterization of the clusteredness of each result.
5. Assessing and quantifying clusteredness: The OPTICS Cordillera
Rusch, Thomas; Hornik, Kurt; Mair, Patrick (22 June 2018)
This article provides a framework for assessing and quantifying "clusteredness" of a data representation. Clusteredness is a global univariate property defined as a layout diverging from equidistance of points to the closest neighboring point set. The OPTICS algorithm encodes the global clusteredness as a pair of clusteredness-representative distances and an algorithmic ordering. We use this to construct an index for quantification of clusteredness, coined the OPTICS Cordillera, as the norm of subsequent differences over the pair. We provide lower and upper bounds and a normalization for the index. We show the index captures important aspects of clusteredness such as cluster compactness, cluster separation, and number of clusters simultaneously. The index can be used as a goodness-of-clusteredness statistic, as a function over a grid, or to compare different representations. For illustration, we apply our suggestion to dimensionality-reduced 2D representations of Californian counties with respect to 48 climate change related variables. Online supplementary material is available (including an R package, the data, and additional mathematical details).
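The construction lends itself to a compact approximation: run OPTICS, take the reachability values in the algorithm's cluster ordering, and aggregate the successive differences with a norm. The sketch below does exactly that with scikit-learn's OPTICS; it is a rough stand-in for the published index (available as an R package), omitting the bounds, normalization, and parameter handling described in the article.

```python
# Hedged sketch: a rough, unnormalized Cordillera-style index. The published
# index (see the authors' R package) adds bounds, normalization, and
# parameter handling that are omitted here.
import numpy as np
from sklearn.cluster import OPTICS

def rough_cordillera(X, min_samples=5, q=2):
    opt = OPTICS(min_samples=min_samples).fit(X)
    # Reachability distances, taken in the OPTICS cluster ordering.
    reach = opt.reachability_[opt.ordering_]
    reach = reach[np.isfinite(reach)]   # drop the leading infinite value
    # Aggregate the ups and downs of the reachability plot with a q-norm:
    # pronounced valleys and ridges (clear clusters) yield a larger value.
    return np.linalg.norm(np.diff(reach), ord=q)
```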
6. Dimensionality reduction for dynamical systems with parameters
Welshman, Christopher (January 2014)
Dimensionality reduction methods allow for the study of high-dimensional systems by producing low-dimensional descriptions that preserve the relevant structure and features of interest. For dynamical systems, attractors are particularly important examples of such features, as they govern the long-term dynamics of the system and are typically low-dimensional even if the state space is high- or infinite-dimensional. Methods for reduction need to determine a suitable reduced state space in which to describe the attractor, and to produce a reduced description of the corresponding dynamics. In the presence of a parameter space, a system can possess a family of attractors. Parameters are important quantities that represent aspects of the physical system not directly modelled in the dynamics, and may take different values in different instances of the system. Including the parameter dependence in the reduced system is therefore desirable, in order to capture the model's full range of behaviour.

Existing methods typically involve algebraically manipulating the original differential equation, either by applying a projection or by making local approximations around a fixed point. In this work, we take a more geometric approach, both for the reduction process and for determining the dynamics in the reduced space. For the reduction, we make use of an existing secant-based projection method, which has properties that make it well suited to the reduction of attractors. We also regard the system as a manifold and vector field, and consider the attractor's normal and tangent spaces, and the derivatives of the vector field, in order to determine the desired properties of the reduced system. We introduce a secant culling procedure that allows the number of secants to be greatly reduced when the generating set explores a low-dimensional space. This reduces the computational cost of the secant-based method without sacrificing the detail captured in the data set, making it feasible to use secant-based methods with larger examples.

We investigate a geometric formulation of the problem of dimensionality reduction of attractors, and identify and resolve the complications that arise. The benefit of this approach is that it is compatible with a wider range of examples than conventional approaches, particularly those with angular state variables. In turn, this allows for application to non-autonomous systems with periodic time-dependence. We also adapt secant-based projection for use in this more general setting, which provides a concrete method of reduction. We then extend the geometric approach to include a parameter space, resulting in a family of vector fields and a corresponding family of attractors. Both the secant-based projection and the reproduction of dynamics are extended to produce a reduced model that correctly responds to the parameter dependence. The method is compatible with multiple parameters within a given region of parameter space. This is illustrated by a variety of examples.
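The secant-based idea is straightforward to sketch: form the unit secants between sample points on the attractor, cull near-duplicate directions, and check how well a candidate projection preserves the surviving secants (a projection that collapses no secant is injective on the data). The following is a simplified illustration of that idea, not the specific projection method or culling criterion used in the thesis.

```python
# Hedged sketch: unit secants, a simple culling pass, and a diagnostic for
# how well a candidate projection preserves them. Illustrative only; the
# thesis uses a specific secant-based projection and culling criterion.
import numpy as np
from itertools import combinations

def unit_secants(X):
    S = np.array([X[i] - X[j] for i, j in combinations(range(len(X)), 2)])
    return S / np.linalg.norm(S, axis=1, keepdims=True)

def cull_secants(S, cos_tol=0.99):
    """Greedily drop secants nearly parallel to one already kept."""
    kept = []
    for s in S:
        if all(abs(s @ k) < cos_tol for k in kept):
            kept.append(s)
    return np.array(kept)

def worst_secant_norm(S, P):
    """Min norm of projected secants; near zero means the projection P
    nearly collapses two distinct data points together."""
    return np.linalg.norm(S @ P.T, axis=1).min()

# One simple candidate projection: the span of the top singular directions
# of the secant set itself, e.g.
#   _, _, Vt = np.linalg.svd(unit_secants(X));  P = Vt[:d]
```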
7. Dimensionality reduction for hyperspectral imagery
Yang, He (30 April 2011)
In this dissertation, dimensionality reduction for hyperspectral remote sensing imagery is investigated to alleviate the practical difficulties caused by high data dimension. Band selection and band clustering are applied for this purpose. Based on the availability of prior object information, supervised, semi-supervised, and unsupervised techniques are proposed. To take advantage of modern computational architectures, parallel implementations on clusters and graphics processing units (GPUs) are developed. The impact of dimensionality reduction on subsequent data analysis is also evaluated. The specific contributions are as follows.
1. A similarity-based unsupervised band selection algorithm is developed to select distinctive and informative bands; it outperforms other existing unsupervised band selection approaches in the literature.
2. An efficient supervised band selection method based on minimum estimated abundance covariance is developed, which outperforms other frequently used metrics. This new method does not need to conduct classification during the band selection process or examine original bands and band combinations as traditional approaches do.
3. An efficient semi-supervised band clustering method is proposed, which uses class signatures to conduct band partition. Compared to traditional unsupervised clustering, the computational complexity is significantly reduced.
4. Parallel GPU implementations of the developed algorithms, with computational cost-saving strategies, are designed to facilitate onboard processing.
5. As an application example, band selection results are used for urban land cover classification; with a few selected bands, classification accuracy can be greatly improved compared to using all the original bands or those from other frequently used dimensionality reduction methods.
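One common flavor of similarity-based unsupervised band selection is greedy: repeatedly keep the band that is least well represented by the bands already chosen, using linear prediction error as the dissimilarity measure. The sketch below implements that generic scheme as an assumption; it is not necessarily the dissertation's algorithm.

```python
# Hedged sketch: greedy unsupervised band selection by linear prediction
# error (a generic dissimilarity-based scheme, assumed for illustration).
import numpy as np

def select_bands(cube, k):
    """cube: (n_pixels, n_bands) hyperspectral data; returns k band indices."""
    n_bands = cube.shape[1]
    # Start from the two most decorrelated bands.
    corr = np.abs(np.corrcoef(cube.T))
    i, j = np.unravel_index(np.argmin(corr), corr.shape)
    selected = [int(i), int(j)]
    while len(selected) < k:
        B = cube[:, selected]
        errors = []
        for b in range(n_bands):
            if b in selected:
                errors.append(-np.inf)
                continue
            # Residual of predicting band b linearly from selected bands.
            coef, *_ = np.linalg.lstsq(B, cube[:, b], rcond=None)
            errors.append(np.linalg.norm(cube[:, b] - B @ coef))
        selected.append(int(np.argmax(errors)))
    return selected
```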
8. Manifold Sculpting
Gashler, Michael S. (24 April 2007)
Manifold learning algorithms have been shown to be useful for many applications of numerical analysis. Unfortunately, existing algorithms often produce noisy results, do not scale well, and are unable to benefit from prior knowledge about the expected results. We propose a new algorithm that iteratively discovers manifolds by preserving the local structure among neighboring data points while scaling down the values in unwanted dimensions. This algorithm produces less noisy results than existing algorithms, and it scales better when the number of data points is much larger than the number of dimensions. Additionally, this algorithm is able to benefit from existing knowledge by operating in a semi-supervised manner.
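The iterative idea described above can be sketched compactly: rotate the data so the unwanted variance sits in the trailing coordinates, then repeatedly shrink those coordinates while nudging the leading coordinates to restore the original neighbor distances. The toy implementation below follows that outline with a plain gradient step; the step sizes, scheduling, and semi-supervised mode of the actual algorithm are omitted.

```python
# Hedged sketch of the sculpting loop: scale down unwanted dimensions while
# adjusting the kept ones to preserve neighbor distances. A toy version;
# the actual algorithm's scheduling and refinements are omitted.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sculpt(X, d, k=10, sigma=0.99, iters=200, lr=0.1):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    nbrs = idx[:, 1:]                       # k neighbors of each point
    d0 = np.linalg.norm(X[:, None] - X[nbrs], axis=2)  # original distances
    # Rotate so principal variance occupies the leading coordinates.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    Y = (X - mu) @ Vt.T
    for _ in range(iters):
        Y[:, d:] *= sigma                   # gradually crush extra dimensions
        diff = Y[:, None] - Y[nbrs]         # (n, k, dim) neighbor offsets
        dist = np.linalg.norm(diff, axis=2)
        # Gradient of sum (dist - d0)^2 with respect to each point's position.
        g = ((dist - d0) / np.maximum(dist, 1e-12))[..., None] * diff
        Y[:, :d] -= lr * g.sum(axis=1)[:, :d]   # adjust kept coords only
    return Y[:, :d]
```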
9. Parametric Projection Pursuits for Dimensionality Reduction of Hyperspectral Signals in Target Recognition Applications
Lin, Huang-De Hennessy (08 May 2004)
The improved spectral resolution of modern hyperspectral sensors provides a means for discriminating subtly different classes of ground materials in remotely sensed images. However, in order to obtain statistically reliable classification results, the number of necessary training samples can increase exponentially as the number of spectral bands increases, and obtaining that many training signals for these high-dimensional datasets may not be feasible. The problem can be overcome by preprocessing the data to reduce the dimensionality and thus the number of required training samples. In this thesis, three dimensionality reduction methods, all based on parametric projection pursuits, are investigated: Sequential Parametric Projection Pursuits (SPPP), Parallel Parametric Projection Pursuits (PPPP), and Projection Pursuits Best Band Selection (PPBBS). The methods are applied to very high spectral resolution data to transform the hyperspectral data to a lower-dimensional subspace. Feature extractors and classifiers are then applied to the lower-dimensional data to obtain target detection accuracies. The three projection pursuit methods are compared to each other, as well as to the case of using no dimensionality reduction preprocessing. When applied to hyperspectral data in a precision agriculture application, discriminating sicklepod and cocklebur weeds, the results showed that the SPPP method was optimal in terms of accuracy, yielding a classification accuracy of >95% with a nearest mean, maximum likelihood, or nearest neighbor classifier. The PPPP method encountered optimization problems when the hyperspectral dimensionality was very high, e.g. in the thousands. The PPBBS method resulted in high classification accuracies, >95%, when the maximum likelihood classifier was utilized; however, it resulted in lower accuracies when the nearest mean or nearest neighbor classifiers were used. With no projection pursuit preprocessing, the classification accuracies ranged between ~50% and 95%, depending heavily on the type of classifier utilized.
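Parametric projection pursuits of this kind can be caricatured as follows: partition the adjacent spectral bands into contiguous groups, take each group's mean as a feature, and score the grouping with a class-separability index, greedily refining the partition. The sketch below uses a two-class Fisher ratio as the projection index; the concrete index and search strategy are illustrative assumptions rather than the thesis's exact formulation.

```python
# Hedged sketch: contiguous band grouping scored by a two-class Fisher
# ratio, refined by greedy boundary splitting. The index and search are
# illustrative assumptions, not the exact SPPP formulation.
import numpy as np

def group_features(X, bounds):
    """Average adjacent bands within each [start, stop) group."""
    return np.stack([X[:, a:b].mean(axis=1) for a, b in bounds], axis=1)

def fisher_index(F, y):
    """Sum of per-feature Fisher ratios for two classes (y in {0, 1})."""
    f0, f1 = F[y == 0], F[y == 1]
    num = (f0.mean(axis=0) - f1.mean(axis=0)) ** 2
    den = f0.var(axis=0) + f1.var(axis=0) + 1e-12
    return float((num / den).sum())

def greedy_grouping(X, y, n_groups):
    bounds = [(0, X.shape[1])]              # start with one big group
    while len(bounds) < n_groups:
        best = None
        for gi, (a, b) in enumerate(bounds):
            for cut in range(a + 1, b):     # try every split point
                trial = bounds[:gi] + [(a, cut), (cut, b)] + bounds[gi + 1:]
                score = fisher_index(group_features(X, trial), y)
                if best is None or score > best[0]:
                    best = (score, trial)
        bounds = best[1]
    return bounds
```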