1 |
Fault Isolation By Manifold Learning. Thurén, Mårten, January 1985
<p>This thesis investigates the possibility of improving black box fault diagnosis by a process called manifold learning, which, simply stated, is a way of finding patterns in recorded sensor data. The idea is that there is more information in the data than is exploited by simple classification algorithms such as k-Nearest Neighbor and Support Vector Machines, and that this additional information can be found using manifold learning methods. To test the idea, data from two different fault diagnosis scenarios is used: a Scania truck engine and an electrical system called Adapt. Two linear and one non-linear manifold learning methods are used: Principal Component Analysis and Linear Discriminant Analysis (linear) and Laplacian Eigenmaps (non-linear). Some improvements are achieved given certain conditions on the diagnosis scenarios, and the improvements of each method match the linearity of the system in which they are achieved: the positive results for the relatively linear electrical system come mainly from the linear methods Principal Component Analysis and Linear Discriminant Analysis, while the positive results for the non-linear Scania system come from the non-linear method Laplacian Eigenmaps. The results for scenarios without these special conditions are not improved, however, and it is uncertain whether the improvements in special condition scenarios are due to gained information or to the nature of the cases themselves.</p>
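The pipeline the abstract describes (reduce dimensionality with a linear or non-linear manifold method, then classify with k-NN) can be sketched as follows. The thesis data (Scania engine, Adapt) are not available here, so this uses scikit-learn's synthetic two-moons set as a hypothetical stand-in for non-linear sensor data; `SpectralEmbedding` is scikit-learn's implementation of Laplacian Eigenmaps.

```python
from sklearn.datasets import make_moons
from sklearn.decomposition import PCA
from sklearn.manifold import SpectralEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for sensor data: two fault classes on curved manifolds.
X, y = make_moons(n_samples=400, noise=0.08, random_state=0)

# Baseline: k-NN directly on the raw features.
acc_raw = cross_val_score(KNeighborsClassifier(5), X, y, cv=5).mean()

# Linear reduction (PCA) vs. non-linear reduction (Laplacian Eigenmaps).
X_pca = PCA(n_components=2).fit_transform(X)
X_le = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X)

acc_pca = cross_val_score(KNeighborsClassifier(5), X_pca, y, cv=5).mean()
acc_le = cross_val_score(KNeighborsClassifier(5), X_le, y, cv=5).mean()
```

On strongly non-linear data like this, the Laplacian Eigenmaps embedding tends to give the k-NN classifier an easier geometry than PCA, mirroring the abstract's finding that the non-linear method helped on the non-linear Scania system.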
|
3 |
REGION-BASED GEOMETRIC ACTIVE CONTOUR FOR CLASSIFICATION USING HYPERSPECTRAL REMOTE SENSING IMAGES. Yan, Lin, 20 October 2011
No description available.
|
4 |
Towards on-line domain-independent big data learning : novel theories and applications. Malik, Zeeshan, January 2015
Feature extraction is an extremely important pre-processing step for pattern recognition and machine learning problems. This thesis highlights how one can best extract features from the data in a fully online and purely adaptive manner. The solution to this problem is given for both labeled and unlabeled datasets, by presenting a number of novel on-line learning approaches. Specifically, the differential equation method for solving the generalized eigenvalue problem is used to derive a number of novel machine learning and feature extraction algorithms. The incremental eigen-solution method is used to derive a novel incremental extension of linear discriminant analysis (LDA). Further, the proposed incremental version is combined with the extreme learning machine (ELM), in which the ELM is used as a preprocessor before learning. In this first key contribution, the dynamic random expansion characteristic of ELM is combined with the proposed incremental LDA technique, and shown to offer a significant improvement in maximizing the discrimination between points in two different classes, while minimizing the distance within each class, in comparison with other standard state-of-the-art incremental and batch techniques. In the second contribution, the differential equation method for solving the generalized eigenvalue problem is used to derive a novel state-of-the-art purely incremental version of the slow feature analysis (SFA) algorithm, termed the generalized eigenvalue based slow feature analysis (GENEIGSFA) technique. Further, the time series expansion of the echo state network (ESN) and radial basis functions (RBF) are used as a pre-processor before learning. In addition, higher-order derivatives are used as a smoothing constraint on the output signal.
Finally, an online extension of the generalized eigenvalue problem, derived from James Stone's criterion, is tested, evaluated and compared with the standard batch version of the slow feature analysis technique, to demonstrate its comparative effectiveness. In the third contribution, light-weight extensions of the statistical technique known as canonical correlation analysis (CCA), for both twinned and multiple data streams, are derived using the same method of solving the generalized eigenvalue problem. Further, the proposed method is enhanced by maximizing the covariance between data streams while simultaneously maximizing the rate of change of variances within each data stream. A recurrent set of connections, as used by the ESN, is placed between the inputs and the canonical projections in order to capture shared temporal information in two or more data streams. A solution to the problem of identifying a low dimensional manifold in a high dimensional data space is then presented in an incremental and adaptive manner. Finally, an online locally optimized extension of Laplacian Eigenmaps is derived, termed the generalized incremental Laplacian Eigenmaps technique (GENILE). Apart from exploiting the benefit of the incremental nature of the proposed manifold-based dimensionality reduction technique, the projections produced by this method are shown, in most cases, to yield better classification accuracy than the standard batch versions of these techniques, on both artificial and real datasets.
|