1. Dimensionality Reduction Using Factor Analysis
Khosla, Nitin. January 2006.
In many pattern recognition applications, a large number of features are extracted in order to ensure accurate classification of unknown classes. One way to cope with high dimensionality is to first reduce the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. The dimensionality reduction process thus becomes the pre-processing stage of the pattern recognition system. In addition, probability density estimation becomes simpler when fewer variables are involved. Dimensionality reduction is useful in speech recognition, data compression, visualization and exploratory data analysis. Techniques that can be used for dimensionality reduction include Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor Analysis can be considered an extension of Principal Component Analysis. The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observations: the expectation step evaluates the expected complete-data log-likelihood conditioned upon the observations, and the maximization step then provides a new estimate of the parameters. This research compares Factor Analysis (based on the expectation-maximization algorithm), Principal Component Analysis and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM-based) and Local Principal Component Analysis using Vector Quantization.
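For illustration, a minimal sketch applying the three projection techniques named above to synthetic data with scikit-learn; the sample size, feature count and number of components are arbitrary placeholders, not values from the thesis:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data: 500 observations, 50 features, 3 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))
y = rng.integers(0, 3, size=500)

# Factor Analysis: latent-variable model fitted by iterative maximum likelihood.
Z_fa = FactorAnalysis(n_components=10, random_state=0).fit_transform(X)

# PCA: unsupervised variance-maximising projection.
Z_pca = PCA(n_components=10).fit_transform(X)

# LDA: supervised projection onto at most (number of classes - 1) directions.
Z_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(Z_fa.shape, Z_pca.shape, Z_lda.shape)  # (500, 10) (500, 10) (500, 2)
```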
2. Classification of Remotely Sensed Data by Using 2D Local Discriminant Bases
Tekinay, Cagri. 01 August 2009.
In this thesis, the 2D Local Discriminant Bases (LDB) algorithm is used with a 2D search structure to classify remotely sensed data. The 2D Linear Discriminant Analysis (LDA) method is converted into an M-ary classifier by combining the majority voting principle with linear distance parameters. The feature extraction algorithm selects the relevant features by removing irrelevant ones and/or combining those which do not represent supplemental information on their own. The algorithm is implemented on a remotely sensed airborne data set from Tippecanoe County, Indiana to evaluate its performance. Spectral and spatial-frequency features are extracted from the multispectral data and used to classify vegetative species such as corn, soybeans, red clover, wheat and oats in the data set.
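As a rough sketch of turning a binary discriminant classifier into an M-ary one by majority voting over class pairs (a simplified stand-in for the thesis's 2D LDA construction, on made-up five-band data):

```python
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ovo_lda_fit_predict(X_train, y_train, X_test):
    """Fit one binary LDA per class pair and combine them by majority vote."""
    classes = np.unique(y_train)
    votes = np.zeros((X_test.shape[0], classes.size), dtype=int)
    for i, j in combinations(range(classes.size), 2):
        mask = np.isin(y_train, [classes[i], classes[j]])
        clf = LinearDiscriminantAnalysis().fit(X_train[mask], y_train[mask])
        pred = clf.predict(X_test)
        votes[:, i] += (pred == classes[i])
        votes[:, j] += (pred == classes[j])
    return classes[votes.argmax(axis=1)]

# Hypothetical multispectral pixels: 5 spectral bands, 5 crop classes.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 5))
y_train = rng.integers(0, 5, size=300)
X_test = rng.normal(size=(50, 5))
print(ovo_lda_fit_predict(X_train, y_train, X_test)[:10])
```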
3. Infinite dimensional discrimination and classification
Shin, Hyejin. 17 September 2007.
Modern data collection methods are now frequently returning observations that should be viewed as the result of digitized recording or sampling from stochastic processes rather than vectors of finite length. In spite of great demands, only a few classification methodologies for such data have been suggested and supporting theory is quite limited. The focus of this dissertation is on discrimination and classification in this infinite dimensional setting. The methodology and theory we develop are based on the abstract canonical correlation concept of Eubank and Hsing (2005), and motivated by the fact that Fisher's discriminant analysis method is intimately tied to canonical correlation analysis. Specifically, we have developed a theoretical framework for discrimination and classification of sample paths from stochastic processes through use of the Loeve-Parzen isomorphism that connects a second order process to the reproducing kernel Hilbert space generated by its covariance kernel. This approach provides a seamless transition between the finite and infinite dimensional settings and lends itself well to computation via smoothing and regularization. In addition, we have developed a new computational procedure and illustrated it with simulated data and Canadian weather data.
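A minimal finite-dimensional caricature of this workflow, smoothing sample paths by projecting onto a truncated basis and then discriminating on the basis coefficients, might look as follows; the synthetic curves and the Fourier basis are illustrative only and do not reproduce the RKHS machinery developed in the dissertation:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical curves observed on a common grid of 100 time points, two classes.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
n = 200
y = rng.integers(0, 2, size=n)
curves = np.sin(2 * np.pi * t) * (1 + y[:, None]) + rng.normal(scale=0.5, size=(n, 100))

# Project each curve onto a small Fourier basis (smoothing/regularization by
# truncation), then discriminate in the finite-dimensional coefficient space.
K = 5
basis = np.column_stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(K)] +
                        [np.cos(2 * np.pi * (k + 1) * t) for k in range(K)])
coefs, *_ = np.linalg.lstsq(basis, curves.T, rcond=None)  # shape (2K, n)
acc = LinearDiscriminantAnalysis().fit(coefs.T, y).score(coefs.T, y)
print(f"in-sample accuracy: {acc:.2f}")
```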
4. Development of an EEG Brain-Machine Interface to Aid in Recovery of Motor Function after Neurological Injury
Salmon, Elizabeth. 01 January 2013.
Impaired motor function following neurological injury may be overcome through therapies that induce neuroplastic changes in the brain. Therapeutic methods include repetitive exercises that promote use-dependent plasticity (UDP), the benefit of which may be increased by first administering peripheral nerve stimulation (PNS) to activate afferent fibers, resulting in increased cortical excitability. We speculate that PNS delivered only in response to attempted movement would induce timing-dependent plasticity (TDP), a mechanism essential to normal motor learning. Here we develop a brain-machine interface (BMI) to detect movement intent and effort in healthy volunteers (n=5) from their electroencephalogram (EEG). This could be used in the future to promote TDP by triggering PNS in response to a patient's level of effort in a motor task. Linear classifiers were used to predict state (rest, sham, right, left) based on EEG variables in a handgrip task and to discriminate between three levels of applied force. Mean classification accuracy with out-of-sample data was 54% (23-73%) for tasks and 44% (21-65%) for force. There was a slight but significant correlation (p<0.001) between sample entropy and force exerted. The results indicate the feasibility of applying PNS in response to motor intent detected from the brain.
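A hedged sketch of this kind of analysis, a linear classifier scored on out-of-sample data plus a feature-to-force correlation, using entirely synthetic stand-in data:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical EEG feature matrix: 200 trials x 32 band-power features,
# with four task labels (0=rest, 1=sham, 2=right, 3=left).
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 32))
state = rng.integers(0, 4, size=200)

# Out-of-sample accuracy of a linear classifier, estimated by cross-validation.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, state, cv=5).mean()
print(f"mean out-of-sample accuracy: {acc:.2f}")  # chance level here is 0.25

# Correlation between one feature (e.g. sample entropy of a channel) and the
# force exerted on each trial.
force = rng.normal(size=200)
r, p = pearsonr(X[:, 0], force)
print(f"r = {r:.2f}, p = {p:.3f}")
```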
5. Real-time Embedded Age and Gender Classification in Unconstrained Video
Azarmehr, Ramin. January 2015.
Recently, automatic demographic classification has found its way into embedded applications such as targeted advertising in mobile devices, and in-car warning systems for elderly drivers. In this thesis, we present a complete framework for video-based gender classification and age estimation which can perform accurately on embedded systems in real-time and under unconstrained conditions. We propose a segmental dimensionality reduction technique utilizing Enhanced Discriminant Analysis (EDA) to minimize the memory and computational requirements, and enable the implementation of these classifiers for resource-limited embedded systems which otherwise is not achievable using existing resource-intensive approaches. On a multi-resolution feature vector we have achieved up to 99.5% compression ratio for training data storage, and a maximum performance of 20 frames per second on an embedded Android platform. Also, we introduce several novel improvements such as face alignment using the nose, and an illumination normalization method for unconstrained environments using bilateral filtering. These improvements could help to suppress the textural noise, normalize the skin color, and rectify the face localization errors. A non-linear Support Vector Machine (SVM) classifier along with a discriminative demography-based classification strategy is exploited to improve both accuracy and performance of classification. We have performed several cross-database evaluations on different controlled and uncontrolled databases to assess the generalization capability of the classifiers. Our experiments demonstrated competitive accuracies compared to the resource-demanding state-of-the-art approaches.
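One common way to use bilateral filtering for illumination normalization, offered here only as an assumed interpretation of the method mentioned above, is to treat the heavily smoothed image as an estimate of the illumination field and divide it out. A brief OpenCV sketch with placeholder filter parameters:

```python
import numpy as np
import cv2

# Hypothetical grayscale face crop (random pixels standing in for a real image).
rng = np.random.default_rng(4)
face = rng.uniform(0, 255, size=(64, 64)).astype(np.uint8)

# Bilateral filtering smooths while preserving edges, so the filtered image
# approximates the large-scale illumination; dividing it out retains the
# small-scale texture that the classifier relies on.
illum = cv2.bilateralFilter(face, d=9, sigmaColor=75, sigmaSpace=75).astype(np.float32)
normalized = face.astype(np.float32) / (illum + 1.0)
normalized = cv2.normalize(normalized, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```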
6. Face recognition with partial occlusions using weighing and image segmentation
Chanaiwa, Tapfuma. January 2020.
This dissertation studied the problem of face recognition when facial images have partial occlusions such as sunglasses and scarves. These partial occlusions lead to the loss of discriminatory information when trying to recognise a person's face using traditional face recognition techniques that do not take such shortcomings into account. This dissertation aimed to fill that gap in knowledge. Several papers in the literature put forward the theory that not all regions of the face contribute equally when discriminating between different subjects, stating that some regions, such as the eyes and nose, contribute more than others. While this may be true in theory, there was a need to study the problem comprehensively.
A weighting technique was introduced that took into account the different features of the face, assigning weights to each feature based on its distance from five points identified as the centres of the weighting technique. The five centres chosen were the left eye, the right eye, the centre of the brows, the nose and the mouth; these centres capture where the five dominant regions of the face are roughly located. This weighting technique was fused with an image segmentation process, ultimately leading to a hybrid approach to face recognition.
Five features of the face were identified and studied quantitatively on how much they influence face recognition. These five features were the chin (C), eyes (E), forehead (F), mouth (M) and finally the nose (N). For the system to be robust and thorough, combinations of these five features were constructed to make 31 models that were used for both training and testing purposes. This meant that each of the five features had 16 models associated with it. For example, the chin (C) had the following models associated with it: C, CE, CF, CM, CN, CEF, CEM, CEN, CFM, CFN, CMN, CEFM, CEFN, CEMN, CFMN and CEFMN. These models were put in five different groupings called Category 1 up to Category 5. A Category 3 model implied that only three out of the five features were utilised for training and testing the algorithm. An example of a Category 3 model was the CFN model, meaning that this model simulated partial occlusion of the eyes and the mouth region. The face recognition algorithm was trained on all these different models in order to ascertain the efficiency and effectiveness of the proposed technique; the combinations are enumerated in the sketch below. The results were then compared with various methods from the literature. Dissertation (MEng (Computer Engineering)), University of Pretoria, 2020.
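The 31 feature-combination models can be enumerated directly; a small sketch (the feature letters follow the definitions above, everything else is illustrative):

```python
from itertools import combinations

features = ["C", "E", "F", "M", "N"]  # chin, eyes, forehead, mouth, nose

# All non-empty subsets of the five facial regions: C(5,1)+...+C(5,5) = 31 models.
models = ["".join(c) for r in range(1, 6) for c in combinations(features, r)]
print(len(models))                      # 31
print([m for m in models if "C" in m])  # the 16 models containing the chin
print(sorted(models, key=len)[:5])      # the five Category 1 (single-feature) models
```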
7. Predicting the size of a company winning a procurement: an evaluation study of three classification models
Björkegren, Ellen. January 2022.
In this thesis, the performance of the classification methods Linear Discriminant Analysis (LDA), Random Forests (RF), and Support Vector Machines (SVM) is compared using procurement data to predict what size of company will win a procurement. This is useful information for companies, since bidding on a procurement takes time and resources, which they can save if they know their chances of winning are low. The data used in the models are collected from OpenTender and allabolag.se and represent procurements that were awarded to companies in 2020. A total of 8 models are created: two versions of the LDA model, two versions of the RF model, and four versions of the SVM model, where some models are more complex than others. All models are evaluated on overall performance using hit rate, Huberty's I Index, mean average error, and Area Under the Curve. The most complex SVM model performs best across all evaluation measurements, whereas the less complex LDA model performs worst overall. Hit rates and mean average errors are also calculated within each class, and the complex SVM models perform best on all company sizes, except for small companies, which are best predicted by the less complex Random Forest model.
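A schematic version of such a comparison, with synthetic placeholder data standing in for the OpenTender/allabolag.se features and only a subset of the evaluation measures (hit rate and multi-class AUC):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical procurement data: 10 numeric features, 3 winner-size classes
# (0 = small, 1 = medium, 2 = large company).
rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 3, size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", probability=True, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    hit_rate = accuracy_score(y_te, clf.predict(X_te))  # overall hit rate
    auc = roc_auc_score(y_te, clf.predict_proba(X_te), multi_class="ovr")
    print(f"{name}: hit rate {hit_rate:.2f}, AUC {auc:.2f}")
```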
8. Misclassification Probabilities through Edgeworth-type Expansion for the Distribution of the Maximum Likelihood based Discriminant Function
Umunoza Gasana, Emelyne. January 2021.
This thesis covers misclassification probabilities via an Edgeworth-type expansion of the maximum likelihood based discriminant function. When deriving misclassification errors, the mean and variance of each population are first assumed to be known, with the variance the same across populations; thereafter the case where these parameters are unknown is considered. Cumulants of the discriminant function for discriminating between two multivariate normal populations are derived. Approximate probabilities of the misclassification errors are established via an Edgeworth-type expansion using a standard normal distribution.
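For orientation, the classical linear discriminant function for two normal populations with a common covariance matrix, and its misclassification probability when the parameters are known, are the standard results below (they are background for, not a reproduction of, the thesis's expansion); once maximum likelihood estimates are plugged in, the discriminant function is no longer exactly normal and its cumulants drive Edgeworth-type corrections to the normal approximation.

```latex
% Allocate x to population \pi_1 when L(x) > 0, for two N_p(\mu_i, \Sigma)
% populations with common covariance and known parameters:
L(x) = \Bigl(x - \tfrac{1}{2}(\mu_1 + \mu_2)\Bigr)^{\top}\Sigma^{-1}(\mu_1 - \mu_2),
\qquad
\Delta^{2} = (\mu_1 - \mu_2)^{\top}\Sigma^{-1}(\mu_1 - \mu_2).

% With known parameters both misclassification errors reduce to a normal tail
% probability:
P\bigl(L(x) \le 0 \mid x \in \pi_1\bigr) = \Phi\!\left(-\tfrac{\Delta}{2}\right).
```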
9. Toward Enhanced P300 Speller Performance
Krusienski, D. J., Sellers, Eric W., McFarland, D. J., Vaughan, T. M., Wolpaw, J. R. 15 January 2008.
This study examines the effects of expanding the classical P300 feature space on the classification performance of data collected from a P300 speller paradigm [Farwell LA, Donchin E. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroenceph Clin Neurophysiol 1988;70:510-23]. Using stepwise linear discriminant analysis (SWLDA) to construct a classifier, the effects of spatial channel selection, channel referencing, data decimation, and maximum number of model features are compared with the intent of establishing a baseline not only for the SWLDA classifier, but for related P300 speller classification methods in general. By supplementing the classical P300 recording locations with posterior locations, online classification performance of P300 speller responses can be significantly improved using SWLDA and the favorable parameters derived from the offline comparative analysis.
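A rough sketch of stepwise feature selection for an LDA classifier on synthetic P300-like features; classical SWLDA steps features in and out according to regression p-values, whereas the readily available cross-validated forward search below is only a stand-in:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# Hypothetical P300 feature matrix: 600 stimulus epochs x 64 channel/time
# features, labelled 1 for target flashes and 0 for non-targets.
rng = np.random.default_rng(6)
X = rng.normal(size=(600, 64))
y = rng.integers(0, 2, size=600)

# Forward stepwise selection of a capped number of features for an LDA model.
lda = LinearDiscriminantAnalysis()
sfs = SequentialFeatureSelector(lda, n_features_to_select=10, direction="forward", cv=3)
X_sel = sfs.fit_transform(X, y)
print(cross_val_score(lda, X_sel, y, cv=5).mean())
```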
10. Robustness Against Non-Normality: Evaluating LDA and QDA in Simulated Settings Using Multivariate Non-Normal Distributions
Viktor Gånheim, Isak Åslund. January 2023.
Evaluating classifiers in controlled settings is essential for empirical applications, as extensive knowledge of model behaviour is needed for accurate predictions. This thesis investigates the robustness against non-normality of two prominent classifiers, LDA and QDA. Through simulation, errors in leave-one-out cross-validation are compared for data generated by different multivariate distributions, also controlling for covariance structures, class separation and sample sizes. Unexpectedly, the classifiers perform better on data generated by heavy-tailed symmetrical distributions than by the normal distribution. Possible explanations are proposed, but the cause remains unknown. There is a need for further studies, investigating more settings as well as mathematical properties, to verify and understand these results.
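A compact sketch of such a simulation, comparing leave-one-out errors of LDA and QDA on normal versus heavy-tailed (multivariate t) data; the dimension, class separation and sample sizes are arbitrary placeholders:

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(7)
p, n_per_class, sep = 3, 50, 1.5
cov = np.eye(p)

def sample_class(mean, heavy_tailed, size):
    """Multivariate normal, or a heavy-tailed multivariate t (df=3) alternative."""
    z = rng.multivariate_normal(np.zeros(p), cov, size)
    if heavy_tailed:
        w = rng.chisquare(3, size=size) / 3.0   # t_3 as a scale mixture of normals
        z = z / np.sqrt(w)[:, None]
    return z + mean

for heavy in (False, True):
    X = np.vstack([sample_class(np.zeros(p), heavy, n_per_class),
                   sample_class(np.full(p, sep), heavy, n_per_class)])
    y = np.repeat([0, 1], n_per_class)
    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("QDA", QuadraticDiscriminantAnalysis())]:
        err = 1 - cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
        print(f"{'t(3)' if heavy else 'normal':>6} {name}: LOOCV error {err:.3f}")
```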