1 |
A Hyperspherical Classifier Training Method for Pattern Recognition Problems with Overlapping
Gao, Chih-Hung 24 July 2001 (has links)
|
2 |
Fuzzy logic and neural network techniques in data analysis
Campbell, Jonathan G. January 1999 (has links)
No description available.
|
3 |
Noun and numeral classifiers in Mixtec and Tzotzil: a referential view
Leon Pasqual, Maria Lourdes de January 1988 (has links)
No description available.
|
4 |
Co-evolving functions in genetic programming
Ahluwalia, Manu January 2000 (has links)
No description available.
|
5 |
Prediction Intervals for Class Probabilities
Yu, Xiaofeng January 2007 (has links)
Prediction intervals for class probabilities are of interest in machine learning because they can quantify the uncertainty about the class probability estimate for a test instance. The idea is that all likely class probability values of the test instance are included, with a pre-specified confidence level, in the calculated prediction interval. This thesis proposes a probabilistic model for calculating such prediction intervals. Given the unobservability of class probabilities, a Bayesian approach is employed to derive a complete distribution of the class probability of a test instance based on a set of class observations of training instances in the neighbourhood of the test instance. A random decision tree ensemble learning algorithm is also proposed, whose prediction output constitutes the neighbourhood that is used by the Bayesian model to produce a prediction interval (PI) for the test instance. The Bayesian model, which is used in conjunction with the ensemble learning algorithm and the standard nearest-neighbour classifier, is evaluated on artificial datasets and modified real datasets.
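The abstract does not spell out the thesis's exact model, but the core idea — a Bayesian interval for a class probability derived from class observations among a test instance's neighbours — can be sketched with a Beta-Binomial model. Everything below (the uniform Beta(1, 1) prior, the function names, the toy counts) is an illustrative assumption, not the thesis's actual method:

```python
import math

def beta_cdf(p, a, b):
    """CDF of Beta(a, b) at p for integer a, b >= 1, computed via the
    identity between the Beta CDF and a binomial tail sum."""
    n = a + b - 1
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(a, n + 1))

def beta_quantile(q, a, b, tol=1e-9):
    """Invert the (monotone) Beta CDF by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if beta_cdf(mid, a, b) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def class_probability_interval(k, n, confidence=0.95):
    """Credible interval for the class probability given k matching
    class labels among n neighbours, under a uniform Beta(1, 1) prior.
    The posterior is then Beta(k + 1, n - k + 1)."""
    alpha = (1 - confidence) / 2
    a, b = k + 1, n - k + 1
    return beta_quantile(alpha, a, b), beta_quantile(1 - alpha, a, b)

# Example: 7 of 10 neighbours share the test instance's class.
lo, hi = class_probability_interval(7, 10)
```

The interval widens as the neighbourhood shrinks, which is exactly the uncertainty a PI is meant to capture.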
|
6 |
A Study on Interestingness Measures for Associative Classifiers
Jalali Heravi, Mojdeh 11 1900 (has links)
Associative classification is a rule-based approach to classifying data that relies on association rule mining, discovering associations between a set of features and a class label. Support and confidence are the de facto interestingness measures used for discovering relevant association rules, and the support-confidence framework has also been used in most, if not all, associative classifiers. Although support and confidence are appropriate measures for building a strong model in many cases, they are not ideal: they can generate a huge set of rules, which hinders effectiveness in cases where other measures would be better suited.
There are many other rule interestingness measures already used in machine learning, data mining and statistics. This work focuses on using 53 different objective measures for associative classification rules. A wide range of UCI datasets is used to study the impact of different interestingness measures on different phases of associative classifiers, based on the number of rules generated and the accuracy obtained. The results show that there are interestingness measures that can significantly reduce the number of rules for almost all datasets, while the accuracy of the model is hardly affected and is sometimes even improved. However, no single measure can be introduced as an obvious winner.
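The abstract does not list its 53 measures, but the baseline support-confidence framework it discusses, plus one common alternative measure (lift), can be sketched as follows. The transaction data and function name are illustrative assumptions:

```python
def rule_measures(transactions, antecedent, consequent):
    """Support, confidence and lift of the rule antecedent -> consequent,
    where transactions is a list of item sets."""
    n = len(transactions)
    n_a = sum(1 for t in transactions if antecedent <= t)
    n_c = sum(1 for t in transactions if consequent <= t)
    n_ac = sum(1 for t in transactions if antecedent | consequent <= t)
    support = n_ac / n
    confidence = n_ac / n_a if n_a else 0.0
    # Lift compares the rule's confidence with the consequent's base rate.
    lift = confidence / (n_c / n) if n_a and n_c else 0.0
    return support, confidence, lift

transactions = [
    {"milk", "bread", "butter"},
    {"milk", "bread"},
    {"bread", "butter"},
    {"milk", "butter"},
]
s, c, l = rule_measures(transactions, {"milk"}, {"bread"})
```

A measure like lift can discard rules whose confidence merely reflects a frequent consequent, which is one way an alternative measure prunes the rule set without hurting accuracy.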
|
7 |
Analysis and classification of drift-susceptible chemosensory responses
Bansal, Puneet, active 21st century 17 February 2015 (has links)
This report presents machine learning models that can accurately classify gases by analyzing data from an array of 16 sensors. More specifically, the report presents basic decision tree models and advanced ensemble versions. The contribution of this report is to show that basic decision trees perform reasonably well on the gas sensor data; however, their accuracy can be drastically improved by employing ensemble decision tree classifiers. The report presents bagged trees, AdaBoost trees and Random Forest models in addition to basic entropy- and Gini-based trees. It is shown that the ensemble classifiers achieve a very high accuracy of 99% in classifying gases even when the sensor data is drift-ridden. Finally, the report compares the accuracy of all the models developed.
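The report's models are not reproduced in the abstract, but the bagging idea it relies on — train each base learner on a bootstrap resample and combine by majority vote — can be sketched with a 1-D decision stump as the base learner. The data and all names here are toy assumptions, not the report's 16-sensor dataset:

```python
import random
from collections import Counter

def train_stump(xs, ys):
    """Fit a threshold stump (predict 1 if x >= threshold, else 0),
    choosing the training threshold with the fewest errors."""
    best_err, best_t = float("inf"), 0.0
    for t in sorted(set(xs)):
        err = sum((x >= t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_err, best_t = err, t
    return best_t

def bagged_ensemble(xs, ys, n_estimators=25, seed=0):
    """Bagging: train each stump on a bootstrap resample of the data."""
    rng = random.Random(seed)
    thresholds = []
    for _ in range(n_estimators):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        thresholds.append(train_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return thresholds

def predict(thresholds, x):
    """Majority vote across the ensemble."""
    votes = Counter(int(x >= t) for t in thresholds)
    return votes.most_common(1)[0][0]

# Toy 1-D readings: class 1 for values above roughly 5.
xs = [1, 2, 3, 4, 4.5, 5.5, 6, 7, 8, 9]
ys = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
model = bagged_ensemble(xs, ys)
```

Averaging over resamples is what makes ensembles more robust than a single tree when individual readings drift, which matches the report's finding.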
|
8 |
A Study on Interestingness Measures for Associative Classifiers
Jalali Heravi, Mojdeh Unknown Date
No description available.
|
9 |
Localization of Stroke Using Microwave Technology and Inner Product Subspace Classifier
Prabahar, Jasila January 2014 (has links)
Stroke or “brain attack” occurs when a blood clot carried by the blood vessels from another part of the body blocks a cerebral artery in the brain, or when a blood vessel breaks and interrupts the blood flow to parts of the brain. Depending on which part of the brain is damaged, the functional abilities controlled by that region are lost. By interpreting the patient’s symptoms it is possible to make a coarse estimate of the location of the stroke, e.g. whether it is in the left or right hemisphere of the brain. The aim of this study was to evaluate whether microwave technology can be used to estimate the location of haemorrhagic stroke. In the first part of the thesis, CT images of the patients for whom the microwave measurements were taken are analysed and used as a reference for the location of bleeding in the brain. The X, Y and Z coordinates are calculated from the target slice (where the bleeding is most prominent). Based on the bleeding coordinates, the datasets are divided into classes. Using supervised learning, the ISC algorithm is trained to classify stroke in the left and right hemispheres, stroke in the anterior and posterior parts of the brain, and stroke in the inferior and superior regions of the brain. The second part of the thesis analyses the classification results in order to identify the patients that were misclassified. The classification results for locating the bleeding were promising, with high sensitivity and specificity as indicated by the area under the ROC curve (AUC). An AUC of 0.86 was obtained for bleedings in the left and right brain, and an AUC of 0.94 for bleedings in the inferior and superior brain. The main constraints were the small size of the dataset and the limited availability of data with bleeding in the frontal brain, which leads to imbalance between classes.
After analysis it was found that bleedings close to the skull, and a few small bleedings deep inside the brain, were misclassified. Many factors can be responsible for misclassification, such as the antenna position, head size and amount of hair. The overall results indicate that SDD using the ISC algorithm has high potential to distinguish bleedings in different locations. It is expected that the results will become more stable with an increased patient dataset for training.
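The abstract does not define the ISC algorithm, but the general subspace-classifier idea behind inner-product methods — represent each class by an orthonormal basis of its training signals, then assign a test signal to the class whose subspace captures the most of its energy — can be sketched as follows. The Gram-Schmidt construction, the function names, and the toy 3-D "signals" are illustrative assumptions, not the thesis's microwave data or its exact classifier:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors, eps=1e-10):
    """Orthonormal basis for the span of the given training signals."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = dot(w, w) ** 0.5
        if norm > eps:
            basis.append([wi / norm for wi in w])
    return basis

def projection_energy(x, basis):
    """Squared norm of the projection of x onto the class subspace,
    computed from inner products with the orthonormal basis."""
    return sum(dot(x, b) ** 2 for b in basis)

def classify(x, class_bases):
    """Assign x to the class whose subspace captures the most energy."""
    return max(class_bases, key=lambda c: projection_energy(x, class_bases[c]))

# Toy 3-D signals: each class spans a different subspace.
class_bases = {
    "left": gram_schmidt([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
    "right": gram_schmidt([[0.0, 0.0, 1.0]]),
}
label = classify([0.1, 0.2, 2.0], class_bases)
```

Because the decision reduces to inner products with a small basis, such classifiers remain usable even with the small training sets the thesis identifies as its main constraint.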
|
10 |
Acquisition and analysis of heart sound data
Hebden, John Edward January 1997 (has links)
No description available.
|