11 |
Multi-class Classification Methods Utilizing Mahalanobis Taguchi System And A Re-sampling Approach For Imbalanced Data Sets. Ayhan, Dilber, 01 April 2009.
Classification approaches are used in many areas to identify or estimate the classes to which different observations belong. Within the scope of this thesis, the Mahalanobis Taguchi System (MTS), a classification approach, is analyzed and further improved for multi-class classification problems. MTS aims to identify significant variables and classifies a new observation based on its Mahalanobis distance (MD). In this study, first, the sample size problems encountered mostly in small data sets and the multicollinearity problems that constitute some limitations of MTS are analyzed, and a re-sampling approach is explored as a solution. Our re-sampling approach, which only works for data sets with two classes, is a combination of over-sampling and under-sampling. Over-sampling is based on SMOTE, which generates synthetic observations along the segments between minority-class observations and their nearest neighbors within that class. In addition, MTS models are used to test the performance of several re-sampling parameters, for which the most appropriate values are sought specific to each case. In the second part, multi-class classification methods with MTS are developed. An algorithm named Feature Weighted Multi-class MTS-I (FWMMTS-I) is inspired by the descent feature-weighted MD: rather than summing the variables' contributions to the MD with equal weights, it assigns weights close to zero to noisy variables so that they do not mask the other variables. As a second multi-class classification algorithm, the original MTS method is extended to multi-class problems; this extension is called Multi-class MTS (MMTS). In addition, an approach comparable to that of Su and Hsiao (2009), which also considers variable weights, is studied with a modification in the MD calculation; it is named Feature Weighted Multi-class MTS-II (FWMMTS-II). The methods are compared on eight different multi-class data sets using 5-fold stratified cross-validation. Results show that FWMMTS-I is as accurate as MMTS, and both are better than FWMMTS-II. Interestingly, the Mahalanobis Distance Classifier (MDC), which uses all the variables directly in the classification model, performed equally well on the studied data sets.
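The two building blocks of this re-sampling study, the Mahalanobis distance used by MTS and SMOTE-style over-sampling, can be sketched in a few lines of NumPy. This is only an illustration, not the thesis's implementation: the pseudo-inverse, the neighbour count k, and the toy data are assumptions made for the example.

```python
import numpy as np

def mahalanobis_distance(x, reference):
    """Squared Mahalanobis distance of observation x from a reference sample,
    scaled by the number of variables as is customary in MTS."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(reference, rowvar=False))  # pseudo-inverse guards against multicollinearity
    d = x - mu
    return float(d @ cov_inv @ d) / reference.shape[1]

def smote_like_oversample(minority, n_new, k=5, rng=None):
    """Generate synthetic minority observations on the segments joining each
    sample to one of its k nearest minority-class neighbours (the SMOTE idea)."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        dists = np.linalg.norm(minority - minority[i], axis=1)   # distances within the minority class
        neighbours = np.argsort(dists)[1:k + 1]
        j = rng.choice(neighbours)
        gap = rng.random()
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)

# Toy usage: a 'normal' reference class and a small minority class.
rng = np.random.default_rng(0)
reference = rng.normal(size=(50, 4))
minority = rng.normal(loc=2.0, size=(8, 4))
print(mahalanobis_distance(minority[0], reference))
print(smote_like_oversample(minority, n_new=5, k=3, rng=0).shape)
```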
|
12 |
[en] REDUCING TEACHER-STUDENT INTERACTIONS BETWEEN TWO NEURAL NETWORKS / [pt] REDUZINDO AS INTERAÇÕES PROFESSOR-ALUNO ENTRE DUAS REDES NEURAIS. GUSTAVO MADEIRA KRIEGER, 11 October 2019.
[pt] Propagação de conhecimento é um dos pilares da evolução humana. Nossas descobertas são baseadas em conhecimentos já existentes, construídas em cima deles e então se tornam a fundação para a próxima geração de aprendizado. No ramo de Inteligência Artificial, existe o interesse em replicar esse aspecto da natureza humana em máquinas. Criando um primeiro modelo e treinando-o nos dados originais, outro modelo pode ser criado e aprender a partir dele ao invés de ter que começar todo o processo do zero. Se for comprovado que esse método é confiável, ele vai permitir várias mudanças na forma como nós abordamos machine learning, em que cada inteligência não será um microcosmo independente. Essa relação entre modelos é batizada de relação Professor-Aluno. Esse trabalho descreve o desenvolvimento de dois modelos distintos e suas capacidades de aprender usando a informação dada um ao outro. Os experimentos apresentados aqui mostram os resultados desse treino e as diferentes metodologias usadas em busca do cenário ótimo em que esse processo de aprendizado é viável para replicação futura. / [en] Propagation of knowledge is one of the pillars of human evolution. Our discoveries are all based on preexisting knowledge; they build upon it and then become the foundation for the next generation of learning. In the field of artificial intelligence, there is an interest in replicating this aspect of human nature in machines. By creating a first model and training it on the original data, another model can be created to learn from it instead of having to learn everything from scratch. If this method is proven to be reliable, it will allow many changes in the way we approach machine learning, especially by allowing different models to work together. This relation between models is nicknamed the Teacher-Student relation. This work describes the development of two separate models and their ability to learn using incomplete data and each other. The experiments presented here show the results of this training and the different methods used in the pursuit of an optimal scenario where such a learning process is viable for future use.
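The teacher-student idea described above can be illustrated with one common realization, knowledge distillation, in which the student fits the teacher's softened outputs instead of the original labels. This is only a hedged sketch: the tiny architectures, the temperature, and the KL-divergence loss are illustrative assumptions, not the setup actually used in the dissertation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical tiny teacher and student networks; the thesis's architectures differ.
teacher = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
student = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 3))

def distillation_step(x, optimizer, temperature=2.0):
    """One training step in which the student fits the teacher's soft outputs
    instead of (or in addition to) the original labels."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / temperature, dim=1)
    log_probs = F.log_softmax(student(x) / temperature, dim=1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(32, 20)          # unlabeled batch; the teacher supplies the training signal
print(distillation_step(x, optimizer))
```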
|
13 |
Classifying Portable Electronic Devices using Device Specifications: A Comparison of Machine Learning Techniques. Westerholm, Ludvig, January 2024.
In this project, we explored the use of machine learning for classifying portable electronic devices. The primary objective was to identify devices such as laptops, smartphones, and tablets based on their physical and technical specifications. These specifications, sourced from the Pricerunner price comparison website, include height, Wi-Fi standard, and screen resolution. We aggregated this information into a dataset and split it into a training set and a testing set. To classify the devices, we trained four popular machine learning models: Random Forest (RF), Logistic Regression (LR), k-Nearest Neighbor (kNN), and Fully Connected Network (FCN). We then compared the performance of these models. The evaluation metrics used to compare performance included precision, recall, F1-score, accuracy, and training time. The RF model achieved the highest overall accuracy of 95.4% on the original dataset. The FCN, applied to a dataset processed with standardization followed by Principal Component Analysis (PCA), reached an accuracy of 92.7%, the best within this specific subset. LR excelled in a few class-specific metrics, while kNN performed notably well relative to its training time. The RF model was the clear winner on the original dataset, while the kNN model was a strong contender on the PCA-processed dataset due to its significantly faster training time compared to the FCN. In conclusion, the RF was the best-performing model on the original dataset, the FCN showed impressive results on the standardized and PCA-processed dataset, and the kNN model, with the highest macro precision and rapid training time, also demonstrated competitive performance.
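A minimal reproduction of this comparison can be set up with scikit-learn pipelines. The sketch below uses synthetic data in place of the Pricerunner specifications, and the PCA dimensionality and model hyperparameters are illustrative assumptions rather than the thesis's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data; the thesis uses device specifications scraped from Pricerunner.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "kNN": make_pipeline(StandardScaler(), PCA(n_components=10),
                         KNeighborsClassifier()),
    "FCN": make_pipeline(StandardScaler(), PCA(n_components=10),
                         MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                                       random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(accuracy_score(y_test, model.predict(X_test)), 3))
```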
|
14 |
Classificação de dados estacionários e não estacionários baseada em grafos / Graph-based classification for stationary and non-stationary data. Bertini Júnior, João Roberto, 24 January 2011.
Métodos baseados em grafos consistem em uma poderosa forma de representação e abstração de dados que proporcionam, dentre outras vantagens, representar relações topológicas, visualizar estruturas, representar grupos de dados com formatos distintos, bem como, fornecer medidas alternativas para caracterizar os dados. Esse tipo de abordagem tem sido cada vez mais considerada para solucionar problemas de aprendizado de máquina, principalmente no aprendizado não supervisionado, como agrupamento de dados, e mais recentemente, no aprendizado semissupervisionado. No aprendizado supervisionado, por outro lado, o uso de algoritmos baseados em grafos ainda tem sido pouco explorado na literatura. Este trabalho apresenta um algoritmo não paramétrico baseado em grafos para problemas de classificação com distribuição estacionária, bem como sua extensão para problemas que apresentam distribuição não estacionária. O algoritmo desenvolvido baseia-se em dois conceitos, a saber, 1) em uma estrutura chamada grafo K-associado ótimo, que representa o conjunto de treinamento como um grafo esparso e dividido em componentes; e 2) na medida de pureza de cada componente, que utiliza a estrutura do grafo para determinar o nível de mistura local dos dados em relação às suas classes. O trabalho também considera problemas de classificação que apresentam alteração na distribuição de novos dados. Este problema caracteriza a mudança de conceito e degrada o desempenho do classificador. De modo que, para manter bom desempenho, é necessário que o classificador continue aprendendo durante a fase de aplicação, por exemplo, por meio de aprendizado incremental. Resultados experimentais sugerem que ambas as abordagens apresentam vantagens na classificação de dados em relação aos algoritmos testados. / Graph-based methods constitute a powerful form of data representation and abstraction which provides, among other advantages, the ability to represent topological relations, visualize structures, represent groups of data with distinct formats, and supply alternative measures to characterize data. Such an approach has been increasingly considered for solving machine learning problems, mainly in unsupervised learning, such as clustering, and more recently in semi-supervised learning. Graph-based solutions for supervised learning tasks, however, still remain underexplored in the literature. This work presents a non-parametric graph-based algorithm suitable for classification problems with stationary distribution, as well as its extension to cope with problems involving non-stationary data. The developed algorithm relies on two concepts: 1) a graph structure called the optimal K-associated graph, which represents the training set as a sparse graph separated into components; and 2) a purity measure for each component, which uses the graph structure to determine the local mixture level of the data with respect to their classes. This work also considers classification problems in which the distribution of incoming data changes over time. This problem characterizes concept drift and degrades the performance of any static classifier. Hence, in order to maintain accuracy, the classifier must keep learning during the application phase, for example through incremental learning. Experimental results suggest that both algorithms present advantages over the tested algorithms on data classification tasks.
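The purity idea can be illustrated with a simplified graph construction: build a symmetric k-NN graph, keep only same-class edges, and report, for each resulting component, the average fraction of a vertex's neighbours that share its class. This is a rough stand-in for the optimal K-associated graph, not the thesis's algorithm itself; the value of k and the toy data are assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from sklearn.datasets import make_blobs
from sklearn.neighbors import kneighbors_graph

def component_purity(X, y, k=3):
    """Build a symmetric k-NN graph, drop cross-class edges (one simple reading
    of the K-associated construction) and report, per resulting component, the
    average fraction of each vertex's neighbours that share its class."""
    knn = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    knn = knn.maximum(knn.T).toarray().astype(bool)          # symmetrize the k-NN graph
    same_class = y[:, None] == y[None, :]
    graph = knn & same_class                                  # keep only same-class edges
    n_comp, labels = connected_components(graph, directed=False)
    purities = {}
    for c in range(n_comp):
        members = np.where(labels == c)[0]
        frac = [knn[v][same_class[v]].sum() / max(knn[v].sum(), 1) for v in members]
        purities[c] = float(np.mean(frac))                    # component-level purity
    return purities

X, y = make_blobs(n_samples=60, centers=3, cluster_std=2.0, random_state=1)
print(component_purity(X, y, k=3))
```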
|
15 |
Multimodal Deep Learning for Multi-Label Classification and Ranking Problems. Dubey, Abhishek, January 2015.
In recent years, deep neural network models have been shown to outperform many state-of-the-art algorithms. The reason is that unsupervised pretraining of multi-layered deep neural networks learns better features, which further improve many supervised tasks. These models not only automate the feature extraction process but also provide robust features for various machine learning tasks. However, unsupervised pretraining and feature extraction using multi-layered networks are restricted to the input features and do not extend to the output. The performance of many supervised learning algorithms (or models) depends on how well the output dependencies are handled by these algorithms [Dembczyński et al., 2012]. Adapting standard neural networks to handle these output dependencies for any specific type of problem has been an active area of research [Zhang and Zhou, 2006, Ribeiro et al., 2012].
On the other hand, inference on multimodal data is considered a difficult problem in machine learning, and recently 'deep multimodal neural networks' have shown significant results [Ngiam et al., 2011, Srivastava and Salakhutdinov, 2012]. Several problems, such as classification with complete or missing modality data and generation of the missing modality, have been shown to work very well with these models. In this work, we consider three nontrivial supervised learning tasks: (i) multi-class classification (MCC),
(ii) multi-label classification (MLC), and (iii) label ranking (LR), listed in order of increasing complexity of the output. While multi-class classification deals with predicting one class for every instance, multi-label classification deals with predicting more than one class for every instance, and label ranking deals with assigning a rank to each label for every instance. All the work in this field revolves around formulating new error functions that can force the network to capture the output dependencies.
The aim of our work is to adapt neural networks to implicitly handle feature extraction (dependencies) for the output within the network structure, removing the need for hand-crafted error functions. We show that multimodal deep architectures can be adapted to these types of problems (or data) by treating the labels as one of the modalities. This also brings unsupervised pretraining to the output along with the input. We show that these models not only outperform standard deep neural networks, but also outperform standard adaptations of neural networks for individual domains, under various metrics, over the several data sets we considered. We observe that the advantage of our models over other models grows as the complexity of the output/problem increases.
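Treating the labels as one modality can be sketched with a toy joint network whose label branch is available during training but absent at test time. The architecture below is an assumption made for illustration (a small joint autoencoder-style network rather than the deep multimodal networks used in the thesis); names and dimensions are made up.

```python
import torch
import torch.nn as nn

class LabelsAsModalityNet(nn.Module):
    """Toy joint network: features and the label vector are encoded into a
    shared code which is decoded back into labels. At inference the label
    branch is fed zeros, so labels are predicted from features alone."""
    def __init__(self, n_features=50, n_labels=10, code=32):
        super().__init__()
        self.enc_x = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.enc_y = nn.Sequential(nn.Linear(n_labels, 16), nn.ReLU())
        self.joint = nn.Linear(64 + 16, code)
        self.dec_y = nn.Sequential(nn.Linear(code, n_labels))

    def forward(self, x, y=None):
        if y is None:                         # missing label modality at test time
            y = torch.zeros(x.size(0), self.dec_y[0].out_features, device=x.device)
        code = torch.relu(self.joint(torch.cat([self.enc_x(x), self.enc_y(y)], dim=1)))
        return self.dec_y(code)               # logits, one per label

model = LabelsAsModalityNet()
x, y = torch.randn(8, 50), torch.randint(0, 2, (8, 10)).float()
loss = nn.BCEWithLogitsLoss()(model(x, y), y)   # multi-label reconstruction loss
loss.backward()
print(model(x).shape)                            # label prediction without the label input
```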
|
16 |
Classificadores e aprendizado em processamento de imagens e visão computacional / Classifiers and machine learning techniques for image processing and computer vision. Rocha, Anderson de Rezende (1980-), 03 March 2009.
Orientador: Siome Klein Goldenstein / Tese (doutorado) - Universidade Estadual de Campinas, Instituto da Computação
Resumo: Neste trabalho de doutorado, propomos a utilização de classificadores e técnicas de aprendizado de máquina para extrair informações relevantes de um conjunto de dados (e.g., imagens) para solução de alguns problemas em Processamento de Imagens e Visão Computacional. Os problemas de nosso interesse são: categorização de imagens em duas ou mais classes, detecção de mensagens escondidas, distinção entre imagens digitalmente adulteradas e imagens naturais, autenticação, multi-classificação, entre outros. Inicialmente, apresentamos uma revisão comparativa e crítica do estado da arte em análise forense de imagens e detecção de mensagens escondidas em imagens. Nosso objetivo é mostrar as potencialidades das técnicas existentes e, mais importante, apontar suas limitações. Com esse estudo, mostramos que boa parte dos problemas nessa área aponta para dois pontos em comum: a seleção de características e as técnicas de aprendizado a serem utilizadas. Nesse estudo, também discutimos questões legais associadas à análise forense de imagens como, por exemplo, o uso de fotografias digitais por criminosos. Em seguida, introduzimos uma técnica para análise forense de imagens testada no contexto de detecção de mensagens escondidas e de classificação geral de imagens em categorias como indoors, outdoors, geradas em computador e obras de arte. Ao estudarmos esse problema de multi-classificação, surgem algumas questões: como resolver um problema multi-classe de modo a poder combinar, por exemplo, características de classificação de imagens baseadas em cor, textura, forma e silhueta, sem nos preocuparmos demasiadamente em como normalizar o vetor-comum de características gerado? Como utilizar diversos classificadores diferentes, cada um especializado e melhor configurado para um conjunto de características ou classes em confusão? Nesse sentido, apresentamos uma técnica para fusão de classificadores e características no cenário multi-classe através da combinação de classificadores binários. Nós validamos nossa abordagem numa aplicação real para classificação automática de frutas e legumes. Finalmente, nos deparamos com mais um problema interessante: como tornar a utilização de poderosos classificadores binários no contexto multi-classe mais eficiente e eficaz? Assim, introduzimos uma técnica para combinação de classificadores binários (chamados classificadores base) para a resolução de problemas no contexto geral de multi-classificação. / Abstract: In this work, we propose the use of classifiers and machine learning techniques to extract useful information from data sets (e.g., images) to solve important problems in Image Processing and Computer Vision. We are particularly interested in: two- and multi-class image categorization, hidden message detection, discrimination between natural and forged images, authentication, and multiclassification. To start with, we present a comparative survey of the state of the art in digital image forensics as well as hidden message detection. Our objective is to show the importance of the existing solutions and discuss their limitations. In this study, we show that most of these techniques strive to solve two common problems in Machine Learning: the feature selection and the classification techniques to be used. Furthermore, we discuss the legal and ethical aspects of image forensics analysis, such as the use of digital images by criminals. We introduce a technique for image forensics analysis in the context of hidden message detection and image classification in categories such as indoors, outdoors, computer-generated, and art works. From this multi-class classification, we encountered some important questions: how to solve a multi-class problem in order to combine, for instance, several different features such as color, texture, shape, and silhouette without worrying about the pre-processing and normalization of the combined feature vector? How to take advantage of different classifiers, each one custom-tailored to a specific set of classes in confusion? To cope with most of these problems, we present a feature and classifier fusion technique based on combinations of binary classifiers. We validate our solution with a real application for automatic produce classification. Finally, we address another interesting problem: how to combine powerful binary classifiers in the multi-class scenario more effectively? How to boost their efficiency? In this context, we present a solution that boosts the efficiency and effectiveness of multi-class classification built from binary techniques. / Doutorado / Engenharia de Computação / Doutor em Ciência da Computação
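The fusion of binary base classifiers into a multi-class decision can be illustrated with a plain one-vs-one voting scheme. The sketch below uses linear SVMs on the Iris data purely as an example; the thesis's fusion technique additionally combines per-pair feature sets and base learners, which is only hinted at in the comments here.

```python
from itertools import combinations
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def fit_pairwise(X, y):
    """Train one binary classifier per pair of classes (one-vs-one); each pair
    could, in principle, use its own feature subset or base learner."""
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        models[(a, b)] = SVC(kernel="linear").fit(X[mask], y[mask])
    return models

def predict_by_vote(models, X):
    """Majority vote over the binary classifiers' predictions."""
    votes = np.stack([m.predict(X) for m in models.values()])
    return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
models = fit_pairwise(X_tr, y_tr)
print((predict_by_vote(models, X_te) == y_te).mean())   # held-out accuracy
```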
|
18 |
Brain Tumor Grade Classification in MR images using Deep Learning / Klassificering av hjärntumör-grad i MR-bilder genom djupinlärning. Chatzitheodoridou, Eleftheria, January 2022.
Brain tumors represent a diverse spectrum of cancer types which can induce grave complications and lead to poor life expectancy. Amongst the various brain tumor types, gliomas are primary brain tumors that make up about 30% of adult brain tumors. They are graded according to the World Health Organization into Grades 1 to 4 (G1-G4), where G4 is the highest grade, with the highest malignancy and poorest prognosis. Early diagnosis and classification of brain tumor grade are very important since they can improve the treatment procedure and potentially prolong a patient's life, as life expectancy largely depends on the level of malignancy and the tumor's histological characteristics. While clinicians have diagnostic tools used as a gold standard, such as biopsies, these are either invasive or costly. A widely used non-invasive technique is magnetic resonance imaging, due to its ability to produce images with different soft-tissue contrast and high spatial resolution thanks to multiple imaging sequences. However, the examination of such images can be overwhelming for radiologists due to the overall large amount of data. Deep learning approaches, on the other hand, have shown great potential in brain tumor diagnosis and can assist radiologists in the decision-making process. In this thesis, brain tumor grade classification in MR images is performed using deep learning. Two popular pre-trained CNN models (VGG-19, ResNet50) were employed, using single MR modalities and combinations of them, to classify gliomas into three grades. All models were trained using data augmentation on 2D images from the TCGA dataset, which consisted of 3D volumes from 142 anonymized patients. The models were evaluated based on accuracy, precision, recall, F1-score, and AUC score, and the Wilcoxon Signed-Rank test was used to establish whether one classifier was statistically significantly better than the other. Since deep learning models are typically 'black box' models and can be difficult to interpret by non-experts, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to address model explainability. For single modalities, VGG-19 displayed the highest performance with a test accuracy of 77.86%, whilst for combinations of two and three modalities, T1ce + FLAIR and T2 + T1ce + FLAIR were the best-performing ones for VGG-19, with test accuracies of 74.48% and 75.78%, respectively. Statistical comparisons indicated that for single MR modalities and combinations of two MR modalities there was no statistically significant difference between the two classifiers, whilst for the combination of three modalities one model was better than the other. However, given the small size of the test population, these comparisons have low statistical power. The use of Grad-CAM for model explainability indicated that ResNet50 was able to localize the tumor region better than VGG-19.
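Grad-CAM, used here for explainability, can be reproduced with forward and backward hooks on the last convolutional block of a torchvision ResNet50. The sketch below adapts the classifier head to three glioma grades; the random weights, the random input tensor, and the choice of target layer are illustrative assumptions, not the thesis's trained model.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained-style backbone adapted to three glioma grades; weights=None keeps the
# example offline-friendly (swap in a pretrained weights enum for real transfer learning).
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 3)
model.eval()

activations, gradients = {}, {}
def fwd_hook(module, inputs, output): activations["a"] = output
def bwd_hook(module, grad_in, grad_out): gradients["g"] = grad_out[0]
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image, target_class):
    """Gradient-weighted class activation map over the last conv block."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)    # global-average-pool the gradients
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

heatmap = grad_cam(torch.randn(1, 3, 224, 224), target_class=2)
print(heatmap.shape)   # (224, 224) saliency map highlighting class-relevant regions
```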
|
19 |
Machine Learning based Predictive Data Analytics for Embedded Test Systems. Al Hanash, Fayad, January 2023.
Organizations gather enormous amounts of data and analyze them to extract insights that can help them make better decisions. Predictive data analytics is a crucial subfield of data analytics that aims to make accurate predictions by extracting insights from data with machine learning algorithms. This thesis applies supervised learning algorithms to perform predictive data analytics in the Embedded Test System at the Nordic Engineering Partner company. Predictive maintenance is a concept often used in manufacturing industries that refers to predicting asset failures before they occur. The machine learning algorithms used in this thesis are support vector machines, multi-layer perceptrons, random forests, and gradient boosting. Both binary and multi-class classifiers have been used to fit the models, and cross-validation, sampling techniques, and a confusion matrix have been used to accurately measure their performance. In addition to accuracy, the recall, precision, F1, kappa, MCC, and ROC AUC measures are reported as well. The fitted prediction models achieve high accuracy.
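The evaluation protocol, stratified cross-validation with accuracy, precision, recall, F1, kappa, MCC and ROC AUC, can be sketched with scikit-learn. The synthetic, imbalanced data and the single gradient-boosting model below are assumptions standing in for the proprietary embedded-test logs and the full model set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import cohen_kappa_score, make_scorer, matthews_corrcoef
from sklearn.model_selection import StratifiedKFold, cross_validate

# Synthetic stand-in for the embedded-test logs; imbalance mimics rare failures.
X, y = make_classification(n_samples=1500, n_features=25, weights=[0.9, 0.1],
                           random_state=0)

scoring = {
    "accuracy": "accuracy",
    "precision": "precision",
    "recall": "recall",
    "f1": "f1",
    "kappa": make_scorer(cohen_kappa_score),
    "mcc": make_scorer(matthews_corrcoef),
    "roc_auc": "roc_auc",
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
results = cross_validate(GradientBoostingClassifier(random_state=0), X, y,
                         cv=cv, scoring=scoring)
for name in scoring:
    print(name, round(results[f"test_{name}"].mean(), 3))   # mean score across folds
```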
|
20 |
A Machine Learning Model of Perturb-Seq Data for use in Space Flight Gene Expression Profile Analysis. Liam Fitzpatric Johnson, 27 April 2024.
The genetic perturbations caused by spaceflight in biological systems tend to have a system-wide effect which is often difficult to deconvolute into individual signals with specific points of origin. Single-cell multi-omic data can provide a profile of the perturbational effects but does not necessarily indicate the initial point of interference within a network. The objective of this project is to take advantage of large-scale, genome-wide perturbational (Perturb-Seq) datasets by using them to pre-train a generalist machine learning model that is capable of predicting the effects of unseen perturbations in new data. Perturb-Seq datasets are large libraries of single-cell RNA sequencing data collected from CRISPR knock-out screens in cell culture. The advent of generative machine learning algorithms, particularly transformers, makes it an ideal time to re-assess large-scale data libraries in order to grasp cell- and even organism-wide genomic expression motifs. By tailoring an algorithm to learn the downstream effects of genetic perturbations, we present a pre-trained generalist model capable of predicting the effects of multiple perturbations in combination, locating points of origin for perturbations in new datasets, predicting the effects of known perturbations in new datasets, and annotating large-scale network motifs. We demonstrate the utility of this model by identifying key perturbational signatures in RNA sequencing data from spaceflown biological samples from the NASA Open Science Data Repository.
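The input/output contract of such a model, a control expression profile plus an identifier of the perturbed gene in, a predicted perturbed profile out, can be sketched with a toy network. This is only an illustration of the interface: the thesis pre-trains a transformer on genome-wide Perturb-Seq screens, whereas the stand-in below is a small MLP with made-up dimensions.

```python
import torch
import torch.nn as nn

class PerturbationEffectModel(nn.Module):
    """Toy stand-in for a Perturb-Seq model: given a control expression profile
    and an embedding of the knocked-out gene, predict the perturbed profile.
    The actual work uses a pre-trained transformer; this MLP only illustrates
    the input/output contract."""
    def __init__(self, n_genes=2000, emb_dim=64):
        super().__init__()
        self.perturbation = nn.Embedding(n_genes, emb_dim)   # one entry per target gene
        self.net = nn.Sequential(nn.Linear(n_genes + emb_dim, 512), nn.ReLU(),
                                 nn.Linear(512, n_genes))
    def forward(self, control_expr, target_gene):
        p = self.perturbation(target_gene)
        # Predicted expression shift added to the control profile.
        return control_expr + self.net(torch.cat([control_expr, p], dim=1))

model = PerturbationEffectModel()
control = torch.randn(4, 2000)                 # 4 control cells, 2000 genes
target = torch.tensor([5, 17, 5, 300])         # CRISPR target gene index per cell
predicted = model(control, target)
print(predicted.shape)                         # (4, 2000) predicted perturbed profiles
```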
|