11

Advanced Text Analytics and Machine Learning Approach for Document Classification

Anne, Chaitanya 19 May 2017 (has links)
Text classification is used in information extraction and retrieval from a given text, and it is considered an important step in managing the vast and expanding body of records available in digital form. This thesis addresses the problem of classifying patent documents into fifteen categories, or classes, some of which overlap for practical reasons. To develop the classification model using machine learning techniques, useful features were extracted from the given documents. These features are used both to classify patent documents and to generate useful tag-words. The overall objective of this work is to systematize NASA’s patent management by developing a set of automated tools that can assist NASA in managing and marketing its portfolio of intellectual property (IP), and enable easier discovery of relevant IP by users. We have identified an array of applicable methods, including k-Nearest Neighbors (kNN), two variations of the Support Vector Machine (SVM) algorithm, and two tree-based classification algorithms: Random Forest and J48. The major research steps in this work consist of filtering techniques for variable selection, information gain and feature correlation analysis, and training and testing potential models using effective classifiers. Further, the obstacles associated with the imbalanced data were mitigated by adding synthetic data wherever appropriate, which resulted in a superior SVM-based classification model.
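The synthetic-data step mentioned above can be sketched as a SMOTE-style interpolation between minority-class examples. The thesis does not specify its exact generator, so the neighbour choice and all data below are illustrative:

```python
import random

def synthetic_oversample(minority, n_new, seed=0):
    """Create synthetic minority-class points by interpolating between a
    randomly chosen minority point and its nearest minority neighbour
    (the core idea behind SMOTE-style oversampling)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # nearest neighbour of a among the remaining minority points
        b = min((p for p in minority if p is not a),
                key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3)]  # illustrative minority class
new_points = synthetic_oversample(minority, n_new=5)
print(len(new_points))  # 5
```

Each synthetic point lies on the segment between a minority example and its nearest minority neighbour, so the oversampled class stays inside its original region of feature space.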
12

Detection of unusual fish trajectories from underwater videos

Beyan, Çigdem January 2015 (has links)
Fish behaviour analysis is a fundamental research area in marine ecology, as it helps detect environmental changes through the observation of unusual fish patterns or new fish behaviours. The traditional way of analysing fish behaviour is visual inspection by human observers, which is very time consuming and limits the amount of data that can be processed. There is therefore a need for automatic algorithms that identify fish behaviours using computer vision and machine learning techniques. The aim of this thesis is to help marine biologists with their work. We focus on behaviour understanding and analysis of detected and tracked fish using unusual behaviour detection approaches. Normal fish trajectories exhibit frequently observed behaviours, while unusual trajectories are outliers or rare trajectories. This thesis proposes three approaches to detecting unusual trajectories: i) a filtering mechanism for normal fish trajectories, ii) an unusual fish trajectory classification method using clustered and labelled data, and iii) an unusual fish trajectory classification approach using a clustering-based hierarchical decomposition. The rule-based trajectory filtering mechanism is proposed to remove normal fish trajectories, which potentially helps to increase the accuracy of the unusual fish behaviour detection system. The aim is to reject as many normal fish trajectories as possible while not rejecting unusual ones. The results show that this method successfully filters out normal trajectories with a low false negative rate. It is useful for building a ground truth data set from a very large fish trajectory repository, especially when normal fish trajectories greatly outnumber unusual ones. Moreover, it successfully distinguishes true fish trajectories from false trajectories that result from errors in the fish detection and tracking algorithms. 
A key contribution of this thesis is the proposed flat classifier, which uses an outlier detection method based on cluster cardinalities and a distance function to detect unusual fish trajectories. Clustered and labelled data are used to select the feature sets that perform best on a training set. To describe fish trajectories, 10 groups of trajectory descriptors are proposed, which had not previously been used for fish behaviour analysis. The proposed flat classifier improved the performance of unusual fish detection compared to the filtering approach. Its performance is further improved by integrating it into a hierarchical decomposition. This hierarchical decomposition method selects more specific features for different trajectory clusters, which is useful given the variety of trajectories. Significantly improved results were obtained using this hierarchical decomposition in comparison to the flat classifier. The hierarchical framework is also applied to the classification of more general imbalanced data sets, a key current topic in machine learning. The experiments showed that the proposed hierarchical decomposition method is significantly better than state-of-the-art classification methods, other outlier detection methods and unusual trajectory detection methods. Furthermore, it successfully classifies imbalanced data sets even when the majority and minority classes contain internal varieties and the classes overlap, which is frequently seen in real-world applications. Finally, we explored the benefits of active learning in the context of the hierarchical decomposition method, where active learning query strategies choose the most informative training data. A substantial performance gain is possible using less labelled training data compared to learning from larger labelled data sets. Additionally, active learning with feature selection is investigated. 
The results show that feature selection has a positive effect on the performance of active learning. However, we show that random selection can be as effective as popular active learning query strategies in combination with active learning and feature selection, especially for imbalanced data set classification.
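The cluster-cardinality idea behind the flat classifier can be sketched as follows; the clustering itself, the real trajectory descriptors and the thresholds are all simplified placeholders:

```python
def flag_unusual(clusters, min_size, max_dist):
    """Flag feature vectors as unusual if they fall in a small cluster
    (low cardinality) or lie far from their cluster's centroid - a toy
    version of outlier detection via cluster cardinalities and distance."""
    unusual = []
    for cluster in clusters:
        centroid = [sum(coord) / len(cluster) for coord in zip(*cluster)]
        for point in cluster:
            dist = sum((x - c) ** 2 for x, c in zip(point, centroid)) ** 0.5
            if len(cluster) < min_size or dist > max_dist:
                unusual.append(point)
    return unusual

normal = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]  # a dense cluster
rare = [(5.0, 5.0)]                                        # a singleton cluster
print(flag_unusual([normal, rare], min_size=2, max_dist=1.0))  # [(5.0, 5.0)]
```

Trajectories in well-populated, tight clusters pass as normal; members of tiny clusters or points far from their centroid are flagged.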
13

Classification de bases de données déséquilibrées par des règles de décomposition / Handling imbalanced datasets by reconstruction rules in decomposition schemes

D'Ambrosio, Roberto 07 March 2014 (has links)
Disproportion among class priors is encountered in a large number of domains, making conventional learning algorithms less effective in predicting samples belonging to the minority classes. We aim at developing a reconstruction rule suited to multiclass skewed data. 
In performing this task we use the classification reliability, which conveys useful information on the goodness of classification acts. In the framework of the One-per-Class decomposition scheme we design a novel reconstruction rule, Reconstruction Rule by Selection, which uses classifier reliabilities, crisp labels and a-priori distributions to compute the final decision. Tests show that system performance improves using this rule rather than well-established reconstruction rules. We also investigate reconstruction rules in the Error Correcting Output Code (ECOC) decomposition framework. Inspired by a statistical reconstruction rule designed for the One-per-Class and Pair-Wise Coupling decomposition approaches, we have developed a rule that applies softmax regression to reliability outputs in order to estimate the final classification. Results show that this choice improves performance with respect to the existing statistical rule and to well-established reconstruction rules. On the topic of reliability estimation, we note that little attention has been given to efficient posterior estimation in the boosting framework. For this reason we develop an efficient posterior estimator by boosting Nearest Neighbours. Using the Universal Nearest Neighbours classifier, we prove that a sub-class of surrogate losses exists whose minimization brings simple and statistically efficient estimators of Bayes posteriors.
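A minimal sketch of applying softmax regression to reliability outputs to obtain the final classification: in practice the softmax layer's weights would be trained on held-out reliabilities, so the weights and inputs below are purely illustrative:

```python
import math

def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def reconstruct(reliabilities, class_weights):
    """Map per-dichotomizer reliability outputs to final class
    probabilities through a (here untrained) softmax regression layer."""
    scores = [sum(w * r for w, r in zip(ws, reliabilities))
              for ws in class_weights]
    return softmax(scores)

# three binary decomposition classifiers, two classes; weights illustrative
probs = reconstruct([0.9, 0.2, 0.7], [[2.0, -1.0, 1.0], [-2.0, 1.0, -1.0]])
print(max(range(2), key=lambda k: probs[k]))  # 0: class with highest probability
```

The reconstruction rule thus turns a vector of per-classifier reliabilities into a proper probability distribution over classes, from which the final label is the argmax.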
14

Predicting the Unobserved: A statistical analysis of missing data techniques for binary classification

Säfström, Stella January 2019 (has links)
The aim of the thesis is to investigate how the classification performance of random forest and logistic regression differ on an imbalanced data set with MCAR (missing completely at random) data. Performance is measured in terms of accuracy and sensitivity. Two analyses are performed: one with a simulated data set and one application using data from the Swedish population registries. The simulated data set is constructed with the same class imbalance of 1:5. The missing values are handled using three different techniques: complete case analysis, predictive mean matching and mean imputation. The thesis concludes that logistic regression and random forest are on average equally accurate, with some instances of random forest outperforming logistic regression. Logistic regression consistently outperforms random forest with regard to sensitivity. This implies that logistic regression may be the best option for studies whose goal is to accurately predict outcomes in the minority class. None of the missing data techniques stood out in terms of performance.
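Mean imputation under MCAR, one of the three techniques compared, can be sketched as follows; the MCAR mask deletes each entry independently with a fixed probability, and the data and probability are illustrative:

```python
import random

def mcar_mask(data, p_missing, seed=0):
    """Delete each entry independently with probability p_missing (MCAR:
    missingness is unrelated to both observed and unobserved values)."""
    rng = random.Random(seed)
    return [[None if rng.random() < p_missing else x for x in row]
            for row in data]

def mean_impute(data):
    """Replace each missing entry with the observed mean of its column."""
    cols = list(zip(*data))
    means = [sum(v for v in col if v is not None) /
             max(1, sum(v is not None for v in col)) for col in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in data]

rows = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]  # illustrative data
masked = mcar_mask(rows, p_missing=0.3)
completed = mean_impute(masked)
print(all(v is not None for row in completed for v in row))  # True
```

Mean imputation preserves the column means but shrinks the variance of the imputed columns, which is one reason methods such as predictive mean matching are compared against it.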
15

Evolutionary ensembles for imbalanced learning / Comitês evolucionários para aprendizado desbalanceado

Fernandes, Everlandio Rebouças Queiroz 13 August 2018 (has links)
In many real classification problems, the data set used for model induction is significantly imbalanced. This occurs when the number of examples of some classes is much lower than that of the other classes. Imbalanced datasets can compromise the performance of most classical classification algorithms. The classification models induced by such datasets usually present a strong bias towards the majority classes, tending to classify new instances as belonging to these classes. A commonly adopted strategy for dealing with this problem is to train the classifier on a balanced sample from the original dataset. However, this procedure can discard examples that could be important for better class discrimination, reducing classifier efficiency. On the other hand, in recent years several studies have shown that, in different scenarios, the strategy of combining several classifiers into structures known as ensembles is quite effective. This strategy leads to stable predictive accuracy and, in particular, to greater generalization ability than that of the classifiers that make up the ensemble. This generalization power of classifier ensembles has been the focus of research in the imbalanced learning field as a way to reduce the bias towards the majority classes, despite the complexity involved in generating efficient ensembles. Optimization meta-heuristics, such as evolutionary algorithms, have many potential applications in ensemble learning, although they are little used for this purpose. For example, evolutionary algorithms maintain a set of candidate solutions and diversify them, which helps escape local optima. In this context, this thesis investigates and develops approaches to deal with imbalanced datasets, using ensembles of classifiers induced from samples of the original dataset. 
More specifically, this thesis proposes three solutions based on evolutionary ensemble learning and a fourth that uses a pruning mechanism based on dominance ranking, a common concept in multiobjective evolutionary algorithms. Experiments showed the potential of the developed solutions.
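A toy version of evolving a balanced majority-class sample for one ensemble member might look like this; the fitness function is a stand-in spread heuristic, since the actual fitness in an evolutionary ensemble would be based on classifier validation performance:

```python
import random

def evolve_samples(majority, minority, pop_size=10, gens=20, seed=0):
    """Evolve an index subset of the majority class whose size matches
    the minority class. Fitness here is a simple spread heuristic
    standing in for validation-based fitness."""
    rng = random.Random(seed)
    k = len(minority)

    def fitness(subset):
        pts = [majority[i] for i in subset]
        return sum(abs(a - b) for a in pts for b in pts)  # prefer spread-out subsets

    population = [rng.sample(range(len(majority)), k) for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]   # keep the fittest half
        children = []
        for parent in survivors:
            child = parent[:]
            new_idx = rng.randrange(len(majority))  # point mutation
            if new_idx not in child:
                child[rng.randrange(k)] = new_idx
            children.append(child)
        population = survivors + children
    best = max(population, key=fitness)
    return [majority[i] for i in best]

majority = [float(i) for i in range(100)]   # 100 majority examples (1D features)
minority = [0.5, 50.5, 99.5]                # 3 minority examples
balanced = evolve_samples(majority, minority)
print(len(balanced))  # 3
```

Because the population keeps many candidate subsets alive and mutation diversifies them, the search can escape locally optimal samples, which is the property of evolutionary algorithms the abstract highlights.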
17

Técnicas para o problema de dados desbalanceados em classificação hierárquica / Techniques for the problem of imbalanced data in hierarchical classification

Barella, Victor Hugo 24 July 2015 (has links)
Recent advances in science and technology have made possible the growth of data in quantity and availability. Along with this explosion of generated information comes the need to analyze data to discover new and useful knowledge. Thus, areas aimed at extracting knowledge and useful information from large datasets, such as Machine Learning (ML) and Data Mining (DM), have become great opportunities for the advancement of research. 
However, some limitations may reduce the accuracy of traditional algorithms in these areas, for example the imbalance of class samples in a dataset. To mitigate this drawback, several solutions have been the target of research in recent years, such as the development of techniques for artificially balancing data, algorithm modification and new approaches for imbalanced data. An area little explored from the data-imbalance perspective is hierarchical classification, in which the classes are organized into hierarchies, commonly in the form of a tree or DAG (Directed Acyclic Graph). The goal of this work is to investigate the limitations of, and approaches to minimizing, the effects of imbalanced data in hierarchical classification problems. The experimental results show the need to take the features of the hierarchical classes into account when deciding whether to apply techniques for imbalanced data in hierarchical classification.
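One way to drive per-node decisions in hierarchical classification is to measure the local imbalance at each internal node of the class hierarchy. The sketch below computes a largest-to-smallest child-subtree ratio; the toy tree, the counts and the idea of thresholding this ratio for resampling are illustrative assumptions, not this work's exact procedure:

```python
def local_imbalance(hierarchy, counts):
    """For each internal node of a class hierarchy, return the ratio
    between its largest and smallest child, summing example counts over
    each child's subtree. High local ratios are where per-node
    imbalance treatment would be considered."""
    def subtree_total(node):
        return counts.get(node, 0) + sum(subtree_total(c)
                                         for c in hierarchy.get(node, []))
    ratios = {}
    for node, children in hierarchy.items():
        totals = [subtree_total(c) for c in children]
        ratios[node] = max(totals) / max(1, min(totals))
    return ratios

# toy tree: root -> {animal, plant}; animal -> {dog, cat}
hierarchy = {"root": ["animal", "plant"], "animal": ["dog", "cat"]}
counts = {"dog": 90, "cat": 10, "plant": 100}
print(local_imbalance(hierarchy, counts))  # {'root': 1.0, 'animal': 9.0}
```

Note that a hierarchy can be globally balanced (the root here sees 100 vs 100) while a deeper node is badly imbalanced (90 vs 10), which is why per-node analysis matters.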
18

Damping power system oscillations using a phase imbalanced hybrid series capacitive compensation scheme

Pan, Sushan 13 January 2011
Interconnection of electric power systems is becoming increasingly widespread as part of the power exchange between countries, as well as between regions within countries, in many parts of the world. There are numerous examples of interconnection of remotely separated regions within one country, such as in the Nordic countries, Argentina and Brazil. In cases of long-distance AC transmission, as in interconnected power systems, care must be taken to safeguard synchronism as well as stable system voltages, particularly in conjunction with system faults. With series compensation, bulk AC power transmission over very long distances (over 1000 km) is a reality today. These long-distance power transfers, however, cause the system's low-frequency oscillations to become more lightly damped. As a result, many power network operators are taking steps to add supplementary damping devices to their systems to improve system security by damping these undesirable oscillations. With the advent of thyristor-controlled series compensation, AC power system interconnections can be brought to their fullest benefit by optimizing their power transmission capability, safeguarding system stability under various operating conditions and optimizing the load sharing between parallel circuits at all times. This thesis reports the results of digital time-domain simulation studies carried out to investigate the effectiveness of a phase imbalanced hybrid single-phase Thyristor Controlled Series Capacitor (TCSC) compensation scheme in damping power system oscillations in multi-machine power systems. This scheme, which is feasible, technically sound and has industrial application potential, is economically attractive compared with the full three-phase TCSC that has been used for power oscillation damping. Time-domain simulations are conducted on a benchmark model using the ElectroMagnetic Transients Program (EMTP-RV). 
The results of the investigations have demonstrated that the hybrid single-phase-TCSC compensation scheme is very effective in damping power system oscillations at different loading profiles.
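The effect being studied, supplementary damping attenuating low-frequency oscillations, can be illustrated on the simplest possible model: a linearized single-machine swing equation M·δ'' + D·δ' + K·δ = 0, integrated with symplectic Euler. This is only a toy illustration of damping, not the hybrid single-phase-TCSC scheme or the EMTP-RV benchmark; all parameters are illustrative:

```python
def swing(delta0, d, steps=5000, dt=0.001, M=0.1, K=1.0):
    """Integrate M*delta'' + d*delta' + K*delta = 0 with symplectic
    Euler and return the peak |delta| over the second half of the run:
    a larger damping coefficient d shrinks the oscillation envelope."""
    delta, omega = delta0, 0.0
    peak = 0.0
    for i in range(steps):
        accel = (-d * omega - K * delta) / M
        omega += accel * dt   # update velocity first (symplectic Euler)
        delta += omega * dt
        if i > steps // 2:    # track the late-time oscillation envelope
            peak = max(peak, abs(delta))
    return peak

undamped = swing(0.2, d=0.0)
damped = swing(0.2, d=0.05)
print(damped < undamped)  # True: added damping shrinks the oscillation
```

The supplementary damping devices discussed above play the role of increasing the effective D term, so the late-time envelope of the rotor-angle oscillation decays instead of persisting.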
19

Cost-Sensitive Boosting for Classification of Imbalanced Data

Sun, Yanmin 11 May 2007 (has links)
The classification of data with imbalanced class distributions has posed a significant challenge to the performance attainable by most well-developed classification systems, which assume relatively balanced class distributions. This problem is especially crucial in many application domains, such as medical diagnosis, fraud detection and network intrusion detection, which are of great importance in machine learning and data mining. This thesis explores meta-techniques applicable to most classifier learning algorithms, with the aim of advancing the classification of imbalanced data. Boosting is a powerful meta-technique that learns an ensemble of weak models with the promise of improving classification accuracy. AdaBoost is widely regarded as the most successful boosting algorithm. This thesis starts by applying AdaBoost to an associative classifier for both learning-time reduction and accuracy improvement. However, the promise of accuracy improvement means little in the context of the class imbalance problem, where overall accuracy is a less meaningful measure. The insight gained from a comprehensive analysis of AdaBoost's boosting strategy leads to the investigation of cost-sensitive boosting algorithms, developed by introducing cost items into the learning framework of AdaBoost. The cost items denote the uneven identification importance among classes, so that the boosting strategies can intentionally bias learning towards classes with higher identification importance and eventually improve identification performance on them. For a given application domain, cost values for the different types of samples are usually unavailable. To set effective cost values, empirical methods are used for two-class applications and heuristic search with a Genetic Algorithm is employed for multi-class applications. 
This thesis also covers the implementation of the proposed cost-sensitive boosting algorithms. It ends with a discussion of the experimental results on the classification of real-world imbalanced data. Compared with existing algorithms, the new algorithms presented in this thesis achieve better results on the measures aligned with the learning objectives.
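The cost-item idea can be sketched as a single boosting round's re-weighting, where misclassified examples are up-weighted in proportion to a per-example cost. This follows the general shape of cost-sensitive AdaBoost-style updates, not any one exact variant from the thesis; the costs and labels are illustrative:

```python
import math

def cost_sensitive_weight_update(weights, correct, costs, alpha):
    """One boosting round's re-weighting with per-example cost items:
    errors are up-weighted by exp(alpha * cost), so high-cost (e.g.
    minority-class) mistakes dominate the next round's training."""
    new = [w * math.exp(alpha * c * (-1 if ok else 1))
           for w, ok, c in zip(weights, correct, costs)]
    total = sum(new)
    return [w / total for w in new]  # renormalize to a distribution

# 4 examples: index 0 is a minority-class sample with cost 3.0
weights = [0.25, 0.25, 0.25, 0.25]
correct = [False, True, True, False]   # the weak learner got 0 and 3 wrong
costs = [3.0, 1.0, 1.0, 1.0]
updated = cost_sensitive_weight_update(weights, correct, costs, alpha=0.5)
print(updated[0] > updated[3])  # True: the costly minority error weighs more
```

With uniform costs this reduces to the ordinary AdaBoost re-weighting; the cost items are exactly what lets the ensemble bias itself towards the classes with higher identification importance.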
