  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Induction of Classifiers from Multi-labeled Examples: an Information-retrieval Point of View

Sarinnapakorn, Kanoksri 21 December 2007 (has links)
An important task in information retrieval is to induce classifiers capable of categorizing text documents. The fact that the same document can simultaneously belong to two or more categories is referred to as multi-label classification (or categorization). Domains of this kind have been encountered in diverse fields, even outside information retrieval. This dissertation addresses one challenging aspect of text categorization: the documents (i.e., training examples) are characterized by an extremely large number of features. As a result, many existing machine learning techniques become prohibitively expensive in such domains. This dissertation seeks to reduce these costs significantly. The proposed scheme consists of two steps. The first runs a so-called baseline induction algorithm (BIA) separately on different versions of the data, each time inducing a different subclassifier; more specifically, BIA is always run on the same training documents, which are each time described by a different subset of the features. The second step combines the subclassifiers by a fusion algorithm: when a document is to be classified, each subclassifier outputs a set of class labels accompanied by its confidence in these labels, and these outputs are then combined into a single multi-label recommendation. The dissertation investigates several alternative fusion techniques, including an original one inspired by the Dempster-Shafer theory; the main contribution is a mechanism for assigning the mass function to individual labels from subclassifiers. The system's behavior is illustrated on two real-world data sets. In each of them the examples are described by thousands of features, and each example is labeled with a subset of classes. Experimental evidence indicates that the method scales up well and achieves impressive computational savings in exchange for only a modest loss in classification performance. 
The proposed fusion method is also shown to be more accurate than more traditional fusion mechanisms. For a very large multi-label data set, the proposed mechanism not only speeds up the total induction time but also makes it possible to run the task on a small computer. The fact that the subclassifiers can be constructed independently, and more conveniently, from small subsets of the features opens an avenue for parallel processing that might offer a further increase in computational efficiency.
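The fusion step described above can be sketched as follows. This is a minimal illustration, assuming each subclassifier reports a per-label confidence in [0, 1] and treating each label independently on a binary frame {label, not-label}; the simple mass assignment (confidence to the label, the remainder to ignorance) and the function names are hypothetical simplifications, not the dissertation's actual mass-function mechanism:

```python
# Per-label Dempster-Shafer fusion of subclassifier confidences (sketch).
# A subclassifier reporting confidence c contributes mass c to {label}
# and 1 - c to the whole frame (ignorance).

def combine_two(m1, m2):
    """Dempster's rule on the binary frame; masses are (m_label, m_theta).
    The only focal elements are {label} and the frame itself, so there is
    no conflict and no normalization is needed."""
    m_l1, m_t1 = m1
    m_l2, m_t2 = m2
    m_label = m_l1 * m_l2 + m_l1 * m_t2 + m_t1 * m_l2
    m_theta = m_t1 * m_t2
    return (m_label, m_theta)

def fuse(subclassifier_outputs, threshold=0.5):
    """subclassifier_outputs: list of dicts {label: confidence in [0, 1]}.
    Returns the fused multi-label recommendation: labels whose combined
    belief meets the threshold."""
    labels = set()
    for out in subclassifier_outputs:
        labels.update(out)
    predicted = []
    for label in sorted(labels):
        m = (0.0, 1.0)  # start from total ignorance
        for out in subclassifier_outputs:
            c = out.get(label, 0.0)
            m = combine_two(m, (c, 1.0 - c))
        if m[0] >= threshold:
            predicted.append(label)
    return predicted
```

With these masses the combined belief in a label reduces to 1 minus the product of the subclassifiers' doubts, so agreement between weakly confident subclassifiers accumulates into strong fused belief.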
2

Evolutionary ensembles for imbalanced learning / Comitês evolucionários para aprendizado desbalanceado

Fernandes, Everlandio Rebouças Queiroz 13 August 2018 (has links)
In many real classification problems, the dataset used for model induction is significantly imbalanced: the number of examples of some classes is much lower than that of the other classes. Imbalanced datasets can compromise the performance of most classical classification algorithms. The classification models induced from such datasets usually present a strong bias towards the majority classes, tending to classify new instances as belonging to those classes. A commonly adopted strategy for dealing with this problem is to train the classifier on a balanced sample of the original dataset. However, this procedure can discard examples that could be important for better class discrimination, reducing classifier efficiency. On the other hand, several recent studies have shown that, in different scenarios, the strategy of combining several classifiers into structures known as ensembles is quite effective. This strategy leads to stable predictive accuracy and, in particular, to greater generalization ability than that of the individual classifiers that make up the ensemble. This generalization power of classifier ensembles has been a focus of research in the imbalanced learning field as a way to reduce the bias toward the majority classes, despite the complexity involved in generating efficient ensembles. Optimization metaheuristics such as evolutionary algorithms have many potential applications in ensemble learning, although they are still little used for this purpose. For example, evolutionary algorithms maintain a set of candidate solutions and diversify them, which helps to escape from local optima. In this context, this thesis investigates and develops approaches for dealing with imbalanced datasets, using ensembles of classifiers induced from samples taken from the original dataset. 
More specifically, this thesis proposes three solutions based on evolutionary ensemble learning, plus a fourth that uses a pruning mechanism based on dominance ranking, a concept common in multiobjective evolutionary algorithms. Experiments showed the potential of the developed solutions.
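As a concrete reference point for the sampling strategy described above, the sketch below draws one balanced sample by undersampling every class to the size of the smallest, and combines member predictions by majority vote. This is a deliberately simplified stand-in: the evolutionary search the thesis uses to build and prune the ensemble is not reproduced, and the function names are hypothetical:

```python
import random
from collections import Counter, defaultdict

def balanced_sample(X, y, rng):
    """Undersample every class down to the size of the smallest class,
    producing one balanced training sample. Repeated draws with different
    random states yield the diverse samples from which ensemble members
    can be induced."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    n = min(len(items) for items in by_class.values())
    Xs, ys = [], []
    for label, items in by_class.items():
        for xi in rng.sample(items, n):
            Xs.append(xi)
            ys.append(label)
    return Xs, ys

def majority_vote(member_predictions):
    """Combine the ensemble members' predictions for a single instance."""
    return Counter(member_predictions).most_common(1)[0][0]
```

An evolutionary layer would then search over which samples (and hence members) to keep, scoring candidate ensembles on a validation measure that is robust to imbalance.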
4

Seleção de canais para BCIs baseadas no P300 / Channel selection for P300-based BCIs

Ulisses, Pedro Henrique da Costa 19 February 2019 (has links)
A Brain-Computer Interface (BCI) enables communication between the brain and external devices; its main target audience is people with motor impairments who are unable to communicate and/or move around. One of the main applications is the P300-based speller, which gives individuals a means to communicate through a virtual keyboard. Restoring a person's ability to communicate is extremely important for quality of life. This type of application poses several challenges, one of which is the need for the BCI to be trained specifically for each individual. This training can take hours or even days. One way to reduce this time is to use one of the predefined channel sets suggested in the literature, but these sets do not guarantee adequate BCI performance, which can frustrate individuals to the point that they no longer want to use a BCI. 
To address this problem, the present work proposes selecting channels from a larger channel set in order to speed up the training process and achieve optimal performance with the BCI.
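One simple way to realize such channel selection is greedy forward search, sketched below. This is a generic illustration under stated assumptions, not necessarily the selection method of this work: `score_fn` stands in for any user-supplied evaluation of a channel subset (e.g. cross-validated P300 classification accuracy on data restricted to those channels), and the function name is hypothetical:

```python
def forward_channel_selection(channels, score_fn, k):
    """Greedy forward selection: start from the empty set and repeatedly
    add the channel that most improves score_fn(subset), until k channels
    have been chosen. Returns the selected channels in the order picked."""
    selected = []
    remaining = list(channels)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda ch: score_fn(selected + [ch]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

The greedy loop evaluates only O(k * n) subsets instead of all 2^n, which is what makes per-user calibration fast enough to be practical.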
5

Αποτίμηση μεθόδων εκπαίδευσης τεχνητών νευρωνικών δικτύων και εφαρμογές / Evaluation of training methods for artificial neural networks, and applications

Λιβιέρης, Ιωάννης 31 August 2009 (has links)
Artificial neural networks are a form of artificial intelligence consisting of a set of simple, interconnected, adaptive units that together constitute a parallel, complex computational model. To date they have been successfully applied to classification and prediction problems across a wide range of areas, such as biology, medicine, geology, and physics. In this thesis we deal with training artificial neural networks on a per-input-pattern basis. This approach is considered especially suitable when training takes considerable time and requires large amounts of storage, as often happens with large pattern sets and/or large networks. Many training algorithms have been proposed to date, each covering the gaps of the others and designed to solve problems that were previously hard to solve. The goal of this thesis is an extensive analysis and evaluation of training algorithms, as well as of the generalization ability of the trained networks, on a variety of problems from medicine and bioinformatics. Motivated by the possibility of achieving better performance, we also study the contribution of neural networks to machine learning; specifically, we assess their contribution to building reliable decision systems using classifier-combination techniques. Finally, we study the possibilities of combining them with various other categories of machine learning classifiers in order to develop stronger hybrid information-extraction systems. / Literature review corroborates that artificial neural networks are being successfully applied in a variety of regression and classification problems. 
Due to their ability to exploit the tolerance for imprecision and uncertainty in real-world problems, and their robustness and parallelism, artificial neural networks have been increasingly used in many applications. It is well known that the procedure of training a neural network is highly consistent with unconstrained optimization theory, and many attempts have been made to speed up this process. In particular, various algorithms motivated by numerical optimization theory have been applied to accelerate neural network training. Moreover, commonly known heuristic approaches such as momentum or a variable learning rate lead to significant improvements. In this work we compare the performance of classical gradient descent methods and examine the effect of incorporating into them a variable learning rate and an adaptive nonmonotone strategy. We perform a large-scale study of the behavior of the presented algorithms and identify their possible advantages. Additionally, we propose modifications of two well-known second-order algorithms, aiming to overcome the limitations of the original methods.
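A variable learning rate of the kind mentioned can be illustrated with the classic "bold driver" heuristic: grow the step while the loss keeps falling, shrink it and retry when the loss rises. This is a minimal sketch on a generic loss/gradient pair, not one of the thesis's evaluated algorithms, and the parameter values are arbitrary:

```python
def bold_driver_gd(grad, loss, w, lr=0.1, inc=1.05, dec=0.5, steps=100):
    """'Bold driver' variable-learning-rate gradient descent: multiply
    the rate by `inc` after every step that lowers the loss; on a step
    that raises it, reject the step and multiply the rate by `dec`."""
    prev = loss(w)
    for _ in range(steps):
        w_new = w - lr * grad(w)
        cur = loss(w_new)
        if cur <= prev:
            w, prev = w_new, cur
            lr *= inc           # accept and grow the step size
        else:
            lr *= dec           # reject the step, retry with a smaller rate
    return w
```

On a one-dimensional quadratic such as f(w) = (w - 3)^2, the rate keeps growing until a step overshoots, after which the halving recovers a stable step and the iterate settles at the minimizer.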
