
Dynamic optimization of classification systems for adaptive incremental learning.

Doctoral thesis, defended at the Université du Québec, Canada, 2010.
Previous issue date: 2010

An incremental learning system updates itself in response to incoming data without reexamining all of the old data. Classification systems that can incrementally store, filter, and classify data are economical in both space and time, which makes them highly useful for industrial, military, and commercial purposes, so interest in designing them is growing. The challenge with incremental learning, however, is that classification tasks can no longer be regarded as unvarying: they can change as the data evolve, and these changes in turn dynamically shift the classification system's optimal parameters. If such variations are neglected, the overall performance of these systems will eventually be compromised.

In this thesis, working toward a system capable of incrementally accommodating new
data and dynamically tracking new optimum system parameters for self-adaptation, we first address the optimal selection of classifiers over time. We propose a framework that combines the power of Swarm Intelligence theory with the conventional grid-search method to progressively identify potential solutions for gradually updated training datasets. The key is to treat the adjustment of classifier parameters as a dynamic optimization problem that depends on the available data. In particular, we show that if the goal is to build efficient Support Vector Machine (SVM) classifiers from sources that deliver data gradually and serially, then model selection is best treated as a dynamic process that can evolve and change over time; a number of solutions are therefore required, depending on the knowledge available about the problem and on the uncertainties in the data.
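
To make the idea concrete, here is a minimal sketch of PSO-based SVM model selection re-run as each new batch of data arrives, in the spirit of the framework described above. It is not the thesis's exact algorithm: the fitness function (3-fold cross-validated accuracy), the toy batch stream, and all parameter values (swarm size, inertia w, acceleration constants c1/c2, search range) are illustrative assumptions, and the swarm is seeded randomly here, whereas the framework pairs PSO with a grid search.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(p, X, y):
    # Cross-validated accuracy of an SVM at hyperparameters (log2 C, log2 gamma).
    return cross_val_score(SVC(C=2.0 ** p[0], gamma=2.0 ** p[1]), X, y, cv=3).mean()

def pso_step(pos, vel, pbest, pbest_fit, X, y, w=0.7, c1=1.5, c2=1.5):
    # One PSO iteration: update personal/global bests, then move the swarm.
    fit = np.array([fitness(p, X, y) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)]
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -10.0, 10.0)  # keep log2(C), log2(gamma) in range
    return pos, vel, pbest, pbest_fit

# Toy stream: one dataset split into successive batches stands in for data
# that arrives gradually and serially.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
batches = [(X[i:i + 200], y[i:i + 200]) for i in range(0, 600, 200)]

# Swarm of candidate (log2 C, log2 gamma) points; a coarse grid could seed it.
pos = rng.uniform(-10.0, 10.0, size=(20, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.full(20, -np.inf)

for X_new, y_new in batches:
    # The task may have drifted, so remembered fitness values are stale:
    # re-evaluate the personal bests on the newest data before searching.
    pbest_fit = np.array([fitness(p, X_new, y_new) for p in pbest])
    for _ in range(5):  # a few PSO iterations per batch
        pos, vel, pbest, pbest_fit = pso_step(pos, vel, pbest, pbest_fit, X_new, y_new)
    best = pbest[np.argmax(pbest_fit)]
    print(f"batch best: C=2^{best[0]:.2f}, gamma=2^{best[1]:.2f}, acc={pbest_fit.max():.3f}")

Carrying the swarm's state from one batch to the next is what makes this a dynamic optimization: earlier solutions serve as warm starts rather than being discarded.
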
We also investigate measures for evaluating and selecting classifier ensembles composed of SVM classifiers. The measures employed are based on two different theories, diversity and margin, commonly used to explain the success of ensembles. This study yielded valuable insights and helped us establish confidence-based measures as a tool for selecting classifier ensembles.

The main contribution of this thesis is a dynamic optimization approach that performs incremental learning adaptively by tracking, evolving, and combining optimum hypotheses over time. The approach brings together several techniques: dynamic Particle Swarm Optimization, incremental Support Vector Machine classifiers, change detection, and dynamic ensemble selection based on classifier confidence levels. Experiments on synthetic and real-world databases show that the proposed approach outperforms classification methods commonly used in incremental learning scenarios.
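
The abstract names confidence-based dynamic ensemble selection but does not spell out the measure, so the following sketch substitutes one plausible stand-in, the mean maximum posterior probability on recent validation data, purely for illustration. The pool construction, the mean-score threshold, and the soft-voting combiner are all assumptions, not the thesis's method.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# A pool of SVMs, each trained on a different chunk of the stream seen so far.
X, y = make_classification(n_samples=900, n_features=10, random_state=1)
chunks = [(X[i:i + 300], y[i:i + 300]) for i in range(0, 900, 300)]
pool = [SVC(probability=True, random_state=0).fit(Xc, yc) for Xc, yc in chunks[:-1]]
X_val, y_val = chunks[-1]  # hold out the most recent chunk as validation data

def confidence(clf, X):
    # Illustrative confidence score: mean maximum posterior probability,
    # i.e. how decisively the classifier votes on recent data.
    return clf.predict_proba(X).max(axis=1).mean()

# Select the pool members whose confidence clears a threshold (here, the pool
# average), then combine the selected classifiers by averaging posteriors.
scores = np.array([confidence(clf, X_val) for clf in pool])
selected = [clf for clf, s in zip(pool, scores) if s >= scores.mean()]
proba = np.mean([clf.predict_proba(X_val) for clf in selected], axis=0)
y_pred = proba.argmax(axis=1)
print("selected", len(selected), "of", len(pool),
      "classifiers; accuracy:", (y_pred == y_val).mean())

Because selection is re-run whenever new validation data arrives, the active ensemble can change over time, which is what makes the selection dynamic.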

Identifier: oai:union.ndltd.org:IBICT/oai:dspace.unila.edu.br:123456789/550
Date: 25 May 2016
Creators: Kapp, Marcelo Nepomoceno
Source Sets: IBICT Brazilian ETDs
Language: English
Detected Language: English
Type: info:eu-repo/semantics/publishedVersion, info:eu-repo/semantics/doctoralThesis
Source: reponame:Repositório Institucional da UNILA, instname:Universidade Federal da Integração Latino-Americana, instacron:UNILA
Rights: info:eu-repo/semantics/openAccess
