11

Temporal Data Mining in a Dynamic Feature Space

Wenerstrom, Brent K. 22 May 2006 (has links) (PDF)
Many interesting real-world applications for temporal data mining are hindered by concept drift. One particular form of concept drift is characterized by changes to the underlying feature space. Little appears to have been done to address this issue. This thesis presents FAE, an incremental ensemble approach to mining data subject to concept drift. FAE achieves higher accuracy on four large datasets than a comparable incremental learning algorithm.
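The abstract names FAE but does not spell out its mechanics, so the sketch below is only a generic illustration of the setting it describes: an incremental ensemble evaluated prequentially on a stream whose feature space grows over time. Examples are dictionaries of features, so members trained on later blocks naturally see attributes that did not exist earlier. The class names, block sizes, and weighting rules are assumptions, not details from the thesis.

```python
# Minimal sketch (not FAE itself): an incremental ensemble whose members are
# trained on successive data blocks, so later members naturally see features
# that did not exist when earlier members were built.  All names and constants
# are illustrative assumptions.
from collections import defaultdict

class CountNB:
    """Tiny multinomial-style learner over dict-of-feature examples."""
    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feat_counts = defaultdict(lambda: defaultdict(int))

    def learn(self, x, y):                      # x: {feature: value}, y: label
        self.class_counts[y] += 1
        for f, v in x.items():
            self.feat_counts[y][f] += v

    def predict(self, x):
        best, best_score = None, float("-inf")
        total = sum(self.class_counts.values())
        for y, cy in self.class_counts.items():
            score = cy / total
            for f, v in x.items():              # unseen features get a smoothed count
                score *= ((self.feat_counts[y][f] + 1) / (cy + 2)) ** v
            if score > best_score:
                best, best_score = y, score
        return best

class IncrementalEnsemble:
    def __init__(self, block_size=500, max_members=10):
        self.block_size, self.max_members = block_size, max_members
        self.members, self.weights, self.block = [], [], []

    def process(self, x, y):
        # Prequential step: predict by weighted vote, then learn from (x, y).
        votes = defaultdict(float)
        for m, w in zip(self.members, self.weights):
            votes[m.predict(x)] += w
        pred = max(votes, key=votes.get) if votes else None
        for i, m in enumerate(self.members):    # reward/penalize members
            self.weights[i] *= 1.05 if m.predict(x) == y else 0.95
        self.block.append((x, y))
        if len(self.block) >= self.block_size:  # train a new member on the block
            nb = CountNB()
            for bx, by in self.block:
                nb.learn(bx, by)
            self.members.append(nb)
            self.weights.append(1.0)
            if len(self.members) > self.max_members:   # drop the weakest member
                drop = self.weights.index(min(self.weights))
                del self.members[drop]
                del self.weights[drop]
            self.block = []
        return pred
```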
12

Incremental Learning Of Discrete Hidden Markov Models

Florez-Larrahondo, German 06 August 2005 (has links)
We address the problem of learning discrete hidden Markov models from very long sequences of observations. Incremental versions of the Baum-Welch algorithm that approximate the beta-values used in the backward procedure are commonly used for this problem, since their memory complexity is independent of the sequence length. However, traditional approaches have two main disadvantages: the approximation of the beta-values can deviate far from the true values, and the learning algorithm requires prior knowledge of the topology of the model. This dissertation describes a new incremental Baum-Welch algorithm with a novel backward procedure that improves the approximation of the beta-values based on a one-step lookahead in the training sequence, and investigates heuristics to prune unnecessary states from an initial complex model. Two new approaches for pruning, greedy and controlled, are introduced, and a novel method for identification of ill-conditioned models is presented. Incremental learning of multiple independent observations is also investigated. We justify the new approaches analytically and report empirical results that show they converge faster than the traditional Baum-Welch algorithm using fewer computer resources. Furthermore, we demonstrate that the new learning algorithms converge faster than the previous incremental approaches and can be used to perform online learning of high-quality models useful for classification tasks. Finally, this dissertation explores the use of the new algorithms for anomaly detection in computer systems, improving our previous work on detectors based on hidden Markov models integrated into real-world monitoring systems for high-performance computers.
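The dissertation's exact backward procedure, pruning heuristics, and ill-conditioning test are not given in the abstract, so the following is a heavily hedged sketch of the general idea only: a single forward pass in which the backward value beta_t is approximated from a one-step lookahead at the next observation, sufficient statistics are accumulated online, and the model is periodically re-estimated. Parameter names and the re-estimation schedule are assumptions.

```python
# Schematic sketch (an assumption-laden illustration, not the dissertation's
# exact algorithm): incremental Baum-Welch where the backward value beta_t is
# approximated from a one-step lookahead o_{t+1} instead of the full suffix.
import numpy as np

def incremental_baum_welch(obs, n_states, n_symbols, reestimate_every=100, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.dirichlet(np.ones(n_states), size=n_states)      # transition matrix
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)     # emission matrix
    pi = np.full(n_states, 1.0 / n_states)

    trans_acc = np.ones((n_states, n_states)) * 1e-3          # expected counts
    emit_acc = np.ones((n_states, n_symbols)) * 1e-3
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()

    for t in range(len(obs) - 1):                              # last symbol skipped for brevity
        o_t, o_next = obs[t], obs[t + 1]
        # One-step-lookahead approximation of the backward variable:
        # beta_t(i) ~ sum_j A[i, j] * B[j, o_{t+1}]
        beta_approx = A @ B[:, o_next]
        gamma = alpha * beta_approx
        gamma /= gamma.sum()
        emit_acc[:, o_t] += gamma
        # Expected transition counts with beta_{t+1} taken as 1.
        xi = (alpha[:, None] * A) * B[None, :, o_next]
        xi /= xi.sum()
        trans_acc += xi
        # Advance the forward variable to t+1.
        alpha = (alpha @ A) * B[:, o_next]
        alpha /= alpha.sum()
        if (t + 1) % reestimate_every == 0:                    # periodic re-estimation
            A = trans_acc / trans_acc.sum(axis=1, keepdims=True)
            B = emit_acc / emit_acc.sum(axis=1, keepdims=True)
    return A, B, pi
```

The one-step lookahead is what keeps memory constant in the sequence length, which is the property the abstract emphasizes; the dissertation's state pruning and ill-conditioned-model detection are not reproduced here.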
13

Referencing Unlabelled World Data to Prevent Catastrophic Forgetting in Class-incremental Learning

Li, Xuan 24 June 2022 (has links)
This thesis presents a novel strategy to address the challenge of "catastrophic forgetting" in deep continual-learning systems. The term refers to severe performance degradation for older tasks as a system learns new tasks that are presented sequentially. Most previous techniques have emphasized preservation of existing knowledge while learning new tasks, in some cases advocating a memory buffer that grows in proportion to the number of tasks. However, we offer another perspective, which is that mitigating local-task fitness during learning is as important as attempting to preserve existing knowledge. We posit the existence of a consistent, unlabelled world environment that the system uses as an easily accessible reference to avoid favoring spurious properties over more generalizable ones. Based on this assumption, we have developed a novel method called Learning with Reference (LwR), which delivers substantial performance gains relative to its state-of-the-art counterparts. The approach does not involve a growing memory buffer, and therefore promotes better performance at scale. We present extensive empirical evaluation on real-world datasets. / Master of Science / Rome was not built in a day, and in nature knowledge is acquired and consolidated gradually over time. Evolution has taught biological systems how to address emerging challenges by building on past experience, adapting quickly while retaining known skills. Modern artificial intelligence systems also seek to amortize the learning process over time. Specifically, one large learning task can be divided into many smaller non-overlapping tasks. For example, a classification task of two classes, tiger and horse, is divided into two tasks, where the classifier only sees and learns from tiger data in the first task and horse data in the second task. The system is expected to sequentially acquire knowledge from these smaller tasks. Such a learning strategy is known as continual learning and provides three meaningful benefits: higher resource efficiency, a progressively better knowledge base, and strong adaptability. In this thesis, we investigate the class-incremental learning problem, a subset of continual learning, which refers to learning a classification model from a sequence of tasks. Different from transfer learning, which targets better performance in new domains, continual learning emphasizes the knowledge preservation of both old and new tasks. In deep neural networks, one challenge to this preservation is "catastrophic forgetting", which refers to severe performance degradation for older tasks as a system learns new ones that are presented sequentially. An intuitive explanation is that old-task data is missing in the new tasks under the continual-learning setting, and the model is optimized toward new tasks without regard for the old ones. To overcome this, most previous techniques have emphasized the preservation of existing knowledge while learning new tasks, in some cases advocating old-data replay with a memory buffer, which grows in proportion to the number of tasks. In this thesis, we offer another perspective, which is that mitigating local-task fitness during learning is as important as attempting to preserve existing knowledge. We notice that local task data always has strong biases because of its smaller size. Optimization on it leads the model to local optima, thereby losing the holistic view that is crucial for other tasks.
To mitigate this, a reliable reference should be enforced across tasks, and the model should consistently learn all new knowledge based on it. With this assumption, we have developed a novel method called Learning with Reference (LwR), which posits the existence of a consistent, unlabelled world environment that the system uses as an easily accessible reference to avoid favoring spurious properties over more generalizable ones. Our extensive empirical experiments show that it significantly outperforms state-of-the-art counterparts on real-world datasets.
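The abstract describes the reference idea but not the concrete objective, so the sketch below illustrates one plausible reading under stated assumptions: while fitting the new task, the model is also pulled toward a frozen snapshot of itself on unlabelled "world" reference data via a distillation-style consistency term. The loader names, temperature, and weighting are hypothetical; this is not claimed to be the LwR loss.

```python
# Hedged sketch of the *idea* in the abstract -- not the thesis's actual LwR
# objective.  Alongside the new-task loss, the model is regularized to stay
# consistent with a frozen copy of itself on unlabelled reference data.
import copy
import torch
import torch.nn.functional as F

def train_task(model, task_loader, reference_loader, optimizer,
               epochs=1, consistency_weight=1.0, temperature=2.0):
    # Snapshot the model before learning the new task; the snapshot is frozen.
    frozen = copy.deepcopy(model).eval()
    for p in frozen.parameters():
        p.requires_grad_(False)

    for _ in range(epochs):
        # reference_loader is assumed to yield unlabelled tensors x_ref.
        for (x, y), x_ref in zip(task_loader, reference_loader):
            optimizer.zero_grad()
            task_loss = F.cross_entropy(model(x), y)          # supervised new-task loss
            with torch.no_grad():
                ref_target = F.softmax(frozen(x_ref) / temperature, dim=1)
            ref_log_prob = F.log_softmax(model(x_ref) / temperature, dim=1)
            # Distillation-style consistency on the unlabelled reference data.
            consistency = F.kl_div(ref_log_prob, ref_target, reduction="batchmean")
            (task_loss + consistency_weight * consistency).backward()
            optimizer.step()
    return model
```

Because the reference set is fixed unlabelled data rather than stored exemplars, no memory buffer needs to grow with the number of tasks, which matches the scaling argument made in the abstract.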
14

Interoperability Infrastructure and Incremental learning for unreliable heterogeneous communicating Systems

Haseeb, Abdul January 2009 (has links)
In a broader sense, the main research objective of this thesis (and ongoing research work) is distributed knowledge management for mobile dynamic systems. The presented work, however, focuses on communication/interoperability of heterogeneous entities in an infrastructure-less paradigm, a distributed resource-manipulation infrastructure, and distributed learning in the absence of global knowledge. The research objectives achieved explore the design aspects of heterogeneous distributed knowledge systems towards establishing seamless integration. This thesis does not cover all aspects of this work; rather, it focuses on interoperability and distributed learning.

Firstly, a discussion of the issues in knowledge management for a swarm of heterogeneous entities is presented. This is done in a broad and rather abstract fashion to provide the motivation for interoperability and distributed learning towards knowledge management, and to help the reader understand the ongoing work and research activities in a much broader perspective.

The primary focus of this thesis is communication/interoperability of heterogeneous entities in an infrastructure-less paradigm, a distributed resource-manipulation infrastructure, and distributed learning in the absence of global knowledge. In dynamic environments for mobile autonomous systems, such as robot swarms or mobile software agents, there is a need for autonomic publishing and discovery of resources and just-in-time integration for on-the-fly service consumption without any a priori knowledge. SOA (Service-Oriented Architecture) serves the purpose of resource reuse and the sharing of services among different entities. Web services (a SOA manifestation) achieve these objectives, but their exploitation in dynamic environments, where communication infrastructure is lacking, requires considerable research. Web services are generally exploited in stable client-server paradigms, which is a restrictive assumption when dynamic distributed systems are considered. UDDI (Universal Description Discovery and Integration) is the main impediment to the exploitation of Web services in distributed-control and dynamic-natured systems. UDDI can be considered a directory for the publication and discovery of categorized Web services, but it assumes a centralized registry; even if distributed registries and associated mechanisms are employed, the problems of collaborative communication in infrastructure-less paradigms are ignored.

Towards interoperability, the main contribution of this thesis is a mediator-based distributed Web-services discovery and invocation middleware, which provides collaborative and decentralized service discovery and management for infrastructure-less mobile dynamic systems with heterogeneous communication capabilities. Heterogeneity of communication capabilities is abstracted in the middleware by a conceptual classification of computing entities on the basis of their communication capabilities, and communication issues are resolved via conceptual overlay formation for query propagation in the system.

The proposed middleware has not only been evaluated extensively using the Player/Stage simulator but also applied to physical robot swarms. Experimental validation analyzes the results in two communication modes, (i) active and (ii) passive, with and without shared-resource conflict resolution.
I analyze discoverable Web services with respect to time, the services available in the complete view of the cluster, and the impact of caching and semantics on distributed Web-services discovery and the resulting improvements.

The second part of this thesis focuses on distributed learning in the absence of global information. The thesis takes defeasibility (common-sense inference), in which conclusions are drawn and retracted as more information becomes available, as the basis of intelligence in human beings. The ability of common-sense reasoning to adapt to dynamic environments and to reason with uncertainty in the absence of global information makes it a good fit for distributed learning in dynamic systems.

The thesis therefore overviews epistemic cognition in human beings, which motivates the need for a similar epistemic cognitive solution in fabricated systems, and considers formal concept analysis as a case for incremental and distributed learning of formal concepts. It also presents a representational schema for the underlying logic formalism and formal concepts. An algorithm for incremental learning and its use case in robotic navigation, in which robots incrementally learn formal concepts and perform common-sense reasoning for intelligent navigation, is also presented. Moreover, an elaboration of the logic formalism employed and details of the implementation of the developed defeasible-reasoning engine are given in the latter half of this thesis.

In summary, the research results and achievements described in this thesis focus on interoperability and distributed learning for heterogeneous distributed knowledge systems, contributing towards seamless integration in mobile dynamic systems. / QC 20100614 / ROBOSWARM EU FP6
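The abstract mentions incremental learning of formal concepts without giving the algorithm, so the following is a minimal, assumption-labeled illustration of the underlying idea from formal concept analysis rather than the thesis's method: the set of concept intents is closed under intersection, so adding a new object only requires intersecting its attribute set with the intents already known. The bottom concept and the thesis's defeasible-reasoning machinery are omitted.

```python
# Minimal illustration (not the thesis's algorithm): incrementally maintaining
# the intents of a formal concept lattice.  Adding an object adds its attribute
# set plus its intersections with every intent seen so far; the bottom concept
# (full attribute set) is left out for brevity.
def add_object(intents, objects, name, attributes):
    """intents: set of frozensets; objects: dict name -> frozenset of attributes."""
    attrs = frozenset(attributes)
    objects[name] = attrs
    new_intents = {attrs} | {attrs & b for b in intents}
    intents |= new_intents
    return intents

def extent(objects, intent):
    """Objects whose attribute sets contain the given intent."""
    return {g for g, attrs in objects.items() if intent <= attrs}

# Usage: a tiny navigation-flavoured context (names are hypothetical).
intents, objects = set(), {}
add_object(intents, objects, "corridor", {"narrow", "indoor"})
add_object(intents, objects, "hall", {"wide", "indoor"})
add_object(intents, objects, "path", {"narrow", "outdoor"})
for i in sorted(intents, key=len):
    print(sorted(i), "->", sorted(extent(objects, i)))
```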
15

Incremental Support Vector Machine Approach for DoS and DDoS Attack Detection

Seunghee Lee (6636224) 14 May 2019 (has links)
Support Vector Machines (SVMs) have generally been effective in detecting instances of network intrusion. However, from a practical point of view, a standard SVM is not able to handle large-scale data efficiently due to the computational complexity of the algorithm and its extensive memory requirements. To cope with this limitation, this study presents an incremental SVM method combined with a k-nearest neighbors (KNN) based candidate support vector (CSV) selection strategy to speed up the training and test process. The proposed incremental SVM method constructs or updates the pattern classes by incrementally incorporating new signatures, without having to load and access the entire previous dataset, in order to cope with evolving DoS and DDoS attacks. Performance of the proposed method is evaluated experimentally and compared with the standard SVM method and a simple incremental SVM method in terms of precision, recall, F1-score, and training and test duration.
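The abstract gives the strategy only at a high level, so the sketch below uses scikit-learn and a stand-in boundary heuristic in place of the thesis's CSV rule: previous support vectors are retained, and from each new batch only points whose k nearest neighbours are not all of one class are kept before retraining. Class names, the neighbour rule, and parameters are assumptions.

```python
# Hedged sketch of the strategy described in the abstract, not the thesis's
# exact CSV rule: retain the previous support vectors and, from each new batch,
# keep only "boundary-like" points, then retrain the SVM on that reduced set.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors

class IncrementalSVM:
    def __init__(self, k=5, **svc_kwargs):
        self.k = k
        self.svc_kwargs = svc_kwargs
        self.X_sv = None          # retained support vectors
        self.y_sv = None
        self.model = None

    def partial_fit(self, X_new, y_new):
        # Assumes each batch (or the retained SVs) covers at least two classes.
        if self.X_sv is None:
            X_train, y_train = X_new, y_new
        else:
            # Candidate selection: keep new points whose neighbourhood mixes classes.
            pool_X = np.vstack([self.X_sv, X_new])
            pool_y = np.concatenate([self.y_sv, y_new])
            nn = NearestNeighbors(n_neighbors=min(self.k, len(pool_X))).fit(pool_X)
            _, idx = nn.kneighbors(X_new)
            mixed = np.array([len(set(pool_y[i])) > 1 for i in idx])
            X_train = np.vstack([self.X_sv, X_new[mixed]])
            y_train = np.concatenate([self.y_sv, y_new[mixed]])
        self.model = SVC(**self.svc_kwargs).fit(X_train, y_train)
        self.X_sv = X_train[self.model.support_]       # keep only the support vectors
        self.y_sv = y_train[self.model.support_]
        return self

    def predict(self, X):
        return self.model.predict(X)
```

Retraining on support vectors plus boundary-like candidates keeps each fit small, which is the speed-up the abstract attributes to the CSV selection step.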
16

Bayesian models of category acquisition and meaning development

Frermann, Lea January 2017 (has links)
The ability to organize concepts (e.g., dog, chair) into efficient mental representations, i.e., categories (e.g., animal, furniture), is a fundamental mechanism which allows humans to perceive, organize, and adapt to their world. Much research has been dedicated to the questions of how categories emerge and how they are represented. Experimental evidence suggests that (i) concepts and categories are represented through sets of features (e.g., dogs bark, chairs are made of wood) which are structured into different types (e.g., behavior, material); (ii) categories and their featural representations are learnt jointly and incrementally; and (iii) categories are dynamic and their representations adapt to changing environments. This thesis investigates the mechanisms underlying the incremental and dynamic formation of categories and their featural representations through cognitively motivated Bayesian computational models. Models of category acquisition have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this thesis, we focus on categories acquired from natural language stimuli, using nouns as a stand-in for their reference concepts, and their linguistic contexts as a representation of the concepts’ features. The use of text corpora allows us to (i) develop large-scale unsupervised models thus simulating human learning, and (ii) model child category acquisition, leveraging the linguistic input available to children in the form of transcribed child-directed language. In the first part of this thesis we investigate the incremental process of category acquisition. We present a Bayesian model and an incremental learning algorithm which sequentially integrates newly observed data. We evaluate our model output against gold standard categories (elicited experimentally from human participants), and show that high-quality categories are learnt both from child-directed data and from large, thematically unrestricted text corpora. We find that the model performs well even under constrained memory resources, resembling human cognitive limitations. While lists of representative features for categories emerge from this model, they are neither structured nor jointly optimized with the categories. We address these shortcomings in the second part of the thesis, and present a Bayesian model which jointly learns categories and structured featural representations. We present both batch and incremental learning algorithms, and demonstrate the model’s effectiveness on both encyclopedic and child-directed data. We show that high-quality categories and features emerge in the joint learning process, and that the structured features are intuitively interpretable through human plausibility judgment evaluation. In the third part of the thesis we turn to the dynamic nature of meaning: categories and their featural representations change over time, e.g., children distinguish some types of features (such as size and shade) less clearly than adults, and word meanings adapt to our ever-changing environment and its structure. We present a dynamic Bayesian model of meaning change, which infers time-specific concept representations as a set of feature types and their prevalence, and captures their development as a smooth process. We analyze the development of concept representations in their complexity over time from child-directed data, and show that our model captures established patterns of child concept learning.
We also apply our model to diachronic change of word meaning, modeling how word senses change internally and in prevalence over centuries. The contributions of this thesis are threefold. Firstly, we show that a variety of experimental results on the acquisition and representation of categories can be captured with computational models within the framework of Bayesian modeling. Secondly, we show that natural language text is an appropriate source of information for modeling categorization-related phenomena suggesting that the environmental structure that drives category formation is encoded in this data. Thirdly, we show that the experimental findings hold on a larger scale. Our models are trained and tested on a larger set of concepts and categories than is common in behavioral experiments and the categories and featural representations they can learn from linguistic text are in principle unrestricted.
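The abstract characterizes the models as Bayesian and incremental without specifying them, so the sketch below is only in that spirit: each observation (a noun with its context words as features) is assigned to the category with the highest local posterior under a CRP-style prior and smoothed feature likelihoods, and the chosen category's counts are updated. Priors, smoothing, and names are assumptions, not the thesis's models.

```python
# Small sketch in the spirit of incremental Bayesian categorization (a local-MAP,
# CRP-style assignment) -- an illustrative assumption, not the thesis's models.
import math

class IncrementalCategorizer:
    def __init__(self, alpha=1.0, beta=0.1):
        self.alpha, self.beta = alpha, beta        # CRP concentration, feature smoothing
        self.n = 0
        self.cat_size = {}                         # category id -> observations assigned
        self.cat_feat = {}                         # category id -> {feature: count}
        self.vocab = set()

    def _log_score(self, cat, features):
        size = self.cat_size.get(cat, 0)
        # CRP-style prior: existing categories by size, a new category by alpha.
        prior = math.log((size if size else self.alpha) / (self.n + self.alpha))
        counts = self.cat_feat.get(cat, {})
        total = sum(counts.values())
        ll = sum(math.log((counts.get(f, 0) + self.beta) /
                          (total + self.beta * max(len(self.vocab), 1)))
                 for f in features)
        return prior + ll

    def observe(self, features):
        """Assign one observation (e.g. a noun plus its context words) to a category."""
        self.vocab.update(features)
        candidates = list(self.cat_size) + [len(self.cat_size)]   # existing + one new
        best = max(candidates, key=lambda c: self._log_score(c, features))
        self.n += 1
        self.cat_size[best] = self.cat_size.get(best, 0) + 1
        counts = self.cat_feat.setdefault(best, {})
        for f in features:
            counts[f] = counts.get(f, 0) + 1
        return best
```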
17

A probabilistic and incremental model for online classification of documents : DV-INBC

Rodrigues, Thiago Fredes January 2016 (has links)
Recently the fields of Data Mining and Machine Learning have seen a rapid increase in the creation and availability of data repositories, mainly due to the rapid creation of such data in social networks. A large part of this data consists of text documents, and the information stored in these texts can range from a description of a user profile to common textual topics such as politics, sports, and science: information that is very useful for many applications. Since much of this data is created in streams, scalable and on-line algorithms are desirable, because tasks such as the organization and exploration of large document collections would benefit from them. In this thesis an incremental, on-line and probabilistic model for document classification is presented as an effort to tackle this problem. The algorithm is called DV-INBC and is an extension of the INBC algorithm. The two main characteristics of DV-INBC are: only a single scan over the data is necessary to create a model of it, and the data vocabulary need not be known a priori. Therefore, little knowledge about the data stream is needed. To assess its performance, tests using well-known datasets are presented.
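DV-INBC's internals are not described in the abstract beyond the two properties it highlights, so the sketch below merely illustrates those properties with a plain incremental multinomial naive Bayes: a single pass over the stream and a vocabulary that grows as unseen tokens arrive. It is not the DV-INBC algorithm, and all names are illustrative.

```python
# Minimal sketch of the two properties highlighted in the abstract -- one pass
# over the data and a vocabulary that grows on the fly -- using a plain
# incremental multinomial naive Bayes.  This is an illustration, not DV-INBC.
import math
from collections import defaultdict

class IncrementalTextNB:
    def __init__(self, smoothing=1.0):
        self.smoothing = smoothing
        self.doc_counts = defaultdict(int)                          # class -> documents seen
        self.token_counts = defaultdict(lambda: defaultdict(int))   # class -> token counts
        self.class_tokens = defaultdict(int)                        # class -> total tokens
        self.vocab = set()

    def learn_one(self, tokens, label):
        """Single-pass update; previously unseen tokens simply join the vocabulary."""
        self.doc_counts[label] += 1
        for tok in tokens:
            self.vocab.add(tok)
            self.token_counts[label][tok] += 1
            self.class_tokens[label] += 1

    def predict_one(self, tokens):
        total_docs = sum(self.doc_counts.values())
        best, best_lp = None, float("-inf")
        for label, n_docs in self.doc_counts.items():
            lp = math.log(n_docs / total_docs)
            denom = self.class_tokens[label] + self.smoothing * max(len(self.vocab), 1)
            for tok in tokens:
                count = self.token_counts[label].get(tok, 0)
                lp += math.log((count + self.smoothing) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Usage on a toy stream of (tokens, label) pairs:
clf = IncrementalTextNB()
clf.learn_one(["goal", "match", "league"], "sports")
clf.learn_one(["vote", "senate", "bill"], "politics")
print(clf.predict_one(["match", "goal"]))   # -> "sports"
```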
18

An incremental Gaussian mixture network for data stream classification in non-stationary environments

Diaz, Jorge Cristhian Chamby January 2018 (has links)
Data stream classification poses many challenges for the data mining community when the environment is non-stationary. The greatest challenge in learning classifiers from data streams is adaptation to concept drift, which occurs as a result of changes in the underlying concepts. Two main ways to develop adaptive approaches are ensemble methods and incremental algorithms. Ensemble methods play an important role due to their modularity, which provides a natural way of adapting to change. Incremental algorithms are faster and have better anti-noise capacity than ensembles, but impose more restrictions on concept-drifting data streams. Thus, it is a challenge to combine the flexibility and adaptation of an ensemble classifier in the presence of concept drift with the simplicity of use found in a single classifier with incremental learning. With this motivation, in this dissertation we propose an incremental, online and probabilistic algorithm for classification as an effort to tackle concept drift. The algorithm is called IGMN-NSE and is an adaptation of the IGMN algorithm. The two main contributions of IGMN-NSE in relation to IGMN are: improved predictive power for classification tasks and adaptation to achieve good performance in non-stationary environments. Extensive studies on both synthetic and real-world data demonstrate that the proposed algorithm can track changing environments very closely, regardless of the type of concept drift.
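The abstract does not detail IGMN-NSE, so the following is a condensed sketch of the generic incremental-Gaussian-mixture update such models build on: a sample either reinforces the component that explains it best, with a decaying learning rate, or spawns a new component when every existing one assigns it too little likelihood. The novelty threshold and initial covariance are assumptions; classification (attaching label statistics to components) and the non-stationary adaptation are omitted.

```python
# Condensed, assumption-laden sketch of an IGMN-style update -- not IGMN-NSE.
import numpy as np

class IncrementalGaussianMixture:
    def __init__(self, dim, tau=0.01, sigma_init=1.0):
        self.dim, self.tau, self.sigma_init = dim, tau, sigma_init
        self.means, self.covs, self.counts = [], [], []

    def _pdf(self, x, mean, cov):
        d = x - mean
        inv = np.linalg.inv(cov)
        norm = 1.0 / np.sqrt(((2 * np.pi) ** self.dim) * np.linalg.det(cov))
        return norm * np.exp(-0.5 * d @ inv @ d)

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        likes = [self._pdf(x, m, c) for m, c in zip(self.means, self.covs)]
        if not likes or max(likes) < self.tau:
            # Novelty: create a new component centred on the sample.
            self.means.append(x.copy())
            self.covs.append(np.eye(self.dim) * self.sigma_init)
            self.counts.append(1.0)
            return
        j = int(np.argmax(likes))
        self.counts[j] += 1.0
        lr = 1.0 / self.counts[j]                    # decaying learning rate
        d = x - self.means[j]
        self.means[j] = self.means[j] + lr * d       # incremental mean update
        d_new = x - self.means[j]                    # recentred difference
        self.covs[j] = (1 - lr) * self.covs[j] + lr * np.outer(d_new, d_new)
```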
19

Incremental learning for querying multimodal symbolic data.

Lazarescu, Mihai M. January 2000 (has links)
In this thesis we present an incremental learning algorithm for learning and classifying the pattern of movement of multiple objects in a dynamic scene. The method that we describe is based on symbolic representations of the patterns. The typical representation has a spatial component that describes the relationships of the objects and a temporal component that describes the ordering of the actions of the objects in the scene. The incremental learning algorithm (ILF) uses evidence-based forgetting, generates compact concept structures and can track concept drift. We also present two novel algorithms that combine incremental learning and image analysis. The first algorithm is used in an American Football application and shows how natural language parsing can be combined with image processing and expert background knowledge to address the difficult problem of classifying and learning American Football plays. We present in detail the model developed to represent American Football plays, the parser used to process the transcript of the American Football commentary, and the algorithms developed to label the players and classify the queries. The second algorithm is used in a cricket application. It combines incremental machine learning and camera motion estimation to classify and learn common cricket shots. We describe the method used to extract and convert the camera motion parameter values to symbolic form and the processing involved in learning the shots. Finally, we explore the issues that arise from combining incremental learning with incremental recognition. Two methods that combine incremental recognition and incremental learning are presented, along with a comparison between the algorithms.
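The abstract mentions evidence-based forgetting without defining it, so the sketch below only illustrates the general idea under assumed parameters, not ILF itself: each feature of a learned concept carries an evidence score that decays unless re-observed, and features whose evidence fades below a threshold are pruned, keeping the concept structure compact.

```python
# Small sketch of the general idea of evidence-based forgetting (not ILF itself):
# feature evidence decays unless re-observed, and weakly supported features are
# pruned from the concept description.
class ConceptWithForgetting:
    def __init__(self, decay=0.9, prune_below=0.2):
        self.decay = decay
        self.prune_below = prune_below
        self.evidence = {}                   # feature -> evidence score

    def observe(self, features):
        # Decay everything, then reinforce the features seen in this example.
        for f in list(self.evidence):
            self.evidence[f] *= self.decay
        for f in features:
            self.evidence[f] = self.evidence.get(f, 0.0) + 1.0
        # Forget features whose accumulated evidence has faded away.
        self.evidence = {f: e for f, e in self.evidence.items()
                         if e >= self.prune_below}

    def description(self):
        return sorted(self.evidence, key=self.evidence.get, reverse=True)
```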
20

Incremental Unsupervised-Learning of Appearance Manifold with View-Dependent Covariance Matrix for Face Recognition from Video Sequences

MURASE, Hiroshi, IDE, Ichiro, TAKAHASHI, Tomokazu, Lina 01 April 2009 (has links)
No description available.
