11 |
串流資料分析在台灣股市指數期貨之應用 / An Application of Streaming Data Analysis on TAIEX Futures. 林宏哲, Lin, Hong Che. Unknown Date (has links)
Data stream mining is an important research field, because much important real-world data is generated or collected in the form of streams. Financial market data is often such a stream, and data of this type is inherently highly volatile. In this thesis we apply data stream mining techniques to predict the rise and fall of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) Futures. Predicting a data stream such as futures prices is difficult for a machine, and the difficulty is related to the type, degree, and frequency of concept drift. Concept drift denotes a change in the underlying data distribution, which causes prediction accuracy to drop sharply, so we focus on how to handle concept drift. Based on our experimental results, we first conjecture that high-frequency concept drift may exist in the TAIEX Futures data. The experiments also indicate that using a concept drift detection algorithm can substantially improve prediction accuracy, with marked gains even for algorithms that originally perform poorly. We also survey algorithms designed to handle each type of concept drift. In addition, we propose an ensemble algorithm that helps detect reoccurring concept drift; compared with the algorithm it improves upon, its main feature is that the user no longer needs to set the number of samples for each base classifier, a parameter that is critical to the algorithm's performance. / Data stream mining is an important research field because, in many real-world cases, data is generated and collected in the form of a stream. Financial market data is one example: it is intrinsically dynamic and usually generated in a sequential manner. In this thesis, we apply data stream mining techniques to the prediction of the Taiwan Stock Exchange Capitalization Weighted Stock Index Futures, or TAIEX Futures. Our goal is to predict the rising or falling of the futures. The prediction is difficult, and the difficulty is associated with concept drift, which indicates changes in the underlying data distribution. Therefore, we focus on concept drift handling. We first show that concept drift occurs frequently in the TAIEX Futures data by referring to the results of an empirical study. In addition, the results indicate that a concept drift detection method can improve prediction accuracy even when it is used with a data stream mining algorithm that does not perform well. Next, we explore methods that can help us identify the types of concept drift. The experimental results indicate that sudden and reoccurring concept drift exist in the TAIEX Futures data. Moreover, we propose an ensemble-based algorithm for reoccurring concept drift. The most characteristic feature of the proposed algorithm is that it can adaptively determine the chunk size, which is an important parameter for other concept drift handling algorithms.
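To make the role of drift detection concrete, below is a minimal Python sketch of the standard test-then-train (prequential) evaluation scheme used in data stream mining, with a drift detector watching the stream of 0/1 prediction outcomes and triggering a model reset. The classifier interface (predict/learn), the detector rule, and all thresholds are illustrative assumptions, not the algorithms evaluated in the thesis.

```python
from collections import deque

class SimpleDriftDetector:
    """Illustrative detector (not the thesis's algorithm): signal drift when
    the accuracy over a recent window falls well below the best window
    accuracy observed so far."""

    def __init__(self, window: int = 100, tolerance: float = 0.10):
        self.window = deque(maxlen=window)
        self.best_acc = 0.0
        self.tolerance = tolerance

    def update(self, correct: int) -> bool:
        self.window.append(correct)
        if len(self.window) < self.window.maxlen:
            return False                      # not enough evidence yet
        acc = sum(self.window) / len(self.window)
        self.best_acc = max(self.best_acc, acc)
        return acc < self.best_acc - self.tolerance

def prequential_run(stream, make_classifier):
    """Test-then-train loop: predict, score, update the detector, and rebuild
    the model whenever drift is signalled."""
    clf, detector = make_classifier(), SimpleDriftDetector()
    correct = n = 0
    for x, y in stream:
        y_hat = clf.predict(x)                # test first ...
        hit = int(y_hat == y)
        correct, n = correct + hit, n + 1
        clf.learn(x, y)                       # ... then train incrementally
        if detector.update(hit):              # detector watches the 0/1 outcomes
            clf = make_classifier()           # drop the outdated model after drift
            detector = SimpleDriftDetector()
    return correct / max(n, 1)                # prequential accuracy
```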
|
12 |
A Reservoir of Adaptive Algorithms for Online Learning from Evolving Data Streams. Pesaranghader, Ali. 26 September 2018 (has links)
Continuous change and development are essential aspects of evolving environments and applications, including, but not limited to, smart cities, the military, medicine, nuclear reactors, self-driving cars, aviation, and aerospace. That is, the fundamental characteristics of such environments may evolve and thus cause dangerous consequences, e.g., putting people's lives at stake, if no action is taken. Therefore, learning systems need to apply intelligent algorithms to monitor changes in their environments and update themselves effectively. Further, the performance of learning algorithms may fluctuate over time because the incoming data continuously evolves; that is, the currently efficient learning approach may become obsolete after a change in the data or the environment. Hence, the question 'how to have an efficient learning algorithm over time against evolving data?' has to be addressed. In this thesis, we make two contributions to address the challenges described above.
In the machine learning literature, the phenomenon of (distributional) change in data is known as concept drift. Concept drift may shift decision boundaries and cause a decline in accuracy. Learning algorithms therefore have to detect concept drift in evolving data streams and replace their predictive models accordingly. To address this challenge, adaptive learners have been devised which may utilize drift detection methods to locate the drift points in dynamic and changing data streams. A drift detection method that discovers the drift points quickly, with the lowest false positive and false negative rates, is preferred. A false positive refers to incorrectly alarming for concept drift, and a false negative refers to failing to alarm for concept drift. In this thesis, we introduce three algorithms, called the Fast Hoeffding Drift Detection Method (FHDDM), the Stacking Fast Hoeffding Drift Detection Method (FHDDMS), and the McDiarmid Drift Detection Methods (MDDMs), for detecting drift points with minimal delay and minimal false positive and false negative rates. FHDDM is a sliding-window-based algorithm and applies Hoeffding's inequality (Hoeffding, 1963) to detect concept drift. FHDDM slides its window over the prediction results, which are either 1 (for a correct prediction) or 0 (for a wrong prediction). Meanwhile, it compares the mean of the elements inside the window with the maximum mean observed so far; a significant difference between the two means, bounded by Hoeffding's inequality, indicates the occurrence of concept drift. FHDDMS extends the FHDDM algorithm by sliding multiple windows over its entries to improve drift detection in terms of detection delay and false negative rate. In contrast to FHDDM/S, the MDDM variants assign weights to their entries, i.e., higher weights are associated with the most recent entries in the sliding window, for faster detection of concept drift. The rationale is that recent examples reflect the ongoing situation adequately; by putting higher weights on the latest entries, we may detect concept drift more quickly. An MDDM algorithm bounds the difference between the weighted mean of the elements in the sliding window and the maximum weighted mean seen so far, using McDiarmid's inequality (McDiarmid, 1989). It then signals concept drift once a significant difference is observed. We experimentally show that FHDDM/S and the MDDMs outperform the state of the art, presenting promising results in terms of adaptation and classification measures.
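The FHDDM procedure described above maps almost line by line onto code. The sketch below is a reading of that description (a window of 0/1 prediction results, a running maximum of the window mean, and a Hoeffding bound on the permitted drop); the window size and confidence parameter delta are arbitrary illustrative choices, not the reference implementation's defaults.

```python
import math
from collections import deque

class FHDDM:
    """Sketch of a Fast Hoeffding Drift Detection Method-style detector.

    It slides a fixed-size window over 0/1 prediction results and signals
    drift when the window mean drops below the maximum mean seen so far
    by more than the Hoeffding bound epsilon."""

    def __init__(self, window_size: int = 100, delta: float = 1e-6):
        self.window = deque(maxlen=window_size)
        self.mu_max = 0.0
        # Hoeffding bound for the mean of `window_size` values in [0, 1].
        self.epsilon = math.sqrt(math.log(1.0 / delta) / (2.0 * window_size))

    def add(self, prediction_correct: bool) -> bool:
        """Feed one prediction result; return True if drift is detected."""
        self.window.append(1 if prediction_correct else 0)
        if len(self.window) < self.window.maxlen:
            return False                     # not enough evidence yet
        mu = sum(self.window) / len(self.window)
        self.mu_max = max(self.mu_max, mu)
        return (self.mu_max - mu) > self.epsilon
```

An MDDM-style variant would replace the plain mean with a weighted mean that emphasises the most recent entries and bound the drop with McDiarmid's inequality instead.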
Due to the evolving nature of data streams, the performance of an adaptive learner, which is defined by classification, adaptation, and resource consumption measures, may fluctuate over time. In fact, a learning algorithm, in the form of a (classifier, detector) pair, may perform well before a concept drift point but not after it. We frame this problem with the question 'how can we ensure that an efficient classifier-detector pair is present at any time in an evolving environment?' To answer this, we have developed the Tornado framework, which runs various kinds of learning algorithms simultaneously against evolving data streams. Each algorithm incrementally and independently trains a predictive model and updates the statistics of its drift detector. Meanwhile, the framework monitors the (classifier, detector) pairs and recommends the most efficient one, with respect to classification, adaptation, and resource consumption performance, to the user. We further define the holistic CAR measure, which integrates the classification, adaptation, and resource consumption measures for evaluating the performance of adaptive learning algorithms. Our experiments confirm that the most efficient algorithm may change over time because of the developing and evolving nature of data streams.
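As an illustration of the recommendation step, the following sketch ranks hypothetical (classifier, detector) pairs by a combined cost over classification error, adaptation delay, and resource usage. The min-max normalisation and equal weighting are assumptions made for illustration only; they are not the thesis's definition of the CAR measure.

```python
from dataclasses import dataclass

@dataclass
class PairStats:
    name: str
    error_rate: float       # fraction of misclassified instances (lower is better)
    detection_delay: float  # mean instances between a drift and its detection
    memory_mb: float        # memory used by the (classifier, detector) pair
    runtime_ms: float       # processing time per 1000 instances

def combined_score(s: PairStats, stats: list) -> float:
    """Assumed CAR-like score: min-max normalise each cost and average them."""
    def norm(value, values):
        lo, hi = min(values), max(values)
        return 0.0 if hi == lo else (value - lo) / (hi - lo)
    costs = [
        norm(s.error_rate, [p.error_rate for p in stats]),
        norm(s.detection_delay, [p.detection_delay for p in stats]),
        norm(s.memory_mb, [p.memory_mb for p in stats]),
        norm(s.runtime_ms, [p.runtime_ms for p in stats]),
    ]
    return sum(costs) / len(costs)            # lower is better

def recommend(stats: list) -> PairStats:
    """Recommend the currently most efficient (classifier, detector) pair."""
    return min(stats, key=lambda s: combined_score(s, stats))
```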
|
13 |
An efficient entropy estimation approach. Paavola, M. (Marko). 01 November 2011 (has links)
Abstract
Advances in miniaturisation have led to the development of new wireless measurement technologies such as wireless sensor networks (WSNs). A WSN consists of low-cost, battery-operated nodes capable of sensing the environment, transmitting and receiving data, and computing. While a WSN has several advantages, including cost-effectiveness and easy installation, the nodes suffer from small memory, low computing power, narrow bandwidth, and a limited energy supply. In order to cope with these resource restrictions, data processing methods should be as efficient as possible. As a result, high-quality approximations are preferred over exact answers.
The aim of this thesis was to propose an efficient entropy approximation method for resource-constrained environments. Specifically, the algorithm should use a small, constant amount of memory, meet a given accuracy requirement, and have low computational demand.
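For intuition about what a small, constant memory footprint means in practice, the sketch below shows a generic fixed-bin histogram estimate of Shannon entropy whose memory use is set by the bin count alone. It is a placeholder illustration of the stated requirements, not the approximation method proposed in the thesis.

```python
import math

class StreamingEntropy:
    """Generic constant-memory entropy approximation (not the thesis's method):
    maintain a fixed-size histogram of the incoming signal and compute Shannon
    entropy from the bin frequencies on demand."""

    def __init__(self, lo: float, hi: float, bins: int = 32):
        self.lo, self.hi, self.bins = lo, hi, bins
        self.counts = [0] * bins              # memory use is fixed by `bins`
        self.total = 0

    def add(self, x: float) -> None:
        idx = int((x - self.lo) / (self.hi - self.lo) * self.bins)
        idx = min(max(idx, 0), self.bins - 1)  # clamp out-of-range samples
        self.counts[idx] += 1
        self.total += 1

    def entropy(self) -> float:
        """Shannon entropy (in bits) of the current histogram."""
        h = 0.0
        for c in self.counts:
            if c:
                p = c / self.total
                h -= p * math.log2(p)
        return h
```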
The performance of the proposed algorithm was evaluated experimentally with three case studies. The first study focused on the online monitoring of WSN communication performance in an industrial environment. The monitoring approach was based on the observation that entropy can be applied to assess the impact of interference on the time-delay variation of periodic tasks.
The main purpose of the two additional cases, depth of anaesthesia (DOA) monitoring and benchmarking with simulated data sets, was to provide further evidence of the general applicability of the proposed method. Moreover, in the case of DOA monitoring, an efficient entropy approximation could assist in the development of handheld devices or in processing large amounts of online data from multiple channels simultaneously.
The initial results from the communication and DOA monitoring applications, as well as from the simulations, were encouraging. Based on the case studies, the proposed method was able to meet the stated requirements. Since entropy is a widely used quantity, the method is also expected to have a variety of applications in measurement systems with similar requirements. / Tiivistelmä (Abstract)
The miniaturisation of mechanical and semiconductor components has enabled the development of new measurement technologies, such as wireless sensor networks. Sensor networks consist of inexpensive, battery-powered nodes that can measure their environment and process, transmit, and receive data. The advantages of sensor networks are cost-effectiveness and easy deployment; their limitations are scarce memory and data transfer capacity, low computing power, and a limited energy supply. Because of these limitations, the computational methods used in the nodes must be as efficient as possible.
The aim of this work was to present an efficient entropy computation method for resource-constrained environments. The algorithm was required to be sufficiently accurate, to have a small and constant memory footprint, and to be computationally efficient.
The performance of the method developed in this work was studied through application examples. The first case dealt with real-time monitoring of the communication links of a sensor network. The approach was motivated by earlier research showing that entropy can be used to observe the effect of interference on the variation of message delays.
The main goal of the other application examples, a depth-of-anaesthesia indicator and simulation experiments, was to study the generalisability of the method. In the case of depth-of-anaesthesia monitoring in particular, the method was also judged potentially useful for the development of wireless handheld depth monitors and for the real-time processing of large numbers of measurements.
The initial results from the monitoring of wireless network links and depth of anaesthesia, as well as from the simulations, were promising. Based on the application examples, the proposed algorithm was able to meet the stated requirements. Since entropy is a widely used quantity, the method may be applicable to many measurement environments with similar requirements.
|
14 |
Potlačení DoS útoků s využitím strojového učení / Mitigation of DoS Attacks Using Machine Learning. Goldschmidt, Patrik. January 2021 (has links)
Denial-of-service (DDoS) attacks are an increasingly frequent security incident in today's computer networks. This thesis focuses on detecting these attacks and providing the relevant information needed to mitigate them in real time. This functionality is achieved using data stream mining and machine learning techniques. The result of the work is a set of tools covering the whole machine learning process, from feature extraction through data preprocessing to the export of a trained model ready for deployment in production. Experimental results evaluated on several real and synthetic datasets show a system accuracy of over 99%, with reliable detection of an ongoing attack within 4 seconds of its start.
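The abstract outlines a pipeline of feature extraction, preprocessing, training, and model export. A minimal sketch of such a pipeline is shown below using scikit-learn; the flow features, the model choice, and the file name are illustrative assumptions and do not reflect the thesis's actual toolset.

```python
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-flow features extracted from network traffic
# (packet rate, byte rate, mean packet size, SYN ratio).
X_train = np.array([
    [120.0, 9.6e4, 800.0, 0.02],   # benign-looking flow
    [9500.0, 5.7e5, 60.0, 0.97],   # flood-like flow
])
y_train = np.array([0, 1])          # 0 = benign, 1 = attack

# Preprocessing and classifier combined into one exportable object.
model = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=100))
model.fit(X_train, y_train)

# Export the trained model for deployment; a detector would reload it with
# joblib.load() and classify incoming flow records as they arrive.
joblib.dump(model, "ddos_model.joblib")
```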
|
15 |
Obtenção de padrões sequenciais em data streams atendendo requisitos do Big Data / Obtaining sequential patterns in data streams while meeting Big Data requirements. Carvalho, Danilo Codeco. 06 June 2016 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / The growing amount of data produced daily, both by businesses and by individuals on the web, has increased the demand for analysing and extracting knowledge from this data. While over the last two decades the solution was to store the data and run data mining algorithms on it, this has now become unviable even for supercomputers. In addition, the requirements of the Big Data age go far beyond the large amount of data to analyse: response-time requirements and data complexity carry more weight in many real-world domains. New models have been researched and developed, often proposing distributed computing or different ways of handling data stream mining. Current research shows that one alternative in data stream mining is to combine a real-time event handling mechanism with classic association rule or sequential pattern mining algorithms. This work presents a data stream mining approach that meets the Big Data response-time requirement by linking the real-time event handling mechanism Esper with the Incremental Miner of Stretchy Time Sequences (IncMSTS) algorithm. The results show that it is possible to bring a static data mining algorithm into a data stream environment and preserve the tendencies in the discovered patterns, even though it is not possible to continuously read all the data arriving on the stream. / The growth in the amount of data produced daily, both by companies and by individuals on the web, has increased the demand for analysing and extracting knowledge from this data. While in the last two decades the solution was to store the data and run data mining algorithms, this has now become unviable even on supercomputers. Moreover, the requirements of the so-called Big Data era go far beyond the sheer amount of data to analyse; response-time requirements and data complexity carry greater weight in many real-world domains. New models have been researched and developed, often proposing distributed computing or different ways of handling data stream mining. Current research shows that one alternative in data stream mining is to combine a real-time event processing mechanism with classic association rule or sequential pattern mining algorithms. This work presents a data stream mining approach that meets the Big Data response-time requirement by combining the real-time event processing engine Esper with the Incremental Miner of Stretchy Time Sequences (IncMSTS) algorithm. The results show that it is possible to bring a static data mining algorithm into a data stream environment and preserve the trends in the discovered patterns, even though it is not possible to read all the data arriving continuously on the stream.
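The core idea, coupling a real-time event handling mechanism with a batch-style sequential pattern miner, can be sketched generically as below: event sequences accumulate in a bounded window, and the miner is re-run on that window at a fixed interval. This Python stand-in is not the actual Esper/IncMSTS integration; the toy miner, the window size, and the mining interval are placeholder assumptions.

```python
import time
from collections import deque

def mine_sequential_patterns(sequences, min_support=0.3):
    """Placeholder for a sequential pattern miner such as IncMSTS:
    here it merely counts single items that are frequent in the window."""
    counts = {}
    for seq in sequences:
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    threshold = min_support * max(len(sequences), 1)
    return {item for item, c in counts.items() if c >= threshold}

def stream_loop(event_source, window_size=1000, mine_every=5.0):
    """Buffer incoming event sequences and re-mine the window periodically,
    so pattern results stay fresh without re-reading the whole stream."""
    window = deque(maxlen=window_size)        # bounded memory, like a CEP window
    last_mined = time.monotonic()
    for sequence in event_source:             # each element: one event sequence
        window.append(sequence)
        if time.monotonic() - last_mined >= mine_every:
            patterns = mine_sequential_patterns(list(window))
            print("current frequent patterns:", patterns)
            last_mined = time.monotonic()
```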
|
16 |
DS-Fake: a data stream mining approach for fake news detection. Mputu Boleilanga, Henri-Cedric. 08 1900 (has links)
The advent of the internet, followed by social networks, has allowed easy access to and rapid dissemination of information by anyone with an internet connection. One of the harmful consequences of this is the spread of false information, known as "fake news". Fake news now represent a major challenge because of these consequences. Many people still claim today that without the massive spread of fake news about Hillary Clinton during the 2016 presidential campaign, Donald Trump might not have won that election. The subject of this thesis is therefore the automatic detection of fake news.
Nowadays, there is a large body of work on this subject. Most of the approaches presented are based either on exploiting the content of the input text, on the social context of the text, or on a mixture of these two types of approaches. Nevertheless, there are very few effective tools or systems that detect false information in real life while also taking into account the evolution of information over time. Moreover, there is a glaring lack of systems designed to help social network users adopt behaviour that would allow them to detect fake news.
To mitigate this problem, we propose a system called DS-Fake. To our knowledge, this system is the first to include data stream mining. A data stream is an infinite, countable sequence of elements and is used to represent data that becomes available over time. DS-Fake explores both the input and the content of a data stream. The input is a Twitter post given to the system so that it can determine whether the tweet is trustworthy. The data stream is extracted using web scraping techniques. The content received through this stream is related to the input in terms of the topics or named entities mentioned in the input text. DS-Fake also helps users develop good reflexes when faced with any information spreading on social networks.
DS-Fake assigns a credibility score to social network users. This score describes the probability that a user publishes false information. Most systems use features such as the number of followers, location, occupation, and so on. Only a few systems use the history of a user's previous publications to assign a score, and to compute that score most of them use the arithmetic mean. DS-Fake returns a confidence percentage that reflects the probability that the input is reliable. Unlike the small number of systems that use the publication history but consider only a single user's own previous tweets, DS-Fake computes the credibility score from the previous tweets of all users. We renamed the credibility score the legitimacy score. The latter is based on the Bayesian averaging technique. Computing the score this way attenuates the impact of the results of previous publications according to the number of publications in the history: if one user has more tweets in their history than another, then even if both users' tweets are all true, the first user is more credible than the second, and their legitimacy score will therefore be higher. To our knowledge, this work is the first to use a Bayesian average over the tweet history of all sources to assign a score to each source.
In addition, the DS-Fake modules are able to encapsulate the results of two tasks, namely text similarity and natural language inference (NLI). This type of model, which combines these two NLP tasks, is also new for the problem of fake news detection. In terms of performance, DS-Fake outperforms all state-of-the-art approaches that have used FakeNewsNet, across various metrics.
There are very few complete datasets with a variety of attributes, which is one of the challenges of fake news research. Shu et al. introduced the FakeNewsNet dataset in 2018 to address this problem. The legitimacy score and the retrieved tweets add attributes to the FakeNewsNet dataset. / The advent of the internet, followed by online social networks, has allowed easy access to and rapid propagation of information by anyone with an internet connection. One of the harmful consequences of this is the spread of false information, widely known as "fake news". Fake news represent a major challenge because of their consequences. Some people still affirm that without the massive spread of fake news about Hillary Clinton during the 2016 presidential campaign, Donald Trump would not have been the winner of the 2016 United States presidential election. The subject of this thesis is the automatic detection of fake news.
Nowadays, there is a lot of research on this subject. The vast majority of the approaches presented in these works are based on the content of the input text, on the social context of the text, or on a mixture of these two types of approaches. Nevertheless, there are only a few practical tools or systems that detect false information in real life while taking the evolution of information over time into account. Moreover, no system yet offers an explanation to help social network users adopt a behaviour that will allow them to detect fake news.
In order to mitigate this problem, we propose a system called DS-Fake. To the best of our knowledge, this system is the first to include data stream mining. A data stream is a sequence of elements used to represent data that becomes available over time. The system explores both the input and the contents of a data stream. The input is a Twitter post given to the system, which determines whether the tweet can be trusted. The data stream is extracted using web scraping techniques. The content received through this stream is related to the input in terms of the topics or named entities mentioned in the input text. The system also helps users develop good reflexes when faced with any information that spreads on social networks.
DS-Fake assigns a credibility score to users of social networks. This score describes how likely a user is to publish false information. Most systems use features like the number of followers, the location, the job title, etc. Only a few systems use the history of a user's previous publications to assign a score, and to determine this score most of them use the arithmetic mean. DS-Fake returns a confidence percentage that indicates how likely the input is to be reliable. Unlike the small number of systems that use the publication history but take into account only the previous tweets of a single user, DS-Fake calculates the credibility score based on the previous tweets of all users. We rename this credibility score the legitimacy score. The latter is based on the Bayesian averaging technique. This way of computing the score attenuates the impact of the results of previous posts according to the number of posts in the history: if one user has more tweets in their history than another, then even if both users' tweets are all true, the first user is considered more credible than the second, and their legitimacy score will therefore be higher. To our knowledge, this work is the first to use a Bayesian average based on the post history of all sources to assign a score to each source.
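The Bayesian average mentioned above can be viewed as shrinking a user's history mean toward a global prior, so that short histories are pulled more strongly toward the prior. The sketch below is a generic illustration of that idea rather than DS-Fake's exact formula; the prior weight and the 0/1 truthfulness labels are assumptions.

```python
def bayesian_average(user_scores, prior_mean, prior_weight=10.0):
    """Generic Bayesian average: a user's legitimacy score is their history
    mean, shrunk toward the global prior when the history is short."""
    n = len(user_scores)
    return (prior_weight * prior_mean + sum(user_scores)) / (prior_weight + n)

# Two users whose past tweets were all judged truthful (label 1):
history_a = [1] * 12                                 # longer history
history_b = [1] * 3                                  # shorter history
global_mean = 0.5                                    # assumed prior over all users

score_a = bayesian_average(history_a, global_mean)   # ~0.77
score_b = bayesian_average(history_b, global_mean)   # ~0.62, lower despite all-true tweets
```

This reproduces the behaviour described above: with identical all-true histories, the user with more posts ends up with the higher legitimacy score.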
DS-Fake modules have the ability to encapsulate the output of two tasks, namely text similarity and natural language inference. This type of model that combines these two NLP tasks is also new for the problem of fake news detection.
There are very few complete datasets with a variety of attributes, which is one of the challenges of fake news research. Shu et al. introduce in 2018 the FakeNewsNet dataset to tackle this issue. Our work uses and enriches this dataset. The legitimacy score and the retrieved tweets from named entities mentioned in the input texts add features to the FakeNewsNet dataset. DS-Fake outperforms all state-of-the-art approaches that have used FakeNewsNet and that are based on various metrics.
|