141
Δομές δεικτοδότησης και υπολογισμός ερωτημάτων εύρους κ-διαστάσεων σε κατανεμημένα περιβάλλοντα / Indexing structures and computation of k-dimensional range queries in distributed environments
Καπλάνης, Αθανάσιος 24 November 2014 (has links)
Ανέκαθεν, η ανάγκη του ανθρώπου για πληροφορία ήτανε μια από αυτές που φρόντιζε να ικανοποιήσει όσο το δυνατόν πληρέστερα. Η πληροφορία είναι σε όλες τις περιπτώσεις ένα πολύτιμο εργαλείο στην λήψη αποφάσεων και οι άνθρωποι γρήγορα αντιλήφθηκαν την σημασία της, ειδικότερα μάλιστα στην σύγχρονη εποχή στην οποία μέσω της επιστήμης της Πληροφορικής δόθηκε η δυνατότητα σε μεγάλο μέρος του κοινού να έχει πρόσβαση σε τεράστιο όγκο δεδομένων, τα οποία μέσω της σωστής επεξεργασίας μετατρέπονται σε πληροφορία. Αυτό που πλέον αποτελεί πρόκληση, η οποία μας καλεί σαν επιστήμονες της Πληροφορικής να αντιμετωπίσουμε, είναι η εύρεση και στην συνέχεια η εφαρμογή καινούργιων μεθόδων γρήγορης και ανέξοδης συλλογής, αποδοτικής αποθήκευσης και εποικοδομητικής ανάλυσης δεδομένων, έτσι ώστε να γίνουν πληροφορία ποιοτική, πλούσια και με σημαντική χρηστική αξία. Στις μέρες μας, η ανάπτυξη του κλάδου τόσο των κατανεμημένων συστημάτων όσο και του διαδικτύου, μας έχουνε δώσει την δυνατότητα να χρησιμοποιούνται χαμηλοί σε απαιτήσεις υπολογιστικοί πόροι για να επεξεργάζονται παράλληλα μεγάλο όγκο δεδομένων. Ο κλάδος της Πληροφορικής που ασχολείται εκτενώς με αυτά τα συστήματα είναι τα ομότιμα συστήματα ή αλλιώς p2p συστήματα και ο κατανεμημένος υπολογισμός.
Η παρούσα διπλωματική εργασία έχει ως στόχο τον εντοπισμό σημείων δύο διαστάσεων σε κατανεμημένο περιβάλλον. Ορίζεται, δηλαδή, ένας χώρος κ διαστάσεων, το πλέγμα (grid), στον οποίο ο χρήστης προσπαθεί να εντοπίσει σημεία που τον ενδιαφέρουν, δημιουργώντας έτσι ερωτήματα εύρους. Το σύστημα αναζητά το αποτέλεσμα του ερωτήματος αυτού, ώστε να καταλήξει σε ποια ορθογώνια τμήματα του πλέγματος εμπλέκονται, και στη συνέχεια τα τμήματα αυτά επιστρέφονται. Πιο συγκεκριμένα, το πλέγμα χωρίζεται σε τετράγωνες περιοχές και κάθε κόμβος του κατανεμημένου δικτύου αναλαμβάνει να φιλοξενήσει τα σημεία μίας τετράγωνης περιοχής. Όλοι αυτοί οι κόμβοι οργανώνονται σε ένα Hadoop cluster και τα δεδομένα εισάγονται στην κατανεμημένη βάση δεδομένων HBase, η οποία βασίζεται στην αρχιτεκτονική του BigTable της Google (πάνω από το Google File System). Ο τρόπος που οργανώνονται τα δεδομένα στην HBase είναι κατανεμημένος και γίνεται χρήση των B+ -δέντρων. Η χρησιμότητα των B+ -δέντρων, σε συνδυασμό με το κατανεμημένο πλαίσιο εργασίας του Hadoop, έγκειται στο γεγονός ότι, με τη χρήση των απαραίτητων εργαλείων τόσο της HBase όσο και του Hadoop FS, μπορούμε να γνωρίζουμε σε ποιον κόμβο του Hadoop cluster είναι αποθηκευμένοι οι ζητούμενοι κόμβοι του B+ -δέντρου και έτσι να επιτυγχάνεται η γρήγορη ανάκτηση των αποτελεσμάτων σε ένα ερώτημα εύρους.
Η διάρθρωση της εργασίας έχει ως εξής: Στο πρώτο κεφάλαιο γίνεται μια εισαγωγή στις έννοιες του κατανεμημένου υπολογισμού πάνω σε κατανεμημένα περιβάλλοντα. Στο δεύτερο γίνεται μια αναφορά στα ομότιμα δίκτυα (p2p) και πιο συγκεκριμένα αναλύεται το δίκτυο επικάλυψης του BATON που έχει δενδρική δομή όμοια με αυτή του Β+ -δέντρου. Στο τρίτο κεφάλαιο αναφέρεται μια υλοποίηση δεικτοδότησης και απάντησης σε ερωτήματα εύρους στο Νέφος Υπολογιστών με χρήση βασικών δομών δεδομένων B+ -δέντρου. Επίσης, παρουσιάζεται η δομή ART (Autonomous Range Tree), η οποία μπορεί να υποστηρίξει ερωτήματα εύρους σε μη κεντρικοποιημένα περιβάλλοντα ευρείας κλίμακας και μπορεί να κλιμακώνεται σε σχέση με τον αριθμό των κόμβων, καθώς και με βάση τα στοιχεία που είναι αποθηκευμένα. Η δομή ART ξεπερνά τις πιο δημοφιλείς μη κεντρικοποιημένες δομές, συμπεριλαμβανομένων του Chord (και μερικών από τους διαδόχους του), του BATON (και του διαδόχου του) και των Skip-Graphs. Στο τέταρτο και πέμπτο κεφάλαιο, αντίστοιχα, γίνεται μια αναφορά στα βασικότερα σημεία της αρχιτεκτονικής και της λειτουργίας του Hadoop Framework και της HBase. Στο έκτο κεφάλαιο βρίσκεται η περιγραφή της υλοποίησης της παρούσας διπλωματικής εργασίας μαζί με τους αλγορίθμους και τον τρόπο λειτουργίας τους. Στο επόμενο γίνεται η αξιολόγηση των πειραματικών αποτελεσμάτων της παρούσας διπλωματικής εργασίας, καθώς και τα συμπεράσματα που προκύπτουν μέσα από την αξιολόγηση. Τέλος, στο τελευταίο και όγδοο κεφάλαιο γίνεται η αποτίμηση της διπλωματικής εργασίας, καθώς αναφέρονται τα βασικά της μέρη, όπως επίσης και πιθανές προεκτάσεις που θα βελτίωναν την απόδοση του συστήματος. / The need for information has always been one that people have sought to satisfy as fully as possible. Information is in every way a valuable tool in decision making, and people quickly realized its importance, especially in modern times, when information technology gave a large part of the public access to vast volumes of data that, through proper processing, can be turned into information. The challenge that IT specialists now have to face is finding and implementing new methods of fast and inexpensive data collection, efficient storage and constructive analysis, in order to turn data into quality, rich and useful information. Nowadays, the development of both distributed systems and the Internet has made it possible to use computational resources with low requirements to process large amounts of data in parallel. The IT fields that deal extensively with such systems are peer-to-peer (p2p) systems and distributed computing.
The present dissertation aims at finding points in two-dimensional space in a distributed environment. A k-dimensional space, the grid, is defined, in which the user tries to identify points of interest by issuing range queries. The system evaluates such a query by determining which rectangular sections of the grid are involved, and these sections are then returned. More specifically, the grid is divided into square areas, and each node of the distributed network accommodates the points of one square area. All these nodes are organized into a Hadoop cluster, and the data is imported into the HBase distributed database, which is based on the architecture of Google's BigTable (built on top of the Google File System). In HBase the data is organized in a distributed way and B+ -trees are used. The utility of B+ -trees in conjunction with the distributed Hadoop framework lies in the fact that, using the necessary tools of both HBase and the Hadoop FS, we can know on which Hadoop cluster node the requested B+ -tree nodes are stored, and thus fast retrieval of the results of a range query is achieved.
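To make the cell-to-key mapping concrete, the following is a minimal sketch, assuming a hypothetical fixed cell side, non-negative coordinates and zero-padded "cell-x-y" row keys so that contiguous cells fall into contiguous key ranges; the thesis's actual key design and HBase code may differ.

```java
// Sketch: map 2-D points to grid cells and enumerate the cells a range query touches.
// Assumptions (not from the thesis): cell side of 100 units, non-negative coordinates,
// row keys "cell-<xCell>-<yCell>" zero-padded so one grid row is a contiguous key range.
public class GridRangeQuerySketch {

    static final int CELL_SIDE = 100;   // hypothetical side length of a square cell

    // Row key of the cell that contains point (x, y).
    static String cellKey(double x, double y) {
        int cx = (int) Math.floor(x / CELL_SIDE);
        int cy = (int) Math.floor(y / CELL_SIDE);
        return String.format("cell-%06d-%06d", cx, cy);
    }

    // All cell keys that intersect the query rectangle [xMin,xMax] x [yMin,yMax].
    // Each contiguous run of keys could be fetched with one HBase range scan
    // (e.g. Scan.withStartRow/withStopRow) against the region server hosting that range.
    static java.util.List<String> cellsForQuery(double xMin, double yMin,
                                                double xMax, double yMax) {
        java.util.List<String> keys = new java.util.ArrayList<>();
        for (int cx = (int) Math.floor(xMin / CELL_SIDE); cx <= (int) Math.floor(xMax / CELL_SIDE); cx++) {
            for (int cy = (int) Math.floor(yMin / CELL_SIDE); cy <= (int) Math.floor(yMax / CELL_SIDE); cy++) {
                keys.add(String.format("cell-%06d-%06d", cx, cy));
            }
        }
        return keys;
    }

    public static void main(String[] args) {
        System.out.println(cellKey(250.0, 731.5));            // cell-000002-000007
        System.out.println(cellsForQuery(90, 90, 210, 110));  // 3x2 block of cells
    }
}
```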
The structure of the thesis is as follows: The first chapter is an introduction to the concepts of distributed computing over distributed environments. The second covers peer-to-peer (p2p) networks and, more specifically, analyzes the BATON overlay network, which has a tree structure similar to that of the B+ -tree. The third chapter presents an implementation of indexing and answering range queries in the cloud using basic B+ -tree data structures. It also presents the ART (Autonomous Range Tree) structure, which can support range queries in large-scale decentralized environments and can scale in terms of both the number of nodes and the data items stored. ART outperforms the most popular decentralized structures, including Chord (and some of its successors), BATON (and its successor) and Skip Graphs. The fourth and fifth chapters cover the main points of the architecture and operation of the Hadoop framework and of HBase, respectively. The sixth chapter describes the implementation developed in this dissertation, together with its algorithms and how they operate. The next chapter evaluates the experimental results and presents the conclusions that derive from the evaluation. Finally, the eighth and last chapter is an overview of the dissertation, summarizing its main parts as well as possible extensions that would improve system performance.
142
Mining Tera-Scale Graphs: Theory, Engineering and Discoveries
Kang, U 01 May 2012 (has links)
How do we find patterns and anomalies on graphs with billions of nodes and edges that do not fit in memory? How can we use parallelism for such tera- or peta-scale graphs? In this thesis, we propose PEGASUS, a large-scale graph mining system implemented on top of the HADOOP platform, the open source version of MAPREDUCE. PEGASUS includes algorithms which help us spot patterns and anomalous behaviors in large graphs.
PEGASUS enables structure analysis on large graphs. We unify many different structure analysis algorithms, including connected components, PageRank, and radius/diameter analysis, into a general primitive called GIM-V. GIM-V is highly optimized, achieving good scale-up with respect to the number of edges and available machines. Using GIM-V, we discover surprising patterns, including the 7 degrees of separation in one of the largest publicly available Web graphs, with 7 billion edges.
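To make the GIM-V idea concrete, here is a small in-memory sketch (not PEGASUS's MapReduce implementation) that expresses connected-component labeling as a generalized matrix-vector multiplication: combine2 pairs an edge with the neighbor's current label, combineAll reduces the partial results, and assign merges the result with the old label.

```java
import java.util.*;

// In-memory illustration of the GIM-V primitive; PEGASUS runs the same three
// operations as Hadoop MapReduce jobs over edge and vector files on disk.
public class GimVSketch {

    // combine2(m_ij, v_j): for connected components, just propagate the neighbor's label.
    static long combine2(long edge, long label) { return label; }

    // combineAll(partials): keep the minimum label seen among neighbors.
    static long combineAll(List<Long> partials) {
        long min = Long.MAX_VALUE;
        for (long p : partials) min = Math.min(min, p);
        return min;
    }

    // assign(old, new): a node keeps the smaller of its old and newly computed label.
    static long assign(long oldLabel, long newLabel) { return Math.min(oldLabel, newLabel); }

    public static void main(String[] args) {
        // Undirected edge list with two components: {0,1,2} and {3,4}.
        long[][] edges = {{0, 1}, {1, 2}, {3, 4}};
        int n = 5;
        long[] label = new long[n];
        for (int i = 0; i < n; i++) label[i] = i;   // initial label = node id

        boolean changed = true;
        while (changed) {                           // iterate GIM-V until convergence
            changed = false;
            List<List<Long>> partials = new ArrayList<>();
            for (int i = 0; i < n; i++) partials.add(new ArrayList<>());
            for (long[] e : edges) {                // each edge contributes in both directions
                partials.get((int) e[0]).add(combine2(1L, label[(int) e[1]]));
                partials.get((int) e[1]).add(combine2(1L, label[(int) e[0]]));
            }
            for (int i = 0; i < n; i++) {
                if (partials.get(i).isEmpty()) continue;
                long updated = assign(label[i], combineAll(partials.get(i)));
                if (updated != label[i]) { label[i] = updated; changed = true; }
            }
        }
        System.out.println(Arrays.toString(label)); // [0, 0, 0, 3, 3]
    }
}
```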
PEGASUS also enables inference and spectral analysis on large graphs. We design an efficient distributed belief propagation algorithm which infers the states of unlabeled nodes given a set of labeled nodes. We also develop an eigensolver for computing the top k eigenvalues and eigenvectors of the adjacency matrices of very large graphs. We use the eigensolver to discover anomalous adult advertisers in the who-follows-whom Twitter graph with 3 billion edges. In addition, we develop an efficient tensor decomposition algorithm and use it to analyze a large knowledge base tensor.
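For readers unfamiliar with spectral analysis, the toy power iteration below shows what an eigensolver computes for an adjacency matrix; it only finds the leading eigenpair of a tiny in-memory graph and is not the distributed top-k solver described in the thesis.

```java
// Toy power iteration for the leading eigenpair of a small symmetric adjacency matrix.
// Illustrative only: the thesis targets billion-edge graphs with a distributed solver.
public class PowerIterationSketch {
    public static void main(String[] args) {
        double[][] a = {{0, 1, 1}, {1, 0, 1}, {1, 1, 0}};   // adjacency matrix of a triangle
        double[] v = {1, 1, 1};
        double eigenvalue = 0;
        for (int iter = 0; iter < 100; iter++) {
            double[] w = new double[v.length];
            for (int i = 0; i < a.length; i++)              // w = A * v
                for (int j = 0; j < a.length; j++) w[i] += a[i][j] * v[j];
            double norm = 0;
            for (double x : w) norm += x * x;
            norm = Math.sqrt(norm);
            for (int i = 0; i < w.length; i++) w[i] /= norm;  // normalize
            eigenvalue = 0;
            for (int i = 0; i < a.length; i++)              // Rayleigh quotient w^T A w
                for (int j = 0; j < a.length; j++) eigenvalue += w[i] * a[i][j] * w[j];
            v = w;
        }
        System.out.printf("leading eigenvalue ~ %.3f%n", eigenvalue);  // 2.000 for the triangle
    }
}
```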
Finally, PEGASUS allows the management of large graphs. We propose efficient graph storage and indexing methods to answer graph mining queries quickly. We also develop an edge layout algorithm that compresses graphs more effectively.
143
Scaling Software Security Analysis to Millions of Malicious Programs and Billions of Lines of Code
Jang, Jiyong 01 August 2013 (has links)
Software security is a big data problem. The volume of new software artifacts created far outpaces the current capacity of software analysis. This gap has brought an urgent challenge to our security community: scalability. If our techniques cannot cope with an ever-increasing volume of software, we will always be one step behind attackers. Thus, developing scalable analysis techniques to bridge the gap is essential.
In this dissertation, we argue that automatic code reuse detection enables an efficient data reduction of a high volume of incoming malware for downstream analysis and enhances software security by efficiently finding known vulnerabilities across large code bases. In order to demonstrate the benefits of automatic software similarity detection, we discuss two representative problems that are remedied by scalable analysis: malware triage and unpatched code clone detection.
First, we tackle the onslaught of malware. Although over one million new malware samples are reported each day, existing research shows that most malware is not written from scratch; instead, it consists of automatically generated variants of existing malware. When groups of highly similar variants are clustered together, new malware stands out more easily. Unfortunately, current systems struggle to handle this high volume of malware. We scale clustering using feature hashing and perform semantic analysis using co-clustering. Our evaluation demonstrates that these techniques are an order of magnitude faster than previous systems and automatically discover highly correlated features and malware groups. Furthermore, we design algorithms to infer evolutionary relationships among malware, which helps analysts understand trends over time and make informed decisions about which malware to analyze first.
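A minimal sketch of the feature-hashing idea mentioned above: byte n-grams (a common malware feature, assumed here purely for illustration) are hashed into a fixed-width bit vector so that millions of samples can be compared cheaply. The vector width, n-gram size and hash are illustrative choices, not the dissertation's exact design.

```java
import java.util.BitSet;

// Hash variable-size n-gram feature sets into fixed-width bit vectors, then compare
// samples with Jaccard similarity over the vectors.
public class FeatureHashingSketch {

    static final int WIDTH = 1 << 16;   // 65,536-bit feature vector (illustrative)
    static final int NGRAM = 4;         // 4-byte n-grams (illustrative)

    static BitSet hashFeatures(byte[] content) {
        BitSet v = new BitSet(WIDTH);
        for (int i = 0; i + NGRAM <= content.length; i++) {
            int h = 1;
            for (int j = 0; j < NGRAM; j++) h = 31 * h + content[i + j];  // simple polynomial hash
            v.set(Math.floorMod(h, WIDTH));
        }
        return v;
    }

    // Jaccard similarity |A ∩ B| / |A ∪ B| between two hashed feature vectors.
    static double jaccard(BitSet a, BitSet b) {
        BitSet inter = (BitSet) a.clone(); inter.and(b);
        BitSet union = (BitSet) a.clone(); union.or(b);
        return union.isEmpty() ? 0.0 : (double) inter.cardinality() / union.cardinality();
    }

    public static void main(String[] args) {
        byte[] s1 = "push ebp; mov ebp, esp; call decrypt_payload".getBytes();
        byte[] s2 = "push ebp; mov ebp, esp; call decrypt_payload2".getBytes();
        byte[] s3 = "completely different program body".getBytes();
        System.out.printf("similar pair:   %.2f%n", jaccard(hashFeatures(s1), hashFeatures(s2)));
        System.out.printf("unrelated pair: %.2f%n", jaccard(hashFeatures(s1), hashFeatures(s3)));
    }
}
```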
Second, we address the problem of detecting unpatched code clones at scale. When buggy code gets copied from project to project, eventually all projects will need to be patched. We call clones of buggy code that have been fixed in only a subset of projects unpatched code clones. Unfortunately, code copying is usually ad-hoc and is often not tracked, which makes it challenging to identify all unpatched vulnerabilities in code bases at the scale of entire OS distributions. We scale unpatched code clone detection to spot over 15,000 latent security vulnerabilities in 2.1 billion lines of code from the Linux kernel, all Debian and Ubuntu packages, and all C/C++ projects in SourceForge in three hours on a single machine. To the best of our knowledge, this is the largest set of bugs ever reported in a single paper.
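A small sketch of how a known-buggy snippet can be fingerprinted and matched against other code: the source is normalized, split into token n-grams and hashed, so a fixed-size fingerprint set can be checked cheaply against many files. The normalization and window size here are illustrative assumptions, not the exact technique of the dissertation.

```java
import java.util.HashSet;
import java.util.Set;

// Fingerprint normalized token 4-grams of a code snippet and report how much of a
// known-buggy snippet survives (unpatched) inside another file. Normalization here is
// deliberately simple: lowercase and collapse whitespace.
public class CloneFingerprintSketch {

    static Set<Integer> fingerprints(String code) {
        String[] tokens = code.toLowerCase().trim().split("\\s+");
        Set<Integer> fps = new HashSet<>();
        for (int i = 0; i + 4 <= tokens.length; i++) {
            fps.add((tokens[i] + " " + tokens[i + 1] + " " + tokens[i + 2] + " " + tokens[i + 3]).hashCode());
        }
        return fps;
    }

    // Fraction of the buggy snippet's fingerprints found in the candidate file.
    static double containment(String buggySnippet, String candidateFile) {
        Set<Integer> bug = fingerprints(buggySnippet);
        Set<Integer> cand = fingerprints(candidateFile);
        if (bug.isEmpty()) return 0.0;
        int hits = 0;
        for (int fp : bug) if (cand.contains(fp)) hits++;
        return (double) hits / bug.size();
    }

    public static void main(String[] args) {
        String buggy = "if ( len > 0 ) memcpy ( dst , src , len + 1 ) ;";
        String copiedProject = "void f ( ) { if ( len > 0 ) memcpy ( dst , src , len + 1 ) ; }";
        String patchedProject = "void f ( ) { if ( len > 0 ) memcpy ( dst , src , len ) ; }";
        System.out.printf("unpatched copy: %.2f%n", containment(buggy, copiedProject));
        System.out.printf("patched copy:   %.2f%n", containment(buggy, patchedProject));
    }
}
```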
144
傳統關聯式資料庫暨欄導向資料庫之轉換機制研究-以台灣學術期刊搜尋引擎為例 / An approach to the translation mechanism from relational-based database to column-oriented database - take Taiwan academic journal search engine as an example
黃勁超, Huang, Chin Chao Unknown Date (has links)
With the arrival of the information explosion era, enterprises face the challenges brought by massive amounts of data: traditional relational databases cannot cope with the performance degradation and storage upgrade problems that such huge volumes of data cause. To solve the many problems brought by massive data, various theories have been proposed, and the most widely discussed among them is cloud computing. Today, many enterprises and individual users have gradually begun to adopt HBase, the column-oriented database built on Hadoop, currently the most representative distributed architecture in cloud computing, as their underlying database. This study therefore proposes a translation mechanism from a traditional relational database to the column-oriented database HBase, taking the Taiwan academic journal search engine as an example.
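A minimal illustration of the kind of row-to-column-family mapping such a translation mechanism produces, using a hypothetical journal-article table (the thesis's actual schema and key design are not given here): the relational primary key becomes the HBase row key, and the remaining columns are grouped under column families.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Translate one relational row (article_id, title, author, year) into an HBase-style
// row: a row key plus "family:qualifier" -> value cells. Family names "meta" and
// "content" are illustrative, not taken from the thesis.
public class RelationalToHBaseSketch {

    static Map<String, String> toHBaseRow(String articleId, String title,
                                          String author, int year) {
        Map<String, String> cells = new LinkedHashMap<>();
        cells.put("rowkey", "article-" + articleId);         // primary key -> row key
        cells.put("meta:author", author);                    // descriptive columns -> "meta" family
        cells.put("meta:year", Integer.toString(year));
        cells.put("content:title", title);                   // searchable text -> "content" family
        return cells;
    }

    public static void main(String[] args) {
        Map<String, String> row = toHBaseRow("000123",
                "A Survey of Column-Oriented Storage", "Chen, Wei", 2012);
        row.forEach((k, v) -> System.out.println(k + " = " + v));
        // With the HBase client this map would become a Put keyed on "article-000123",
        // e.g. put.addColumn(Bytes.toBytes("meta"), Bytes.toBytes("author"), ...).
    }
}
```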
145
Uma abordagem não intrusiva e automática para configuração do Hadoop / A non-intrusive and automatic approach to Hadoop configuration
Alves, Nathália de Meneses 29 September 2015 (has links)
The amount of digital data produced in recent years has increased significantly. MapReduce frameworks such as Hadoop have been widely used for processing big data on top of cloud resources. In spite of these advances, contemporary systems are complex and dynamic, which makes them hard to configure in order to improve application performance. Software auto-tuning is a solution to this problem, as it helps developers and system administrators handle hundreds of system parameters. For example, current work in the literature uses machine learning algorithms for automatic Hadoop configuration to improve performance. However, these solutions use single machine learning algorithms, making it unfeasible to compare them with each other to understand which approach is best suited to a given application and its input. In addition, current work is intrusive or exposes operational details to developers and/or system administrators. This work proposes a transparent, modular and hybrid approach to improve the performance of Hadoop applications. The approach proposes an architecture and implementation of transparent software that automatically configures Hadoop. Furthermore, it proposes a hybrid solution that combines genetic algorithms with various machine learning techniques as separate modules. A research prototype was implemented and evaluated, showing that the proposed approach can significantly reduce the execution time of the Hadoop WordCount and TeraSort applications autonomously. Furthermore, the approach converges quickly to the most suitable configuration for each application with low overhead. / Nas últimas décadas, a quantidade de dados gerados no mundo tem aumentado de maneira significativa. A Computação em Nuvem juntamente com o modelo de programação MapReduce, através do arcabouço Hadoop, têm sido utilizados para o processamento desses dados. Contudo, os sistemas contemporâneos ainda são complexos e dinâmicos, tornando-se difíceis de se configurar. A configuração automática de software é uma solução para esse problema, ajudando os programadores e administradores a gerir a complexidade desses sistemas. Por exemplo, há soluções na literatura que utilizam aprendizado de máquina para a configuração automática do Hadoop com o intuito de melhorar o desempenho das suas aplicações. Apesar desses avanços, as soluções atuais para configurar automaticamente o Hadoop utilizam soluções muito específicas, aplicando algoritmos de aprendizagem de máquinas isoladamente. Assim, esses algoritmos não são comparados entre si para entender qual abordagem é mais adequada para a configuração automática do Hadoop. Além disso, essas soluções são intrusivas, ou seja, expõem detalhes operacionais para programadores e/ou administradores de sistemas. Esse trabalho tem por objetivo propor uma abordagem transparente, modular e híbrida para melhorar o desempenho de aplicações Hadoop. A abordagem propõe uma arquitetura e implementação de software transparente que configura automaticamente o Hadoop. Além disso, a abordagem propõe uma solução híbrida que combina Algoritmos Genéticos e várias técnicas de aprendizado de máquina (machine learning) implementadas em módulos separados. Um protótipo de pesquisa foi implementado e avaliado, mostrando que a abordagem proposta consegue diminuir significativamente o tempo de execução das aplicações Hadoop WordCount e Terasort. Além disso, a abordagem consegue convergir rapidamente para a configuração mais adequada de cada aplicação, alcançando baixos níveis de custos adicionais (overhead).
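A compact sketch of the genetic-algorithm side of such a tuner, evolving a few Hadoop configuration parameters against a job-runtime fitness function. The parameter names are standard Hadoop 2.x keys used here for illustration, and evaluate() is a stand-in for actually submitting WordCount or TeraSort with those settings and timing it; the dissertation's real architecture, parameter set and machine learning modules are not reproduced here.

```java
import java.util.*;

// Toy genetic algorithm over three Hadoop parameters; fitness = simulated job runtime.
// In a real tuner, evaluate() would submit the job with these settings and measure it.
public class HadoopGaTunerSketch {

    static final String[] KEYS = {
            "mapreduce.job.reduces",          // number of reduce tasks
            "mapreduce.task.io.sort.mb",      // map-side sort buffer (MB)
            "mapreduce.map.memory.mb"         // container memory per map task (MB)
    };
    static final int[][] RANGES = {{1, 64}, {50, 512}, {512, 4096}};
    static final Random RNG = new Random(42);

    static int[] randomIndividual() {
        int[] g = new int[KEYS.length];
        for (int i = 0; i < g.length; i++)
            g[i] = RANGES[i][0] + RNG.nextInt(RANGES[i][1] - RANGES[i][0] + 1);
        return g;
    }

    // Stand-in fitness: pretend runtime is minimized around a hidden "good" configuration.
    static double evaluate(int[] g) {
        int[] sweetSpot = {16, 256, 2048};
        double runtime = 100;
        for (int i = 0; i < g.length; i++)
            runtime += Math.abs(g[i] - sweetSpot[i]) / (double) RANGES[i][1] * 100;
        return runtime;   // simulated seconds
    }

    public static void main(String[] args) {
        List<int[]> pop = new ArrayList<>();
        for (int i = 0; i < 20; i++) pop.add(randomIndividual());
        for (int gen = 0; gen < 30; gen++) {
            pop.sort(Comparator.comparingDouble(HadoopGaTunerSketch::evaluate));
            List<int[]> next = new ArrayList<>(pop.subList(0, 5));   // elitism
            while (next.size() < pop.size()) {
                int[] a = pop.get(RNG.nextInt(10)), b = pop.get(RNG.nextInt(10));
                int[] child = new int[KEYS.length];
                for (int i = 0; i < child.length; i++) {
                    child[i] = RNG.nextBoolean() ? a[i] : b[i];       // uniform crossover
                    if (RNG.nextDouble() < 0.1)                       // mutation
                        child[i] = RANGES[i][0] + RNG.nextInt(RANGES[i][1] - RANGES[i][0] + 1);
                }
                next.add(child);
            }
            pop = next;
        }
        pop.sort(Comparator.comparingDouble(HadoopGaTunerSketch::evaluate));
        int[] best = pop.get(0);
        for (int i = 0; i < KEYS.length; i++)
            System.out.println(KEYS[i] + " = " + best[i]);
        System.out.printf("simulated runtime: %.1f s%n", evaluate(best));
    }
}
```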
146
A Cloud Based Platform for Big Data Science
Islam, Md. Zahidul January 2014 (has links)
With the advent of cloud computing, resizable, scalable infrastructures for data processing are now available to everyone. Software platforms and frameworks that support data-intensive distributed applications, such as Amazon Web Services and Apache Hadoop, give users the necessary tools and infrastructure to work with thousands of scalable computers and process terabytes of data. However, writing scalable applications that run on top of these distributed frameworks is still a demanding and challenging task. The thesis aimed to advance the core scientific and technological means of managing, analyzing, visualizing, and extracting useful information from large data sets, collectively known as “big data”. The term “big data” in this thesis refers to large, diverse, complex, longitudinal and/or distributed data sets generated from instruments, sensors, internet transactions, email, social networks, Twitter streams, and/or all digital sources available today and in the future. We introduced architectures and concepts for implementing a cloud-based infrastructure for analyzing large volumes of semi-structured and unstructured data. We built and evaluated an application prototype for collecting, organizing, processing, visualizing and analyzing data from the retail industry gathered from indoor navigation systems and social networks (Twitter, Facebook, etc.). Our finding was that developing a large-scale data analysis platform is often quite complex when the processed data is expected to grow continuously in the future. The architecture varies depending on requirements. If we want to build a data warehouse and analyze the data afterwards (batch processing), the best choices are Hadoop clusters with Pig or Hive; this architecture has been proven at Facebook and Yahoo for years. On the other hand, if the application involves real-time data analytics, then the recommendation is Hadoop clusters with Storm, which has been used successfully at Twitter. After evaluating the developed prototype we introduced a new architecture able to handle both large-scale batch and real-time data. We also proposed an upgrade of the existing prototype to handle real-time indoor navigation data.
147
Hadoop Read Performance During Datanode Crashes / Hadoops läsprestanda vid datanodkrascher
Johannsen, Fabian, Hellsing, Mattias January 2016 (has links)
This bachelor thesis evaluates the impact of datanode crashes on the performance of the read operations of the Hadoop Distributed File System, HDFS. The goal is to better understand how datanode crashes, as well as certain parameters, affect the performance of the read operation by looking at the execution time of the get command. The parameters used are the number of crashed nodes, the block size and the file size. Data was collected by setting up a Linux test environment with ten virtual machines running Hadoop and running tests on it. From this data the average execution time and standard deviation of the get command were calculated. The network activity during the tests was also measured. The results showed that neither the number of crashed nodes nor the block size had any significant effect on the execution time. They also demonstrated that the execution time of the get command was not directly proportional to the size of the fetched file: the execution time was up to 4.5 times as long when the file size was four times as large, i.e. a four times larger file sometimes resulted in a more than four times longer execution time. However, the consequences of a datanode crash while fetching a small file appear to be much greater than with a large file. The average execution time increased by up to 36% when a large file was fetched, but by as much as 85% when fetching a small file.
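A small sketch of the kind of measurement harness such an experiment needs: it repeatedly runs the standard `hadoop fs -get` command against a file in HDFS and reports the mean and standard deviation of the wall-clock execution time. The paths, repetition count and the assumption that the `hadoop` binary is on the PATH are illustrative; the thesis's actual test scripts are not reproduced here.

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

// Time repeated `hadoop fs -get <hdfsPath> <localDir>` invocations and print mean/stddev.
public class HdfsGetTimer {
    public static void main(String[] args) throws Exception {
        String hdfsPath = args.length > 0 ? args[0] : "/bench/testfile-1gb";  // hypothetical path
        int runs = 10;
        double[] seconds = new double[runs];

        for (int i = 0; i < runs; i++) {
            File localCopy = new File("/tmp/get-run-" + i);
            long start = System.nanoTime();
            Process p = new ProcessBuilder("hadoop", "fs", "-get", hdfsPath, localCopy.getPath())
                    .inheritIO()
                    .start();
            if (p.waitFor() != 0) throw new RuntimeException("hadoop fs -get failed on run " + i);
            seconds[i] = (System.nanoTime() - start) / 1e9;
            Files.deleteIfExists(Paths.get(localCopy.getPath()));  // clean up between runs
        }

        double mean = 0;
        for (double s : seconds) mean += s;
        mean /= runs;
        double var = 0;
        for (double s : seconds) var += (s - mean) * (s - mean);
        double stddev = Math.sqrt(var / (runs - 1));   // sample standard deviation

        System.out.printf("get %s: mean %.2f s, stddev %.2f s over %d runs%n",
                hdfsPath, mean, stddev, runs);
    }
}
```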
148
Konsulters beskrivning av Big Data och dess koppling till Business Intelligence / Consultants' descriptions of Big Data and its connection to Business Intelligence
Besson, Henrik January 2012 (has links)
Most of us are constantly in contact with various data flows, which has become a completely natural part of our modern information society. Today's companies operate in a constantly changing environment, and the handling of data and information has become an increasingly important competitive factor, as the total amount of data in the digital world has grown sharply in recent years. One term for gigantic data volumes is Big Data, which has become a popular concept in the IT industry. Big Data brings entirely new analytical possibilities, but it has turned out that many companies are worried about how to manage and make use of the growing data volumes. The purpose of this study has been to contribute knowledge to the relatively unexplored field of Big Data, based on an inductive approach with interviews as the starting point. The problems that come with Big Data are usually described from three perspectives: data occurs in large volumes, with varying data types and sources, and data is generated at varying velocity. The results of the study showed that Big Data as a concept touches many different areas, and its significance, capability, ambition and scope can vary greatly between companies in different industries. The traditional technologies for data storage and extraction are not sufficient for handling the data referred to as Big Data. However, as new technology has been developed and older solutions have been upgraded, it is now possible to view information management and analytics from entirely new perspectives. Since Big Data essentially has the same purpose as the field of Business Intelligence, these solutions can suitably be integrated. A major challenge with Big Data is that it is not possible to know exactly what will be achieved through data collection and analysis. Once data has been collected, a business case should be developed with guidelines for what is to be achieved. There is great potential in this emerging market which is, after all, relatively immature. Information management will become increasingly important in the future, and for companies it is a matter of keeping up with the rapid development and gaining a good understanding of new trends in the IT world.
149
Big data - použití v bankovní sféře / Big data - application in banking
Uřídil, Martin January 2012 (has links)
There is a growing volume of global data, which offers new possibilities to those market participants who know how to take advantage of it. Data, information and knowledge are a new, highly regarded commodity, especially in the banking industry. Traditional data analytics is intended for processing data with known structure and meaning. But how can we get knowledge from data with no such structure? The thesis focuses on Big Data analytics and its use in the banking and financial industry. Defining specific applications in this area and describing the benefits for international and Czech banking institutions are the main goals of the thesis. The thesis is divided into four parts. The first part defines the Big Data trend and the second part specifies activities and tools in banking. The purpose of the third part is to apply Big Data analytics to those activities and show its possible benefits. The last part focuses on the particularities of Czech banking and shows what the current situation regarding Big Data in Czech banks is. The thesis gives a comprehensive description of the possibilities of using Big Data analytics. I see my personal contribution in the detailed characterization of its application to real banking activities.
150
Performance Evaluation of Data Intensive Computing In The Cloud
Kaza, Bhagavathi 01 January 2013 (has links)
Big data is a topic of active research in the cloud community. With increasing demand for data storage in the cloud, the study of data-intensive applications is becoming a primary focus. Data-intensive applications involve high CPU usage for processing large volumes of data on the scale of terabytes or petabytes. While some research exists on the performance of data-intensive applications in the cloud, none of it compares the Amazon Elastic Compute Cloud (Amazon EC2) and Google Compute Engine (GCE) clouds using multiple benchmarks. This study performs extensive research on the Amazon EC2 and GCE clouds using the TeraSort, MalStone and CreditStone benchmarks on the Hadoop and Sector data layers. Data collected for the Amazon EC2 and GCE clouds measures performance as the number of nodes is varied. This study shows that GCE is more efficient for data-intensive applications than Amazon EC2.