61 |
A MapReduce Framework for Heterogeneous Computing Architectures. Elteir, Marwa Khamis 01 June 2013 (has links)
Nowadays, an increasing number of computational systems are equipped with heterogeneous compute resources, i.e., resources following different architectures. This applies at the level of a single chip, a single node, and even supercomputers and large-scale clusters. With their impressive price-to-performance ratio as well as power efficiency compared to traditional multicore processors, graphics processing units (GPUs) have become an integral part of these systems. GPUs deliver high peak performance; however, efficiently exploiting their computational power requires exploring a multi-dimensional space of optimization methodologies, which is challenging even for the well-trained expert. The complexity of this multi-dimensional space arises not only from the traditionally well-known but arduous task of architecture-aware GPU optimization at design and compile time, but also from the partitioning and scheduling of the computation across these heterogeneous resources. Even with programming models like the Compute Unified Device Architecture (CUDA) and the Open Computing Language (OpenCL), the developer still needs to manage the data transfer between host and device and vice versa, orchestrate the execution of several kernels, and, more arduously, optimize the kernel code.
In this dissertation, we aim to deliver a transparent parallel programming environment for heterogeneous resources by leveraging the power of the MapReduce programming model and OpenCL programming language. We propose a portable architecture-aware framework that efficiently runs an application across heterogeneous resources, specifically AMD GPUs and NVIDIA GPUs, while hiding complex architectural details from the developer. To further enhance performance portability, we explore approaches for asynchronously and efficiently distributing the computations across heterogeneous resources. When applied to benchmarks and representative applications, our proposed framework significantly enhances performance, including up to 58% improvement over traditional approaches to task assignment and up to a 45-fold improvement over state-of-the-art MapReduce implementations. / Ph. D.
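To make the partitioning idea concrete, here is a minimal sketch in plain Python (the device names and relative throughputs are assumptions for illustration; the dissertation's framework dispatches OpenCL kernels rather than Python threads): map work is split across two heterogeneous workers in proportion to their throughput, and the partial results are merged in a reduce step.

    # Illustrative only: a throughput-weighted split of map work across two
    # heterogeneous workers (device names and weights are assumed, not the
    # dissertation's actual OpenCL scheduler), followed by a merged reduce.
    from collections import Counter
    from concurrent.futures import ThreadPoolExecutor

    def map_partition(lines):
        counts = Counter()
        for line in lines:
            counts.update(line.split())     # toy map: word count
        return counts

    data = ["the quick brown fox", "jumps over the lazy dog"] * 1000
    weights = {"amd_gpu": 3.0, "nvidia_gpu": 2.0}          # assumed throughputs
    cut = int(len(data) * weights["amd_gpu"] / sum(weights.values()))
    chunks = {"amd_gpu": data[:cut], "nvidia_gpu": data[cut:]}

    with ThreadPoolExecutor(max_workers=2) as pool:
        partials = list(pool.map(map_partition, chunks.values()))

    result = Counter()
    for partial in partials:                               # reduce: merge partial counts
        result.update(partial)
    print(result.most_common(3))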
|
62 |
Data-Intensive Biocomputing in the Cloud. Meeramohideen Mohamed, Nabeel 25 September 2013 (has links)
Next-generation sequencing (NGS) technologies have made it possible to rapidly sequence the human genome, heralding a new era of health-care innovations based on personalized genetic information. However, these NGS technologies generate data at a rate that far outstrips Moore's Law. As a consequence, analyzing this exponentially increasing data deluge requires enormous computational and storage resources, resources that many life science institutions do not have access to. As such, cloud computing has emerged as an obvious, but still nascent, solution.
This thesis investigates and designs an efficient framework for running and managing large-scale data-intensive scientific applications in the cloud. Based on the lessons learned from our parallel implementation of a genome analysis pipeline in the cloud, we aim to provide a framework for users to run such data-intensive scientific workflows using a hybrid setup of client and cloud resources. We first present SeqInCloud, our highly scalable parallel implementation of a popular genetic variant pipeline, the Genome Analysis Toolkit (GATK), on the Windows Azure HDInsight cloud platform. Together with a parallel implementation of GATK on Hadoop, we evaluate the potential of using cloud computing for large-scale DNA analysis and present a detailed study on efficiently utilizing cloud resources for running data-intensive, life-science applications. Based on our experience from running SeqInCloud on Azure, we present CloudFlow, a feature-rich workflow manager for running MapReduce-based bioinformatics pipelines utilizing both client and cloud resources. CloudFlow, built on top of an existing MapReduce-based workflow manager called Cloudgene, provides unique features that are not offered by existing MapReduce-based workflow managers, such as enabling simultaneous use of client and cloud resources, automatic data-dependency handling between client and cloud resources, and the flexibility of implementing user-defined plugins for data transformations. In general, we believe our work helps increase the adoption of cloud resources for running data-intensive scientific workloads. / Master of Science
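As a rough illustration of the hybrid scheduling idea (this is not CloudFlow's actual API; the step names, placements, and preferences below are invented), the following Python sketch walks a small pipeline, assigns each step to the client or the cloud, and records the data transfers implied by the dependencies.

    # Minimal sketch (not CloudFlow's API): schedule pipeline steps on "client"
    # or "cloud" resources based on where their input data currently resides,
    # and record transfers implied by data dependencies. Step names are made up.
    steps = [
        {"name": "quality_control", "input": "reads.fastq", "produces": "clean.fastq"},
        {"name": "alignment",       "input": "clean.fastq", "produces": "aligned.bam"},
        {"name": "variant_calling", "input": "aligned.bam", "produces": "variants.vcf"},
    ]
    location = {"reads.fastq": "client"}       # initial data placement (assumed)
    preferred = {"quality_control": "client", "alignment": "cloud",
                 "variant_calling": "cloud"}   # user- or cost-driven preference

    plan = []
    for step in steps:
        src = location[step["input"]]
        dst = preferred[step["name"]]
        if src != dst:                          # data dependency forces a transfer
            plan.append(("transfer", step["input"], src, dst))
            location[step["input"]] = dst
        plan.append(("run", step["name"], dst))
        location[step["produces"]] = dst        # output stays where the step ran

    for action in plan:
        print(action)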
|
63 |
On the Feasibility of MapReduce to Compute Phase Space Properties of Graphical Dynamical Systems: An Empirical Study. Hamid, Tania 09 July 2015 (has links)
A graph dynamical system (GDS) is a theoretical construct that can be used to simulate and analyze the dynamics of a wide spectrum of real-world processes that can be modeled as networked systems. One of our goals is to compute the phase space of a system, and for this, even 30-vertex graphs present a computational challenge, because the number of state transitions needed to compute the phase space is exponential in the number of graph vertices. These computations thus pose both memory and execution-speed challenges. To address this, we devise various MapReduce programming paradigms that can be used to characterize system state transitions and to compute phase spaces, functional equivalence classes, dynamic equivalence classes, and cycle equivalence classes of dynamical systems. We also evaluate these paradigms and analyze their suitability for modeling different GDSs. / Master of Science
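The flavor of such a paradigm can be sketched in a few lines of plain Python (the 4-vertex graph and the NOR local function below are illustrative choices, not taken from the thesis, and real runs use Hadoop rather than in-memory lists): the map step sends every global state to its successor, and the reduce step groups states by successor to expose the phase-space structure.

    # Sketch of the idea (plain Python, not Hadoop): enumerate all 2^n states of
    # a toy synchronous Boolean GDS, "map" each state to its successor, then
    # "reduce" by successor to collect predecessors.
    from collections import defaultdict
    from itertools import product

    edges = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # a 4-cycle, assumed

    def successor(state):
        # Each vertex applies NOR over its neighbours' current values.
        return tuple(int(not any(state[v] for v in edges[u])) for u in edges)

    # Map phase: emit (successor_state, predecessor_state) pairs.
    pairs = [(successor(s), s) for s in product([0, 1], repeat=len(edges))]

    # Reduce phase: group predecessors by successor.
    phase_space = defaultdict(list)
    for succ, pred in pairs:
        phase_space[succ].append(pred)

    fixed_points = [s for s, preds in phase_space.items() if s in preds]
    print("states:", len(pairs), "fixed points:", fixed_points)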
|
64 |
Partitioning XML data, towards distributed and parallel management / Méthode de Partitionnement pour le traitement distribué et parallèle de données XML. Malla, Noor 21 September 2012 (has links)
With the widespread diffusion of XML as a format for representing data generated and exchanged over the Web, many query and update engines have been designed and implemented in the last decade. One kind of engine that plays a crucial role in many applications is the main-memory system, which is distinguished by being easy to manage and to integrate into a programming environment. On the other hand, main-memory systems have scalability issues, as they load the entire document into main memory before processing. This thesis presents an XML partitioning technique that allows main-memory engines to process a class of XQuery expressions (queries and updates), which we dub "iterative", on arbitrarily large input documents. We provide a static analysis technique to recognize these expressions. The static analysis is based on paths extracted from the expression and does not need additional schema information. We provide algorithms that use this path information to partition the input documents, so that the query or update can be evaluated on each part separately and the final result obtained by simply concatenating the per-part results. These algorithms admit a streaming implementation, whose effectiveness is experimentally validated. Besides enabling scalability, our approach is also characterized by the fact that it is easily implementable in a MapReduce framework, thus enabling parallel query/update evaluation on the partitioned data.
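A very rough sketch of the partitioning step, in Python (the element names, part size, and wrapper tag are assumptions; the thesis's algorithm is driven by the paths extracted from the XQuery expression rather than a fixed tag): the document is streamed and its repeated children are split into fixed-size parts that can be queried independently.

    # Rough sketch of the partitioning idea (assumed element names, not the
    # thesis's path-driven algorithm): stream a large document and split its
    # repeated <record> children into fixed-size parts, each wrapped in a new
    # root tag, so an iterative query can be run on each part separately.
    import xml.etree.ElementTree as ET

    def write_part(fragments, index):
        with open("part_%03d.xml" % index, "w", encoding="utf-8") as out:
            out.write("<root>\n" + "".join(fragments) + "</root>\n")

    def partition(path, item_tag="record", items_per_part=10000):
        part, count, index = [], 0, 0
        for event, elem in ET.iterparse(path, events=("end",)):
            if elem.tag == item_tag:
                part.append(ET.tostring(elem, encoding="unicode"))
                elem.clear()                      # keep memory bounded
                count += 1
                if count == items_per_part:
                    write_part(part, index)
                    part, count, index = [], 0, index + 1
        if part:
            write_part(part, index)

    # partition("big.xml")   # hypothetical input file with repeated <record> elements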
|
65 |
Optimization for big joins and recursive query evaluation using intersection and difference filters in MapReduce / Utilisation de filtres d'intersection et de différence pour l'optimisation des jointures à grande échelle et l'exécution de requêtes récursives à l'aide de MapReduce. Phan, Thuong-Cang 07 July 2014 (has links)
The information technology community has created an unprecedented amount of data through large-scale applications. As a result, Big Data is considered a gold mine of information that just waits for the processing power to become available, reliable, and apt at evaluating complex analytic algorithms. MapReduce is one of the most popular programming models designed to support such processing. It has become a standard for processing, analyzing, and generating large data in a massively parallel manner. However, the MapReduce programming model suffers from severe limitations for operations beyond simple scans and groupings, particularly operations with multiple inputs. In this dissertation we investigate and optimize the evaluation, in a MapReduce environment, of one of the most salient and representative such operations: the join. We focus not only on two-way joins but also on complex joins such as multi-way joins and recursive joins. To achieve these objectives, we first devise a new type of filter, called an intersection filter, which uses a probabilistic model to represent an approximation of a set intersection. The intersection filter is then applied to two-way join operations to eliminate most non-joining elements in the input datasets before sending data to the actual join processing. In addition, we extend the intersection filter to improve the performance of three-way joins and chain joins, including cyclic chain joins with many shared join keys. We use the Lagrangian multiplier method to indicate a good choice among our optimized solutions for multi-way joins. Another important proposal is the difference filter, a probabilistic data structure designed to represent a set and examine disjoint elements of the set. It can be applied to a wide range of popular problems such as reconciliation, deduplication, and error correction, and in particular to the recursive join operation. A recursive join using the difference filter is implemented as an iteration of a single join job instead of two jobs (a join job and a difference job). This improvement reduces the number of executed jobs by half, along with the related overheads such as data rescanning, intermediate data, and communication for the deduplication and difference operations. Besides, this research also improves the general semi-naive algorithm, as well as the evaluation of recursive queries in MapReduce. We then provide general cost models for two-way joins, multi-way joins, and recursive joins. Thanks to these cost models, we can make more meaningful comparisons of the join algorithms. As a result, when using the proposed filters, the join operations can minimize disk I/O and communication costs. Moreover, the intersection filter-based join operations are demonstrated to be more efficient than existing solutions through experimental evaluations. Different join algorithms are compared with respect to the amount of intermediate data, the total output amount, the total execution time, and especially the task timelines. Finally, our improvements on the join operations contribute to the overall goal of optimizing data management for MapReduce applications on large-scale distributed infrastructures.
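The intuition behind the intersection filter can be conveyed with an ordinary Bloom filter (a minimal sketch, not the thesis's exact structure; the filter size, hash count, and toy datasets are arbitrary): build the filter over the join keys of one input and discard records of the other input that cannot possibly join, before the join job runs.

    # Sketch of the filtering idea using a plain Bloom filter; the thesis's
    # intersection filter is a refinement of this. Sizes and hash counts below
    # are arbitrary illustration values.
    import hashlib

    class BloomFilter:
        def __init__(self, size=1 << 20, hashes=5):
            self.size, self.hashes, self.bits = size, hashes, bytearray(size // 8 + 1)

        def _positions(self, key):
            digest = hashlib.sha256(key.encode()).digest()
            for i in range(self.hashes):
                chunk = int.from_bytes(digest[4 * i:4 * i + 4], "big")
                yield chunk % self.size

        def add(self, key):
            for p in self._positions(key):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, key):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

    left = [("k1", "a"), ("k2", "b"), ("k3", "c")]
    right = [("k2", "x"), ("k9", "y")]

    bf = BloomFilter()
    for key, _ in left:
        bf.add(key)                      # built in a pre-job, then shared with mappers

    filtered_right = [r for r in right if bf.might_contain(r[0])]
    print(filtered_right)                # only records that can possibly join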
|
66 |
Avaliação do Star Schema Benchmark aplicado a bancos de dados NoSQL distribuídos e orientados a colunas / Evaluation of the Star Schema Benchmark applied to NoSQL column-oriented distributed database systems. Scabora, Lucas de Carvalho 06 May 2016 (has links)
Due to the explosive increase in data volume, centralized data warehousing applications become very costly and face several problems in dealing with data scalability. This is related to the fact that these applications need to store huge volumes of data and to perform analytical queries (i.e., OLAP queries) against these voluminous data efficiently. One solution is to employ scenarios characterized by the use of NoSQL databases managed in parallel and distributed environments. Among the challenges related to these scenarios, there is a need to investigate the performance of data warehousing applications that store the data warehouse (DW) in column-oriented NoSQL databases. In this context, benchmarks are widely used to perform standard and experimental analysis of distinct systems. However, most benchmarks for DWs focus on relational database systems and centralized environments. In this master's research, we investigate how to extend the Star Schema Benchmark (SSB), which was proposed for centralized DWs, to the distributed and column-oriented NoSQL database HBase. We introduce proposals and analyses mainly based on experimental performance tests considering each of the four steps of a benchmark, i.e., schema and workload, data generation, parameters and metrics, and validation. The main results of this master's research are as follows: (i) proposal of the FactDate schema, which optimizes queries that access few dimensions of the DW; (ii) investigation of the applicability of different schemas to different business scenarios; (iii) proposal of two additional queries for the SSB workload; (iv) analysis of the data distribution generated by the SSB, verifying whether the data aggregated by OLAP queries are balanced among the nodes of a cluster; (v) investigation of the influence of three important parameters of the Hadoop MapReduce framework on OLAP query processing; (vi) evaluation of the relationship between OLAP query performance and the number of nodes in a cluster; and (vii) employment of hierarchical materialized views, using the Spark framework, to optimize the processing performance of consecutive OLAP queries that require progressively more or less aggregated data. These results represent important findings that enable the future proposal of a benchmark for DWs stored in NoSQL databases and managed in parallel and distributed environments.
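To illustrate the kind of schema decision being evaluated (the exact FactDate layout is not reproduced here; the row-key format, column families, and sample values are assumptions), the sketch below co-locates an SSB lineorder fact with a few date attributes in a single wide row, as a column-family store such as HBase would hold it.

    # Purely illustrative: one way to co-locate an SSB lineorder fact with its
    # date attributes in a column-family store is a composite row key plus
    # separate families for measures and the date dimension. Plain dicts stand
    # in for HBase rows; the layout is an assumption, not the FactDate schema.
    def to_row(lineorder, date_dim):
        row_key = "%s_%s_%s" % (date_dim["d_year"], lineorder["lo_orderkey"],
                                lineorder["lo_linenumber"])
        return row_key, {
            "measures": {"lo_revenue": lineorder["lo_revenue"],
                         "lo_discount": lineorder["lo_discount"]},
            "date": {"d_year": date_dim["d_year"],
                     "d_yearmonth": date_dim["d_yearmonth"]},
        }

    key, families = to_row(
        {"lo_orderkey": 42, "lo_linenumber": 1, "lo_revenue": 1000, "lo_discount": 4},
        {"d_year": 1997, "d_yearmonth": "Jan1997"})
    print(key, families)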
|
67 |
Adaptação de algoritmos de processamento de dados ambientais para o contexto de Big Data / Adaptation of environmental data processing algorithms to the Big Data context. Campos, Guilherme Falcão da Silva 23 November 2015 (has links)
Environmental research depends on sensor-generated data to create time series for the variables being analyzed. The amount of data tends to increase as more and more sensors are created and installed. After some time the datasets become massive and require new ways of processing and storing the data. This work seeks ways to address these issues using a technological solution able to store and process large amounts of data. The solution used is Apache Hadoop, a tool whose purpose is to solve Big Data problems. In order to evaluate the tool, different datasets and time-series analysis algorithms were used. Analyses of both chaotic and non-chaotic time series were implemented: the wavelet transform, a similarity search using the Euclidean distance function, the computation of the box-counting dimension, and the computation of the correlation dimension. These implementations were adapted to the MapReduce parallel processing paradigm.
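As an example of how one of these analyses maps onto the paradigm (a simplified sketch in plain Python rather than actual Hadoop jobs; the synthetic series and query are illustrative), the similarity search can be expressed as a map over all sliding windows followed by a minimum-distance reduce.

    # Simplified sketch of the similarity-search analysis with the Euclidean
    # distance, written as plain map and reduce functions rather than Hadoop
    # jobs; the series and query below are synthetic.
    import math

    def map_windows(series, query):
        w = len(query)
        for start in range(len(series) - w + 1):
            window = series[start:start + w]
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(window, query)))
            yield start, dist                       # key: window offset

    def reduce_min(pairs):
        return min(pairs, key=lambda kv: kv[1])     # best-matching offset

    series = [math.sin(i / 5.0) for i in range(500)]
    query = series[120:150]
    print(reduce_min(map_windows(series, query)))   # expect offset 120, distance 0.0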
|
68 |
Entrepôts de données NoSQL orientés colonnes dans un environnement cloud / Columnar NoSQL data warehouses in the cloud environment. Dehdouh, Khaled 05 November 2015 (has links)
The work presented in this thesis aims at proposing approaches to build data warehouses using the columnar NoSQL model. The use of NoSQL models is motivated by the advent of big data and the inability of the relational model, usually used to implement data warehouses, to allow data scalability. Indeed, NoSQL models are suitable for storing and managing massive data. They were originally designed to build databases whose storage model is "key/value". Other models then appeared to account for the variability of the data: column-oriented, document-oriented, and graph-oriented. We have used the column-oriented NoSQL model for building massive data warehouses because it is the most suitable for decisional queries, which are defined over a set of columns (measures and dimensions) from the warehouse. However, the columnar NoSQL model does not offer online analytical processing (OLAP) operators for exploiting the data warehouse. We present in this thesis new solutions for the logical and physical modeling of columnar NoSQL data warehouses. We have proposed a new approach that allows building data cubes while taking the characteristics of the columnar storage environment into account. Thus, we have defined new cube operators that allow building columnar cubes: C-CUBE (Columnar-CUBE) for columnar relational data warehouses; MC-CUBE (MapReduce Columnar-CUBE) for columnar NoSQL data warehouses when measures and dimensions are stored in different tables; and, finally, CN-CUBE (Columnar NoSQL-CUBE) when measures and dimensions are gathered in the same table according to a new logical model that we proposed. We have studied the performance of NoSQL dimensional data models and of our OLAP operators, and we have proposed a new star join index, C-SJI (Columnar Star Join Index), suitable for columnar NoSQL data warehouses that store measures and dimensions separately. To evaluate this contribution, we have defined a cost model to measure the impact of using this index. Furthermore, we have proposed a logical model called FLM (Flat Logical Model) to represent a columnar NoSQL data cube and enable better support by columnar NoSQL DBMSs. To validate our contributions, we have developed a software framework, CG-CDW (Cube Generation for Columnar Data Warehouses), to generate OLAP cubes from columnar data warehouses. We have also developed a columnar NoSQL decision-support benchmark, CNSSB (Columnar NoSQL Star Schema Benchmark), based on the SSB (Star Schema Benchmark), and finally we conducted several tests that showed the effectiveness of the different aggregation operators that we proposed.
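The following sketch conveys the idea behind cube construction over a flat, denormalized layout (it is not the CN-CUBE operator itself; the column names and values are toy assumptions): a measure is grouped on two dimension columns with a map/reduce-style aggregation over column arrays.

    # Sketch only (not the CN-CUBE operator itself): given a flat, denormalized
    # table stored column by column, compute one cuboid of a cube by grouping a
    # measure on two dimension columns with a map/reduce-style aggregation.
    from collections import defaultdict

    columns = {                                   # toy column store, assumed data
        "d_year":     [1997, 1997, 1998, 1998, 1998],
        "c_region":   ["EU", "US", "EU", "EU", "US"],
        "lo_revenue": [100, 250, 300, 120, 80],
    }

    def cuboid(columns, dims, measure):
        groups = defaultdict(int)
        for i in range(len(columns[measure])):    # map: emit (dimension values, measure)
            key = tuple(columns[d][i] for d in dims)
            groups[key] += columns[measure][i]    # reduce: sum per group
        return dict(groups)

    print(cuboid(columns, ("d_year", "c_region"), "lo_revenue"))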
|
69 |
Nouveaux algorithmes pour la détection de communautés disjointes et chevauchantes basés sur la propagation de labels et adaptés aux grands graphes / New algorithms for disjoint and overlapping community detection based on label propagation and adapted to large graphs. Attal, Jean-Philippe 19 January 2017 (has links)
Graphs are mathematical structures consisting of a set of nodes (objects or persons) in which some pairs are linked by edges. Graphs can be used to model complex systems. One of the main problems in graph theory is the community detection problem, which aims to find a partition of the nodes of a graph in order to understand its structure. For instance, by representing insurance contracts by nodes and their relationships by edges, detecting groups of highly connected nodes leads to detecting similar profiles and thus to evaluating risk profiles. Several algorithms have been proposed in response to this currently open research field. One of the fastest methods is label propagation. It is a local method in which each node changes its own label according to its neighbourhood. Unfortunately, this method has two major drawbacks. The first is the instability of the method: each trial rarely gives the same result. The second is bad propagation, which can lead to huge, meaningless communities (the giant-community problem). The first contribution of this thesis is (i) proposing a stabilisation method for label propagation, with artificial dams on the edges of some networks, in order to limit bad label propagation. Complex networks are also characterized by nodes that may belong to several communities; we call this a cover. For example, in protein-protein interaction networks, some proteins may have several functions, and detecting these functions according to their communities could help to cure cancers. The second contribution of this thesis deals with (ii) the implementation of an algorithm with membership functions to detect potential overlapping nodes. The size of the graphs is also to be considered, because some networks contain several millions of nodes and edges, like the Amazon product co-purchasing network. We propose (iii) a parallel and distributed version of community detection using core label propagation. A study and a comparative analysis of the proposed algorithms are carried out based on the quality of the resulting partitions and covers.
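For reference, here is the classic label propagation baseline on a toy graph (plain Python; the thesis's stabilisation, artificial dams, overlap membership functions, and parallel/distributed execution are not reproduced here): each node repeatedly adopts the most frequent label among its neighbours until no label changes.

    # Baseline asynchronous label propagation on a toy graph; ties are broken
    # at random, which is the source of the instability the thesis addresses.
    from collections import Counter
    import random

    graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}

    def label_propagation(graph, max_iters=20, seed=0):
        rng = random.Random(seed)
        labels = {node: node for node in graph}           # each node starts alone
        for _ in range(max_iters):
            changed = False
            order = list(graph)
            rng.shuffle(order)                            # random visiting order
            for node in order:
                counts = Counter(labels[nb] for nb in graph[node])
                best = max(counts.values())
                choice = rng.choice([l for l, c in counts.items() if c == best])
                if choice != labels[node]:
                    labels[node], changed = choice, True
            if not changed:
                break
        return labels

    # Two triangles joined by a single edge: typically yields two communities.
    print(label_propagation(graph))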
|