21

"Big Data" Management and Security Application to Telemetry Data Products

Kalibjian, Jeff, October 2013
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / "Big Data" [1] and the security challenge of managing it are hot topics in the IT world. The term "Big Data" is used to describe very large data sets that cannot be processed by traditional database applications in "tractable" periods of time. Securing data in a conventional database is challenging enough; securing data whose size may exceed hundreds of terabytes or even petabytes is even more daunting! As the size of telemetry products and post-processed telemetry products continues to grow, "Big Data" management techniques and the securing of that data may have ever-increasing application in the telemetry realm. After reviewing "Big Data" security and management basics, potential applications to post-processed telemetry products are explored.
22

The Impact of Near-Duplicate Documents on Information Retrieval Evaluation

Khoshdel Nikkhoo, Hani, 18 January 2011
Near-duplicate documents can adversely affect the efficiency and effectiveness of search engines. Due to the pairwise nature of the comparisons required for near-duplicate detection, this process is extremely costly in terms of time and processing power. Despite the ubiquitous presence of near-duplicate detection algorithms in commercial search engines, their application and impact in research environments have not been fully explored. The implementation of near-duplicate detection algorithms forces trade-offs between efficiency and effectiveness, entailing careful testing and measurement to ensure acceptable performance. In this thesis, we describe and evaluate a scalable implementation of a near-duplicate detection algorithm, based on standard shingling techniques, running under a MapReduce framework. We explore two different shingle sampling techniques and analyze their impact on the near-duplicate document detection process. In addition, we investigate the prevalence of near-duplicate documents in the runs submitted to the ad hoc task of the TREC 2009 Web Track.
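For readers unfamiliar with the technique, a minimal sketch of shingle-based resemblance (the comparison the thesis distributes with MapReduce, not its actual implementation) might look as follows. The shingle size and similarity threshold are illustrative choices, and the naive pairwise scan is precisely the quadratic cost the thesis works around:

    def shingles(text, k=4):
        """Return the set of k-word shingles of a document."""
        words = text.lower().split()
        return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

    def resemblance(doc_a, doc_b, k=4):
        """Jaccard resemblance between two documents' shingle sets."""
        a, b = shingles(doc_a, k), shingles(doc_b, k)
        if not a or not b:
            return 0.0
        return len(a & b) / len(a | b)

    def near_duplicates(docs, threshold=0.9):
        """Naive pairwise scan; this quadratic step is what a MapReduce
        implementation spreads across a cluster."""
        ids = list(docs)
        return [(ids[i], ids[j])
                for i in range(len(ids)) for j in range(i + 1, len(ids))
                if resemblance(docs[ids[i]], docs[ids[j]]) >= threshold]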
23

G2P-DBSCAN: Data Partitioning Strategy and Distributed Processing of DBSCAN with MapReduce

Araújo Neto, Antônio Cavalcante, January 2016
ARAÚJO NETO, Antônio Cavalcante. G2P-DBSCAN: Data Partitioning Strategy and Distributed Processing of DBSCAN with MapReduce. M.Sc. dissertation (Computer Science), Universidade Federal do Ceará, Fortaleza, 2016. 63 pp. / Clustering is a data mining technique that groups the elements of a data set so that elements in the same group are more similar to each other than to elements in other groups. This thesis studies the problem of running the density-based clustering algorithm DBSCAN in a distributed fashion under the MapReduce paradigm. In distributed processing it is important that the partitions to be processed have approximately equal sizes, since the total processing time is bounded by the time the node holding the largest amount of data takes to finish the computation assigned to it. For this reason we also propose a data partitioning strategy, called G2P, which seeks to distribute the data set among the partitions in a balanced way and which takes the characteristics of the DBSCAN algorithm into account. More specifically, G2P uses grid and graph structures to guide the division of the space along low-density regions. The distributed processing of DBSCAN itself consists of two MapReduce phases plus an intermediate phase that identifies clusters that may have been split across more than one partition, called merge candidates. The first MapReduce phase applies DBSCAN to each data partition individually; the second verifies and, if necessary, corrects the merge-candidate clusters. Experiments on real data sets show that G2P-DBSCAN outperforms the adopted baseline in all the scenarios considered, both in execution time and in the quality of the partitions obtained.
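As a rough illustration of the two-phase pattern the abstract describes (and only that; G2P's grid-and-graph partitioning and the exact merge criteria are the thesis's contribution and are not reproduced here), a self-contained sketch can cluster each partition independently and then union local clusters whose points meet within eps across partitions:

    import math

    def dbscan(points, eps, min_pts):
        """Naive O(n^2) DBSCAN; returns one label per point (-1 = noise)."""
        labels = [None] * len(points)
        cluster = -1

        def neighbors(i):
            return [j for j in range(len(points))
                    if math.dist(points[i], points[j]) <= eps]

        for i in range(len(points)):
            if labels[i] is not None:
                continue
            seeds = neighbors(i)
            if len(seeds) < min_pts:
                labels[i] = -1          # noise; may be claimed by a cluster later
                continue
            cluster += 1
            labels[i] = cluster
            while seeds:
                j = seeds.pop()
                if labels[j] in (None, -1):
                    labels[j] = cluster
                    nj = neighbors(j)
                    if len(nj) >= min_pts:       # j is a core point, expand
                        seeds.extend(k for k in nj if labels[k] is None)
        return labels

    def distributed_dbscan(partitions, eps, min_pts):
        """Phase 1: cluster each partition locally. Phase 2: merge local
        clusters (pid, label) whose points from different partitions lie
        within eps of each other (the "merge candidates" of the text)."""
        tagged = []
        for pid, pts in partitions.items():
            for p, lbl in zip(pts, dbscan(pts, eps, min_pts)):
                if lbl != -1:
                    tagged.append((p, (pid, lbl)))
        parent = {cid: cid for _, cid in tagged}

        def find(c):
            while parent[c] != c:
                c = parent[c]
            return c

        for a, ca in tagged:
            for b, cb in tagged:
                if ca[0] != cb[0] and math.dist(a, b) <= eps:
                    parent[find(ca)] = find(cb)
        return [(p, find(c)) for p, c in tagged]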
25

Scaling Geospatial Searches in Large Spatial Databases

Cary, Ariel, 8 November 2011
Modern geographical databases store a rich set of aspatial attributes in addition to geographic data. Retrieving spatial records constrained on spatial and aspatial attributes gives users the ability to perform more interesting spatial analyses via composite spatial searches; e.g., in a real estate database, "Find the nearest homes for sale to my current location that have a backyard and whose prices are between $50,000 and $80,000". Efficient processing of such composite searches requires combined indexing strategies over multiple types of data. Existing spatial query engines commonly apply a two-filter approach (spatial filter followed by non-spatial filter, or vice versa), which can incur large performance overheads. At the same time, the amount of geolocation data in databases is rapidly increasing, due in part to advances in geolocation technologies (e.g., GPS-enabled mobile devices) that make it possible to associate location data with nearly every object or event. Hence, practical spatial databases may face data ingestion challenges at large data volumes. In this dissertation, we first show how indexing spatial data with R-trees (a typical data pre-processing task) can be scaled in MapReduce, a well-adopted parallel programming model, developed by Google, for data-intensive problems. Near-linear scalability was observed in index construction tasks over large spatial datasets. Subsequently, we develop novel techniques for simultaneously indexing spatial, textual, and numeric data to process k-nearest-neighbor searches with aspatial Boolean selection constraints. In particular, numeric ranges are compactly encoded and explicitly indexed. Experimental evaluations with real spatial databases showed query response times within acceptable ranges for interactive search systems.
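To make the semantics of such composite searches concrete, here is a deliberately naive sketch (a linear scan, not the combined-index technique the dissertation develops); the field names follow the quoted real-estate example and the coordinates are made up:

    import heapq, math

    homes = [
        {"loc": (25.76, -80.19), "price": 62_000, "backyard": True},
        {"loc": (25.80, -80.21), "price": 75_000, "backyard": False},
        {"loc": (25.77, -80.20), "price": 55_000, "backyard": True},
    ]

    def knn_with_constraints(records, query_loc, k, predicate):
        """k nearest records that satisfy the aspatial predicate."""
        matches = (r for r in records if predicate(r))
        return heapq.nsmallest(
            k, matches, key=lambda r: math.dist(r["loc"], query_loc))

    nearest = knn_with_constraints(
        homes, (25.775, -80.195), k=2,
        predicate=lambda r: r["backyard"] and 50_000 <= r["price"] <= 80_000)

A combined index answers the same query without first materializing all predicate matches or all spatial neighbors, which is what makes the two-filter approach costly.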
26

Evaluating MapReduce System Performance: A Simulation Approach

Wang, Guanying, 13 September 2012
The scale of data generated and processed is exploding in the Big Data era. The MapReduce system popularized by open-source Hadoop is a powerful tool for this exploding data problem and is widely employed in many areas involving large amounts of data. In many circumstances, hypothetical MapReduce systems must be evaluated, e.g., to provision a new MapReduce system to meet a certain performance goal, to upgrade a currently running system to meet increasing business demands, or to evaluate novel network topologies, new scheduling algorithms, or resource arrangement schemes. The traditional trial-and-error solution involves the time-consuming and costly process of first building a real cluster and then benchmarking it. In this dissertation, we propose to evaluate hypothetical MapReduce systems using simulation. This approach offers significantly lower turn-around time and lower cost than experiments. Simulation cannot entirely replace experiments, but it can be used as a preliminary step to reveal potential flaws and gain critical insights. We studied MapReduce systems in detail and developed a comprehensive performance model for MapReduce, including sub-task, phase-level performance models for both map and reduce tasks and a model for resource contention between multiple concurrently running processes. Based on this performance model, we developed a comprehensive simulator for MapReduce, MRPerf. MRPerf is the first full-featured MapReduce simulator: it supports both workload simulation and resource contention, and it still offers the most complete feature set among all MapReduce simulators to date. Using MRPerf, we conducted two case studies, evaluating scheduling algorithms and shared storage in MapReduce, without building real clusters. Furthermore, to integrate simulation and performance prediction into MapReduce systems and leverage predictions to improve system performance, we developed an online prediction framework for MapReduce, which periodically runs simulations within a live Hadoop MapReduce system. The framework can predict task execution within a window in the near future, and these predictions can be used by other components of MapReduce systems to improve performance. Our results show that the framework achieves high prediction accuracy and incurs negligible overhead. We present two potential use cases: prefetching and a dynamically adapting scheduler.
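The appeal of the simulation approach shows up even in a toy cost model (which is in no way MRPerf's model; MRPerf simulates phase-level behavior, network topology, and disk contention): schedule synthetic task durations onto a fixed number of slots and read off an estimated makespan without building a cluster. All numbers below are invented:

    import heapq

    def simulate(map_costs, reduce_costs, slots):
        """Greedy list scheduling: each task goes to the earliest-free slot;
        reduces start only after all maps finish (no overlap modeled)."""
        def run(costs, free_at):
            heapq.heapify(free_at)
            for c in costs:
                t = heapq.heappop(free_at)
                heapq.heappush(free_at, t + c)
            return max(free_at)

        maps_done = run(map_costs, [0.0] * slots)
        return run(reduce_costs, [maps_done] * slots)

    # e.g. 100 maps of ~12 s and 20 reduces of ~30 s on 16 slots
    makespan = simulate([12.0] * 100, [30.0] * 20, slots=16)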
27

Data Partitioning in Parallel Data Management Systems

Liroz Gistau, Miguel, 17 December 2013
During the last years, the volume of data that is captured and generated has exploded. Advances in computer technologies, which provide cheap storage and increased computing capabilities, have allowed organizations to perform complex analyses of this data and to extract valuable knowledge from it. This trend has been very important not only for industry but also for science, where enhanced instruments and more complex simulations call for efficient management of huge quantities of data. Parallel computing is a fundamental technique in the management of large quantities of data, as it leverages the concurrent utilization of multiple computing resources. To take advantage of parallel computing, we need efficient data partitioning techniques, which are in charge of dividing the whole data set and assigning the partitions to the processing nodes. Data partitioning is a complex problem, as it has to consider different and often contradictory issues, such as data locality, load balancing, and maximizing parallelism. In this thesis, we study the problem of data partitioning, particularly in continuously growing scientific parallel databases and in the MapReduce framework. In the case of scientific databases, we consider partitioning in very large databases to which new data is appended continuously, e.g., astronomical applications. Existing approaches are limited, since the complexity of the workload and the continuous appends restrict the applicability of traditional techniques. We propose two partitioning algorithms that dynamically assign new data elements to partitions using a technique based on data affinity. Our algorithms obtain very good data partitions in a low execution time compared to traditional approaches. We also study how to improve the performance of the MapReduce framework using data partitioning techniques. In particular, we are interested in efficient partitioning of the input data sets to reduce the amount of data that has to be transferred in the shuffle phase. We design and implement a strategy which, by capturing the relationships between input tuples and intermediate keys, obtains an efficient partitioning that can be used to significantly reduce MapReduce's communication overhead.
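As a small illustration of the idea in the closing sentences, under the simplifying assumption that we can observe (or sample) which intermediate keys each input tuple emits, tuples sharing keys can be greedily co-located so that a reducer's input is mostly produced on one node. The function names here are ours, not the thesis's:

    from collections import defaultdict

    def partition_by_affinity(tuple_keys, n_partitions):
        """tuple_keys: {tuple_id: set of intermediate keys it emits}.
        Place each tuple on the partition already holding most of its
        keys; fall back to the least-loaded partition."""
        key_home = {}                 # intermediate key -> partition
        load = [0] * n_partitions
        assignment = {}
        for tid, keys in tuple_keys.items():
            votes = defaultdict(int)
            for k in keys:
                if k in key_home:
                    votes[key_home[k]] += 1
            if votes:
                best = max(votes, key=votes.get)
            else:
                best = min(range(n_partitions), key=lambda p: load[p])
            assignment[tid] = best
            load[best] += 1
            for k in keys:
                key_home.setdefault(k, best)
        return assignment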
28

Energy-efficient Straggler Mitigation for Big Data Applications on the Clouds

Phan, Tien-Dat, 30 November 2017
Energy consumption is an important concern for large-scale Big Data processing systems and results in huge monetary cost. Due to hardware heterogeneity and contention between concurrent workloads, stragglers (i.e., tasks performing relatively slower than other tasks) can severely increase a job's execution time and energy consumption. Consequently, straggler mitigation has become an important technique for improving the performance of large-scale Big Data processing systems. Typically, it consists of two phases: straggler detection and straggler handling. In the detection phase, slow tasks (e.g., tasks with speed or progress below the average) are marked as stragglers. Then, stragglers are handled using the speculative execution technique: a copy of the detected straggler is launched in parallel with it, in the expectation that the copy finishes earlier, thus reducing the straggler's execution time. Although a large number of studies have proposed improving the performance of Big Data applications using speculative execution, few of them have studied the energy efficiency of their solutions. Addressing this gap, we conduct an experimental study to fully understand the impact of straggler mitigation techniques on the performance and energy consumption of Big Data processing systems. We observe that current straggler mitigation techniques are not energy efficient, which motivates further study aimed at higher energy efficiency in straggler mitigation. In terms of straggler detection, we introduce a novel framework for comprehensively characterizing and evaluating straggler detection mechanisms, and we accordingly propose a new energy-driven straggler detection mechanism. This mechanism is implemented in Hadoop and is demonstrated to have higher energy efficiency than state-of-the-art mechanisms. In terms of straggler handling, we present a new speculative copy allocation method, which takes into consideration the impact of resource heterogeneity on performance and energy consumption. Finally, an energy-efficient straggler handling mechanism is introduced. This mechanism provides more resource availability for launching speculative copies by adopting a dynamic resource reservation approach, and it is demonstrated, via trace-driven simulation, to bring a large improvement in energy efficiency.
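A minimal sketch of the progress-based detection rule mentioned above ("speed or progress below the average") may help; the 0.8 slowness factor is an illustrative choice in the spirit of Hadoop's default speculation heuristics, and the thesis's energy-driven mechanism goes beyond this rule:

    def detect_stragglers(progress_rates, slow_factor=0.8):
        """progress_rates: {task_id: progress per second}. Flag a task
        when its rate falls below slow_factor times the mean rate."""
        if not progress_rates:
            return []
        mean_rate = sum(progress_rates.values()) / len(progress_rates)
        return [tid for tid, rate in progress_rates.items()
                if rate < slow_factor * mean_rate]

    # t5 progresses far slower than its siblings and gets flagged
    flagged = detect_stragglers(
        {"t1": 1.0, "t2": 0.9, "t3": 1.1, "t4": 1.0, "t5": 0.3})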
29

Efficient Big Data Processing on Large-Scale Shared Platforms: Managing I/Os and Failures

Yildiz, Orcun, 8 December 2017
As of 2017, we live in a data-driven world where data-intensive applications bring fundamental improvements to our lives in many different areas such as business, science, health care, and security. This has boosted the growth of data volumes (i.e., the deluge of Big Data). To extract useful information from this huge amount of data, different data processing frameworks have emerged, such as MapReduce, Hadoop, and Spark. Traditionally, these frameworks run on large-scale platforms (i.e., HPC systems and clouds) to leverage their computation and storage power. Usually, these platforms are used concurrently by multiple users and multiple applications, with the goal of better resource utilization. Though there are benefits to sharing these platforms, several challenges arise when a large number of users and applications use them at the same time, among which I/O and failure management are the major ones that can impact efficient data processing. To this end, we first focus on I/O-related performance bottlenecks for Big Data applications on HPC systems. We start by characterizing the performance of Big Data applications on these systems, identifying I/O interference and latency as the major performance bottlenecks. Next, we zoom in on the I/O interference problem to further understand its root causes. We then propose an I/O management scheme to mitigate the high latencies that Big Data applications may encounter on HPC systems. Moreover, we introduce interference models for Big Data and HPC applications, based on the findings of our experimental study of the root causes of I/O interference, and we leverage these models to minimize the impact of interference on the performance of Big Data and HPC applications. Second, we focus on the impact of failures on the performance of Big Data applications by studying failure handling in shared MapReduce clusters. We introduce a failure-aware scheduler which enables fast failure recovery while optimizing data locality, thus improving application performance.
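For intuition about why interference dominates, consider a toy fair-share model (our simplification, not the models derived in the thesis, which come from experiments on real HPC storage): with the backend's bandwidth split among concurrent applications, effective I/O time stretches linearly with the number of peers:

    def io_time(bytes_to_move, peak_bandwidth, concurrent_apps):
        """Seconds to move data if the shared storage backend's bandwidth
        is divided evenly among concurrent applications."""
        effective_bw = peak_bandwidth / max(1, concurrent_apps)
        return bytes_to_move / effective_bw

    # a 100 GB phase on a 10 GB/s store: 10 s alone, 40 s with 3 peers
    alone = io_time(100e9, 10e9, 1)
    shared = io_time(100e9, 10e9, 4)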
30

Modeling and Control of Cloud Services: Application to MapReduce Performance and Dependability

Berekmeri, Mihaly, 18 November 2015
The amount of raw data produced by everything from our mobile phones, tablets, and computers to our smart watches brings novel challenges in data storage and analysis. Many solutions have arisen in industry to treat these large quantities of raw data, the most popular being the MapReduce framework. However, while the deployment complexity of such computing systems is steadily increasing, continuous availability and fast response times are still the expected norm. Furthermore, with the advent of virtualization and cloud solutions, the environments where these systems need to run are becoming more and more dynamic. Therefore, ensuring the performance and dependability constraints of a MapReduce service still poses significant challenges. In this thesis we address the problem of guaranteeing the performance and availability of MapReduce-based cloud services, taking an approach based on control theory. We develop the first dynamic models of a MapReduce service running a concurrent workload, and we develop several control laws to ensure different quality-of-service objectives. First, classical feedback and feedforward controllers are developed to guarantee service performance. To further adapt our controllers to the cloud, for example by minimizing the number of reconfigurations and their cost, a novel event-based control architecture is introduced for performance management. Finally, we develop the optimal control architecture MR-Ctrl, which is the first solution to provide guarantees in terms of both performance and dependability for MapReduce systems while keeping cost at a minimum. All the modeling and control approaches are evaluated both in simulation and experimentally using MRBS, a comprehensive benchmark suite for evaluating the performance and dependability of MapReduce systems. Validation experiments were run on a real 60-node Hadoop MapReduce cluster executing a data-intensive Business Intelligence workload. Our experiments show that the proposed techniques can successfully guarantee performance and dependability constraints.
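To give the flavor of the control-theoretic approach (the gains, the actuator, and the sampled latencies below are illustrative stand-ins; MR-Ctrl itself is built on identified MapReduce models, and nothing here reproduces it), a proportional-integral loop tracking a response-time target might look like:

    def make_pi_controller(setpoint, kp, ki, dt):
        """PI controller: returns a step(measured) function suggesting a
        change in node count based on the response-time error."""
        integral = 0.0
        def step(measured):
            nonlocal integral
            error = measured - setpoint        # positive: service too slow
            integral += error * dt
            return kp * error + ki * integral
        return step

    controller = make_pi_controller(setpoint=30.0, kp=0.4, ki=0.001, dt=30.0)
    nodes = 20
    for latency in [45.0, 38.0, 33.0, 31.0]:   # sampled job latencies (s)
        nodes = max(1, round(nodes + controller(latency)))

The integral term is what lets such a loop remove steady-state error when the workload shifts, which pure threshold-based autoscaling rules cannot guarantee.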
