21

Transversal I/O scheduling for parallel file systems : from applications to devices / Escalonamento de E/S transversal para sistemas de arquivos paralelos : das aplicações aos dispositivos

Boito, Francieli Zanon January 2015
This thesis focuses on I/O scheduling as a tool to improve I/O performance on parallel file systems by alleviating interference effects. It is usual for High Performance Computing (HPC) systems to provide a shared storage infrastructure for applications. In this situation, when multiple applications concurrently access the shared parallel file system, their accesses affect each other, compromising the efficacy of I/O optimization techniques. We conducted an extensive performance evaluation of five scheduling algorithms at a parallel file system's data servers, with experiments executed on different platforms and under different access patterns. The results indicate that schedulers' results are affected by applications' access patterns, since the performance improvement obtained through a scheduling algorithm must surpass its overhead. At the same time, scheduling results are affected by the characteristics of the underlying I/O subsystem - especially the storage devices. Different devices present different levels of sensitivity to access sequentiality and size, affecting how much performance can be improved through I/O scheduling. For these reasons, the main objective of this thesis is to provide I/O scheduling with double adaptivity: to applications and to devices. We obtain information about applications' access patterns through trace files from previous executions. We applied machine learning to build a classifier capable of identifying the spatiality and request-size aspects of access patterns from streams of previous requests. Furthermore, we propose an approach to efficiently obtain the sequential-to-random throughput ratio of storage devices by running benchmarks for only a subset of the parameters and estimating the remaining ones through linear regressions. This information on application and storage device characteristics is used to select the best-fitting scheduling algorithm through a decision tree. Our approach improves performance by up to 75% over an approach that uses the same scheduling algorithm in all situations, without adaptability. Moreover, it improves performance in up to 64% more situations and decreases performance in up to 89% fewer situations. Our results show that both aspects - applications and storage devices - are essential for making good scheduling choices. Although no scheduling algorithm can provide performance gains in all situations, we show that through double adaptivity it is possible to apply I/O scheduling techniques to improve performance while avoiding situations where they would impair it.
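A minimal sketch of the double-adaptive idea this abstract describes, assuming hypothetical scheduler names, features, and thresholds; the thesis derives its real classifier, decision tree, and device profiles from training data and benchmarks, none of which is reproduced here:

```python
# Sketch: double-adaptive scheduler selection (hypothetical names/thresholds).
# Illustrates the shape of the idea only: profile the device with a regression
# over a benchmarked subset, classify the workload, then pick an algorithm.
import numpy as np

def seq_to_random_ratio(request_sizes, measured_ratios, query_size):
    """Estimate the sequential/random throughput ratio for an unmeasured
    request size with a linear regression over a benchmarked subset."""
    slope, intercept = np.polyfit(request_sizes, measured_ratios, 1)
    return slope * query_size + intercept

def choose_scheduler(spatiality, request_size, device_ratio):
    """Toy decision tree: pick a scheduling algorithm from the workload's
    access pattern and the device's sensitivity to sequentiality."""
    if device_ratio < 1.2:          # device barely rewards sequential access
        return "no-op"              # scheduling overhead would dominate
    if spatiality == "contiguous" and request_size == "small":
        return "aggregation"        # reorder/merge into large sequential runs
    return "timeorder"              # otherwise keep arrival order cheaply

# Example: benchmark 3 request sizes, estimate a 4th, then decide.
ratio = seq_to_random_ratio([4, 64, 1024], [1.1, 2.3, 6.8], 256)
print(choose_scheduler("contiguous", "small", ratio))
```

The profiling step matters because benchmarking every parameter combination is expensive; a few measured points plus regression keep device characterization cheap.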
22

Dinamismo de servidores de dados no sistema de arquivos dNFSp / Data Servers Dynamism in the dNFSp File System

Hermann, Everton January 2006
One of the most important challenges for high performance systems designers is storing and transferring large amounts of data between the nodes of the system. Different approaches have been proposed to solve this storage performance problem. Cluster file systems such as PVFS, Lustre, and NFSp are examples of such systems: they distribute the functionality of a file system across the nodes of a cluster, achieving a high level of parallelism and offering a larger storage space than centralized solutions. Usually the file system nodes are of two types, metadata servers and data servers, and the placement of those services on a cluster is left to the cluster administrator. Such configuration is not an obvious task, as most file systems do not allow changing the configuration after installation. A suboptimal initial configuration may result in a file system that does not fit the users' needs, and changing that configuration may require reinstalling the file system. The objective of this work is to propose a model to handle the dynamism of data servers in a cluster file system. Three scenarios were studied, and for each one we designed suitable runtime reconfiguration strategies. The first case originates in the system administrator's actions, adding or removing data servers to change the capacity of the file system. Tests showed that with a homogeneous load distribution across the data servers it was possible to obtain the best results from the system. The second scenario treats the insertion of temporary data servers by the user, aimed at meeting the temporary storage needs of specific applications. Tests comparing application performance with and without temporary data servers showed that, in all cases, the application with temporary data servers performed better, reaching gains of up to 20%. The last scenario combines replication techniques with server dynamism, making it possible to keep the file system working even after the loss of a data server. The tests showed that the loss of a data server may unbalance the load across the remaining servers, degrading overall file system performance.
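A toy illustration, under an assumed round-robin striping scheme rather than dNFSp's actual policies, of why adding a data server at runtime forces data movement and why balanced load across servers matters:

```python
# Sketch: naive round-robin restriping when a data server joins at runtime.
# Hypothetical model; the real system's redistribution strategies differ.
def stripe_map(num_stripes, servers):
    """Assign each stripe index to a data server, round-robin."""
    return {s: servers[s % len(servers)] for s in range(num_stripes)}

old = stripe_map(8, ["ds0", "ds1", "ds2"])
new = stripe_map(8, ["ds0", "ds1", "ds2", "ds3"])   # ds3 added at runtime
moved = [s for s in old if old[s] != new[s]]
print(f"{len(moved)}/8 stripes must migrate after adding ds3")
```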
24

DecaFS: A Modular Distributed File System to Facilitate Distributed Systems Education

Meth, Halli Elaine 01 June 2014
Data quantity, speed requirements, reliability constraints, and other factors encourage industry developers to build distributed systems and use distributed services. Software engineers are therefore exposed to distributed systems and services daily in the workplace. However, distributed computing is hard to teach in Computer Science courses due to the complexity distribution brings to all problem spaces. This presents a gap in education where students may not fully understand the challenges introduced by distributed systems. Teaching students distributed computing concepts would better prepare them for industry development work. DecaFS, the Distributed Educational Component Adaptable File System, is a modular distributed file system designed for educational use. The goal of the system is to teach distributed computing concepts to undergraduate- and graduate-level students by allowing them to develop small, digestible portions of the system. The system is broken up into layers, and each layer is broken up into modules, so that students can build or modify different components in small, assignment-sized portions. Students can replace modules or entire layers by following the DecaFS APIs and recompiling the system. This allows the behavior of the DFS (Distributed File System) to change based on student implementation, while providing base functionality for students to work from. Our implementation includes a code base of core DecaFS modules that students can work from, plus basic implementations of non-core DecaFS modules. The basic non-core modules can be modified to implement more complex distribution techniques without modifying core modules. We have shown the feasibility of developing a modular DFS while adhering to requirements such as configurable sizes (file, stripe, chunk) and support for multiple data replication strategies.
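A sketch of the swappable-module boundary the abstract describes, transposed to Python for brevity; DecaFS itself is not written in Python, and these class and method names are hypothetical, not the actual DecaFS API:

```python
# Sketch: a non-core module behind a fixed interface, replaceable by students
# without touching core code. Names are hypothetical, not DecaFS's real API.
from abc import ABC, abstractmethod

class ReplicationStrategy(ABC):
    """Interface a student-written replication module would implement."""
    @abstractmethod
    def place_chunk(self, chunk_id: int, nodes: list[str], copies: int) -> list[str]:
        ...

class RoundRobinReplication(ReplicationStrategy):
    def place_chunk(self, chunk_id, nodes, copies):
        # Deterministic placement: chunk k lands on nodes k, k+1, ... (mod n).
        return [nodes[(chunk_id + i) % len(nodes)] for i in range(copies)]

# Swapping in a different subclass changes DFS behavior; core code only ever
# calls the interface.
print(RoundRobinReplication().place_chunk(5, ["n0", "n1", "n2"], 2))
```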
25

PolyFS Visualizer

Fallon, Paul Martin 01 June 2016
File systems, one of the most important operating system topics, control how we store and access data and form a key part of a computer scientist's understanding of the underlying mechanisms of a computer. However, file systems, with their abstract concepts and lack of concrete learning aids, are a confusing subject for students. Historically at Cal Poly, CPE 453, Introduction to Operating Systems, has been one of the most failed classes in the computing majors, motivating the need for better teaching and learning tools. Tools that give students concrete examples of abstract concepts could also better prepare them for industry. The PolyFS Visualizer is a block-level file system visualization service built for the PolyFS and TinyFS file system design specifications currently used by some of the professors teaching CPE 453. The service allows students to easily view the blocks of their file system and see metadata, each block's binary content, and the interlinked structure. Students can either compile their file system code with a provided block emulation library to build their disk on a remote server and use a visualization website, or load the file backing their file system directly into the visualization service to view it locally. This allows students to easily view, debug, and explore their implementation of a file system and understand how different design decisions affect its operation. The implementation includes three main components: a disk emulation library in C for compilation with student code, a Node.js back-end that handles students' file systems and block operations, and a read-only visualization service. We conducted two surveys of students to determine the usefulness of the PolyFS Visualizer. Students responded that the PolyFS Visualizer helps with the PolyFS file system design project, and they offered several ideas for future features and expansions.
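A minimal sketch, in Python rather than the library's C, of the block-emulation idea behind such a tool; the block size and API names here are assumptions, not the actual PolyFS library interface:

```python
# Sketch: back a "disk" with an ordinary file and address it in fixed-size
# blocks, the layer a block-level visualizer would read and parse.
BLOCK_SIZE = 256          # assumed; real specs define their own size

class EmulatedDisk:
    def __init__(self, path: str, num_blocks: int):
        self.path, self.num_blocks = path, num_blocks
        with open(path, "wb") as f:
            f.write(b"\x00" * BLOCK_SIZE * num_blocks)   # zero-filled disk

    def write_block(self, n: int, data: bytes) -> None:
        assert n < self.num_blocks and len(data) <= BLOCK_SIZE
        with open(self.path, "r+b") as f:
            f.seek(n * BLOCK_SIZE)
            f.write(data.ljust(BLOCK_SIZE, b"\x00"))     # pad to block size

    def read_block(self, n: int) -> bytes:
        with open(self.path, "rb") as f:
            f.seek(n * BLOCK_SIZE)
            return f.read(BLOCK_SIZE)

disk = EmulatedDisk("tinyfs.img", num_blocks=16)
disk.write_block(0, b"superblock")       # a visualizer would decode this
print(disk.read_block(0)[:10])
```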
26

User-Space, Custom File System for Proxy Servers

Dhar, Meghna 02 September 2003
No description available.
27

Comparison of Persistence of Deleted Files on Different File Systems and Disk Types

Chhajed, Chinmay Amul (18403644) 19 April 2024
<p dir="ltr">The presence of digital devices in various settings, from workplaces to personal spaces, necessitates reliable and secure data storage solutions. These devices store data on non-volatile media like Solid State Drives (SSDs) and Hard Disk Drives (HDDs), ensuring data preservation even after power loss. Files, fundamental units of data storage, are created, modified, and deleted through user activities like application installations or file management. File systems, acting as the backbone of the system, manage these files on storage devices.</p><p dir="ltr">This research explores how three key factors: (1) different operating systems running various file system types (ext4, NTFS, FAT, etc.), (2) different disk types (SSD and HDD), and (3) common user activities (system shutdowns, reboots, web browsing, downloads, etc.) influence the persistence of deleted files.</p><p dir="ltr">This research aims to fill a gap in the understanding by looking at how these factors influence how quickly new information overwrites deleted files. This is especially important for digital forensics, where investigators need to be sure they can find all the evidence on a device. The research will focus on how operating systems handle deleted files and how everyday activities affect the chances of getting them back. This can ultimately improve data security and make digital forensics more reliable.</p>
28

Scalable Data Management for Object-based Storage Systems

Wadhwa, Bharti 19 August 2020
Parallel I/O performance is crucial to sustain scientific applications on large-scale High-Performance Computing (HPC) systems. Large-scale distributed storage systems, in particular object-based storage systems, face severe challenges in managing data efficiently. Inefficient data management leads to poor I/O and storage performance in HPC applications and scientific workflows. Some of the main challenges for efficient data management arise from poor resource allocation, load imbalance across object storage targets, and inflexible data sharing between applications in a workflow. In addition, parallel I/O makes it challenging to shoehorn in new interfaces, such as taking advantage of multiple layers of storage and supporting analysis in the data path. Solving these challenges to improve the performance and efficiency of object-based storage systems is crucial, especially for the upcoming era of exascale systems. This dissertation is focused on solving these major challenges in object-based storage systems by providing scalable data management strategies. In the first part of the dissertation (Chapter 3), we present a resource-contention-aware load balancing tool (iez) for large-scale distributed object-based storage systems. In Chapter 4, we extend iez to support Progressive File Layout for the Lustre object-based storage system. In the second part (Chapter 5), we present a technique to facilitate data sharing in scientific workflows using object-based storage, with our proposed tool Workflow Data Communicator. In the last part of this dissertation, we present a solution for transparent data management in the multi-layer storage hierarchy of present and next-generation HPC systems. This dissertation shows that by intelligently employing scalable data management techniques, the flexibility and performance of scientific applications and workflows in object-based storage systems can be enhanced manyfold. Our proposed data management strategies can guide the software design of next-generation HPC storage systems to efficiently support data for scientific applications and workflows. / Doctor of Philosophy / Large-scale object-based storage systems face severe challenges in managing data efficiently for HPC applications and workflows. These storage systems often manage and share data inflexibly, without considering the load imbalance and resource contention in the underlying multi-layer storage hierarchy. This dissertation first studies how resource contention and inflexible data sharing mechanisms impact HPC applications' storage and I/O performance, and then presents a series of efficient techniques, tools, and algorithms to provide efficient and scalable data management for current and next-generation HPC storage systems.
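A tiny sketch of the contention-aware placement idea behind a tool like iez, with an assumed scalar load metric and none of the real tool's Lustre integration or prediction machinery:

```python
# Sketch: least-loaded selection of object storage targets (OSTs) for a new
# file's stripes. The load metric and target names are hypothetical.
def pick_targets(ost_load: dict[str, float], stripes: int) -> list[str]:
    """Choose the least-loaded OSTs for a file striped across `stripes` targets."""
    return sorted(ost_load, key=ost_load.get)[:stripes]

load = {"ost0": 0.91, "ost1": 0.20, "ost2": 0.55, "ost3": 0.35}
print(pick_targets(load, stripes=2))   # -> ['ost1', 'ost3']
```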
29

On the Use of Containers in High Performance Computing

Abraham, Subil 09 July 2020
The lightweight, portable, and flexible nature of containers is driving their widespread adoption in cloud solutions. Data analysis and deep learning applications have especially benefited from containerized solutions. As such data analysis is also being utilized in the high performance computing (HPC) domain, the need for container support in HPC has become paramount. However, container adoption in HPC faces crucial performance and I/O challenges. One obstacle is that while there have been container solutions for HPC, such solutions have not been thoroughly investigated, especially regarding their impact on the crucial I/O throughput needs of HPC. To this end, this paper provides a first-of-its-kind empirical analysis of state-of-the-art representative container solutions (Docker, Podman, Singularity, and Charliecloud) in HPC environments, especially how containers interact with HPC storage systems. We present the design of an analysis framework that is deployed on all nodes in an HPC environment and captures CPU, memory, network, and file I/O statistics from the nodes and the storage system. We are able to garner key insights from our analysis, e.g., Charliecloud outperforms the other container solutions in terms of container start-up time, while Singularity and Charliecloud are equivalent in I/O throughput. But this comes at a cost, as Charliecloud invokes the most metadata and I/O operations on the underlying Lustre file system. By identifying such optimization opportunities, we can enhance the performance of containers atop HPC and help the aforementioned applications. / Master of Science / Containers are a technology that allows an application to be packaged along with its ideal environment, all the way down to its preferred operating system. This lets the application run anywhere that supports containers without a huge hit to application performance, and so containers have seen wide adoption in the cloud. These qualities have also made them very appealing for scientific research in national labs. Modern research relies heavily on the power of computing to model, simulate, and test the behavior of real-world entities, often making use of large amounts of data and utilizing machine learning and deep learning. Doing this often requires the high performance computing power found in supercomputers. In most cases, scientists just want to write their code and expect it to work. Their applications might depend on other source code that forms part of their standard toolkit, which they would also expect to be installed in the supercomputing environment. This may not always be the case, taking the scientists' focus away from their work to ensure their requirements are set up in the supercomputing environment, which might require extensive cooperation with the operations team responsible for the supercomputers. Containers easily solve this problem because they package everything together. However, the use of containers in these environments has not been extensively tested, especially for applications that are very heavy on the analysis of large quantities of data. To fill this gap, this work analyzes the performance of several state-of-the-art container technologies (Docker, Podman, Singularity, Charliecloud), with a particular focus on their interaction with the Lustre data storage systems widely used in supercomputing environments. As part of this work, we design an analysis setup that captures the behavior of various aspects of the HPC environment, such as CPU, memory, and network usage and data movement, while using containers to run data-heavy applications. We garner important insights about their performance that can help inform the best choice of container technology given an environment and the kind of application to be run.
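A minimal sketch of the per-node statistics capture such a framework performs, assuming psutil-style sampling; the metric set and interval are illustrative, and the real framework also collects storage-system-side (Lustre) counters:

```python
# Sketch: sample per-node CPU, memory, disk, and network counters, the kind
# of data an HPC container-analysis framework would aggregate per run.
import psutil

def sample_node(interval: float = 1.0) -> dict:
    io = psutil.disk_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=interval),  # blocks to sample
        "mem_percent": psutil.virtual_memory().percent,
        "read_bytes": io.read_bytes,                           # cumulative
        "write_bytes": io.write_bytes,
        "net_sent": psutil.net_io_counters().bytes_sent,
    }

for _ in range(3):                     # poll a few samples per node
    print(sample_node())
```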
30

Energy savings and performance improvements with SSDs in the Hadoop Distributed File System / Economia de energia e aumento de desempenho usando SSDs no Hadoop Distributed File System

Polato, Ivanilton 29 August 2016
Energy issues have gathered strong attention over the past decade, reaching IT data processing infrastructures, which now need to cope with this responsibility by adjusting existing platforms to reach acceptable performance while reducing energy consumption. As the de facto platform for Big Data, Apache Hadoop has evolved significantly over the last years, with more than 60 releases bringing new features. By implementing the MapReduce programming paradigm and leveraging HDFS, its distributed file system, Hadoop has become a reliable and fault-tolerant middleware for parallel and distributed computing over large datasets. Nevertheless, Hadoop may struggle under certain workloads, resulting in poor performance and high energy consumption. Users increasingly demand that high performance computing solutions address sustainability and limit energy consumption. In this thesis, we introduce HDFSH, a hybrid storage mechanism for HDFS that uses a combination of hard disks and solid-state disks to achieve higher performance while saving power in Hadoop computations. HDFSH brings to the middleware the best of HDs (affordable cost per GB and high storage capacity) and SSDs (high throughput and low energy consumption) in a configurable fashion, using dedicated storage zones for each storage device type. We implemented our mechanism as a block placement policy for HDFS and assessed it over six recent releases of Hadoop with different architectural properties. Results indicate that our approach increases overall job performance while decreasing energy consumption under most hybrid configurations evaluated. Our results also showed that, in many cases, storing only part of the data in SSDs results in significant energy savings and execution speedups.
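A minimal sketch of a zoned hybrid block-placement rule in the spirit of HDFSH; the zone size and the first-N-blocks policy are assumptions for illustration, not the thesis's actual placement policy:

```python
# Sketch: route a file's blocks to SSD- or HD-backed zones by block index,
# so only part of the data occupies the scarce, low-energy SSD zone.
def place_block(block_index: int, ssd_zone_blocks: int,
                ssd_nodes: list[str], hd_nodes: list[str]) -> str:
    """Send the first `ssd_zone_blocks` blocks of a file to the SSD zone,
    the rest to HDs, round-robin within each pool."""
    pool = ssd_nodes if block_index < ssd_zone_blocks else hd_nodes
    return pool[block_index % len(pool)]

# First 2 blocks of each file land on SSD-backed nodes, the rest on HDs.
for b in range(5):
    print(b, place_block(b, 2, ["ssd-a", "ssd-b"], ["hd-a", "hd-b", "hd-c"]))
```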
