About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

De la conception d'une plateforme de télétravail virtualisée et unifiée : Analyses socio-techniques du travail "à distance" équipé / On the design of a virtualized and unified teleworking platform: socio-technical analyses of equipped "remote" working

Marrauld, Laurie, 05 December 2012
This doctoral thesis in Management Sciences is grounded in the WITE 2.0 project, dedicated to the technical and organisational analysis of an ICT device under development: an integrated teleworking platform. The platform makes it possible to work "remotely", in connected or disconnected mode, from any terminal (PC, phone, tablet), in thin-client mode and in a cloud-computing environment. The design of this platform raised questions about the place of information and communication technologies (ICT) in work activities carried out "at a distance" from one's work collective. The research strategy consisted of two main phases: the first aimed to chart the diversity of "telework" configurations, and the second to understand how the unified communication technologies involved in the platform's design are appropriated, and where their limits lie. Both phases were conducted from a "situated action" perspective, following a qualitative methodology based on interview and observation studies. The results describe the realities of remote working practices in equipped mobile situations, the limits of the equipment, and the tactics that actors build while putting the technology into practice. They also reveal the norms, often tacit, and the use values attached to these new technologies, and inform the platform's design through managerial recommendations covering its technical, usage and service aspects.
2

Efficient cross-architecture hardware virtualisation

Spink, Thomas, January 2017
Hardware virtualisation is the provision of an isolated virtual environment that represents real physical hardware. It enables operating systems, or other system-level software (the guest), to run unmodified in a “container” (the virtual machine) that is isolated from the real machine (the host). There are many use-cases for hardware virtualisation that span a wide range of end-users. For example, home users wanting to run multiple operating systems side-by-side (such as running a Windows® operating system inside an OS X environment) will use virtualisation to accomplish this. In research and development environments, developers building experimental software and hardware want to prototype their designs quickly, and so will virtualise the platform they are targeting to isolate it from their development workstation. Large-scale computing environments employ virtualisation to consolidate hardware, enforce application isolation, migrate existing servers or provision new servers. However, the majority of these use-cases call for same-architecture virtualisation, where the architecture of the guest and the host machines match—a situation that can be accelerated by the hardware-assisted virtualisation extensions present on modern processors. But there is significant interest in virtualising the hardware of different architectures on a host machine, especially in the architectural research and development worlds. Typically, the instruction set architecture of a guest platform will be different from that of the host machine, e.g. an ARM guest on an x86 host will use an ARM instruction set, whereas the host will be using the x86 instruction set. Therefore, to enable this cross-architecture virtualisation, each guest instruction must be emulated by the host CPU—a potentially costly operation. This thesis presents a range of techniques for accelerating this instruction emulation, improving over a state-of-the-art instruction set simulator by 2.64x. But emulation of the guest platform’s instruction set is not enough for full hardware virtualisation. In fact, this is just one challenge in a range of issues that must be considered. Specifically, another challenge is efficiently handling the way external interrupts are managed by the virtualisation system. This thesis shows that when employing efficient instruction emulation techniques, it is not feasible to arbitrarily divert control-flow without consideration being given to the state of the emulated processor. Furthermore, it is shown that it is possible for the virtualisation environment to behave incorrectly if particular care is not given to the point at which control-flow is allowed to diverge. To solve this, a technique is developed that maintains efficient instruction emulation, and correctly handles external interrupt sources. Finally, modern processors have built-in support for hardware virtualisation in the form of instruction set extensions that enable the creation of an abstract computing environment, indistinguishable from real hardware. These extensions enable guest operating systems to run directly on the physical processor, with minimal supervision from a hypervisor. However, these extensions are geared towards same-architecture virtualisation, and as such are not immediately well-suited for cross-architecture virtualisation.
This thesis presents a technique for exploiting these existing extensions and using them in a cross-architecture virtualisation setting, improving the performance of a novel cross-architecture virtualisation hypervisor over the state of the art by 2.5x.
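As a rough illustration of why per-instruction emulation is costly (this sketch is not taken from the thesis; the two-instruction guest ISA is invented for the example), the following Python loop shows the fetch, decode and dispatch work a naive emulator repeats for every guest instruction, which is exactly the overhead the thesis sets out to reduce.

```python
# Minimal interpreter sketch for a hypothetical two-instruction guest ISA.
# Every guest instruction costs a fetch, a decode and a host-level dispatch,
# the overhead that techniques such as dynamic binary translation amortise.

GUEST_MEMORY = [
    ("movi", 0, 5),      # r0 <- 5
    ("addi", 0, 7),      # r0 <- r0 + 7
    ("movi", 1, 3),      # r1 <- 3
]

def run(guest_memory):
    regs = [0] * 4          # emulated guest register file
    pc = 0                  # emulated guest program counter
    while pc < len(guest_memory):
        op, rd, imm = guest_memory[pc]      # fetch + decode
        if op == "movi":                    # dispatch on the opcode
            regs[rd] = imm
        elif op == "addi":
            regs[rd] += imm
        else:
            raise ValueError(f"unknown guest opcode: {op}")
        pc += 1                             # advance the emulated PC
    return regs

if __name__ == "__main__":
    print(run(GUEST_MEMORY))   # -> [12, 3, 0, 0]
```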
3

vNUMA: Virtual shared-memory multiprocessors

Chapman, Matthew, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2009
Shared memory systems, such as SMP and ccNUMA topologies, simplify programming and administration. On the other hand, systems without hardware support for shared memory, such as clusters of commodity workstations, are commonly used due to cost and flexibility considerations. In this thesis, virtualisation is proposed as a technique that can bridge the gap between these architectures. The resulting system, vNUMA, is a hypervisor with a unique feature: it provides the illusion of shared memory across separate nodes on a fast network. This allows a cluster of workstations to be transformed into a single shared memory multiprocessor, supporting existing operating systems and applications. Such an approach could also have applications for emerging highly-parallel architectures, allowing a shared memory programming model to be retained while reducing hardware complexity. To build such a system, it is necessary to meld both a high-performance hypervisor and a high-performance distributed shared memory (DSM) system. This thesis addresses the challenges inherent in both of these tasks. First, designing an efficient hypervisor layer is considered; since vNUMA is implemented on the Itanium processor architecture, this is with particular reference to Itanium processor virtualisation. Then, novel DSM protocols are developed that allow SMP consistency models to be reproduced while providing better performance than a simple atomically-consistent DSM system. Finally, the system is evaluated, proving that it can provide good performance and compelling advantages for a variety of applications.
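The DSM protocols developed for vNUMA are not reproduced in the abstract; the toy sketch below (hypothetical, single-writer/multiple-reader write-invalidation only) merely illustrates the kind of per-page bookkeeping a basic DSM layer performs when it provides the illusion of shared memory over a network, i.e. the baseline that vNUMA's protocols improve on.

```python
# Toy write-invalidate DSM directory: one home node tracks, per page, which
# nodes hold read copies and which node (if any) holds the page writable.
# A didactic baseline only, not the protocol developed in the thesis.

class DsmDirectory:
    def __init__(self):
        self.readers = {}   # page -> set of node ids with a read-only copy
        self.writer = {}    # page -> node id holding the page writable, or None

    def read_fault(self, page, node):
        """Node takes a read fault: downgrade any writer, then share the page."""
        messages = []
        w = self.writer.get(page)
        if w is not None and w != node:
            messages.append(("downgrade", w, page))     # writer -> read-only
            self.readers.setdefault(page, set()).add(w)
            self.writer[page] = None
        self.readers.setdefault(page, set()).add(node)
        return messages    # messages the home node would send over the network

    def write_fault(self, page, node):
        """Node takes a write fault: invalidate other copies, grant ownership."""
        messages = [("invalidate", r, page)
                    for r in self.readers.get(page, set()) if r != node]
        w = self.writer.get(page)
        if w is not None and w != node:
            messages.append(("invalidate", w, page))
        self.readers[page] = set()
        self.writer[page] = node
        return messages

if __name__ == "__main__":
    d = DsmDirectory()
    print(d.read_fault(page=0x1000, node=1))   # []
    print(d.write_fault(page=0x1000, node=2))  # [('invalidate', 1, 4096)]
```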
4

Entre ciel et terre, l'horizon virtuel : expériences artistiques et géographie du virtuel à l'ère interconnectée / Between the sky and the ground, the virtual horizon : artistic experiences and geographies of the virtual in the interconnected era

Pan, Cheng-Yu, 28 April 2014
This thesis examines the situation of the individual facing the virtual horizon of the interconnected era, and the creativity that emerges from cyberspace. It is divided into two parts. The first consists of an analysis of artistic experiments developed in relation to networks in various forms: installation, net.art, video, digital photography and GPS performance. The four art projects presented allow an interrogation of the question of the gaze which, in the visual arts, runs through the author, the apparatus and the spectator alike. The second part seeks to respond further to the issues raised by these artistic experiments. The metaphor of the "hydrological cycle" is first proposed as a way of understanding computer networks. The becoming of the image of the world is then approached through a historical survey of geographical maps. The hypothesis proposed is that if this evolution is understood as the transformation of a mythological conception of the world into a rational representation, the imaginary engaged by digital maps may reintroduce certain features of a mythological representation. Cultural phenomena located at the frontier of the cyberworld are then analysed, in which questions of the virtual prove decisive, in particular augmented reality, the duration of virtual immersion and "auto-virtualisation"; these rejoin the hypothesis of a mythology proper to the virtual space-time built on the networks.
5

Mitigation of Virtunoid Attacks on Cloud Computing Systems

Forsell, Daniel McKinnon, January 2015
Virtunoid is a proof-of-concept exploit abusing a vulnerability in the open-source hardware virtualisation control program QEMU-KVM. The vulnerability stems from improper hotplugging of emulated embedded circuitry in the Intel PIIX4 southbridge, resulting in memory corruption and dangling pointers. The exploit can be used to compromise the availability of the virtual machine, or to escalate privileges and compromise the confidentiality of resources in the host system. The research presented in this dissertation shows that the discretionary access control system, provided by default in most Linux operating systems, is insufficient to protect the QEMU-KVM hypervisor against the Virtunoid exploit. It further shows that the open-source solutions AppArmor and grsecurity enhance the Linux operating system with additional protection against the Virtunoid exploit through mandatory access control, either through profiling or through role-based access control. The research also shows that the host intrusion prevention system PaX does not provide any additional protection against the Virtunoid exploit. The comprehensive, hands-on approach of this dissertation can be reproduced and quantified for the comparisons needed in future research.
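As a practical aside (not taken from the dissertation), one quick way to check whether mandatory access control of the kind evaluated here is actually applied to a running hypervisor is to read each QEMU process's AppArmor label from /proc/<pid>/attr/current; a value of "unconfined" means only discretionary access control applies. A minimal Python sketch:

```python
# Report the AppArmor confinement label of running QEMU processes by reading
# /proc/<pid>/attr/current (a standard Linux interface; "unconfined" means the
# process runs without an AppArmor profile). Illustration only.

import os

def apparmor_label(pid):
    try:
        with open(f"/proc/{pid}/attr/current") as f:
            return f.read().strip() or "unconfined"
    except OSError:
        return "unknown"

def qemu_processes():
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
        except OSError:
            continue        # process exited while we were scanning
        if comm.startswith("qemu"):
            yield int(pid), comm

if __name__ == "__main__":
    for pid, comm in qemu_processes():
        print(f"{pid:>7}  {comm:<20}  {apparmor_label(pid)}")
```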
6

Content-aware networking in virtualised environments for optimised resource exploitation / Approche réseau basée sur la conscience du contenu pour l’optimisation de l’exploitation des ressources au sein d’environnements virtualisés

Anapliotis, Petros, 19 December 2014
Today, the heterogeneity of current networking infrastructures, together with the lack of interoperability in architectures and frameworks for adapting content to users' contexts, prevents prosumers from delivering high quality of experience across different platforms and diversified contexts. The objective of this PhD thesis is therefore to study, design and develop a novel architecture capable of offering guaranteed QoS/QoE by efficiently exploiting the available resources and by dynamically adapting network performance across the Service, Network and User environments. To this end, the proposed architecture is based on (1) a distributed management framework that exploits Content Aware Network (CAN) mechanisms, on top of the Internet Protocol (IP), to identify content in transit and map its QoS/QoE requirements onto specific network characteristics, and (2) a network resource allocation mechanism that adapts intra-domain resources to the requested QoS/QoE. A prototype Media-Aware Network Element (MANE) has been built, offering content-type recognition and content-based routing/forwarding so as to guarantee QoS/QoE provision in an end-to-end approach. It further provides a synergetic management system capable of orchestrating cross-layer optimisation processes for service differentiation and classification, towards efficient resource exploitation. The validity of the proposed architecture is verified through a large number of experiments conducted on physical and virtual infrastructures; a large-scale test-bed conforming to the architectural design specifications was deployed to validate the proposed approach.
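The abstract does not give the MANE's actual classification rules; the sketch below is a hypothetical illustration of the underlying idea of content-aware handling: recognised content types are mapped onto DiffServ-style traffic classes. The DSCP values are the standard code points, but which content type receives which class is invented for the example.

```python
# Hypothetical content-type -> DiffServ class mapping, illustrating the idea of
# translating recognised content into a QoS treatment. The DSCP values are the
# standard code points; the mapping policy itself is an invented example.

DSCP = {"EF": 46, "AF41": 34, "AF21": 18, "BE": 0}

CONTENT_CLASS = {
    "voice":         "EF",     # expedited forwarding: low delay, low jitter
    "live-video":    "AF41",   # assured forwarding, high priority
    "file-transfer": "AF21",
    "web":           "BE",     # best effort
}

def classify(content_type):
    """Return (traffic class, DSCP value) for a recognised content type."""
    cls = CONTENT_CLASS.get(content_type, "BE")
    return cls, DSCP[cls]

if __name__ == "__main__":
    for ct in ("voice", "live-video", "web", "unknown"):
        print(ct, "->", classify(ct))
```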
7

Virtualisation en contexte HPC / Virtualisation in HPC context

Capra, Antoine, 17 December 2015
To meet the growing needs of numerical simulation and remain at the cutting edge of technology, supercomputers must be constantly improved. These improvements can be to hardware or software, which forces applications to adapt to a new programming environment throughout their development. The question of application sustainability and of portability from one machine to another therefore arises. Virtual machines can be a first answer to this need by stabilising programming environments: with virtualisation, an application can be developed within a fixed environment, without being directly affected by the environment present on the physical machine. However, the additional abstraction introduced by virtual machines leads in practice to a loss of performance. This thesis proposes a set of tools and techniques that make virtual machines usable in an HPC context. We first show that a hypervisor can be tuned to respect the key HPC constraints of thread placement and memory locality of data. Building on this result, we propose a service that partitions the resources of a compute node by means of virtual machines. Finally, to extend this work to MPI applications, we study the networking solutions available to a virtual machine and their performance.
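The abstract does not detail how the hypervisor is tuned; the sketch below only illustrates the constraint it names, keeping a guest's vCPUs (and hence its memory accesses) on a single NUMA node of the host. The host topology is an assumed example, and on a real system the computed map would be applied with a tool such as virsh vcpupin.

```python
# Compute a NUMA-aware vCPU -> host-core pinning map: all vCPUs of a guest are
# placed on cores of one host NUMA node so that its memory accesses stay local.
# The topology below is an assumed example (2 nodes x 4 cores), not a real host.

HOST_NUMA_TOPOLOGY = {
    0: [0, 1, 2, 3],     # NUMA node 0 -> host core ids
    1: [4, 5, 6, 7],     # NUMA node 1 -> host core ids
}

def pin_guest(n_vcpus, numa_node, topology=HOST_NUMA_TOPOLOGY):
    """Return {vcpu: host_core}, keeping every vCPU on the chosen NUMA node."""
    cores = topology[numa_node]
    if n_vcpus > len(cores):
        raise ValueError("more vCPUs than cores on the target NUMA node")
    return {vcpu: cores[vcpu] for vcpu in range(n_vcpus)}

if __name__ == "__main__":
    pinning = pin_guest(n_vcpus=4, numa_node=1)
    for vcpu, core in pinning.items():
        # On a real host this map would be applied, e.g. virsh vcpupin <dom> vcpu core
        print(f"vCPU {vcpu} -> host core {core}")
```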
8

Virtual network provisioning framework for the future Internet / Architecture d'allocation de réseaux virtuels pour l'Internet du futur

Louati, Inès, 26 April 2010
Recent advances in computer and network virtualisation, combined with the emergence of new actors and business models, have motivated much research and development of new approaches to face the challenges of future Internet architectures. This thesis was motivated by these advances and by the need for efficient algorithms and frameworks to allocate virtual resources and create on-demand virtual networks over shared physical infrastructures. Its objective has consequently been to conceive and develop provisioning algorithms and methods to set up and maintain virtual networks according to user needs and networking conditions. The work assumes the existence of virtual network providers acting as brokers that request virtual resources, on behalf of users, from multiple infrastructure providers. The research objective is to explore how virtual resources, offered as a service by infrastructure providers, are allocated while optimising the use of substrate resources and reducing the cost for providers. The thesis starts with the analysis and comparison of several virtual network provisioning approaches and algorithms proposed in the literature. Provisioning phases are defined and explored, including resource matching, embedding and binding. The scenario where multiple infrastructure providers are involved in virtual network provisioning is addressed, and a mathematical model of the VN provisioning problem is formulated. The second part of the thesis provides the design, implementation and evaluation of exact and heuristic matching algorithms to search, find and match virtual network requests with available substrate resources. Conceptual clustering techniques are used to facilitate finding and matching of virtual resources in the initial provisioning phases. Exact and heuristic algorithms are also proposed and evaluated to efficiently split virtual network requests over multiple infrastructure providers while reducing the matching cost; the request-splitting problem is solved using both max-flow/min-cut algorithms and linear programming techniques. The third part presents the design, implementation and evaluation of exact and heuristic embedding algorithms that simultaneously assign virtual nodes and links to substrate resources. A distributed embedding algorithm, relying on a multi-agent approach, is developed for large-scale networks. An exact embedding algorithm, formulated as a mixed integer program, is also proposed and evaluated to ensure optimal node and link mapping while reducing cost and increasing the acceptance ratio of requests. Finally, the thesis presents the design and development of adaptive provisioning frameworks and algorithms to maintain virtual networks subject to dynamic changes in service demands and in physical infrastructures; adaptive matching and embedding algorithms are designed, developed and evaluated to repair resource failures and dynamically optimise substrate network utilisation.
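None of the thesis's exact or distributed algorithms are reproduced here; as a rough illustration of the embedding problem itself, the following sketch implements a deliberately naive greedy heuristic (invented for illustration, using networkx): virtual nodes go to the substrate nodes with the most free CPU, and virtual links follow shortest substrate paths.

```python
# Naive greedy virtual-network embedding: map each virtual node to the substrate
# node with the most free CPU, then map each virtual link to a shortest substrate
# path. A didactic baseline only, not the algorithms developed in the thesis.

import networkx as nx

def greedy_embed(substrate, vn_nodes, vn_links):
    """substrate: nx.Graph with a 'cpu' attribute on nodes;
    vn_nodes: {vnode: cpu_demand}; vn_links: [(vnode_a, vnode_b), ...].
    Returns (node_map, link_map)."""
    free = dict(nx.get_node_attributes(substrate, "cpu"))
    node_map = {}
    for vnode, demand in sorted(vn_nodes.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)          # substrate node with most free CPU
        if free[host] < demand:
            raise RuntimeError("embedding failed: not enough CPU")
        free[host] -= demand
        node_map[vnode] = host
    link_map = {}
    for a, b in vn_links:
        link_map[(a, b)] = nx.shortest_path(substrate, node_map[a], node_map[b])
    return node_map, link_map

if __name__ == "__main__":
    g = nx.Graph()
    g.add_nodes_from([("A", {"cpu": 8}), ("B", {"cpu": 4}), ("C", {"cpu": 6})])
    g.add_edges_from([("A", "B"), ("B", "C")])
    print(greedy_embed(g, {"v1": 4, "v2": 2}, [("v1", "v2")]))
```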
9

Monitoring and Analysis of Disk throughput and latency in servers running Cassandra database

Kalidindi, Rajeev Varma, January 2016
Context. Lightweight process virtualization has been used in the past, e.g. Solaris Zones, jails in FreeBSD and Linux containers (LXC). But only since 2013 has there been kernel support for user namespaces and process grouping control, making lightweight virtualization attractive for creating virtual environments comparable to virtual machines. Telecom providers have to handle a massive growth of information due to the growing number of customers and devices. Traditional databases are not designed to handle such massive data ballooning; NoSQL databases were developed for this purpose. Cassandra, with its high read and write throughput, is a popular NoSQL database for handling this kind of data. Running the database using operating-system virtualization or containerization would offer a significant performance gain compared to virtual machines, and also gives the benefits of migration, fast boot-up and shut-down times, lower latency and less use of the servers' physical resources. Objectives. This thesis investigates the performance trade-off of loading a Cassandra cluster in bare-metal and containerized environments. The effect of loading the cluster is analyzed per node in terms of latency, CPU utilization and disk throughput. Methods. We implement the physical model of the Cassandra cluster based on realistic and commonly used scenarios of database analysis. We generate different load cases on the cluster for the bare-metal and Cassandra-in-Docker scenarios and record CPU utilization, disk throughput and latency using standard tools such as sar and iostat. Statistical analysis (mean value analysis, higher-moment analysis and confidence intervals) is performed on measurements at specific interfaces to increase the reliability of the results. Results. The experiments give a quantitative analysis of latency, CPU and disk throughput while running a Cassandra cluster in bare-metal and container environments, together with a statistical analysis summarizing the cluster's performance. Conclusions. With the detailed analysis, the resource utilization of the database was similar in the bare-metal and container scenarios. Disk throughput is similar in the mixed-load case, and containers have a slight overhead for write loads, both at maximum load and at 66% of maximum load. Latency values inside the container are slightly higher in all cases. The mean value and higher-moment analyses allow a finer examination of the results, and the computed confidence intervals show considerable variation in disk performance, which may be due to compactions happening at random times. Future work in this area can address compaction strategies.
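The measurement scripts themselves are not given in the abstract; the sketch below shows one plausible way to collect per-device throughput samples with iostat and summarise them with a mean and a 95% confidence interval of the kind reported in the thesis. The device name and the rkB/s and wkB/s column names are assumptions about a typical sysstat iostat -dx output and may need adjusting.

```python
# Collect disk-throughput samples with iostat and summarise them with a mean and
# a 95% normal-approximation confidence interval. Column names are looked up in
# the iostat header; this assumes a sysstat iostat whose extended output reports
# rkB/s and wkB/s (an assumption -- adjust for your iostat version).

import statistics
import subprocess

def sample_throughput(device="sda", interval=1, count=30):
    out = subprocess.run(
        ["iostat", "-d", "-x", str(interval), str(count)],
        capture_output=True, text=True, check=True,
    ).stdout
    samples, cols = [], None
    for line in out.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0].startswith("Device"):
            cols = parts                        # header row gives column names
        elif cols and parts[0] == device:
            rkb = float(parts[cols.index("rkB/s")])
            wkb = float(parts[cols.index("wkB/s")])
            samples.append(rkb + wkb)           # total kB/s for this interval
    return samples

def mean_ci(samples, z=1.96):
    m = statistics.mean(samples)
    half = z * statistics.stdev(samples) / len(samples) ** 0.5
    return m, (m - half, m + half)

if __name__ == "__main__":
    data = sample_throughput("sda", interval=1, count=10)
    mean, (lo, hi) = mean_ci(data)
    print(f"mean = {mean:.1f} kB/s, 95% CI = [{lo:.1f}, {hi:.1f}]")
```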
