81

Monitoring and Analysis of CPU Utilization, Disk Throughput and Latency in servers running Cassandra database : An Experimental Investigation

Chekkilla, Avinash Goud January 2017 (has links)
Context. Lightweight process virtualization has been used in the past, e.g., Solaris Zones, FreeBSD jails and Linux Containers (LXC). But only since 2013 has the kernel supported the user namespaces and process grouping controls that make lightweight virtualization attractive for creating virtual environments comparable to virtual machines. Telecom providers have to handle massive growth of information due to the growing number of customers and devices. Traditional databases are not designed to handle such massive data ballooning; NoSQL databases were developed for this purpose. Cassandra, with its high read and write throughput, is a popular NoSQL database for handling this kind of data. Running the database under operating-system virtualization (containerization) offers a significant performance gain compared to virtual machines, along with the benefits of migration, fast boot-up and shut-down times, lower latency and less use of the servers' physical resources.

Objectives. This thesis investigates the performance trade-off of loading a Cassandra cluster in bare-metal and containerized environments. The effect of loading the cluster on each individual node is analyzed in detail in terms of latency, CPU utilization and disk throughput.

Method. We implement a physical model of the Cassandra cluster based on realistic and commonly used database scenarios. We generate different load cases on the cluster for bare metal and Docker and measure CPU utilization, disk throughput and latency using standard tools such as sar and iostat. Statistical analysis (mean value analysis, higher-moment analysis and confidence intervals) is performed on measurements from specific interfaces to show the reliability of the results.

Results. The experiments yield a quantitative analysis of latency, CPU utilization and disk throughput while running a Cassandra cluster in bare-metal and container environments, together with a statistical summary of the cluster's performance when running a single Cassandra instance.

Conclusions. The detailed analysis shows that the resource utilization of the database was similar in the bare-metal and container scenarios. CPU utilization on the bare-metal servers is equivalent across mixed, read and write loads, while latency inside the container is slightly higher in all cases. The mean value and higher-moment analyses enable a finer-grained interpretation of the results. The calculated confidence intervals show considerable variation in disk performance, which may be due to compactions happening at random times. Further work can be done by configuring the compaction strategies, memory, and read and write rates.
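The statistical treatment described above (mean value analysis, higher moments and confidence intervals over repeated sar/iostat samples) can be reproduced with a short script. The sketch below is illustrative rather than the thesis's actual tooling; the sample data and the 95% confidence level are assumptions.

```python
import numpy as np
from scipy import stats

def summarize(samples, confidence=0.95):
    """Mean, higher moments and a confidence interval for one metric
    (e.g. disk throughput in MB/s sampled by iostat)."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    mean = x.mean()
    std = x.std(ddof=1)                      # sample standard deviation
    skew = stats.skew(x)                     # third standardized moment
    kurt = stats.kurtosis(x)                 # excess kurtosis (fourth moment)
    # t-based confidence interval for the mean
    half_width = stats.t.ppf((1 + confidence) / 2, df=n - 1) * std / np.sqrt(n)
    return {"mean": mean, "std": std, "skew": skew, "kurtosis": kurt,
            "ci": (mean - half_width, mean + half_width)}

# Hypothetical disk-throughput samples (MB/s) from repeated runs
throughput = [112.4, 98.7, 130.2, 101.5, 95.3, 121.8, 99.0, 117.6]
print(summarize(throughput))
```

A confidence interval that is wide relative to its mean, as computed here, would be consistent with the disk-performance variation the author attributes to randomly timed compactions.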
82

Efficient Bare Metal Backup and Restore in OpenStack Based Cloud Infrastructure : Design, Implementation and Testing of a Prototype

Tadesse, Addishiwot January 2016 (has links)
No description available.
83

Software Defined Networking : Virtual Router Performance

Svantesson, Björn January 2016 (has links)
Virtualization is becoming more and more popular, since the hardware available today often has the ability to run more than just a single machine. The hardware is too powerful relative to the requirements of the software it is supposed to run, making it inefficient to run too little software on too powerful machines. With virtualization, a lot of different software can run on the same hardware, thereby increasing the efficiency of hardware usage.

Virtualization doesn't stop at operating systems or commodity software; it can also be used to virtualize networking components. These components include everything from routers to switches and can be set up on any kind of virtualized system. When discussing virtualization of networking components, the expression "Software Defined Networking" is hard to miss. Software Defined Networking encompasses these virtualized networking components and is the term to use when researching further into this subject. Interest in virtualized networking components has grown compared to just a few years ago, because company networks have become much more complex than they used to be: more services need to run inside the network, and many believe Software Defined Networking can help in this regard.

This thesis aims to find out what differences there are between multiple software routers, for example which router offers the highest network speed for the least hardware cost. It also compares different aspects of the routers' performance against one another, to establish whether any one router is "best" in multiple areas.

The idea is to build a virtualized network that resembles a typical network in smaller companies today. This network is used for different types of testing, with the software router placed in the middle, routing between different local virtual networks. All routers are placed on the same server, their configuration is kept very basic, and each router gets access to the same amount of hardware. After initial testing, routers that perform badly are excluded from further testing, to avoid unnecessary tests on routers that cannot keep up with the others. The results of these tests are then compared to the results of a hardware router subjected to the same kind of tests.

The results were fairly surprising: only a single router was eliminated early on, while the remaining ones continued to "battle" one another in further tests. The comparison against the hardware router was also quite surprising, with the software routers showing much better performance in many areas.
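The abstract does not list the exact measurement commands used; a common way to measure the throughput a virtual router sustains between two networks is to run iperf3 across it and parse the JSON report. The sketch below is a hypothetical harness in that spirit; the host address, test duration and the choice of iperf3 itself are assumptions.

```python
import json
import subprocess

def measure_throughput(server_ip: str, seconds: int = 10) -> float:
    """Run an iperf3 client against `server_ip` (an iperf3 server on the
    far side of the router under test) and return throughput in Mbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", server_ip, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    bits_per_second = report["end"]["sum_received"]["bits_per_second"]
    return bits_per_second / 1e6

# Hypothetical run: client in one virtual network, iperf3 server at
# 10.0.2.10 in another, with the router under test in between.
if __name__ == "__main__":
    print(f"{measure_throughput('10.0.2.10'):.1f} Mbit/s")
```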
84

Virtualizace odolná vůči chybám / Fault-tolerant virtualization

Herrmann, Pavel January 2014 (has links)
Virtualization is often used as a tool for resource consolidation in the server market. It is also used to simplify management tasks and provide high availability. However, the ultimate high-availability feature, fault tolerance, has been limited to special and costly hardware and software. This thesis gives an overview of how virtualization technologies can be used to build a fault-tolerant system, and shows the cost, in the sense of performance degradation, compared to a non-fault-tolerant system.
85

Performance Evaluation of MongoDB on Amazon Web Service and OpenStack

Avutu, Neeraj January 2018 (has links)
Context. MongoDB is an open-source, scalable NoSQL database that distributes data over many commodity servers. It avoids a single point of failure by copying and storing the data in different locations. MongoDB uses a master-slave design rather than the ring topology used by Cassandra. Virtualization is the technique of running multiple virtual machines on a single host; it is the fundamental technology that allows cloud computing to provide resource sharing among users.

Objectives. To study MongoDB and virtualization on AWS and OpenStack; to conduct experiments identifying the CPU utilization when MongoDB instances are deployed on AWS compared with a physical server arrangement; and to understand the effect of replication on MongoDB in terms of throughput, CPU utilization and latency.

Methods. Initially, a literature review is conducted to design the experiment around the stated problems. A three-node MongoDB cluster runs on Amazon EC2 and OpenStack Nova with Ubuntu 16.04 LTS as the operating system. Latency, throughput and CPU utilization are measured using this setup. The procedure is repeated for a five-node MongoDB cluster and a three-node production cluster with the six YCSB workload types.

Results. Virtualization overhead is identified in terms of CPU utilization, and the effects of virtualization on MongoDB are quantified in terms of CPU utilization, latency and throughput.

Conclusions. Latency decreases and throughput increases as nodes are added. Due to replication, an increase in latency was observed.
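YCSB prints its summary as comma-separated `[SECTION], metric, value` lines. The thesis does not describe its own result processing, so the extractor below is a hypothetical sketch of how overall throughput and per-operation latency might be pulled from a run's output; the file name is an assumption, and the metric names follow YCSB's standard report format.

```python
import csv

def parse_ycsb(path: str) -> dict:
    """Extract overall throughput and per-operation average latency from a
    YCSB summary report (lines like '[READ], AverageLatency(us), 23.9')."""
    metrics = {}
    with open(path) as fh:
        for row in csv.reader(fh):
            if len(row) != 3:
                continue
            section, metric, value = (field.strip() for field in row)
            if section == "[OVERALL]" and metric == "Throughput(ops/sec)":
                metrics["throughput_ops"] = float(value)
            elif metric == "AverageLatency(us)":
                metrics[f"{section.strip('[]').lower()}_latency_us"] = float(value)
    return metrics

# Hypothetical usage on output saved from: bin/ycsb run mongodb -P workloads/workloada
print(parse_ycsb("workloada_run.log"))
```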
86

Improving energy efficiency of virtualized datacenters / Améliorer l'efficacité énergétique des datacenters virtualisés

Nitu, Vlad-Tiberiu 28 September 2018 (has links)
Nowadays, many organizations choose to increasingly adopt cloud computing. More specifically, as customers, these organizations outsource the management of their physical infrastructure to data centers (cloud computing platforms). Energy consumption is a primary concern for datacenter (DC) management. Its cost represents about 80% of the total cost of ownership, and it is estimated that in 2020 the US DCs alone will spend about $13 billion on energy bills. Generally, datacenter servers are manufactured so that they achieve high energy efficiency at high utilizations; thereby, for a low cost per computation, all datacenter servers should push utilization as high as possible. To fight the historically low utilization, cloud computing adopted server virtualization, which allows a physical server to execute multiple virtual servers (called virtual machines) in an isolated way. With virtualization, the cloud provider can pack (consolidate) the entire set of virtual machines (VMs) onto a small set of physical servers and thereby reduce the number of active servers. Even so, datacenter servers rarely reach utilizations higher than 50%, which means that they operate with sets of long-term unused resources (called "holes").

My first contribution is a cloud management system that dynamically splits/fusions VMs such that they can better fill the holes. This solution is effective only for elastic applications, i.e. applications that can be executed and reconfigured over an arbitrary number of VMs. However, datacenter resource fragmentation stems from a more fundamental problem: over time, cloud applications demand more and more memory, while physical servers provide more and more CPU. In today's datacenters the two resources are strongly coupled, since they are bound to a physical server. My second contribution is a practical way to decouple the CPU-memory tuple that can simply be applied to a commodity server; the two resources can then vary independently, depending on their demand.

My third and fourth contributions show a practical system which exploits the second contribution. The underutilization observed on physical servers is also true for virtual machines: it has been shown that VMs consume only a small fraction of their allocated resources, because cloud customers are not able to correctly estimate the resource amount necessary for their applications. My third contribution is a system that estimates the memory consumption (i.e. the working set size) of a VM with low overhead and high accuracy, so that VMs can be consolidated based on their working set size rather than their booked memory. The drawback of this approach is the risk of memory starvation: if one or multiple VMs have a sharp increase in memory demand, the physical server may run out of memory. This is undesirable, because the cloud platform is then unable to provide the client with the booked memory. My fourth contribution is therefore a system that allows a VM to use remote memory provided by a different server in the rack; in the case of a peak memory demand, it lets the VM allocate memory on a remote physical server.
87

LXC utvärdering : Skriv- och läshastighet till disk analys av LXC under ESXi / LXC Evaluation : Write and reading speed evaluation of LXC intertwined with ESXi

Olsson, Johan January 2016 (has links)
There are several ways to virtualize machines, from closed-source variants such as VMware ESXi and Windows Hyper-V to open-source ones such as Xen and Kernel-based Virtual Machine (KVM). There is also another way: virtualizing parts of an operating system to increase versatility and use more of the system's resources more efficiently. LXC (Linux Containers) is a lightweight virtualization that runs on top of the existing operating system by encapsulating the applications inside containers. With LXC, the kernel of the Linux system is shared by containers that run next to each other without much knowledge of each other. In that way it can be more resource-efficient than virtualizing the entire Linux kernel several times for different applications in a traditional guest-to-host environment.

Many data centers today already use some variant of virtualization in their production environment, so it is interesting to examine whether other methods give better performance for the chosen application, and power savings when hosts can be turned off. This project therefore carried out a field study examining how LXC performs when the host system is virtualized in a hypervisor environment, since an organization might want to migrate from a hypervisor environment to a lightweight, container-based virtualization environment. The work was done through experiments using two different programs to examine I/O, to determine whether LXC is affected by being nested inside ESXi. The study begins with a small background study to gather relevant information from previous work in related fields, and uses the experimental method to answer the hypothesis and the project's questions: How much do the file system's read and write speeds degrade when LXC is nested in ESXi? Is the file system's ability to read and write to disk affected when available resources are restricted?

The results of the experiments show that LXC performs close to bare-metal systems, with a minimum loss of 2 percent and a maximum of 11 percent in write and read speed to/from disk. When LXC is nested in ESXi there is an up to 15 percent loss in write and read speed, excluding the loss the hypervisor itself adds. When a container's resources are restricted to one processor core and two gigabytes of primary memory, the experiments show a 3 to 15 percent loss in write and read speed from the disk.
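The abstract does not name the two I/O benchmarks used; as a stand-in, the sketch below shows one simple way to estimate sequential write and read throughput from inside a container or VM. The file size, block size and use of os.fsync are assumptions, and page-cache effects make this a rough estimate rather than a rigorous benchmark.

```python
import os
import time

BLOCK = 1024 * 1024          # 1 MiB blocks
BLOCKS = 512                 # 512 MiB test file (assumption)
PATH = "ioprobe.bin"

def write_speed() -> float:
    """Sequential write throughput in MiB/s, fsync'd so data hits disk."""
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(PATH, "wb") as fh:
        for _ in range(BLOCKS):
            fh.write(buf)
        fh.flush()
        os.fsync(fh.fileno())
    return BLOCKS / (time.perf_counter() - start)

def read_speed() -> float:
    """Sequential read throughput in MiB/s (page cache may inflate this)."""
    start = time.perf_counter()
    with open(PATH, "rb") as fh:
        while fh.read(BLOCK):
            pass
    return BLOCKS / (time.perf_counter() - start)

print(f"write: {write_speed():.1f} MiB/s, read: {read_speed():.1f} MiB/s")
os.remove(PATH)
```

Running the same probe on bare metal, inside LXC, and inside LXC nested in ESXi would give the kind of relative numbers the study reports.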
88

Utilization of Dynamic Attributes in Resource Discovery for Network Virtualization

Amarasinghe, Heli 16 July 2012 (has links)
The success of the internet over the last few decades has mainly depended on various infrastructure technologies to run distributed applications. Due to the diversified, multi-provider nature of the internet, radical architectural improvements that require mutual agreement between infrastructure providers have become highly impractical. This escalating resistance to further growth has created a rising demand for new approaches to address the challenge. Network virtualization is regarded as a prominent solution to surmount these limitations. It decouples the conventional internet service provider's role into infrastructure provider (InP) and service provider (SP), and introduces a third player, the virtual network provider (VNP), which creates virtual networks (VNs). Resource discovery aims to assist the VNP in selecting the InP with the best-matching resources for a particular VN request. The current literature focuses mainly on static attributes of network resources, highlighting the fact that using dynamic attributes imposes significant overhead on the network itself. In this thesis we propose a resource discovery approach that is capable of utilizing dynamic resource attributes to enhance resource discovery and increase the overall efficiency of VN creation. We recognize that resource discovery techniques should be fast and cost-efficient enough not to impose any significant load. Hence our proposed scheme calculates aggregation values of the dynamic attributes of the substrate resources; by comparing the aggregation values to the VN requirements, a set of potential InPs that satisfy the basic VN embedding requirements is selected. Moreover, we propose further enhancements to the dynamic-attribute monitoring process using a vector-based aggregation approach.
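The selection step described above can be pictured with a small sketch: each InP's substrate is condensed into aggregate values of its dynamic attributes, and the VNP keeps only the InPs whose aggregates cover the VN request. The attribute names, the use of average/sum as aggregation functions, and the data layout are all illustrative assumptions; the thesis's actual vector-based scheme is more elaborate.

```python
from statistics import mean

# Hypothetical substrate snapshots: per-node dynamic attributes for each InP
substrates = {
    "InP-A": [{"cpu_free": 0.6, "bw_free_mbps": 400},
              {"cpu_free": 0.3, "bw_free_mbps": 250}],
    "InP-B": [{"cpu_free": 0.1, "bw_free_mbps": 900},
              {"cpu_free": 0.2, "bw_free_mbps": 700}],
}

def aggregate(nodes):
    """Condense a substrate's dynamic attributes into aggregate values."""
    return {
        "avg_cpu_free": mean(n["cpu_free"] for n in nodes),
        "total_bw_free_mbps": sum(n["bw_free_mbps"] for n in nodes),
    }

def potential_inps(vn_request):
    """Return the InPs whose aggregates satisfy the VN's basic requirements."""
    return [name for name, nodes in substrates.items()
            if (agg := aggregate(nodes))["avg_cpu_free"] >= vn_request["min_avg_cpu_free"]
            and agg["total_bw_free_mbps"] >= vn_request["total_bw_mbps"]]

# Hypothetical VN request; only InP-A passes both aggregate checks
print(potential_inps({"min_avg_cpu_free": 0.4, "total_bw_mbps": 600}))
```

Comparing a handful of aggregates instead of every node's live attributes is what keeps the discovery step from imposing significant monitoring load on the substrate.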
89

Flexible Computing with Virtual Machines

Lagar Cavilla, Horacio Andres 30 March 2011 (has links)
This thesis is predicated upon a vision of the future of computing with a separation of functionality between core and edges, very similar to that governing the Internet itself. In this vision, the core of our computing infrastructure is made up of vast server farms with an abundance of storage and processing cycles. Centralization of computation in these farms, coupled with high-speed wired or wireless connectivity, allows for pervasive access to a highly-available and well-maintained repository for data, configurations, and applications. Computation in the edges is concerned with provisioning application state and user data to rich clients, notably mobile devices equipped with powerful displays and graphics processors. We define flexible computing as systems support for applications that dynamically leverage the resources available in the core infrastructure, or cloud. The work in this thesis focuses on two instances of flexible computing that are crucial to the realization of the aforementioned vision. Location flexibility aims to, transparently and seamlessly, migrate applications between the edges and the core based on user demand. This enables performing the interactive tasks on rich edge clients and the computational tasks on powerful core servers. Scale flexibility is the ability of applications executing in cloud environments, such as parallel jobs or clustered servers, to swiftly grow and shrink their footprint according to execution demands. This thesis shows how we can use system virtualization to implement systems that provide scale and location flexibility. To that effect we build and evaluate two system prototypes: Snowbird and SnowFlock. We present techniques for manipulating virtual machine state that turn running software into a malleable entity which is easily manageable, is decoupled from the underlying hardware, and is capable of dynamic relocation and scaling. This thesis demonstrates that virtualization technology is a powerful and suitable tool to enable solutions for location and scale flexibility.
90

Predictor Virtualization: Teaching Old Caches New Tricks

Burcea, Ioana Monica 20 August 2012 (has links)
To improve application performance, current processors rely on prediction-based hardware optimizations, such as data prefetching and branch prediction. These optimizations store application metadata in on-chip predictor tables and use the metadata to anticipate and optimize for future application behavior. As application footprints grow, the predictor tables need to scale for predictors to remain effective. One important challenge in processor design is deciding which hardware optimizations to implement and how many resources to dedicate to each. Traditionally, processor architects employ a one-size-fits-all approach when designing predictor-based hardware optimizations: for each optimization, a fixed portion of the on-chip resources is allocated to predictor storage. This approach often leads to sub-optimal designs where: 1) resources are wasted on applications that do not benefit from a particular predictor or require only small predictor tables, or 2) predictors under-perform for applications that need larger predictor tables that cannot be built due to area-latency-power constraints. This thesis introduces Predictor Virtualization (PV), a framework that uses the traditional processor memory hierarchy to store application metadata used in speculative hardware optimizations. This makes it possible to emulate large, more accurate predictor tables, which in turn leads to higher application performance. PV exploits the current trend of unprecedentedly large on-chip secondary caches and allocates, on demand, a small portion of the cache capacity to store application metadata used in hardware optimizations, adjusting to the application's need for predictor resources. As a consequence, PV is a pay-as-you-go technique that emulates large predictor tables without increasing the dedicated storage overhead. To demonstrate the benefits of virtualizing hardware predictors, we present virtualized designs for three different hardware optimizations: a state-of-the-art data prefetcher, conventional branch target buffers, and an object-pointer prefetcher. While each of these predictors exhibits different characteristics that lead to different virtualized designs, virtualization improves the cost-performance trade-off for all of them. PV increases the utility of traditional processor caches: in addition to being accelerators for slow off-chip memories, on-chip caches are leveraged to increase the effectiveness of predictor-based hardware optimizations.
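The mechanism can be pictured as a small dedicated table backed by the cache hierarchy: predictor lookups hit the dedicated table when possible and otherwise fetch the metadata from the larger virtualized store. The sketch below is a toy software model of that idea, not the thesis's hardware design; the table sizes, the LRU policy and the metadata format are assumptions.

```python
from collections import OrderedDict

class VirtualizedPredictor:
    """Toy model of Predictor Virtualization: a tiny dedicated predictor
    table backed by a larger 'virtualized' store in the cache hierarchy."""

    def __init__(self, dedicated_entries=64):
        self.dedicated = OrderedDict()   # small on-predictor table (LRU order)
        self.backing = {}                # metadata spilled to L2/memory
        self.capacity = dedicated_entries

    def lookup(self, pc: int):
        """Return prediction metadata for a program counter, or None."""
        if pc in self.dedicated:         # fast path: dedicated-table hit
            self.dedicated.move_to_end(pc)
            return self.dedicated[pc]
        meta = self.backing.get(pc)      # slow path: fetch via cache hierarchy
        if meta is not None:
            self._install(pc, meta)
        return meta

    def update(self, pc: int, meta):
        """Record new metadata (e.g. a prefetch pattern or branch target)."""
        self._install(pc, meta)

    def _install(self, pc, meta):
        self.dedicated[pc] = meta
        self.dedicated.move_to_end(pc)
        if len(self.dedicated) > self.capacity:   # evict the LRU entry,
            old_pc, old_meta = self.dedicated.popitem(last=False)
            self.backing[old_pc] = old_meta       # spilling instead of discarding

pred = VirtualizedPredictor(dedicated_entries=2)
pred.update(0x400, "stride=+64")
pred.update(0x408, "stride=+8")
pred.update(0x410, "target=0x88")     # evicts 0x400 into the backing store
print(pred.lookup(0x400))             # refetched from the backing store
```

The pay-as-you-go property shows up in the spill-on-evict step: metadata only occupies backing storage once the dedicated table overflows, so applications with small predictor footprints consume no extra cache capacity.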
