41

The Design, Implementation, and Evaluation of Software and Architectural Support for ARM Virtualization

Dall, Christoffer January 2018 (has links)
The ARM architecture dominates the mobile and embedded markets and is pushing into the server and networking markets, where virtualization is a key technology. Similar to x86, ARM has added hardware support for virtualization, but there are important differences between the ARM and x86 architectural designs. Given two widely deployed computer architectures with different approaches to hardware virtualization support, we can evaluate, in practice, the benefits and drawbacks of different approaches to architectural support for virtualization. This dissertation explores new approaches to combining software and architectural support for virtualization, with a focus on the ARM architecture, and shows that it is possible to provide virtualization services an order of magnitude more efficiently than traditional implementations.
First, we investigate why the ARM architecture does not meet the classical requirements for virtualizable architectures and present an early prototype of KVM for ARM, a hypervisor that uses lightweight paravirtualization to run VMs on ARM systems without hardware virtualization support. Lightweight paravirtualization is a fully automated approach that replaces sensitive instructions with privileged instructions and requires no understanding of the guest OS code.
Second, we introduce split-mode virtualization to support hosted hypervisor designs using ARM's architectural support for virtualization. Unlike x86, the ARM virtualization extensions are based on a new hypervisor CPU mode, separate from existing CPU modes. This separate hypervisor CPU mode does not support running existing unmodified OSes, and therefore hosted hypervisor designs, in which the hypervisor runs as part of a host OS, do not work on ARM. Split-mode virtualization splits the execution of the hypervisor such that the host OS with core hypervisor functionality runs in the existing kernel CPU mode, while a small runtime runs in the hypervisor CPU mode and supports switching between the VM and the host OS. Split-mode virtualization was used in KVM/ARM, which was designed from the ground up as an open source project and merged into the mainline Linux kernel, resulting in interesting lessons about translating research ideas into practice.
Third, we present an in-depth performance study of 64-bit ARMv8 virtualization using server hardware and compare against x86. We measure the performance of both standalone and hosted hypervisors on both ARM and x86 and compare their results. We find that ARM hardware support for virtualization can enable faster transitions between the VM and the hypervisor for standalone hypervisors compared to x86, but results in high switching overheads for hosted hypervisors compared both to x86 and to standalone hypervisors on ARM. We identify a key reason for the high switching overhead of hosted hypervisors: the need to save and restore kernel mode state between the host OS kernel and the VM kernel. However, standalone hypervisors such as Xen cannot leverage their performance benefit in practice for real application workloads. Other factors related to hypervisor software design and I/O emulation play a larger role in overall hypervisor performance than low-level interactions between the hypervisor and the hardware.
Fourth, realizing that modern hypervisors rely on running a full OS kernel, the hypervisor OS kernel, to support their hypervisor functionality, we present a new hypervisor design that runs the hypervisor and its hypervisor OS kernel in ARM's separate hypervisor CPU mode and avoids the need to multiplex kernel mode CPU state between the VM and the hypervisor. Our design benefits from new architectural features in ARMv8.1, the virtualization host extensions (VHE), which avoid the need to modify the hypervisor OS kernel to run in the hypervisor CPU mode. We show that the hypervisor must be co-designed with the hardware features to take advantage of running in a separate CPU mode, and we implement our changes to KVM/ARM. We show that running the hypervisor OS kernel in a separate CPU mode from the VM and taking advantage of ARM's ability to quickly switch between the VM and hypervisor results in an order of magnitude reduction in overhead for important virtualization microbenchmarks and reduces the overhead of real application workloads by more than 50%.
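To make the idea of lightweight paravirtualization concrete, here is a minimal, purely illustrative sketch of its automated rewriting step: scan guest code for sensitive instructions and substitute encodings that trap to the hypervisor. The symbolic instruction names, the trap placeholder, and the per-instruction patching granularity are assumptions for illustration, not details taken from the dissertation.

```python
# Toy illustration only: symbolic "instructions", not real ARM encodings.
SENSITIVE = {"mrs_cpsr", "msr_cpsr", "cps"}    # hypothetical sensitive instructions

def patch_guest_code(instructions):
    """Replace sensitive instructions with a trapping placeholder, recording originals."""
    patched, originals = [], {}
    for addr, insn in enumerate(instructions):
        if insn in SENSITIVE:
            originals[addr] = insn             # the hypervisor later emulates the original
            patched.append("trap_to_hypervisor")
        else:
            patched.append(insn)
    return patched, originals

guest = ["ldr", "mrs_cpsr", "add", "msr_cpsr", "b"]
code, sites = patch_guest_code(guest)
print(code)    # ['ldr', 'trap_to_hypervisor', 'add', 'trap_to_hypervisor', 'b']
print(sites)   # {1: 'mrs_cpsr', 3: 'msr_cpsr'}
```

Because the scan is purely mechanical, no understanding of the guest OS source is required, which is the property the abstract highlights.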
42

Kernel-space inline deduplication file systems for virtual machine image storage.

January 2013 (has links)
We explore the use of deduplication for eliminating the storage of redundant data in RAID from a file-system design perspective. We propose ScaleDFS, a deduplication file system that seeks to achieve scalable read/write throughput in RAID. ScaleDFS is built on three novel design features. First, we improve the write throughput by exploiting multiple CPU cores to parallelize the processing of the cryptographic fingerprints that are used to identify redundant data. Second, we improve the read throughput by specifically caching in memory the recently read blocks that have been deduplicated. Third, we reduce the memory usage by enhancing the data structures that are used for fingerprint lookups. ScaleDFS is implemented as a POSIX-compliant, kernel-space driver module that can be deployed in commodity hardware configurations. We conduct microbenchmark experiments using synthetic workloads, and macrobenchmark experiments using a dataset of 42 VM images of different Linux distributions. We show that ScaleDFS achieves higher read/write throughput than existing open-source deduplication file systems in RAID. / Ma, Mingcao. / "October 2012." / Thesis (M.Phil.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 39-42). / Abstracts also in Chinese.
Contents: Chapter 1 Introduction (p.2); Chapter 2 Literature Review (p.5): 2.1 Backup systems (p.5), 2.2 Use of special hardware (p.6), 2.3 Scalable storage (p.6), 2.4 Inline DFSs (p.6), 2.5 VM image storage with deduplication (p.7); Chapter 3 ScaleDFS Background (p.8): 3.1 Spatial Locality of Fingerprint Placement (p.9), 3.2 Prefetching of Fingerprint Stores (p.12), 3.3 Journaling (p.13); Chapter 4 ScaleDFS Design (p.15): 4.1 Parallelizing Deduplication (p.15), 4.2 Caching Read Blocks (p.17), 4.3 Reducing Memory Usage (p.17); Chapter 5 Implementation (p.20): 5.1 Choice of Hash Function (p.20), 5.2 OpenStack Deployment (p.21); Chapter 6 Experiments (p.23): 6.1 Microbenchmarks (p.23), 6.2 OpenStack Deployment (p.28), 6.3 VM Image Operations in a RAID Setup (p.33); Chapter 7 Conclusions and Future Work (p.38); Bibliography (p.39).
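As a loose illustration of the first design feature, the sketch below fingerprints fixed-size blocks in parallel across CPU cores and uses the fingerprints to skip duplicate blocks. The 4 KB block size, the SHA-1 choice, and the in-memory index are assumptions for illustration, not details taken from ScaleDFS (which runs in kernel space).

```python
import hashlib
from multiprocessing import Pool

BLOCK_SIZE = 4096  # assumed fixed-size blocks; ScaleDFS's parameters may differ

def fingerprint(block: bytes) -> str:
    """Cryptographic fingerprint used to identify redundant data."""
    return hashlib.sha1(block).hexdigest()

def deduplicate(data: bytes, index: dict) -> int:
    """Fingerprint blocks in parallel; store only blocks not already in the index."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    with Pool() as pool:                       # exploit multiple CPU cores
        fps = pool.map(fingerprint, blocks)
    stored = 0
    for fp, block in zip(fps, blocks):
        if fp not in index:
            index[fp] = block                  # a real file system persists this to RAID
            stored += len(block)
    return stored

if __name__ == "__main__":
    index = {}
    image = bytes(2 * BLOCK_SIZE)              # two identical zero-filled blocks
    print(deduplicate(image, index))           # 4096: the duplicate block is not stored
```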
43

Computer systems with a very large address space and garbage collection

Bishop, Peter Boehler January 1977 (has links)
Thesis. 1977. Ph.D.--Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Vita. / Bibliography: leaves 261-265. / by Peter B. Bishop. / Ph.D.
44

A tunable version control system for virtual machines in an open-source cloud / CUHK electronic theses & dissertations collection

January 2013 (has links)
Open-source cloud platforms provide a feasible alternative for deploying cloud computing on low-cost commodity hardware and operating systems. To enhance the reliability of an open-source cloud, we design and implement CloudVS, a practical add-on system that enables version control for virtual machines (VMs). CloudVS targets a commodity cloud platform that has limited available resources. It exploits content similarities across different VM versions using redundancy elimination (RE), such that only non-redundant data chunks of a VM version are transmitted over the network and kept in persistent storage. Using RE as a building block, we propose a suite of performance adaptation mechanisms that make CloudVS amenable to different commodity settings. Specifically, we propose a tunable mechanism to balance the storage and disk seek overheads, as well as various I/O optimization techniques to minimize interference with other co-resident processes. We further exploit a higher degree of content similarity by applying RE to multiple VM images simultaneously, and support the copy-on-write image format. Using real-world VM snapshots, we experiment with CloudVS in an open-source cloud testbed built on Eucalyptus. We demonstrate how CloudVS can be parameterized to balance the performance trade-offs between version control and normal VM operations. / Tang, Chung Pan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 57-65). / Abstracts also in Chinese. / Title from PDF title page (viewed on 7 October 2016).
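As a rough sketch of the redundancy-elimination idea (not the CloudVS implementation), the following records each VM version as a list of chunk fingerprints and stores only chunks that no earlier version already contributed. The fixed 4 KB chunking and the SHA-256 choice are illustrative assumptions.

```python
import hashlib

CHUNK = 4096  # fixed-size chunking for illustration; CloudVS's chunking may differ

def commit_version(image: bytes, store: dict) -> list:
    """Record a VM version as a fingerprint 'recipe'; keep only chunks not seen before."""
    recipe = []
    for i in range(0, len(image), CHUNK):
        chunk = image[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:
            store[fp] = chunk        # only non-redundant data is transmitted and stored
        recipe.append(fp)
    return recipe

def checkout_version(recipe: list, store: dict) -> bytes:
    """Rebuild a VM image version from its recipe."""
    return b"".join(store[fp] for fp in recipe)

store = {}
v1 = bytes(4 * CHUNK)                       # baseline image: four zero-filled chunks
v2 = bytes(3 * CHUNK) + b"\x01" * CHUNK     # new version: only the last chunk differs
r1, r2 = commit_version(v1, store), commit_version(v2, store)
print(len(store))                           # 2: chunks shared across versions are stored once
assert checkout_version(r2, store) == v2
```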
45

EPA-RIMM-V: Efficient Rootkit Detection for Virtualized Environments

Vibhute, Tejaswini Ajay 12 July 2018 (has links)
The use of virtualized environments continues to grow for efficient utilization of the available compute resources. Hypervisors virtualize the underlying hardware resources and allow multiple operating systems to run simultaneously on the same infrastructure. Since the hypervisor is installed at a higher privilege level than the operating systems in the software stack, it is vulnerable to rootkits that can modify the environment to gain control, crash the system and even steal sensitive information. Thus, runtime integrity measurement of the hypervisor is essential. Currently proposed solutions achieve this goal by relying either partially or entirely on features of the hypervisor itself, which causes them to lack stealth and leaves them vulnerable to attack. We have developed a performance-sensitive methodology for identifying rootkits in hypervisors from System Management Mode (SMM) using the features of the SMI Transfer Monitor (STM), a recent Intel technology that acts as a virtual machine manager at the firmware level. Our solution extends a research prototype called EPA-RIMM, developed by Delgado and Karavanic at Portland State University. It advances the state of the art in that it stealthily performs measurements of hypervisor memory and critical data structures using firmware features, keeps performance perturbation to acceptable levels, and leverages the security features provided by the STM. We describe our approach and include experimental results using a prototype we have developed for the Xen hypervisor on the Minnowboard Turbot, an open hardware platform.
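The core measure-and-compare loop can be pictured with a small hedged sketch like the one below, which hashes named memory regions and flags any that diverge from a known-good baseline. The region names, their contents, and direct byte access are placeholders; the actual system performs its measurements from SMM via the STM.

```python
import hashlib

def measure(regions: dict) -> dict:
    """Hash each named region's bytes to produce a measurement."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in regions.items()}

def check(baseline: dict, current: dict) -> list:
    """Return names of regions whose measurement no longer matches the baseline."""
    return [name for name, digest in baseline.items() if current.get(name) != digest]

# Hypothetical region contents; a real monitor measures hypervisor text, IDT, etc. from SMM.
known_good = {"hv_text": b"\x90" * 64, "idt": b"\x00" * 32}
baseline = measure(known_good)

tampered = dict(known_good, idt=b"\x00" * 31 + b"\xff")   # simulate a modified descriptor
print(check(baseline, measure(tampered)))                  # ['idt']
```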
46

Quantifying resource sharing, resource isolation and agility for web applications with virtual machines

Miller, Elliot A. January 2007 (has links)
Thesis (M.S.) -- Worcester Polytechnic Institute. / Keywords: virtual machine; agility. Includes bibliographical references (p.58-59).
47

Optimal divisible resource allocation for self-organizing cloud

Di, Sheng, 狄盛 January 2011 (has links)
published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
48

Network performance isolation for virtual machines

Cheng, Luwei., 程芦伟. January 2011 (has links)
Cloud computing is a new computing paradigm that aims to transform computing services into a utility, just as electricity is provided in a "pay-as-you-go" manner. Data centers are increasingly adopting virtualization technology for the purpose of server consolidation, flexible resource management and better fault tolerance. Virtualization-based cloud services host networked applications in virtual machines (VMs), with each VM provided the desired amount of resources using resource isolation mechanisms. Effective network performance isolation is fundamental to data centers and offers applications the significant benefit of performance predictability. This research is application-driven: we study how network performance isolation can be achieved for latency-sensitive cloud applications. For media streaming applications, network performance isolation means both predictable network bandwidth and low-jittered network latency. Current resource sharing methods for VMs mainly focus on proportional resource share, while ignoring the fact that I/O latency in VM-hosted platforms is mostly related to the resource provisioning rate. Resource isolation with only a quantitative promise does not sufficiently guarantee performance isolation. Even if a VM is allocated adequate resources such as CPU time and network bandwidth, problems such as network jitter (variation in packet delays) can still occur if the resources are provisioned at inappropriate moments. So, to achieve performance isolation, the problem is not only how many resources each VM gets, but more importantly whether the resources are provisioned in a timely manner. Guaranteeing both requirements in resource allocation is challenging. This thesis systematically analyzes the causes of unpredictable network latency in VM-hosted platforms, with both technical discussion and experimental illustration. We identify that the varied network latency is jointly caused by the VMM CPU scheduler and the network traffic shaper, and then address the problem in these two parts. In our solutions, we consider the design goals of resource provisioning rate and resource proportionality as two orthogonal dimensions. In the hypervisor, a proportional-share CPU scheduler with soft real-time support is proposed to guarantee predictable scheduling delay; in the network traffic shaper, we introduce the concept of a smooth window to smooth packet delay and apply closed-loop feedback control to maintain network bandwidth consumption. The solutions are implemented in Xen 4.1.0 and Linux 2.6.32.13, which were the latest versions when this research was conducted. Extensive experiments have been carried out using both real-life applications and low-level benchmarks. Testing results show that the proposed solutions can effectively guarantee network performance isolation, achieving both the predefined network bandwidth and low-jittered network latency. / published_or_final_version / Computer Science / Master / Master of Philosophy
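A crude sketch of the closed-loop feedback idea in the traffic shaper is shown below: each interval, the measured bandwidth is compared against the target and the pacing rate is corrected proportionally, so packets are released smoothly rather than in bursts. The gain, interval, and numbers are invented for illustration and are not taken from the thesis.

```python
TARGET_MBPS = 100.0   # bandwidth the VM is entitled to
GAIN = 0.5            # proportional gain of the feedback loop (made-up value)

def next_rate(current_rate: float, measured_mbps: float) -> float:
    """One control step: nudge the pacing rate by the observed bandwidth error."""
    error = TARGET_MBPS - measured_mbps
    return max(0.0, current_rate + GAIN * error)

rate = 100.0
for measured in [120.0, 110.0, 104.0, 101.0]:   # observed per-interval throughput
    rate = next_rate(rate, measured)
    print(round(rate, 1))   # the pacing rate is lowered while consumption exceeds the target
```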
49

Adaptive live VM migration over WAN: modeling and implementation

Zhang, Weida, 张伟达 January 2013 (has links)
The combination of traditional process migration and the new virtualization technology enables mobility of virtual machines and resource provisioning within data centers. When applied to a wide area network (WAN), a traditional migration algorithm has to adjust itself according to varying WAN conditions and VM status. This thesis identifies four performance measurements of a VM migration: total migration time, downtime, remote up time and performance degradation. It observes that the total migration time and the remote up time of traditional pre-copy over WAN are too long to tolerate. The thesis claims that even for WAN, post-copy could be used to improve the total migration time and remote up time, by introducing only tolerable, predictable and controllable performance degradation. The adaptiveness of the migration algorithm is the central concern. The thesis proposes a hybrid solution of pre-copy and post-copy, for both memory and storage, to perform the migration. In the hybrid solution, a fraction of memory (Mfrac) and a fraction of storage (Sfrac) are migrated in the pre-copy and freeze-and-copy phases, and the remainder is migrated in the post-copy phase. A model-based solution with the help of profiling is proposed to adaptively find the best combination of Mfrac and Sfrac. The evaluation suggests that the proposed solution can adapt to different application behaviors and network conditions. / published_or_final_version / Computer Science / Master / Master of Philosophy
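To make the Mfrac/Sfrac trade-off concrete, here is a deliberately simplified toy model of how the fractions shape the four measurements the thesis identifies. The formulas, bandwidth, dirty-retransmission and freeze figures are invented for illustration and differ from the profiling-based model in the thesis.

```python
# Toy model (hypothetical formulas and numbers, not the thesis's model).
MEM, DISK, BW = 4.0, 40.0, 0.05   # GB memory, GB storage, GB/s WAN bandwidth
DIRTY = 0.3                        # extra retransmission of memory dirtied during pre-copy
FREEZE = 0.5                       # seconds of fixed switchover (freeze-and-copy) cost

def metrics(mfrac: float, sfrac: float) -> dict:
    pre  = (MEM * mfrac * (1 + DIRTY) + DISK * sfrac) / BW   # copied before switchover
    post = (MEM * (1 - mfrac) + DISK * (1 - sfrac)) / BW     # fetched on demand afterwards
    remote_up = pre + FREEZE           # time until the VM is running at the remote site
    total = pre + FREEZE + post
    degradation = post / total         # crude proxy: share of time serving remote faults
    return {"total_s": round(total, 1), "downtime_s": FREEZE,
            "remote_up_s": round(remote_up, 1), "degradation": round(degradation, 2)}

print(metrics(1.0, 1.0))   # pure pre-copy: no degradation, but long total and remote-up time
print(metrics(0.2, 0.1))   # mostly post-copy: VM is up quickly, at the cost of degradation
```

Sweeping Mfrac and Sfrac over such a model, with profiled inputs instead of fixed constants, is the flavor of adaptation the abstract describes.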
50

Cost-aware online VM purchasing for cloud-based application service providers with arbitrary demands

Shi, Shengkai, 石晟恺 January 2014 (has links)
Recent years have witnessed the proliferation of Infrastructure-as-a-Service (IaaS) cloud services, which provide on-demand resources (CPU, RAM, disk, etc.) in the form of virtual machines (VMs) for hosting the services of third parties. The way scalable and dynamic Internet applications are enabled has thus been remarkably revolutionized. More and more Application Service Providers (ASPs) are launching their applications in clouds, eliminating the need to construct and operate their own IT hardware and software. Given the state-of-the-art IaaS offerings, a problem of fundamental importance is how ASPs should rent VMs from the clouds to serve their application needs, minimizing cost while meeting their job demands over the long run. Cloud providers offer different pricing options to meet the computing requirements of a variety of applications. The commonly adopted cloud pricing schemes are (1) reserved instance pricing, (2) on-demand instance pricing, and (3) spot instance pricing. The challenge facing an ASP is how these pricing schemes can be blended to accommodate arbitrary demands at the optimal cost. In this thesis, we seek to integrate all available pricing options and design effective online algorithms for the long-term operation of ASPs. We formulate the long-term-averaged VM cost minimization problem of an ASP with time-varying and delay-tolerant workloads as a stochastic optimization model. An efficient online VM purchasing algorithm is designed to guide the VM purchasing decisions of the ASP based on the Lyapunov optimization technique. In stark contrast to existing studies, our online VM purchasing algorithm does not require any a priori knowledge of the workload or any future information. Moreover, it addresses possible job interruption due to the uncertain availability of spot instances. Rigorous analysis shows that our algorithm achieves a time-averaged VM purchasing cost within a constant gap of the offline minimum. Trace-driven simulations further verify the efficacy of our algorithm. / published_or_final_version / Computer Science / Master / Master of Philosophy
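As a hedged illustration of the drift-plus-penalty flavor of Lyapunov-based online decisions (not the algorithm from this thesis), the sketch below keeps a queue of delay-tolerant work and, in each slot, buys VMs only when the backlog outweighs the current price, using no knowledge of future demand or prices. All parameters and the price process are made up.

```python
import random

V = 50.0          # cost/delay trade-off parameter (larger V favors lower cost)
CAP = 10.0        # jobs one VM can serve per slot
N_MAX = 20        # maximum VMs purchasable per slot

def decide(queue: float, price: float) -> int:
    """Drift-plus-penalty step: buy VMs only when backlog outweighs the priced cost."""
    # Minimizing V*price*n - queue*CAP*n over 0 <= n <= N_MAX is bang-bang:
    return N_MAX if queue * CAP > V * price else 0

random.seed(1)
queue, total_cost = 0.0, 0.0
for t in range(50):
    arrivals = random.uniform(0, 80)              # time-varying demand, unknown in advance
    price = random.choice([0.3, 1.0])             # e.g. a cheap spot slot vs. an on-demand slot
    n = decide(queue, price)
    total_cost += n * price
    queue = max(0.0, queue + arrivals - n * CAP)  # delay-tolerant jobs wait in the queue
print(round(total_cost, 1), round(queue, 1))
```

The queue plays the role of the Lyapunov "virtual queue": letting it grow when prices are high and draining it when prices are low is what allows the cost to approach the offline minimum without future information.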
