101

The Design, Implementation, and Evaluation of Software and Architectural Support for Nested Virtualization on Modern Architectures

Lim, Jin Tack January 2021 (has links)
Nested virtualization, the discipline of running virtual machines inside other virtual machines, is increasingly important because of the need to deploy workloads that are already using virtualization on top of virtualized cloud infrastructures. However, nested virtualization performance on modern computer architectures is far from native execution speed, which remains a key impediment to further adoption. My thesis is that simple changes to hardware, software, and virtual machine configuration that are transparent to nested virtual machines can provide near-native execution speed for real application workloads. This dissertation presents three mechanisms that improve nested virtualization performance. First, we present NEsted Virtualization Extensions for Arm (NEVE). As Arm servers make inroads in cloud infrastructure deployments, supporting nested virtualization on Arm is a key requirement. The requirement has recently been met with the introduction of nested virtualization support for the Arm architecture. We built the first hypervisor using Arm nested virtualization support and show that, despite similarities between Arm and x86 nested virtualization support, performance on Arm is much worse than on x86. This is due to excessive traps to the hypervisor caused by differences in non-nested virtualization support. To address this problem, we introduce a novel paravirtualization technique to rapidly prototype architectural changes for virtualization and evaluate their performance impact using existing hardware. Using this technique, we introduce NEVE, a set of simple architectural changes to Arm that can be used by software to coalesce and defer traps by logging the results of hypervisor instructions until the results are actually needed by the hypervisor. We show that NEVE allows hypervisors running real application workloads to provide an order of magnitude improvement in performance over current Arm nested virtualization support and up to three times less overhead than x86 nested virtualization. NEVE is included in the Armv8.4 architecture. Second, we introduce virtual-passthrough, a new approach for providing virtual I/O devices for nested virtualization without the intervention of multiple levels of hypervisors. Virtual-passthrough preserves I/O interposition while addressing the performance problem of I/O intensive workloads as they perform many times worse with nested virtualization than without virtualization. With virtual-passthrough, virtual devices provided by a host hypervisor, the hypervisor that runs directly on the hardware, can be assigned to nested virtual machines directly without delivering data and control through multiple layers of hypervisors. The approach leverages the existing direct device assignment mechanism and implementation, so it only requires virtual machine configuration changes. Virtual-passthrough is platform-agnostic and easily supports important virtualization features such as migration. We have applied virtual-passthrough in the Linux KVM hypervisor for both x86 and Arm hardware, and show that it can provide more than an order of magnitude improvement in performance over current KVM virtual device support on real application workloads. Third, we introduce Direct Virtual Hardware (DVH), a new approach that enables a host hypervisor to directly provide virtual hardware to nested virtual machines without the intervention of multiple levels of hypervisors. DVH is a generalization of virtual-passthrough and does not limit virtual hardware to I/O devices. 
Beyond virtual-passthrough, we introduce three additional DVH mechanisms: virtual timers, virtual inter-processor interrupts, and virtual idle. DVH provides virtual hardware for these mechanisms that mimics the underlying hardware and, in some cases, adds new enhancements that leverage the flexibility of software without the need for matching physical hardware support. We have implemented DVH in KVM. Our experimental results show that combining the four DVH mechanisms can provide even greater performance than virtual-passthrough alone and provide near-native execution speeds on real application workloads.
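To make the trap-coalescing idea concrete, the following C sketch shows the general shape of deferring virtual system-register writes into a log instead of trapping on each one, then replaying the log only when the host hypervisor actually needs consistent state (for example, at nested-VM entry). It is a minimal illustration of the logging idea described in the abstract, not the NEVE hardware specification or the KVM implementation; the register identifiers, buffer size, and function names are assumptions.

```c
/*
 * Simplified illustration (hypothetical, not the actual NEVE specification):
 * instead of trapping to the host hypervisor on every virtual system-register
 * access made by a guest hypervisor, accesses are logged to a memory buffer
 * and only replayed when the host hypervisor actually needs the values.
 */
#include <stdint.h>
#include <stdio.h>

#define LOG_CAPACITY 64

struct reg_write {
    uint16_t reg;    /* identifier of the virtual system register */
    uint64_t value;  /* value the guest hypervisor wrote          */
};

struct deferred_log {
    struct reg_write entries[LOG_CAPACITY];
    size_t used;
};

/* Shadow copy of virtual system-register state kept by the host hypervisor. */
static uint64_t vcpu_sysregs[256];

/* Fast path: record the write instead of trapping immediately. */
static int log_sysreg_write(struct deferred_log *log, uint16_t reg, uint64_t value)
{
    if (log->used == LOG_CAPACITY)
        return -1;                      /* buffer full: fall back to a trap */
    log->entries[log->used].reg = reg;
    log->entries[log->used].value = value;
    log->used++;
    return 0;
}

/* Called only when the host hypervisor needs consistent state,
 * e.g. just before entering the nested virtual machine. */
static void replay_log(struct deferred_log *log)
{
    for (size_t i = 0; i < log->used; i++)
        vcpu_sysregs[log->entries[i].reg] = log->entries[i].value;
    log->used = 0;                      /* log is consumed */
}

int main(void)
{
    struct deferred_log log = { .used = 0 };

    /* The guest hypervisor performs several virtual register writes; each is
     * coalesced into the log instead of causing a separate trap. */
    log_sysreg_write(&log, 0x10, 0xdeadbeef);
    log_sysreg_write(&log, 0x11, 0x42);
    log_sysreg_write(&log, 0x10, 0xcafef00d); /* later write supersedes the earlier one */

    /* One trap at nested-VM entry replays all deferred writes at once. */
    replay_log(&log);

    printf("reg 0x10 = 0x%llx\n", (unsigned long long)vcpu_sysregs[0x10]);
    return 0;
}
```

The payoff mirrors the abstract's claim: many guest-hypervisor operations collapse into a single exit taken at the point where their results are actually consumed.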
102

Hardware related optimizations in a Java virtual machine

Gu, Dayong. January 2007 (has links)
No description available.
103

The Implications Of Virtual Environments In Digital Forensic Investigations

Patterson, Farrah M 01 January 2011 (has links)
This research paper discusses the role of virtual environments in digital forensic investigations. With virtual environments becoming more prevalent as an analysis tool in digital forensic investigations, it is becoming more important for digital forensic investigators to understand the limitations and strengths of virtual machines. The study aims to expose limitations within commercial closed source virtual machines and open source virtual machines. The study provides a brief overview of the history of digital forensic investigations and virtual environments, and concludes with an experiment on four common open and closed source virtual machines, examining the effects of the virtual machines on the host machine as well as the performance of the virtual machines themselves. My findings show that while the open source tools provided more control and freedom to the operator, the closed source tools were more stable and consistent in their operation. The significance of these findings can be further explored by applying them to demonstrating the reliability of forensic techniques when virtual machines are presented as an analysis tool in litigation.
104

A device-independent graphics manager for MDL

Lim, Poh Chuan January 1982 (has links)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING / Includes bibliographical references. / by Poh Chuan Lim. / M.S.
105

Semantic view re-creation for the secure monitoring of virtual machines

Carbone, Martim 28 June 2012 (has links)
The insecurity of modern-day software has created the need for security monitoring applications. Two serious deficiencies are commonly found in these applications. First, the absence of isolation from the system being monitored allows malicious software to tamper with them. Second, the lack of secure and reliable monitoring primitives in the operating system makes them easy to evade. A technique known as Virtual Machine Introspection attempts to solve these problems by leveraging the isolation and mediation properties of full-system virtualization. A problem known as the semantic gap, however, occurs as a result of the low-level separation enforced by the hypervisor. This thesis proposes and investigates novel techniques to overcome the semantic gap, advancing the state of the art in syntactic and semantic view re-creation for applications that conduct passive and active monitoring of virtual machines. First, we propose a new technique for reconstructing a syntactic view of the guest OS kernel's heap state by applying a combination of static code and dynamic memory analysis. Our key contribution is the accuracy and completeness of our analysis. We also propose a new technique that allows out-of-VM applications to invoke and securely execute API functions inside the monitored guest's kernel, eliminating the need for the application to know details of the guest's internals. Our key contribution is the ability to overcome the semantic gap in a robust and secure manner. Finally, we propose a new virtualization-based event monitoring technique based on the interception of kernel data modifications. Our key contribution is the ability to monitor operating system events in a general and secure fashion.
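The semantic-gap problem the thesis addresses can be illustrated with a small sketch: an out-of-VM monitor sees only raw guest memory, so even listing the guest's processes requires layout knowledge recovered elsewhere (for instance, by the kind of static analysis the thesis performs). The offsets, the guest_read() primitive, and the list-head address below are hypothetical placeholders, and guest_read() is stubbed out so the example stays self-contained.

```c
/*
 * Hedged sketch of bridging the semantic gap: walk the guest kernel's task
 * list from outside the VM using only a raw memory-read primitive plus
 * out-of-band knowledge of the guest's data layout. All offsets and names
 * below are assumed for illustration.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Offsets into the guest kernel's task structure; obtained out of band. */
#define TASK_COMM_OFFSET  0x550   /* process name               (assumed) */
#define TASK_PID_OFFSET   0x398   /* process id                 (assumed) */
#define TASK_NEXT_OFFSET  0x3d0   /* next pointer in task list  (assumed) */

/* Stand-in for a hypervisor primitive that copies guest memory into the
 * monitor's address space; a real monitor would translate guest-virtual to
 * guest-physical addresses and map the page. Stubbed out here. */
static int guest_read(uint64_t guest_va, void *dst, size_t len)
{
    (void)guest_va;
    memset(dst, 0, len);
    return 0;
}

static void list_guest_processes(uint64_t init_task_va)
{
    uint64_t task = init_task_va;

    for (int i = 0; i < 512; i++) {      /* bound the walk defensively */
        char comm[16] = {0};
        uint32_t pid = 0;
        uint64_t next = 0;

        if (guest_read(task + TASK_COMM_OFFSET, comm, sizeof(comm) - 1) ||
            guest_read(task + TASK_PID_OFFSET, &pid, sizeof(pid)) ||
            guest_read(task + TASK_NEXT_OFFSET, &next, sizeof(next)))
            break;                       /* unreadable guest memory: stop */

        printf("pid %u: %s\n", pid, comm);

        /* The next pointer points at the embedded list node, so step back to
         * the start of the enclosing task structure. */
        if (next == 0 || next - TASK_NEXT_OFFSET == init_task_va)
            break;                       /* end of the circular list */
        task = next - TASK_NEXT_OFFSET;
    }
}

int main(void)
{
    /* Address of the guest kernel's first task structure; in practice
     * recovered from the guest's symbol table (assumed here). */
    list_guest_processes(0xffff800010a2b840ULL);
    return 0;
}
```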
106

Testbed evaluation of integrating ethernet switches in the differentiated services architecture using virtual LANs

Fornaro, Antony 05 1900 (has links)
No description available.
107

Virtualization services: scalable methods for virtualizing multicore systems

Raj, Himanshu 10 January 2008 (has links)
Multi-core technology is bringing parallel processing capabilities from servers to laptops and even handheld devices. At the same time, platform support for system virtualization is making it easier to consolidate server and client resources, when and as needed by applications. This consolidation is achieved by dynamically mapping the virtual machines on which applications run to underlying physical machines and their processing cores. Low-cost processor and I/O virtualization methods that scale efficiently to different numbers of processing cores and I/O devices are key enablers of such consolidation. This dissertation develops and evaluates new methods for scaling virtualization functionality to multi-core and future many-core systems. Specifically, it re-architects virtualization functionality to improve scalability and better exploit multi-core system resources. Results from this work include a self-virtualized I/O abstraction, which virtualizes I/O so as to flexibly use different platforms' processing and I/O resources. Flexibility affords improved performance and resource usage and, most importantly, better scalability than that offered by current I/O virtualization solutions. Further, by describing system virtualization as a service provided to virtual machines and the underlying computing platform, this service can be enhanced to provide new and innovative functionality. For example, a virtual device may provide obfuscated data to guest operating systems to maintain data privacy; it could mask differences in device APIs or properties to deal with heterogeneous underlying resources; or it could control access to data based on the "trust" properties of the guest VM. This thesis demonstrates that extended virtualization services are superior to existing operating system or user-level implementations of such functionality, for multiple reasons. First, this solution technique makes more efficient use of the key performance-limiting resources in multi-core systems, which are memory and I/O bandwidth. Second, this solution technique better exploits the parallelism inherent in multi-core architectures and exhibits good scalability properties, in part because the hypervisor level offers greater control over precisely which resources are used, and how, to realize extended virtualization services. Improved control over resource usage makes it possible to provide value-added functionalities for both guest VMs and the platform. Specific instances of virtualization services described in this thesis are a network virtualization service that exploits heterogeneous processing cores, a storage virtualization service that provides location-transparent access to block devices by extending the functionality provided by the network virtualization service, a multimedia virtualization service that allows efficient media device sharing based on semantic information, and an object-based storage service with enhanced access control.
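As a rough illustration of the "virtualization service" idea, the sketch below models a virtual block device whose backend can be swapped for a value-added variant, here one that obfuscates data before the guest sees it, without changing the guest-visible interface. The struct layout, function names, and XOR masking are illustrative assumptions rather than the dissertation's actual service implementation.

```c
/*
 * Hedged sketch: a virtualization service described by a small interface,
 * with a value-added variant layered on top of the base block backend.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 512

/* A virtualization service: how the hypervisor handles a guest's block I/O. */
struct virt_service {
    const char *name;
    int (*read_block)(uint64_t lba, uint8_t *buf);       /* fill buf for guest */
    int (*write_block)(uint64_t lba, const uint8_t *buf);
};

/* Base backend: pretend physical storage, one in-memory block for the demo. */
static uint8_t backing_store[BLOCK_SIZE] = "sensitive guest data";

static int base_read(uint64_t lba, uint8_t *buf)
{
    (void)lba;
    memcpy(buf, backing_store, BLOCK_SIZE);
    return 0;
}

static int base_write(uint64_t lba, const uint8_t *buf)
{
    (void)lba;
    memcpy(backing_store, buf, BLOCK_SIZE);
    return 0;
}

/* Value-added service: XOR-mask the data so an untrusted guest never sees the
 * plaintext (a stand-in for real obfuscation or access control). */
static int obfuscated_read(uint64_t lba, uint8_t *buf)
{
    int ret = base_read(lba, buf);
    for (size_t i = 0; i < BLOCK_SIZE; i++)
        buf[i] ^= 0x5a;
    return ret;
}

static const struct virt_service plain_storage = {
    .name = "storage", .read_block = base_read, .write_block = base_write,
};

static const struct virt_service private_storage = {
    .name = "storage+privacy", .read_block = obfuscated_read, .write_block = base_write,
};

int main(void)
{
    uint8_t buf[BLOCK_SIZE];

    /* The hypervisor binds one service or the other to a guest VM depending
     * on policy; the guest-visible device interface does not change. */
    const struct virt_service *svc = &private_storage;
    svc->read_block(0, buf);
    printf("%s: first byte seen by guest = 0x%02x\n", svc->name, buf[0]);

    svc = &plain_storage;
    svc->read_block(0, buf);
    printf("%s: first byte seen by guest = 0x%02x\n", svc->name, buf[0]);
    return 0;
}
```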
108

A virtualized quality of service packet scheduler accelerator

Chuang, Kangtao Kendall 25 August 2008 (has links)
Resource virtualization is emerging as a technology to enable the management and sharing of hardware resources including multiple core processors and accelerators such as Digital Signal Processors (DSP), Graphics Processing Units (GPU), and Field Programmable Gate Arrays (FPGA). Accelerators present unique problems for virtualization and sharing due to their specialized architectures and interaction modes. This thesis explores and proposes solutions for the virtualized operation of high performance, quality of service (QoS) packet scheduling accelerators. It specifically concentrates on challenges to meet 10Gbps Ethernet wire speeds. The packet scheduling accelerator is realized in an FPGA and implements the ShareStreams-V architecture. ShareStreams-V implements the Dynamic Window-Constrained Scheduler (DWCS) algorithm and virtualizes the previous ShareStreams architecture. The original ShareStreams architecture, implemented on Xilinx Virtex-I and Virtex-II FPGAs, was able to schedule 128 streams at 10Gbps Ethernet throughput for 1500-byte packets. ShareStreams-V provides both hardware and software extensions to enable a single implementation to host isolated, independent virtual schedulers. Four methods for virtualization of the packet scheduler accelerator are presented: coarse- and fine-grained temporal partitioning, spatial partitioning, and dynamic spatial partitioning. In addition to increasing the utilization of the scheduler, the decision throughput of the physical scheduler can be increased when sharing the physical scheduler across multiple virtual schedulers among multiple processes. This leads to the hypothesis for this work: Virtualization of a quality of service packet scheduler accelerator through dynamic spatial partitioning is an effective and efficient approach to accelerator virtualization, supporting scalable decision throughput across multiple processes. ShareStreams-V was synthesized targeting a Xilinx Virtex-4 FPGA. When sharing among four processes, designs supporting up to 16, 32, and 64 total streams were able to reach 10Gbps Ethernet scheduling throughput for 64-byte packets. When sharing among 32 processes, a scheduler supporting 64 total streams was able to reach the same throughput. An access API presents the virtual scheduler abstraction to individual processes in order to allocate, deallocate, update, and control the virtual scheduler allocated to a process. Practically, the bottleneck for the test system is the software-to-hardware interface. Effective future implementations are anticipated to use a tightly coupled host-CPU-to-accelerator interconnect.
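For intuition about what the accelerator decides on each cycle, here is a deliberately simplified, software-only sketch in the spirit of window-constrained scheduling: pick the backlogged stream with the earliest deadline and break ties toward the tighter loss tolerance per window. The full DWCS precedence rules and the FPGA pipeline are more involved; the field names and the selection rule below are assumptions for illustration only.

```c
/*
 * Hedged sketch of per-stream state and a simplified DWCS-style selection
 * rule: earliest deadline first, ties broken by the tighter window constraint.
 */
#include <stdint.h>
#include <stdio.h>

struct stream {
    int      id;
    uint64_t deadline;       /* time by which the head packet should go out */
    uint32_t losses_allowed; /* x in an x/y window constraint               */
    uint32_t window;         /* y in an x/y window constraint               */
    int      backlogged;     /* does the stream have a packet queued?       */
};

/* Return nonzero if stream a should be serviced before stream b. */
static int precedes(const struct stream *a, const struct stream *b)
{
    if (a->deadline != b->deadline)
        return a->deadline < b->deadline;          /* earliest deadline first */
    /* Tie: compare loss tolerance per window as a cross-multiplied fraction. */
    return (uint64_t)a->losses_allowed * b->window <
           (uint64_t)b->losses_allowed * a->window;
}

static const struct stream *pick_next(const struct stream *s, int n)
{
    const struct stream *best = NULL;
    for (int i = 0; i < n; i++) {
        if (!s[i].backlogged)
            continue;
        if (best == NULL || precedes(&s[i], best))
            best = &s[i];
    }
    return best;   /* NULL if nothing is queued */
}

int main(void)
{
    struct stream streams[] = {
        { .id = 0, .deadline = 120, .losses_allowed = 2, .window = 10, .backlogged = 1 },
        { .id = 1, .deadline = 100, .losses_allowed = 5, .window = 10, .backlogged = 1 },
        { .id = 2, .deadline = 100, .losses_allowed = 1, .window = 10, .backlogged = 1 },
    };

    const struct stream *next = pick_next(streams, 3);
    if (next)
        printf("schedule stream %d next\n", next->id);  /* stream 2 in this demo */
    return 0;
}
```

In the hardware design the same pairwise comparison is what gets replicated and pipelined, which is why decision throughput can scale when the physical scheduler is partitioned among virtual schedulers.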
109

A grounded theory analysis of networking capabilities in virtual organizing

Koekemoer, Johannes Frederik. January 2008 (has links)
Thesis (D Phil.(Information Technology))--University of Pretoria, 2008. / Abstract in English. Includes bibliographical references.
110

Coordinated system level resource management for heterogeneous many-core platforms

Gupta, Vishakha 24 August 2011 (has links)
A challenge posed by future computer architectures is the efficient exploitation of their many and sometimes heterogeneous computational cores. This challenge is exacerbated by the multiple facilities for data movement and sharing across cores resident on such platforms. To answer the question of how systems software should treat heterogeneous resources, this dissertation describes an approach that (1) creates a common manageable pool for all the resources present in the platform, and then (2) provides virtual machines (VMs) with multiple `personalities', flexibly mapped to and efficiently run on the heterogeneous underlying hardware. A VM's personality is its execution context on the different types of available processing resources usable by the VM. We provide mechanisms for making such platforms manageable and evaluate coordinated scheduling policies for mapping different VM personalities on heterogeneous hardware. Towards that end, this dissertation contributes technologies that include (1) restructuring hypervisor and system functions to create high performance environments that enable flexibility of execution and data sharing, (2) scheduling and other resource management infrastructure for supporting diverse application needs and heterogeneous platform characteristics, and (3) hypervisor level policies to permit efficient and coordinated resource usage and sharing. Experimental evaluations on multiple heterogeneous platforms, like one comprised of x86-based cores with attached NVIDIA accelerators and others with asymmetric elements on chip, demonstrate the utility of the approach and its ability to efficiently host diverse applications and resource management methods.
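A toy sketch of the "personality" idea follows: each VM carries one execution context per kind of processing resource it can use, and a coordination policy maps those contexts onto whatever heterogeneous cores exist. The types, names, and trivial first-fit policy are illustrative assumptions, not the dissertation's scheduler.

```c
/*
 * Hedged sketch: map VM "personalities" (per-resource execution contexts)
 * onto heterogeneous cores with a trivial first-fit placement policy.
 */
#include <stdio.h>

enum core_kind { CORE_GENERAL, CORE_ACCELERATOR };

struct core {
    int            id;
    enum core_kind kind;
    int            busy;
};

/* One personality = the VM's execution context on one kind of resource. */
struct personality {
    const char    *vm_name;
    enum core_kind wants;
};

/* First-fit coordinated placement: give each personality an idle core of the
 * matching kind, or report that it must wait. */
static void place(struct personality *p, int np, struct core *c, int nc)
{
    for (int i = 0; i < np; i++) {
        int placed = -1;
        for (int j = 0; j < nc && placed < 0; j++) {
            if (!c[j].busy && c[j].kind == p[i].wants) {
                c[j].busy = 1;
                placed = c[j].id;
            }
        }
        if (placed >= 0)
            printf("%s (%s personality) -> core %d\n", p[i].vm_name,
                   p[i].wants == CORE_ACCELERATOR ? "accelerator" : "general",
                   placed);
        else
            printf("%s: no idle core of the required kind, queued\n",
                   p[i].vm_name);
    }
}

int main(void)
{
    struct core cores[] = {
        { .id = 0, .kind = CORE_GENERAL },
        { .id = 1, .kind = CORE_GENERAL },
        { .id = 2, .kind = CORE_ACCELERATOR },
    };
    struct personality ctxs[] = {
        { "vm-a", CORE_GENERAL },
        { "vm-a", CORE_ACCELERATOR },   /* same VM, accelerator personality */
        { "vm-b", CORE_ACCELERATOR },   /* must wait in this demo           */
    };

    place(ctxs, 3, cores, 3);
    return 0;
}
```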
