51

In Perfect Xen, a Performance Study of the Emerging Xen Scheduler

Hnarakis, Ryan 01 December 2013 (has links) (PDF)
Fifty percent of Fortune 500 companies trust Xen, an open-source bare-metal hypervisor, to virtualize their websites and mission-critical services in the cloud. Providing superior fault tolerance, scalability, and migration, virtualization allows these companies to run several isolated operating systems simultaneously on the same physical server. These isolated operating systems, called virtual machines, require a virtual traffic guard to cooperate with one another. This guard, known as the Credit2 scheduler, was recently developed alongside the newest Xen hypervisor to supersede the older schedulers. Since wasted CPU cycles can be costly, the Credit2 prototype must undergo significant performance validation before being released into production. Furthermore, leading commercial virtualization products, including VMware and Microsoft Hyper-V, frequently adopt Xen's proven technologies. This thesis provides quantitative performance measurements of the Credit1 and Credit2 schedulers, and offers recommendations for building hypervisor schedulers.
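The credit idea behind both schedulers fits in a few lines. The following is a minimal, hedged sketch of credit-based proportional-share scheduling; every name and constant is invented for illustration, and this is not Xen's actual Credit1 or Credit2 code.

```python
"""Toy credit-based scheduler: each VCPU earns credits periodically in
proportion to its weight and is debited while it runs; the runnable VCPU
with the most credits goes next. Illustrates the general idea behind
Xen's credit schedulers only; all parameters are invented."""

from dataclasses import dataclass

@dataclass
class VCpu:
    name: str
    weight: int          # share of CPU relative to other VCPUs
    credits: int = 0

def schedule(vcpus, ticks, credits_per_refill=300, debit_per_tick=10):
    timeline = []
    total_weight = sum(v.weight for v in vcpus)
    for tick in range(ticks):
        # Periodic refill, proportional to weight (a credit accounting epoch).
        if tick % 30 == 0:
            for v in vcpus:
                v.credits += credits_per_refill * v.weight // total_weight
        # Pick the runnable VCPU with the most credits and charge it.
        current = max(vcpus, key=lambda v: v.credits)
        current.credits -= debit_per_tick
        timeline.append(current.name)
    return timeline

if __name__ == "__main__":
    guests = [VCpu("web", weight=2), VCpu("db", weight=1)]
    run = schedule(guests, ticks=90)
    # With weights 2:1, "web" should receive roughly twice the ticks of "db".
    print({name: run.count(name) for name in ("web", "db")})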
52

Towards a USB control area network

Golchin, Ahmad 01 February 2024 (has links)
Cyber-physical systems are computers equipped with sensors and actuators that enable them to interact with their surrounding environments. Ground vehicles, drones, and manufacturing robots are examples of such systems that require timing guarantees in addition to functional correctness to achieve their mission objectives. These systems often use multiple microcontroller boards for workload distribution and physical redundancy. The emergence of PC-class embedded systems featuring high processing capabilities and abundant resources presents an opportunity to consolidate separate microcontroller boards as software-defined functions into fewer computer systems. For instance, current automotive systems utilize upwards of 100 electronic control units (ECUs) for chassis, body, power-train, infotainment, and vehicle control services. Consolidation saves manufacturing costs, reduces wiring, simplifies packaging in space-limited situations, and streamlines software update delivery to end-users. However, consolidating functions on PC-class hardware does not address the real-time I/O challenges. A fundamental problem in such real-time solutions is the handling of device input and output in a timely manner. For example, a control system might require input data from a sensor to be sampled and processed regularly so that output signals to actuators occur within specific delay bounds. Input/output (I/O) devices connect to the host computer using different types of bus interfaces not necessarily supported by PC-class hardware natively. Examples of such interfaces include Controller Area Network (CAN) and FlexRay, which are prominent in the automotive world, but are not found in PC-class embedded systems. Universal Serial Bus (USB) is now ubiquitous in the PC-class domain, in part due to its support for many classes of devices with simplified hardware needed to connect to the host, and can be utilized to bridge this gap. USB provides the throughput and delay capabilities for next-generation high-bandwidth sensors to be integrated with actuators in control area networks. However, typical USB host controller drivers suffer from potential timing delays that affect the delivery of data between tasks and devices. This Ph.D. thesis examines the use of USB as the physical fabric for host-to-device and host-to-host communication, without special switching hardware or protocol translation logic, and through a unified programming interface. Combined with the real-time scheduling framework of the Quest RTOS, this work investigates how to form networks of I/O devices and computing nodes over USB with end-to-end timing guarantees. The main contribution of this thesis is a USB-centric design solution for real-time cyber-physical systems with distributed computing nodes.
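One concrete building block of such timing guarantees is admission control for periodic USB traffic: the host must not commit more periodic bus time than a (micro)frame can hold. The sketch below is a simplified, illustrative admission test, not the thesis's actual mechanism; the roughly 80% periodic-bandwidth ceiling per high-speed microframe comes from the USB 2.0 specification, while the per-transfer costs are invented placeholders.

```python
"""Simplified admission test for periodic USB transfers.

USB 2.0 reserves at most ~80% of each 125 us high-speed microframe for
periodic (interrupt/isochronous) traffic, so a host stack should reject a
new periodic endpoint if the worst-case per-microframe bus time would
exceed that budget. Transfer costs below are illustrative placeholders,
not values derived from the USB specification."""

MICROFRAME_US = 125.0
PERIODIC_BUDGET_US = 0.80 * MICROFRAME_US   # ~80% ceiling for periodic traffic

def admit(transfers, new_transfer):
    """transfers: list of (bus_time_us, period_in_microframes) tuples."""
    candidate = transfers + [new_transfer]
    # Worst case: every transfer falls due in the same microframe, so sum
    # the per-instance bus times (a utilization-style bound).
    worst_case_us = sum(cost for cost, _period in candidate)
    return worst_case_us <= PERIODIC_BUDGET_US

existing = [(20.0, 1), (30.0, 8)]      # e.g., a sensor and an actuator endpoint
print(admit(existing, (40.0, 4)))      # True: 90 us fits the 100 us budget
print(admit(existing, (60.0, 1)))      # False: 110 us exceeds the budget
```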
53

Resilire: Achieving High Availability Through Virtual Machine Live Migration

Lu, Peng 16 October 2013 (has links)
High availability is a critical feature of data centers, cloud, and cluster computing environments. Replication is a classical approach to increase service availability by providing redundancy. However, traditional replication methods are increasingly unattractive for deployment due to several limitations such as application-level non-transparency, non-isolation of applications (causing security vulnerabilities), complex system management, and high cost. Virtualization overcomes these limitations through another layer of abstraction, and provides high availability through virtual machine (VM) live migration: a guest VM image running on a primary host is transparently check-pointed and migrated, usually at a high frequency, to a backup host, without pausing the VM; the VM is resumed from the latest checkpoint on the backup when a failure occurs. A virtual cluster (VC) generalizes the VM concept for distributed applications and systems: a VC is a set of multiple VMs deployed on different physical machines connected by a virtual network. This dissertation presents a set of VM live migration techniques, their implementations in the Xen hypervisor and Linux operating system kernel, and experimental studies conducted using benchmarks (e.g., SPEC, NPB, Sysbench) and production applications (e.g., Apache webserver, SPECweb). We first present a technique for reducing VM migration downtimes called FGBI. FGBI reduces the dirty memory updates that must be migrated during each migration epoch by tracking memory at block granularity. Additionally, it determines memory blocks with identical content and shares them to reduce the memory overhead of block-level tracking, and uses a hybrid compression mechanism on the dirty blocks to reduce the migration traffic. We implement FGBI in the Xen hypervisor and conduct experimental studies, which reveal that the technique reduces the downtime by 77% and 45% over competitors including LLM and Remus, respectively, with a performance overhead of 13%. We then present a lightweight, globally consistent checkpointing mechanism for virtual clusters, called VPC, which checkpoints the VC for immediate restoration after (one or more) VM failures. VPC predicts the checkpoint-caused page faults during each checkpointing interval, in order to implement a lightweight checkpointing approach for the entire VC. Additionally, it uses a globally consistent checkpointing algorithm, which preserves the global consistency of the VMs' execution and communication states, and only saves the updated memory pages during each checkpointing interval. Our Xen-based implementation and experimental studies reveal that VPC reduces the solo VM downtime by as much as 45% and reduces the entire VC downtime by as much as 50% over competitors including VNsnap, with a memory overhead of 9% and performance overhead of 16%. The dissertation's third contribution is a VM resumption mechanism, called VMresume, which restores a VM from a (potentially large) checkpoint on slow-access storage in a fast and efficient way. VMresume predicts and preloads the memory pages that are most likely to be accessed after the VM's resumption, minimizing otherwise potential performance degradation due to cascading page faults that may occur on VM resumption. Our experimental studies reveal that VM resumption time is reduced by an average of 57% and the VM's unusable time is reduced by 73.8% over native Xen's resumption mechanism. Traditional VM live migration mechanisms are based on hypervisors. However, hypervisors are increasingly becoming the source of several major security attacks and flaws. We present a mechanism called HSG-LM that does not involve the hypervisor during live migration. HSG-LM is implemented in the guest OS kernel so that the hypervisor is completely bypassed throughout the entire migration process. The mechanism exploits a hybrid strategy that reaps the benefits of both pre-copy and post-copy migration mechanisms, and uses a speculation mechanism that improves the efficiency of handling post-copy page faults. We modify the Linux kernel and develop a new page fault handler inside the guest OS to implement HSG-LM. Our experimental studies reveal that the technique reduces the downtime by as much as 55%, and reduces the total migration time by as much as 27% over competitors including Xen-based pre-copy, post-copy, and self-migration mechanisms. In a virtual cluster environment, one of the main challenges is to ensure equal utilization of all the available resources while avoiding overloading a subset of machines. We propose an efficient load balancing strategy using VM live migration, called DCbalance. Differently from previous work, DCbalance records the history of mappings to inform future placement decisions, and uses a workload-adaptive live migration algorithm to minimize VM downtime. We improve Xen's original live migration mechanism, implement the DCbalance technique, and conduct experimental studies. Our results reveal that DCbalance reduces the decision generating time by 79%, the downtime by 73%, and the total migration time by 38%, over competitors including the OSVD virtual machine load balancing mechanism and the DLB (Xen-based) dynamic load balancing algorithm. The dissertation's final contribution is a technique for VM live migration in Wide Area Networks (WANs), called FDM. In contrast to live migration in Local Area Networks (LANs), VM migration in WANs involves migrating disk data in addition to memory state, because the source and the target machines do not share the same disk service. FDM is a fast and storage-adaptive migration mechanism that transmits both memory state and disk data with short downtime and total migration time. FDM uses the page cache to identify data that is duplicated between memory and disk, so as to avoid transmitting the same data unnecessarily. We implement FDM in Xen, targeting different disk formats including raw and Qcow2. Our experimental studies reveal that FDM reduces the downtime by as much as 87%, and reduces the total migration time by as much as 58% over competitors including pre-copy or post-copy disk migration mechanisms and the disk migration mechanism implemented in BlobSeer, a widely used large-scale distributed storage service. / Ph. D.
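FGBI's core idea, tracking dirty state at sub-page block granularity and avoiding retransmission of blocks whose content was already sent, can be conveyed with a small sketch. This is an invented illustration of the technique as described in the abstract, not code from the dissertation; block size and the hash choice are arbitrary.

```python
"""Sketch of block-granularity dirty tracking with content dedup: only
blocks whose content changed since the last epoch are queued for
migration, and blocks whose hash was already sent are referenced instead
of retransmitted. Illustrative only."""

import hashlib

BLOCK = 256  # bytes per tracked block (finer than a 4 KiB page)

def block_hashes(memory: bytes):
    return [hashlib.sha1(memory[i:i + BLOCK]).digest()
            for i in range(0, len(memory), BLOCK)]

def delta_for_epoch(prev_hashes, memory, sent_hashes):
    """Return (block indices to transmit, block indices sent by reference)."""
    transmit, by_reference = [], []
    for idx, h in enumerate(block_hashes(memory)):
        if prev_hashes[idx] == h:
            continue                      # clean block: nothing to send
        if h in sent_hashes:
            by_reference.append(idx)      # identical content already on backup
        else:
            transmit.append(idx)
            sent_hashes.add(h)
    return transmit, by_reference

mem = bytearray(4096)
prev = block_hashes(bytes(mem))
sent = set(prev)
mem[0:4] = b"dirt"                        # dirty block 0
mem[512:516] = b"dirt"                    # dirty block 2 with identical content
tx, ref = delta_for_epoch(prev, bytes(mem), sent)
print(tx, ref)                            # [0] [2]: one block sent, one deduped
```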
54

Design and Implementation of the VirtuOS Operating System

Nikolaev, Ruslan 21 January 2014 (has links)
Most operating systems provide protection and isolation to user processes, but not to critical system components such as device drivers or other systems code. Consequently, failures in these components often lead to system failures. VirtuOS is an operating system that exploits a new method of decomposition to protect against such failures. VirtuOS exploits virtualization to isolate and protect vertical slices of existing OS kernels in separate service domains. Each service domain represents a partition of an existing kernel, which implements a subset of that kernel's functionality. Service domains directly service system calls from user processes. VirtuOS exploits an exceptionless model, avoiding the cost of a system call trap in many cases. We illustrate how to apply exceptionless system calls across virtualized domains. To demonstrate the viability of VirtuOS's approach, we implemented a prototype based on the Linux kernel and Xen hypervisor. We created and evaluated a network and a storage service domain. Our prototype retains compatibility with existing applications, can survive the failure of individual service domains, and outperforms alternative approaches such as isolated driver domains, even exceeding the performance of native Linux for some multithreaded workloads. The evaluation of VirtuOS revealed costs due to decomposition, memory management, and communication, which necessitated a fine-grained analysis to understand their impact on the system's performance. The interaction of virtual machines with multiple underlying software and hardware layers in virtualized environments makes this task difficult. Moreover, performance analysis tools commonly used in native environments were not available in virtualized environments. Our work addresses this problem to enable an in-depth performance analysis of VirtuOS. Our Perfctr-Xen framework provides capabilities for per-thread analysis with both accumulative event counts and interrupt-driven event sampling. Perfctr-Xen is a flexible and generic tool, supports different modes of virtualization, and can be used for many applications outside of VirtuOS. / Ph. D.
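The exceptionless model VirtuOS builds on can be pictured as request passing over shared memory instead of trapping: the caller posts a request that a service-domain worker drains, and keeps running until it needs the result. The Python sketch below mimics only this control flow (the real mechanism lives in kernel code and shared-memory rings across Xen domains); all names are invented.

```python
"""Mimics exceptionless system-call dispatch: instead of trapping, a
caller enqueues a request that a service thread processes asynchronously,
and the caller collects the result later. Illustrative control-flow
sketch only; VirtuOS implements this across Xen service domains in C."""

import queue
import threading

class Completion:
    """Result slot the caller polls instead of blocking in a trap."""
    def __init__(self):
        self.done = threading.Event()
        self.result = None

requests: "queue.Queue" = queue.Queue()

def service_domain():
    # Stands in for a service domain draining a shared request ring.
    while True:
        item = requests.get()
        if item is None:
            return
        op, payload, completion = item
        if op == "write":
            completion.result = len(payload)   # pretend every byte was written
        completion.done.set()

def write_exceptionless(payload: bytes) -> Completion:
    c = Completion()
    requests.put(("write", payload, c))
    return c                                   # caller keeps running meanwhile

worker = threading.Thread(target=service_domain, daemon=True)
worker.start()

pending = write_exceptionless(b"hello")
# ... the caller does other useful work here instead of sitting in a trap ...
pending.done.wait()
print("wrote", pending.result, "bytes")
requests.put(None)                             # shut the worker down
```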
55

Jämförelse av Hypervisor & Zoner : Belastningstester vid drift av webbservrar [Comparison of Hypervisors and Zones: Load Testing of Web Servers in Operation]

Nyquist, Johan, Manfredsson, Alexander January 2013 (has links)
Virtualization of computers in general means that the whole or parts of a machine configuration are split into multiple execution environments. It is not just the computer itself that can be virtualized, but also resources such as memory, storage, and networking. Virtualization is often used to utilize system resources more efficiently. A hypervisor acts as a layer between the operating system and the underlying hardware; with a hypervisor, each virtual machine has its own operating system kernel. Another technique that dispenses with this middle layer is called zones. Zones are a natural part of the operating system, and all instances share the same kernel, which does not add any extra overhead. The problem with the hypervisor is that it is a resource-demanding technique. With zones, this problem should be avoidable by removing the hypervisor layer and instead running instances that communicate directly with the operating system kernel. This was only a theoretical foundation, and no previous research had been done, which prompted this investigation. To illustrate the problem, we used Apache as a web server and the tool httperf to run load tests against it. By doing this we were able to identify that the virtualized server performed worse than a physical server (reference machine), and that the newer zones technique contributes lower overhead, making the system perform better than with the traditional hypervisor. To support our theory, two tests were performed: the first consisted of one virtualized server, the second of three virtual servers. The reason was to see how the different techniques performed under different scenarios. In both cases we found that zones performed better and did not lose as much performance relative to the reference machines.
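The measurement loop the authors describe can be sketched as follows. The httperf flags shown (--server, --port, --uri, --num-conns, --rate) are standard httperf options, but the output parsing and the hostnames are assumptions for illustration, not the thesis's actual harness.

```python
"""Sketch of a load-test harness in the spirit of the thesis: run httperf
against a bare-metal reference, a zone, and a hypervisor guest, then
report the relative performance drop. Parsing assumes httperf's usual
"Request rate: N req/s" summary line; hostnames are placeholders."""

import re
import subprocess

def request_rate(host: str, conns: int = 1000, rate: int = 100) -> float:
    out = subprocess.run(
        ["httperf", "--server", host, "--port", "80", "--uri", "/",
         "--num-conns", str(conns), "--rate", str(rate)],
        capture_output=True, text=True, check=True).stdout
    match = re.search(r"Request rate:\s*([\d.]+)\s*req/s", out)
    if match is None:
        raise RuntimeError("could not parse httperf output")
    return float(match.group(1))

reference = request_rate("bare-metal.example")    # physical reference machine
zone      = request_rate("zone.example")          # zone-based instance
hyper     = request_rate("hypervisor.example")    # hypervisor-based guest

for name, measured in (("zone", zone), ("hypervisor", hyper)):
    overhead = 100.0 * (reference - measured) / reference
    print(f"{name}: {measured:.1f} req/s ({overhead:.1f}% below reference)")
```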
56

Detecting latency spikes in network quality measurements caused by hypervisor pausing virtual environment execution : Finding ways to detect hypervisor-induced latency spikes during an execution in a virtual environment from the virtual environment.

Bouaddi, Hilaire January 2022 (has links)
Virtual environments have transformed over the years the way software is built and distributed. The recent growth of services such as Amazon EC2 or Google Cloud is representative of this trend and encourages developers to build software intended for virtual environments like virtual machines or containers. Despite all the benefits that virtualization brings (isolation, security, energy efficiency, stability, portability, etc.), the extra layer of software between the virtual environment and the hardware, called the hypervisor, increases the complexity of a system and the interpretation of its metrics. In this thesis, we explore the situation of software performing latency measurements from a virtual environment. This is an example of a use case where latency from the hypervisor can lead to measurable noise in the virtual environment's metrics if the hypervisor makes the environment wait milliseconds for resources. To solve this problem, we propose an algorithm that filters this noise out of metrics computed from within the virtual environment. The algorithm was developed by studying the correlation between those metrics and hypervisor-induced latency spikes. We also aim to be hypervisor-agnostic, meaning that this work stays relevant whether a virtual environment is deployed locally or on a cloud service with different (and constantly evolving) hypervisor technologies. This research gives an overview of hypervisor technologies and of how latency can appear when executing processes in virtual environments. As we will see, computing the metric and running the algorithm make network quality measurements from virtual environments more reliable and can explain unexpected latencies.
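A common way to observe such pauses from inside the guest, and the general direction this abstract describes, is a heartbeat that repeatedly sleeps for a short interval and flags iterations whose wall-clock gap far exceeds the request, since the overshoot is time the virtual CPU plausibly spent descheduled. The sketch below is illustrative only and is not the author's algorithm; the threshold is an invented parameter.

```python
"""Heartbeat-style detector for hypervisor-induced pauses, as seen from
inside a virtual environment: sleep for a fixed short interval and treat
a large overshoot as evidence the vCPU was paused. Latency samples taken
during flagged windows can then be filtered out of network-quality
metrics. Illustrative sketch; the 5 ms threshold is invented."""

import time

INTERVAL_S = 0.001         # request a 1 ms sleep each iteration
SPIKE_THRESHOLD_S = 0.005  # overshoot beyond this suggests a pause

def detect_pauses(duration_s: float):
    pauses = []            # (start, end) wall-clock windows to distrust
    end_time = time.monotonic() + duration_s
    last = time.monotonic()
    while last < end_time:
        time.sleep(INTERVAL_S)
        now = time.monotonic()
        overshoot = (now - last) - INTERVAL_S
        if overshoot > SPIKE_THRESHOLD_S:
            pauses.append((last, now))   # vCPU likely descheduled here
        last = now
    return pauses

def filter_samples(samples, pauses):
    """Drop (timestamp, latency) samples that overlap a suspected pause."""
    return [(t, lat) for t, lat in samples
            if not any(start <= t <= end for start, end in pauses)]

if __name__ == "__main__":
    print(detect_pauses(1.0))   # usually empty on idle bare metal
```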
57

No Hypervisor Is an Island : System-wide Isolation Guarantees for Low Level Code

Schwarz, Oliver January 2016 (has links)
The times when malware was mostly written by curious teenagers are long gone. Nowadays, threats come from criminals, competitors, and government agencies. Some of them are very skilled and very targeted in their attacks. At the same time, our devices – for instance mobile phones and TVs – have become more complex, connected, and open to the execution of third-party software. Operating systems should separate untrusted software from confidential data and critical services. But their vulnerabilities often allow malware to break the separation and isolation they are designed to provide. To strengthen the protection of select assets, security research has started to create complementary machinery such as security hypervisors and separation kernels, whose sole task is separation and isolation. The reduced size of these solutions allows for thorough inspection, both manual and automated. In some cases, formal methods are applied to create mathematical proofs of the security of these systems. The isolation solutions themselves are carefully analyzed, and the included software is often even verified at the binary level. The role of other software and hardware in overall system security has received less attention so far. The subject of this thesis is to shed light on these aspects, mainly on (i) unprivileged third-party code and its ability to influence security, (ii) peripheral devices with direct access to memory, and (iii) boot code, and how we can selectively enable and disable isolation services without compromising security. The six papers included in this thesis are both design and verification oriented, with an emphasis on the analysis of instruction set architectures. With the help of a theorem prover, we implemented various types of machinery for the automated information flow analysis of several processor architectures, and used these tools to clarify which registers unprivileged software can access on ARM and MIPS machines. The analysis is guaranteed to be both sound and accurate. To the best of our knowledge, we are the first to publish a solution for the automated analysis and proof of information-flow properties for standard instruction set architectures.
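The flavor of such register-level information-flow analysis can be conveyed with a toy example: given straight-line instructions over named registers, compute which registers' final values may depend on a secret. The machinery in the thesis operates on full ISA models inside a theorem prover with soundness proofs; the sketch below is an invented miniature of only the propagation idea.

```python
"""Miniature information-flow analysis over a toy instruction set:
propagate a 'tainted by secret' label through straight-line code and
report which registers may depend on the secret at the end. The real
analyses in the thesis are proven sound over full ISA models; this
illustrates only the label-propagation idea."""

def analyze(program, secret_regs):
    tainted = set(secret_regs)
    for op, dst, *srcs in program:
        if op == "mov_imm":               # constant load clears the label
            tainted.discard(dst)
        else:                             # dst depends on all of its sources
            if any(s in tainted for s in srcs):
                tainted.add(dst)
            else:
                tainted.discard(dst)
    return tainted

prog = [
    ("mov_imm", "r0"),            # r0 := constant
    ("add", "r1", "r1", "r2"),    # r1 := r1 + r2 (r2 is secret)
    ("mov", "r3", "r1"),          # r3 := r1, so r3 now carries the secret
    ("mov_imm", "r1"),            # r1 := constant (overwritten, now clean)
]
print(analyze(prog, secret_regs={"r2"}))   # {'r2', 'r3'}
```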
58

Architectural Introspection and Applications

Litty, Lionel 30 August 2010 (has links)
Widespread adoption of virtualization has resulted in an increased interest in Virtual Machine (VM) introspection. To perform useful analysis of the introspected VMs, hypervisors must deal with the semantic gap between the low-level information available to them and the high-level OS abstractions they need. To bridge this gap, systems have proposed making assumptions derived from the operating system source code or symbol information. As a consequence, the resulting systems create a tight coupling between the hypervisor and the operating systems run by the introspected VMs. This coupling is undesirable because any change to the internals of the operating system can render the output of the introspection system meaningless. In particular, malicious software can evade detection by making modifications to the introspected OS that break these assumptions. Instead, in this thesis, we introduce Architectural Introspection, a new introspection approach that does not require information about the internals of the introspected VMs. Our approach restricts itself to leveraging constraints placed on the VM by the hardware and the external environment. To interact with both of these, the VM must use externally specified interfaces that are both stable and not linked with a specific version of an operating system. Therefore, systems that rely on architectural introspection are more versatile and more robust than previous approaches to VM introspection. To illustrate the increased versatility and robustness of architectural introspection, we describe two systems, Patagonix and P2, that can be used to detect rootkits and unpatched software, respectively. We also detail Attestation Contracts, a new approach to attestation that relies on architectural introspection to improve on existing attestation approaches. We show that because these systems do not make assumptions about the operating systems used by the introspected VMs, they can be used to monitor both Windows and Linux based VMs. We emphasize that this ability to decouple the hypervisor from the introspected VMs is particularly useful in the emerging cloud computing paradigm, where the virtualization infrastructure and the VMs are managed by different entities. Finally, we show that these approaches can be implemented with low overhead, making them practical for real world deployment.
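Patagonix's central check, that only code pages whose contents match a known-good binary may execute, can be sketched without consulting any guest-OS internals, which is precisely the point of architectural introspection. The sketch below is an invented illustration of that check; a real implementation relies on hardware page protections and hypervisor-intercepted execute faults.

```python
"""Conceptual sketch of Patagonix-style code identification: the
hypervisor marks guest pages non-executable, and on each execute fault it
hashes the faulting page and compares it against hashes of pages from
known-good binaries. Only hardware-enforced events and page contents are
used, never guest-OS internals. Invented illustration, not the system."""

import hashlib

PAGE = 4096

def split_pages(image: bytes):
    # Zero-pad the final partial page so hashes are over full pages.
    return [image[i:i + PAGE].ljust(PAGE, b"\0")
            for i in range(0, len(image), PAGE)]

def page_hash(page: bytes) -> bytes:
    return hashlib.sha256(page).digest()

class IntrospectionMonitor:
    def __init__(self, trusted_binaries):
        # Precompute hashes of every page of every known-good binary.
        self.known = {page_hash(p) for binary in trusted_binaries
                      for p in split_pages(binary)}

    def on_execute_fault(self, guest_page: bytes) -> bool:
        """Called when the guest first executes a page; True = allow."""
        return page_hash(guest_page) in self.known

trusted = [b"\x90" * 8192]                   # stand-in for a measured binary
mon = IntrospectionMonitor(trusted)
print(mon.on_execute_fault(b"\x90" * 4096))  # True: page of the trusted image
print(mon.on_execute_fault(b"\xcc" * 4096))  # False: unknown (possible rootkit)
```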
60

Real-time hierarchical hypervisor

Poon, Wing-Chi 07 February 2011 (has links)
Both real-time virtualization and recursive virtualization are desirable properties of a virtual machine monitor (or hypervisor). Although the prospects for virtualization, and even recursive virtualization, have improved as PC hardware has become faster, the real-time systems community has so far not been able to reap much benefit. This is because no existing virtualization mechanism can properly support the stringent timing requirements needed by real-time systems. It is hard to do real-time virtualization, and it is even harder to do it recursively. In this dissertation, we propose a framework whereby the hypervisor is capable of running real-time guests and participating in recursive virtualization. Such a hypervisor is called a real-time hierarchical hypervisor. We first look at virtualization of abstract resource types from the real-time systems perspective. Unlike previous work on recursive real-time partitioning, which assumes fully preemptable resources, we concentrate on other, often more practical, types of scheduling constraints, especially non-preemptive and limited-preemptive ones. We then consider the current x86 architecture and explore the problems that need to be addressed for real-time recursive virtualization. We drill down on the problem that affects timing properties the most, namely the recursive forwarding and delivery of interrupts, exceptions, and intercepts. We choose the x86 architecture because it is popular and readily available, but it is by no means the only architecture of choice for real-time recursive virtualization. We conclude the research with an architecture-independent discussion of future possibilities in real-time recursive virtualization.
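The hierarchy the dissertation targets can be pictured as schedulers stacked on schedulers, each level granting its children a budget of CPU time per period. The sketch below simulates that composition in miniature; it is purely illustrative, uses invented parameters, and ignores the interrupt-forwarding problems the dissertation actually tackles.

```python
"""Toy two-level hierarchical scheduler: a root hypervisor grants each
guest a (budget, period) CPU reservation, and each guest schedules its
own tasks round-robin within its budget. Real-time recursive
virtualization extends this recursively; this sketch shows only the
budget accounting at one level of nesting."""

from collections import deque

class Guest:
    def __init__(self, name, budget, period, tasks):
        self.name, self.budget, self.period = name, budget, period
        self.remaining = budget
        self.tasks = deque(tasks)          # round-robin run queue

    def run_one_tick(self):
        task = self.tasks[0]
        self.tasks.rotate(-1)              # next task gets the next tick
        self.remaining -= 1
        return f"{self.name}:{task}"

def simulate(guests, ticks):
    trace = []
    for t in range(ticks):
        for g in guests:                   # replenish budgets at period starts
            if t % g.period == 0:
                g.remaining = g.budget
        # Root scheduler: run any guest that still has budget this period.
        runnable = [g for g in guests if g.remaining > 0]
        trace.append(runnable[0].run_one_tick() if runnable else "idle")
    return trace

guests = [Guest("rt_guest", budget=2, period=4, tasks=["ctrl"]),
          Guest("gp_guest", budget=2, period=4, tasks=["a", "b"])]
print(simulate(guests, ticks=8))   # rt_guest gets its 2 ticks every period
```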
