1 |
Coordinated memory management in virtualized environments. Mohapatra, Dushmanta, 07 January 2016
Two recent developments are the primary motivation for the research in my dissertation. First, virtualization is no longer confined to powerful server-class machines: it has already been introduced into smartphones and will become part of other high-end embedded systems, such as automobiles, in the near future. Second, more and more resource-intensive, latency-sensitive applications are being run on devices that are themselves resource-constrained, and introducing virtualization into the software stack only exacerbates the resource allocation problem.
The focus of my research is memory management in virtualized environments. Existing memory-management mechanisms were designed for server-class machines, and their implementations are geared towards applications running primarily in data centers and cloud setups. In those settings, the goals are appropriate load balancing and a fair division of resources, over-provisioning may be the norm, and the latency of resource-management mechanisms is not a major concern. On smartphones and other handheld devices, however, prevalent applications such as media streaming and social networking are both resource-intensive and latency-sensitive. Moreover, the bursty nature of their memory requirements produces spikes in the memory needs of the virtual machines. Since over-provisioning is not an option in these domains, fast and effective memory-management mechanisms are necessary.
The overall thesis of my dissertation is that, with appropriate design and implementation, it is possible to achieve inter-VM memory management with a latency comparable to that of intra-VM mechanisms such as 'malloc'. Towards realizing and validating this goal, my dissertation makes the following research contributions: (1) I analyzed the memory requirement patterns of prevalent applications, which exhibit bursty behavior, and demonstrated the need for fast memory-management mechanisms. (2) I designed and implemented a coordinated memory-management mechanism in a Xen-based virtualized setup, based on the split-driver principle. (3) I analyzed this mechanism and evaluated it against comparable memory-management mechanisms. (4) I analyzed the extent of scheduler interference with the mechanism's operation and implemented constructs that reduce both the interference and the latency. (5) Based on this analysis, I revised the implementation so that the Xen hypervisor plays a more significant and active role in coordinating the mechanism, and showed in detail the latency improvements due to this design change. (6) Finally, to validate my hypothesis, I performed a comparative analysis of inter-VM and intra-VM memory-management mechanisms.
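The latency baseline against which the dissertation measures inter-VM memory management can be illustrated with a small sketch. The snippet below is my own illustration, not the author's benchmark: it times a burst of allocations inside a single address space, the kind of intra-VM 'malloc' cost the thesis statement refers to.

```python
import time

def time_burst(n_allocs=100, size=1 << 20):
    """Time a burst of 1 MiB allocations: an illustrative intra-VM
    latency baseline (not the dissertation's actual benchmark)."""
    latencies, blocks = [], []
    for _ in range(n_allocs):
        t0 = time.perf_counter()
        blocks.append(bytearray(size))   # stands in for malloc()
        latencies.append(time.perf_counter() - t0)
    return latencies

lat = time_burst()
print(f"median per-allocation latency: {sorted(lat)[len(lat)//2] * 1e6:.1f} us")
```

A burst like this completes in microseconds per allocation; the dissertation's claim is that a well-designed inter-VM path, which must cross the hypervisor, can approach this order of magnitude.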
|
2 |
Undetectable Debugger. Demín, Michal, January 2012
Using debuggers is a common means of identifying and analyzing malware (such as viruses, worms, spyware, rootkits, etc.). However, malware can detect debuggers by observing the behavior of the operating system, changes in code (such as breakpoint instructions), and non-standard behavior of the CPU, which can make its analysis hard and tedious. In this thesis we implement a basic debugger, based on the QEMU emulator, that hides its presence from the debugged application. This is accomplished by using QEMU as a virtual machine and adding context awareness to its already existing primitive debugger. The context awareness is implemented using an embedded Python scripting engine; such a setup gives us a flexible way of implementing support for various operating systems. We have developed two examples: one for the RTEMS operating system, which serves as an easy-to-understand reference implementation, and one for the Linux operating system, which shows the abilities of the undetectable debugger in a more realistic scenario.
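One concrete example of the OS-level observation mentioned above: on Linux, a ptrace-attached debugger is visible in `/proc/self/status` as a non-zero `TracerPid`. The sketch below (my illustration, not code from the thesis) parses that field the way malware commonly does; a debugger living in the emulator, as described here, never attaches inside the guest, so this check stays quiet.

```python
def tracer_pid(status_text):
    """Extract TracerPid from /proc/<pid>/status contents.
    Non-zero means an in-guest ptrace debugger is attached."""
    for line in status_text.splitlines():
        if line.startswith("TracerPid:"):
            return int(line.split()[1])
    return 0

# Sample /proc/self/status excerpt as seen under an emulator-level
# debugger: no in-guest tracer, so TracerPid is 0.
sample = "Name:\tmalware\nTracerPid:\t0\nUid:\t1000"
print("debugger detected" if tracer_pid(sample) != 0 else "no in-guest debugger")
```

In a real run the text would come from `open("/proc/self/status").read()`; the point is that a VM-level debugger leaves this and similar guest-visible state untouched.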
|
3 |
An Overview of Virtualization Technologies for Cloud Computing. Chen, Wei-Min, 07 September 2012
Cloud computing is a new concept that incorporates many existing technologies, virtualization among them. Virtualization is essential to establishing cloud computing: with it, hardware resources can be pooled into a huge resource pool for users to draw on. This thesis begins with an introduction to the widely used service model that classifies cloud computing into three layers; from the bottom up, they are IaaS, PaaS, and SaaS. Service providers are given as examples for each layer, such as AWS Elastic Beanstalk and Google App Engine for PaaS, and Amazon CloudFormation and Microsoft mCloud for IaaS. Next, we turn to hypervisors and the technologies for virtualizing hardware resources such as CPUs, memory, and devices. Storage and network virtualization techniques are then discussed. Finally, conclusions are drawn and future directions for virtualization are outlined.
|
4 |
Rethinking operating system trust. Hofmann, Owen Sebastian, 25 February 2014
Operating system kernels present a difficult security challenge. Despite
their millions of lines of code and broad, complex attack surface, they
remain a trusted component shared between all applications. If an attacker
can combine an exploit for any application on a system with a kernel
exploit or privilege escalation, the attacker can then control any other
application, regardless of whether the second application was itself
vulnerable.
This dissertation presents two hypervisor-based systems: OSck, which increases
the trustworthiness of a guest kernel by detecting kernel rootkits, and
InkTag, which removes the need for an application to trust the kernel at
all. Vital to both systems is their use of information from a potentially
malicious kernel. These systems rely on information from the kernel about
its own functionality to make their implementation simpler, more efficient,
and more secure. Importantly, although they rely on this information, they
do not trust it. A kernel that lies about its functionality to appear
benign will be detected, as will a kernel that simply acts maliciously.
OSck detects kernel rootkits: malicious software programs that are
particularly difficult to detect because they modify internal kernel
operation to hide their presence. Running concurrently with an operating
system and isolated by the hypervisor, OSck verifies safety properties for
large portions of the kernel heap with minimal overhead, by deducing type
information from unmodified kernel source code and in-memory kernel data
structures.
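The kind of safety property OSck verifies can be sketched abstractly as a cross-view consistency check: a kernel object reachable through one data structure must also be reachable through the others, and a rootkit that unlinks a process from the visible task list breaks that invariant. The snippet below is my simplified illustration of this general idea (the names and structures are invented), not OSck's actual analysis.

```python
def hidden_objects(visible_list, authoritative_set):
    """Objects present in the authoritative kernel structure (e.g. the
    scheduler's run structures) but missing from the structure user
    tools walk (e.g. the task linked list): candidates hidden by a
    rootkit."""
    return set(authoritative_set) - set(visible_list)

# Hypothetical snapshot of two kernel views of the process set.
visible = ["init", "sshd", "bash"]
all_tasks = ["init", "sshd", "bash", "rootkit_proc"]
print(hidden_objects(visible, all_tasks))  # → {'rootkit_proc'}
```

OSck's contribution is doing this kind of check against real in-memory kernel data, with types deduced from unmodified kernel source, rather than against hand-built snapshots like these.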
InkTag gives strong safety guarantees to trusted applications, even in the
presence of a malicious operating system. InkTag isolates applications
from the operating system, and enables applications to validate that the
kernel is acting in good faith, for example by ensuring that the kernel is
mapping the correct file data into the application's address space.
InkTag introduces paraverification, a technique that simplifies the
InkTag hypervisor by forcing the untrusted operating system to participate
in its own verification. InkTag requires that the kernel prove to the
hypervisor that its updates to application state (such as page tables) are
valid, and also to prove to the application that its responses to system
calls are consistent. InkTag is also the first system of its kind to
implement access control, secure naming, and consistency for data on stable
storage.
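Paraverification's core move, shifting the burden of proof onto the untrusted kernel, can be sketched in miniature. The snippet below is a loose, hypothetical illustration of the idea (hashes standing in for InkTag's actual evidence and page-table checks), not the InkTag design itself: the kernel presents an update together with evidence, and the hypervisor checks the evidence instead of reverse-engineering kernel state.

```python
import hashlib

# Hypothetical: hashes of file contents the application declared trusted.
TRUSTED_HASHES = {"app.data": hashlib.sha256(b"expected contents").hexdigest()}

def verify_mapping(filename, page_bytes):
    """The untrusted kernel proposes mapping page_bytes for filename;
    the 'hypervisor' accepts only if the contents match what the
    application expects."""
    claimed = hashlib.sha256(page_bytes).hexdigest()
    return TRUSTED_HASHES.get(filename) == claimed

print(verify_mapping("app.data", b"expected contents"))   # True: accept
print(verify_mapping("app.data", b"tampered contents"))   # False: reject
```

The actual system verifies page-table updates and syscall responses rather than whole-file hashes, but the asymmetry is the same: the kernel does the work of proving, the hypervisor only checks.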
|
5 |
Performance analysis of TCP in KVM virtualized environment. Sasank, Hyderkhan, January 2015
The demand for high-quality services is increasing day by day, and new technologies are being developed to meet it; virtualization is one of them. Although virtualization requires more powerful machines to run the hypervisor, it increases consolidation and thereby makes more efficient use of resources, for example by raising CPU utilization, since several VMs (virtual machines) can run on the same platform, i.e. on the same hypervisor. Two questions drive this research: when several VMs share the CPU, do the performance-influencing factors of virtualization affect TCP performance? And, given that TCP is the most widely used and most reliable transport protocol, does its performance vary with the congestion control mechanism used in the virtualized environment? In this study, we investigate the performance-influencing factors of TCP in the virtualized environment and whether those factors play a role in TCP performance. We also set up a client-server test bed to investigate which TCP congestion control mechanism is best suited for downloading files under virtualization. The congestion control mechanisms used are CUBIC, BIC, HighSpeed, Vegas, Veno, YeAH, Westwood, LP, Scalable, Reno, and Hybla; total download times are compared to determine which algorithm performs best in the virtualized environment. The research method is experimentation: we vary the RAM sizes and CPU cores, which are the performance-influencing factors in virtualization, and analyze the total time to download a file under each TCP congestion control mechanism while running a single guest VM.
Besides varying the congestion control mechanism, we also injected network parameters that affect TCP performance, such as delay, while downloading the file, to match real-world scenarios. The results collected include the average download time of a file across the different memory sizes and CPU core counts, and the average download time for each TCP congestion control mechanism with delay injected. The results show a slight influence on TCP performance from the memory size and CPU cores allotted to the VM in the KVM virtualized environment. Among the congestion control algorithms, TCP BIC and TCP YeAH perform best in the KVM virtualized environment, while TCP LP performs worst.
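On Linux, the congestion control algorithm can be selected per socket, which is presumably how a test bed like this switches between CUBIC, BIC, and the rest without rebooting. The sketch below shows the standard mechanism (the `TCP_CONGESTION` socket option); it is my illustration of the technique, not the thesis's test-bed code.

```python
import socket

def socket_with_cc(algorithm):
    """Open a TCP socket and request a specific congestion control
    algorithm. TCP_CONGESTION is Linux-only; elsewhere the socket is
    returned with the system default."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if hasattr(socket, "TCP_CONGESTION"):
        try:
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION,
                         algorithm.encode())
        except OSError:
            pass  # algorithm module not loaded, or not permitted unprivileged
    return s

s = socket_with_cc("cubic")
print(s.family == socket.AF_INET)  # → True
s.close()
```

Unprivileged processes may only select algorithms listed in `net.ipv4.tcp_allowed_congestion_control` (by default `reno` and `cubic`); the others used in the study would need that sysctl extended or root privileges.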
|
6 |
Server Virtualization. Baker, Scott Michael, January 2005
The client/server paradigm is a common means of implementing an application over a computer network. Servers provide services, such as access to files, directories, or web pages, and clients make use of those services. The communication between the clients and servers takes the form of a network protocol. These network protocols are often rigid and inflexible due to standardization, and because they are often implemented in the operating system kernels of the clients and servers. It is difficult to add new features to existing services without having complete control of all the clients and servers in question. Virtualization is a technique that can be used to alter the properties of a network service without requiring any modifications to the clients or servers. Virtualization is typically performed on an intermediate computer that is interposed between the clients and servers, such as a programmable router. This dissertation motivates the need for virtualization and presents several different examples of successful virtualizations. These virtualizations include translation, aggregation, replication and fortification. Virtualization is demonstrated both on commodity hardware, which has the advantage of low cost, and on a specialized network processor, which offers the advantage of high performance.
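The "translation" flavor of virtualization mentioned above can be pictured with a toy sketch: an interposed middlebox rewrites an old protocol verb into a new one, so unmodified clients keep working against an upgraded server. This is my illustration of the general idea, with an invented protocol, not an example from the dissertation.

```python
# Hypothetical verb-rewriting table applied by the interposed box.
REWRITES = {"GETFILE": "FETCH"}

def translate(request_line):
    """Rewrite a legacy request line into the new protocol's form,
    passing unknown verbs through unchanged."""
    verb, _, rest = request_line.partition(" ")
    return f"{REWRITES.get(verb, verb)} {rest}"

print(translate("GETFILE /docs/readme.txt"))  # → FETCH /docs/readme.txt
```

In the dissertation's setting, logic of this kind runs on the intermediate node (commodity host or network processor), leaving both clients and servers untouched.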
|
7 |
Shared-Memory Optimizations for Virtual Machines. Macdonell, A. Cameron, Unknown Date
No description available.
|
8 |
Understanding and protecting closed-source systems through dynamic analysis. Dolan-Gavitt, Brendan, 12 January 2015
In this dissertation, we focus on dynamic analyses that examine the data handled by programs and operating systems in order to divine the undocumented constraints and implementation details that determine their behavior in the field. First, we introduce a novel technique for uncovering the constraints actually used in OS kernels to decide whether a given instance of a kernel data structure is valid. Next, we tackle the semantic gap problem in virtual machine security: we present a pair of systems that allow, on the one hand, automatic extraction of whole-system algorithms for collecting information about a running system, and, on the other, the rapid identification of "hook points" within a system or program where security tools can interpose to be notified of security-relevant events. Finally, we present and evaluate a new dynamic measure of code similarity that examines the content of the data handled by the code, rather than the syntactic structure of the code itself. This problem has implications both for understanding the capabilities of novel malware and for understanding large binary code bases such as operating system kernels.
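A data-centric similarity measure in the spirit described above can be sketched simply: run two pieces of code on the same inputs, collect the values they produce, and compare the value sets rather than the code. The details below are my simplification for illustration, not the dissertation's actual metric.

```python
def value_set(fn, inputs):
    """Observed data: the set of values fn produces on the inputs."""
    return {fn(x) for x in inputs}

def similarity(fn_a, fn_b, inputs):
    """Jaccard similarity of the two functions' observed value sets;
    syntactically different code with identical behavior scores 1.0."""
    a, b = value_set(fn_a, inputs), value_set(fn_b, inputs)
    return len(a & b) / len(a | b) if a | b else 1.0

inputs = range(10)
print(similarity(lambda x: x * 2, lambda x: x + x, inputs))  # → 1.0
print(similarity(lambda x: x * 2, lambda x: x ** 2, inputs))
```

The appeal for malware analysis is that obfuscation rewrites syntax but tends to preserve the data a routine computes, which is exactly what this measure compares.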
|
9 |
Virtualizace prostoru při prodeji bytů na internetu / Space Virtualization in Internet Apartment Sales. Farlík, Jan, January 2008
This master's thesis is concerned with space virtualization in relation to internet apartment sales, as represented by the developer company Finep CZ a.s. Through an analysis of works by representatives of the philosophy of media, it aims to describe the unequal character of the interaction between web designers and web users: the former, in designing virtual spaces, structure the experience of the latter, since the designed virtual spaces embody assumed images of what potential clients would like rather than the "real" space.
|
10 |
Performance Optimizations for Isolated Driver Domains. Shirole, Sushrut Madhukar, 23 June 2014
In most of today's operating system architectures, device drivers are tightly coupled with other kernel components. In such systems, a fault caused by a malicious or faulty device driver often leads to complete system failure, thereby reducing the overall reliability of the system. Even though a majority of the operating systems provide protection mechanisms at the user level, they do not provide the same level of protection for kernel components. Using virtualization, device drivers can be executed in separate, isolated virtual machines, called driver domains. Such domains provide the same level of isolation to device drivers as operating systems provide to user level applications. Domain-based isolation has the advantage that it is compatible with existing drivers and transparent to the kernel.
However, domain-based isolation incurs significant performance overhead due to the necessary interdomain communication. This thesis investigates techniques for reducing this overhead. The key idea is to replace the interrupt-based notification between domains with a spinning-based approach, thus trading CPU capacity for increased throughput.
We implemented a prototype, called the Isolated Device Driver system (IDDR), which includes front-end and back-end drivers and a communication module. We evaluated the impact of our optimizations for a variety of block devices. Our results show that our solution matches or outperforms Xen's isolated driver domain in most scenarios we considered. / Master of Science
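The core trade the thesis describes, spinning instead of interrupt-style notification, can be sketched with two threads standing in for the two domains. This is my illustration of the idea, not IDDR code: the consumer busy-polls a shared ring rather than blocking on an event, cutting handoff latency at the cost of burning a CPU while idle.

```python
import threading

ring = []   # stands in for a shared-memory ring between domains

def spinning_consumer(out):
    # Busy-poll the ring instead of blocking on an interrupt/event:
    # lower wakeup latency, at the cost of a CPU spinning while idle.
    while True:
        if ring:
            item = ring.pop(0)
            if item is None:   # shutdown marker
                return
            out.append(item)

out = []
t = threading.Thread(target=spinning_consumer, args=(out,))
t.start()
for i in range(5):      # the "driver domain" side producing completions
    ring.append(i)
ring.append(None)
t.join()
print(out)  # → [0, 1, 2, 3, 4]
```

Real inter-domain rings avoid Python-level locking entirely, of course; the sketch only shows why polling removes the notification round-trip from the latency path.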
|