91 |
Development of a Virtual Applications Networking Infrastructure Node. Redmond, Keith. 15 February 2010.
This thesis describes the design of a Virtual Application Networking Infrastructure (VANI) node that can be used to facilitate network architecture experimentation. Currently the VANI nodes provide four classes of physical resources – processing, reconfigurable hardware, storage and interconnection fabric – but the set of sharable resources can be expanded. Virtualization software allows slices of these resources to be apportioned to VANI nodes that can in turn be interconnected to form virtual networks, which can operate according to experimental network and application protocols. This thesis discusses the design decisions that have been made in the development of this system and provides a detailed description of the prototype, including how users interact with the resources and the interfaces provided by the virtualization layers.
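
As a purely illustrative sketch (the class and method names below are assumptions, not the prototype's actual interfaces, which are described in the thesis itself), a slice request against a VANI-style node manager might look like this:

    # Hypothetical sketch of requesting a resource slice from one VANI-style node.
    # All names and capacities here are assumptions for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class SliceRequest:
        processors: int       # virtual processing elements
        fpga_regions: int     # reconfigurable-hardware regions
        storage_gb: int       # storage capacity
        fabric_ports: int     # interconnection-fabric ports

    @dataclass
    class NodeManager:
        free: dict = field(default_factory=lambda: {
            "processors": 8, "fpga_regions": 4, "storage_gb": 500, "fabric_ports": 16})

        def allocate(self, req: SliceRequest) -> bool:
            wanted = vars(req)
            if all(self.free[k] >= v for k, v in wanted.items()):
                for k, v in wanted.items():
                    self.free[k] -= v
                return True   # slice granted; it can now be wired into a virtual network
            return False      # insufficient capacity on this node

    mgr = NodeManager()
    print(mgr.allocate(SliceRequest(processors=2, fpga_regions=1, storage_gb=50, fabric_ports=2)))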
|
92 |
Predictor Virtualization: Teaching Old Caches New Tricks. Burcea, Ioana Monica. 20 August 2012.
To improve application performance, current processors rely on prediction-based hardware optimizations, such as data prefetching and branch prediction. These hardware optimizations store application metadata in on-chip predictor tables and use the metadata to anticipate and optimize for future application behavior. As application footprints grow, the predictor tables need to scale for predictors to remain effective.
One important challenge in processor design is to decide which hardware optimizations to implement and how many resources to dedicate to a specific optimization. Traditionally, processor architects employ a one-size-fits-all approach when designing predictor-based hardware optimizations: for each optimization, a fixed portion of the on-chip resources is allocated to the predictor storage. This approach often leads to sub-optimal designs where: 1) resources are wasted on applications that do not benefit from a particular predictor or require only small predictor tables, or 2) predictors under-perform for applications that need larger predictor tables that cannot be built due to area, latency, and power constraints.
This thesis introduces Predictor Virtualization (PV), a framework that uses the traditional processor memory hierarchy to store application metadata used in speculative hardware optimizations. This makes it possible to emulate larger, more accurate predictor tables, which, in turn, leads to higher application performance. PV exploits the current trend of unprecedentedly large on-chip secondary caches and allocates on demand a small portion of the cache capacity to store application metadata used in hardware optimizations, adjusting to the application’s need for predictor resources. As a consequence, PV is a pay-as-you-go technique that emulates large predictor tables without increasing the dedicated storage overhead.
To demonstrate the benefits of virtualizing hardware predictors, we present virtualized designs for three different hardware optimizations: a state-of-the-art data prefetcher, conventional branch target buffers, and an object-pointer prefetcher. While each of these hardware predictors exhibits different characteristics that lead to a different virtualized design, virtualization improves the cost-performance trade-off for all of these optimizations.
PV increases the utility of traditional processor caches: in addition to being accelerators for slow off-chip memories, on-chip caches are leveraged for increasing the effectiveness of predictor-based hardware optimizations.
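
As a minimal software model of the idea (the table size, eviction policy, and backing-store abstraction are assumptions; the thesis describes the actual hardware mechanisms), a virtualized predictor can be pictured as a small dedicated table that spills and fills entries through a larger store standing in for the secondary cache:

    # Sketch of Predictor Virtualization as a two-level table: a small dedicated
    # structure plus an on-demand backing store standing in for the L2 cache.
    from collections import OrderedDict

    class VirtualizedPredictor:
        def __init__(self, dedicated_entries=64):
            self.dedicated = OrderedDict()   # small dedicated predictor storage (LRU order)
            self.backing = {}                # metadata spilled into the cache hierarchy
            self.capacity = dedicated_entries

        def lookup(self, pc):
            if pc in self.dedicated:         # fast path: hit in dedicated storage
                self.dedicated.move_to_end(pc)
                return self.dedicated[pc]
            if pc in self.backing:           # slower path: fill the entry back on demand
                self._install(pc, self.backing[pc])
                return self.dedicated[pc]
            return None                      # no metadata yet, so no prediction

        def update(self, pc, metadata):
            self._install(pc, metadata)

        def _install(self, pc, metadata):
            self.dedicated[pc] = metadata
            self.dedicated.move_to_end(pc)
            if len(self.dedicated) > self.capacity:
                victim, meta = self.dedicated.popitem(last=False)
                self.backing[victim] = meta  # spill instead of discarding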
|
94 |
Flexible Computing with Virtual Machines. Lagar Cavilla, Horacio Andres. 30 March 2011.
This thesis is predicated upon a vision of the future of computing with a separation of functionality between core and edges, very similar to that governing the Internet itself. In this vision, the core of our computing infrastructure is made up of vast server farms with an abundance of storage and processing cycles. Centralization of computation in these farms, coupled with high-speed wired or wireless connectivity, allows for pervasive access to a highly available and well-maintained repository for data, configurations, and applications. Computation at the edges is concerned with provisioning application state and user data to rich clients, notably mobile devices equipped with powerful displays and graphics processors.
We define flexible computing as systems support for applications that dynamically leverage the resources available in the core infrastructure, or cloud. The work in this thesis focuses on two instances of flexible computing that are crucial to the realization of the aforementioned vision. Location flexibility aims to transparently and seamlessly migrate applications between the edges and the core based on user demand. This enables performing interactive tasks on rich edge clients and computational tasks on powerful core servers. Scale flexibility is the ability of applications executing in cloud environments, such as parallel jobs or clustered servers, to swiftly grow and shrink their footprint according to execution demands.
This thesis shows how we can use system virtualization to implement systems that provide scale and location flexibility. To that end, we build and evaluate two system prototypes: Snowbird and SnowFlock. We present techniques for manipulating virtual machine state that turn running software into a malleable entity which is easily manageable, is decoupled from the underlying hardware, and is capable of dynamic relocation and scaling. This thesis demonstrates that virtualization technology is a powerful and suitable tool to enable solutions for location and scale flexibility.
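
A toy policy makes the location-flexibility decision concrete (the thresholds and metric names are assumptions; Snowbird's actual migration triggers are detailed in the thesis):

    # Illustrative decision rule only: run interactive phases at the edge, compute-bound
    # phases in the core, and otherwise stay put to avoid migration churn.
    def placement(interactive_events_per_s: float, cpu_utilization: float) -> str:
        if interactive_events_per_s > 5.0:   # user is actively interacting: favor the edge
            return "edge"
        if cpu_utilization > 0.8:            # compute-bound phase: favor core servers
            return "core"
        return "stay"

    print(placement(interactive_events_per_s=12.0, cpu_utilization=0.3))  # -> "edge"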
|
95 |
Flexible Monitoring of Storage I/O. Benke, Tim. 17 June 2009.
For any computer system, monitoring its performance is vital to understanding and fixing problems and performance bottlenecks. In this work we present the architecture and implementation of a system for monitoring storage devices that serve virtual machines. In contrast to existing approaches, our system is more flexible because it employs a query language that can capture both specific and detailed information on I/O transfers. Our monitoring solution therefore provides users with enough statistics to enable them to find and solve problems without overwhelming them with too much information. Our system monitors I/O activity in virtual machines and supports basic distributed query processing. Experiments show the performance overhead of the prototype implementation to be acceptable in many realistic settings.
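
The abstract does not reproduce the query language itself, so the following is only a stand-in showing the kind of question such a monitor answers ("average latency of large writes, per virtual machine") as a filter-and-aggregate over hypothetical I/O trace records:

    # Hypothetical I/O trace records; the field names are assumptions for illustration.
    from collections import defaultdict

    records = [
        {"vm": "vm1", "op": "write", "bytes": 1 << 20, "latency_ms": 4.2},
        {"vm": "vm1", "op": "read",  "bytes": 4096,    "latency_ms": 0.3},
        {"vm": "vm2", "op": "write", "bytes": 2 << 20, "latency_ms": 9.1},
    ]

    sums, counts = defaultdict(float), defaultdict(int)
    for r in records:
        if r["op"] == "write" and r["bytes"] >= 1 << 20:   # keep only large writes
            sums[r["vm"]] += r["latency_ms"]
            counts[r["vm"]] += 1

    for vm in sums:
        print(vm, sums[vm] / counts[vm])   # average large-write latency per VM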
|
97 |
Resource Allocation and Survivability in Network Virtualization Environments. Rahman, Muntasir Raihan. January 2010.
Network virtualization can offer more flexibility and better manageability for the future Internet by allowing multiple heterogeneous virtual networks (VN) to coexist on a shared infrastructure provider (InP) network. A major challenge in this respect is the VN embedding problem that deals with the efficient mapping of virtual resources on InP network resources. Previous research focused on heuristic algorithms for the VN embedding problem, assuming that the InP network remains operational at all times. In this thesis, we remove that assumption by formulating the survivable virtual network embedding (SVNE) problem and developing baseline policy heuristics and an efficient hybrid policy heuristic to solve it. The hybrid policy is based on a fast re-routing strategy and utilizes a pre-reserved quota for backup on each physical link. Our evaluation results show that our proposed heuristic for SVNE outperforms baseline heuristics in terms of long-term business profit for the InP, acceptance ratio, bandwidth efficiency, and response time.
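
The fast re-routing idea behind the hybrid policy can be sketched as follows (the graph model and the shortest-path choice over pre-reserved backup quota are simplifications; the thesis's heuristic is more involved):

    # When a substrate link fails, re-route each affected virtual link over links that
    # still have pre-reserved backup quota; a plain Dijkstra search stands in for the
    # re-routing step here.
    import heapq

    def shortest_backup_path(backup_quota, src, dst):
        dist, prev, seen = {src: 0}, {}, set()
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u in seen:
                continue
            seen.add(u)
            if u == dst:
                path, node = [dst], dst
                while node != src:
                    node = prev[node]
                    path.append(node)
                return list(reversed(path))
            for (a, b), quota in backup_quota.items():
                if a == u and quota > 0 and d + 1 < dist.get(b, float("inf")):
                    dist[b], prev[b] = d + 1, u
                    heapq.heappush(heap, (d + 1, b))
        return None   # no survivable re-routing exists for this virtual link

    # Backup quota per directed substrate link; the direct A-B link has none left (failed).
    backup = {("A", "B"): 0, ("A", "C"): 5, ("C", "B"): 5}
    print(shortest_backup_path(backup, "A", "B"))   # -> ['A', 'C', 'B']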
|
98 |
Dynamic Storage Provisioning with SLO Guarantees. Gaharwar, Prashant. January 2010.
Static provisioning of storage resources may lead to over-provisioning, which increases costs, or under-provisioning, which runs the risk of violating application-level QoS goals. To address this, virtualization technologies have made automated provisioning of storage resources easier, allowing more effective management of these resources. In this work, we present an approach that suggests a series of dynamic provisioning decisions to meet the I/O demands of a time-varying workload while avoiding unnecessary costs and Service Level Objective (SLO) violations. We also conduct a case study to analyze the practical feasibility of dynamic provisioning and the associated performance effects in a virtualized environment, which forms the basis of our approach. Our approach is able to suggest the optimal provisioning decisions, for a given workload, that minimize cost and meet the SLO. We evaluate the approach using workload data obtained from real systems to demonstrate its cost-effectiveness, sensitivity to various system parameters, and runtime feasibility for use in real systems.
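
A minimal sketch of this kind of decision, assuming an illustrative set of provisioning options and a toy latency model (neither is taken from the thesis), picks the cheapest option per time window that keeps estimated latency within the SLO:

    # For each window of the workload, choose the cheapest option that meets the SLO.
    SLO_MS = 10.0

    options = [                 # (name, cost per window, IOPS capacity): illustrative values
        ("small",  1.0,  500),
        ("medium", 2.5, 2000),
        ("large",  6.0, 8000),
    ]

    def estimated_latency_ms(demand_iops, capacity_iops):
        """Toy model: latency blows up as utilization approaches 1."""
        utilization = demand_iops / capacity_iops
        return float("inf") if utilization >= 1.0 else 1.0 / (1.0 - utilization)

    def provision(demand_per_window):
        plan = []
        for demand in demand_per_window:
            feasible = [(cost, name) for name, cost, cap in options
                        if estimated_latency_ms(demand, cap) <= SLO_MS]
            plan.append(min(feasible)[1] if feasible else "violation")
        return plan

    print(provision([300, 1800, 7000]))   # -> ['small', 'medium', 'large']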
|
99 |
Storage Virtualization: A Case Study on Linux. Lin, Luen-Yung. 28 June 2007.
In the era of explosive information growth, the storage subsystem is becoming more and more important in both daily life and commercial markets. Because ever more data are recorded in digital form and stored on storage devices, an intelligent mechanism is required to manage digital data and storage devices more efficiently, rather than simply adding more storage equipment to a system. The concept of storage virtualization was introduced to solve this problem by aggregating all the physical devices into a single virtual storage device and hiding the complexity of the underlying block devices. Through this virtual layer, users can dynamically allocate and resize their virtual storage devices to satisfy their needs, and they can also use the methods provided by the virtual layer to organize data more efficiently.
Linux Logical Volume Manager 2 (LVM2) is an implementation of storage virtualization on the Linux operating system. It includes three components: the kernel-space device-mapper, the user-space device-mapper support library (libdevmapper), and the user-space LVM2 toolset. This thesis focuses on the kernel-space device-mapper, which provides the virtualization mechanism for the user-space logical volume manager. The thesis is organized as follows: (1) introduce novel technologies from recent years, (2) provide an in-depth document about the internals of the device-mapper, (3) attempt to optimize the mapping table algorithm, and (4) evaluate the performance of the device-mapper.
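
To give a concrete feel for the device-mapper's central data structure: each line of a mapping table has the form "<logical start sector> <length in sectors> <target type> <target arguments>", with 512-byte sectors. The helper below builds a "linear" table that concatenates two underlying devices into one logical device; the device paths and sizes are only examples.

    # Build dmsetup-style "linear" table lines that concatenate devices back to back.
    def linear_table(devices):
        """devices: list of (path, length_in_sectors) tuples."""
        lines, start = [], 0
        for path, length in devices:
            lines.append(f"{start} {length} linear {path} 0")   # map from offset 0 of each device
            start += length
        return lines

    for line in linear_table([("/dev/sdb1", 2097152), ("/dev/sdc1", 1048576)]):
        print(line)
    # 0 2097152 linear /dev/sdb1 0
    # 2097152 1048576 linear /dev/sdc1 0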
|