  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Ad hoc cloud computing

McGilvary, Gary Andrew January 2014
Commercial and private cloud providers offer virtualized resources via a set of co-located and dedicated hosts that are exclusively reserved for the purpose of offering a cloud service. While both cloud models appeal to the mass market, there are many cases where outsourcing to a remote platform or procuring an in-house infrastructure may not be ideal or even possible. To offer an attractive alternative, we introduce and develop an ad hoc cloud computing platform to transform spare resource capacity from an infrastructure owner’s locally available, but non-exclusive and unreliable, infrastructure into an overlay cloud platform. The foundation of the ad hoc cloud relies on transferring and instantiating lightweight virtual machines on demand on near-optimal hosts, while virtual machine checkpoints are distributed in a P2P fashion to other members of the ad hoc cloud. Virtual machines found to be non-operational are restored elsewhere, ensuring the continuity of cloud jobs. In this thesis we investigate the feasibility, reliability and performance of ad hoc cloud computing infrastructures. We first show that the combination of volunteer computing and virtualization forms the backbone of the ad hoc cloud. We outline the process of virtualizing the volunteer system BOINC to create V-BOINC. V-BOINC distributes virtual machines to volunteer hosts, allowing volunteer applications to be executed in a sandboxed environment and addressing many of the shortcomings of BOINC; it also provides the basis for an ad hoc cloud computing platform to be developed. We detail the challenges of transforming V-BOINC into an ad hoc cloud and outline the transformation process and integrated extensions. These include a BOINC job submission system, cloud job and virtual machine restoration schedulers, and a periodic P2P checkpoint distribution component. Furthermore, as current monitoring tools are unable to cope with the dynamic nature of ad hoc clouds, a dynamic infrastructure monitoring and management tool called the Cloudlet Control Monitoring System is developed and presented. We evaluate each of our individual contributions as well as the reliability, performance and overheads associated with an ad hoc cloud deployed on a realistically simulated unreliable infrastructure. We conclude that the ad hoc cloud is not only a feasible concept but also a viable computational alternative that offers high levels of reliability and at least reasonable performance, which at times may exceed that of a commercial cloud infrastructure.
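
The checkpointing and restoration machinery described above lends itself to a small illustration. The following is a minimal sketch (not code from the thesis) of the bookkeeping an ad hoc cloud might perform: checkpoints are replicated to a few peers, and a VM whose host fails is restored on the best remaining host. The host names, capacity scores and replication factor are illustrative assumptions.

    import random

    class AdHocCloud:
        """Illustrative sketch: replicate VM checkpoints to peers, restore failed VMs."""

        def __init__(self, hosts, replication=2):
            self.hosts = dict(hosts)        # host -> spare-capacity score (higher is better)
            self.replication = replication
            self.placement = {}             # vm_id -> host currently running the VM
            self.checkpoints = {}           # vm_id -> hosts holding the latest checkpoint

        def place(self, vm_id):
            host = max(self.hosts, key=self.hosts.get)   # "near-optimal" = most spare capacity
            self.placement[vm_id] = host
            return host

        def checkpoint(self, vm_id):
            # distribute the checkpoint P2P-style to a subset of other members
            peers = [h for h in self.hosts if h != self.placement[vm_id]]
            self.checkpoints[vm_id] = set(random.sample(peers, min(self.replication, len(peers))))

        def handle_failure(self, vm_id, failed_host):
            # restore the VM from a checkpoint holder onto the best remaining host
            self.hosts.pop(failed_host, None)
            holders = self.checkpoints.get(vm_id, set()) & set(self.hosts)
            source = max(holders, key=self.hosts.get) if holders else None
            return source, self.place(vm_id)

    cloud = AdHocCloud({"desk-01": 0.8, "desk-02": 0.5, "lab-03": 0.9, "lab-04": 0.4})
    cloud.place("job-42")
    cloud.checkpoint("job-42")
    print(cloud.handle_failure("job-42", cloud.placement["job-42"]))
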
72

An Enhanced MapReduce Workload Allocation Tool for Spot Market Resources

Hudzina, John Stephen 29 March 2015
When a cloud user allocates a cluster to execute a map-reduce workload, the user must determine the number and type of virtual machine instances to minimize the workload's financial cost. The cloud user may rent on-demand instances at a fixed price or spot instances at a variable price to execute the workload. Although the cloud user may bid on spot virtual machine instances at a reduced rate, the spot market auction may delay the workload's start or terminate the spot instances before the workload completes. The cloud user requires a forecast for the workload's financial cost and completion time to analyze the trade-offs between on-demand and spot instances. While existing estimation tools predict map-reduce workloads' completion times and costs, these tools do not provide spot instance estimates because a spot market auction determines the instance's start time and duration. The ephemeral spot instances impact execution time estimates because the spot market auction forces the map-reduce workloads to use different storage strategies to persist data after the spot instances terminate. The spot market also reduces the existing tools' completion time and cost estimate accuracy because the tool must factor in spot instance wait times and early terminations. This dissertation updates an existing tool to forecast a map-reduce workload's monetary cost and completion time based on historical spot market traces. The enhanced estimation tool includes three new enhancements over existing tools. First, the estimation tool models the impact of the new storage strategies on execution time. Second, the enhanced tool calculates the additional execution time caused by early spot instance termination. Finally, the enhanced tool predicts the workload's wait time and early-termination probability from historical traces. Based on two historical Amazon EC2 spot market traces, the enhancements reduce the average completion time prediction error by 96% and the average monetary cost prediction error by 99% over existing tools.
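
A hedged sketch of the third enhancement: estimating average wait time and early-termination probability from a historical spot price trace for a given bid. The hourly trace format, the prices and the bid are assumptions, not data or code from the dissertation.

    def spot_estimates(trace, bid, run_hours):
        """Estimate average wait time and early-termination probability for a bid.

        trace: historical hourly spot prices; bid: maximum price the user will pay;
        run_hours: hours the map-reduce workload needs once it starts.
        Illustrative only -- a real estimator would weight recent history and
        model per-zone price dynamics.
        """
        waits, terminations, starts = [], 0, 0
        for submit in range(len(trace) - run_hours):
            # wait until the spot price first drops to (or below) the bid
            start = next((t for t in range(submit, len(trace)) if trace[t] <= bid), None)
            if start is None:
                continue
            waits.append(start - submit)
            starts += 1
            # the instance is reclaimed if the price exceeds the bid mid-run
            if any(price > bid for price in trace[start:start + run_hours]):
                terminations += 1
        avg_wait = sum(waits) / len(waits) if waits else float("inf")
        term_prob = terminations / starts if starts else 1.0
        return avg_wait, term_prob

    prices = [0.12, 0.15, 0.09, 0.08, 0.20, 0.07, 0.07, 0.11, 0.06, 0.06]
    print(spot_estimates(prices, bid=0.10, run_hours=3))
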
73

Design and evaluation of virtual network migration mechanisms on shared substrate

Lo, Sau Man 07 January 2016
The Internet faces well-known challenges in realizing modifications to the core architecture. To help overcome these limitations, the use of network virtualization has been proposed. Network virtualization enables the deployment of novel network architectures and services on existing Internet infrastructure. Virtual networks run over physical networks and use Internet paths and protocols as essentially a link layer in the virtual network. Virtual networks can also share the resources in the physical substrate. Effective use of the underlying substrate network requires intelligent placement of virtual networks so that underlying resources are not over-subscribed. Because virtual networks can come and go over time, and underlying networks can experience their own dynamic changes, virtual networks need to be migrated (re-mapped to the physical network during active operation) to maintain good performance. While virtual network placement, and to a lesser extent migration, has been studied in the past, little attention has been devoted to designing, deploying, and evaluating migration mechanisms for virtual networks. In this dissertation, we design virtual network migration mechanisms for different substrate platforms and further design a system to mitigate the effects of virtual network migration. In particular, this dissertation makes the following contributions:
1. With the goal of minimizing the disruption during a virtual network migration, we design three algorithms for scheduling the sequence of virtual router moves that takes a virtual network from its original placement to its new placement.
2. We design and implement a controller-based architecture for virtual network migration on PlanetLab. This work explores the challenges in implementing virtual network migration on real infrastructure. Recommendations are given for infrastructure that supports virtual network migration.
3. We propose and implement a mechanism to mitigate the performance degradation resulting from virtual network migration through transport and application layer collaboration. We utilize a centralized controller to notify the end-systems or the gateways about the time of the virtual network migration, so that packet loss in the application traffic of the end-systems is prevented.
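
As an illustration of the scheduling problem in the first contribution, the following toy heuristic (not one of the dissertation's three algorithms) orders virtual router moves so that lightly loaded routers are re-mapped first; the router names and traffic figures are invented.

    def schedule_moves(traffic):
        """Order virtual router moves for a migration, lightest routers first.

        traffic: dict mapping virtual router -> carried traffic (Mbps).
        A real scheduler would also model topology, link re-mapping and the
        disruption experienced while the network is partially migrated.
        """
        return sorted(traffic, key=traffic.get)

    vn = {"vr-a": 240.0, "vr-b": 35.0, "vr-c": 910.0, "vr-d": 120.0}
    print(schedule_moves(vn))   # ['vr-b', 'vr-d', 'vr-a', 'vr-c']
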
74

Private environments for programs

Dunn, Alan Mark 25 September 2014
Commodity computer systems today do not provide system support for privacy. As a result, given the creation of new leak opportunities by ever-increasing software complexity, leaks of private data are inevitable. This thesis presents Suliban and Lacuna, two systems that allow programs to execute privately on commodity hardware. These systems demonstrate different points in a design space wherein stronger privacy guarantees can be traded for greater system usability. Suliban uses trusted computing technology to run computation-only code privately; we refer to this protection as "cloaking". In particular, Suliban can run malicious computations in a way that is resistant to analysis. Suliban uses the Trusted Platform Module and processor late launch to create an execution environment entirely disjoint from normal system software. Suliban uses a remote attestation protocol to demonstrate to a malware distribution platform that the environment has been correctly created before the environment is allowed to receive a malicious payload. Suliban's execution outside of standard system software allows it to resist attackers with privileged operating system access and those that can perform some forms of physical attack. However, Suliban cannot access system services, and requires extra case-by-case measures to get outside information like the date or host file contents. Nonetheless, we demonstrate that Suliban can run computations that would be useful in real malware. In building Suliban, we uncover which defenses are most effective against it and highlight current problems with the use of the Trusted Platform Module. Lacuna instead aims at achieving forensic deniability, which guarantees that an attacker that gains full control of a system after a computation has finished cannot learn answers to even binary questions (with a few exceptions) about the computation. This relaxation of Suliban's guarantees allows Lacuna to run full-featured programs concurrently with non-private programs on a system. Lacuna's key primitive is the ephemeral channel, which allows programs to use peripherals while maintaining forensic deniability. This thesis extends the original Lacuna work by investigating how Linux kernel statistics leak private session information and how to mitigate these leaks.
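
To make the attestation step concrete, here is a rough sketch in which plain SHA-256 hash chains stand in for TPM PCR measurements and signed quotes: the distribution platform releases the payload only if the reported measurement chain matches the expected environment. The component names and values are hypothetical and this is not Suliban's actual protocol.

    import hashlib

    def extend(pcr, measurement):
        # TPM-style extend: new_pcr = H(old_pcr || H(measurement))
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    def measure_chain(components):
        pcr = b"\x00" * 32
        for blob in components:
            pcr = extend(pcr, blob)
        return pcr

    # Verifier side: the expected measurement of a correctly created environment.
    expected = measure_chain([b"late-launch loader v1", b"cloaked runtime v1"])

    def release_payload(reported_pcr, payload):
        # hand the payload over only if the environment attests as expected
        if reported_pcr != expected:
            raise PermissionError("attestation failed; refusing to send payload")
        return payload

    reported = measure_chain([b"late-launch loader v1", b"cloaked runtime v1"])
    print(release_payload(reported, b"<payload>"))
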
75

Deobfuscation of Packed and Virtualization-Obfuscation Protected Binaries

Coogan, Kevin Patrick January 2011
Code obfuscation techniques are increasingly being used in software for such reasons as protecting trade secret algorithms from competitors and deterring license tampering by those wishing to use the software for free. However, these techniques have also grown in popularity in less legitimate areas, such as protecting malware from detection and reverse engineering. This work examines two such techniques - packing and virtualization-obfuscation - and presents new behavioral approaches to analysis that may be relevant to security analysts whose job it is to defend against malicious code. These approaches are robust against variations in obfuscation algorithms, such as changing encryption keys or virtual instruction byte code.

Packing refers to the process of encrypting or compressing an executable file. This process "scrambles" the bytes of the executable so that byte-signature matching algorithms commonly used by anti-virus programs are ineffective. Standard static analysis techniques are similarly ineffective since the actual byte code of the program is hidden until after the program is executed. Dynamic analysis approaches exist, but are vulnerable to dynamic defenses. We detail a static analysis technique that starts by identifying the code used to "unpack" the executable, then uses this unpacker to generate the unpacked code in a form suitable for static analysis. Results show we are able to correctly unpack several encrypted and compressed malware samples, while still handling several dynamic defenses.

Virtualization-obfuscation is a technique that translates the original program into virtual instructions, then builds a customized virtual machine for these instructions. As with packing, the byte-signature of the original program is destroyed. Furthermore, static analysis of the obfuscated program reveals only the structure of the virtual machine, and dynamic analysis produces a dynamic trace where original program instructions are intermixed with, and often indistinguishable from, virtual machine instructions. We present a dynamic analysis approach whereby all instructions that affect the external behavior of the program are identified, thus building an approximation of the original program that is observationally equivalent. We achieve good results at both identifying instructions from the original program and eliminating instructions known to be part of the virtual machine.
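
The behavior-based idea in the last paragraph can be illustrated with a toy version of the analysis: starting from instructions with external effects, walk the dynamic trace backwards and keep only the instructions whose results flow into them. The three-address trace format and the "out" operation are assumptions made for the sketch; the actual tool works on x86 traces, memory addresses and condition flags.

    def behavioural_slice(trace):
        """Keep trace entries that affect external behaviour.

        Each entry is (dest, op, sources); entries with op == "out" are treated
        as externally visible. Illustrative only.
        """
        relevant, kept = set(), []
        for dest, op, sources in reversed(trace):
            if op == "out" or dest in relevant:
                kept.append((dest, op, sources))
                relevant.discard(dest)
                relevant.update(sources)
        return list(reversed(kept))

    trace = [
        ("r1", "mov", ["c1"]),      # original-program computation
        ("r9", "add", ["vpc"]),     # VM dispatch bookkeeping (no external effect)
        ("r2", "add", ["r1", "c2"]),
        ("vpc", "mov", ["r9"]),
        (None, "out", ["r2"]),      # externally visible output
    ]
    print(behavioural_slice(trace))
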
76

Three Case Studies On Business Collaboration And Process Management

Fan, Shaokun January 2012
The importance of collaboration has been recognized for more than 2000 years. While recent improvements in technology create vast opportunities for collaboration, effective collaboration remains challenging as ad hoc teams work across time, geographical, language, and technical boundaries, and suffer from process inefficiency. My dissertation addresses part of these challenges by proposing theoretical frameworks for business collaboration and process management. The case study is used as the research strategy for this thesis, which consists of three studies. The first study proposes a process modeling framework to support efficient process model design via model transformation and validation. First, we divide process modeling into three layers and formally define the workflow models at each layer. Then, we develop a procedure for transforming a conceptual process model into its corresponding logical process model. Third, we create a validation procedure that can validate whether the derived logical model is consistent with its original conceptual model. The second study proposes a framework for analyzing the relationship between interaction processes and collaboration efficiency in software issue resolution in open source communities. We first develop an algorithm to identify frequent interaction process structures, referred to as interaction process patterns. Then, we assess the patterns' impact through a time-dependent Cox regression model. By applying the interaction process analysis framework to software issue resolution processes, we identify several patterns that are significantly correlated with collaboration efficiency. We further conduct a case study to validate the findings of pattern efficiency in software issue resolution. The third study addresses the issue of the suitability of virtual collaboration. Virtual collaboration seems to work well for some cases, but not for others. We define collaboration virtualization as the suitability of a task to be conducted virtually and propose a Collaboration Virtualization Theory (CVT) to explain collaboration virtualization. Three categories of constructs (i.e., task, technology, and team) that determine the suitability of collaboration virtualization are derived from a systematic literature review of related areas. In summary, this dissertation addresses challenges in collaboration and process management, and we believe that our research will have important theoretical and practical impacts on the development of collaboration management systems.
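
A toy sketch of the pattern-identification step in the second study: contiguous sub-sequences of interaction events are counted across issue-resolution histories, and the frequent ones are kept as candidate interaction process patterns. The event names, pattern length and support threshold are invented; the study itself uses richer process structures and a time-dependent Cox model to assess their impact on efficiency.

    from collections import Counter

    def frequent_patterns(sequences, length=2, min_support=2):
        """Count contiguous sub-sequences of interaction events, keep frequent ones."""
        counts = Counter()
        for seq in sequences:
            for i in range(len(seq) - length + 1):
                counts[tuple(seq[i:i + length])] += 1
        return {pattern: c for pattern, c in counts.items() if c >= min_support}

    issues = [
        ["report", "comment", "patch", "review", "close"],
        ["report", "comment", "comment", "patch", "close"],
        ["report", "patch", "review", "close"],
    ]
    print(frequent_patterns(issues))
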
77

Automated Orchestra for Industrial Automation on Virtualized Multicore Environment / Extending Real-Time component-based Framework to Virtual Nodes : Demonstration: Automated Orchestra real-time Application

Mahmud, Nesredin January 2013
Industrial control systems are applied in many areas, e.g., motion control for industrial robotics, process control of large plants such as in oil and gas, and large national power grids. Over the last decade, with the advancement and adoption of virtualization and multicore technology (e.g., virtual machine monitors, cloud computing, server virtualization, application virtualization), IT systems and automation industries have benefited from low investment, effective system management and high service availability. However, virtualization and multicore technologies have posed a serious challenge to real-time systems, violating the timeliness and predictability of real-time applications running on control systems. To address the challenge, we have extended a real-time component-based framework with virtual nodes and evaluated the framework in the context of a virtualized multicore environment. The evaluation is demonstrated by modeling and implementing an orchestra application with QoS for CPU, memory and network bandwidth. The orchestra application is a real-time, distributed application deployed on virtualized multicore PCs connected to speakers. The result shows undistorted orchestra performance played through speakers connected to the physical computer nodes. The contributions of the thesis are: 1) extending a real-time component-based framework, the Future Automation Software Architecture (FASA), with virtual nodes using Virtual Computation Resources (VCRs), and 2) the design and installation of a reusable test environment for the development, debugging and testing of real-time applications on a network of virtualized multicore nodes. / Vinnova project “AUTOSAR for Multi-Core in Automotive and Automation Industries”
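
As a small illustration of what per-virtual-node QoS budgets for CPU, memory and network bandwidth imply, the sketch below checks whether a set of virtual nodes fits on a physical node. The resource names, units and capacities are assumptions; the code is not part of FASA.

    def can_host(physical_node, virtual_nodes):
        """Check whether the summed QoS reservations of the virtual nodes fit on a PC."""
        for resource in ("cpu_cores", "memory_mb", "net_mbps"):
            if sum(vn[resource] for vn in virtual_nodes) > physical_node[resource]:
                return False
        return True

    pc = {"cpu_cores": 4, "memory_mb": 8192, "net_mbps": 1000}
    orchestra_nodes = [
        {"cpu_cores": 1, "memory_mb": 1024, "net_mbps": 200},   # audio stream renderer
        {"cpu_cores": 2, "memory_mb": 2048, "net_mbps": 300},   # conductor / scheduler
    ]
    print(can_host(pc, orchestra_nodes))   # True
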
78

Utilization of Dynamic Attributes in Resource Discovery for Network Virtualization

Amarasinghe, Heli 16 July 2012
The success of the internet over the last few decades has mainly depended on various infrastructure technologies to run distributed applications. Due to the diversification and multi-provider nature of the internet, radical architectural improvements which require mutual agreement between infrastructure providers have become highly impractical. This escalating resistance to further growth has created a rising demand for new approaches to address this challenge. Network virtualization is regarded as a prominent solution to surmount these limitations. It decouples the conventional Internet service provider’s role into the infrastructure provider (InP) and the service provider (SP), and introduces a third player known as the virtual network provider (VNP), which creates virtual networks (VNs). Resource discovery aims to assist the VNP in selecting the InP that has the best matching resources for a particular VN request. In the current literature, resource discovery focuses mainly on static attributes of network resources, on the grounds that using dynamic attributes imposes significant overhead on the network itself. In this thesis we propose a resource discovery approach that is capable of utilizing dynamic resource attributes to enhance resource discovery and increase the overall efficiency of VN creation. Since resource discovery techniques should be fast and cost-efficient enough not to impose any significant load, our proposed scheme calculates aggregation values of the dynamic attributes of the substrate resources. By comparing the aggregation values to the VN requirements, a set of potential InPs is selected. The potential InPs satisfy the basic VN embedding requirements. Moreover, we propose further enhancements to the dynamic attribute monitoring process using a vector-based aggregation approach.
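
A hedged sketch of aggregation-based filtering: each InP summarises the dynamic attributes of its substrate nodes into a few aggregate values, and the VNP keeps only the InPs whose aggregates cover the VN request. The attribute names and figures are illustrative assumptions, not the thesis's actual aggregation scheme.

    def aggregate(substrate_nodes):
        """Summarise a substrate's dynamic attributes into a few aggregate values."""
        return {
            "total_free_cpu": sum(n["free_cpu"] for n in substrate_nodes),
            "max_free_cpu": max(n["free_cpu"] for n in substrate_nodes),
            "total_free_bw": sum(n["free_bw"] for n in substrate_nodes),
        }

    def potential_inps(inps, vn_request):
        """Select InPs whose aggregates satisfy the basic VN embedding requirements."""
        selected = []
        for name, nodes in inps.items():
            agg = aggregate(nodes)
            if (agg["total_free_cpu"] >= vn_request["cpu"]
                    and agg["max_free_cpu"] >= vn_request["largest_vnode_cpu"]
                    and agg["total_free_bw"] >= vn_request["bandwidth"]):
                selected.append(name)
        return selected

    inps = {
        "inp-east": [{"free_cpu": 8, "free_bw": 400}, {"free_cpu": 4, "free_bw": 300}],
        "inp-west": [{"free_cpu": 2, "free_bw": 100}, {"free_cpu": 2, "free_bw": 150}],
    }
    print(potential_inps(inps, {"cpu": 10, "largest_vnode_cpu": 6, "bandwidth": 500}))  # ['inp-east']
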
79

Dynamic Cloud Resource Management : Scheduling, Migration and Server Disaggregation

Svärd, Petter January 2014
A key aspect of cloud computing is the promise of infinite, scalable resources, and that cloud services should scale up and down on demand. This thesis investigates methods for dynamic resource allocation and management of services in cloud datacenters, introducing new approaches as well as improvements to established technologies.

Virtualization is a key technology for cloud computing as it allows several operating system instances to run on the same Physical Machine, PM, and cloud services normally consist of a number of Virtual Machines, VMs, that are hosted on PMs. In this thesis, a novel virtualization approach is presented. Instead of running each PM in isolation, resources from multiple PMs in the datacenter are disaggregated and exposed to the VMs as pools of CPU, I/O and memory resources. VMs are provisioned by using the right amount of resources from each pool, thereby enabling both larger VMs than any single PM can host and VMs with tailor-made specifications for their application. Another important aspect of virtualization is live migration of VMs, which is the concept of moving VMs between PMs without interruption in service. Live migration allows for better PM utilization and is also useful for administrative purposes. In the thesis, two improvements to the standard live migration algorithm are presented, delta compression and page transfer reordering. The improvements can reduce migration downtime, i.e., the time that the VM is unavailable, as well as the total migration time. Postcopy migration, where the VM is resumed on the destination before the memory content is transferred, is also studied. Both userspace and in-kernel postcopy algorithms are evaluated in an in-depth study of live migration principles and performance.

Efficient mapping of VMs onto PMs is a key problem for cloud providers as PM utilization directly impacts revenue. When services are accepted into a datacenter, a decision is made on which PM should host the service VMs. This thesis presents a general approach for service scheduling that allows the same scheduling software to be used across multiple cloud architectures. A number of scheduling algorithms to optimize objectives like revenue or utilization are also studied. Finally, an approach for continuous datacenter consolidation is presented. As VM workloads fluctuate and server availability varies, any initial mapping is bound to become suboptimal over time. The continuous datacenter consolidation approach adjusts the VM-to-PM mapping during operation based on combinations of management actions, like suspending/resuming PMs, live migrating VMs, and suspending/resuming VMs. Proof-of-concept software and a set of algorithms that allow cloud providers to continuously optimize their server resources are presented in the thesis.
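
A small sketch of the delta-compression idea mentioned above for dirty memory pages during live migration: a re-sent page is XORed against the copy the destination already holds, and the mostly-zero delta compresses far better than the raw page. The page size and page contents are illustrative; this is not the thesis's implementation.

    import zlib

    PAGE_SIZE = 4096

    def delta_compress(old_page, new_page):
        """XOR a dirtied page against the previously sent version, then compress."""
        delta = bytes(a ^ b for a, b in zip(old_page, new_page))
        return zlib.compress(delta)

    def delta_decompress(old_page, compressed_delta):
        delta = zlib.decompress(compressed_delta)
        return bytes(a ^ b for a, b in zip(old_page, delta))

    old = bytes(PAGE_SIZE)                       # page as first transferred
    new = bytearray(old)
    new[100:108] = b"modified"                   # the guest dirtied a few bytes
    new = bytes(new)

    packet = delta_compress(old, new)
    print(len(packet), "bytes on the wire instead of", PAGE_SIZE)
    assert delta_decompress(old, packet) == new
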
80

Analysing Performance Effects of Deduplication on Virtual Machine Storage

Kauküla, Marcus January 2017
Virtualization is a widely used technology for running multiple operating systems on a single set of hardware. Virtual machines running the same operating system have been shown to contain a large amount of identical data, and in such cases deduplication has been shown to be very effective in eliminating duplicated data. This study aimed to investigate whether the storage savings are as large as shown in previous research, and whether there are any negative performance impacts when using deduplication. The selected performance variables are resource utilisation and disk performance. The selected deduplication implementations are SDFS and ZFS deduplication. Each is tested against its respective non-deduplicated file system, ext4 and ZFS. The results show that the storage savings are between 72.5 % and 73.65 %, while resource utilisation is generally higher when using deduplication. The results also show that deduplication using SDFS has a large overall negative impact on disk performance, while ZFS deduplication generally increases disk performance.
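
A toy sketch of block-level deduplication to illustrate where such savings come from: identical blocks across VM images hash to the same digest and are stored only once. The block size and synthetic images are assumptions; SDFS and ZFS use their own chunking and hashing.

    import hashlib

    BLOCK = 4096

    def dedup_savings(images):
        """Return (logical_bytes, physical_bytes) for a set of VM disk images.

        Partial trailing blocks are counted as full blocks on the physical side,
        which slightly overstates physical usage.
        """
        logical, unique_blocks = 0, set()
        for data in images:
            for i in range(0, len(data), BLOCK):
                block = data[i:i + BLOCK]
                logical += len(block)
                unique_blocks.add(hashlib.sha256(block).hexdigest())
        return logical, len(unique_blocks) * BLOCK

    # Two "VMs" sharing the same base OS blocks, each with a little unique data.
    base = bytes(range(256)) * 1024              # 256 KiB of shared content
    vm1 = base + b"vm1 log data" * 100
    vm2 = base + b"vm2 log data" * 100
    logical, physical = dedup_savings([vm1, vm2])
    print(f"saved {100 * (1 - physical / logical):.1f} % of storage")
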
