41

Protecting sensitive information from untrusted code

Roy, Indrajit 13 December 2010 (has links)
As computer systems support more aspects of modern life, from finance to health care, security is becoming increasingly important. However, building secure systems remains a challenge. Software continues to have security vulnerabilities due to reasons ranging from programmer errors to inadequate programming tools. Because of these vulnerabilities we need mechanisms that protect sensitive data even when the software is untrusted. This dissertation shows that secure and practical frameworks can be built for protecting users' data from untrusted applications in both desktop and cloud computing environments. Laminar is a new framework that secures desktop applications by enforcing policies written as information flow rules. Information flow control, a form of mandatory access control, enables programmers to express powerful, end-to-end security guarantees while reducing the amount of trusted code. Current programming abstractions and implementations of this model either compromise end-to-end security guarantees or require substantial modifications to applications, thus deterring adoption. Laminar addresses these shortcomings by exporting a single set of abstractions to control information flows through operating system resources and heap-allocated objects. Programmers express security policies by labeling data and represent access restrictions on code using a new abstraction called a security region. The Laminar programming model eases incremental deployment, limits dynamic security checks, and supports multithreaded programs that can access heterogeneously labeled data. In large-scale, distributed computations, safeguarding information requires solutions beyond mandatory access control. An important challenge is to ensure that the computation, including its output, does not leak sensitive information about the inputs. For untrusted code, access control cannot guarantee that the output does not leak information. This dissertation proposes Airavat, a MapReduce-based system which augments mandatory access control with differential privacy to guarantee security and privacy for distributed computations. Data providers control the security policy for their sensitive data, including a mathematical bound on potential privacy violations. Users without security expertise can perform computations on the data; Airavat prevents information leakage beyond the data provider's policy. Our prototype implementation of Airavat demonstrates that several data mining tasks can be performed in a privacy-preserving fashion with modest performance overheads.
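The mechanism Airavat layers on top of mandatory access control is differential privacy: aggregate outputs are perturbed with noise calibrated to the query's sensitivity and the data provider's privacy budget. Below is a minimal, illustrative sketch of a Laplace mechanism of that kind; it is not Airavat's code, and the class name, parameters, and example values are assumptions made for the example.

```java
import java.util.Random;

// Minimal sketch of the Laplace mechanism used to perturb an aggregate before
// release: noise scaled to sensitivity/epsilon bounds what the output can
// reveal about any single input record.
public class LaplaceMechanism {
    private final Random rng = new Random();

    // Draw Laplace(0, scale) noise via inverse-transform sampling.
    private double laplace(double scale) {
        double u = rng.nextDouble() - 0.5;                 // uniform on (-0.5, 0.5)
        return -scale * Math.signum(u) * Math.log(1 - 2 * Math.abs(u));
    }

    // Release a noisy version of a true aggregate (e.g. a MapReduce count).
    public double release(double trueValue, double sensitivity, double epsilon) {
        return trueValue + laplace(sensitivity / epsilon);
    }

    public static void main(String[] args) {
        LaplaceMechanism dp = new LaplaceMechanism();
        // A count query has sensitivity 1; epsilon is the provider's privacy budget.
        System.out.println(dp.release(1042.0, 1.0, 0.5));
    }
}
```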
42

Energy Management for Virtual Machines

Ye, Lei January 2013 (has links)
Current computing infrastructures use virtualization to increase resource utilization by deploying multiple virtual machines on the same hardware. Virtualization is particularly attractive for data centers, cloud computing, and hosting services; in these environments computer systems are typically configured with fast processors, large physical memory, and huge storage capable of supporting the concurrent execution of virtual machines. Consequently, this high demand for resources translates directly into higher energy consumption and monetary costs, and managing the energy consumption of virtual machines is becoming increasingly critical. However, virtual machines make energy management more challenging because a layer of virtualization separates the hardware from the guest operating system executing inside a virtual machine. This dissertation addresses the challenge of designing energy-efficient storage, memory, and buffer cache for virtual machines by exploring innovative mechanisms as well as existing approaches. We analyze the architecture of the open-source virtual machine platform Xen and address energy management in each subsystem. For the storage system, we study the I/O behavior of virtual machine systems, address the isolation between the virtual machine monitor and the virtual machines, and increase the burstiness of disk accesses to improve energy efficiency. In addition, we propose transparent energy management for main memory that works with any type of guest operating system running inside a virtual machine. Furthermore, we design a dedicated mechanism for the buffer cache, based on the fact that data-intensive applications rely heavily on a large buffer cache that occupies a majority of physical memory. We also propose a novel hybrid mechanism that improves energy efficiency for any memory access. All the mechanisms achieve significant energy savings while limiting the performance impact on virtual machines.
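To give a concrete sense of the disk-energy idea, the sketch below batches write requests so that they are issued in bursts, lengthening the idle intervals in which a device can stay in a low-power state. It only illustrates the batching pattern, not the dissertation's implementation; the class name and threshold are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of request batching to increase disk access burstiness:
// writes are buffered and flushed together, lengthening the idle periods in
// which the device can drop to a low-power state.
public class BurstyWriteBuffer {
    private final List<byte[]> pending = new ArrayList<>();
    private long bufferedBytes = 0;
    private final long flushThreshold;

    public BurstyWriteBuffer(long flushThresholdBytes) {
        this.flushThreshold = flushThresholdBytes;
    }

    public void write(byte[] block) {
        pending.add(block);
        bufferedBytes += block.length;
        if (bufferedBytes >= flushThreshold) {
            flush();
        }
    }

    // In a real system this would issue the queued I/O to the virtual disk;
    // here it only reports a single burst of activity.
    public void flush() {
        System.out.printf("flushing %d requests (%d bytes) in one burst%n",
                pending.size(), bufferedBytes);
        pending.clear();
        bufferedBytes = 0;
    }
}
```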
43

Bringing Visibility in the Clouds : using Security, Transparency and Assurance Services

Aslam, Mudassar January 2014 (has links)
The evolution of cloud computing allows the provisioning of IT resources over the Internet and promises many benefits for both service users and providers. Despite the various benefits offered by cloud-based services, many users hesitate to move their IT systems to the cloud, mainly due to the many new security problems introduced by cloud environments. In fact, the characteristics of cloud computing become the basis of new problems: support for third-party hosting means the user loses control of the hardware; on-demand availability requires reliance on complex and possibly insecure API interfaces; seamless scalability relies on the use of sub-providers; global access over the public Internet exposes services to a broader attack surface; and the use of shared resources for better resource utilization introduces isolation problems in a multi-tenant environment. These new security issues, in addition to the security challenges that exist in today's classic IT environments, are major reasons for the lack of user trust in cloud-based services, whether categorized as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), or Infrastructure-as-a-Service (IaaS). The focus of this thesis is on the IaaS model, which allows users to lease IT resources (e.g. computing power, memory, storage, etc.) from a public cloud to create Virtual Machine (VM) instances. The public cloud deployment model considered in this thesis exhibits the most elasticity (i.e. the degree of freedom to lease/release IT resources according to user demand) but is the least secure compared to private or hybrid models. As a result, public clouds are not trusted for many use cases that involve processing security-critical data such as health records, financial data, government data, etc. However, public IaaS clouds can be made trustworthy and viable for these use cases by providing better transparency and security assurance services for the user. In this thesis, we consider such assurance services and identify the security aspects that are important for making public clouds trustworthy. Based upon our findings, we propose solutions that promise to improve cloud transparency, thereby realizing trustworthy clouds. The solutions presented in this thesis mainly deal with the secure life cycle management of the user VM, including protocols and their implementation for secure VM launch and migration. The VM launch and migration solutions ensure that the user VM is always hosted on correct cloud platforms that are set up according to a profile fulfilling the use-case-relevant security requirements. This is done using an automated platform security audit and certification mechanism which combines trusted computing and security automation techniques in an integrated solution. In addition to providing assurance about the cloud platforms, we also propose a solution that provides assurance that user data is placed only in correct and approved geographical locations, which is critical from many legal perspectives and is usually an important user requirement. Finally, the assurance solutions provided in this thesis increase cloud transparency, which is important for user trust and for realizing trustworthy clouds.
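In spirit, the launch and migration protocols reduce to a policy check before a VM is allowed onto a host: the host's attested platform state and its geographic location must match a profile the user approved. The sketch below shows that check only, under the strong simplifying assumption that the measurement has already been verified (in a real deployment it would come from a signed TPM quote); all class names and values are illustrative.

```java
import java.util.Map;
import java.util.Set;

// Minimal sketch of a launch-time policy check: a VM is only started on a
// host whose attested measurement and geographic location match the profile
// the user approved. The measurement is treated as an opaque, already-verified
// string for the purpose of the example.
public class LaunchPolicy {
    private final Set<String> approvedMeasurements;
    private final Set<String> approvedLocations;

    public LaunchPolicy(Set<String> measurements, Set<String> locations) {
        this.approvedMeasurements = measurements;
        this.approvedLocations = locations;
    }

    public boolean mayLaunch(Map<String, String> hostReport) {
        return approvedMeasurements.contains(hostReport.getOrDefault("platformMeasurement", ""))
            && approvedLocations.contains(hostReport.getOrDefault("geoLocation", ""));
    }

    public static void main(String[] args) {
        LaunchPolicy policy = new LaunchPolicy(
                Set.of("sha256:ab12cd34"), Set.of("EU-Sweden"));
        boolean ok = policy.mayLaunch(Map.of(
                "platformMeasurement", "sha256:ab12cd34",
                "geoLocation", "EU-Sweden"));
        System.out.println(ok ? "launch allowed" : "launch refused");
    }
}
```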
44

Challenges and New Solutions for Live Migration of Virtual Machines in Cloud Computing Environments

Zhang, Fei 03 May 2018 (has links)
No description available.
45

Determining the Integrity of Applications and Operating Systems using Remote and Local Attesters

January 2011 (has links)
This research describes software-based remote attestation schemes for obtaining the integrity of an executing user application and the Operating System (OS) text section of an untrusted client platform. A trusted external entity issues a challenge to the client platform. The challenge is executable code which the client must execute, and the code generates results which are sent to the external entity. These results provide the external entity with assurance as to whether the client application and the OS are in pristine condition. This work also presents a technique for verifying that the application which was attested was not replaced by a different application after the attestation completed. The implementation of these three techniques was achieved entirely in software and is backward compatible with legacy machines on the Intel x86 architecture. This research also presents two approaches to incorporating a software-based "root of trust" using Virtual Machine Monitors (VMMs). The first approach determines the integrity of an executing guest OS from the host OS using the Linux Kernel-based Virtual Machine (KVM) and QEMU emulation software. The second approach implements a small VMM called MIvmm that can be utilized as a trusted codebase to build security applications such as those implemented in this research. MIvmm was conceptualized and implemented without using any existing codebase; its minimal size allows it to be trustworthy. Both VMM approaches leverage processor support for virtualization in the Intel x86 architecture.
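At its core, the attestation exchange is a challenge-response over the code region being checked: the verifier supplies a fresh nonce, the client hashes the nonce together with its text section, and the verifier compares the result against a digest computed over a known-good copy. The sketch below shows only that pattern and deliberately omits the timing and anti-tampering defenses a real software-based attester needs; names and inputs are placeholders.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal sketch of the challenge-response pattern behind software-based
// attestation: hash a fresh nonce together with the code region under test,
// and compare against a digest computed over a known-good copy.
public class AttestationSketch {
    static byte[] respond(byte[] nonce, byte[] codeRegion) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(nonce);          // freshness: prevents replay of old results
        md.update(codeRegion);     // the text section or application image being attested
        return md.digest();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] nonce = "challenge-1234".getBytes(StandardCharsets.UTF_8);
        byte[] pristine = "pretend this is the OS text section".getBytes(StandardCharsets.UTF_8);
        byte[] expected = respond(nonce, pristine);   // verifier's reference value
        byte[] reported = respond(nonce, pristine);   // client's reply
        System.out.println(MessageDigest.isEqual(expected, reported)
                ? "client is pristine" : "integrity violation");
    }
}
```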
46

A comparison of energy efficient adaptation algorithms in cloud data centers

Penumetsa, Swetha January 2018 (has links)
Context: In recent years, Cloud computing has gained wide attention in both industry and academia, since Cloud services offer a pay-per-use model and the immense growth of Cloud-based companies, together with the continuous expansion of their scale, has increased the need for reliable computing. However, the rise in Cloud computing users can have a negative impact on energy consumption, as Cloud data centers consume a huge amount of energy. In order to minimize the energy consumption of virtual data centers, researchers have proposed various energy-efficient resource management strategies. Dynamic Virtual Machine consolidation is one of the prominent techniques and an active research area in recent times; it is used to improve resource utilization and minimize the electric power consumption of a data center. This technique monitors data center utilization, identifies overloaded and underloaded hosts, migrates some or all Virtual Machines (VMs) to other suitable hosts using VM selection and VM placement, and switches underloaded hosts to sleep mode.

Objectives: The objective of this study is to define and implement new energy-aware heuristic algorithms that save energy in Cloud data centers, to identify the best-performing algorithm, and to compare the performance of the proposed heuristics with existing ones.

Methods: Initially, a literature review is conducted to identify and obtain knowledge about the adaptive heuristic algorithms previously proposed for energy-aware VM consolidation, and to find the metrics used to measure the performance of heuristic algorithms. Based on this knowledge, we propose 32 combinations of novel adaptive heuristics for host overload detection (8) and VM selection (4), one host underload detection algorithm, and two adaptive heuristics for VM placement, which together help minimize both the energy consumption and the overall Service Level Agreement (SLA) violation of a Cloud data center. Further, an experiment is conducted to measure the performance of all proposed heuristic algorithms. We use the CloudSim simulation toolkit for the modeling, simulation, and implementation of the proposed heuristics, and evaluate the proposed algorithms using real workload traces from PlanetLab VMs.

Results: The results were measured using the following metrics: energy consumption of the data center (power model), Performance Degradation due to Migration (PDM), SLA violation Time per Active Host (SLATAH), SLA Violation (SLAV = PDM × SLATAH), and the combined Energy consumption and SLA Violation (ESV). For each of the four categories of VM consolidation, we compare the performance of the proposed heuristics with each other and present the best heuristic algorithm in that category. We also compare the performance of the proposed heuristic algorithms with the existing heuristics identified in the literature and report how many of the newly proposed algorithms work more efficiently than the existing ones. This comparative analysis is done using the T-test and Cohen's d effect size. From the comparison of all proposed algorithms, we conclude that the Mean Absolute Deviation around the median (MADmedian) host overload detection algorithm combined with Maximum requested RAM VM selection (MaxR) and Modified First Fit Decreasing VM placement (MFFD), and the Standard Deviation (STD) host overload detection algorithm combined with Maximum requested RAM VM selection (MaxR) and Modified Last Fit Decreasing VM placement (MLFD), performed better than the other 31 combinations of proposed overload detection and VM selection heuristics with regard to Energy consumption and SLA Violation (ESV). Furthermore, from the comparative study between existing and proposed algorithms, 23 and 21 combinations of proposed host overload detection and VM selection algorithms, using MFFD and MLFD VM placement respectively, performed more efficiently than the existing (baseline) heuristic algorithms considered in this study.

Conclusions: This thesis presents novel heuristic algorithms that are useful for minimizing both energy consumption and SLA violation in virtual data centers. It presents 23 new combinations of host overload detection and VM selection algorithms using MFFD VM placement and 21 combinations using MLFD VM placement, which consume the minimum amount of energy with minimal SLA violation compared to the existing algorithms. It gives scope for future research on improving resource utilization and minimizing the electric power consumption of a data center; the work can be extended by implementing it on other Cloud software platforms and by developing more efficient algorithms for all four categories of VM consolidation.
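As an illustration of the kind of adaptive host overload detection evaluated here, the sketch below computes a dynamic upper utilization threshold as 1 − s · MAD over a host's recent CPU history, following the median-absolute-deviation pattern used by the baseline heuristics; the safety parameter, history values, and class name are assumptions made for the example, not the thesis's code.

```java
import java.util.Arrays;

// Minimal sketch of MAD-based host overload detection: the dynamic upper
// utilization threshold is 1 - s * MAD over the host's recent CPU history,
// so hosts with volatile load keep a larger safety margin.
public class MadOverloadDetector {
    private final double safetyParameter;   // "s" in the adaptive threshold

    public MadOverloadDetector(double safetyParameter) {
        this.safetyParameter = safetyParameter;
    }

    static double median(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return n % 2 == 1 ? sorted[n / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    static double medianAbsoluteDeviation(double[] history) {
        double med = median(history);
        double[] deviations = new double[history.length];
        for (int i = 0; i < history.length; i++) {
            deviations[i] = Math.abs(history[i] - med);
        }
        return median(deviations);
    }

    public boolean isOverloaded(double[] cpuHistory, double currentUtilization) {
        double threshold = 1.0 - safetyParameter * medianAbsoluteDeviation(cpuHistory);
        return currentUtilization >= threshold;
    }

    public static void main(String[] args) {
        MadOverloadDetector detector = new MadOverloadDetector(2.5);
        double[] history = {0.62, 0.70, 0.75, 0.68, 0.80, 0.73, 0.77};
        System.out.println(detector.isOverloaded(history, 0.92));
    }
}
```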
47

Efficient Scientific Workflow Scheduling in Cloud Environment

Cao, Fei 01 May 2014 (has links)
Cloud computing enables the delivery of remote computing, software, and storage services through web browsers following a pay-as-you-go model. In addition to successful commercial applications, many research efforts, including the DOE Magellan Cloud project, focus on discovering the opportunities and challenges arising from computing- and data-intensive scientific applications that are not well addressed by current supercomputers, Linux clusters, and Grid technologies. The elastic resource provisioning, non-interfering resource sharing, and flexible customized configuration provided by the Cloud infrastructure have shed light on the efficient execution of many scientific applications modeled as Directed Acyclic Graph (DAG) structured workflows, which capture the intricate dependencies among a large number of different processing tasks. Meanwhile, the Cloud environment poses various challenges. Cloud providers and Cloud users pursue different goals: providers aim to maximize profit by achieving higher resource utilization, while users want to minimize expenses while meeting their performance requirements. Moreover, due to expanding Cloud services and emerging newer technologies, the ever-increasing heterogeneity of the Cloud environment complicates the challenges for both parties. In this thesis, we address the workflow scheduling problem for different applications and various objectives. For batch applications, the increasing deployment of data centers and computer servers around the globe, escalated by higher electricity prices, has caused the energy cost of computing, communication, and cooling, together with the amount of CO2 emissions, to skyrocket. In order to maintain sustainable Cloud computing in the face of ever-increasing problem complexity and big data sizes in the coming decades, we design and develop an energy-aware scientific workflow scheduling algorithm to minimize energy consumption and CO2 emission while still satisfying Quality of Service (QoS) requirements such as the response time specified in the Service Level Agreement (SLA). Furthermore, the underlying Cloud hardware/Virtual Machine (VM) resource availability is time-dependent because of the dual operation modes, namely on-demand and reservation instances, at various Cloud data centers. We also apply techniques such as Dynamic Voltage and Frequency Scaling (DVFS) and a DNS scheme to further reduce energy consumption within acceptable performance bounds. Our multiple-step resource provisioning and allocation algorithm achieves the response time requirement in the forward task scheduling step and minimizes the VM overhead, for reduced energy consumption and a higher resource utilization rate, in the backward task scheduling step. We also evaluate the candidacy of multiple data centers from the energy and performance efficiency perspectives, since different data centers have different energy- and cost-related parameters. For streaming applications, we formulate scheduling problems with two different objectives: one is to maximize throughput under a budget constraint, while the other is to minimize execution cost under a minimum throughput constraint. Two algorithms, named Budget constrained RATE (B-RATE) and Budget constrained SWAP (B-SWAP), are designed for the first objective; another two algorithms, namely Throughput constrained RATE (TP-RATE) and Throughput constrained SWAP (TP-SWAP), are developed for the second.
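The scheduling problems above all build on placing DAG-structured tasks onto VMs in dependency order. The sketch below shows a plain list-scheduling loop of that kind: tasks are taken in topological order and each goes to the VM giving the earliest finish time. It is a generic illustration rather than the thesis's energy-aware algorithms (which would further weight the choice by power, DVFS state, and SLA); the task data and VM speeds are made up for the example.

```java
import java.util.*;

// Minimal sketch of list scheduling for a DAG workflow: tasks are taken in
// topological order and each is placed on the VM that yields the earliest
// finish time. Energy-aware variants would weight this choice by the VM's
// power profile; here only the structure of the scheduling loop is shown.
public class DagListScheduler {
    record Task(String name, double workMips, List<String> deps) {}

    public static Map<String, Double> schedule(List<Task> topoOrderedTasks, double[] vmMips) {
        double[] vmReady = new double[vmMips.length];          // when each VM becomes free
        Map<String, Double> finishTime = new HashMap<>();
        for (Task t : topoOrderedTasks) {
            double depsDone = t.deps().stream()
                    .mapToDouble(finishTime::get).max().orElse(0.0);
            int bestVm = 0;
            double bestFinish = Double.MAX_VALUE;
            for (int vm = 0; vm < vmMips.length; vm++) {
                double start = Math.max(vmReady[vm], depsDone);
                double finish = start + t.workMips() / vmMips[vm];
                if (finish < bestFinish) { bestFinish = finish; bestVm = vm; }
            }
            vmReady[bestVm] = bestFinish;
            finishTime.put(t.name(), bestFinish);
        }
        return finishTime;
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(
                new Task("A", 1000, List.of()),
                new Task("B", 2000, List.of("A")),
                new Task("C", 1500, List.of("A")),
                new Task("D", 500,  List.of("B", "C")));
        System.out.println(schedule(tasks, new double[]{500, 1000}));
    }
}
```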
48

Design and implementation of an energy-efficient resources manager in a virtualized system : case of a virtual machine scheduler

Mayap Kamga, Christine 26 June 2014 (has links)
Faced with the cost of managing IT infrastructures locally, many companies have decided to have them managed by external providers. These providers, known as IaaS (Infrastructure as a Service), make resources available to companies in the form of virtual machines (VMs). Companies therefore use only a limited number of virtual machines capable of satisfying their needs, which contributes to reducing the cost of the client companies' IT infrastructure. However, this outsourcing raises, for the provider, the problems of respecting the Service Level Agreement (SLA) subscribed to by the client and of optimizing the energy consumption of its infrastructure. Given the importance of these two challenges, much research work has focused on this problem. The proposed energy management solutions consist in varying the execution speed of the devices concerned. This speed variation is implemented either natively, because the device has integrated mechanisms, or by simulation through spatial or temporal batching of requests. However, while this speed variation optimizes the energy consumption of a device, it has the side effect of degrading the customers' service level. This leads to an incompatibility between speed variation policies for lowering energy consumption and respect of the service level agreement. In this thesis, we study the design and implementation of an energy-efficient resource manager in a virtualized system. Such a manager must allow a fair sharing of resources among virtual machines while ensuring optimal use of the energy consumed by those resources. We illustrate our study with a virtual machine scheduler. The speed variation policy is implemented by DVFS (Dynamic Voltage and Frequency Scaling), and the CPU capacity allocated to the virtual machines constitutes the service level agreement to respect.
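A minimal sketch of the scheduler/DVFS coupling described above is given below: the CPU frequency is dropped to the lowest step that still covers the CPU capacity promised to the co-located VMs, so unused headroom is not paid for in energy. The frequency steps, caps, and class name are illustrative assumptions, not values from the thesis.

```java
// Minimal sketch of coupling the VM scheduler with DVFS: pick the smallest
// available frequency step that still covers the CPU capacity guaranteed to
// the co-located VMs by their service level agreements.
public class DvfsGovernorSketch {
    private static final double[] FREQ_STEPS_GHZ = {1.2, 1.6, 2.0, 2.4, 2.8};
    private static final double MAX_FREQ_GHZ = 2.8;

    // cpuCaps[i] is the CPU share (0..1 of the full-speed core) guaranteed to VM i.
    public static double pickFrequency(double[] cpuCaps) {
        double required = 0.0;
        for (double cap : cpuCaps) required += cap;
        double neededGhz = Math.min(required, 1.0) * MAX_FREQ_GHZ;
        for (double step : FREQ_STEPS_GHZ) {
            if (step >= neededGhz) return step;   // lowest step that honors the caps
        }
        return MAX_FREQ_GHZ;
    }

    public static void main(String[] args) {
        // Two VMs capped at 30% and 25% of a core need only ~55% of full speed.
        System.out.println(pickFrequency(new double[]{0.30, 0.25}) + " GHz");
    }
}
```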
49

Comparative Study of Virtual Machine Software Packages with Real Operating System

Jayaraman, Arunkumar, Rayapudi, Pavankumar January 2012 (has links)
Virtualization allows computer users to utilize their resources more efficiently and effectively. The operating system that runs on top of the virtual machine or hypervisor is called the guest OS. A virtual machine is an abstraction of the real physical machine. The main aim of this thesis work was to analyze different kinds of virtualization software packages and to investigate their advantages and disadvantages. In addition, we analyzed the performance of the virtualization software packages against a real operating system in terms of web services. Web servers play an important role on the Internet, and the response time and throughput of a web server differ between virtualization software packages and between a real host and a virtual host. In this thesis, we analyzed web server performance on Linux and compared the throughput of three different virtualization software packages (VMware, QEMU, and VirtualBox). The performance results clearly indicate that real machine performance is better than that of the virtual machines, and that VMware performs better than the other virtualization software packages.
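The comparison rests on measuring response time and throughput of the same web server when it runs on the real host and inside each guest. The sketch below illustrates one simple way such a measurement could be scripted; the URL, request count, and single-threaded loop are assumptions for the example, and the thesis itself may have used a standard benchmarking tool instead.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch of a web server measurement: issue a fixed number of HTTP
// requests against the host under test (real machine or guest VM) and report
// average response time and throughput. The URL is a placeholder.
public class WebServerBenchmark {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/index.html"))
                .build();

        int requests = 100;
        long start = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            client.send(request, HttpResponse.BodyHandlers.discarding());
        }
        double elapsedSec = (System.nanoTime() - start) / 1e9;

        System.out.printf("avg response time: %.2f ms%n", 1000 * elapsedSec / requests);
        System.out.printf("throughput: %.1f requests/s%n", requests / elapsedSec);
    }
}
```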
50

LiveLab : What are the requirements of a Virtual Laboratory?

Moret, Denis January 2008 (has links)
This thesis presents the different ways that have been explored to improve and widen the interaction possibilities between LiveLab users. LiveLab is a virtual laboratory used at IDA (Institutionen för datavetenskap / The Department of Computer and Information Science) at Linköpings Universitet. This virtual laboratory is a virtual machine running a Kubuntu Linux distribution by means of VMware Player. It was created at the HCS (Human-Centered Systems) division of IDA. As it is intended to be used in more and more courses, LiveLab may lack certain functionalities. This thesis tries to show how the development of applications may fill this gap.
