51

A comparison of energy efficient adaptation algorithms in cloud data centers

Penumetsa, Swetha January 2018 (has links)
Context: In recent years, Cloud computing has gained wide attention in both industry and academia, as Cloud services offer a pay-per-use model and the need for reliability and large-scale computing has grown together with the continuous expansion of Cloud-based companies. However, the rise in the number of Cloud computing users can have a negative impact on energy consumption, as Cloud data centers consume a huge amount of the overall energy. In order to minimize the energy consumption of virtual data centers, researchers have proposed various energy-efficient resource management strategies. Dynamic Virtual Machine consolidation is one of the prominent techniques and an active research area in recent years, used to improve resource utilization and minimize the electric power consumption of a data center. This technique monitors data center utilization, identifies overloaded and underloaded hosts, migrates some or all Virtual Machines (VMs) to other suitable hosts using VM selection and VM placement, and switches underloaded hosts to sleep mode.   Objectives: The objective of this study is to define and implement new energy-aware heuristic algorithms that reduce the energy consumption of Cloud data centers, identify the best-performing algorithm, and compare the performance of the proposed heuristics with existing ones.   Methods: Initially, a literature review is conducted to identify the adaptive heuristic algorithms previously proposed for energy-aware VM consolidation and the metrics used to measure their performance. Based on this knowledge, we propose 32 combinations of novel adaptive heuristics for host overload detection (8) and VM selection (4), one host underload detection algorithm, and two adaptive heuristics for VM placement, which together help minimize both the energy consumption and the overall Service Level Agreement (SLA) violation of a Cloud data center. Further, an experiment is conducted to measure the performance of all proposed heuristic algorithms. We use the CloudSim simulation toolkit for the modeling, simulation, and implementation of the proposed heuristics, and we evaluate them using real workload traces of PlanetLab VMs.   Results: The results are measured using the following metrics: energy consumption of the data center (power model), Performance Degradation due to Migration (PDM), SLA violation Time per Active Host (SLATAH), SLA Violation (SLAV = PDM × SLATAH), and Energy consumption and SLA Violation (ESV). For all four categories of VM consolidation, we compare the performance of the proposed heuristics with each other and present the best heuristic algorithm in each category. We also compare the proposed heuristics with the existing heuristics identified in the literature and report how many of the newly proposed algorithms work more efficiently than the existing ones. This comparative analysis is done using the t-test and Cohen's d effect size.
From the comparison of all proposed algorithms, we conclude that the Mean Absolute Deviation around median (MADmedian) host overload detection algorithm combined with Maximum requested RAM VM selection (MaxR) and Modified First Fit Decreasing VM placement (MFFD), and the Standard Deviation (STD) host overload detection algorithm combined with Maximum requested RAM VM selection (MaxR) and Modified Last Fit Decreasing VM placement (MLFD), performed better than the other 31 combinations of proposed overload detection and VM selection heuristics with regard to Energy consumption and Service Level Agreement Violation (ESV). Furthermore, in the comparative study between existing and proposed algorithms, 23 and 21 combinations of proposed host overload detection and VM selection algorithms, using the MFFD and MLFD VM placements respectively, performed more efficiently than the existing (baseline) heuristic algorithms considered for this study.   Conclusions: This thesis presents novel heuristic algorithms that help minimize both energy consumption and SLA violation in virtual data centers. It presents 23 new combinations of proposed host overload detection and VM selection algorithms using MFFD VM placement and 21 combinations using MLFD VM placement, which consume less energy with less SLA violation than the existing algorithms. It offers scope for future research on improving resource utilization and minimizing the electric power consumption of a data center. The study can be extended further by implementing the work on other Cloud software platforms and by developing even more efficient algorithms for all four categories of VM consolidation.
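To make the overload test and the composite metrics above concrete, the following minimal sketch shows one common form of a MAD-around-median overload threshold and the SLAV/ESV calculations. It is not the thesis' CloudSim (Java) code; the function names, the safety parameter and the sample values are invented for illustration.

```c
/* Illustrative sketch only: a MAD-around-median overload threshold and the
 * SLAV/ESV metrics described above. Names, the safety parameter s and the
 * sample values are assumptions, not the thesis implementation. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Median of v[0..n-1]; sorts its input in place. */
static double median(double *v, size_t n)
{
    qsort(v, n, sizeof *v, cmp_double);
    return n % 2 ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

/* Flag a host as overloaded when its current utilization exceeds
 * 1 - s * MAD(history), with MAD taken around the median. */
static int host_overloaded(double *history, size_t n, double current_util, double s)
{
    double *dev = malloc(n * sizeof *dev);
    double med = median(history, n);
    for (size_t i = 0; i < n; i++)
        dev[i] = fabs(history[i] - med);
    double mad = median(dev, n);
    free(dev);
    return current_util > 1.0 - s * mad;
}

int main(void)
{
    double hist[] = { 0.62, 0.70, 0.74, 0.68, 0.81, 0.77, 0.73, 0.69, 0.80, 0.76 };
    printf("overloaded: %d\n",
           host_overloaded(hist, sizeof hist / sizeof *hist, 0.92, 2.5));

    /* SLAV = SLATAH * PDM, ESV = energy * SLAV (values are made up). */
    double slatah = 0.05, pdm = 0.001, energy_kwh = 150.0;
    double slav = slatah * pdm;
    printf("SLAV = %.6f, ESV = %.6f\n", slav, energy_kwh * slav);
    return 0;
}
```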
52

Efficient Scientific Workflow Scheduling in Cloud Environment

Cao, Fei 01 May 2014 (has links)
Cloud computing enables the delivery of remote computing, software, and storage services through web browsers following a pay-as-you-go model. In addition to successful commercial applications, many research efforts, including the DOE Magellan Cloud project, focus on discovering the opportunities and challenges arising from computing- and data-intensive scientific applications that are not well addressed by current supercomputers, Linux clusters, and Grid technologies. The elastic resource provisioning, non-interfering resource sharing, and flexible customized configuration provided by Cloud infrastructure have shed light on the efficient execution of many scientific applications modeled as Directed Acyclic Graph (DAG) structured workflows, which capture the intricate dependencies among a large number of different processing tasks. Meanwhile, the Cloud environment poses various challenges. Cloud providers and Cloud users pursue different goals: providers aim to maximize profit by achieving higher resource utilization, while users want to minimize expenses while meeting their performance requirements. Moreover, due to expanding Cloud services and emerging newer technologies, the ever-increasing heterogeneity of the Cloud environment complicates the challenges for both parties. In this thesis, we address the workflow scheduling problem for different applications and with various objectives. For batch applications, the increasing deployment of data centers and servers around the globe, together with rising electricity prices, has caused the energy cost of computing, communication, and cooling, along with the associated CO2 emissions, to skyrocket. In order to keep Cloud computing sustainable in the face of ever-increasing problem complexity and data size in the coming decades, we design and develop an energy-aware scientific workflow scheduling algorithm that minimizes energy consumption and CO2 emission while still satisfying Quality of Service (QoS) requirements such as the response time specified in the Service Level Agreement (SLA). Furthermore, the availability of the underlying Cloud hardware/Virtual Machine (VM) resources is time-dependent because of the dual operation modes, namely on-demand and reservation instances, at various Cloud data centers. We also apply techniques such as Dynamic Voltage and Frequency Scaling (DVFS) and a DNS scheme to further reduce energy consumption within acceptable performance bounds. Our multiple-step resource provisioning and allocation algorithm meets the response time requirement in the forward task scheduling step and minimizes the VM overhead, for reduced energy consumption and a higher resource utilization rate, in the backward task scheduling step. We also evaluate the candidacy of multiple data centers from the energy and performance efficiency perspectives, as different data centers have different energy- and cost-related parameters. For streaming applications, we formulate scheduling problems with two different objectives: one is to maximize throughput under a budget constraint, the other to minimize execution cost under a minimum throughput constraint. Two algorithms, Budget-constrained RATE (B-RATE) and Budget-constrained SWAP (B-SWAP), are designed under the first objective; another two, Throughput-constrained RATE (TP-RATE) and Throughput-constrained SWAP (TP-SWAP), are developed under the second objective.
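As a rough illustration of the DAG-structured workflows mentioned above, the sketch below encodes a five-task workflow as a dependency matrix and derives a topological order in which a forward scheduling pass could visit the tasks. The task graph and names are invented for illustration and are not taken from the dissertation.

```c
/* Minimal sketch: a DAG-structured workflow and a topological order in which
 * a forward scheduling pass could visit tasks. Illustrative only. */
#include <stdio.h>

#define NTASKS 5

/* adjacency: dep[i][j] = 1 means task i must finish before task j starts */
static const int dep[NTASKS][NTASKS] = {
    /* 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3, 3 -> 4 */
    {0,1,1,0,0},
    {0,0,0,1,0},
    {0,0,0,1,0},
    {0,0,0,0,1},
    {0,0,0,0,0},
};

int main(void)
{
    int indegree[NTASKS] = {0}, done[NTASKS] = {0}, order[NTASKS], n = 0;

    for (int i = 0; i < NTASKS; i++)
        for (int j = 0; j < NTASKS; j++)
            indegree[j] += dep[i][j];

    /* Kahn's algorithm: repeatedly emit a task with no unfinished predecessors. */
    while (n < NTASKS) {
        for (int i = 0; i < NTASKS; i++) {
            if (!done[i] && indegree[i] == 0) {
                order[n++] = i;
                done[i] = 1;
                for (int j = 0; j < NTASKS; j++)
                    indegree[j] -= dep[i][j];
                break;
            }
        }
    }

    printf("schedulable order:");
    for (int i = 0; i < NTASKS; i++)
        printf(" T%d", order[i]);
    printf("\n");
    return 0;
}
```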
53

Gestion de ressources de façon "éco-énergétique" dans un système virtualisé : application à l'ordonnanceur de machines virtuelles / Design and implementation of an energy-efficient resources manager in a virtualized system : case of virtual machines scheduler

Mayap Kamga, Christine 26 June 2014 (has links)
Considering the cost of managing computing infrastructures locally, many companies have decided to have them managed by external providers. These providers, known as IaaS (Infrastructure as a Service) providers, make resources available to companies in the form of virtual machines (VMs). Companies thus use only the limited number of virtual machines needed to satisfy their requirements, which contributes to lowering the cost of the customer companies' IT infrastructure. However, this outsourcing raises, for the provider, the problems of respecting the Service Level Agreement (SLA) subscribed to by the customer and of optimizing the energy consumption of its infrastructure. Given the importance of these two challenges, many research works have focused on this problem. The proposed energy management solutions consist in varying the execution speed of the devices concerned. This variation of speed is implemented either natively, because the device has built-in mechanisms, or by simulation through spatial or temporal batching of requests. However, while this speed variation optimizes the energy consumption of a device, it has the side effect of degrading the customers' service level. This creates an incompatibility between speed-scaling policies for lowering energy consumption and respect of the service level agreement. In this thesis, we study the design and implementation of an energy-efficient resource manager in a virtualized system. Such a manager must allow a fair sharing of resources among VMs while ensuring optimal use of the energy consumed by those resources. We illustrate our study with a virtual machine scheduler: the speed-scaling policy is implemented by DVFS (Dynamic Voltage and Frequency Scaling), and the CPU capacity allocated to the VMs constitutes the SLA to respect.
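As a rough illustration of the idea of coupling DVFS to the CPU capacity promised to virtual machines (not the thesis implementation), the sketch below picks the lowest CPU frequency that still covers the aggregate VM CPU caps on a core and applies it through the Linux cpufreq "userspace" governor; the caps, the frequency table and the sysfs usage are assumptions for illustration and require root privileges and that governor to be active.

```c
/* Rough illustration of the idea only: choose the lowest CPU frequency that
 * still covers the CPU capacity allocated to the VMs sharing a core, then
 * apply it through the Linux cpufreq "userspace" governor. */
#include <stdio.h>

#define CPUFREQ "/sys/devices/system/cpu/cpu0/cpufreq/"

int main(void)
{
    /* CPU caps (fractions of one core) promised to the VMs on this core. */
    double vm_caps[] = { 0.25, 0.20, 0.15 };
    double needed = 0.0;
    for (unsigned i = 0; i < sizeof vm_caps / sizeof *vm_caps; i++)
        needed += vm_caps[i];

    /* Available frequencies in kHz, lowest first (normally read from
     * scaling_available_frequencies; hard-coded to keep the sketch short). */
    long freqs[] = { 800000, 1200000, 1600000, 2000000 };
    long fmax = freqs[sizeof freqs / sizeof *freqs - 1];
    long target = fmax;

    for (unsigned i = 0; i < sizeof freqs / sizeof *freqs; i++) {
        if ((double)freqs[i] / fmax >= needed) {  /* enough capacity here */
            target = freqs[i];
            break;
        }
    }

    FILE *f = fopen(CPUFREQ "scaling_setspeed", "w");
    if (!f) {
        perror("scaling_setspeed");
        return 1;
    }
    fprintf(f, "%ld\n", target);
    fclose(f);
    printf("requested %.2f core(s) -> set cpu0 to %ld kHz\n", needed, target);
    return 0;
}
```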
54

Podpora vizuálního programování mobilního robota / Visual Programming Backend for a Mobile Robot

Staněk, Ondřej January 2017 (has links)
Title: Visual Programming Backend for a Mobile Robot Author: Bc. Ondřej Staněk Department: The Department of Software Engineering Supervisor: RNDr. David Obdržálek, Ph.D. Supervisor's e-mail address: David.Obdrzalek@mff.cuni.cz Abstract: In this work, the author designs and implements a solution for programming small mobile robots using a visual programming language. A suitable visual programming front-end is selected, and back-end layers are created that allow execution of the program on a mobile robot. The author designs and implements a virtual machine that runs alongside the original robot firmware on an 8-bit microcontroller with limited resources. A code generator layer compiles the visual representation of the program into a sequence of bytecode instructions that is interpreted on board the mobile robot. The solution supports typical features of procedural programming languages, in particular variables, expressions, conditional statements, loops, static arrays, function calls, and recursion. Emphasis is placed on the robustness of the implementation; to verify and maintain code quality, methods of automated software testing are used. Keywords: visual programming language, virtual machine, mobile robot, Blockly
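For readers unfamiliar with the approach, the following minimal sketch shows the kind of bytecode interpreter loop such a back-end relies on: a small stack machine stepping through compiled instructions. The opcode set and encoding here are invented for illustration and are not the instruction set used in the thesis.

```c
/* Minimal sketch of a stack-based bytecode interpreter of the kind described
 * above; opcode set and encoding are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH, OP_DUP, OP_ADD, OP_JZ, OP_JMP, OP_PRINT, OP_HALT };

static void run(const uint8_t *code)
{
    int16_t stack[32];
    int sp = 0, pc = 0;

    for (;;) {
        uint8_t op = code[pc++];
        switch (op) {
        case OP_PUSH:                   /* next byte: signed 8-bit immediate */
            stack[sp++] = (int8_t)code[pc++];
            break;
        case OP_DUP:                    /* duplicate the top of the stack */
            stack[sp] = stack[sp - 1];
            sp++;
            break;
        case OP_ADD:
            sp--;
            stack[sp - 1] = (int16_t)(stack[sp - 1] + stack[sp]);
            break;
        case OP_JZ: {                   /* next byte: absolute jump target */
            uint8_t target = code[pc++];
            if (stack[--sp] == 0)
                pc = target;
            break;
        }
        case OP_JMP:
            pc = code[pc];
            break;
        case OP_PRINT:                  /* print top of stack, keep it */
            printf("%d\n", stack[sp - 1]);
            break;
        default:                        /* OP_HALT or unknown opcode */
            return;
        }
    }
}

int main(void)
{
    /* Counts down from 3, printing 3, 2, 1. */
    const uint8_t prog[] = {
        OP_PUSH, 3,
        OP_PRINT,                       /* offset 2: loop body            */
        OP_PUSH, 0xFF, OP_ADD,          /* add -1 to the counter          */
        OP_DUP, OP_JZ, 11,              /* exit the loop at zero          */
        OP_JMP, 2,
        OP_HALT                         /* offset 11                      */
    };
    run(prog);
    return 0;
}
```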
55

Comparative Study of Virtual Machine Software Packages with Real Operating System

Jayaraman, Arunkumar, Rayapudi, Pavankumar January 2012 (has links)
Virtualization allows computer users to utilize their resources more efficiently and effectively. The operating system that runs on top of the virtual machine or hypervisor is called the guest OS, and the virtual machine is an abstraction of the real physical machine. The main aim of this thesis was to analyze different virtualization software packages and to investigate their advantages and disadvantages. In addition, we analyzed the performance of the virtualization software packages against a real operating system in terms of web services. Web servers play an important role on the Internet, and the response time and throughput of a web server differ between virtualization software packages and between a real host and a virtual host. In this thesis, we analyzed web server performance on Linux and compared the throughput of three virtualization software packages (VMware, QEMU, and VirtualBox). The performance results clearly indicate that the real machine performs better than the virtual machines, and that VMware performs better than the other virtualization software packages.
56

LiveLab : What are the requirements of a Virtual Laboratory?

Moret, Denis January 2008 (has links)
This thesis presents the different ways that have been pursued to improve and widen the interaction possibilities between LiveLab users. LiveLab is a virtual laboratory used at IDA (Institutionen för datavetenskap / The Department of Computer and Information Sciences) at Linköpings Universitet. This virtual laboratory is a virtual machine running a Kubuntu Linux distribution under VMware Player, and it was created at the HCS (Human-Centered Systems) division of IDA. As LiveLab is intended to be used in more and more courses, it may lack certain functionalities; this thesis tries to show how the development of applications may fill this gap.
57

A framework to migrate and replicate VMware Virtual Machines to Amazon Elastic Compute Cloud : Performance comparison between on premise and the migrated Virtual Machine

Bachu, Rajesh January 2015 (has links)
Context: Cloud computing is the new trend in the IT industry. Traditionally, obtaining servers was quite time-consuming for companies: the whole process of researching what kind of hardware to buy, getting budget approval, purchasing the hardware, and getting access to the servers could take weeks or months. In order to save time and reduce expenses, most companies are moving towards the cloud. One of the well-known cloud providers is Amazon Elastic Compute Cloud (EC2). Amazon EC2 makes it easy for companies to obtain virtual servers (known as compute instances) in the cloud quickly and inexpensively. Another advantage of Amazon EC2 is its flexibility: companies can import/export the Virtual Machines (VMs) they have built, which meet their IT security, configuration, management, and compliance requirements, into Amazon EC2. Objectives: In this thesis, we investigate importing a VM running on VMware into Amazon EC2. In addition, we make a performance comparison between a VM running on VMware and a VM with the same image running on Amazon EC2. Methods: A case study was conducted to select a persistent method to migrate VMware VMs to Amazon EC2. In addition, an experiment was conducted to measure the performance of a Virtual Machine running on VMware and compare it with the same Virtual Machine running on EC2. We measured performance in terms of CPU and memory utilization as well as disk read/write speed, using well-known open-source benchmarks from the Phoronix Test Suite (PTS). Results: Importing VM snapshots (in VMDK, VHD, and RAW formats) to EC2 was investigated using three methods provided by AWS. Performance was compared by running each benchmark 25 times on each Virtual Machine. Conclusions: Importing the VM to EC2 was successful only with the RAW format, and exact replication was not achieved because AWS installs some software and drivers while importing the VM to EC2. The migrated EC2 VM performs better than the on-premise VMware VM in terms of CPU and memory utilization and disk read/write speed.
58

Link Extraction for Crawling Flash on the Web

Antelius, Daniel January 2015 (has links)
The set of web pages not reachable using conventional web search engines is usually called the hidden or deep web. One client-side hurdle for crawling the hidden web is Flash files. This thesis presents a tool for extracting links from Flash files up to version 8 in order to enable web crawling. The files are both parsed and selectively interpreted to extract links. The purpose of the interpretation is to simulate the normal execution of Flash in the Flash runtime of a web browser. The interpretation is a low-level approach that allows the extraction to occur offline and without involving automation of web browsers. A virtual machine is implemented, and a set of limitations is chosen to reduce development time and maximize the coverage of interpreted bytecode. Out of a test set of about 3500 randomly sampled Flash files, the link extractor found links in 34% of the files. The resulting estimated improvement in web search engine coverage is almost 10%.
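As a simplified illustration of the parsing half of such an approach (not the thesis tool itself), the sketch below scans a buffer of ActionScript 2 bytecode for ActionGetURL records and prints the embedded URL. The opcode values and record layout are given as recalled from the SWF specification and should be verified against it; links built dynamically (e.g. via ActionGetURL2) only become visible through interpretation, which is what motivates the virtual machine described above.

```c
/* Illustrative sketch only: scan a buffer of (already decompressed and located)
 * ActionScript 2 bytecode for ActionGetURL records and print the embedded URL.
 * Assumes the SWF layout where action codes >= 0x80 carry a 16-bit little-endian
 * length, and that ActionGetURL (0x83) stores a NUL-terminated URL followed by a
 * NUL-terminated target — verify against the SWF specification. */
#include <stdio.h>
#include <stdint.h>

static void extract_geturl(const uint8_t *code, size_t len)
{
    size_t i = 0;
    while (i < len && code[i] != 0x00) {             /* 0x00 = ActionEnd */
        uint8_t op = code[i++];
        if (op < 0x80)                               /* short action, no payload */
            continue;
        if (i + 2 > len)
            break;
        uint16_t plen = (uint16_t)(code[i] | (code[i + 1] << 8));
        i += 2;
        if (i + plen > len)
            break;
        if (op == 0x83)                              /* ActionGetURL */
            printf("link: %s\n", (const char *)&code[i]);
        i += plen;                                   /* skip payload */
    }
}

int main(void)
{
    /* Hand-built example: ActionGetURL "http://example.com", target "_self". */
    static const uint8_t code[] = {
        0x83, 0x19, 0x00,
        'h','t','t','p',':','/','/','e','x','a','m','p','l','e','.','c','o','m',0,
        '_','s','e','l','f',0,
        0x00                                         /* ActionEnd */
    };
    extract_geturl(code, sizeof code);
    return 0;
}
```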
59

Aplikační rozhraní pro administraci projektu Libvirt / Libvirt Administration API

Škultéty, Erik January 2016 (has links)
This thesis deals with virtualization, specifically with the libvirt virtualization library, whose goal is to manage virtual machines and support various types of hypervisors and virtualization solutions in a uniform way that is transparent to the user. A substantial part of libvirt's functionality is implemented in the background in the form of the libvirtd daemon. Although the libvirtd daemon provides services for managing virtual machines, it does not allow management of itself beyond changing parameter values in a configuration file. The standard way to change a setting is therefore to edit the configuration file and restart the daemon. Since this approach only changes the persistent configuration, and restarting the daemon may not always be the optimal solution, the idea of an administration interface for the libvirt library arose, which would allow the daemon to be managed at runtime. The main contribution of this thesis is the design and description of the implementation of an application interface for administering the libvirt library. Specifically, this work covers interfaces for configuring the number of worker threads, setting the level and filtering parameters of the logging subsystem, and managing connected clients on the libvirtd daemon side.
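As a hedged sketch of how a client might use such an administration interface at runtime, the example below reconfigures the libvirtd logging outputs without a restart. The function names, signatures and the logging-output string are given as the libvirt-admin API is recalled here and should be verified against <libvirt/libvirt-admin.h> and the libvirt documentation before relying on them.

```c
/* Hedged sketch: retune libvirtd logging at runtime via the admin API.
 * Check the function names and signatures against your libvirt-admin headers.
 * Build (assumed): cc admin.c $(pkg-config --cflags --libs libvirt-admin) */
#include <stdio.h>
#include <libvirt/libvirt-admin.h>

int main(void)
{
    /* NULL connects to the default local admin socket of libvirtd. */
    virAdmConnectPtr conn = virAdmConnectOpen(NULL, 0);
    if (!conn) {
        fprintf(stderr, "failed to open admin connection to libvirtd\n");
        return 1;
    }

    /* Redirect daemon logging to a file at "info" level without a restart. */
    if (virAdmConnectSetLoggingOutputs(conn, "2:file:/var/log/libvirtd.log", 0) < 0)
        fprintf(stderr, "failed to change logging outputs\n");

    virAdmConnectClose(conn);
    return 0;
}
```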
60

VM Instruction Decoding Using C Unions in Stack and Register Architectures

Strömberg Skott, Kasper January 2022 (has links)
The architecture of virtual machine (VM) interpreters has long been a subject of research and discussion. The initial trend of stack-based interpreters was shortly thereafter challenged by research showing the performance advantages of virtual register machines. Despite this, many VM interpreters are still stack-based, with some notable exceptions such as Lua, the Android Runtime, and its predecessor Dalvik. A register architecture is usually associated with greater overhead from instruction dispatch and, to some extent, instruction decoding. By designing and implementing a novel technique that replaces the conventional way of decoding instructions, this thesis attempts to reduce that overhead. More specifically, a VM interpreter is developed as an artifact of design-science research, and the novel technique is then evaluated through benchmarking in various configurations. As the results indicate, however, using this technique showed no performance advantage, as the resulting machine instructions are exactly the same after compiler optimization. This suggests that there is no apparent decoding overhead to begin with. As a result, register-based VMs seem not to suffer from any dispatch-related overhead, other than the fact that there are more operands per instruction to access. Source code is available on GitHub, at https://github.com/kaspr61/RackVM.
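To illustrate the decoding technique named in the title, the sketch below overlays a raw 32-bit instruction word with per-format bit-field views through a C union, so reading an operand is a plain member access rather than explicit shifting and masking. The field layout and opcodes are invented for illustration and are not the RackVM encoding.

```c
/* Minimal sketch of union-based instruction decoding: the same 32-bit word is
 * viewed through per-format structs instead of being shifted and masked.
 * Bit-field ordering is implementation-defined, which is part of why (as the
 * thesis finds) compilers tend to emit the same code for both approaches. */
#include <stdint.h>
#include <stdio.h>

typedef union {
    uint32_t raw;
    struct {                /* register format: op dst src1 src2 */
        uint32_t op   : 8;
        uint32_t dst  : 8;
        uint32_t src1 : 8;
        uint32_t src2 : 8;
    } r;
    struct {                /* immediate format: op dst imm16 */
        uint32_t op  : 8;
        uint32_t dst : 8;
        uint32_t imm : 16;
    } i;
} Instr;

enum { OP_HALT = 0, OP_ADD = 1, OP_LOADI = 2 };

int main(void)
{
    int32_t regs[16] = {0};
    const Instr prog[] = {
        { .i = { OP_LOADI, 0, 40 } },   /* r0 = 40      */
        { .i = { OP_LOADI, 1,  2 } },   /* r1 = 2       */
        { .r = { OP_ADD,   2, 0, 1 } }, /* r2 = r0 + r1 */
        { .raw = OP_HALT },
    };

    for (size_t pc = 0; ; pc++) {
        Instr in = prog[pc];
        switch (in.r.op) {              /* op occupies the same bits in every format */
        case OP_LOADI:
            regs[in.i.dst] = (int16_t)in.i.imm;
            break;
        case OP_ADD:
            regs[in.r.dst] = regs[in.r.src1] + regs[in.r.src2];
            break;
        default:                        /* OP_HALT */
            printf("r2 = %d\n", regs[2]);
            return 0;
        }
    }
}
```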
