81

Architectural Introspection and Applications

Litty, Lionel 30 August 2010 (has links)
Widespread adoption of virtualization has resulted in an increased interest in Virtual Machine (VM) introspection. To perform useful analysis of the introspected VMs, hypervisors must deal with the semantic gap between the low-level information available to them and the high-level OS abstractions they need. To bridge this gap, systems have proposed making assumptions derived from the operating system source code or symbol information. As a consequence, the resulting systems create a tight coupling between the hypervisor and the operating systems run by the introspected VMs. This coupling is undesirable because any change to the internals of the operating system can render the output of the introspection system meaningless. In particular, malicious software can evade detection by making modifications to the introspected OS that break these assumptions. Instead, in this thesis, we introduce Architectural Introspection, a new introspection approach that does not require information about the internals of the introspected VMs. Our approach restricts itself to leveraging constraints placed on the VM by the hardware and the external environment. To interact with both of these, the VM must use externally specified interfaces that are both stable and not linked with a specific version of an operating system. Therefore, systems that rely on architectural introspection are more versatile and more robust than previous approaches to VM introspection. To illustrate the increased versatility and robustness of architectural introspection, we describe two systems, Patagonix and P2, that can be used to detect rootkits and unpatched software, respectively. We also detail Attestation Contracts, a new approach to attestation that relies on architectural introspection to improve on existing attestation approaches. We show that because these systems do not make assumptions about the operating systems used by the introspected VMs, they can be used to monitor both Windows and Linux based VMs. We emphasize that this ability to decouple the hypervisor from the introspected VMs is particularly useful in the emerging cloud computing paradigm, where the virtualization infrastructure and the VMs are managed by different entities. Finally, we show that these approaches can be implemented with low overhead, making them practical for real world deployment.
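To give a flavour of what hardware-level introspection can check without any OS-specific knowledge, the following Java sketch compares the contents of an executing memory page against a whitelist of hashes derived from trusted binaries. The hash-and-whitelist mechanism shown here is an illustrative assumption, not necessarily the exact design of Patagonix or P2.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.Set;

/** Illustrative only: flags executable memory pages whose hash is not on a
 *  whitelist built from trusted binaries (hypothetical mechanism, not the thesis's API). */
public class PageVerifier {
    private final Set<String> trustedPageHashes;   // hashes of pages from known-good binaries

    public PageVerifier(Set<String> trustedPageHashes) {
        this.trustedPageHashes = trustedPageHashes;
    }

    /** Called when the hypervisor observes a page being executed for the first time. */
    public boolean isKnownCode(byte[] pageContents) throws NoSuchAlgorithmException {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        String hash = HexFormat.of().formatHex(sha256.digest(pageContents));
        return trustedPageHashes.contains(hash);   // unknown code may indicate a rootkit
    }
}
```

The point of the sketch is that the check relies only on page contents observed through the hardware interface, not on any OS-internal data structures.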
82

App enabling environment for Volvo CE platforms

Duff, Gerard January 2015 (has links)
No description available.
83

Real-time Code Generation in Virtualizing Runtime Environments

Däumler, Martin 16 March 2015 (has links) (PDF)
Modern general-purpose programming languages like Java or C# provide a rich feature set and a higher degree of abstraction than conventional real-time programming languages like C/C++ or Ada. Applications developed in these modern languages are typically deployed as platform-independent intermediate code, which is executed by a virtualizing runtime environment. This allows for high portability. Prominent examples are the Dalvik Virtual Machine of the Android operating system, the Java Virtual Machine, and Microsoft .NET’s Common Language Runtime. Because the virtualizing runtime environment executes the instructions of the intermediate code, real-time software development faces additional challenges. One issue is the transformation of intermediate code instructions into native code instructions. If this transformation interferes with the execution of the real-time application, it can introduce jitter into its execution times. This can degrade the quality of soft real-time systems like augmented reality applications on mobile devices, but can lead to severe problems in hard real-time applications with strict timing requirements. This thesis examines how timing issues with intermediate code execution in virtualizing runtime environments can be overcome, addressing real-time-suitable generation of native code in particular. In order to preserve the advantages of modern programming languages over conventional ones, the solution has to adhere to the following main requirements: intermediate code transformation does not interfere with application execution; portability is not reduced and code transformation remains transparent to the programmer; performance is comparable. Existing approaches are evaluated and a concept for real-time-suitable code generation is developed. The concept is based on pre-allocating the native code and eliminating indirect references, while considering and optimizing the startup time of an application. The concept is implemented by extending an existing virtualizing runtime environment that does not target real-time systems per se, and is evaluated qualitatively and quantitatively. A comparison of the new concept to existing approaches reveals high execution-time determinism and good performance while preserving the portable deployment of applications via intermediate code.
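A rough application-level analogue of pre-allocating native code is to warm up the timing-critical path at startup, so that just-in-time compilation happens before the real-time phase begins, as in the Java sketch below. This only illustrates the underlying idea under assumed names and thresholds; the thesis instead extends the runtime environment itself.

```java
/** Illustrative warm-up sketch: exercise the timing-critical method at startup so the
 *  JIT compiler generates native code before the real-time phase begins. */
public class Warmup {
    static int criticalPath(int x) {          // stands in for the timing-critical task
        return x * 31 + 7;
    }

    public static void main(String[] args) {
        int sink = 0;
        // Many invocations push the method past the JIT compilation threshold during
        // startup, trading a longer startup time for more deterministic execution later.
        for (int i = 0; i < 100_000; i++) {
            sink += criticalPath(i);
        }
        System.out.println("warm-up done, sink=" + sink);
        // ... the timing-critical phase starts here, ideally without further JIT activity ...
    }
}
```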
84

Live updates in High-availability (HA) clouds

Sanagari, Vivek January 2018 (has links)
Background. High availability (HA) is a cloud's ability to keep functioning after one or more hardware or software components fail. Its purpose is to minimize system downtime and data loss. Many service providers guarantee a Service Level Agreement including an uptime percentage of the computing service, calculated from the available time and the system downtime excluding planned outage time. The aim of the thesis is to update the virtual machines running in the cloud without causing any interruption to the user, by redirecting the resources/services running on them to an alternative virtual machine before the original VM is updated.

Objectives. The objectives for the above aim are:
• The first objective is to investigate existing solutions for high availability and, if possible, adapt them to our aim. The alternative is to design our own solution.
• The second objective is to implement the solution in an OpenStack environment. As an alternative, we can try a smaller-scale implementation under a virtualization platform such as VirtualBox.
• The final objective is to run experiments to quantify the effectiveness of our solution in terms of overhead and the degree of seamlessness to the users.

Methods. An environment with multiple virtual machines is created to represent multiple virtual servers in the cloud. The state of the service provided by the primary virtual machine is saved to persistent storage and the client is redirected to an alternate virtual machine. At that point the primary virtual machine may reboot for an update or any other issue.

Results. For CPU utilization, the mean CPU utilization on the server and the host in scenario 1 are 0.34% and 3.2% respectively. The mean CPU utilization on the primary server and the host in scenario 2 during the failover cycle are 2.0% and 9.7% respectively, and on the secondary server and the host 0.99% and 8.0% respectively. For memory utilization, the mean memory usage on the server in scenario 1 is 16%, while the mean memory usage on the primary and secondary servers in scenario 2 during the failover cycle are 37% and 48% respectively. The failover time of the high-availability environment is 6.8 seconds, and the time for the off-line node to rejoin the cluster as on-line when instructed is 1.5 seconds. Network traffic is measured in kilobits per second: it is 1.2 Kb/s on port 80 in scenario 2 and 1.4 Kb/s between the client and the server in scenario 1. In addition, data traffic is captured on ports 5405, 2224 and 7788, where port 5405 (Pacemaker/Corosync) carries UDP traffic, port 2224 (pcsd) carries TCP traffic and port 7788 (DRBD) carries TCP traffic. The traffic captured on these ports represents the network overhead due to HA. During the failover cycle, additional traffic of 45 Kb/s, 1.2 Kb/s and 7.0 Kb/s flows on ports 5405, 2224 and 7788 respectively.

Conclusions. From our experimental results, the CPU overhead of handling live updates in the high-availability environment is approximately 1.1-1.7% higher in HA mode than when a stand-alone server is used. The overhead is around 21-32% higher in terms of memory utilization for live updates on the HA system than for the standard server. The network traffic overhead induced by the ports used by the high-availability environment (5405, 2224, 7788) is approximately 53 Kb/s, while the minimum overhead is approximately 16 Kb/s. The final and most important metric is the failover time, which indicates the seamlessness of the service, since the environment needs to provide its services to users without interruption. The failover time of the HA model is only about 6.8 seconds, leaving the environment highly available; however, users may notice a slight interruption for requests made during this span.
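As a rough illustration of how such a failover time could be observed from the client side, the following Java sketch polls a health endpoint and reports how long the service was unreachable. The endpoint URL, the 100 ms polling interval and the 500 ms connect timeout are assumptions for illustration only, not the setup used in the thesis.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

/** Illustrative sketch: estimate failover time by polling a (hypothetical) service endpoint. */
public class FailoverTimer {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofMillis(500)).build();
        HttpRequest probe = HttpRequest.newBuilder(URI.create("http://cluster-vip/health")).build();

        long outageStart = 0, outageEnd = 0;
        while (true) {
            boolean up;
            try {
                up = client.send(probe, HttpResponse.BodyHandlers.discarding()).statusCode() == 200;
            } catch (Exception e) {
                up = false;                          // connection refused or timeout counts as downtime
            }
            long now = System.nanoTime();
            if (!up && outageStart == 0) outageStart = now;          // service just went down
            if (up && outageStart != 0) { outageEnd = now; break; }  // service is back
            Thread.sleep(100);
        }
        System.out.printf("Observed failover time: %.1f s%n", (outageEnd - outageStart) / 1e9);
    }
}
```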
85

ESPECIFICAÇÃO DE UMA ARQUITETURA PARA MIGRAÇÃO DE MÁQUINAS VIRTUAIS UTILIZANDO ONTOLOGIAS / SPECIFICATION OF AN ARCHITECTURE FOR MIGRATION OF VIRTUAL MACHINES USING ONTOLOGIES

Rohden, Rafael Barasuol 23 July 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Cloud computing is a new field in computing, especially on the Internet, that offers new perspectives for interconnection technologies and raises questions about the architecture, design and implementation of existing networks and data centers. Technologies such as server virtualization are now widely used to provide on-demand services while avoiding server sprawl. In this way, servers are used so that their resources are better employed to guarantee the availability of resources and services to users, allowing these users to access services according to their needs, regardless of where the services are hosted or how they are delivered; this is the main characteristic of cloud computing. However, some servers eventually become overloaded while others remain idle, and the way to resolve this is live migration of virtual machines, that is, migrating a running virtual machine together with its applications to another server and thereby restoring the balance among the servers. This balancing, called load balancing, is one of the techniques enabled by live migration technology; in other words, live migration of virtual machines has become key to optimizing computational resources. It is therefore worthwhile to develop solutions that make this technology practical to deploy. In a virtualized environment where monitoring applications check the load state of the servers, it is possible to interact with the virtual machines and perform migrations that ensure the optimization and utilization of computational resources. Considering this, this work presents an architecture for the migration of virtual machines that uses ontologies for knowledge representation in a virtualization environment. To this end, an ontology, Onto-LM, was developed using the Ontology Development 101 process; it represents a virtual machine virtualization environment and helps to visualize the current state of the environment. For the architecture specified in this work, the components and the information flows between them were defined, with ontologies as one of the components. To exemplify the architecture, a tool, OntoMig, was developed in the JAVA programming language; it executes and manages the information obtained from monitoring the servers, the population of the ontology, and the migration of virtual machines when needed.
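To make the monitor-decide-migrate flow of such an architecture concrete, here is a minimal Java sketch. The class names, load threshold and migration interface are hypothetical and do not correspond to OntoMig's actual components, which additionally consult the Onto-LM ontology when making decisions.

```java
import java.util.Comparator;
import java.util.List;

/** Illustrative sketch of a monitor that triggers a migration when a host is overloaded. */
public class MigrationMonitor {
    record Host(String name, double cpuLoad, List<String> vms) {}

    interface Migrator {                                // stands in for the hypervisor API
        void migrate(String vm, Host from, Host to);
    }

    static final double OVERLOAD = 0.80;                // assumed load threshold

    static void rebalance(List<Host> hosts, Migrator migrator) {
        Host busiest = hosts.stream().max(Comparator.comparingDouble(Host::cpuLoad)).orElseThrow();
        Host idlest  = hosts.stream().min(Comparator.comparingDouble(Host::cpuLoad)).orElseThrow();
        if (busiest.cpuLoad() > OVERLOAD && !busiest.vms().isEmpty()) {
            // Pick any VM from the overloaded host; a real system would consult the
            // ontology's representation of the environment before choosing.
            migrator.migrate(busiest.vms().get(0), busiest, idlest);
        }
    }
}
```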
86

Analysis of requirements for an automated testing and grading assistance system / Kravanalys för ett automatiserat stödsystem för testning och betygsättning av programkod

Lindgren, Jonas January 2014 (has links)
This thesis analyzes the configuration and security requirements of an automated assignment testing system. The requirements for a flexible yet powerful configuration format are discussed in depth, and an appropriate configuration format is chosen. Additionally, the overall security requirements of the system are discussed, analyzing the different alternatives available to fulfill the requirements. / The presentation has already been completed.
87

A Performance Study of VM Live Migration over the WAN

Mohammad, Taha, Eati, Chandra Sekhar January 2015 (has links)
Virtualization is the key technology that has given cloud computing platforms a new way for small and large enterprises to host their applications by renting the available resources. Live VM migration allows a virtual machine to be transferred from one host to another while the virtual machine is active and running. The main challenge in live migration over the WAN is maintaining network connectivity during and after the migration. We have carried out live VM migration over the WAN, migrating VM memory states of different sizes, and present our solutions based on the Open vSwitch/VXLAN and Cisco GRE approaches. VXLAN provides the mobility support needed to maintain network connectivity between the client and the virtual machine. We have set up an experimental testbed to collect the relevant performance metrics and analyzed the performance of live migration in VXLAN and GRE networks. Our experimental results show that network connectivity was maintained throughout the migration process with negligible signaling overhead and minimal downtime. The downtime variation experienced with changes in the applied network delay was higher than the variation experienced when migrating different VM memory states. The total migration time showed a strong relationship with the size of the migrating VM memory state.
88

A PMIPv6 Approach to Maintain Network Connectivity during VM Live Migration over the Internet

Kassahun, Solomon, Demissie, Atinkut January 2013 (has links)
Live migration is a mechanism that allows a VM to be moved from one host to another while the guest operating system is running. Current live migration implementations are able to maintain network connectivity within a LAN. However, the same techniques cannot be applied to live migration over the Internet. We present a solution based on PMIPv6, a lightweight mobility protocol standardized by the IETF. PMIPv6 handles node mobility without requiring any support from the moving nodes. In addition, PMIPv6 works with IPv4, IPv6 and dual-stack nodes. We have set up a testbed to measure the performance of live migration in a PMIPv6 network. Our results show that network connectivity is successfully maintained with little signaling overhead and short VM downtime. As far as we know, this is the first time PMIPv6 has been used to enable live migration beyond the scope of a LAN.
89

Automated Live Migration of Virtual Machines

Glad, Andreas, Forsman, Mattias January 2013 (has links)
This thesis studies the area of virtualization. The focus is on the sub-area of live migration, a technique that allows a seamless migration of a virtual machine from one physical machine to another. Virtualization is an attractive technique, utilized in large computer systems such as data centers. By using live migration, data center administrators can migrate virtual machines seamlessly, without the users of the virtual machines noticing the migrations. Manually initiated migrations can become cumbersome with an ever-increasing number of physical machines. The number of physical and virtual machines is not the only problem; deciding when to migrate and where to migrate are other problems that need to be solved. Manually initiated migrations can also be inaccurate and untimely. Two different strategies for automated live migration have been developed in this thesis: the Push strategy, which tries to get rid of virtual machines, and the Pull strategy, which tries to steal virtual machines. The design and implementation of both strategies are presented in the thesis. The strategies utilize Shannon's information entropy to measure the balance in the system and a cost model to predict the time a migration would require. This is used together with the information entropy to decide which virtual machine to migrate if and when a hotspot occurs. The implementation was done with the help of OMNeT++, an open-source simulation tool, and the strategies are evaluated through a set of simulations covering a variety of scenarios with different workloads. Our results show that the developed strategies can re-balance a system of computers in only 4-5 minutes after a large number of virtual machines has been added or removed. The results further show that our strategies are able to keep the system balanced when the system load is at a medium level, even while virtual machines are continuously added to or removed from the system. The contribution this thesis brings to the field is a model for how automated live migration of virtual machines can be done to improve the performance of a computer system, for example a data center.
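The entropy-based balance measure can be sketched as follows. This is a minimal Java illustration assuming the hosts' load shares are normalized into a probability distribution, which may differ from the exact formulation used in the thesis.

```java
/** Sketch of using Shannon's information entropy as a balance metric: the entropy of the
 *  hosts' load shares is maximal when load is spread evenly across all hosts. */
public class BalanceMetric {
    /** Returns entropy normalized to [0, 1]; 1.0 means perfectly balanced load. */
    static double normalizedEntropy(double[] hostLoads) {
        if (hostLoads.length < 2) return 1.0;          // a single host is trivially balanced
        double total = 0;
        for (double l : hostLoads) total += l;
        if (total == 0) return 1.0;                    // no load at all counts as balanced
        double entropy = 0;
        for (double l : hostLoads) {
            if (l == 0) continue;                      // 0 * log(0) is taken as 0
            double p = l / total;                      // this host's share of the total load
            entropy -= p * (Math.log(p) / Math.log(2));
        }
        return entropy / (Math.log(hostLoads.length) / Math.log(2)); // divide by maximum entropy
    }

    public static void main(String[] args) {
        System.out.println(normalizedEntropy(new double[]{0.5, 0.5, 0.5, 0.5})); // ~1.0, balanced
        System.out.println(normalizedEntropy(new double[]{2.0, 0.0, 0.0, 0.0})); // 0.0, hotspot
    }
}
```

A strategy could then trigger a Push or Pull migration whenever the metric drops below some chosen threshold.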
90

Energy Efficient Cloud Computing: Techniques and Tools

Knauth, Thomas 22 April 2015 (has links) (PDF)
Data centers hosting internet-scale services consume megawatts of power. Mainly for cost reasons, but also to address environmental concerns, data center operators are interested in reducing their energy use. This thesis investigates if and how hardware virtualization helps to improve the energy efficiency of modern cloud data centers. Our main motivation is to power off unused servers to save energy. The work encompasses three major parts. First, a simulation-driven analysis quantifies the benefits of known reservation times in infrastructure clouds: virtual machines with similar expiration times are co-located to increase the probability of powering down unused physical hosts. Second, we propose and prototype a system to deliver truly on-demand cloud services, in which idle virtual machines are suspended to free resources and as a first step towards powering off the physical server. Third, a novel block-level data synchronization tool enables fast and efficient state replication; frequent state synchronization is necessary to prevent data unavailability, because powering down a server disables access to its locally attached disks and any data stored on them. The techniques effectively reduce the overall number of required servers, either through optimized scheduling or by suspending idle virtual machines. Fewer live servers translate into proportional energy savings, as the unused servers no longer need to be powered.
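As an illustration of the reservation-time idea, the Java sketch below groups virtual machines whose leases expire in the same time window so they can be co-located; the one-hour bin width and the data types are assumptions made for illustration, not the scheduling policy evaluated in the thesis.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

/** Sketch: VMs whose leases end in the same window are grouped so they can be placed on
 *  the same hosts, which can then be powered off together once the leases expire. */
public class ExpirationGrouping {
    record Vm(String id, Instant leaseEnd) {}

    static Map<Long, List<Vm>> groupByExpiryHour(List<Vm> vms, Instant now) {
        return vms.stream().collect(Collectors.groupingBy(
                vm -> Duration.between(now, vm.leaseEnd()).toHours(), // bin index: hours until expiry
                TreeMap::new,
                Collectors.toList()));
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        List<Vm> vms = List.of(
                new Vm("vm-a", now.plus(Duration.ofMinutes(50))),
                new Vm("vm-b", now.plus(Duration.ofMinutes(55))),
                new Vm("vm-c", now.plus(Duration.ofHours(6))));
        // vm-a and vm-b land in the same bin and are candidates for co-location.
        groupByExpiryHour(vms, now).forEach((hour, group) -> System.out.println(hour + "h: " + group));
    }
}
```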
