About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Its metadata is collected from universities around the world.
61

Virtual time-aware virtual machine systems

Yoginath, Srikanth B. 27 August 2014
Discrete dynamic system models that track, maintain, utilize, and evolve virtual time are referred to as virtual time systems (VTS). The realization of VTS using virtual machine (VM) technology offers several benefits, including fidelity, scalability, interoperability, fault tolerance, and load balancing. The usage of VTS with VMs appears in two ways: (a) VMs within VTS, and (b) VTS over VMs. The former is prevalent in high-fidelity cyber infrastructure simulations and cyber-physical system simulations, wherein VMs form a crucial component of VTS. The latter appears in popular cloud computing services, where VMs are offered as computing commodities and the VTS utilizes VMs as parallel execution platforms. Prior to the work presented here, the simulation community using VMs within VTS (specifically, cyber infrastructure simulations) had little awareness of a fundamental virtual time-ordering problem. The correctness problem went largely unnoticed and unaddressed because the effects of fair-share multiplexing on the virtual-time evolution of VMs within VTS were unrecognized. The dissertation research reported here demonstrated the latent incorrectness of existing methods, defined key correctness benchmarks, quantitatively measured the incorrectness, proposed and implemented novel algorithms to overcome it, and optimized the solutions to execute without a performance penalty. In fact, our correctness-enforcing design yields better runtime performance than the traditional (incorrect) methods. Similarly, VTS execution over VM platforms such as cloud computing services incurs large performance degradation, which was not known until our research uncovered the fundamental mismatch between the scheduling needs of VTS execution and those of traditional parallel workloads: VTS follows virtual-time-ordered execution, whereas conventional VM execution follows a fair-share policy. Our research quantitatively uncovered the exact cause of the poor performance of VTS on VM platforms. Consequently, we designed a novel VTS-aware hypervisor scheduler and showed significant performance gains in VTS execution over VM platforms. Moreover, we proposed and implemented a novel virtual time-aware execution methodology that relieves the degradation and provides over an order of magnitude faster execution than traditional virtual time-unaware execution.
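To make the scheduling contrast concrete, here is a minimal sketch, in Python, of virtual-time-ordered dispatch as opposed to fair-share dispatch. The names and the simulation model are invented for illustration and do not reproduce the dissertation's actual hypervisor scheduler; the point is only that the scheduler always runs the VM whose virtual clock is furthest behind.

```python
# Illustrative sketch (not the dissertation's implementation): a hypervisor
# that dispatches VMs in virtual-time order. Each VM carries a virtual clock;
# the scheduler always picks the VM with the lowest virtual time (LVT-first),
# preserving causal ordering across VMs instead of sharing CPU fairly.
import heapq

class VM:
    def __init__(self, name):
        self.name = name
        self.virtual_time = 0.0  # this VM's simulation clock

def run_slice(vm, quantum):
    # Advancing the virtual clock stands in for executing the VM's events.
    vm.virtual_time += quantum

def virtual_time_ordered(vms, slices):
    """Always dispatch the VM that is furthest behind in virtual time."""
    heap = [(vm.virtual_time, i, vm) for i, vm in enumerate(vms)]
    heapq.heapify(heap)
    for _ in range(slices):
        _, i, vm = heapq.heappop(heap)
        run_slice(vm, quantum=1.0)
        heapq.heappush(heap, (vm.virtual_time, i, vm))

vms = [VM("router"), VM("host-a"), VM("host-b")]
virtual_time_ordered(vms, slices=9)
print([(vm.name, vm.virtual_time) for vm in vms])  # clocks advance together
```

A fair-share scheduler would instead pick VMs by CPU consumption, letting one VM's virtual clock race ahead of another's and breaking time-ordering.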
62

Secure System Virtualization: End-to-End Verification of Memory Isolation

Nemati, Hamed January 2017
In recent years, security kernels have played a promising role in reshaping the landscape of platform security on embedded devices. Security kernels, such as separation kernels, enable the construction of high-assurance mixed-criticality execution platforms on a small TCB, which enforces isolation between components. The reduced TCB minimizes the system attack surface and facilitates the use of formal methods to ensure the kernel's functional correctness and security. In this thesis, we explore various aspects of building a provably secure separation kernel using virtualization technology. We show how the memory management subsystem can be virtualized to enforce isolation of system components. Virtualization is done using direct paging, which enables guest software to manage its own memory configuration. We demonstrate the soundness of our approach by verifying that the high-level model of the system fulfills the desired security properties. Through refinement, we then propagate these properties (semi-)automatically to the machine code of the virtualization mechanism. Further, we show how a runtime monitor can be securely deployed alongside a Linux guest on a hypervisor to prevent code injection attacks targeting Linux. The monitor takes advantage of the provided separation to protect itself and to retain a complete view of the guest. Separating components using low-level software cannot by itself guarantee system security: current processor architectures include features that can be exploited to violate the isolation of components. We present a new low-noise attack vector, constructed by measuring cache effects, which is capable of breaching the isolation of components and invalidates the verification of software verified on a memory-coherent model. To restore isolation, we provide several countermeasures and propose a methodology to repair the verification by including data caches in the statement of the top-level security properties of the system.
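The direct-paging idea can be pictured with a small sketch. Everything here (frame ranges, the hypercall name, the policy) is an assumption made for illustration; the thesis's verified kernel has its own formally specified interface.

```python
# Minimal sketch of direct paging, assuming invented names and policy: the
# guest manages its own page tables, but every proposed update passes through
# a hypervisor check that rejects mappings outside the guest's allocation.
GUEST_FRAMES = range(0x1000, 0x2000)   # physical frames owned by this guest
HYPERVISOR_FRAMES = {0x0100, 0x0101}   # frames that must never be mapped

page_table = {}  # virtual page -> (physical frame, writable)

def hypercall_map(vpage, pframe, writable):
    """Validate a guest-proposed mapping before installing it."""
    if pframe in HYPERVISOR_FRAMES:
        raise PermissionError("mapping would expose hypervisor memory")
    if pframe not in GUEST_FRAMES:
        raise PermissionError("frame not allocated to this guest")
    page_table[vpage] = (pframe, writable)

hypercall_map(0x10, 0x1004, writable=True)       # accepted
try:
    hypercall_map(0x11, 0x0100, writable=True)   # rejected: isolation breach
except PermissionError as e:
    print("rejected:", e)
```

The verification effort described above amounts to proving, down to the machine code, that no sequence of such calls can ever violate the isolation property.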
63

Survey of the VMware NSX Virtualized Network Platform

Gran, Mikael, Karlsson, Claes January 2017
Atea Eskilstuna needed a platform that could simplify and reduce the number of configurations required when implementing customer environments. The purpose of this thesis was to evaluate the VMware NSX platform and compare it with traditional networking solutions. Virtualization is an important part of today's data center operations: it optimizes the use of hardware resources and reduces costs. So far, virtualization has focused mainly on servers and clients while the network has been overlooked, and certain problems have therefore arisen in traditional data centers regarding traffic flows, security, and implementation. Data centers were previously optimized for traffic entering or leaving the data center, so firewalls and security policies were usually placed at the data center edge. In today's data centers, however, traffic between devices inside the data center has increased, and these flows also need to be protected. This kind of internal security can be achieved with internal policies on every network device, but that approach does not scale, since the number of configuration points in the network grows.
These problems can be handled with VMware NSX, which virtualizes network devices and centralizes administration. NSX provides a distributed firewall whose policies can be applied from a central management platform directly to groups of virtual machines and virtual routers. This increases security inside the data center and shortens implementation time compared with traditional data centers. The thesis focuses on how NSX works in contrast to physical network devices and how it handles issues such as traffic flows (including hairpinning), security, and automation. For this purpose, a lab environment with several virtual machines was built in Ravello's cloud service, and a literature study was carried out. The lab environment was used to set up customers with virtual network devices and virtual machines, and served as a reference for how NSX and its functions are implemented. The literature study focuses on what is possible in NSX and on the advantages and disadvantages of NSX compared with traditional data centers. The results showed that the only disadvantage of NSX is its licensing costs.
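As a rough illustration of the micro-segmentation model described above (one central policy over logical groups of VMs, enforced at every VM), here is a hypothetical sketch. The group names, rule format, and API are invented and do not reflect NSX's actual interface.

```python
# Hypothetical sketch of centrally defined, distributed firewall policy:
# rules are written once against logical VM groups, and every enforcement
# point evaluates the same table, so adding a VM needs no per-device ACLs.
from dataclasses import dataclass

@dataclass
class Rule:
    src_group: str
    dst_group: str
    port: int
    action: str  # "allow" or "deny"

groups = {
    "web": {"vm-web-1", "vm-web-2"},
    "db":  {"vm-db-1"},
}

policy = [
    Rule("web", "db", 5432, "allow"),  # web tier may reach the database
    Rule("web", "db", 22,   "deny"),   # but may not SSH into it
]

def evaluate(src_vm, dst_vm, port):
    """Each VM's enforcement point applies the same central policy."""
    for rule in policy:
        if (src_vm in groups[rule.src_group]
                and dst_vm in groups[rule.dst_group]
                and port == rule.port):
            return rule.action
    return "deny"  # default deny for east-west traffic

print(evaluate("vm-web-1", "vm-db-1", 5432))  # allow
print(evaluate("vm-web-2", "vm-db-1", 22))    # deny
```

The contrast with a traditional data center is that the table above lives in one place: growing the "web" group does not multiply configuration points.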
64

End-to-end security architecture and self-protection mechanisms for cloud computing environments

Wailly, Aurélien 30 September 2014
In recent years, infrastructure virtualization has become one of the major research challenges, delivering new services while consuming less energy. However, many attacks hinder the global adoption of cloud computing. Faced with multiple threats and heterogeneous defense mechanisms, the autonomic approach offers simpler, more robust, and more efficient management of cloud security, and self-protection has raised growing interest as a possible answer to the cloud protection challenge. Yet existing solutions adapt poorly: they lack flexible security policies, cross-layered defense, multiple control granularities, and open security architectures. This thesis presents VESPA, a self-protection architecture for cloud infrastructures. VESPA is built around policies that can regulate security at several levels, and flexible coordination between self-protection loops enables a rich spectrum of security strategies, such as detection and reaction across multiple layers. A multi-plane, extensible architecture also allows simple integration of existing commodity security components. Recently, some of the most critical attacks against cloud infrastructures have targeted the most sensitive component: the hypervisor, or Virtual Machine Monitor (VMM). The main attack vector is a poorly confined device driver, and the defense mechanisms currently deployed are static and hard to manage. We propose a different approach with KungFuVisor, a framework derived from VESPA for building self-defending hypervisors. The result is a very flexible self-protection architecture that can dynamically enforce a rich spectrum of remediation actions over different parts of the VMM while facilitating the administration of defense strategies. We show its application to three different protection schemes: virus infection, mobile clouds, and hypervisor drivers. VESPA can thus enhance the security of cloud infrastructures.
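The self-protection loops described above follow a detect-decide-react pattern. The sketch below is a hypothetical illustration of that pattern only; the detector names, policy entries, and reactions are invented, not VESPA's actual components.

```python
# Hypothetical detect-decide-react loop in the spirit of self-protection:
# events from one layer are matched against a policy that can trigger a
# remediation at another layer (e.g. quarantining a VM whose driver faults).
def detect(events, threshold=7):
    """Monitor: keep only events severe enough to warrant a reaction."""
    return [e for e in events if e["severity"] >= threshold]

def decide(alert, policy):
    """Plan: map an alert type to a remediation according to the policy."""
    return policy.get(alert["type"], "log_only")

def react(action, target):
    """Execute: apply the chosen remediation (stubbed as a print)."""
    print(f"reaction: {action} on {target}")

policy = {
    "driver_fault":    "quarantine_vm",
    "virus_signature": "snapshot_and_kill",
}

events = [
    {"type": "driver_fault", "severity": 9, "vm": "vm-3"},
    {"type": "port_scan",    "severity": 4, "vm": "vm-1"},
]
for alert in detect(events):
    react(decide(alert, policy), alert["vm"])
```

The architectural point is the separation of the three phases: policies can be swapped to realize different security strategies without rewriting detectors or reactors.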
65

Hypervisor control of COTS multi-core processors to enforce determinism for future avionics equipment

Jean, Xavier 18 June 2015
In this thesis we focus on mastering COTS multi-core processors, with their hard real-time requirements, so that they can be used in avionics equipment. The objective is to enable the application of known worst-case execution time (WCET) evaluation methods to a set of tasks representative of avionics software. At runtime, tasks executing on different cores are likely to access shared hardware resources simultaneously, in particular the main memory. This can stall some accesses, a phenomenon known as interference. Interference can have a large impact on the execution time of embedded software, and on a COTS processor, which is bought off the shelf and targets a much larger market than avionics, this impact is not bounded. We seek to guarantee the absence of interference through software means, since COTS processors do not provide adequate mechanisms at the hardware level. We extend deterministic-software concepts from the state of the art to make them compatible with the reuse of legacy software. To this end, we introduce the notion of "control software": a functionally neutral component, replicated on every core, that controls when each core may access shared resources so that those accesses are temporally isolated from one another. We formalize and study the feasibility of control software on COTS processors and its efficiency with respect to legacy avionics software.
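The "control software" concept lends itself to a small model: each core consults a global time-slot table before touching shared memory, so accesses from different cores never overlap. The slot table and timing below are invented for illustration; the thesis targets real COTS hardware, not this toy model.

```python
# Toy model of temporally isolated access to shared memory, assuming an
# invented TDMA-style slot table: the replicated control layer on each core
# permits an access only inside that core's statically assigned window.
SLOT_LENGTH = 10           # abstract time units per window
SLOT_TABLE = [0, 1, 2, 3]  # round-robin ownership of the memory window

def may_access_memory(core_id, now):
    """Check whether this core owns the shared-memory window at time `now`."""
    owner = SLOT_TABLE[(now // SLOT_LENGTH) % len(SLOT_TABLE)]
    return owner == core_id

# Simulate a few ticks: exactly one core may access memory at any time.
for now in range(0, 40, 10):
    owner = [c for c in range(4) if may_access_memory(c, now)][0]
    print(f"t={now:2d}: core {owner} may access shared memory")
```

Because accesses are serialized by construction, WCET analysis of each task no longer has to account for unbounded contention from the other cores.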
66

Real-time security in critical embedded systems

Buret, Pierrick 01 December 2015
The growth of information flows around the world has driven heavy use of real-time embedded systems, most notably satellites, which have become indispensable for geolocation, meteorology, and communications. The sharp rise in demand for this hardware has in turn increased its complexity. Following the evolution of terrestrial equipment, the aerospace field is turning to new technologies such as caches, multi-core processors, and hypervisors, and integrating them brings new technical challenges: performance must improve while manufacturing cost and production time decrease. A major advantage of these technologies is that they reduce the overall number of satellites needed while increasing the number of operators per satellite, since several customers can each be allocated all or part of the same satellite. Integrating several customers' products on a single embedded platform, however, makes it vulnerable, and the added system complexity makes a number of malicious acts possible. Once considered hypothetical, this threat has become a major topic in aerospace.
As a first exploration, this document presents malicious acts against embedded systems, in particular those carried out against satellite systems. After presenting the risks, we develop the real-time problem, focusing on the security of space hypervisors along two lines of research. The first concerns the evolution of production techniques and the implementation of a system for controlling a satellite's temporal characteristics; the objective is to adapt an existing approach to the constraints of these new, highly complex systems. We confront the difficulty of measuring the temporal characteristics of software running on a satellite by using dynamic analysis with a genetic algorithm which, guided by observed trends, can automatically search for the worst-case execution time of a given function. The second line improves technical knowledge of a satellite in operation and enables decision making in the event of a malicious act. We propose a hardware solution that detects anomalies in the management of the satellite's internal memory. Memory is an essential component of system operation, and its properties, shared among all customers, make it particularly vulnerable; moreover, knowing the number of memory accesses enables better scheduling and better predictability of a real-time system. Our component allows a potential attack or dependability problem to be detected and interpreted. This thesis highlights the complementarity of the two contributions: the number of memory accesses can be measured with a genetic algorithm of the same form as the one that searches for the worst-case execution time, so the work of the first part extends naturally to the second.
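The genetic-algorithm search mentioned in the abstract can be sketched as follows. The fitness function here is a stand-in: on the real system it would be a measured execution time or memory-access count for the candidate input.

```python
# Sketch of a genetic algorithm searching for worst-case behavior: candidate
# inputs evolve toward those that maximize an observed cost. The cost function
# below is a synthetic stand-in for "run the task with this input and measure".
import random

random.seed(1)

def fitness(candidate):
    # Stand-in measurement; deliberately irregular so the search is non-trivial.
    return sum(b * (i + 1) for i, b in enumerate(candidate)) % 97

def evolve(pop_size=20, genome_len=8, generations=50):
    pop = [[random.randint(0, 255) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # keep the costliest inputs
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.2:            # occasional mutation
                child[random.randrange(genome_len)] = random.randint(0, 255)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

worst = evolve()
print("costliest input found:", worst, "cost:", fitness(worst))
```

As the abstract notes, the same search shape serves both axes of the work: only the measurement behind `fitness` changes, from execution time to memory-access count.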
67

Virtualization Components of the Modern Hypervisor

McAdams, Sean 01 January 2015
Virtualization is the foundation on which cloud services build their business. It supports the infrastructure of the largest companies around the globe and is a key component for scaling software in the ever-growing technology industry. If companies decide to use virtualization as part of their infrastructure, it is important for them to have a quick and reliable way to choose a virtualization technology and to tune its performance for their intended usage. Unfortunately, while many papers discuss and test the performance of various virtualization systems, most of these performance tests do not take into account components that can be configured to improve performance in certain scenarios. This study compares how three hypervisors (VMware vSphere, Citrix XenServer, and KVM) perform under different configurations and identifies the workloads for which those configurations are best suited. It also provides a means to compare configurations with one another, so that implementers of these technologies can make informed decisions about which components to enable in their current or future systems.
68

Hypervisor-based cloud anomaly detection using supervised learning techniques

Nwamuo, Onyekachi 23 January 2020
Although cloud network flows are similar to conventional network flows in many ways, there are major differences in their statistical characteristics. However, due to the lack of adequate public datasets, the proponents of many existing cloud intrusion detection systems (IDS) have relied on the DARPA dataset, which was obtained by simulating a conventional network environment. In this thesis, we show empirically that the DARPA dataset fails to match important statistical characteristics of real-world cloud data center traffic and is therefore inadequate for evaluating cloud IDS. As an alternative, we analyze a new public dataset, collected through cooperation between our lab and a non-profit cloud service provider, which contains benign data and a wide variety of attack data. Furthermore, we present a new hypervisor-based cloud IDS using an instance-oriented feature model and supervised machine learning techniques. We investigate three classifiers: Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM). Experimental evaluation on a diversified dataset yields a detection rate of 92.08% and a false-positive rate of 1.49% for Random Forest, the best performing of the three classifiers.
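The evaluation approach (train a classifier on labeled flows, then report detection rate and false-positive rate) can be sketched with scikit-learn. The data below is synthetic stand-in data, since the thesis's dataset is not reproduced here; the 92.08%/1.49% figures come from the real dataset, not from this sketch.

```python
# Sketch of the Random Forest evaluation on synthetic stand-in flow features.
# Detection rate = recall on the attack class; false-positive rate = fraction
# of benign flows flagged as attacks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))                     # stand-in flow features
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)    # 1 = attack, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"detection rate:      {tp / (tp + fn):.2%}")
print(f"false-positive rate: {fp / (fp + tn):.2%}")
```

Swapping `RandomForestClassifier` for `LogisticRegression` or `SVC` reproduces the three-way comparison the thesis describes.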
69

Inter-Core Interference Mitigation in a Mixed Criticality System

Hinton, Michael Glenn 04 August 2020
In this thesis, we evaluate how well isolation can be achieved between two virtual machines within a mixed-criticality system on a multi-core processor. We achieve this isolation with Jailhouse, an open-source, minimalist hypervisor. We then enhance Jailhouse with core throttling, a technique we use to minimize inter-core interference between VMs. Finally, we run workloads with and without core throttling to determine the effect throttling has on interference between a non-real-time VM and a real-time VM. We find that Jailhouse provides excellent isolation between VMs even without throttling, and that core throttling suppresses the remaining inter-core interference to a large extent.
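Core throttling can be pictured as duty-cycling the best-effort core so that its memory traffic is suppressed for part of each period. The toy model below only illustrates the trade-off being measured; the duty cycles and the interference coefficient are invented numbers, not measurements from Jailhouse.

```python
# Toy model of core throttling (not Jailhouse's implementation): the
# non-critical core runs only a fraction of each period, trading its own
# throughput for less interference with the real-time core.
def best_effort_work(duty_cycle, periods, work_per_period=100):
    """Work the throttled core completes across the given periods."""
    return int(periods * work_per_period * duty_cycle)

def rt_slowdown(duty_cycle, base=1.0, interference=0.35):
    """Assumed linear model: RT slowdown grows with the other core's activity."""
    return base + interference * duty_cycle

for duty in (1.0, 0.5, 0.1):
    print(f"duty={duty:.1f}: best-effort work={best_effort_work(duty, 10):4d}, "
          f"RT slowdown x{rt_slowdown(duty):.2f}")
```

In these terms, the thesis's finding is that Jailhouse alone already keeps the slowdown factor small, and throttling suppresses most of what remains.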
70

A Comparison of the Resiliency Against Attacks Between Virtualised Environments and Physical Environments

Tellez Martinez, Albert, Steinhilber, Dennis Dirk January 2020
Virtualisation is increasingly adopted because of its cost and operational advantages. It is often believed to provide better security for an IT environment, since it enables the centralisation of hardware. However, virtualisation changes an IT environment fundamentally and introduces new vulnerabilities that must be considered. It is therefore of interest to evaluate whether the belief that virtual environments provide better security holds. In this project, the resiliency against attacks of physical environments and virtual environments is analysed to determine which provides higher resiliency and why. The physical and digital attack surfaces of all entities are analysed to reveal the relevant vulnerabilities that could be exploited. Besides a theoretical study, a physical and a virtual environment were set up to test chosen attacks in practice. The results show that virtual environments are less resilient than physical environments, especially against common attacks. This shows that virtualisation is still a new technology for many companies, and its vulnerabilities must be taken seriously.
