21

Performance Optimization of Linux Networking for Latency-Sensitive Virtual Systems

January 2015 (has links)
abstract: Virtual machines and containers have steadily improved their performance over time as a result of innovations in their architecture and software ecosystems. Network functions and workloads are increasingly migrating to virtual environments, supported by developments in software-defined networking (SDN) and network function virtualization (NFV). Previous performance analyses of virtual systems in this context often ignore significant performance gains that can be achieved with practical modifications to hypervisor and host systems. In this thesis, the network performance of containers and virtual machines is measured with standard network performance tools. The performance of these systems on a standard 3.18.20 Linux kernel is compared to that of a realtime-tuned variant of the same kernel. This thesis motivates improving determinism in virtual systems through modifications to host and guest kernels and thoughtful process isolation. With the system modifications described, the median TCP bandwidth of KVM virtual machines over bridged network interfaces is increased by 10.8%, with a corresponding 87.6% reduction in standard deviation. Docker containers see an 8.8% improvement in median bandwidth and a 4.4% reduction in the standard deviation of TCP measurements using similar bridged networking. System tuning also reduces the standard deviation of TCP request/response latency (TCP RR) over bridged interfaces by 86.8% for virtual machines and 97.9% for containers. Hardware devices assigned to virtual systems also see reductions in variance, although these are less pronounced. / Dissertation/Thesis / Masters Thesis Computer Science 2015
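The comparison above rests on the median and standard deviation of repeated TCP round-trip measurements before and after tuning. Purely as an illustration of that kind of comparison (not the thesis' actual tooling, which relies on standard benchmarks), here is a minimal Python sketch; the guest addresses and the assumption of a TCP echo service on port 7 are placeholders.

```python
import socket
import statistics
import time

def tcp_rr_samples(host, port=7, n=1000, payload=b"x" * 64):
    """Measure n TCP request/response round trips (a crude TCP_RR stand-in).
    Assumes an echo service is listening on the target port."""
    samples = []
    with socket.create_connection((host, port)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(n):
            t0 = time.perf_counter()
            s.sendall(payload)
            got = 0
            while got < len(payload):          # read the full echo back
                chunk = s.recv(len(payload) - got)
                if not chunk:
                    raise ConnectionError("peer closed the connection")
                got += len(chunk)
            samples.append((time.perf_counter() - t0) * 1e6)   # microseconds
    return samples

def summarize(label, samples):
    """Report the two statistics the thesis compares: median and standard deviation."""
    print(f"{label}: median={statistics.median(samples):.1f} us  "
          f"stdev={statistics.stdev(samples):.1f} us")

# Hypothetical guests: one on stock kernels, one on realtime-tuned host/guest kernels.
summarize("untuned VM", tcp_rr_samples("192.168.122.10"))
summarize("tuned VM  ", tcp_rr_samples("192.168.122.11"))
```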
22

Optimization of CPU Scheduling in Virtual Machine Environments

Venkatesh, Venkataramanan January 2015 (has links)
Data centres and other information technology infrastructures suffer from the major issue of ‘server sprawl’, a term used to describe a situation in which many servers consume resources inefficiently relative to the business value they deliver. Consolidating servers, rather than dedicating whole servers to individual applications, optimizes the usage of hardware resources, and virtualization achieves this by allowing multiple servers to share a single hardware platform. Server virtualization is facilitated by hypervisors, among which Xen is widely preferred because of its dual virtualization modes, virtual machine migration support and scalability. This research work analyses the CPU scheduling algorithms incorporated into Xen, on the basis of each algorithm's performance in different workload scenarios. In addition to the performance evaluation, the results emphasize the importance of a hypervisor CPU scheduler's capacity to handle compute-intensive and I/O-intensive domains in virtualized server environments. Based on this knowledge, the selection of the CPU scheduler in a hypervisor can be aligned with the requirements of the hosted applications. A new credit-based VCPU scheduling scheme is proposed, in which the credits remaining for each VCPU after every accounting period play a significant role in the scheduling decision. The proposed scheduling strategy allows the VCPUs of I/O-intensive domains to supersede others, reducing the response times of I/O-bound domains and the resulting bottleneck in the CPU run queue. Though a small percentage of context-switch overhead is introduced, the results indicate substantial improvement in I/O handling and fairness in resource allocation between the host and guest domains.
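As a rough illustration of the proposed decision rule — VCPUs of I/O-intensive domains with credits left over from the last accounting period are scheduled ahead of others — here is a toy run-queue sketch in Python. It is not Xen code; the class names and the exact boost rule are assumptions made for the example.

```python
from dataclasses import dataclass
import heapq
import itertools

@dataclass
class VCPU:
    name: str
    credits: int          # credits remaining after the last accounting period
    io_bound: bool        # whether the owning domain is I/O intensive

class CreditRunQueue:
    """Toy run queue: I/O-bound VCPUs that still hold credits supersede the rest;
    within each class, more remaining credits means earlier scheduling."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker so heap entries always compare

    def push(self, vcpu: VCPU):
        # Lower tuples pop first: class 0 = I/O bound with credits left, class 1 = others.
        cls = 0 if (vcpu.io_bound and vcpu.credits > 0) else 1
        heapq.heappush(self._heap, (cls, -vcpu.credits, next(self._seq), vcpu))

    def pick_next(self) -> VCPU:
        return heapq.heappop(self._heap)[-1]

rq = CreditRunQueue()
for v in [VCPU("web-dom", 40, True), VCPU("batch-dom", 120, False), VCPU("db-dom", 10, True)]:
    rq.push(v)
print(rq.pick_next().name)   # web-dom: I/O bound and still holding credits
```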
23

Differences between Dockerized Containers and Virtual Machines : A performance analysis for hosting web-applications in a virtualized environment

Al burhan, Mohammad January 2020 (has links)
This is a bachelor thesis regarding the performance differences when hosting a web-application in a virtualized environment. We compare virtual machines against containers and observe their resource usage in categories such as CPU, RAM and disk storage in an idle state, and perform a range of computation experiments in which response times are measured over a series of request intervals. Response times are measured with the help of a web-application created in Python. The experiments are performed under both normal and stressed conditions to give a better indication of which virtualized environment outperforms the other in different scenarios. The results show that virtual machines and containers remained close to each other in response times during the first request interval, but the containers outperformed virtual machines in terms of resource usage while idle, placing less of a burden on the host computer. The containers were also significantly faster in terms of response times, which was most noticeable under stressed conditions, where the virtual machine's response times nearly doubled.
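A measurement client of the kind described (a fixed number of requests issued at a given interval against the Python web application, recording response times) could look like the following sketch; the URL, intervals and request counts are placeholders, not the thesis' actual parameters.

```python
import statistics
import time
import urllib.request

def measure(url, interval_s, n_requests):
    """Issue n_requests GETs, one every interval_s seconds, and return response times in ms."""
    times = []
    for _ in range(n_requests):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        times.append((time.perf_counter() - start) * 1000)
        time.sleep(interval_s)
    return times

for interval in (1.0, 0.5, 0.1):          # shrinking intervals approximate rising load
    t = measure("http://guest.example:5000/", interval, 50)
    print(f"interval {interval:>4}s  median {statistics.median(t):6.1f} ms  "
          f"max {max(t):6.1f} ms")
```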
24

Virtual Firewalling For Migrating Virtual Machines In Cloud Computing

Anwar, Mahwish January 2013 (has links)
Context. Cloud Computing (CC) uses virtualization to provide computing resources on demand via the Internet. Small and large organizations benefit from CC because of reduced operating costs and increased business agility. A migrating Virtual Machine (VM) is vulnerable to attacks such as fake migration initiations, service interruptions, manipulation of data and other network attacks. During live migration, any laxity in the VM firewall policy can put the VM's data, OS and applications at risk. A malicious VM can pose a threat to other VMs on its host and, consequently, to VMs on the LAN. Hardware firewalls only protect a VM before and after migration, and they are blind to virtual traffic. Hence, virtual firewalls (VFs) are used to secure VMs. Mostly, they are deployed at the Virtual Machine Monitor (VMM) level under the Cloud provider's control. The source VMM-level VF protects the VM before the migration starts and the destination VMM-level VF starts securing the VM after migration is completed. It thus becomes possible for an attacker to use the intermediate migration window to launch attacks on the VM. Considering the potential of VFs, there should be great value in using open-source VFs at the VM level to protect VMs during migration, thereby reducing the attacker's window to gain access to the VM. It would enable hardened security for the overall VM migration. Objectives. The aim is to investigate VM-level firewalling using an open-source firewall as a complementary security layer to VMM-level firewalling, to secure migrating VMs in the CC domain. The first objective is to identify how virtual firewalls secure migrating VMs in CC and to propose VM-level open-source virtual firewalling for protecting VMs during migration. The VF is then implemented to validate and evaluate whether it remains intact and active during migration in a real Cloud data center. Methods. In the literature review, 9 electronic libraries are used, including IEEE Xplore, ACM Digital Library, SCOPUS, Engineering Village and Web of Knowledge. Studies are selected after querying the libraries for the two key terms ‘virtual machine’ and ‘migration’ (along with other variations/synonyms) in the abstract. Relevant papers on the subject are read and analyzed, and the information gaps are identified. From the identified gap, the experimental solution is designed. To test the potential of a VM-level VF for securing the migrating VM, experimental validation is performed using stratified samples of firewall rules. The VF evaluation is done using continuous ICMP echo packet transmission, and the packets are analyzed to determine firewall behavior during migration. To evaluate validity, the VM migration is performed 8 times in the City Network data center. Results. The literature review identified the widespread use of VMM-level firewalling for securing migrating VMs in CC; VM-level VFs had been neither researched nor evaluated for intactness during migration. The experiment performed at City Network demonstrated that the VM-level VF secures the VM, on average, for 96% of the migration time, thereby reducing the attack window during VM mobility. According to the results, the average total migration time (TMT) was 16.6 s and the average firewall downtime (DT) was as low as 0.47 s, which means that the VM-level VF protects the VM during the entire migration span except while the VM itself is down (4% of the migration time). Conclusions. 
The research concludes that VM-level firewalling using an open-source VF as an additional security layer for VM migrations in CC is feasible to employ and enhances the migrating machine's security by providing a hardened firewall service during the migration process, thus reducing the potential attack window. The VMM-level VF provides security in the pre- and post-migration phases. Using a VM-level VF as a complementary measure to the VMM-level VF enables additional protection for the VM migration process, thereby reducing the chances for an attacker to attack the VM during transition. / Email: mahwish.anwar@gmail.com Twitter: Mah__Wish ORCID ID: 0000-0001-7486-5216
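The downtime figures above come from a continuous ICMP echo stream sent to the VM while it migrates. A hedged sketch of the post-processing step — estimating firewall downtime (DT) from gaps in the reply timestamps and relating it to the total migration time (TMT) — follows; the trace, probe interval and gap threshold are illustrative assumptions, not the thesis' measurements.

```python
def firewall_downtime(reply_times, probe_interval=0.1):
    """Given the send timestamps (seconds) of ICMP probes that were answered,
    estimate downtime as the sum of reply gaps longer than one probe interval."""
    gaps = [b - a for a, b in zip(reply_times, reply_times[1:])]
    return sum(g - probe_interval for g in gaps if g > 1.5 * probe_interval)

# Hypothetical trace: replies every 100 ms, with a ~0.5 s gap while the VM is down.
trace = [i * 0.1 for i in range(80)] + [8.5 + i * 0.1 for i in range(81)]
tmt = trace[-1] - trace[0]                 # total migration time covered by the trace
dt = firewall_downtime(trace)
print(f"TMT {tmt:.1f} s, firewall DT {dt:.2f} s "
      f"({100 * (1 - dt / tmt):.0f}% of migration time protected)")
```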
25

CONSIDERATIONS ON PORTING PERL TO THE JAVA VIRTUAL MACHINE

KUHN, BRADLEY M. 11 October 2001 (has links)
No description available.
26

ByteSTM: Java Software Transactional Memory at the Virtual Machine Level

Mahmoud Mohamedin, Mohamed Ahmed 21 March 2012 (has links)
As chip vendors increasingly manufacture a new generation of multi-processor chips called multicores, improving software performance requires exposing greater concurrency in software. Since the need for synchronization is often what forces code to run sequentially, the synchronization abstraction has a significant effect on program performance. Lock-based synchronization — the most widely used synchronization method — suffers from programmability, scalability, and composability challenges. Transactional memory (TM) is an emerging synchronization abstraction that promises to alleviate the difficulties of lock-based synchronization. With TM, code that reads/writes shared memory objects is organized into transactions, which execute speculatively. When two transactions conflict (e.g., read/write, write/write), one of them is aborted while the other commits, yielding (the illusion of) atomicity. Aborted transactions are restarted after rolling back the changes made to objects. In addition to a simple programming model, TM provides performance comparable to lock-based synchronization. Software transactional memory (STM) implements TM entirely in software, without any special hardware support, and is usually implemented as a library, or supported by a compiler or by a virtual machine. In this thesis, we present ByteSTM, a virtual machine-level Java STM implementation. ByteSTM implements two STM algorithms, TL2 and RingSTM, and transparently supports implicit transactions. Program bytecode is automatically modified to support transactions: memory load/store bytecode instructions automatically switch to transactional mode when a transaction starts, and switch back to normal mode when the transaction successfully commits. Being implemented at the VM level, it accesses memory directly and uses absolute memory addresses to handle memory uniformly. Moreover, it avoids Java garbage collection (which has a negative impact on STM performance) by manually allocating and recycling memory for transactional metadata. ByteSTM uses field-based granularity, and uses the thread header to store transactional metadata instead of the slower Java ThreadLocal abstraction. We conducted experimental studies comparing ByteSTM with other state-of-the-art Java STMs, including Deuce, ObjectFabric, Multiverse, DSTM2, and JVSTM, on a set of micro-benchmarks and macro-benchmarks. Our results reveal that ByteSTM's transactional throughput improvement over competitors ranges from 20% to 75% on micro-benchmarks and from 36% to 100% on macro-benchmarks. / Master of Science
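ByteSTM itself operates on JVM bytecode in Java, but the transactional pattern the abstract describes — speculative execution, conflict detection, abort and retry, then commit — can be illustrated with a deliberately simplified sketch. The following Python toy uses a single commit lock and per-key versions; it is far cruder than TL2 or RingSTM and is only meant to show the retry loop and read-set validation.

```python
import threading

class ToySTM:
    """Toy key-based STM: optimistic reads record versions, and commit revalidates
    them under a single lock before publishing writes and bumping versions."""

    def __init__(self):
        self.values, self.versions = {}, {}
        self.lock = threading.Lock()

    def atomic(self, fn):
        while True:                         # aborted transactions are restarted
            read_set, write_set = {}, {}

            def read(key):
                if key in write_set:        # read-your-own-writes inside the transaction
                    return write_set[key]
                read_set[key] = self.versions.get(key, 0)
                return self.values.get(key, 0)

            def write(key, value):
                write_set[key] = value      # buffered until commit

            result = fn(read, write)
            with self.lock:                 # commit: validate reads, then publish writes
                if any(self.versions.get(k, 0) != v for k, v in read_set.items()):
                    continue                # conflict detected -> abort and retry
                for k, v in write_set.items():
                    self.values[k] = v
                    self.versions[k] = self.versions.get(k, 0) + 1
                return result

stm = ToySTM()
stm.atomic(lambda r, w: (w("a", 100), w("b", 0)))      # initialize two accounts

def transfer(read, write):
    write("a", read("a") - 10)
    write("b", read("b") + 10)

stm.atomic(transfer)
print(stm.values)   # {'a': 90, 'b': 10}
```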
27

Analyses et préconisations pour les centres de données virtualisés / Analysis and recommendations for virtualized datacenters

Dumont, Frédéric 21 September 2016 (has links)
Cette thèse présente deux contributions. La première contribution consiste en l’étude des métriques de performance permettant de superviser l’activité des serveurs physiques et des machines virtuelles s’exécutant sur les hyperviseurs VMware et KVM. Cette étude met en avant les compteurs clés et propose des analyses avancées dans l’objectif de détecter ou prévenir des anomalies liées aux quatre ressources principales d’un centre de données : le processeur, la mémoire, le disque et le réseau. La seconde contribution porte sur un outil pour la détection de machines virtuelles à comportements pré-déterminés et/ou atypiques. La détection de ces machines virtuelles a plusieurs objectifs. Le premier, permettre d’optimiser l’utilisation des ressources matérielles en libérant des ressources par la suppression de machines virtuelles inutiles ou en les redimensionnant. Le second, optimiser le fonctionnement de l’infrastructure en détectant les machines sous-dimensionnées, surchargées ou ayant une activité différente des autres machines virtuelles de l’infrastructure. / This thesis presents two contributions. The first contribution is the study of key performance metrics for monitoring the activity of physical and virtual machines running on the VMware and KVM hypervisors. This study highlights the key counters and provides advanced analyses with the aim of detecting or preventing anomalies related to the four main resources of a datacenter: CPU, memory, disk and network. The second contribution relates to a tool for detecting virtual machines with pre-determined and/or atypical behaviors. The detection of these virtual machines has several objectives. The first is to optimize the use of hardware resources, either by removing unnecessary virtual machines or by resizing oversized ones. The second is to optimize the operation of the infrastructure by detecting undersized or overloaded virtual machines and those whose activity differs from that of the other virtual machines in the infrastructure.
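The second contribution screens for VMs whose behavior deviates from the rest of the infrastructure (nearly idle, overloaded, or simply atypical). A hedged illustration of one such screening pass is sketched below; the metric values and thresholds are invented for the example, and the thesis' actual analyses are considerably richer.

```python
import statistics

# Hypothetical mean CPU usage (%) per VM over an observation window.
cpu_usage = {"vm-web1": 22.0, "vm-web2": 25.5, "vm-db1": 31.0,
             "vm-idle": 0.4, "vm-batch": 97.8, "vm-web3": 24.1}

mean = statistics.mean(cpu_usage.values())
stdev = statistics.stdev(cpu_usage.values())

for name, usage in cpu_usage.items():
    z = (usage - mean) / stdev
    if usage < 2.0:
        # Almost no activity: candidate for removal or downsizing (frees resources).
        print(f"{name}: cpu={usage:.1f}%  -> nearly idle, candidate for reclaiming")
    elif z > 1.5:
        # Far above the other VMs: possibly undersized/overloaded, worth investigating.
        print(f"{name}: cpu={usage:.1f}% (z={z:+.2f}) -> atypical, possibly overloaded")
```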
28

Trusted Execution Environment deployment through cloud Virtualization : A project on scalable deployment of virtual machines / Implementering av Trusted Execution Environment genom Cloud Virtualization : Ett projekt om skalbar distribution av virtuella maskiner

Staboli, Luca January 2022 (has links)
In the context of cloud computing, Trusted Execution Environments (TEEs) are isolated areas of application software that can be executed with better security, building a trusted and secure environment that is detached from the rest of memory. The Trusted Execution Environment is a technology that became available only in the last few years, and it is not widespread yet. This thesis investigates the most popular approaches to building a TEE, namely the process-based and the virtualization-based approach, and abstracts them as much as possible in order to design a common infrastructure that can deploy TEEs on an external cloud provider, no matter which technology approach is used. The thesis is relevant and novel because the project gives the possibility to use different technologies for the deployment, such as Intel SGX and AMD SEV, which are the two main solutions, without being reliant on any particular one. If new technologies or vendor solutions become popular in the future, they can simply be added to the list of options. The same can be said for the choice of cloud provider. The results show that it is possible to abstract the common features of different TEE technologies and to use a single Application Programming Interface (API) to deploy them. We also ran a performance and quality evaluation, and the results show that the API is performant and meets common quality standards. This tool is useful for the problem owner and for future work on the topic of cloud security. / I samband med cloud computing är Trusted Execution Environments (TEE) isolerade områden av applikationsprogramvara som kan köras med bättre säkerhet, bygga en pålitlig och säker miljö som är frikopplad från resten av minnet. Trusted Execution Environment är en teknik som blivit tillgänglig först under de senaste åren, och den är inte utbredd ännu. Denna avhandling undersöker de mest populära metoderna för att bygga en TEE, nämligen den processbaserade och den virtualiseringsbaserade, och kommer att abstrahera dem så mycket som möjligt för att designa en gemensam infrastruktur som kan distribuera TEEs på en extern molnleverantör, oavsett vilken teknik tillvägagångssätt används. Avhandlingen är relevant och ny eftersom projektet kommer att ge möjligheten att använda olika teknologier för implementeringen, såsom Intel SGX och AMD SEV, som är de två huvudlösningarna, men utan att vara beroende av någon speciell. Om i framtiden nya teknologier eller leverantörers lösningar kommer att bli populära kan de helt enkelt läggas till i listan över alternativ. Detsamma kan sägas om valet av molnleverantör. Resultaten visar att det är möjligt att abstrahera de gemensamma egenskaperna hos olika TEE:s teknologier och att använda ett unikt Application Programming Interface (API) för att distribuera olika TEE:s teknologier. Vi kommer också att göra en prestanda- och kvalitetsutvärdering, och resultaten visar att API:et är prestanda och respekterar den gemensamma standardkvaliteten. Det här verktyget är användbart för problemägaren och framtida arbeten på ämnet molnsäkerhet.
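The abstraction the results describe — one deployment API with interchangeable backends per TEE technology and per cloud provider — can be sketched as follows. The class and method names are invented for illustration and are not the thesis' actual API; real backends would call the provider's SDK where the comments indicate.

```python
from abc import ABC, abstractmethod

class TeeBackend(ABC):
    """One subclass per TEE technology; the caller never depends on a specific vendor."""

    @abstractmethod
    def deploy(self, image: str, cpus: int, memory_gb: int) -> str:
        """Provision a confidential VM/enclave and return its identifier."""

class SevBackend(TeeBackend):
    def deploy(self, image, cpus, memory_gb):
        # Here one would call the cloud provider's API requesting an AMD SEV-enabled VM.
        return f"sev-vm:{image}:{cpus}c{memory_gb}g"

class SgxBackend(TeeBackend):
    def deploy(self, image, cpus, memory_gb):
        # Here one would request an SGX-capable host and load the enclave image.
        return f"sgx-enclave:{image}:{cpus}c{memory_gb}g"

BACKENDS = {"sev": SevBackend(), "sgx": SgxBackend()}

def deploy_tee(technology: str, image: str, cpus: int = 2, memory_gb: int = 4) -> str:
    """The single entry point: new technologies are added by registering a backend."""
    return BACKENDS[technology].deploy(image, cpus, memory_gb)

print(deploy_tee("sev", "ubuntu-22.04-confidential"))
print(deploy_tee("sgx", "enclave-app-v1"))
```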
29

Performance Analysis of a Light Weight Packet Scanner

Gandhi, Paras 05 December 2008 (has links)
The growth of networks around the world has also given rise to threats like viruses and Trojans. This rise in threats has resulted in countermeasures in the form of applications called firewalls or intrusion detection systems (IDS). Incorporating these applications into the network introduces some delay into communications. The aim of the experiment in this thesis is to measure the delay introduced by such a firewall in the best case and compare it with communication on a network without such an application. These experiments are done using a miniature computer called the net4801, with an embedded operating system and the packet scanning application (firewall or IDS) executing on it.
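The experiment reduces to comparing latency with and without the scanner in the packet path. Purely to illustrate that comparison, the following snippet computes the added delay from two sets of hypothetical round-trip samples (the numbers are not the thesis' results).

```python
import statistics

baseline_ms = [0.42, 0.44, 0.41, 0.43, 0.45, 0.42, 0.44]   # no packet scanner in path
scanner_ms  = [0.55, 0.58, 0.54, 0.57, 0.60, 0.56, 0.55]   # net4801 scanner in path

added = statistics.median(scanner_ms) - statistics.median(baseline_ms)
overhead = 100 * added / statistics.median(baseline_ms)
print(f"median added delay: {added:.2f} ms ({overhead:.0f}% overhead)")
```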
30

Un langage dédié à l'administration d'infrastructures virtualisées / A domain specific language for virtualized infrastructures

Pottier, Rémy 19 September 2012 (has links)
Avec l’émergence de l’informatique dans les nuages, la capacité d’hébergement des centres de données ne cesse d’augmenter afin de répondre à une demande de plus en plus forte. La gestion, appelée l’administration, d’un centre de données entraîne des opérations fréquentes sur des machines virtuelles (VM) ainsi que sur des serveurs. De plus, chaque VM hébergée possède des besoins spécifiques au regard de sa qualité de service, de ses ressources et de son placement qui doit être compatible avec les mécanismes de tolérance aux pannes et la configuration réseau. Les outils de « l’Infrastructure As A Service » tels que Open Nebula et Vmware vSphere simplifient la création et le déploiement de VM. Cependant, l’administration d’une infrastructure virtualisée repose encore sur des changements manuels décidés par les administrateurs. Cette approche n’est plus pertinente pour la gestion d’infrastructures virtualisées de milliers de VM. En effet, les administrateurs ne peuvent pas manipuler des ensembles importants de VM tout en assurant la compatibilité des reconfigurations exécutées avec les besoins des VM. De nouvelles approches d’administration d’infrastructures proposent l’automatisation de certaines tâches d’administration. L’outil décrit dans ce document utilise des langages dédiés pour répondre aux besoins d’administration d’infrastructures virtualisées de taille conséquente. Dans un premier temps, l’outil propose aux administrateurs des opérations d’introspection pour observer l’organisation des ressources déployées sur l’infrastructure et les reconfigurations habituelles comme le démarrage, l’arrêt et le redémarrage de VM et de serveurs. Dans un second temps, les administrateurs définissent le placement des VM à partir de règles de placement. À partir de ces règles, l’outil d’administration vérifie chaque reconfiguration et chaque ajout de règles exécutés par l’administrateur. Si une reconfiguration ou une règle est invalide, l’outil détecte un conflit et avertit l’administrateur de l’échec de l’opération. L’outil d’administration, à l’aide d’algorithmes d’ordonnancement, peut calculer un plan de reconfigurations résolvant les conflits. Ces algorithmes peuvent aussi être utilisés pour mettre en place des politiques d’ordonnancement comme la consolidation ou l’équilibrage de charge. / With the emergence of cloud computing, the hosting capacity of data centers has been continuously growing to support ever-increasing client demand. Managing a data center implies frequently manipulating both virtual machines (VMs) and servers. Each hosted VM has specific expectations regarding its quality of service, its resource requirements and its placement, which must be compatible with fault tolerance mechanisms and the networking configuration. Infrastructure As A Service solutions such as Open Nebula and VMware vSphere greatly simplify the creation and deployment of VMs, but virtualized infrastructure management still relies on manual changes to the environment. This approach is no longer suitable for an infrastructure composed of thousands of VMs. Indeed, a system administrator cannot manipulate a large set of VMs while ensuring that the reconfigurations remain compatible with the expected VM requirements. This situation has led to new approaches to infrastructure management that employ automation to replace the traditional manual approach. The tool described in this document manages VMs by means of Domain Specific Languages. 
On the one hand, the tool offers administrators introspection operations to monitor the resources deployed on the infrastructure, together with common reconfigurations including starting, halting and rebooting servers and VMs. On the other hand, administrators define the VM placement through placement rules. The system then checks, against the active rules, the validity of every reconfiguration and every rule added by administrators. If a reconfiguration or a rule is invalid, the tool detects the conflict and warns the administrators. To resolve a conflict, the system, by interacting with scheduling algorithms, computes a reconfiguration plan that satisfies all rules. The reconfiguration plan can also apply scheduling policies such as consolidation or load balancing while respecting the placement rules.
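To illustrate the rule-checking idea — administrators state placement rules, and the tool rejects reconfigurations that violate them — here is a small Python sketch. The rule names (spread, ban) and the data model are assumptions chosen for the example; the thesis defines its own domain-specific language rather than this API.

```python
# Each rule is a predicate over a proposed placement: {vm_name: host_name}.
def spread(*vms):
    """The listed VMs must not share a host (e.g., replicas, for fault tolerance)."""
    return lambda p: len({p[v] for v in vms}) == len(vms)

def ban(vm, host):
    """The VM must never be placed on the given host."""
    return lambda p: p[vm] != host

rules = [spread("web1", "web2"), ban("db1", "host-maintenance")]

def check(placement):
    """A reconfiguration is valid only if the resulting placement satisfies every rule."""
    return all(rule(placement) for rule in rules)

current = {"web1": "host1", "web2": "host2", "db1": "host3"}
proposed = {"web1": "host1", "web2": "host1", "db1": "host3"}   # migrate web2 onto host1

print(check(current))    # True  - the running placement satisfies every rule
print(check(proposed))   # False - conflict: web1 and web2 would share host1, so warn the admin
```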
