71

Camada de gerenciamento para comunicação entre computadores baseada em redes sem fio (WSE-OS) / Management layer for communication between computers based on wireless networks (WSE-OS)

Digiere, Adriano Ricardo. January 2011 (has links)
Advisor: Roberta Spolon / Committee: João Paulo Papa / Committee: Regina Helena Carlucci Santana / Abstract: The largest cost of desktop ownership is not the hardware or software, but the time administrators spend on support and maintenance of computing environments. In a network of computers, each machine becomes an individually managed entity, generating continuous requests for configuration changes, such as installing software updates, connecting and configuring peripherals, creating e-mail profiles and applying patches. Moreover, there is the risk of data theft and intrusion by hackers when users' computers are not protected. Added to this scenario, the constant evolution of computer systems and their processing potential demands new techniques for exploiting these resources. Solutions that ease the management of environments with large numbers of computers, while taking maximum advantage of the computing power concentrated on servers, have become real needs, not only in large corporations but also in small and medium enterprises and other kinds of organizations, such as educational institutions. Facing this need, and targeting a tool suited to this growth scenario, this work presents a centralized management model named WSE-OS (Wireless Sharing Environment - Operating Systems), based on virtualization and secure remote access techniques combined with a remote file system in user space. This solution eliminates the need to install and configure applications "machine by machine", besides taking greater advantage of the computing power available on the servers. The main feature distinguishing this model from current solutions is that it is specifically designed to operate on networks with low transmission rates, such as wireless networks. WSE-OS is able to replicate operating system images in a WLAN communication environment, which makes management more flexible and independent of physical connections, besides offering... (Complete abstract: click electronic access below) / Mestre (Master's)
72

Performance Specific I/O Scheduling Framework for Cloud Storage

Jain, Nitisha January 2015 (has links) (PDF)
Virtualization is one of the key enabling technologies for Cloud Computing, facilitating the sharing of resources among virtual machines. However, it incurs performance overheads due to contention for physical devices such as disk and network bandwidth. I/O applications with different latency requirements may execute concurrently on different virtual machines provisioned on a single server in Cloud data-centers, and it is pertinent that the performance SLAs of such applications are satisfied through intelligent scheduling and allocation of disk resources. The underlying disk scheduler at the server, being oblivious to the characteristics of these applications, is unable to distinguish between their requests; therefore, all applications receive best-effort service by default, which may lead to performance degradation for latency-sensitive applications. In this work, we propose a novel disk scheduling framework, PriDyn (Dynamic Priority), which provides differentiated services to I/O applications co-located on a single host based on their latency attributes and desired performance. The framework employs a scheduling algorithm which dynamically computes latency estimates for all concurrent I/O applications for a given system state. Based on these, an appropriate priority assignment for the applications is determined, which the underlying disk scheduler at the host takes into consideration while scheduling the I/O applications on the physical disk. The proposed scheduling framework is able to satisfy QoS requirements for concurrent I/O applications within system constraints, as verified through extensive experimental analysis. In order to realize the benefits of the differentiated services provided by the PriDyn scheduler, a proper combination of I/O applications must be ensured for the servers through intelligent meta-scheduling at the Cloud data-center level. To achieve this, in the second part of this work we extend the PriDyn framework into a proactive admission control and scheduling framework, PCOS (Prescient Cloud I/O Scheduler). PCOS aims to maximize the utilization of disk resources without adversely affecting the performance of the applications scheduled on the systems. By anticipating the performance of systems running multiple I/O applications, PCOS prevents the scheduling of undesirable workloads on them in order to maintain the necessary balance between resource consolidation and application performance guarantees. The PCOS framework includes the PriDyn scheduler as a key component and utilizes its dynamic disk resource allocation capabilities to meet its goals. Experimental validation on real-world I/O traces demonstrates that the proposed framework achieves appreciable enhancements in I/O performance through the selection of optimal I/O workload combinations, indicating that this approach is a promising step towards enabling QoS guarantees for Cloud data-centers.
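To make the idea of latency-based priorities concrete, here is a minimal sketch that ranks applications by the slack between a crude equal-share latency estimate and their deadline. The names and numbers are hypothetical, and this is an illustration of the general idea, not the thesis's actual PriDyn algorithm:

```python
# Illustrative latency-based priority assignment (not PriDyn itself):
# the application with the least slack between its estimated completion
# latency and its SLA deadline gets the highest priority.

from dataclasses import dataclass

@dataclass
class IOApp:
    name: str
    pending_bytes: int      # outstanding I/O for this application
    deadline_ms: float      # latency target from its SLA

def assign_priorities(apps, disk_bw_bytes_per_ms):
    """Estimate each app's completion latency if the disk were shared
    equally, then rank apps by slack (deadline - estimate)."""
    share = disk_bw_bytes_per_ms / max(len(apps), 1)
    slack = {a.name: a.deadline_ms - a.pending_bytes / share for a in apps}
    ranked = sorted(apps, key=lambda a: slack[a.name])  # least slack first
    return {a.name: prio for prio, a in enumerate(ranked)}  # 0 = highest

apps = [IOApp("video", 8_000_000, 40.0), IOApp("backup", 64_000_000, 5000.0)]
print(assign_priorities(apps, disk_bw_bytes_per_ms=200_000))
# {'video': 0, 'backup': 1} -- the latency-sensitive stream wins
```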
73

Benchmarking and Scheduling Strategies for Distributed Stream Processing

Shukla, Anshu January 2017 (has links) (PDF)
The velocity dimension of Big Data refers to the need to rapidly process data that arrives continuously as streams of messages or events. Distributed Stream Processing Systems (DSPS) are distributed programming and runtime platforms that allow users to define a composition of dataflow logic that is executed on distributed resources over streams of incoming messages. A DSPS uses commodity clusters and Cloud Virtual Machines (VMs) for its execution. In order to meet the required performance for these applications, the DSPS needs to schedule these dataflows efficiently over the resources. Despite their growing use, resource scheduling for DSPSs tends to be done in an ad hoc manner, favoring empirical and reactive approaches rather than a model-driven and analytical approach. Such empirical strategies may arrive at an approximate schedule for the dataflow that needs further tuning to meet the quality of service. We propose a model-based scheduling approach that makes use of performance profiles and benchmarks developed for tasks in the dataflow to plan both the resource allocation and the resource mapping that together form the schedule planning process. We propose the Model Based Allocation (MBA) and the Slot Aware Mapping (SAM) approaches that effectively utilize knowledge of the performance model of logic tasks to provide efficient and predictable scheduling behavior. We implemented and validated these algorithms using the popular open-source Apache Storm DSPS for several micro and application dataflows. The results show that our model-driven approach is able to reduce the amount of required resources (VMs) by 30%-50% relative to existing techniques. We also see that our strategies offer a predictable behavior that ensures that the expected and actual rates supported and resources used match closely. This can enable deterministic schedule planning even under dynamic conditions. Besides this static scheduling, we also examine the ability to dynamically consolidate tasks onto fewer VMs when the load on the dataflow decreases or the VMs get fragmented. We propose reliable task migration models for Apache Storm dataflows that are able to rapidly move the task assignment in the cluster and resume the dataflow execution without any message loss.
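To illustrate what model-based allocation can look like, the following sketch derives per-task resource counts from benchmarked peak rates: each task gets ceil(incoming rate / peak rate per slot) slots. The task names, selectivities, and rates are invented, and the thesis's MBA model is richer than this:

```python
# Minimal model-based allocation sketch under stated assumptions: each
# dataflow task has a benchmarked peak input rate per resource slot.

import math

def model_based_allocation(tasks, input_rate_msgs_per_sec):
    """tasks: {name: (selectivity, peak_rate_per_slot)}, where selectivity
    scales the dataflow's input rate into this task's incoming rate."""
    allocation = {}
    for name, (selectivity, peak_rate) in tasks.items():
        task_rate = input_rate_msgs_per_sec * selectivity
        allocation[name] = math.ceil(task_rate / peak_rate)  # slots needed
    return allocation

# Hypothetical micro-dataflow: parse -> filter -> aggregate
tasks = {"parse": (1.0, 5000), "filter": (1.0, 8000), "aggregate": (0.4, 1500)}
print(model_based_allocation(tasks, input_rate_msgs_per_sec=20000))
# {'parse': 4, 'filter': 3, 'aggregate': 6}
```

Planning the allocation from profiled rates, rather than reacting to observed congestion, is what gives the predictable behavior the abstract describes.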
74

Camada de gerenciamento para comunicação entre computadores baseada em redes sem fio (WSE-OS) / Management layer for communication between computers based on wireless networks (WSE-OS)

Digiere, Adriano Ricardo [UNESP] 31 March 2011 (has links) (PDF)
Made available in DSpace on 2014-06-11 (GMT); previous issue date 2011-03-31 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Abstract: The largest cost of desktop ownership is not the hardware or software, but the time administrators spend on support and maintenance of computing environments. In a network of computers, each machine becomes an individually managed entity, generating continuous requests for configuration changes, such as installing software updates, connecting and configuring peripherals, creating e-mail profiles and applying patches. Moreover, there is the risk of data theft and intrusion by hackers when users' computers are not protected. Added to this scenario, the constant evolution of computer systems and their processing potential demands new techniques for exploiting these resources. Solutions that ease the management of environments with large numbers of computers, while taking maximum advantage of the computing power concentrated on servers, have become real needs, not only in large corporations but also in small and medium enterprises and other kinds of organizations, such as educational institutions. Facing this need, and targeting a tool suited to this growth scenario, this work presents a centralized management model named WSE-OS (Wireless Sharing Environment - Operating Systems), based on virtualization and secure remote access techniques combined with a remote file system in user space. This solution eliminates the need to install and configure applications "machine by machine", besides taking greater advantage of the computing power available on the servers. The main feature distinguishing this model from current solutions is that it is specifically designed to operate on networks with low transmission rates, such as wireless networks. WSE-OS is able to replicate operating system images in a WLAN communication environment, which makes management more flexible and independent of physical connections, besides offering... (Complete abstract: click electronic access below)
75

Middleware de comunicação entre objetos distribuídos para gerenciamento de computadores baseado em redes sem fio (WSE-OS) / Communication middleware for distributed objects for computer management based on wireless networks (WSE-OS)

Crepaldi, Luis Gustavo [UNESP] 31 March 2011 (has links) (PDF)
Made available in DSpace on 2014-06-11 (GMT); previous issue date 2011-03-31 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Universidade Estadual Paulista (UNESP) / Abstract: To simplify computer management, various administration systems structured around physical connections adopt advanced techniques for software configuration management. Nevertheless, the strong link between hardware and software individualizes that management, besides penalizing the mobility and ubiquity of computing power. In this scenario, each computer becomes an individual entity to be managed, requiring manual configuration of the system image. Technologies that offer centralized management based on client-server physical connections, combining virtualization techniques with distributed file systems, reflect a deterioration in the flexibility and ease of installation and maintenance of such a management system. Other architectures for centralized management that structure data sharing through physical connections and depend on the PXE protocol present the same dilemmas described above. Given the limitations of centralized management models based on physical connections, the objective of this work is the development of a client-server communication middleware as a necessary component of an environment for centralized management over wireless communication networks. This environment, called WSE-OS (Wireless Sharing Environment - Operating Systems), is a model based on Virtual Desktop Infrastructure (VDI) that combines virtualization techniques and a secure remote access system to create a distributed architecture as the basis of a management system. WSE-OS is capable of replicating operating systems in a wireless environment, in addition to providing hardware abstraction to clients. WSE-OS can replace the local hard disk boot with a boot from an SSI (Single System Image) virtualized on the server via the communication middleware, increasing flexibility and allowing multiple operating systems... (Complete abstract: click electronic access below)
76

A Case for Protecting Huge Pages from the Kernel

Patel, Naman January 2016 (has links) (PDF)
Modern architectures support multiple page sizes to facilitate applications that use large chunks of contiguous memory, whether for buffer allocation, application-specific memory management, in-memory caching or garbage collection. Most general-purpose processors support larger page sizes: e.g., the x86 architecture supports 2MB and 1GB pages, while the PowerPC architecture supports 64KB, 16MB and 16GB pages. Such larger pages are also known as superpages or huge pages. With the help of huge pages, TLB reach can be increased significantly, and the Linux kernel can use them transparently to bring down the cost of TLB translations. With Transparent Huge Page (THP) support in the Linux kernel, end users and application developers need not make any change to their applications. Memory fragmentation, one of the classical problems in computing systems for decades, is a key obstacle to the allocation of huge pages, and ubiquitous huge page support across architectures makes effective fragmentation management even more critical for modern systems. In the absence of huge pages, applications stress the system TLB for virtual-to-physical address translation, which adversely affects performance/energy characteristics in long-running systems. Since most kernel pages tend to be unmovable, fragmentation created by their misplacement is more problematic and nearly impossible to recover from with memory compaction. In this work, we explore the physical memory manager of Linux and the interaction of kernel page placement with fragmentation avoidance and recovery mechanisms. Our analysis reveals that a random kernel page layout not only thwarts the progress of memory compaction; it can actually induce more fragmentation in the system. To address this problem, we propose a new allocator that takes special care with the placement of kernel pages. We introduce a new region type representing memory areas that hold kernel as well as user pages, and using it we build a staged allocator that adapts and optimizes kernel page placement as the fragmentation level changes. We then introduce Illuminator, which with zero overhead outperforms the default kernel in huge page allocation success rate and in compaction overhead per huge page. We also show that huge page allocation is not a one-dimensional problem but a two-fold concern: the fragmentation recovery mechanism may interfere with the allocator's page clustering policy and worsen fragmentation. Our results show that with effective kernel page placement the mixed page block count drops by up to 70%, which allows our system to allocate 3x-4x more huge pages than the default kernel. Using these additional huge pages we show up to 38% improvement in energy consumed and up to 39% reduction in execution time on standard benchmarks.
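As a concrete illustration of the mechanism the abstract refers to, an application on Linux (Python 3.8+) can hint that an anonymous mapping should be backed by transparent huge pages. This shows only the standard THP request interface, not the thesis's allocator:

```python
# Request transparent huge pages for an anonymous mapping (Linux only;
# requires kernel THP support and Python 3.8+ for mmap.madvise).

import mmap

SIZE = 16 * 1024 * 1024  # 16 MiB, large enough to be backed by 2 MiB pages

buf = mmap.mmap(-1, SIZE)            # anonymous private mapping
buf.madvise(mmap.MADV_HUGEPAGE)      # hint: back this range with huge pages
buf[:8] = b"hugepage"                # touching the memory faults pages in
print(buf[:8])
buf.close()
```

Whether the kernel can honor the hint depends on how fragmented physical memory is, which is exactly the problem the thesis attacks.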
77

Bootstrapping a Private Cloud

Deepika Kaushal (9034865) 29 June 2020 (has links)
Cloud computing allows on-demand provisioning, configuration and assignment of computing resources with minimum cost and effort for users and administrators. Managing the physical infrastructure that underlies cloud computing services relies on the ability to provision and manage bare-metal computer hardware, hence the need for quick loading of operating systems onto bare-metal and virtual machines to service user demands. The focus of this study is a technique to load these machines remotely, which is complicated by the fact that the machines can be present in different Ethernet broadcast domains, physically distant from the provisioning server. Available bare-metal provisioning frameworks require significant skill and time to use, and there is no easily implementable standard method of booting across separate and different Ethernet broadcast domains. This study proposes a new framework to provision bare-metal hardware remotely, using layer 2 services in a secure manner; the framework is a composition of existing tools.
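As an illustration of the kind of existing tooling such a framework can compose, the sketch below is a minimal dnsmasq configuration serving PXE boot for a routed subnet. It assumes the subnet's router relays DHCP broadcasts to the provisioning server (e.g., via an ip helper-address); all addresses and paths are examples, not taken from the thesis:

```
# Minimal dnsmasq PXE configuration (illustrative).
# Leases handed out for the remote subnet; its router must relay DHCP here.
dhcp-range=192.168.10.100,192.168.10.200,12h
# Network boot program offered to PXE clients.
dhcp-boot=pxelinux.0
# Serve boot files from dnsmasq's built-in TFTP server.
enable-tftp
tftp-root=/srv/tftp
```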
78

Comparing Cloud Architectures in terms of Performance and Scalability

Jääskeläinen, Perttu January 2019 (has links)
Cloud Computing is becoming increasingly popular, with a large share of corporate revenue coming from the various cloud solutions offered to customers. When it comes to choosing a solution, multiple options exist for the same problem from many competitors. This report focuses on the ones offered by Microsoft in their Azure platform, and compares the architectures in terms of performance and scalability.

In order to determine the most suitable architecture, three services offered by Azure are considered: Cloud Services (CS), Service Fabric Mesh (SFM) and Virtual Machines (VM). By developing and deploying a REST Web API to each service and performing a load test, average response times in milliseconds are measured and compared. To determine scalability, the point at which each service starts timing out requests is identified. The services are tested both by scaling up, increasing the power of a single machine instance, and by scaling out, where possible, duplicating instances of machines running in parallel.

The results show that VMs fall considerably behind both CS and SFM in both performance and scalability for a regular use case. For low numbers of requests, all services perform about the same, but as soon as the requests increase, it is clear that both SFM and CS outperform VMs. In the end, CS comes out ahead in both scalability and performance.

Further research may be done into other platforms which offer the same service solutions, such as Amazon Web Services (AWS) and Google Cloud, or other architectures within Azure.
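To illustrate the measurement method, a minimal load-test sketch of this kind fires concurrent requests at a deployed endpoint and reports the mean response time in milliseconds. The URL, worker count, and request count are placeholders, not the thesis's actual test harness:

```python
# Fire N concurrent GETs at a REST endpoint and report mean latency (ms).

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example-api.azurewebsites.net/api/values"  # hypothetical

def timed_request(_):
    start = time.perf_counter()
    try:
        urlopen(URL, timeout=10).read()
        return (time.perf_counter() - start) * 1000.0  # elapsed ms
    except OSError:
        return None  # count as a timeout/failure

with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(timed_request, range(500)))

ok = [r for r in results if r is not None]
print(f"{len(ok)}/{len(results)} succeeded, "
      f"mean {sum(ok) / max(len(ok), 1):.1f} ms")
```

The point at which the failure count starts climbing as the request volume grows marks the scalability limit the report uses to compare the three services.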
79

Performance Analysis of Virtualisation in a Cloud Computing Platform. An application driven investigation into modelling and analysis of performance vs security trade-offs for virtualisation in OpenStack infrastructure as a service (IaaS) cloud computing platform architectures.

Maiyama, Kabiru M. January 2019 (has links)
Virtualisation is one of the underlying technologies that led to the success of cloud computing platforms (CCPs). The technology, along with other features such as multitenancy, allows delivery of computing resources in the form of services through efficient sharing of physical resources. As these resources are provided through virtualisation, a robust agreement covering both the quantity and the quality of service (QoS) is outlined in a service level agreement (SLA) document. QoS is one of the essential components of an SLA, and performance is one of its primary aspects. As the technology progressively matures and receives massive acceptance, researchers from industry and academia continue to carry out novel theoretical and practical studies of various essential aspects of CCPs with significant levels of success. This thesis starts with an assessment of the current level of knowledge in the literature on cloud computing in general and CCPs in particular. In this context, a substantive literature review was carried out focusing on performance modelling, testing, analysis and evaluation methodologies for Infrastructure as a Service (IaaS). To this end, a systematic mapping study (SMS) of the literature was conducted, which guided the choice and direction of this research. The SMS was followed by the development of a novel open queueing network model (QNM) at equilibrium for the performance modelling and analysis of an OpenStack IaaS CCP. The external arrival pattern was assumed to be Poisson, while the queueing stations provided exponentially distributed service times. Based on Jackson's theorem, the model was exactly decomposed into individual M/M/c (c ≥ 1) stations. Each of these queueing stations was analysed in isolation, and closed-form expressions for key performance metrics, such as mean response time, throughput, server (resource) utilisation and the bottleneck device, were determined. The research was then extended with a proposed open QNM with a bursty external arrival pattern represented by a Compound Poisson Process (CPP) with geometrically distributed batches, or equivalently, variable Generalised Exponential (GE) interarrival and service times. Each queueing station had c (c ≥ 1) GE-type servers. Based on a generic maximum entropy (ME) product form approximation, the proposed open GE-type QNM was decomposed into individual GE/GE/c queueing stations with GE-type interarrival and service times. The performance metrics and bottleneck analysis of the QNM were evaluated, providing vital insights for the capacity planning of existing CCP architectures as well as the design and development of new ones. The results also revealed that the burstiness of the interarrival and service time processes has a significant impact on performance, leading to worst-case performance bound scenarios. Finally, an investigation was carried out into modelling and analysis of performance and security trade-offs for a CCP architecture, based on a proposed generalised stochastic Petri net (GSPN) model with a security-detection control model (SDCM). In this context, 'optimal' combined performance and security metrics were defined with both M-type and GE-type arrival and service times, and the impact of security incidents on performance was assessed.
Typical numerical experiments on the GSPN model were conducted and implemented using the Möbius package, and 'optimal' trade-offs were determined between performance and security, which are crucial in the SLAs of cloud computing services. / Petroleum Technology Development Fund (PTDF) of the government of Nigeria / Usmanu Danfodiyo University, Sokoto
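For reference, the standard equilibrium results for an M/M/c station that such a Jackson decomposition draws on are, with arrival rate λ, per-server service rate μ, c servers, offered load a = λ/μ and utilisation ρ = a/c < 1:

```latex
% M/M/c equilibrium metrics (Erlang-C form):
\[
  \rho = \frac{\lambda}{c\mu}, \qquad
  P_{\text{wait}} = \frac{\dfrac{a^{c}}{c!}}
       {(1-\rho)\displaystyle\sum_{k=0}^{c-1}\frac{a^{k}}{k!}
        + \dfrac{a^{c}}{c!}},
\]
\[
  \text{mean response time } W
     = \frac{P_{\text{wait}}}{c\mu - \lambda} + \frac{1}{\mu},
  \qquad \text{throughput} = \lambda .
\]
```

The station with the highest ρ is the bottleneck device; the GE-type extension replaces these with ME product-form counterparts that account for burstiness.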
80

FairCPU: Uma Arquitetura para Provisionamento de Máquinas Virtuais Utilizando Características de Processamento / FairCPU: An Architecture for Provisioning Virtual Machines Using Processing Features

Paulo Antonio Leal Rego 02 March 2012 (has links)
Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico / Abstract: Resource scheduling is a key process for a cloud computing platform, which generally uses virtual machines (VMs) as scheduling units. The use of virtualization techniques provides great flexibility, with the ability to instantiate multiple VMs on one physical machine (PM), migrate them between PMs and dynamically scale a VM's resources. Techniques for consolidation and dynamic allocation of VMs have treated the performance impact of a VM as independent of its location: it is generally accepted that the performance of a VM will be the same regardless of which PM it is allocated to. This assumption is reasonable for a homogeneous environment, where the PMs are identical and the VMs are running the same operating system and applications. Nevertheless, in a cloud computing environment, a set of heterogeneous resources is expected to be shared, where PMs vary both in resource capacity and in data affinities. The main objective of this work is to propose an architecture that standardizes the representation of processing power in terms of processing units (PUs), relying on CPU usage limits to provide performance isolation and keep a VM's processing power at the same level regardless of the underlying PM. The proposed solution considers the heterogeneity of the PMs present in the cloud infrastructure and provides scheduling policies based on PUs.
The proposed architecture, called FairCPU, was implemented to work with the KVM and Xen hypervisors. As a case study, it was incorporated into a private cloud built with the OpenNebula middleware, where several experiments were conducted. The results prove the efficiency of the FairCPU architecture in using PUs to reduce VMs' performance variability, as well as in providing a new way to represent and manage the processing power of the infrastructure's physical and virtual machines.
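The CPU-capping mechanism such an architecture relies on can be approximated on a modern Linux host with cgroup v2 quotas; below is a minimal sketch with hypothetical paths and PIDs (the thesis itself worked through KVM/Xen-specific controls, not this interface):

```python
# Sketch: cap a process group (e.g., a VM's hypervisor process) to a fixed
# CPU share using cgroup v2. Writing "50000 100000" to cpu.max allows 50 ms
# of CPU time per 100 ms period, i.e., half of one core, independent of the
# host's total capacity. Requires root and a cgroup v2 hierarchy.

from pathlib import Path

CGROUP = Path("/sys/fs/cgroup/vm-capped")   # hypothetical cgroup name

def cap_cpu(pid: int, quota_us: int, period_us: int = 100_000) -> None:
    CGROUP.mkdir(exist_ok=True)
    (CGROUP / "cpu.max").write_text(f"{quota_us} {period_us}\n")
    (CGROUP / "cgroup.procs").write_text(str(pid))  # move the process in

# cap_cpu(pid=12345, quota_us=50_000)  # ~0.5 core, a fixed "processing unit"
```

Pinning each VM to a fixed quota like this is what keeps its delivered processing power constant across heterogeneous hosts, which is the isolation property FairCPU's PUs formalize.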
