61

A Novel Cloud Broker-based Resource Elasticity Management and Pricing for Big Data Streaming Applications

Runsewe, Olubisi A. 28 May 2019 (has links)
The pervasive availability of streaming data from various sources is driving today's enterprises to acquire low-latency big data streaming applications (BDSAs) for extracting useful information. In parallel, recent advances in technology have made it easier to collect, process and store these data streams in the cloud. For most enterprises, gaining insights from big data is immensely important for maintaining competitive advantage. However, the majority of enterprises have difficulty managing the multitude of BDSAs and the complex issues cloud technologies present, giving rise to the incorporation of cloud service brokers (CSBs). Generally, the main objective of the CSB is to maintain the heterogeneous quality of service (QoS) of BDSAs while minimizing costs. To achieve this goal, the cloud, despite its many desirable features, presents CSBs with two major challenges: resource prediction and resource allocation. First, most stream processing systems allocate a fixed amount of resources at runtime, which can lead to under- or over-provisioning as BDSA demands vary over time. Obtaining an optimal trade-off between QoS violations and cost therefore requires an accurate demand prediction methodology to prevent waste, degradation or shutdown of processing. Second, coordinating resource allocation and pricing decisions for self-interested BDSAs to achieve fairness and efficiency can be complex, and this complexity is exacerbated by the recent introduction of containers. This dissertation addresses these cloud resource elasticity management issues for CSBs as follows. First, we make two contributions to the resource prediction challenge: we propose a novel layered multi-dimensional hidden Markov model (LMD-HMM) framework for managing time-bounded BDSAs and a layered multi-dimensional hidden semi-Markov model (LMD-HSMM) to address unbounded BDSAs. Second, we present a container resource allocation mechanism (CRAM) for optimal workload distribution to meet the real-time demands of competing containerized BDSAs; we formulate the problem as an n-player non-cooperative game among a set of heterogeneous containerized BDSAs. Finally, we incorporate a dynamic incentive-compatible pricing scheme that coordinates the decisions of self-interested BDSAs to maximize the CSB's surplus. Experimental results demonstrate the effectiveness of our approaches.
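To make the prediction idea concrete, the following is a minimal illustrative sketch, not the LMD-HMM framework from the dissertation, of how a single hidden Markov layer could forecast the next resource-demand state from an observed load sequence; the states, transition/emission matrices and observation encoding are hypothetical placeholders.

```python
import numpy as np

states = ["low", "medium", "high"]           # hidden demand levels (assumed)
A = np.array([[0.7, 0.2, 0.1],               # state-transition probabilities (assumed)
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
B = np.array([[0.8, 0.2],                    # emission probabilities; observation
              [0.4, 0.6],                    # 0 = "few events/s", 1 = "many events/s"
              [0.1, 0.9]])
pi = np.array([0.5, 0.3, 0.2])               # initial state distribution (assumed)

def predict_next_state(observations):
    """Forward algorithm over the observations, then a one-step-ahead prediction."""
    alpha = pi * B[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]
        alpha /= alpha.sum()                 # normalize to avoid numerical underflow
    next_dist = alpha @ A                    # predicted distribution over demand states
    return states[int(np.argmax(next_dist))], next_dist

state, dist = predict_next_state([0, 0, 1, 1, 1])
print(state, dist)                           # most likely next demand state and its distribution
```

A CSB could map the predicted state to a provisioning decision (scale out on "high", scale in on "low"); the real framework layers several such models over multiple resource dimensions.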
62

Modelo para o escoamento de aplicações científicas em ambientes de nuvens baseado em afinidade / Scheduling model for scientific applications in cloud environments based on affinity

Yokoyama, Daniel Massami Muniz 22 June 2015 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / Faced with the growing demand for computing power to run scientific applications, acquiring and maintaining a computing infrastructure becomes both a necessity and a burden for research institutions. Against this backdrop of technological competition and continual equipment purchases, the cloud computing paradigm applied to scientific computing emerges as a tool to support the advancement of scientific work. This work presents a private cloud platform focused on the creation and management of computational clusters for high-performance computing tasks, specifically highly parallelizable processes using MPI. In addition to describing the cloud-based cluster system, the work presents a virtual machine scheduling model based on the affinity of the applications running on the hosts. This allocation model aims to make better use of the resources available to the platform, increasing the throughput of executed tasks.
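As an illustration of affinity-based placement, and not the scheduling model proposed in the dissertation, the sketch below greedily assigns virtual machines to hosts so that applications that interfere with each other are kept apart; the affinity scores, application classes and host capacity are assumed values.

```python
# Hypothetical pairwise affinity between application classes:
# 1.0 = coexist well, 0.0 = strong interference on the same host.
AFFINITY = {
    ("cpu_bound", "cpu_bound"): 0.4,
    ("cpu_bound", "io_bound"):  0.9,
    ("io_bound",  "io_bound"):  0.2,
}

def affinity(a, b):
    return AFFINITY.get((a, b), AFFINITY.get((b, a), 0.5))

def place(vms, hosts, capacity):
    """vms: list of (name, app_class); hosts: list of host names."""
    placement = {h: [] for h in hosts}
    for name, app in vms:
        def score(h):
            if len(placement[h]) >= capacity:
                return -1.0                    # host is full
            if not placement[h]:
                return 1.0                     # empty host: no interference
            # worst-case interference with the VMs already placed on this host
            return min(affinity(app, other) for _, other in placement[h])
        best = max(hosts, key=score)
        placement[best].append((name, app))
    return placement

print(place([("vm1", "io_bound"), ("vm2", "io_bound"), ("vm3", "cpu_bound")],
            ["hostA", "hostB"], capacity=2))
```

With these assumed scores the two I/O-bound VMs end up on different hosts, which is the kind of interference-aware decision an affinity model aims to automate.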
63

Virtual power: um modelo de custo baseado no consumo de energia do processador por máquina virtual em nuvens IaaS / Virtual power: a cost model based on the processor energy consumption per virtual machine in IaaS clouds

Hinz, Mauro 29 September 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The outsourcing of computing services has gone through constant evolution in recent years due to the increasing demand for computing resources. Data centers are the main suppliers of computing services, and cloud computing provides a new paradigm for offering and consuming these resources. A substantial motivator for using cloud computing is its pricing model, which charges the customer only for the resources actually used, following a pay-as-you-use cost model. Among cloud services, Infrastructure-as-a-Service (IaaS) is the type most used by companies that want to outsource their computing infrastructure, and in most cases it is offered through virtual machines. This work revisits the cost models used by data centers and analyzes the cost of supplying virtual machines in IaaS clouds. The analysis shows that electricity represents a considerable portion of this cost, that much of the consumption comes from processor use by the virtual machines, and that this aspect is not considered in the cost models identified. This work therefore describes the Virtual Power model, a cost model based on the processor energy consumption of virtual machines in IaaS clouds. The model rests on assumptions about energy consumption versus processing load, among others, which are validated through experiments in a small data center test environment. The results show that the Virtual Power model prices the consumed resources more fairly than the models identified. Finally, a case study compares the cost charged to a client under Amazon's pricing model for the AWS EC2 service with the same service priced under the Virtual Power model.
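The following is a minimal sketch of a pay-per-use cost formula in the spirit of attributing processor energy to individual virtual machines; it is not the Virtual Power model itself, and all power figures, prices and the equal split of idle power are assumed values.

```python
P_IDLE_W = 80.0          # host idle power draw in watts (assumed)
P_MAX_W  = 200.0         # host power draw at 100% CPU in watts (assumed)
ENERGY_PRICE = 0.12      # assumed electricity price per kWh
FIXED_HOURLY = 0.02      # assumed non-energy cost per VM-hour (amortized hardware, cooling, staff)

def vm_cost(cpu_utilization, hours, vms_on_host=4):
    """Cost billed to one VM: fixed share plus its share of idle energy plus its dynamic energy."""
    idle_share_kw = (P_IDLE_W / vms_on_host) / 1000.0
    dynamic_kw = (P_MAX_W - P_IDLE_W) * cpu_utilization / 1000.0
    energy_kwh = (idle_share_kw + dynamic_kw) * hours
    return FIXED_HOURLY * hours + ENERGY_PRICE * energy_kwh

# A VM that averaged 60% CPU for 720 hours (roughly one month):
print(round(vm_cost(0.60, 720), 2))
```

Under a flat per-hour tariff this VM would pay the same as an idle one; tying the energy term to measured CPU load is what makes the charge track actual consumption.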
64

Middleware de comunicação entre objetos distribuídos para gerenciamento de computadores baseado em redes sem fio (WSE-OS) / Communication middleware between distributed objects for computer management based on wireless networks (WSE-OS)

Crepaldi, Luis Gustavo. January 2011 (has links)
Abstract: To simplify computer management, several administration systems built on physical connections adopt advanced techniques for software configuration management. Nevertheless, the strong coupling between hardware and software forces this management to be done per machine and penalizes the mobility and ubiquity of computing power. In this scenario, each computer becomes an individual entity to be managed, requiring manual configuration of its system image. Technologies that offer centralized management based on physical client-server connections, combining virtualization techniques with distributed file systems, suffer from degraded flexibility and ease of installation and maintenance. Other centralized management architectures that share data over physical connections and depend on the PXE protocol present the same problems. Given the limitations of centralized management models based on physical connections, the objective of this work is to develop a client-server communication middleware as a necessary component of an environment for centralized management over wireless networks. This environment, called WSE-OS (Wireless Sharing Environment - Operating Systems), is a model based on Virtual Desktop Infrastructure (VDI) that combines virtualization techniques and a secure remote access system to create a distributed architecture as the basis of a management system. WSE-OS is capable of replicating operating systems in a wireless communication environment, in addition to providing hardware abstraction to clients. WSE-OS can replace booting from a local hard disk with booting from a Single System Image (SSI) virtualized on the server via the communication middleware, increasing flexibility and allowing multiple operating systems... (Complete abstract: click electronic access below) / Advisor: Marcos Antônio Cavenaghi / Co-advisor: Roberta Spolon / Committee: João Paulo Papa / Committee: Regina Helena Carlucci Santana / Master's
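Purely as an illustration of the kind of transfer such a middleware would mediate, and not the WSE-OS protocol itself, the sketch below has a diskless client fetch fixed-size blocks of a shared system image from a central server over TCP; the block size, message framing and file name are hypothetical.

```python
import os
import socket
import struct
import threading
import time

BLOCK_SIZE = 4096
IMAGE_PATH = "system.img"        # hypothetical single system image held by the server

def serve(host="127.0.0.1", port=5050):
    """Answer block requests: 8-byte block number in, length-prefixed block data out."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn, open(IMAGE_PATH, "rb") as img:
        while True:
            header = conn.recv(8)
            if len(header) < 8:              # client closed the connection
                break
            block_no = struct.unpack("!Q", header)[0]
            img.seek(block_no * BLOCK_SIZE)
            data = img.read(BLOCK_SIZE)
            conn.sendall(struct.pack("!I", len(data)) + data)
    srv.close()

def fetch_block(block_no, host="127.0.0.1", port=5050):
    """Client side: request one block of the remote system image."""
    with socket.create_connection((host, port)) as c:
        c.sendall(struct.pack("!Q", block_no))
        length = struct.unpack("!I", c.recv(4))[0]
        buf = b""
        while len(buf) < length:
            buf += c.recv(length - len(buf))
        return buf

if __name__ == "__main__":
    with open(IMAGE_PATH, "wb") as f:        # dummy image so the sketch runs end to end
        f.write(os.urandom(4 * BLOCK_SIZE))
    threading.Thread(target=serve, daemon=True).start()
    time.sleep(0.2)                          # give the server time to start listening
    print(len(fetch_block(2)))               # expect BLOCK_SIZE bytes
```

A real middleware would add authentication, caching of fetched blocks on the client, and tolerance to the latency and loss typical of wireless links.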
65

Camada de gerenciamento para comunicação entre computadores baseada em redes sem fio (WSE-OS) / Management layer for communication between computers based on wireless networks (WSE-OS)

Digiere, Adriano Ricardo. January 2011 (has links)
Advisor: Roberta Spolon / Committee: João Paulo Papa / Committee: Regina Helena Carlucci Santana / Abstract: The largest cost of desktop ownership is not the hardware or software, but the time that administrators spend on support and maintenance of computing environments. In a network of computers, each machine becomes an individually managed entity, which generates continuous requests for configuration changes, such as installing software updates, connecting and configuring peripherals, creating e-mail profiles and applying patches. Moreover, there is the risk of data theft and hacking when users' computers are not protected. Added to this scenario, the constant evolution of computer systems and their processing potential demands ever new techniques for exploiting these resources. Solutions that facilitate the management of environments with large numbers of computers, so as to take maximum advantage of the computing power concentrated on servers, have become a real need, not only in large corporations but also in small and medium-sized enterprises and other organizations such as educational institutions. To address this need, and focusing on a tool suited to this growth scenario, this work presents a centralized management model named WSE-OS (Wireless Sharing Environment - Operating Systems), based on virtualization techniques and secure remote access combined with a remote file system in user space. This solution eliminates the need to install and configure applications machine by machine, and takes greater advantage of the computing power available on the servers. The main feature that distinguishes this model from current solutions is that it is specifically designed to operate on networks with low transmission rates, such as wireless networks. WSE-OS is able to replicate operating system images in a WLAN environment, which makes management more flexible and independent of physical connections, in addition to offering... (Complete abstract: click electronic access below) / Master's
66

Performance Specific I/O Scheduling Framework for Cloud Storage

Jain, Nitisha January 2015 (has links) (PDF)
Virtualization is one of the important enabling technologies for Cloud Computing which facilitates sharing of resources among the virtual machines. However, it incurs performance overheads due to contention for physical devices such as disk and network bandwidth. Various I/O applications having different latency requirements may be executing concurrently on different virtual machines provisioned on a single server in Cloud data-centers. It is pertinent that the performance SLAs of such applications are satisfied through intelligent scheduling and allocation of disk resources. The underlying disk scheduler at the server, being oblivious to the characteristics of these applications, is unable to distinguish between their requests; therefore, all the applications are provided best-effort service by default. This may lead to performance degradation for the latency-sensitive applications. In this work, we propose a novel disk scheduling framework PriDyn (Dynamic Priority) which provides differentiated services to various I/O applications co-located on a single host based on their latency attributes and desired performance. The framework employs a scheduling algorithm which dynamically computes latency estimates for all concurrent I/O applications for a given system state. Based on these, an appropriate priority assignment for the applications is determined, which is taken into consideration by the underlying disk scheduler at the host while scheduling the I/O applications on the physical disk. The proposed scheduling framework is able to successfully satisfy QoS requirements for the concurrent I/O applications within system constraints. This has been verified through extensive experimental analysis. In order to realize the benefits of the differentiated services provided by the PriDyn scheduler, a proper combination of I/O applications must be ensured for the servers through intelligent meta-scheduling techniques at the Cloud data-center level. For achieving this, in the second part of this work, we extended the PriDyn framework to design a proactive admission control and scheduling framework PCOS (Prescient Cloud I/O Scheduler). It aims to maximize the utilization of disk resources without adversely affecting the performance of the applications scheduled on the systems. By anticipating the performance of the systems running multiple I/O applications, PCOS prevents the scheduling of undesirable workloads on them in order to maintain the necessary balance between resource consolidation and application performance guarantees. The PCOS framework includes the PriDyn scheduler as an important component and utilizes the dynamic disk resource allocation capabilities of PriDyn for meeting its goals. Experimental validation performed on real-world I/O traces demonstrates that the proposed framework achieves appreciable enhancements in I/O performance through selection of optimal I/O workload combinations, indicating that this approach is a promising step towards enabling QoS guarantees for Cloud data-centers.
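As a rough illustration of latency-estimate-driven prioritization, and not the PriDyn algorithm itself, the sketch below estimates each application's completion time from its pending I/O and an assumed equal share of disk bandwidth, then ranks applications by deadline slack; the field names and workloads are hypothetical.

```python
def assign_priorities(apps, disk_bw_mbps):
    """apps: list of dicts with 'name', 'pending_mb' and 'deadline_s' (assumed fields)."""
    share = disk_bw_mbps / max(len(apps), 1)          # naive equal-share bandwidth estimate
    for app in apps:
        app["est_latency_s"] = app["pending_mb"] / share
        app["slack_s"] = app["deadline_s"] - app["est_latency_s"]
    # smaller slack means more urgent, so it gets the higher priority (0 = highest)
    for prio, app in enumerate(sorted(apps, key=lambda a: a["slack_s"])):
        app["priority"] = prio
    return apps

workloads = [
    {"name": "latency_sensitive_db", "pending_mb": 40,  "deadline_s": 2.0},
    {"name": "batch_backup",         "pending_mb": 400, "deadline_s": 120.0},
]
for app in assign_priorities(workloads, disk_bw_mbps=100):
    print(app["name"], "priority", app["priority"], "estimated latency", app["est_latency_s"], "s")
```

A real implementation would feed these priorities to the host's disk scheduler and recompute them as the system state changes.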
67

Benchmarking and Scheduling Strategies for Distributed Stream Processing

Shukla, Anshu January 2017 (has links) (PDF)
The velocity dimension of Big Data refers to the need to rapidly process data that arrives continuously as streams of messages or events. Distributed Stream Processing Systems (DSPS) refer to distributed programming and runtime platforms that allow users to define a composition of dataflow logic that is executed on distributed resources over streams of incoming messages. A DSPS uses commodity clusters and Cloud Virtual Machines (VMs) for its execution. In order to meet the required performance for these applications, the DSPS needs to schedule these dataflows efficiently over the resources. Despite their growing use, resource scheduling for DSPSs tends to be done in an ad hoc manner, favoring empirical and reactive approaches rather than a model-driven and analytical approach. Such empirical strategies may arrive at an approximate schedule for the dataflow that needs further tuning to meet the quality of service. We propose a model-based scheduling approach that makes use of performance profiles and benchmarks developed for tasks in the dataflow to plan both the resource allocation and the resource mapping that together form the schedule planning process. We propose the Model Based Allocation (MBA) and the Slot Aware Mapping (SAM) approaches that effectively utilize knowledge of the performance model of logic tasks to provide efficient and predictable scheduling behavior. We implemented and validated these algorithms using the popular open-source Apache Storm DSPS for several micro and application dataflows. The results show that our model-driven approach is able to reduce the amount of required resources (VMs) by 30%-50% relative to existing techniques. We also see that our strategies offer predictable behavior that ensures that the expected and actual rates supported and resources used match closely. This can enable deterministic schedule planning even under dynamic conditions. Besides this static scheduling, we also examine the ability to dynamically consolidate tasks onto fewer VMs when the load on the dataflow decreases or the VMs get fragmented. We propose reliable task migration models for Apache Storm dataflows that are able to rapidly move the task assignment in the cluster and resume the dataflow execution without any message loss.
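The sketch below illustrates the general idea of benchmark-driven allocation under stated assumptions; it is not the MBA or SAM algorithm. Each task's benchmarked peak throughput is used to size the number of task instances for a target input rate, and the instances are then packed onto VM slots; the peak rates and slot count are hypothetical.

```python
import math

PEAK_RATE = {"parse": 12000, "enrich": 4000, "sink": 9000}   # msgs/s per task instance (assumed benchmarks)
SLOTS_PER_VM = 4                                             # assumed worker slots per VM

def plan(dataflow_tasks, input_rate):
    """Return instance counts per task and the number of VMs needed for a given input rate."""
    instances = {t: math.ceil(input_rate / PEAK_RATE[t]) for t in dataflow_tasks}
    total_slots = sum(instances.values())                    # one slot per task instance (assumed)
    vms = math.ceil(total_slots / SLOTS_PER_VM)
    return instances, vms

instances, vms = plan(["parse", "enrich", "sink"], input_rate=20000)
print(instances, "-> VMs needed:", vms)
```

For 20,000 msgs/s this plan asks for 2 parse, 5 enrich and 3 sink instances on 3 VMs; a mapping step such as SAM would additionally decide which instances share a slot or a VM to limit interference.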
68

Camada de gerenciamento para comunicação entre computadores baseada em redes sem fio (WSE-OS) / Management layer for communication between computers based on wireless networks (WSE-OS)

Digiere, Adriano Ricardo [UNESP] 31 March 2011 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The largest cost of desktop ownership is not the hardware or software, but the time that administrators spend on support and maintenance of computing environments. In a network of computers, each machine becomes an individually managed entity, which generates continuous requests for configuration changes, such as installing software updates, connecting and configuring peripherals, creating e-mail profiles and applying patches. Moreover, there is the risk of data theft and hacking when users' computers are not protected. Added to this scenario, the constant evolution of computer systems and their processing potential demands ever new techniques for exploiting these resources. Solutions that facilitate the management of environments with large numbers of computers, so as to take maximum advantage of the computing power concentrated on servers, have become a real need, not only in large corporations but also in small and medium-sized enterprises and other organizations such as educational institutions. To address this need, and focusing on a tool suited to this growth scenario, this work presents a centralized management model named WSE-OS (Wireless Sharing Environment - Operating Systems), based on virtualization techniques and secure remote access combined with a remote file system in user space. This solution eliminates the need to install and configure applications machine by machine, and takes greater advantage of the computing power available on the servers. The main feature that distinguishes this model from current solutions is that it is specifically designed to operate on networks with low transmission rates, such as wireless networks. WSE-OS is able to replicate operating system images in a WLAN environment, which makes management more flexible and independent of physical connections, in addition to offering... (Complete abstract: click electronic access below)
69

Middleware de comunicação entre objetos distribuídos para gerenciamento de computadores baseado em redes sem fio (WSE-OS) / Communication middleware between distributed objects for computer management based on wireless networks (WSE-OS)

Crepaldi, Luis Gustavo [UNESP] 31 March 2011 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Universidade Estadual Paulista (UNESP) / To simplify computer management, several administration systems built on physical connections adopt advanced techniques for software configuration management. Nevertheless, the strong coupling between hardware and software forces this management to be done per machine and penalizes the mobility and ubiquity of computing power. In this scenario, each computer becomes an individual entity to be managed, requiring manual configuration of its system image. Technologies that offer centralized management based on physical client-server connections, combining virtualization techniques with distributed file systems, suffer from degraded flexibility and ease of installation and maintenance. Other centralized management architectures that share data over physical connections and depend on the PXE protocol present the same problems. Given the limitations of centralized management models based on physical connections, the objective of this work is to develop a client-server communication middleware as a necessary component of an environment for centralized management over wireless networks. This environment, called WSE-OS (Wireless Sharing Environment - Operating Systems), is a model based on Virtual Desktop Infrastructure (VDI) that combines virtualization techniques and a secure remote access system to create a distributed architecture as the basis of a management system. WSE-OS is capable of replicating operating systems in a wireless communication environment, in addition to providing hardware abstraction to clients. WSE-OS can replace booting from a local hard disk with booting from a Single System Image (SSI) virtualized on the server via the communication middleware, increasing flexibility and allowing multiple operating systems... (Complete abstract: click electronic access below)
70

A Case for Protecting Huge Pages from the Kernel

Patel, Naman January 2016 (has links) (PDF)
Modern architectures support multiple page sizes to facilitate applications that use large chunks of contiguous memory, whether for buffer allocation, application-specific memory management, in-memory caching or garbage collection. Most general-purpose processors support larger page sizes: for example, the x86 architecture supports 2MB and 1GB pages, while the PowerPC architecture supports 64KB, 16MB and 16GB pages. Such larger pages are also known as superpages or huge pages. With huge pages, TLB reach can be increased significantly, and the Linux kernel can use them transparently to bring down the cost of TLB translations. With Transparent Huge Page (THP) support in the Linux kernel, end users and application developers need not make any change to their applications. Memory fragmentation, one of the classical problems in computing systems for decades, is a key obstacle to the allocation of huge pages, and ubiquitous huge page support across architectures makes effective fragmentation management even more critical for modern systems. In the absence of huge pages, applications tend to stress the system TLB for virtual-to-physical address translation, which adversely affects performance and energy characteristics in long-running systems. Since most kernel pages tend to be unmovable, fragmentation created by their misplacement is more problematic and nearly impossible to recover through memory compaction. In this work, we explore the physical memory manager of Linux and the interaction of kernel page placement with fragmentation avoidance and recovery mechanisms. Our analysis reveals that a random kernel page layout not only thwarts the progress of memory compaction; it can actually induce more fragmentation in the system. To address this problem, we propose a new allocator which takes special care with the placement of kernel pages. We propose a new region type that represents a memory area holding kernel as well as user pages, and using this region we introduce a staged allocator which adapts and optimizes kernel page placement as the fragmentation level changes. We then introduce Illuminator, which, with zero overhead, outperforms the default kernel in terms of huge page allocation success rate and per-huge-page compaction overhead. We also show that huge page allocation is not a one-dimensional problem but a twofold concern: the fragmentation recovery mechanism may interfere with the allocator's page clustering policy and worsen fragmentation. Our results show that with effective kernel page placement the count of mixed page blocks is reduced by up to 70%, which allows our system to allocate 3x-4x more huge pages than the default kernel. Using these additional huge pages, we show up to 38% improvement in energy consumption and up to 39% reduction in execution time on standard benchmarks.
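As a small, Linux-only illustration of how fragmentation constrains huge page allocation, the sketch below reads /proc/buddyinfo and counts how many 2 MB huge pages could still be carved out of the free lists (free blocks of order 9 or higher, with 4 KB base pages on x86-64). It only inspects fragmentation and does not implement the allocator changes described above.

```python
HUGE_ORDER = 9   # 2 MB huge page = 2^9 contiguous 4 KB base pages on x86-64

def free_huge_page_capacity(path="/proc/buddyinfo"):
    """Count how many 2 MB-sized blocks the buddy allocator could hand out right now."""
    capacity = 0
    with open(path) as f:
        for line in f:
            # line format: "Node 0, zone   Normal   c0 c1 c2 ... c10"
            parts = line.split()
            counts = [int(c) for c in parts[4:]]
            for order, count in enumerate(counts):
                if order >= HUGE_ORDER:
                    capacity += count * (1 << (order - HUGE_ORDER))
    return capacity

if __name__ == "__main__":
    print("2 MB-sized free blocks available:", free_huge_page_capacity())
```

On a long-running, fragmented system this number can drop to near zero even when plenty of memory is free overall, which is exactly the situation the kernel page placement and Illuminator work targets.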
