  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

FLEXLAB: Middleware de virtualização de hardware para gerenciamento centralizado de computadores em rede

Cruz, Daniel Igarashi [UNESP] 24 July 2008 (has links) (PDF)
Computer network management is a potentially complex task due to the heterogeneous hardware configurations of the managed machines. These networks may contain computers whose basic software layers differ because their hardware layers differ; in this scenario each computer becomes an individually managed entity, requiring manual configuration of its system image, or automated maintenance restricted to the application layer. Thin-client and terminal-services architectures do offer centralized management, but they impose performance penalties on clients and scale poorly as the number of users grows, since all application processing is hosted on, and consumes the processing power of, a single network node: the server. On the other hand, centralized-management architectures based on applications running in the software layer are ineffective at offering management through a single configuration image, owing to the tight coupling between the software and hardware layers. Understanding the drawbacks of these centralized computer-management solutions, the aim of this project is to develop FlexLab, a centralized computer-management architecture built around a Single System Image and a distributed virtualization middleware. Through the FlexLab virtualization middleware, the computers of a network environment can boot remotely from a Single System Image targeting the virtual machine hardware.
This Single System Image is hosted on a central network server, standardizing basic-software and application configurations even across heterogeneous hardware and thereby simplifying management, since all computers can be administered through a single image. The experiments have shown that... (complete abstract available through the electronic access link)
62

UM MODELO DE DETECÇÃO DE INTRUSÃO PARA AMBIENTES DE COMPUTAÇÃO EM NUVEM / A MODEL OF INTRUSION DETECTION FOR ENVIRONMENTS OF CLOUD COMPUTING

ARAÚJO, Josenilson Dias 28 June 2013 (has links)
The elasticity and abundant availability of computational resources make the cloud attractive to intruders, who exploit its vulnerabilities to launch attacks or to gain access to private and privileged data of cloud users. To protect the cloud and its users effectively, an IDS must be able to scale the number of sensors up and down quickly, in step with the resources provisioned, besides isolating access to the system and infrastructure levels. Protection against internal cloud threats must be planned explicitly, because most protection systems do not adequately identify threats internal to the system. To this end, the proposed solution exploits virtual machine features, such as fast start, stop and recovery, migration between hosts, and cross-platform execution, to build a VM-based IDS that monitors the internal environment of the cloud's virtual machines by inserting data-capture sensors into the local networks of the users' VMs, thereby detecting suspicious user behavior.
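The elastic scaling of sensors described above can be sketched in a few lines. The ratio of user VMs per sensor and the function names are illustrative assumptions, not part of the dissertation's design:

```python
import math

def sensors_required(user_vms, vms_per_sensor=8):
    """Number of capture sensors needed so that every group of user
    VMs is monitored; scales up and down with provisioning."""
    return max(1, math.ceil(user_vms / vms_per_sensor))

def rescale(current_sensors, user_vms, vms_per_sensor=8):
    """How many sensor VMs to start (positive) or stop (negative)
    so the IDS follows the current provisioning level."""
    return sensors_required(user_vms, vms_per_sensor) - current_sensors
```

A controller would call `rescale` once per provisioning cycle and start or stop that many sensor VMs, exploiting the fast start and stop of virtual machines the abstract mentions.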
63

Elasticity in IaaS Cloud, Preserving Performance SLAs

Dhingra, Mohit January 2014 (has links) (PDF)
Infrastructure-as-a-Service (IaaS), one of the service models of cloud computing, provides resources in the form of Virtual Machines (VMs). Many applications hosted on IaaS clouds have time-varying workloads and benefit from the on-demand provisioning characteristic of cloud platforms; such applications demand time-varying resources, which requires elastic resource provisioning in the IaaS so that their performance remains intact. In current IaaS clouds, VMs are static: their configurations do not change once they are instantiated. Fluctuations in resource demand are therefore handled in two ways: allocating more VMs to the application (horizontal scaling) or migrating the application to a VM with a different configuration (vertical scaling). This forces customers to characterize their workloads at a coarse-grained level, which leads to under-utilized VM resources or an under-performing application. Furthermore, the current IaaS architecture provides no performance guarantees to applications, for two major reasons: 1) the application's performance metrics are not used by the IaaS resource-allocation mechanisms, and 2) current resource-allocation mechanisms do not consider virtualization overheads, which can significantly impact the application's performance, especially for I/O workloads. In this work, we develop an Elastic Resource Framework for IaaS, which provides a flexible resource-provisioning mechanism while preserving the application performance specified by the Service Level Agreement (SLA). To identify workloads that need elastic resource allocation, variability is defined as a metric and associated with the definition of elasticity of a resource-allocation system.
We introduce new components, a Forecasting Engine based on a Cost Model and a Resource Manager, into the OpenNebula IaaS cloud; they compute an optimal resource requirement for the next scheduling cycle based on prediction. The scheduler takes this as input and enables fine-grained resource allocation by dynamically adjusting the size of the VM. Since the prediction may not always be entirely correct, forecast errors can cause under- or over-allocation of resources. The design of the cost model accounts both for over-allocation of resources and for SLA violations caused by under-allocation. Proper resource allocation also requires accounting for the virtualization overhead, which current monitoring frameworks do not capture; we modify existing monitoring frameworks to monitor this overhead and to provide fine-grained monitoring information in the Virtual Machine Monitor (VMM) as well as in the VMs. In our approach, the performance of the application is preserved by 1) binding the application-level performance SLA to resource allocation, and 2) accounting for virtualization overhead while allocating resources. The proposed framework is implemented using forecasting strategies such as the Seasonal Auto-Regressive Integrated Moving Average (Seasonal ARIMA) model and the Gaussian Process model, but it is generic enough to use any other forecasting strategy. It is applied to real workloads, namely web-server and mail-server workloads obtained from the Supercomputer Education and Research Centre, Indian Institute of Science. The results show that a significant reduction in resource requirements can be obtained while preserving application performance by restricting SLA violations. We further show that more intelligent scaling decisions can be taken using the monitoring information derived from the modified monitoring framework.
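The cost-model idea, trading the cost of over-allocation against the penalty of SLA violations when acting on an imperfect forecast, can be illustrated with a small sketch. The cost weights, candidate margins and function names below are invented for illustration; they are not the dissertation's actual formulation:

```python
def allocation_cost(allocated, demand, over_cost=1.0, sla_penalty=5.0):
    """Cost of one scheduling cycle: idle capacity is billed at
    over_cost per unit; unmet demand (an SLA violation) is billed
    at the larger sla_penalty per unit."""
    if allocated >= demand:
        return over_cost * (allocated - demand)
    return sla_penalty * (demand - allocated)

def expected_cost(allocated, forecast, past_errors, **weights):
    """Average cost of an allocation over the demands implied by the
    relative forecast errors observed in the past."""
    demands = [forecast * (1 + e) for e in past_errors]
    return sum(allocation_cost(allocated, d, **weights) for d in demands) / len(demands)

def choose_allocation(forecast, past_errors, margins=(0.0, 0.1, 0.2, 0.3)):
    """Pick, among a few safety margins on top of the forecast, the
    allocation with the lowest expected cost."""
    candidates = [forecast * (1 + m) for m in margins]
    return min(candidates, key=lambda a: expected_cost(a, forecast, past_errors))
```

Because under-allocation is penalized more heavily than over-allocation, the chosen allocation sits above the raw forecast by a margin that reflects the observed forecast errors.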
64

Design of IP Multimedia Subsystem for Educational Purposes

Rudholm, Mikael January 2015 (has links)
Internet Protocol multimedia subsystem (IMS) is an architecture for services such as voice over Internet Protocol (VoIP) in IP-based communication systems. IMS is standardized by the 3GPP standardization forum and was first released in 2002. Since then, IMS has not seen the wide adoption by operators that was first anticipated: as 3G already supported voice and video, operators could not justify the expense of IMS. The current emergence of the fourth-generation mobile communication system, Long Term Evolution (LTE), has however increased the need for knowledge of IMS and for creating services for it. LTE networks are IP-only networks that provide low latency; in order to use LTE for making phone calls, VoIP technologies are needed, and IMS is the architecture intended to be used for Voice over LTE (VoLTE). The need for tools for education within IMS was seen in 2006 by Enea Experts in Linköping, Sweden. The author of this thesis designed an IMS for educational purposes, but the project was never fully completed. This thesis reexamines the design decisions previously made by the author. The requirements stated by the customer remain: an IMS with basic signaling and logging should be easy to install, maintain, and evolve at a low cost. A literature study of IMS and VoLTE is presented to contribute knowledge in these areas. The previous design and implementation made by the author are presented and analyzed, and the third-party software the previous implementation was based on is reexamined. Existing open-source components are analyzed to identify how they can be used to solve the problem and what remains to be developed to fulfill the requirements. New design suggestions, presented in today's context, are proposed and verified using analytical reasoning and experiments. The outcome of the final work is a set of new, verified design decisions for the customer to use when implementing a new IMS for educational purposes.
The thesis should also provide useful insights that instructors and students can use to teach and learn more about IMS.
65

Improving Software Deployment and Maintenance : Case study: Container vs. Virtual Machine / Förbättring av utplacering och underhåll av mjukvara : Fallstudie: Containers vs. Virtuella maskiner

Falkman, Oscar, Thorén, Moa January 2018 (has links)
Setting up a software environment and ensuring that all dependencies and settings are identical across machines when deploying an application can nowadays be a time-consuming and frustrating experience. To solve this, the industry has come up with an alternative deployment environment called software containers, or simply containers, intended to eliminate the current troubles with virtual machines and create a more streamlined deployment experience. The aim of this study was to compare this deployment technique, containers, against the currently most popular method, virtual machines. This was done in a case study in which an already developed application was migrated to a container and deployed online using a cloud provider's services; the application was then deployed via the same cloud service directly onto a virtual machine, enabling a comparison of the two techniques. During these processes, information was gathered concerning the usability of the two environments, and an interview was conducted to gain a broader perspective on usability and reach more well-founded conclusions. The conclusion is that containers are more efficient in their use of resources, which could improve the service provided to customers through more reliable uptimes and greater speed of service. However, containers also grant more freedom and transfer most of the responsibility to the developers. This is not always a benefit in larger companies, where regulations must be followed, a certain level of control over development processes is necessary, and quality control is very important. Further research could examine whether containers can be adapted to another company's current environment, and how different cloud providers' services differ.
66

Improving Software Development Environment : Docker vs Virtual Machines

Erlandsson, Rickard, Hedrén, Eric January 2017 (has links)
The choice of development environment can be crucial when developing software, yet little research exists comparing development environments. Docker is a relatively new piece of software for setting up and handling container environments. In this research, the possibility of using Docker as a software development environment is investigated and compared against virtual machines as a development environment. The purpose of this research is to examine how the choice of development environment affects the development process. The work was qualitative, with both an inductive and a deductive approach, and included a case study with two phases: one in which virtual machines and one in which Docker were used to implement a development environment. Observations were made after each implementation, and the data from the two implementations were then compared and evaluated against each other. The results from the comparison and evaluation clearly show that the choice of development environment can influence the process of developing software; different environments affect the development process differently, for better and for worse. With Docker, it is possible to run more environments at once than with virtual machines. Docker also stores environments in a way that takes up less space on secondary storage than virtual machine environments do, because Docker uses a layered system for containers and their components. On the other hand, Docker provides no Graphical User Interface (GUI) for installing and managing applications inside a container, which can be a drawback since some developers may need a GUI to work. The lack of a GUI also makes it harder to get an Integrated Development Environment (IDE) to work properly with a container, for example to debug code.
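The layered storage system mentioned above can be illustrated with a toy calculation; the image compositions and sizes (in megabytes) are made-up assumptions:

```python
def shared_storage(images):
    """Space used when identical layers are stored once and shared,
    as in a copy-on-write layer system such as Docker's."""
    seen = {}
    for layers in images:
        for layer_id, size in layers:
            seen[layer_id] = size
    return sum(seen.values())

def unshared_storage(images):
    """Space used when every image keeps a full private copy of all
    its layers, as with independent virtual machine disk images."""
    return sum(size for layers in images for _, size in layers)

# Two hypothetical images sharing a base OS layer and a runtime layer.
images = [
    [("base-os", 500), ("runtime", 120), ("app-a", 30)],
    [("base-os", 500), ("runtime", 120), ("app-b", 45)],
]
```

Here the shared layout stores the common base and runtime layers once (695 MB total) instead of duplicating them per image (1315 MB), which is the space saving the study observed.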
67

A Novel Cloud Broker-based Resource Elasticity Management and Pricing for Big Data Streaming Applications

Runsewe, Olubisi A. 28 May 2019 (has links)
The pervasive availability of streaming data from various sources is driving today's enterprises to acquire low-latency big data streaming applications (BDSAs) for extracting useful information. In parallel, recent advances in technology have made it easier to collect, process and store these data streams in the cloud. For most enterprises, gaining insights from big data is immensely important for maintaining competitive advantage, yet the majority have difficulty managing the multitude of BDSAs and the complex issues cloud technologies present, giving rise to the incorporation of cloud service brokers (CSBs). Generally, the main objective of the CSB is to maintain the heterogeneous quality of service (QoS) of BDSAs while minimizing costs. In pursuing this goal, the cloud, despite its many desirable features, presents CSBs with two major challenges: resource prediction and resource allocation. First, most stream processing systems allocate a fixed amount of resources at runtime, which can lead to under- or over-provisioning as BDSA demands vary over time; obtaining an optimal trade-off between QoS violation and cost therefore requires an accurate demand-prediction methodology to prevent waste, degradation or shutdown of processing. Second, coordinating resource allocation and pricing decisions for self-interested BDSAs so as to achieve fairness and efficiency can be complex, and this complexity is exacerbated by the recent introduction of containers. This dissertation addresses these cloud resource elasticity management issues for CSBs as follows. First, we provide two contributions to the resource prediction challenge: we propose a novel layered multi-dimensional hidden Markov model (LMD-HMM) framework for managing time-bounded BDSAs and a layered multi-dimensional hidden semi-Markov model (LMD-HSMM) to address unbounded BDSAs.
Second, we present a container resource allocation mechanism (CRAM) for optimal workload distribution to meet the real-time demands of competing containerized BDSAs. We formulate the problem as an n-player non-cooperative game among a set of heterogeneous containerized BDSAs. Finally, we incorporate a dynamic incentive-compatible pricing scheme that coordinates the decisions of self-interested BDSAs to maximize the CSB’s surplus. Experimental results demonstrate the effectiveness of our approaches.
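As a loose illustration of the allocation problem described above, dividing limited capacity among competing containerized applications, here is a simple proportional-share sketch. It is not the dissertation's game-theoretic CRAM mechanism; the algorithm, names and numbers are stand-ins:

```python
def proportional_share(capacity, demands):
    """Split a fixed capacity among competing applications in proportion
    to their demands, capping each at its demand and redistributing any
    surplus to the still-unsatisfied applications."""
    alloc = {app: 0.0 for app in demands}
    active = dict(demands)          # apps whose demand is not yet met
    remaining = capacity
    while active and remaining > 1e-9:
        total = sum(active.values())
        shares = {app: remaining * d / total for app, d in active.items()}
        satisfied = {app for app, s in shares.items()
                     if s >= demands[app] - alloc[app] - 1e-9}
        if not satisfied:
            # Nobody's share covers its demand: hand out the proportional
            # shares and stop, since there is no surplus to redistribute.
            for app, s in shares.items():
                alloc[app] += s
            remaining = 0.0
        else:
            # Cap satisfied apps at their demand and recycle the surplus.
            for app in satisfied:
                grant = demands[app] - alloc[app]
                alloc[app] += grant
                remaining -= grant
                del active[app]
    return alloc
```

With demands of 2, 4 and 8 units and only 7 available, each application receives half its demand; with 20 available, every demand is met and 6 units remain free. A mechanism like CRAM additionally has to make such an allocation incentive-compatible for self-interested applications.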
68

Modelo para o escoamento de aplicações científicas em ambientes de nuvens baseado em afinidade / Scheduling model for scientific applications in cloud environments based on affinity

Yokoyama, Daniel Massami Muniz 22 June 2015 (has links)
Confronted with the increasing demand for computing power to run scientific applications, research institutions face the need to purchase and maintain a computing infrastructure, which is both a necessity and a hindrance. Against this backdrop of technological races and equipment purchases, the cloud computing paradigm focused on scientific computing emerges as a tool to aid the advancement of scientific work. The following text presents a private cloud platform focused on the creation and management of computational clusters for solving high-performance computing tasks, specifically highly parallelizable processes using MPI. In addition to describing the system for compute clusters in clouds, the work presents a virtual machine scheduling model based on the affinity of the applications running on the hosts. This allocation model aims to allow better use of the resources available to the platform, increasing the throughput of executed tasks.
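The affinity idea above, co-locating applications that interfere little with one another, can be sketched as a greedy placement rule. The application classes and interference scores are invented assumptions for illustration; the thesis's actual affinity model is not reproduced here:

```python
# Illustrative interference scores between application classes: 1.0 means
# the classes compete for the same resource, lower means they coexist well.
INTERFERENCE = {
    ("cpu", "cpu"): 1.0, ("io", "io"): 1.0, ("mem", "mem"): 1.0,
    ("cpu", "io"): 0.2, ("io", "cpu"): 0.2,
    ("cpu", "mem"): 0.5, ("mem", "cpu"): 0.5,
    ("io", "mem"): 0.4, ("mem", "io"): 0.4,
}

def best_host(vm_class, hosts):
    """hosts maps a host name to the classes of the VMs already placed
    on it; pick the host where the new VM adds the least interference."""
    def added(classes):
        return sum(INTERFERENCE[(vm_class, c)] for c in classes)
    return min(hosts, key=lambda h: added(hosts[h]))
```

Under these scores a CPU-bound VM is steered away from a host already running CPU-bound VMs and toward one running I/O-bound work, which is the kind of placement an affinity-based scheduler makes to raise overall throughput.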
69

Virtual power: um modelo de custo baseado no consumo de energia do processador por máquina virtual em nuvens IaaS / Virtual power: a cost model based on the processor energy consumption per virtual machine in IaaS clouds

Hinz, Mauro 29 September 2015 (has links)
Made available in DSpace on 2016-12-12T20:22:53Z (GMT). No. of bitstreams: 1 Mauro Hinz.pdf: 2658972 bytes, checksum: 50ee82c291499d5ddc390671e05329d4 (MD5) Previous issue date: 2015-09-29 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The outsourcing of computing services has been through constant evolutions in the past years, due to the increase of demand for computing resources. Accordingly, data centers are the main suppliers of computing service and cloud-based computing services provide a new paradigm for the offer and consumption of these computing resources. A substantial motivator for using cloud computing is its pricing model, which enables to charge the customer only for the resources he used, thus adopting a pay-as-you-use cost model. Among cloud-based computing services, the service type Infrastructure-as-a-Service (IaaS) is the one mostly used by companies that would like to outsource their computing infrastructure. The IaaS service, in most cases, is offered through virtual machines. This paper revisits the cost models used by data centers and analyses the costs of supply of virtual machines based on IaaS. This analysis identifies that electricity represents a considerable portion of this cost and that much of the consumption comes from the use of processors in virtual machines, and that this aspect is not considered in the identified cost models. This paper describes the Virtual Power Model, a cost model based on energy consumption of the processor in cloud-based, virtual machines in IaaS. The model is based on the assumptions of energy consumption vs. processing load, among others, which are proven through experiments in a test environment of a small data center. As a result, the Virtual Power Model proves itself as a fairer pricing model for the consumed resources than the identified models. 
Finally, a case study is performed comparing the costs charged to a client under Amazon's cost model for the AWS EC2 service with the same service charged under the Virtual Power Model. / The outsourcing of computing services has undergone constant evolution in recent years owing to the continuous growth in demand for computing resources. In this sense, data centers are the main suppliers of computing services, and cloud computing services provide a new paradigm for the offer and consumption of these computing resources. A considerable motivator for the use of computing clouds is their pricing model, which makes it possible to charge the customer only for the resources actually used, adopting a pay-as-you-use cost model. Among cloud computing services, the IaaS (Infrastructure-as-a-Service) type is one of the most used by companies that wish to outsource their computing infrastructure. The IaaS service is, in the great majority of cases, offered through virtual machine instances. The present work revisits the cost models employed in data centers, analyzing how costs are formed in the supply of virtual machines in IaaS-based clouds. Based on this analysis, it is identified that electricity accounts for a considerable portion of this cost, that a good part of this consumption comes from the use of processors by virtual machines, and that this aspect is not considered in the cost models surveyed. This work describes the Virtual Power Model, a cost model based on the energy consumption of the processor per virtual machine in IaaS clouds. The model is built on premises such as the relation between energy consumption and processing load, among others, which are verified through experiments in a test environment of a small data center. As a result, the Virtual Power Model proves to be fairer in pricing the consumed resources than the models surveyed. Finally, a case study is performed comparing the costs charged to a client under Amazon's cost model for the AWS EC2 service with the same service charged under the Virtual Power Model.
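The abstract's central premise — that processor energy consumption varies with processing load and that a VM should be charged for the energy it actually draws — can be sketched as follows. This is an illustration of the pay-as-you-use idea only: the linear power curve, the wattage figures, and the kWh price are assumptions for the sketch, not values or formulas from the thesis.

```python
# Sketch of an energy-based pay-as-you-use charge in the spirit of the
# Virtual Power Model. Assumption (illustrative, not from the thesis):
# processor power grows linearly from an idle wattage to a full-load
# wattage as CPU utilisation rises.

def vm_energy_kwh(load: float, hours: float,
                  idle_watts: float = 60.0, max_watts: float = 200.0) -> float:
    """Energy drawn by one VM's processor share over a billing period.

    load  -- average CPU utilisation in [0, 1]
    hours -- length of the billing period
    idle_watts / max_watts -- assumed host power envelope (illustrative)
    """
    if not 0.0 <= load <= 1.0:
        raise ValueError("load must be in [0, 1]")
    watts = idle_watts + (max_watts - idle_watts) * load
    return watts * hours / 1000.0  # watt-hours -> kWh

def vm_cost(load: float, hours: float, price_per_kwh: float = 0.15) -> float:
    """Charge only for the energy actually consumed (pay-as-you-use)."""
    return vm_energy_kwh(load, hours) * price_per_kwh

# A VM averaging 25% CPU load for a 720-hour month:
# power = 60 + 140 * 0.25 = 95 W; energy = 68.4 kWh; cost = 10.26
```

Under this sketch, two VMs provisioned identically but loaded differently pay different amounts — the property the abstract argues makes the model fairer than flat instance pricing.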
70

Middleware de comunicação entre objetos distribuídos para gerenciamento de computadores baseado em redes sem fio (WSE-OS) /

Crepaldi, Luis Gustavo. January 2011 (has links)
Abstract: To simplify computer management, several administration systems structured on physical connections adopt advanced techniques for software configuration management. However, the strong coupling between hardware and software individualizes this management, besides penalizing the mobility and ubiquity of computing power. In this scenario, each computer becomes an individual entity to be managed, requiring manual operations to configure the system image. Technologies that offer centralized management based on physical client-server connections, combining virtualization techniques with the use of distributed file systems, suffer degraded flexibility and ease of installation. Other architectures for centralized management that structure data sharing through physical connections and depend on the PXE protocol present the same impasses described above. Given the limitations of centralized management models based on physical connections, the objective of this work is the development of a client-server communication middleware as an integral and necessary part of a centralized management environment over wireless communication networks. This environment, called WSE-OS (Wireless Sharing Environment – Operating Systems), is a model based on Virtual Desktop Infrastructure (VDI) that combines virtualization techniques and a secure remote access system to create a distributed architecture as the basis of a management system. WSE-OS is capable of replicating operating systems in a wireless communication environment, in addition to offering hardware abstraction to clients. WSE-OS can replace the local hard-disk boot with a boot from a Single System Image... 
(Full abstract: click electronic access below) / Abstract: To simplify computer management, various administration systems structured on physical connections adopt advanced techniques to manage software configuration. Nevertheless, the strong link between hardware and software individualizes that management, besides penalizing computational mobility and ubiquity. In this scenario, each computer becomes an individual entity to be managed, requiring manual operations to configure the system image. Technologies that offer centralized management based on client-server physical connections, combining virtualization techniques with the use of distributed file systems, suffer a deterioration in flexibility and in the ease of installing and maintaining distributed applications. Other architectures for centralized management that structure data sharing through physical connections and depend on the PXE protocol present the same dilemmas described above. Given the limitations of centralized management models based on physical connections, the objective of this project is the development of a middleware for client-server communication as a necessary part of an environment for centralized management over wireless communication networks. This environment, called WSE-OS (Wireless Sharing Environment – Operating Systems), is a model based on Virtual Desktop Infrastructure (VDI) that combines virtualization techniques and a secure access system to create a distributed architecture as the basis of a management system. WSE-OS is capable of replicating operating systems in a wireless environment, in addition to providing hardware abstraction to clients. WSE-OS can replace the local hard-disk boot with a boot from an SSI (Single System Image) virtualized on the server via the communication middleware, increasing flexibility and allowing multiple operating systems... 
(Complete abstract: click electronic access below) / Advisor: Marcos Antônio Cavenaghi / Co-advisor: Roberta Spolon / Committee: João Paulo Papa / Committee: Regina Helena Carlucci Santana / Master's
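The boot mechanism the abstract describes — clients fetching a server-held Single System Image through a communication middleware instead of booting from a local disk — can be sketched as a minimal block-serving protocol. Everything below is an illustrative assumption: the block size, the message framing, and the function names are not the thesis's actual WSE-OS protocol, and the wireless-specific and security aspects are omitted.

```python
# Minimal client-server sketch in the spirit of the WSE-OS middleware:
# the server holds a read-only Single System Image (SSI) and clients
# request fixed-size blocks of it on demand over TCP.
import socket
import struct
import threading

BLOCK_SIZE = 4096  # assumed block granularity (illustrative)

def _recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from a socket, or raise if the peer closes."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed connection")
        data += chunk
    return data

def serve_image(image: bytes, host: str = "127.0.0.1", port: int = 0) -> int:
    """Serve read-only blocks of `image`; return the bound port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()
    bound_port = srv.getsockname()[1]

    def handle(conn: socket.socket) -> None:
        with conn:
            while True:
                try:
                    # Request: one 64-bit big-endian block index.
                    (index,) = struct.unpack("!Q", _recv_exact(conn, 8))
                except ConnectionError:
                    return  # client finished
                start = index * BLOCK_SIZE
                block = image[start:start + BLOCK_SIZE]
                # Reply: 32-bit length prefix followed by the block bytes.
                conn.sendall(struct.pack("!I", len(block)) + block)

    def accept_loop() -> None:
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return bound_port

def fetch_block(port: int, index: int, host: str = "127.0.0.1") -> bytes:
    """Client side: request one block of the shared system image."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(struct.pack("!Q", index))
        (length,) = struct.unpack("!I", _recv_exact(conn, 4))
        return _recv_exact(conn, length)
```

Because every client reads from the same server-side image, updating that one image updates what all clients boot — the Single System Image property the abstract contrasts with per-machine local disks.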
