681
Spatial and Temporal Learning in Robotic Pick-and-Place Domains via Demonstrations and Observations. Toris, Russell C. 20 April 2016.
Traditional methods for Learning from Demonstration require users to train the robot through the entire process, or to provide feedback throughout a given task. These methods have proved successful in a selection of robotic domains; however, many are limited by the user's ability to demonstrate the task effectively. In many cases, noisy demonstrations or a failure to understand the underlying model prevent these methods from working with a wider range of non-expert users. My insight is that in many mobile pick-and-place domains, teaching is done at too fine-grained a level. In many such tasks, users are solely concerned with the end goal, which implies that the complexity and time associated with training and teaching robots through the entirety of the task is unnecessary. The robotic agent needs to know (1) a probable search location from which to retrieve the task's objects and (2) how to arrange the items to complete the task. This thesis develops new techniques for obtaining such data from high-level spatial and temporal observations and demonstrations, which can later be applied in new, unseen environments. This thesis makes the following contributions: (1) This work is built on a crowd robotics platform, and as such we contribute the development of efficient data streaming techniques to further these capabilities; by doing so, users can more easily interact with robots on a number of platforms. (2) New algorithms that can learn pick-and-place tasks from a large corpus of goal templates; my work contributes algorithms that produce a metric which ranks the appropriate frame of reference for each item based solely on spatial demonstrations. (3) An algorithm which can enhance the above templates with ordering constraints using coarse and noisy temporal information; such a method eliminates the need for a user to explicitly specify such constraints, and searches for an optimal ordering and placement of items. (4) A novel algorithm which is able to learn probable search locations of objects based solely on sparsely made temporal observations; for this, we introduce persistence models of objects customized to a user's environment.
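A minimal sketch of the kind of frame-of-reference ranking contribution (2) describes, assuming a variance-style consistency metric: an item whose demonstrated placements are consistent when expressed in a candidate frame scores low variance, so that frame ranks first. The data layout and the specific metric below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def rank_reference_frames(placements):
    """Rank candidate reference frames for one item by how consistent
    the demonstrated placements are when expressed in each frame.
    placements: dict mapping frame name -> list of (x, y) positions."""
    scores = {}
    for frame, pts in placements.items():
        pts = np.asarray(pts, dtype=float)
        # total positional variance across x and y; smaller = more consistent
        scores[frame] = float(pts.var(axis=0).sum())
    return sorted(scores, key=scores.get)  # best (lowest variance) first

# Example: a fork demonstrated just left of each plate is consistent in
# the plate's frame but scattered in the table's frame.
demos = {
    "plate": [(-0.10, 0.00), (-0.11, 0.01), (-0.09, -0.01)],
    "table": [(0.2, 0.5), (0.8, 0.3), (0.5, 0.9)],
}
print(rank_reference_frames(demos))  # ['plate', 'table']
```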
682
Drizzle: Design and Implementation of a Lightweight Cloud Game Engine with Latency Compensation. Sun, Jiawei. 8 December 2017.
With the rapid development of the Internet, cloud gaming has gained increasing attention. Cloud gaming is a new type of cloud service that allows a game to run on cloud servers while players interact with the game remotely on their own lightweight clients. Deploying a game on a cloud server has many potential benefits for both players and game developers, such as reducing the need for clients to update the game, easing the development of cross-device games, and helping prevent software piracy. In this work, I developed a cloud game engine, Drizzle, with a time warp algorithm for latency compensation, and implemented a new transmission method that reduces network bandwidth. Using Drizzle, I also developed a simple cloud game to evaluate functionality and performance. Experiments with this game in a controlled laboratory environment provide objective measurements of game performance and subjective measurements of user performance. Analysis of the results shows that Drizzle with time warp did not noticeably reduce latency but helped players achieve higher game scores compared to Drizzle without time warp. Moreover, Drizzle reduced network bitrates compared to some conventional cloud transmission methods.
683
Perception Framework for Activities of Daily Living Manipulation Tasks. Balasubramanian, Koushik. 28 April 2016.
There is increasing concern about the problems faced by the elderly and by physically locked-in people, who experience difficulty with self-care and with leading an independent life. The need to develop service robots that can help people with mobility impairments is therefore pressing. Developing a control framework for shared human-robot autonomy will allow locked-in individuals to perform the Activities of Daily Living (ADL) in a flexible way. The relevant ADL scenarios were identified as handling objects, self-feeding, and opening doors for indoor navigation assistance. Multiple experiments were conducted, demonstrating that the robot executes these daily living tasks reliably without requiring adjustment to the environment. Indoor manipulation tasks pose the challenge of dealing with a wide range of unknown objects. This thesis presents a framework developed for grasping without requiring a priori knowledge of the objects being manipulated. A successful manipulation task requires the combination of aspects such as environment modeling, object detection with pose estimation, grasp planning, and motion planning followed by an efficient grasp execution, which is validated on a 6+2 degree-of-freedom robotic manipulator.
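The stages the abstract lists can be pictured as a chain. Below is a hedged sketch in which every stage is a toy stand-in (the cropping bounds, centroid-based pose estimate, and top-down grasp heuristic are invented placeholders, not the thesis's components).

```python
import numpy as np

def perceive_and_grasp(point_cloud):
    """Chain the abstract's stages; each stage here is a toy stand-in."""
    # 1. environment modeling: crop the cloud to a tabletop workspace
    workspace = point_cloud[(point_cloud[:, 2] > 0.0) & (point_cloud[:, 2] < 1.0)]
    # 2. object detection + pose estimation: centroid as a crude pose
    pose = workspace.mean(axis=0)
    # 3. grasp planning: approach 10 cm above the centroid, straight down
    grasp = {"position": pose + np.array([0.0, 0.0, 0.10]),
             "approach": np.array([0.0, 0.0, -1.0])}
    # 4. motion planning / execution would hand `grasp` to the manipulator
    return grasp

cloud = np.random.rand(1000, 3)  # stand-in for a depth-camera point cloud
print(perceive_and_grasp(cloud))
```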
684
Competition in the software market: an analysis of the determining factors that lead companies to adopt cloud computing. Pontel, Daniel Francisco. January 2016.
The software industry has very particular characteristics, such as the absence of many competitors in each segment, while still being a market marked by frequent innovation. The newest innovation, the cloud, is allowing many new companies to enter the software market, competing with traditional microcomputer software companies. To analyze this market, this study examines the incentives leading software providers to change their products and begin offering them under cloud computing models. The study begins with a theoretical review of competition and its application to the information technology (IT) industry. It then introduces the IT industry, comparing competitive moves of the actors in different periods against the reviewed literature. Next, it analyzes the incentives that stimulate software providers to adopt cloud computing using feasibility analyses, such as sales comparisons of computers versus cell phones, Internet usage by mobile devices, connectivity indexes by country, and a comparison of market-value growth between on-premise software companies and cloud software companies. The study also addresses economic and social effects of cloud computing, such as the ability of other industries to adopt it in order to add value to their products through Internet-connected devices that transmit and collect data. Overall, the study found that the IT industry has the characteristics of monopolistic markets, with strong network externalities and high barriers to entry, coupled with very low distribution and reproduction costs. This explains how companies rise very quickly and how products mature quickly, with a consequent decline in sales. From this decline comes the need for a change of technology so that sales grow again. The study therefore concludes that the market is ascending under the cloud computing model due to many opportunities, such as the growing use of mobile devices, which can now connect to the Internet and run software that increases their usefulness. From these opportunities, it further concludes that cloud computing will make software development no longer the privilege of dedicated software companies, since other industries will also join this market.
685
Evolving geospatial applications: from silos and desktops to Microservices and DevOps. Gao, Bing. 30 April 2019.
The evolution of software applications from single desktops to sophisticated cloud-based systems is challenging. In particular, applications that involve massive data sets, such as geospatial and data science applications, are challenging for domain experts who suddenly find themselves constructing these sophisticated code bases. Relatively new software practices, such as Microservice infrastructure and DevOps, give us an opportunity to improve development, maintenance, and efficiency across the entire software lifecycle. Microservices and DevOps have been adopted by software developers in the past few years, as they relieve many of the burdens associated with software evolution. Microservices is an architectural style that structures an application as a collection of services. DevOps is a set of practices that automates the processes between software development and IT teams in order to build, test, and release software faster and more reliably. Combined with lightweight virtualization solutions such as containers, this technology will not only improve response rates in cloud-based solutions but also drastically improve the efficiency of software development. This thesis studies two applications that apply Microservices and DevOps within a domain-specific application, evaluating the advantages and disadvantages of the Microservices architecture and DevOps through design and development on two different platforms: a batch-based cloud system and a general-purpose cloud environment.
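As a concrete illustration of the architectural style, here is a minimal, self-contained HTTP microservice with the kind of health endpoint a DevOps pipeline or container orchestrator would poll. The service name, route, and port are invented for illustration and are not taken from the thesis.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TileService(BaseHTTPRequestHandler):
    """One independently deployable service of a larger application."""

    def do_GET(self):
        if self.path == "/health":
            # liveness probe polled by the orchestrator / CI pipeline
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TileService).serve_forever()
```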
686
Enhancement of Networking Capabilities in P2P OpenStack. Peddireddy, Vidyadhar Reddy. January 2019.
In recent times, there has been a trend towards setting up smaller clouds at the edge of the network and interconnecting them across multiple sites. In these scenarios, the software used for managing the resources should be flexible enough to scale. OpenStack, the most widely used cloud software, has shown performance degradation in its compute service when deployments reach a few hundred nodes. In search of solutions to OpenStack's scalability issue, Ericsson developed a new architecture that supports massive scalability of OpenStack clouds. However, the challenges of multi-cloud networking in P2P OpenStack remained unsolved. This thesis work, an extension of Ericsson's P2P OpenStack project, investigates various multi-cloud networking techniques and proposes two decentralized designs for cross-Neutron networking in P2P OpenStack: design 1 is based on the OpenStack Tricircle project and design 2 is based on VPNaaS. This thesis implements the VPNaaS design to support the automatic interconnection of virtual machines that belong to the same user but are deployed in different OpenStack clouds. We evaluate the control plane operation under two scenarios, a single-user case and a multiple-user case, with request-response time chosen as the evaluation parameter in both. Results show that request-response time increases as the number of users in the system grows.
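A hedged sketch of how the control-plane evaluation could be driven: Neutron's VPNaaS extension exposes REST resources (vpnservices, IKE/IPsec policies, ipsec-site-connections), and the snippet below times one creation request, the request-response metric the thesis reports. The endpoint URL, token, and resource IDs are placeholders, not values from the thesis.

```python
import time

import requests

NEUTRON = "http://controller:9696/v2.0"  # assumed Neutron endpoint
HEADERS = {"X-Auth-Token": "REPLACE_WITH_KEYSTONE_TOKEN"}

def timed_create(path, body):
    """POST one VPNaaS resource and return the request-response time."""
    start = time.time()
    resp = requests.post(f"{NEUTRON}{path}", json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json(), time.time() - start

# One ipsec-site-connection is created on each cloud so that VMs of the
# same user can reach each other; IDs and addresses are placeholders.
conn, rtt = timed_create("/vpn/ipsec-site-connections", {
    "ipsec_site_connection": {
        "vpnservice_id": "VPNSERVICE_ID",
        "ikepolicy_id": "IKEPOLICY_ID",
        "ipsecpolicy_id": "IPSECPOLICY_ID",
        "peer_address": "203.0.113.7",
        "peer_id": "203.0.113.7",
        "peer_cidrs": ["10.1.0.0/24"],
        "psk": "shared-secret",
    },
})
print(f"request-response time: {rtt:.3f}s")
```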
687
Live deduplication storage of virtual machine images in an open-source cloud. Ng, Chun Ho. January 2012.
Deduplication is a technique that eliminates the storage of redundant data blocks. In particular, it has been shown to effectively reduce the disk space needed to store multi-gigabyte virtual machine (VM) images. However, challenging deployment issues remain in enabling deduplication in a cloud platform, where VM images are regularly inserted and retrieved. We propose a kernel-space deduplication file system called LiveDFS, which can serve as a VM image storage backend in an open-source cloud platform built on low-cost commodity hardware configurations. LiveDFS is built on several novel design features; most importantly, it exploits spatial locality by placing deduplication metadata on disk with respect to the underlying file system layout. LiveDFS is POSIX-compliant and is implemented as a Linux kernel-space file system. We conduct testbed experiments of the read/write performance of LiveDFS using a dataset of 42 VM images of different Linux distributions. Our work justifies the feasibility of deploying LiveDFS in a cloud platform under commodity settings. Thesis (M.Phil.), Chinese University of Hong Kong, 2012; includes bibliographical references (leaves 39-42).
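A minimal sketch of the block-level deduplication idea LiveDFS builds on, assuming fixed-size blocks and an in-memory fingerprint store. The block size, hash choice, and store layout here are illustrative; LiveDFS's actual on-disk metadata placement and fingerprint filter are the subject of the thesis.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; the real block size is a design choice

def dedup_write(image_path, fingerprint_store, block_store):
    """Write a VM image into the stores, keeping each unique block once.
    fingerprint_store: dict fingerprint -> reference count
    block_store: dict fingerprint -> block bytes"""
    unique = total = 0
    with open(image_path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            total += 1
            fp = hashlib.sha1(block).hexdigest()  # block fingerprint
            if fp in fingerprint_store:
                fingerprint_store[fp] += 1  # duplicate: bump refcount only
            else:
                fingerprint_store[fp] = 1
                block_store[fp] = block  # first occurrence: store the data
                unique += 1
    return unique, total
```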
688
Practical data integrity protection in network-coded cloud storage. Chen, Chuk Hin Henry. January 2012.
To protect outsourced data in cloud storage against corruption, enabling integrity protection, fault tolerance, and efficient recovery becomes critical. To enable fault tolerance from a client-side perspective, users can encode their data with an erasure code and stripe the encoded data across different cloud storage nodes. We base our work on regenerating codes, a recently proposed type of erasure code that borrows the concept of network coding and requires less repair traffic than traditional erasure codes during failure recovery. We study the problem of remotely checking the integrity of regenerating-coded data against corruption in a real-life cloud storage setting. Specifically, we design a practical data integrity protection (DIP) scheme for a specific regenerating code while preserving its intrinsic properties of fault tolerance and repair-traffic saving. Our DIP scheme is designed under the Byzantine adversarial model and enables a client to feasibly verify the integrity of random subsets of outsourced data against general or malicious corruption. It works under the simple assumption of thin-cloud storage and allows different parameters to be fine-tuned for the performance-security trade-off. We implement and evaluate the overhead of our DIP scheme in a cloud storage testbed under different parameter choices, and demonstrate that remote integrity checking can be feasibly integrated into regenerating codes in practical deployment. Thesis (M.Phil.), Chinese University of Hong Kong, 2012; includes bibliographical references (leaves 38-41).
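A minimal sketch of the spot-checking idea behind the DIP scheme: verify the integrity of random subsets of the stored data rather than downloading everything. The MAC construction, sample size, and fetch interface below are assumptions for illustration; the real FMSR-DIP scheme operates over encoded FMSR chunks with the primitives described in the thesis.

```python
import hashlib
import hmac
import random

def tag_blocks(blocks, key):
    """At upload time, MAC every block so corruption is detectable later."""
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def spot_check(fetch_block, tags, key, sample_size=8):
    """Verify a random subset of blocks instead of downloading all data.
    fetch_block(i) returns block i as bytes from the cloud."""
    indices = random.sample(range(len(tags)), min(sample_size, len(tags)))
    for i in indices:
        mac = hmac.new(key, fetch_block(i), hashlib.sha256).digest()
        if not hmac.compare_digest(mac, tags[i]):
            return False  # corruption detected in the sampled subset
    return True
```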
689
Workflow scheduling with sensitive task annotations for security and cost optimization in clouds. Shishido, Henrique Yoshikazu. 11 December 2018.
The evolution of computers has enabled in-silico experiments, including applications based on the workflow model. Executing workflows can be computationally expensive, so grids and clouds are adopted for their execution. In this context, workflow scheduling algorithms make it possible to meet different execution criteria, such as time and monetary cost. Security, however, is a criterion that has been receiving attention, since many organizations hesitate to deploy their applications to clouds because of the threats present in an open and promiscuous environment like the Internet. Security-oriented scheduling algorithms consider two scenarios: (a) hybrid clouds, which keep tasks that manipulate sensitive/confidential data in the private cloud and export the remaining tasks to public clouds to satisfy some constraint (e.g., time); and (b) public clouds, which use the security services available on virtual machine instances to protect tasks that deal with sensitive/confidential data. However, scheduling algorithms that employ security services select tasks randomly, without considering data semantics. Such an approach can end up assigning protection to non-sensitive tasks, wasting time and resources, while leaving sensitive data without the necessary protection. Given these limitations, this thesis proposes two workflow scheduling approaches: Workflow Scheduling - Task Selection Policies (WS-TSP) and Sensitive Annotation for Security Tasks (SAST). WS-TSP is a scheduling approach that uses a set of policies for task protection. SAST is an approach that uses the Application Developer's prior knowledge to identify which tasks should be protected. Both WS-TSP and SAST apply security services such as authentication, integrity verification, and encryption to protect the sensitive tasks of the workflow. These approaches were evaluated through an extension of the WorkflowSim simulator that incorporates the overhead of the security services into the workflow's execution time, cost, and risk. Both approaches presented lower security risk than approaches from the literature, at reasonable cost and makespan.
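A hedged sketch of annotation-driven scheduling in the spirit of WS-TSP/SAST: security services are applied only to tasks annotated as sensitive, each service inflates the task's runtime, and a greedy earliest-finish-time policy assigns tasks to VMs. The overhead factors, task fields, and greedy policy are invented for illustration, not the thesis's algorithms.

```python
# Illustrative overhead factors for each security service (invented numbers).
OVERHEAD = {"auth": 1.05, "integrity": 1.12, "encryption": 1.25}

def effective_time(task):
    """Base runtime, inflated by security services for annotated tasks."""
    t = task["base_time"]
    if task.get("sensitive"):
        for service in task.get("services", ("auth", "integrity", "encryption")):
            t *= OVERHEAD[service]
    return t

def schedule(tasks, vm_speeds):
    """Greedy earliest-finish-time mapping of (topologically ordered)
    workflow tasks onto VMs; a stand-in for the WS-TSP/SAST policies."""
    finish = {vm: 0.0 for vm in vm_speeds}
    plan = []
    for task in tasks:
        vm = min(finish, key=finish.get)  # VM that frees up earliest
        finish[vm] += effective_time(task) / vm_speeds[vm]
        plan.append((task["name"], vm, round(finish[vm], 2)))
    return plan

print(schedule(
    [{"name": "align", "base_time": 10.0},
     {"name": "anonymize", "base_time": 8.0, "sensitive": True}],
    {"vm1": 1.0, "vm2": 2.0},
))
```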
690
High availability: an approach with DNS and reverse proxy in Multi-Cloud. Pires, Luis Paulo Gonçalves. 15 December 2016.
Pontifícia Universidade Católica de Campinas (PUC Campinas). While there is considerable enthusiasm for migrating on-premise data centers to cloud computing services, there is still some concern about the availability of these services. This is due, for example, to historical incidents such as the 2011 failure of Amazon's servers, which took the sites of several of its customers down for almost 36 hours. It therefore becomes necessary to develop strategies to guarantee the availability offered by providers. This work proposes a solution that implements high availability in Multi-Cloud environments through DNS-based access distribution and the use of a reverse proxy. A financial analysis based on market prices for cloud computing services showed that the proposed solution can even be advantageous compared with the traditional one. Specifically, a Multi-Cloud system consisting of two clouds with 99.90% availability each provides a total availability of 99.999% and costs 34% less than a single cloud with 99.95% availability. Simulation results, obtained in a virtualized environment using two clouds with availabilities of 99.49% and 99.43%, showed a system availability of 99.9971%. Thus, Multi-Cloud systems make it possible to build high-availability systems from lower-availability clouds, sized to the user's needs, while also saving on provider service costs.
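The reported figures follow the usual parallel-redundancy formula, assuming the two clouds fail independently (the system is down only when both clouds are down at once):

```latex
A_{\text{multi}} = 1 - (1 - A_1)(1 - A_2)
```

With the simulated clouds, 1 - (1 - 0.9949)(1 - 0.9943) = 1 - 0.0051 x 0.0057 = 0.999971, matching the reported 99.9971%; for two 99.90% clouds the same formula gives 1 - 0.001^2 = 0.999999, consistent with the abstract's quoted 99.999% total availability.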