31

Performance evaluation of WireGuard in Kubernetes cluster

Gunda, Pavan, Voleti, Sri Datta January 2021 (has links)
Containerization has gained popularity for deploying applications in a lightweight environment. Kubernetes and Docker have gained dominance for scalable deployments of applications in containers. Usually, Kubernetes clusters are deployed within a single shared network. For high availability of the application, multiple Kubernetes clusters are deployed in multiple regions, so the number of Kubernetes clusters keeps increasing over time. Maintaining and managing multiple Kubernetes clusters is a challenging and time-consuming process for system administrators or DevOps engineers. These issues can be addressed by deploying a single Kubernetes cluster in a multi-region environment. A multi-region Kubernetes deployment reduces the hassle of handling multiple Kubernetes masters by having only one master with worker nodes spread across multiple regions. In this thesis, we investigated a multi-region Kubernetes cluster's network performance by deploying a multi-region Kubernetes cluster with worker nodes across multiple OpenStack regions, tunnelled using WireGuard (a VPN protocol). A literature review on the common factors that influence network performance in a multi-region deployment was conducted to select the network performance metrics. Then, we compared the request-response time of this multi-region Kubernetes cluster with that of a regular Kubernetes cluster to evaluate the performance of the deployed multi-region Kubernetes cluster. The results show that a Kubernetes cluster with worker nodes in a single shared network has an average request-response time of 2 ms. In contrast, the Kubernetes cluster with worker nodes in different OpenStack projects and regions has an average request-response time of 14.804 ms. This thesis aims to provide a performance comparison of the Kubernetes cluster with and without WireGuard, the factors affecting the performance, and an in-depth understanding of concepts related to Kubernetes and WireGuard.
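The headline comparison in this abstract boils down to averaging repeated request-response probes from each deployment. A minimal sketch of that calculation, using made-up sample values rather than the thesis's raw measurements:

```python
import statistics

def average_rtt_ms(samples_ms):
    """Mean request-response time in milliseconds over repeated probes."""
    return round(statistics.mean(samples_ms), 3)

# Hypothetical probe results; the thesis reports ~2 ms for the
# single-network cluster and ~14.8 ms for the WireGuard-tunnelled one.
single_network = [2.1, 1.9, 2.0, 2.0]
multi_region = [14.9, 14.7, 14.8, 14.8]

print(average_rtt_ms(single_network))  # 2.0
print(average_rtt_ms(multi_region))    # 14.8
```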
32

Performance Modeling of OpenStack Controller

Samadi Khah, Pouya January 2016 (has links)
OpenStack is currently the most popular open-source platform for Infrastructure as a Service (IaaS) clouds. OpenStack lets users deploy virtual machines and other instances, which handle different tasks for managing a cloud environment on the fly. Many cloud platform offerings, including the Ericsson Cloud System, are based on OpenStack. Despite the popularity of OpenStack, there is currently a limited understanding of how much resource is consumed or needed by the components of OpenStack under different operating conditions, such as the number of compute nodes, the number of running VMs, the number of users, and the rate of requests to the various services. This master's thesis attempts to model the resource demand of the various components of OpenStack as a function of different operating conditions, identify correlations, and evaluate how accurate the predictions are. For this purpose, a physical OpenStack was set up with one powerful controller node and eight compute nodes. All the experiments and measurements were performed on virtual OpenStack components on top of the main physical one. In conclusion, a simple model is generated for the idle behaviour of OpenStack and for the start-VM and stop-VM API calls, which predicts the total CPU utilization based on the number of compute nodes and VMs.
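A model of the shape described here, total controller CPU as a function of the number of compute nodes and running VMs, can be fitted by ordinary least squares. The observations below are invented for illustration, not the thesis's measurements:

```python
import numpy as np

# Hypothetical observations: (compute_nodes, running_vms, controller_cpu_percent)
samples = [(2, 0, 5.0), (4, 10, 9.0), (8, 20, 15.0), (8, 40, 21.0)]

X = np.array([[1.0, n, v] for n, v, _ in samples])  # bias, nodes, VMs
y = np.array([c for _, _, c in samples])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # least-squares fit

def predicted_cpu(nodes, vms):
    """Predicted controller CPU utilisation for a given cluster size."""
    return float(np.array([1.0, nodes, vms]) @ beta)

print(round(predicted_cpu(6, 30), 1))
```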
33

Evaluation and Improvement of Application Deployment in Hybrid Edge Cloud Environment : Using OpenStack, Kubernetes, and Spinnaker

Jendi, Khaled January 2020 (has links)
Traditional mechanisms for deploying different applications can be costly in terms of time and resources, especially when an application requires a specific environment to run in and has various kinds of dependencies; setting up such an application would need an expert to identify all the required dependencies. In addition, it is difficult to deploy applications with efficient usage of the resources available in the distributed environment of the cloud, and deploying different projects on the same resources is a challenge. To address this problem, we evaluated different deployment mechanisms using heterogeneous infrastructure-as-a-service (IaaS) platforms, OpenStack and Microsoft Azure. We also used a platform-as-a-service called Kubernetes. Finally, to automate and integrate deployments, we used Spinnaker as the continuous delivery framework. The goal of this thesis work is to evaluate and improve different deployment mechanisms in terms of edge cloud performance. Performance depends on achieving efficient usage of cloud resources, reducing latency, scalability, replication and rolling upgrades, load balancing between data nodes, high availability, and zero downtime for deployed applications. These problems are solved primarily by designing and deploying an infrastructure and platform in which Kubernetes (PaaS) runs on top of OpenStack (IaaS). In addition, the use of Docker containers rather than regular virtual machines (container orchestration) has a large impact. The conclusion of the report demonstrates and discusses the results along with various test cases regarding the different deployment methods, and presents the deployment process. It also includes suggestions for developing more reliable and secure deployments in the future on heterogeneous container orchestration infrastructure.
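Measuring zero downtime during a rolling upgrade, as described above, amounts to probing the application repeatedly and counting failed responses. A deliberately abstract sketch, where the probe callable is a stand-in for an HTTP health check against the deployed application:

```python
def measure_downtime(probe, samples, interval_s):
    """Probe the service `samples` times; return (failed_probes, est_downtime_s).

    `probe` is any zero-argument callable returning True when the service
    answered; in practice it would wrap an HTTP GET against the app's
    health endpoint while the rolling upgrade runs.
    """
    failures = sum(0 if probe() else 1 for _ in range(samples))
    return failures, failures * interval_s

# Simulated upgrade window: the service misses probes 10-12 out of 100.
responses = iter([i not in (10, 11, 12) for i in range(100)])
failed, downtime = measure_downtime(lambda: next(responses),
                                    samples=100, interval_s=0.5)
print(failed, downtime)  # 3 1.5
```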
34

Autoscaling through Self-Adaptation Approach in Cloud Infrastructure. A Hybrid Elasticity Management Framework Based Upon MAPE (Monitoring-Analysis-Planning-Execution) Loop, to Ensure Desired Service Level Objectives (SLOs)

Butt, Sarfraz S. January 2019 (has links)
The project proposes a MAPE-based hybrid elasticity management framework on the basis of insights accrued during a systematic analysis of the relevant literature. Each stage of the MAPE process acts independently as a black box in the proposed framework while dealing with neighbouring stages. Being modular in nature, the underlying algorithms in any stage can be replaced with more suitable ones without affecting any other stage. The hybrid framework enables proactive and reactive autoscaling approaches to be implemented simultaneously within the same system. The proactive approach is incorporated as the core decision-making logic on the basis of forecast data, while the reactive approach, based on actual data, acts as a damage-control measure activated only when the proactive approach runs into a problem. Thus, the benefits of both worlds, pre-emption as well as reliability, can be achieved through the proposed framework. It uses time series analysis (moving average method / exponential smoothing) and threshold-based static rules (with multiple monitoring intervals and dual threshold settings) during the analysis and planning phases of the MAPE loop, respectively. A mathematical illustration of the framework incorporates multiple parameters, namely VM initiation delay / release criterion, network latency, system oscillations, threshold values, smart kill, etc. The research concludes that the recommended parameter settings primarily depend on the particular autoscaling objective and are often conflicting in nature. Thus, no single autoscaling system with fixed values can meet all objectives simultaneously, irrespective of the reliability of the underlying framework. The project successfully implements a complete cloud infrastructure and autoscaling environment on the experimental platforms OpenStack and CloudSim Plus.
In a nutshell, the research provides a solid understanding of the autoscaling phenomenon, devises a MAPE-based hybrid elasticity management framework, and explores its implementation potential on OpenStack and CloudSim Plus.
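The analysis and planning stages this abstract describes, an exponential-smoothing forecast feeding a dual-threshold static rule, can be sketched as follows (the smoothing factor and thresholds are illustrative choices, not the thesis's tuned values):

```python
def exp_smooth_forecast(series, alpha=0.5):
    """One-step-ahead forecast by simple exponential smoothing (Analysis)."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def plan(forecast_util, upper=0.80, lower=0.30):
    """Dual-threshold static rule on forecast utilisation (Planning)."""
    if forecast_util > upper:
        return "scale_out"
    if forecast_util < lower:
        return "scale_in"
    return "hold"

utilisation = [0.55, 0.60, 0.70, 0.85, 0.90]  # monitored CPU utilisation
forecast = exp_smooth_forecast(utilisation)
print(round(forecast, 3), plan(forecast))  # 0.822 scale_out
```

In the hybrid scheme above, the reactive path would run the same `plan` rule on the latest measured value instead of the forecast, overriding the proactive decision when they disagree.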
35

Constraint-based network communications in a virtual environment of proprietary hardware

Bhonagiri, Saaish, Mudugonda, Soumith Kumar January 2022 (has links)
Specialized hardware remains a key component of mobile networks, but at the same time the telecom industry is adopting a vision of a fully programmable, distributed end-to-end network with cloud-style management and Software-Defined Networking. In such a programmable network it will be possible to place workloads across abstracted compute and networking infrastructure. But whereas virtualization of standard compute resources is a mature technology, well supported in cloud management systems such as OpenStack and Kubernetes, this is not the case for specialized hardware with more complex constraints; there is a significant gap in terms of advanced constraint-aware and service-level-aware schedulers. The main objective of this thesis is to adapt the specialized hardware to the features of edge computing. Edge computing provides the opportunity to explore how these technologies can advance industrial processes and to achieve flexibility by choosing where a workload should be processed on the board, based on available resources. Using this technology, highly intensive applications can be handled at the network's edge. There is therefore a need to virtualize the proprietary hardware and run workloads in VMs and containers. In this thesis, we discuss kernel bypass, PCI passthrough, and MPI communication technologies in a virtual environment, considering the hardware constraints and software requirements, so that these technologies can be integrated into OpenStack and Kubernetes in the future.
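A constraint-aware scheduler of the kind this abstract finds missing typically starts as a filtering pass over candidate hosts, in the style of OpenStack's filter scheduler. A hypothetical sketch; the trait names and host fields are invented for illustration:

```python
def filter_hosts(hosts, required_traits, required_free_cores):
    """Keep hosts exposing every required hardware trait and enough free cores."""
    return [
        h for h in hosts
        if required_traits <= h["traits"] and h["free_cores"] >= required_free_cores
    ]

hosts = [
    {"name": "edge-1", "traits": {"pci_passthrough", "sriov"}, "free_cores": 4},
    {"name": "edge-2", "traits": {"pci_passthrough"}, "free_cores": 16},
    {"name": "core-1", "traits": set(), "free_cores": 32},
]

chosen = filter_hosts(hosts, {"pci_passthrough"}, 8)
print([h["name"] for h in chosen])  # ['edge-2']
```

A service-level-aware scheduler would follow this filter with a weighing step that ranks the surviving hosts, for example by expected latency to the workload's data source.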
36

A Modular architecture for Cloud Federation

Panjwani, Rizwan 21 December 2015 (has links)
Cloud computing is the next step in the evolution of the Internet. It provides seemingly unlimited computation and storage resources by abstracting the networking, hardware, and software components underneath. However, individual cloud service providers do not have unlimited resources to offer, and some tasks demand computational resources that an individual provider cannot fulfil by itself. In such cases, it would be optimal for these providers to borrow resources from each other. The process by which different cloud service providers pool their resources is called Cloud Federation. There are many aspects to Cloud Federation, such as access control and interoperability. Access control ensures that only permitted users can access the federated resources. Interoperability enables the end user to have a seamless experience when accessing resources on federated clouds. In this thesis, we detail our project, named GENI-SAVI Federation, in which we federated the GENI and SAVI cloud systems. We focus on the access control portion of the project while also discussing the interoperability aspect.
37

Estratégias para o suporte a ambientes de execução confiável em sistemas de computação na nuvem / Strategies for supporting trusted execution environments in cloud computing systems

SAMPAIO, Lília Rodrigues. 06 September 2018 (has links)
The high computing power offered by cloud providers, together with the flexibility, efficiency, and reduced cost they also offer, has led ever more users to deploy their applications on the cloud. As a consequence, a large amount of data from many critical, processing-intensive applications now travels through the cloud. Considering this, especially for applications that deal with sensitive data, such as banking systems or smart energy meters, it is very important to ensure the integrity and confidentiality of that data. Thus, it is increasingly common for users of cloud resources to demand guarantees that their applications are running in a trusted execution environment. Traditional approaches, such as encryption of data at rest and in transit combined with strict access control policies, have been used with some success but have still permitted serious attacks. More recently, Trusted Execution Environments (TEEs) have promised integrity and confidentiality guarantees for data and code by loading and executing them in secure, isolated areas of the machine's processor. To support TEE implementations, hardware security technologies such as ARM TrustZone and Intel SGX can be used. In this work, we use Intel SGX, which aims to guarantee the confidentiality and integrity of data and applications executed inside protected memory areas called enclaves; the code is thus protected even from software with privileged access, such as the operating system itself or hypervisors. Several cloud resources can make use of such security technologies, among them virtual machines and containers. In this study, we propose strategies for supporting TEEs in cloud environments by integrating Intel SGX and OpenStack, a widely known open-source cloud platform used by large companies. We present a new approach to the provisioning and scheduling of virtual machines and containers in a secure OpenStack cloud that considers aspects essential to SGX, such as the amount of secure memory in use, the Enclave Page Cache (EPC). Finally, we validate this cloud with an application that requires the processing of sensitive data.
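EPC-aware placement of the kind described can be sketched as a scheduler step that filters hosts by free enclave memory and ranks the survivors. The field names below are invented for illustration, not OpenStack's actual scheduler data model:

```python
def pick_sgx_host(hosts, epc_needed_mb):
    """Choose the host with the most free EPC that can still fit the enclave."""
    candidates = [h for h in hosts if h["free_epc_mb"] >= epc_needed_mb]
    if not candidates:
        raise RuntimeError("no SGX-capable host with enough free EPC")
    return max(candidates, key=lambda h: h["free_epc_mb"])

hosts = [
    {"name": "sgx-1", "free_epc_mb": 32},
    {"name": "sgx-2", "free_epc_mb": 96},
    {"name": "sgx-3", "free_epc_mb": 64},
]
print(pick_sgx_host(hosts, epc_needed_mb=48)["name"])  # sgx-2
```

EPC is small relative to ordinary RAM, which is why treating it as a first-class scheduled resource, rather than assuming it is always available, matters for enclave-heavy workloads.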
38

Proposta e validação de solução de computação em nuvem para redes com dispositivos nacionais / Proposal and validation of a cloud computing solution for networks with national devices

Zugliani, Ettore 26 February 2016 (has links)
Cloud computing and virtualization are recurring subjects, used today in a variety of systems in order to provide solutions that are effective, scalable, easier to maintain, and less costly. Among the main names in the area is OpenStack, which provides virtualization at various levels, from virtual machines to virtual networks. OpenStack is open source, and its network module, Neutron, has support from several manufacturers, such as Cisco, Brocade, and Arista, but until now there has been no support for a Brazilian manufacturer. This work proposes the construction of a national solution for network virtualization through the construction of a driver for the OpenStack network module that supports the equipment of the Brazilian manufacturer Datacom. The text first presents an overview of the current cloud computing landscape and a study of OpenStack and its network module Neutron; taking this study as a starting point, a set of diagrams is presented in order to propose a solution. This solution is then built using the Python programming language and good programming practices, arriving at a solid and highly modular implementation. The proposal is validated through a series of unit tests, which are also required for approval of the code by the community. The work results in a working OpenStack environment on the UFSCar servers that communicates satisfactorily with national network equipment, besides contributing to OpenStack development.
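A vendor driver of the kind described plugs switch-specific calls into Neutron's driver interface. The skeleton below is a hypothetical, simplified shape of that idea; the method names and the `DatacomClient` class are illustrative, not the actual Neutron ML2 API or a real Datacom SDK:

```python
class DatacomClient:
    """Stand-in for a client that talks to the switch's management interface."""
    def __init__(self):
        self.vlans = {}

    def create_vlan(self, vlan_id, name):
        self.vlans[vlan_id] = name

    def delete_vlan(self, vlan_id):
        self.vlans.pop(vlan_id, None)


class DatacomNetworkDriver:
    """Maps Neutron-style network events onto switch VLAN operations."""
    def __init__(self, client):
        self.client = client

    def create_network(self, network):
        self.client.create_vlan(network["segmentation_id"], network["name"])

    def delete_network(self, network):
        self.client.delete_vlan(network["segmentation_id"])


client = DatacomClient()
driver = DatacomNetworkDriver(client)
driver.create_network({"name": "tenant-a", "segmentation_id": 100})
print(client.vlans)  # {100: 'tenant-a'}
```

Keeping the device client behind its own class is what makes such a driver unit-testable without hardware, which matters given that community acceptance of the code depends on its test suite.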
39

Self-adaptive authorisation in cloud-based systems

Diniz, Thomas Filipe da Silva 02 May 2016 (has links)
Although major advances have been made in the protection of cloud platforms against malicious attacks, little has been done regarding the protection of these platforms against insider threats. This work looks into this challenge by introducing self-adaptation as a mechanism to handle insider threats in cloud platforms, demonstrated in the context of OpenStack authorisation. OpenStack is a popular cloud platform that relies on Keystone, its identity management component, for controlling access to its resources. The use of self-adaptation for handling insider threats is motivated by the fact that self-adaptation has been shown to be quite effective in dealing with uncertainty in a wide range of applications. Malicious insider attacks have become a major cause for concern, since legitimate, though malicious, users may have access to resources and, for example, steal a large amount of information. The key contribution of this work is the definition of an architectural solution that incorporates self-adaptation into OpenStack's authorisation mechanisms in order to deal with insider threats. For that, we have identified and analysed several insider threat scenarios in the context of the OpenStack cloud platform and have developed a prototype used for experimenting with these scenarios and evaluating their impact upon the self-adaptive authorisation system for cloud platforms.
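Self-adaptive authorisation in the sense described can be pictured as a MAPE loop watching access events and reacting to anomalies. A deliberately simplified sketch; the volume threshold and the role downgrade stand in for calls to an identity service such as Keystone:

```python
from collections import Counter

def analyse(events, per_user_limit):
    """Flag users whose total download volume exceeds the limit (Analysis)."""
    totals = Counter()
    for user, n_bytes in events:
        totals[user] += n_bytes
    return [u for u, total in totals.items() if total > per_user_limit]

def plan_and_execute(suspects, permissions):
    """Downgrade flagged users to read-only (Planning + Execution)."""
    for user in suspects:
        permissions[user] = "read_only"
    return permissions

# Monitoring phase: hypothetical (user, bytes_downloaded) access events.
events = [("alice", 10_000), ("bob", 900_000),
          ("alice", 5_000), ("bob", 700_000)]
permissions = {"alice": "member", "bob": "member"}

suspects = analyse(events, per_user_limit=1_000_000)
print(plan_and_execute(suspects, permissions))
# {'alice': 'member', 'bob': 'read_only'}
```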
40

Platforma pro virtualizaci komunikační infrastruktury / Communication infrastructure virtualization platform

Stodůlka, Tomáš January 2020 (has links)
The thesis deals with the selection of an infrastructure virtualization platform, focusing on containerization with sandboxing support, followed by an examination of how demanding the platform is. The work begins with an explanation of the basic technologies, such as virtualization, cloud computing, and containerization, along with representative software that implements them. Special attention is given to the cloud computing platforms Kubernetes, OpenStack, and OpenShift. Furthermore, the most suitable platform is selected and deployed using our own technique so that it fulfils all the conditions specified by the thesis supervisor. As part of the load testing of the selected platform, scripts are created (mainly in the Bash language) for scanning system load, creating scenarios, stress testing, and automation.
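A system-load scanning script of the kind mentioned typically samples Linux's /proc/loadavg. A minimal sketch with the parsing factored out of the file read, so the logic itself runs anywhere:

```python
def parse_loadavg(text):
    """Parse the three load averages from a /proc/loadavg-style line."""
    one, five, fifteen = (float(x) for x in text.split()[:3])
    return {"1min": one, "5min": five, "15min": fifteen}

def read_loadavg(path="/proc/loadavg"):
    """Read the current load averages (Linux only)."""
    with open(path) as f:
        return parse_loadavg(f.read())

# Example with a canned line; on a live Linux host, call read_loadavg().
print(parse_loadavg("0.42 0.35 0.30 1/123 4567"))
```

A stress-test harness would call `read_loadavg` on a timer and log the samples alongside the scenario being executed, which is essentially what the thesis's Bash scripts automate.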
