11

Enhancement of Networking Capabilities in P2P OpenStack

Peddireddy, Vidyadhar reddy January 2019 (has links)
In recent times, there has been a trend towards setting up smaller clouds at the edge of the network and interconnecting them across multiple sites. In these scenarios, the software used for managing the resources must be flexible enough to scale. OpenStack is the most widely used cloud software, yet its compute service has shown performance degradation when a deployment reaches a few hundred nodes. To address this scalability issue, Ericsson has developed a new architecture that supports massive scalability of OpenStack clouds. However, the challenges of multi-cloud networking in P2P OpenStack remained unsolved. This thesis, as an extension of Ericsson's P2P OpenStack project, investigates various multi-cloud networking techniques and proposes two decentralized designs for cross-Neutron networking in P2P OpenStack: design 1 is based on the OpenStack Tricircle project and design 2 on VPNaaS. The thesis implements the VPNaaS design to support the automatic interconnection of virtual machines that belong to the same user but are deployed in different OpenStack clouds. The work is evaluated for control-plane operation under two scenarios, a single-user case and a multi-user case, with request-response time as the evaluation metric in both. Results show that the request-response time increases as the number of users in the system grows.
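
A VPNaaS-based design of this kind essentially amounts to creating site-to-site IPsec connections between the Neutron routers of each pair of clouds that host the same user's VMs. The sketch below illustrates only that pairing logic in Python; the `CloudSite` fields and the `create_ipsec_site_connection` helper are hypothetical placeholders, not the thesis's implementation or the Neutron API.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class CloudSite:
    """Hypothetical description of one OpenStack cloud hosting the user's VMs."""
    name: str
    router_public_ip: str   # external IP of the cloud's Neutron router
    tenant_cidr: str        # private subnet behind that router

def create_ipsec_site_connection(local: CloudSite, peer: CloudSite, psk: str) -> dict:
    """Placeholder for a call to the VPNaaS API on `local`.
    A real implementation would create an IKE policy, an IPsec policy,
    a VPN service on the local router, and a site connection pointing
    at the peer's public IP and tenant CIDR."""
    return {
        "cloud": local.name,
        "peer_address": peer.router_public_ip,
        "peer_cidrs": [peer.tenant_cidr],
        "psk": psk,
    }

def interconnect(sites: list[CloudSite], psk: str) -> list[dict]:
    """Build a full mesh: one connection in each direction per cloud pair."""
    connections = []
    for a, b in combinations(sites, 2):
        connections.append(create_ipsec_site_connection(a, b, psk))
        connections.append(create_ipsec_site_connection(b, a, psk))
    return connections

if __name__ == "__main__":
    sites = [
        CloudSite("edge-1", "198.51.100.10", "10.0.1.0/24"),
        CloudSite("edge-2", "198.51.100.20", "10.0.2.0/24"),
    ]
    for conn in interconnect(sites, psk="example-shared-secret"):
        print(conn)
```
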
12

Cost Aware Virtual Content Delivery Network for Streaming Multimedia : Cloud Based Design and Performance Analysis

Vishnubhotla Venkata Krishna, Sai Datta January 2015 (has links)
A significant portion of today's internet traffic emerges from multimedia services. Coupled with the growth in the number of users accessing these services, this causes a tremendous increase in network traffic. CDNs help handle this traffic and offer reliable services by distributing content across different locations. The concept of virtualization has transformed traditional data centers into flexible cloud infrastructure, and with the advent of cloud computing technology, multimedia providers can establish a CDN on a network operator's cloud environment. The main challenge in establishing such a CDN, however, is implementing a cost-efficient and dynamic mechanism that still guarantees good service quality to users. This thesis aims to develop, implement and assess the performance of a model that coordinates the deployment of virtual servers in the cloud. A solution is proposed that dynamically spawns and releases virtual servers according to variations in user demand, and a cost-based heuristic algorithm is presented for deciding the placement of virtual servers in OpenStack-based federated clouds. The proposed model is implemented on the XIFI cloud and its performance is measured. Results of the performance study indicate that virtual CDNs offer reliable and prompt services. With virtual CDNs, multimedia providers can regulate expenses and gain greater flexibility in customizing the virtual servers deployed at different locations.
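
To make the flavor of a cost-based placement heuristic concrete, the sketch below greedily assigns the demanded number of virtual servers to the cheapest federated sites that still have capacity. The site names, cost figures and the greedy strategy are illustrative assumptions, not the thesis's actual algorithm.

```python
def place_virtual_servers(sites: dict[str, dict], demand: int) -> dict[str, int]:
    """Greedy cost-based placement: fill the cheapest sites first.

    `sites` maps a site name to {"cost": price per server-hour,
    "capacity": free server slots}. Returns servers placed per site.
    """
    placement = {name: 0 for name in sites}
    remaining = demand
    # Visit sites in increasing order of per-server cost.
    for name, info in sorted(sites.items(), key=lambda kv: kv[1]["cost"]):
        if remaining == 0:
            break
        take = min(info["capacity"], remaining)
        placement[name] = take
        remaining -= take
    if remaining > 0:
        raise RuntimeError(f"demand exceeds federation capacity by {remaining} servers")
    return placement

if __name__ == "__main__":
    # Hypothetical federated sites with per-server-hour costs and free capacity.
    sites = {
        "site-a": {"cost": 0.12, "capacity": 4},
        "site-b": {"cost": 0.08, "capacity": 3},
        "site-c": {"cost": 0.20, "capacity": 10},
    }
    print(place_virtual_servers(sites, demand=6))
    # -> {'site-a': 3, 'site-b': 3, 'site-c': 0}
```
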
13

Network Security Tool for a Novice

Ganduri, Rajasekhar 08 1900 (has links)
Network security is a complex field that is handled by security professionals who need certain expertise and experience to configure security systems. With the ever-increasing size of networks, managing them is becoming a daunting task. What kind of solution can be used to generate effective security configurations by security professionals and non-professionals alike? In this thesis, a web tool is developed to simplify the process of configuring security systems by translating direct human-language input into meaningful, working security rules. These human-language inputs yield the security rules that the individual wants to implement in their network, and can be as simple as "Block Facebook to my son's PC". The tool translates these inputs into specific security rules and installs the translated rules into security equipment such as a virtualized Cisco FWSM network firewall, the Netfilter host-based firewall, and the Snort network intrusion detection system. The tool is implemented and tested in both a traditional network and a cloud environment. To analyze the tool's performance, one thousand input policies were collected from various users, such as staff from UNT departments and health science, including individuals with a network security background as well as students with a non-computer-science background. The tool is tested for its accuracy in generating a security rule (91%) and for the accuracy of the translated rule compared to a standard rule written by security professionals (86%). The network security tool shows promise for both experienced and inexperienced people in the network security field by simplifying the provisioning process to produce accurate and effective network security rules.
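
As a toy illustration of the translation step, the Python sketch below turns a request like "Block Facebook to my son's PC" into a Netfilter/iptables-style rule using a simple pattern match. The supported phrasing, the device and service mappings, and the generated rule format are illustrative assumptions, not the thesis's actual translation engine.

```python
import re

# Hypothetical mappings from everyday names to network identifiers.
DEVICE_IPS = {"my son's pc": "192.168.1.42"}
SERVICE_DOMAINS = {"facebook": "facebook.com"}

RULE_PATTERN = re.compile(r"block\s+(?P<service>\w+)\s+to\s+(?P<device>.+)", re.IGNORECASE)

def translate(policy: str) -> str:
    """Translate a 'Block <service> to <device>' sentence into an
    iptables-style rule string. Raises ValueError for unsupported input."""
    match = RULE_PATTERN.match(policy.strip())
    if not match:
        raise ValueError(f"unsupported policy: {policy!r}")
    service = match.group("service").lower()
    device = match.group("device").lower().rstrip(".")
    try:
        domain = SERVICE_DOMAINS[service]
        source_ip = DEVICE_IPS[device]
    except KeyError as missing:
        raise ValueError(f"unknown name: {missing}") from None
    # String-match the destination domain; a real tool would resolve IP ranges.
    return (f"iptables -A FORWARD -s {source_ip} "
            f"-m string --string {domain} --algo bm -j DROP")

if __name__ == "__main__":
    print(translate("Block Facebook to my son's PC"))
```
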
14

A comparison between Terraform and Ansible on their impact upon the lifecycle and security management for modifiable cloud infrastructures in OpenStack.

Gurbatov, Gleb January 2022 (has links)
Automating deployment, security risk minimization, scaling, maintenance and development processes is highly critical, as it unlocks the potential of cloud computing: the flexibility and reliability advantages of cloud computing are not fully realized without automation of lifecycle processes. The flexibility of the automation solution is directly proportional to the quality of the lifecycle processes performed for the entire infrastructure. Nowadays, many companies are in constant search of flexible solutions for their infrastructure that allow further growth, and want to reduce resource usage when resources are idle in order to avoid additional financial costs. Orchestration techniques that automate the configuration, coordination and management of computer systems and software are used to meet this demand. Infrastructure as Code has played a large part in automation processes since the beginning of the growing demand for cloud computing, but a new era of orchestration and demand for flexibility has arrived, which IaC has to cover. Over the last decade multiple IaC solutions have appeared, each with different performance as an orchestrator. The flexibility of an orchestrator is measured by its configuration capabilities and the workflow control of operations via internal features. Nevertheless, time and required computational resources are an important part of orchestrator performance as well: protracted delays between lifecycle processes and excessive computational resource demand lead to high financial costs and long service downtime. Computational resource consumption, time metrics and configuration capabilities are the core of orchestrator performance.
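
One way to collect the time metrics mentioned above is to wrap the two tools' CLI runs in a small measurement harness. The sketch below times `terraform apply` and `ansible-playbook` invocations with Python's subprocess module; the playbook name is a placeholder, and this is only an assumed measurement setup, not the methodology used in the thesis.

```python
import subprocess
import time

def timed_run(command: list[str]) -> float:
    """Run a CLI command and return its wall-clock duration in seconds."""
    start = time.monotonic()
    subprocess.run(command, check=True)
    return time.monotonic() - start

if __name__ == "__main__":
    # Placeholder invocations; both assume the working directory already
    # contains the Terraform configuration / Ansible playbook to run.
    scenarios = {
        "terraform-provision": ["terraform", "apply", "-auto-approve"],
        "ansible-provision": ["ansible-playbook", "provision.yml"],
    }
    for name, command in scenarios.items():
        duration = timed_run(command)
        print(f"{name}: {duration:.1f} s")
```
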
15

Contributions à la mise en place d'une infrastructure de Cloud Computing à large échelle / Contributions to massively distributed Cloud Computing infrastructures

Pastor, Jonathan 18 October 2016 (has links)
The continuous increase in computing power needs has led to the triumph of the Cloud Computing model: customers needing computing power are supplied over the Internet by providers of Cloud Computing infrastructures. To achieve economies of scale, these infrastructures are increasingly large and concentrated in a few locations, leading to problems such as energy supply, fault tolerance, and the distance between the infrastructures and most of their end users. This thesis studied the implementation of a fully distributed and decentralized IaaS system operating a network of micro data centers deployed in the Internet backbone, using a version of OpenStack revised during the thesis to provide non-intrusive support for non-relational databases. Experiments on Grid'5000 showed interesting performance results, although they were limited by the fact that OpenStack does not natively take advantage of a geographically distributed deployment. We therefore studied how network locality can be taken into account to improve the performance of distributed cloud services by favoring collaborations between nearby nodes. A prototype of the DVMS virtual machine placement algorithm, running on an unstructured topology based on the Vivaldi algorithm, was validated on Grid'5000; this prototype won first prize at the large-scale challenge of the Grid'5000 spring school in 2014. Finally, this work led us to participate in the development of the VMPlaceS simulator.
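
The Vivaldi algorithm mentioned above gives each node synthetic network coordinates such that the distance between coordinates approximates measured round-trip time, which lets a node find nearby collaborators without global knowledge. Below is a minimal two-dimensional sketch of the basic (non-adaptive) Vivaldi update rule in Python; the fixed step size and the sample RTTs are illustrative assumptions.

```python
import math

def vivaldi_update(coord, peer_coord, rtt_ms, delta=0.25):
    """Move `coord` toward/away from `peer_coord` so that their Euclidean
    distance better matches the measured RTT (basic Vivaldi, fixed step)."""
    dx = coord[0] - peer_coord[0]
    dy = coord[1] - peer_coord[1]
    dist = math.hypot(dx, dy)
    error = rtt_ms - dist                      # positive: nodes appear too close
    if dist == 0:                              # identical coords: pick any direction
        ux, uy = 1.0, 0.0
    else:
        ux, uy = dx / dist, dy / dist
    return (coord[0] + delta * error * ux,
            coord[1] + delta * error * uy)

if __name__ == "__main__":
    node = (0.0, 0.0)
    # Hypothetical RTT measurements (ms) to two peers with known coordinates.
    samples = [((10.0, 0.0), 40.0), ((0.0, 30.0), 25.0)] * 20
    for peer, rtt in samples:
        node = vivaldi_update(node, peer, rtt)
    print(f"estimated coordinates: ({node[0]:.1f}, {node[1]:.1f})")
```
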
16

Sammansättning av ett privat moln som infrastruktur för utveckling / Putting together a private cloud as infrastructure for development

Ernfridsson, Alexander January 2017 (has links)
Today it is common to manage, describe and configure IT infrastructure such as processes, servers and environments in machine-readable configuration files instead of through physical hardware or interactive configuration tools. Automated infrastructure is becoming increasingly common as a way to focus more on development while achieving a more stable system, and as a result the number of tools for infrastructure automation has soared over the past decade. Automation solutions for different types of infrastructure have grown more complex and often involve many tools that interact with one another. This bachelor's thesis compares, selects and combines existing platforms and tools to create a private cloud as infrastructure for development, in order to streamline the lifecycle of a server-based runtime environment. A literature-based comparison of the cloud platforms OpenStack, OpenNebula, CloudStack and Eucalyptus lays the foundation for the cloud. The cloud platform is then complemented with other tools and solutions to complete the lifecycle automation of runtime environments, and a prototype of the solution was built to analyze practical issues. The work shows that a combination of OpenStack, Docker, container orchestration and configuration tools is a promising solution: it scales on demand, and automates and manages the organization's configurations for runtime environments.
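
Purely as an illustration of how a literature-based comparison of OpenStack, OpenNebula, CloudStack and Eucalyptus can be made explicit, the sketch below scores the platforms against weighted criteria. The criteria, weights and scores are hypothetical and do not reproduce the thesis's actual evaluation.

```python
# Hypothetical weights and 1-5 scores, purely for illustration.
WEIGHTS = {"community": 0.4, "documentation": 0.3, "modularity": 0.3}

SCORES = {
    "OpenStack":  {"community": 5, "documentation": 4, "modularity": 5},
    "OpenNebula": {"community": 3, "documentation": 4, "modularity": 3},
    "CloudStack": {"community": 3, "documentation": 3, "modularity": 3},
    "Eucalyptus": {"community": 2, "documentation": 3, "modularity": 2},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-criterion scores into a single weighted value."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

if __name__ == "__main__":
    ranking = sorted(SCORES.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
    for platform, scores in ranking:
        print(f"{platform:<11} {weighted_score(scores):.1f}")
```
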
17

Automated file extraction in a cloud environment for forensic analysis

Gustafsson, Kevin, Sundstedt, Emil January 2017 (has links)
The possibility of using the snapshot functionality of OpenStack as a method of securing evidence is examined in this paper. In addition, the possibility of extracting evidence automatically using an existing automation tool is investigated. The usability of snapshots in a forensic investigation was examined by conducting a series of tests on both snapshots and physical disk images, and the results were compared to evaluate the usefulness of the snapshots. Automatic extraction of evidence was investigated by implementing a solution using Ansible and evaluating the algorithm against the existing standard ISO 27037. It was concluded that the snapshots created by OpenStack behave similarly enough to physical disks to be useful in a forensic investigation, and the algorithm proposed for extracting evidence automatically does not appear to breach the standard.
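
A central requirement in ISO 27037-style evidence handling is preserving integrity, typically by hashing the acquired image and logging when and by whom it was acquired. The Python sketch below shows only that step for an extracted snapshot file; the file path and log format are placeholders, and the snapshot export itself (handled with Ansible and OpenStack tooling in the thesis) is outside this sketch.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large disk images fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_acquisition(image: Path, examiner: str, log: Path) -> dict:
    """Append a simple chain-of-custody entry for the acquired image."""
    entry = {
        "image": str(image),
        "sha256": sha256_of(image),
        "examiner": examiner,
        "acquired_at": datetime.now(timezone.utc).isoformat(),
    }
    with log.open("a") as handle:
        handle.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    # Placeholder path to a snapshot image exported from the cloud.
    print(record_acquisition(Path("instance-snapshot.qcow2"),
                             examiner="investigator-1",
                             log=Path("custody.log")))
```
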
18

Cloud Auto-Scaling Control Engine Based on Machine Learning

You, Yantian January 2018 (has links)
With the development of modern data centers and networks, many service providers have moved most of their computing functions to the cloud. Given the limitations of network bandwidth and of hardware and virtual resources, managing different virtual resources in a cloud environment so as to achieve better resource allocation is a major problem. Although some cloud infrastructures provide simple default auto-scaling and orchestration mechanisms, such as the OpenStack Heat service, these usually depend on only a single parameter, such as CPU utilization, and cannot respond to network changes in a timely manner. This thesis investigates different auto-scaling mechanisms and designs an online control engine that cooperates with different OpenStack service APIs based on various network resource data. Two auto-scaling engines, a Heat-orchestration-based engine and a machine-learning-based online control engine, have been developed and compared for different client request patterns. Two machine learning methods, a neural network and linear regression, were considered for generating a control signal from real-time network data. The thesis also shows the network's non-linear behavior under heavy traffic and proposes a scaling policy based on deep network analysis. The results show that for offline training, the neural network and linear regression provide 81.5% and 84.8% accuracy respectively. For online testing with different client request patterns, however, the neural network results differed from what we expected, while linear regression gave much better results. The model comparison showed that the two auto-scaling mechanisms behave similarly for a SMOOTH-load pattern. For the SPIKEY-load pattern, the linear-regression-based online control engine responded faster to network changes, while the Heat orchestration service showed some delay. Compared with the proposed scaling policy, which uses fewer web servers while keeping response latency acceptable, both auto-scaling models waste network resources.
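
As a rough illustration of a linear-regression-based control engine, the sketch below fits a regression from observed request rate to the number of web servers that coped with it, then turns a live measurement into a scale-out/scale-in decision. The training points, the implied capacity of roughly 100 requests/s per server, and the decision logic are illustrative assumptions; the actual engine in the thesis drives OpenStack APIs rather than printing a decision.

```python
def fit_linear(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def desired_servers(rate: float, slope: float, intercept: float) -> int:
    """Predict how many web servers a given request rate needs (at least one)."""
    return max(1, round(slope * rate + intercept))

if __name__ == "__main__":
    # Hypothetical training data: requests/s observed vs. servers that coped.
    rates   = [50.0, 120.0, 210.0, 290.0, 400.0]
    servers = [1.0,  2.0,   3.0,   3.0,   4.0]
    slope, intercept = fit_linear(rates, servers)

    current_servers = 2
    measured_rate = 350.0          # live measurement from the load balancer
    target = desired_servers(measured_rate, slope, intercept)
    if target > current_servers:
        print(f"scale out: {current_servers} -> {target}")
    elif target < current_servers:
        print(f"scale in: {current_servers} -> {target}")
    else:
        print("no scaling action")
```
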
19

Multi-Tenancy Security in Cloud Computing : Edge Computing and Distributed Cloud

Shokrollahi Yancheshmeh, Ali January 2019 (has links)
With the advent of technology, cloud computing has become the next generation of network computing, delivering both software and hardware as on-demand services over the Internet. Cloud computing has enabled small organizations to build web and mobile apps for millions of users by utilizing the concept of "pay-as-you-go" for applications, computing, network and storage resources as on-demand services. These services can be provided to tenants in different categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). In order to decrease costs for cloud users and increase resource utilization, cloud providers try to share resources between different organizations (tenants) through a shared environment, which is called multi-tenancy. Even though the benefits of multi-tenancy are tremendous for both cloud providers and users, security and privacy concerns are its primary obstacles. Since multi-tenancy depends heavily on resource sharing, many experts have suggested different approaches to securing it; one class of solutions is resource allocation and isolation techniques. In most cases, resource allocation techniques take security into account but are not sufficient on their own. The OpenStack community uses a method to isolate resources in a multi-tenant environment. Although this method is based on a smart filtering technique that segregates resources on Compute nodes (the component on which instances run in OpenStack), it is not flawless. The problem arises in the Cinder nodes, where the resources are not isolated; this shortcoming can be considered a security concern for a multi-tenant environment in OpenStack. To address this problem, this project explores a method to secure multi-tenancy on both sides: on the Compute node and on the backend, so that the Block Storage devices used by the instances can be isolated as well.
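
At a high level, the filtering technique referred to above behaves like a scheduler filter that only admits hosts tagged for the requesting tenant, and the same idea can be applied to storage backends. The Python sketch below is a simplified, self-contained imitation of that idea; it is not Nova's or Cinder's actual filter code, and the host and tenant metadata shown is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """A compute or storage host with tenant-isolation metadata.
    An empty `allowed_tenants` set means the host is shared by everyone."""
    name: str
    allowed_tenants: set[str] = field(default_factory=set)

def filter_hosts(hosts: list[Host], tenant_id: str) -> list[Host]:
    """Keep hosts that are either shared or explicitly reserved for this tenant."""
    return [h for h in hosts
            if not h.allowed_tenants or tenant_id in h.allowed_tenants]

if __name__ == "__main__":
    hosts = [
        Host("compute-1", {"tenant-a"}),
        Host("compute-2", {"tenant-b"}),
        Host("compute-3"),            # shared host, no isolation metadata
    ]
    for h in filter_hosts(hosts, tenant_id="tenant-a"):
        print(h.name)                 # -> compute-1, compute-3
```
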
20

Designing and implementing a private cloud for student and faculty software projects / Utformning och implementation av en privat molntjänst för programvaruprojekt av studenter och lärare

Le Fevre, Pierre, Karlsson, Emil January 2022 (has links)
Designing, building, and implementing a private cloud hosting solution can be challenging. This report aims to unify research in multiple areas within cloud hosting and to simplify the process by presenting a comprehensive ground-up approach. The proposed approach includes methods for deciding which models and paradigms to use, such as abstraction level and infrastructure scale. A step-by-step guide is presented, with all the considerations made along the way. The result is a platform that is accessible from a web browser or through a command-line interface and hosts services such as servers for machine learning and containerized applications in Kubernetes. Further work includes raising the abstraction level and enabling hardware enrollment over the network. Moreover, whether this implementation will scale as intended remains to be examined.
