11

Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing

López Huguet, Sergio 02 September 2021
Thesis by compendium / Scientific applications generally imply a variable and unpredictable computational workload, which institutions must address by dynamically adjusting the allocation of resources to their different computational needs. Scientific applications may require high capacity, e.g. a large amount of computational resources for processing many independent jobs (High Throughput Computing, HTC), or high capability, i.e. high-performance resources for solving a single complex problem (High Performance Computing, HPC). The computational resources required by this kind of application usually carry a very high cost that may exceed the availability of the institution's resources, or those resources may not adapt well to the scientific applications, especially in the case of infrastructures prepared for the execution of HPC applications. Indeed, the different parts of an application may require different types of computational resources. Cloud service platforms have become an efficient solution to meet the demand of HTC applications, as they provide a wide range of computing resources accessible on demand. For this reason, the number of hybrid clouds, which combine infrastructure hosted on cloud platforms with computing resources hosted at the institutions themselves (on-premise), has increased in recent years. As scientific applications can be processed on different infrastructures, application portability and delivery have become key issues. Containers are probably the most popular technology for application delivery, as they enable reproducibility, traceability, versioning, isolation, and portability. The main objective of this thesis is to provide an architecture and a set of services to build elastic hybrid processing infrastructures that fit the needs of different workloads. To this end, the thesis considers both vertical and horizontal elasticity: a proof of concept for vertical elasticity was developed, and an elastic cloud architecture for data analytics was designed. Afterwards, an elastic cloud architecture comprising heterogeneous computational resources was implemented for medical image processing, providing multiple processing queues for jobs with different requirements; this architecture was framed in a collaboration with the company QUIBIM. In the last part of the thesis, this architecture was evolved to design and implement an elastic, multi-site and multi-tenant cloud architecture for medical image processing within the framework of the European project PRIMAGE. This architecture uses distributed storage and integrates external authentication and authorization services based on OpenID Connect (OIDC). To this end, the tool kube-authorizer was developed; from the information obtained in the authentication process, it automatically provides access control to the resources of the processing infrastructure by creating the corresponding policies and roles. Finally, another tool, hpc-connector, was developed to enable the integration of HPC processing infrastructures into cloud infrastructures without requiring changes to either the HPC infrastructure or the cloud architecture. It should be noted that, during this thesis, different open-source job-management and container technologies were used, open-source tools and components were developed, and recipes were implemented for the automated configuration of the different architectures from a DevOps perspective. The results obtained support the feasibility of combining vertical and horizontal elasticity to implement deadline-based QoS policies, as well as the feasibility of the federated authentication model to combine public and on-premise clouds. / López Huguet, S. (2021). Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/172327
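To make the kube-authorizer idea more concrete, the following is a minimal sketch (not the tool's actual code) of deriving per-tenant Kubernetes RBAC objects from OIDC claims; the claim names, the namespace-per-project layout, and the granted verbs are assumptions made purely for illustration:

    # Illustrative only: build per-tenant Kubernetes RBAC objects from OIDC claims.
    # The claim names ("preferred_username", "project") and namespace layout are assumptions.

    def rbac_for_tenant(oidc_claims):
        user = oidc_claims["preferred_username"]
        namespace = oidc_claims["project"]      # one namespace per project/tenant (assumed)

        role = {
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "Role",
            "metadata": {"name": f"{namespace}-processing", "namespace": namespace},
            "rules": [{
                "apiGroups": ["", "batch"],
                "resources": ["pods", "pods/log", "jobs"],
                "verbs": ["get", "list", "watch", "create", "delete"],
            }],
        }
        binding = {
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "RoleBinding",
            "metadata": {"name": f"{namespace}-{user}", "namespace": namespace},
            "subjects": [{"kind": "User", "name": user,
                          "apiGroup": "rbac.authorization.k8s.io"}],
            "roleRef": {"kind": "Role", "name": f"{namespace}-processing",
                        "apiGroup": "rbac.authorization.k8s.io"},
        }
        return role, binding

The resulting manifests could then be applied with kubectl or any Kubernetes client as part of the automated policy and role creation the abstract describes.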
12

Security challenges within Software Defined Networks

Ahmed, Haroon, Sund, Gabriel January 2014
A large amount of today's communication occurs within data centers, where a large number of virtual servers (each running one or more virtual machines) provide service providers with the infrastructure needed for their applications and services. In this thesis, we look at the next step in the virtualization revolution: the virtualized network. Software-defined networking (SDN) is a relatively new concept that is moving the field towards a more software-based approach to networking. Today, when a packet is forwarded through a network of routers, a decision is made at each router as to which router is the next hop for the packet. With SDN, these decisions are made by a centralized SDN controller that determines the best path and instructs the devices along this path as to what action each should perform. Taking SDN to its extreme minimizes the number of physical network components and increases the number of virtualized ones. The reasons behind this trend are several; the most prominent are simplified processing and network administration, a greater degree of automation, increased flexibility, and shorter provisioning times. This in turn reduces operating and capital expenditures for data center owners, both of which drive the further development of this technology. Virtualization has been gaining ground in the last decade, although its initial introduction dates back to the 1970s, when server virtualization offered the ability to create several virtual server instances on one physical server. Today we have already taken small steps towards a virtualized network through the virtualization of network equipment such as switches, routers, and firewalls. Common to all these virtualization technologies is that, in their early stages, they have encountered trust issues and general concerns about whether software-based solutions are as rugged and reliable as hardware-based solutions. SDN has also encountered these issues, and discussion continues among both believers and skeptics. Concerns about trust remain a problem for the growing number of cloud-based services, where multi-tenant deployments may lead to loss of personal integrity and other security risks. As a relatively new technology, SDN is still immature and has a number of vulnerabilities, and as with most software-based solutions, the potential for security risks increases. This thesis investigates how denial-of-service (DoS) attacks affect an SDN environment with a single-threaded controller, described in text and via simulations. The results of our investigation of trust in a multi-tenancy SDN environment suggest that standardization and clear service level agreements are necessary to consolidate customers' confidence. Attracting small groups of customers to participate in use cases in the initial stages of implementation can generate valuable support for a broader deployment of SDN in the underlying infrastructure. With regard to denial-of-service attacks, our conclusion is that hackers can target the centralized SDN controller and thereby negatively affect most of the network infrastructure, because the entire infrastructure directly depends on a functioning SDN controller. SDN introduces new vulnerabilities, which is natural as it is a relatively new technology; it therefore needs to be thoroughly tested and examined before widespread deployment.
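To illustrate why a flooded single-threaded controller degrades the whole infrastructure, here is a back-of-the-envelope capacity sketch in Python; the message rates are invented for illustration and do not come from the thesis experiments:

    # Illustrative capacity model (assumed numbers): a single-threaded controller can
    # process a fixed number of packet-in messages per second, shared by legitimate
    # traffic and a DoS flood. Under overload, requests are served in proportion to load.

    def served_fraction(controller_rate, legit_rate, attack_rate):
        total = legit_rate + attack_rate
        if total <= controller_rate:
            return 1.0                      # no overload, everything is served
        return controller_rate / total      # overloaded: capacity shared across all requests

    if __name__ == "__main__":
        CONTROLLER_RATE = 50_000            # packet-in msgs/s the controller handles (assumed)
        LEGIT_RATE = 10_000                 # legitimate packet-in msgs/s (assumed)
        for attack in (0, 50_000, 500_000):
            f = served_fraction(CONTROLLER_RATE, LEGIT_RATE, attack)
            print(f"attack={attack:>7} msgs/s -> ~{f:.0%} of packet-ins served")

Because every flow setup depends on the controller, the drop in served packet-ins translates directly into unreachable services across the whole infrastructure, which is the effect the thesis studies.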
13

Integrating Customer Relationship Management into Cloud and Database Courses

Ortiz, David 23 February 2021
No description available.
14

Multitenant PrestoDB as a service

Yedurupak, Aruna Kumari January 2017
In recent years, there has been tremendous growth in the volumes of data that are produced, stored, and queried by organizations. Organizations spend more money to investigate and obtain useful information or knowledge from terabytes and even petabytes of data. Large-scale data analysis is the key functionality provided by Big Data platforms. Previously, data platforms obtained information from unstructured data in the form of files, text, and videos. In recent times, the Hadoop stack has played a vital role in Big Data, becoming the de facto open-source software used to process and analyze Big Data. Hops is a Hadoop distribution developed by KTH and RISE SICS. Hops modifies the Hadoop stack by moving the metadata for YARN and HDFS to NDB, an open-source in-memory distributed database. HopsWorks is the user interface for Hops and provides support for multi-tenancy, as well as self-service, graphical access to frameworks such as Hadoop, Flink, Spark, Kafka, and Kibana. HopsWorks currently does not provide a SQL-on-Hadoop service, although work is ongoing to support Hive. Presto is one of the main SQL-on-Hadoop platforms, but it currently does not provide multi-tenancy support. This thesis investigates providing multi-tenancy support for Presto with the help of HopsWorks, covering both the security problem and the self-service UI requirements of HopsWorks. Presto is a distributed SQL query engine that can run SQL queries against up to petabytes of data. As HopsWorks provides UI access to services, we decided to build our UI for Presto on an existing open-source UI for Presto, called Airpal, developed by Airbnb. The solution provided in this thesis is divided into two parts. The first maintains two separate applications (HopsWorks and Airpal) running in two JVMs, with a ProxyServlet controlling the traffic between them. The second, the HopsWorks-Presto service, leverages HopsWorks access control (Data Owner and Data Scientist roles) and its self-service security model. The evaluation of the thesis used a qualitative approach, comparing HopsWorks-PrestoService with standalone PrestoDB and comparing HopsWorks-PrestoService with HopsWorks without the Presto service.
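As a rough illustration of layering project-based access control in front of Presto, the sketch below assumes the presto-python-client package and an invented mapping from HopsWorks-style roles (Data Owner, Data Scientist) to allowed catalogs; the actual HopsWorks-Presto service is built around a Java ProxyServlet rather than this Python stub:

    # Illustrative sketch: check a project role before forwarding a query to Presto.
    # Role names mirror the abstract; the role-to-catalog mapping is an assumption.
    import prestodb  # pip install presto-python-client (assumed available)

    ALLOWED = {
        "Data Owner":     {"catalogs": {"hive"}, "read_only": False},
        "Data Scientist": {"catalogs": {"hive"}, "read_only": True},
    }

    def run_query(user, role, catalog, schema, sql):
        policy = ALLOWED.get(role)
        if policy is None or catalog not in policy["catalogs"]:
            raise PermissionError(f"{user} ({role}) may not query catalog {catalog}")
        if policy["read_only"] and not sql.lstrip().upper().startswith("SELECT"):
            raise PermissionError(f"{role} is restricted to read-only queries")
        conn = prestodb.dbapi.connect(host="presto.example.org", port=8080,
                                      user=user, catalog=catalog, schema=schema)
        cur = conn.cursor()
        cur.execute(sql)
        return cur.fetchall()

In the thesis, this kind of check is enforced transparently by the proxy layer between HopsWorks and Airpal rather than by each client.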
15

Tackling the Communication Bottlenecks of Distributed Deep Learning Training Workloads

Ho, Chen-Yu 08 1900
Deep Neural Networks (DNNs) find widespread applications across various domains, including computer vision, recommendation systems, and natural language processing. Despite their versatility, training DNNs can be a time-consuming process, and accommodating large models and datasets on a single machine is often impractical. To tackle these challenges, distributed deep learning (DDL) training workloads have gained increasing significance. However, DDL training introduces synchronization requirements among nodes, and the mini-batch stochastic gradient descent algorithm heavily burdens network connections. This dissertation proposes, analyzes, and evaluates three solutions addressing the communication bottleneck in DDL training workloads. The first solution, SwitchML, introduces an in-network aggregation (INA) primitive that accelerates DDL workloads. By aggregating model updates from multiple workers within the network, SwitchML reduces the volume of exchanged data. This approach, which incorporates switch processing, end-host protocols, and Deep Learning frameworks, enhances training speed by up to 5.5 times for real-world benchmark models. The second solution, OmniReduce, is an efficient streaming aggregation system designed for sparse collective communication. It optimizes performance for parallel computing applications, such as distributed training of large-scale recommendation systems and natural language processing models. OmniReduce achieves maximum effective bandwidth utilization by transmitting only nonzero data blocks and leveraging fine-grained parallelization and pipelining. OmniReduce outperforms state-of-the-art TCP/IP and RDMA network solutions by 3.5 to 16 times, delivering significantly better performance for network-bottlenecked DNNs, even at 100 Gbps. The third solution, CoInNetFlow, addresses congestion in shared data centers, where multiple DNN training jobs compete for bandwidth on the same node. The study explores the feasibility of coflow scheduling methods in hierarchical and multi-tenant in-network aggregation communication patterns. CoInNetFlow presents an innovative utilization of the Sincronia priority assignment algorithm. Through packet-level DDL job simulation, the research demonstrates that appropriate weighting functions, transport layer priority scheduling, and gradient compression on low-priority tensors can significantly improve the median Job Completion Time Inflation by over 70%. Collectively, this dissertation contributes to mitigating the network communication bottleneck in distributed deep learning. The proposed solutions can enhance the efficiency and speed of distributed deep learning systems, ultimately improving the performance of DNN training across various domains.
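The block-sparse idea behind OmniReduce (transmit only the non-zero blocks of each worker's gradient) can be illustrated with a toy NumPy sketch; the block size, worker count, and sparsity here are arbitrary, and the real system streams and pipelines blocks rather than materializing them as this example does:

    # Toy illustration of sparse collective communication: each worker sends only the
    # blocks of its gradient that contain non-zero values; the aggregator sums per block.
    import numpy as np

    BLOCK = 4  # elements per block (arbitrary for the example)

    def nonzero_blocks(grad):
        """Yield (block_index, block) pairs for blocks that are not entirely zero."""
        for i in range(0, len(grad), BLOCK):
            block = grad[i:i + BLOCK]
            if np.any(block):
                yield i // BLOCK, block

    def aggregate(workers_grads, length):
        total = np.zeros(length)
        sent = 0
        for grad in workers_grads:
            for idx, block in nonzero_blocks(grad):
                total[idx * BLOCK:idx * BLOCK + len(block)] += block
                sent += len(block)
        dense = len(workers_grads) * length
        print(f"sent {sent} of {dense} values ({sent / dense:.0%} of dense traffic)")
        return total

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        g1 = np.where(rng.random(32) < 0.2, rng.standard_normal(32), 0.0)  # ~80% zeros
        g2 = np.where(rng.random(32) < 0.2, rng.standard_normal(32), 0.0)
        aggregate([g1, g2], 32)

The saving grows with the sparsity of the gradients, which is why the technique pays off most for large embedding tables and other sparse models mentioned in the abstract.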
16

Distribuerade datalagringssystem för tjänsteleverantörer : Undersökning av olika användningsfall för distribuerade datalagringssystem / Distributed Data Storage Systems for Service Providers : Investigation of different use cases for distributed data storage systems

Ahmed, Tanvir Saif, Markovic, Bratislav January 2016
In this thesis, a study of three different use cases within the field of data storage has been made: Cold Storage, High Performance Storage, and Virtual Machine Storage. The purpose of the survey is to give an overview of commercial distributed file systems and a deeper study of open-source distributed file systems in order to find the most optimal solution for these use cases. Within the study, previous works concerning performance, data protection, and costs were analyzed and compared in order to identify the functionalities (snapshotting, multi-tenancy, data deduplication, and data replication) that distinguish modern distributed file systems. Both commercial and open distributed file systems were examined. A cost estimation for commercial and open distributed file systems was also made in order to determine the profitability of these two types of distributed file systems. After comparing and analyzing previous works, it was clear that the open-source distributed file system Ceph was suitable as a solution according to the objectives that were set for High Performance Storage and Virtual Machine Storage. The cost estimation showed that it was more profitable to implement an open distributed file system. This study can be used as guidance when choosing between different distributed file systems.
17

Cloud computing a jeho aplikace / Cloud Computing and its Applications

Němec, Petr January 2010
This diploma thesis is focused on Cloud Computing and its possible applications in the Czech Republic. The first part of the work is theoretical and presents an analysis of accessible sources related to the topic. Historical circumstances are given in relation to the evolution of this technology. A few definitions are quoted, followed by a basic taxonomy of Cloud Computing and common models of use. The chapter named Cloud Computing Architecture covers the generally accepted model of this technology in detail. The theoretical part also mentions some of the services operating on this technology, and it concludes with the possibilities of Cloud Computing usage from the customer's and the supplier's perspectives. The practical part of the thesis is divided into sections. The first presents the results of a questionnaire survey, performed by the author in the Czech Republic, on the usage of Cloud Computing and virtualization services. The second section is a pre-feasibility study focused on providing SaaS services in the area of long-term, safe digital data storage. Lastly, the author gives his view on the future and possible evolution of Cloud Computing technology.
18

Feature-based Configuration Management of Applications in the Cloud / Feature-basierte Konfigurationsverwaltung von Cloud-Anwendungen

Luo, Xi 27 June 2013
Complex business applications are increasingly offered as services over the Internet, as so-called Software-as-a-Service (SaaS) applications. The SAP Netweaver Cloud offers an OSGi-based open platform, which enables multi-tenant SaaS applications to run in the cloud. A multi-tenant SaaS application is designed so that a single application instance is used by several customers and their users. As different customers have different requirements for the functionality and quality of the application, the application instance must be configurable, and it must be possible to add new configurations to a multi-tenant SaaS application at run-time. In this thesis, we propose concepts for configuration management that are used for managing and creating client configurations of cloud applications. The concepts are implemented in a tool based on Eclipse and extended feature models. In addition, we evaluate our concepts and the applicability of the developed solution in the SAP Netweaver Cloud by using a cloud application as a concrete case example.
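A minimal sketch of the feature-model validation idea that underlies such tenant configuration is shown below; the feature names and constraints are invented, and the thesis itself uses extended feature models within an Eclipse-based tool rather than this simplified check:

    # Illustrative feature model for a multi-tenant SaaS application: each customer picks
    # a set of features, and a configuration is valid only if it respects the model's
    # mandatory, requires and excludes constraints. Feature names are invented.
    MANDATORY = {"core"}
    REQUIRES = {"reporting": {"storage"}, "premium_support": {"sla"}}
    EXCLUDES = {("basic_sla", "sla")}

    def is_valid(config):
        if not MANDATORY <= config:
            return False, f"missing mandatory features: {MANDATORY - config}"
        for feature, deps in REQUIRES.items():
            if feature in config and not deps <= config:
                return False, f"{feature} requires {deps - config}"
        for a, b in EXCLUDES:
            if a in config and b in config:
                return False, f"{a} excludes {b}"
        return True, "ok"

    # Per-customer configurations added at run-time, as described in the abstract.
    print(is_valid({"core", "reporting", "storage"}))   # (True, 'ok')
    print(is_valid({"core", "reporting"}))              # reporting requires storage

Checks of this kind are what allow new client configurations to be added to a running multi-tenant application without breaking the constraints of the underlying product line.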
19

Outsourcing Network Services via the NBI of the SDN / Externalisation de services réseau via l'interface nord de SDN

Aflatoonian, Amin 19 September 2017
Over the past decades, Service Providers (SPs) have gone through several generations of technologies that redefine networks and require new business models. The ongoing network transformation brings the opportunity for service innovation while reducing costs and mitigating vendor lock-in. Digitalization and, more recently, virtualization are changing service management methods: traditional network services are shifting towards new on-demand network services, which allow customers to deploy and manage their services independently and optimally through a well-defined interface opened to the SP's platform. To offer this freedom to its customers, the SP must be able to rely on a dynamic and programmable network control platform. We argue in this thesis that this platform can be provided by Software-Defined Networking (SDN) technology. We first characterize the perimeter of this class of new services. We identify the weakest management constraints that such services should meet and we integrate them into an abstract model structuring their lifecycle. This model involves two loosely coupled views, one specific to the customer and the other to the SP. This double-sided service lifecycle is finally refined with a data model completing each of its steps. The SDN architecture does not support all stages of this lifecycle, so we extend it with an original framework that manages all the steps identified in the lifecycle. This framework is organized around a service orchestrator and a resource orchestrator communicating via an internal interface; its implementation requires an encapsulation of the SDN controller. The example of the MPLS VPN serves as a guideline to illustrate our approach, and a PoC based on the OpenDaylight controller targeting the main parts of the framework is proposed. We propose to extend the value of our framework by introducing a new and original control model called BYOC (Bring Your Own Control), which formalizes, according to various modalities, the capability of outsourcing an on-demand service by delegating part of its control to an external third party. An outsourced on-demand service is divided into a customer part and an SP part. The latter exposes to the former APIs that allow requesting the execution of the actions involved in the different steps of the lifecycle. We present an XMPP-based Northbound Interface (NBI) that allows a secured BYOC-enabled API to be opened up. The asynchronous nature of this protocol, together with its integrated security functions, eases the outsourcing of control into a multi-tenant SDN framework. We illustrate the feasibility of our approach through a BYOC-based Intrusion Prevention System (IPS) service example.
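A schematic sketch of the kind of northbound API that BYOC implies is given below: the SP side exposes one operation per lifecycle step and the customer (or a delegated third party) drives them. The step names and signatures are assumptions made for illustration; the thesis realizes this interface over XMPP rather than as direct method calls:

    # Schematic BYOC-style service lifecycle API (names and steps are illustrative).
    # The SP side exposes the lifecycle operations; a customer-side controller calls them.
    from abc import ABC, abstractmethod

    class OnDemandServiceAPI(ABC):
        """SP-side northbound API for one outsourced on-demand service (e.g. an IPS)."""

        @abstractmethod
        def request(self, spec: dict) -> str:
            """Ask the SP to instantiate the service; returns a service identifier."""

        @abstractmethod
        def configure(self, service_id: str, parameters: dict) -> None:
            """Push customer-controlled configuration (the delegated part of the control)."""

        @abstractmethod
        def status(self, service_id: str) -> dict:
            """Read the operational state exposed by the SP."""

        @abstractmethod
        def withdraw(self, service_id: str) -> None:
            """Tear the service down and release SP resources."""

Splitting the lifecycle this way keeps the SP in control of its resources while letting the customer drive the delegated steps, which is the essence of the BYOC model described above.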
20

Feature-based Configuration Management of Applications in the Cloud

Luo, Xi 30 April 2013
Complex business applications are increasingly offered as services over the Internet, as so-called Software-as-a-Service (SaaS) applications. The SAP Netweaver Cloud offers an OSGi-based open platform, which enables multi-tenant SaaS applications to run in the cloud. A multi-tenant SaaS application is designed so that a single application instance is used by several customers and their users. As different customers have different requirements for the functionality and quality of the application, the application instance must be configurable, and it must be possible to add new configurations to a multi-tenant SaaS application at run-time. In this thesis, we propose concepts for configuration management that are used for managing and creating client configurations of cloud applications. The concepts are implemented in a tool based on Eclipse and extended feature models. In addition, we evaluate our concepts and the applicability of the developed solution in the SAP Netweaver Cloud by using a cloud application as a concrete case example.
Contents:
List of Figures (i); List of Tables (iii)
1 Introduction (1); 1.1 Motivation (2); 1.2 The Structure of This Document (2)
2 Background (5); 2.1 Cloud Computing (5); 2.2 Software Product Line Engineering (7); 2.3 Role Based Access Control (10); 2.4 Staged Configuration (12); 2.5 Workflow Modeling (14); 2.5.1 Concept (14); 2.5.2 Workflow Modeling Languages (16); 2.5.3 Adaptive Workflow (17); 2.5.4 Adaptation Patterns (17); 2.6 Graph Transformation (18); 2.7 Related Work (20)
3 Analysis (23); 3.1 Illustrative Example (23); 3.1.1 Domain and Existing Platform (24); 3.1.2 Yard Management System as a SaaS Application (28); 3.2 Requirements Identification (28)
4 Concept (31); 4.1 Configuration Management Specification (31); 4.1.1 Variability Modeling (32); 4.1.2 Stakeholder Views Modeling (34); 4.1.3 Configuration Workflow Modeling (36); 4.2 Configuration Workflow Adaptations (41); 4.3 Mapping between Problem Space and Solution Space (47); 4.4 Configuration Process Simulation (50)
5 Implementation (53); 5.1 Configuration Specification (54); 5.1.1 Extended Feature Model Specification (55); 5.1.2 View Model Specification (56); 5.1.3 Configuration Workflow Model Specification (57); 5.2 Graph Transformation Rules (62); 5.3 Mapping Realization (65); 5.4 Configuration Management Tooling (67); 5.5 Evaluation (70)
6 Conclusions and Future Work (77); 6.1 Conclusions (77); 6.2 Future Work (78)
Bibliography (i)
