21

A Coordination Framework for Deploying Hadoop MapReduce Jobs on Hadoop Cluster

Raja, Anitha January 2016 (has links)
Apache Hadoop is an open source framework that delivers reliable, scalable, and distributed computing. Hadoop services are provided for distributed data storage, data processing, data access, and security. MapReduce is the heart of the Hadoop framework and was designed to process vast amounts of data distributed over a large number of nodes. MapReduce has been used extensively to process structured and unstructured data in diverse fields such as e-commerce, web search, social networks, and scientific computation. Understanding the characteristics of Hadoop MapReduce workloads is the key to achieving improved configurations and refining system throughput. Thus far, MapReduce workload characterization in a large-scale production environment has not been well studied. In this thesis project, the focus is mainly on composing a Hadoop cluster (as an execution environment for data processing) to analyze two types of Hadoop MapReduce (MR) jobs via a proposed coordination framework. This coordination framework is referred to as a workload translator. The outcome of this work includes: (1) a parametric workload model for the target MR jobs, (2) a cluster specification to develop an improved cluster deployment strategy using the model and coordination framework, and (3) better scheduling and hence better performance of jobs (i.e., shorter job completion time). We implemented a prototype of our solution using Apache Tomcat on (OpenStack) Ubuntu Trusty Tahr, which uses RESTful APIs to (1) create a Hadoop cluster (version 2.7.2) and (2) scale the number of workers in the cluster up and down. The experimental results showed that, with well-tuned parameters, MR jobs can achieve shorter job completion times and improved utilization of hardware resources. The target audience for this thesis is developers. As future work, we suggest adding additional parameters to develop a more refined workload model for MR and similar jobs.
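The RESTful cluster-management interface mentioned in the abstract is not specified there. As a rough illustration, the following Python sketch shows what client calls against such a coordination service could look like; the base URL, endpoint paths, and payload fields are hypothetical, not the thesis' actual API.

```python
import requests

# Hypothetical base URL for the workload-translator service; the real
# endpoints and payload schema are not given in the abstract.
BASE_URL = "http://controller:8080/api/v1"

def create_cluster(name: str, workers: int) -> dict:
    """Ask the coordination service to provision a Hadoop 2.7.2 cluster."""
    resp = requests.post(
        f"{BASE_URL}/clusters",
        json={"name": name, "hadoop_version": "2.7.2", "workers": workers},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def scale_cluster(cluster_id: str, workers: int) -> dict:
    """Scale the worker count up or down for an existing cluster."""
    resp = requests.put(
        f"{BASE_URL}/clusters/{cluster_id}/workers",
        json={"workers": workers},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    cluster = create_cluster("mr-bench", workers=4)
    scale_cluster(cluster["id"], workers=8)  # scale up before a heavy MR job
```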
22

Cost- and Performance-Aware Resource Management in Cloud Infrastructures

Nasim, Robayet January 2017 (has links)
High availability, cost effectiveness, and ease of application deployment have accelerated the adoption rate of cloud computing. This fast proliferation of cloud computing promotes the rapid development of large-scale infrastructures. However, large cloud datacenters (DCs) pose infrastructure, design, deployment, scalability, and reliability challenges and need better management techniques to achieve sustainable design benefits. Resources inside cloud infrastructures often operate at low utilization, rarely exceeding 20-30%, which increases the operational cost significantly, especially due to energy consumption. To reduce operational cost without affecting quality of service (QoS) requirements, cloud applications should be allocated just enough resources to minimize their completion time or to maximize utilization. The focus of this thesis is to enable resource-efficient and performance-aware cloud infrastructures by addressing the above-mentioned cost- and performance-related challenges. In particular, we propose algorithms, techniques, and deployment strategies for improving the dynamic allocation of virtual machines (VMs) onto physical machines (PMs). To minimize the operational cost, we mainly focus on optimizing the energy consumption of PMs by applying dynamic VM consolidation methods. To make VM consolidation techniques more efficient, we propose to utilize multiple paths to spread traffic and to deploy recent queue management schemes, which can maximize network resource utilization and reduce both downtime and migration time for live migration techniques. In addition, a dynamic resource allocation scheme is presented to distribute workloads among geographically dispersed DCs, considering their location-based, time-varying costs due to, e.g., carbon emissions or bandwidth provisioning. For optimizing performance-level objectives, we focus on interference among applications contending for shared resources and propose a novel VM consolidation scheme that considers the sensitivity of the VMs to their demanded resources. Further, to investigate the impact of uncertain parameters, such as unpredictable variations in demand, on cloud resource allocation and applications' QoS, we develop an optimization model based on the theory of robust optimization. Furthermore, to handle the scalability issues that arise in large-scale infrastructures, a robust and fast Tabu Search algorithm is designed and evaluated.
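The abstract names a Tabu Search algorithm for scalable VM placement but gives no detail. Below is a minimal, self-contained Python sketch of Tabu Search over VM-to-PM assignments; the cost model (count of active PMs plus an overload penalty), the move neighbourhood, and the tabu rule are illustrative assumptions, not the thesis' actual design.

```python
import random
from collections import deque

def tabu_search(vm_cpu, pm_cap, iters=500, tabu_len=20, seed=0):
    """Toy Tabu Search: assign each VM to a PM, minimizing active PMs."""
    rng = random.Random(seed)
    n, m = len(vm_cpu), len(pm_cap)

    def cost(assign):
        load = [0.0] * m
        for vm, pm in enumerate(assign):
            load[pm] += vm_cpu[vm]
        overload = sum(max(0.0, load[p] - pm_cap[p]) for p in range(m))
        active = sum(1 for l in load if l > 0)
        return active + 1000.0 * overload  # heavily penalize overload

    assign = [rng.randrange(m) for _ in range(n)]
    best, best_cost = assign[:], cost(assign)
    tabu = deque(maxlen=tabu_len)  # recently undone (vm, pm) moves

    for _ in range(iters):
        # Neighbourhood: move a single VM to a different PM.
        candidates = []
        for _ in range(50):  # sample 50 random moves per iteration
            vm, pm = rng.randrange(n), rng.randrange(m)
            if pm != assign[vm] and (vm, pm) not in tabu:
                trial = assign[:]
                trial[vm] = pm
                candidates.append((cost(trial), vm, trial))
        if not candidates:
            continue
        c, vm, trial = min(candidates, key=lambda t: t[0])
        tabu.append((vm, assign[vm]))  # forbid moving this VM straight back
        assign = trial
        if c < best_cost:
            best, best_cost = trial[:], c
    return best, best_cost

if __name__ == "__main__":
    placement, score = tabu_search([2.0, 4.0, 4.0, 8.0, 1.0, 3.0], [8.0, 8.0, 8.0])
    print(placement, score)
```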
23

Vendor-Independent Software-Defined Networking : Beyond The Hype / Leverantörsoberoende Mjukvarudefinerade Nätverk

Pagola Moledo, Santiago January 2019 (has links)
Software-Defined Networking (SDN) is an emerging trend in networking that offers a number of advantages over traditional networks, such as smoother network management. By decoupling the control and data planes from network elements, many new opportunities arise, especially in network virtualization. In cloud datacenters, where virtualization plays a fundamental role, SDN presents itself as the perfect candidate to ease infrastructure management and to ensure correct operation. Even though the original SDN ideology advocates openness of source and interfaces, multiple networking vendors offer their own proprietary solutions. In this work, an open-source SDN solution, named Tungsten Fabric, is deployed in a virtualized datacenter and a number of SDN-related use cases are examined. The main goal of this work is to determine whether Tungsten Fabric can deliver the same set of use cases as a proprietary solution from Juniper, named Contrail Cloud. Finally, this work gives some guidelines on whether open-source SDN is the right candidate for Ericsson.
24

IaaS-cloud security enhancement : an intelligent attribute-based access control model and implementation

Al-Amri, Shadha M. S. January 2017 (has links)
The cloud computing paradigm introduces an efficient utilisation of huge computing resources by multiple users, with minimal expense and deployment effort compared to traditional computing facilities. Although cloud computing has incredible benefits, some governments and enterprises remain hesitant to transfer their computing technology to the cloud as a consequence of the associated security challenges. Security is, therefore, a significant factor in cloud computing adoption. Cloud services consist of three layers: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Cloud computing services are accessed through network connections and utilised by multiple users who can share the resources through virtualisation technology. Accordingly, an efficient access control system is crucial to prevent unauthorised access. This thesis mainly investigates IaaS security enhancement from an access control point of view.
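To make the attribute-based access control (ABAC) idea behind such a model concrete, here is a minimal deny-by-default ABAC check in Python; the attribute names and rules are invented for illustration and do not come from the thesis.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Request:
    subject: dict   # e.g. {"role": "tenant-admin", "tenant": "tenant-a"}
    resource: dict  # e.g. {"type": "vm", "owner": "tenant-a"}
    action: str     # e.g. "start"
    context: dict = field(default_factory=dict)  # e.g. {"hour": 14}

Rule = Callable[[Request], bool]

# Illustrative policy: each rule is a named predicate over the request.
POLICY: list[tuple[str, Rule]] = [
    ("admins manage their own tenant's VMs",
     lambda r: r.subject.get("role") == "tenant-admin"
               and r.resource.get("owner") == r.subject.get("tenant")),
    ("operators may read during work hours",
     lambda r: r.subject.get("role") == "operator"
               and r.action == "read"
               and 8 <= r.context.get("hour", 0) < 18),
]

def is_permitted(req: Request) -> bool:
    """Permit if any rule matches; deny by default."""
    return any(rule(req) for _, rule in POLICY)

req = Request(subject={"role": "tenant-admin", "tenant": "tenant-a"},
              resource={"type": "vm", "owner": "tenant-a"},
              action="start")
print(is_permitted(req))  # True
```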
25

Algorithms for efficient VM placement in data centers : Cloud Based Design and Performance Analysis

Atchukatla, Mahammad suhail January 2018 (has links)
Context: Recent trends show that cloud computing adoption is continuously increasing in every organization. Demand for cloud datacenters has therefore increased tremendously, resulting in significantly higher resource utilization in the datacenters. In this thesis work, research was carried out on optimizing energy consumption through packing of virtual machines in the datacenter. The CloudSim simulator was used for evaluating bin-packing algorithms, and the OpenStack cloud computing environment was chosen as the platform for the practical implementation.
Objectives: In this research, our objectives are as follows:
• Perform simulation of the algorithms in the CloudSim simulator.
• Estimate and compare the energy consumption of different packing algorithms.
• Design an OpenStack testbed to implement the bin-packing algorithm.
Methods: We use the CloudSim simulator to estimate the energy consumption of the first-fit, first-fit-decreasing, best-fit, and enhanced best-fit algorithms, and we design a heuristic model for implementation in the OpenStack environment that optimizes the energy consumption of the physical machines. Server consolidation and live migration are used in the algorithm design for the OpenStack implementation. Our research also extends to the Nova scheduler functionality in an OpenStack environment.
Results: In most cases, the enhanced best-fit algorithm gives better results. Results were obtained from the default OpenStack VM placement algorithm as well as from the heuristic algorithm developed in this work. The comparison indicates that the total energy consumption of the datacenter is reduced without affecting potential service level agreements.
Conclusions: The research shows that the energy consumption of the physical machines can be optimized without compromising the offered service quality. A Python wrapper was developed to implement this model in the OpenStack environment and minimize the energy consumption of the physical machines by shutting down the unused ones. The results indicate that CPU utilization does not vary much when live migration of a virtual machine is performed.
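As a concrete reference for the packing algorithms compared above, here is a minimal first-fit-decreasing (FFD) sketch in Python. Real VM placement is multi-dimensional (CPU, memory, disk) and energy-aware; a single resource dimension is used here for clarity.

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """Sort VM demands descending, place each on the first host with room.

    Returns a list of hosts, each a list of the VM demands packed onto it.
    """
    hosts = []
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])  # open (power on) a new host
    return hosts

vms = [4, 8, 1, 4, 2, 1, 7, 3]  # e.g. vCPU demands
print(first_fit_decreasing(vms, host_capacity=10))
# -> [[8, 2], [7, 3], [4, 4, 1, 1]]: three hosts cover 30 vCPUs of demand
```

Fewer active hosts means more machines can be shut down, which is exactly the energy lever the thesis' consolidation heuristic exploits.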
26

Virtualizace energetické infrastruktury / Virtualization of energy infrastructure

Hraboš, Šimon January 2021 (has links)
This work describes the virtualization process, virtualization tools, and virtualization automation. The work also covers the energy infrastructure, the KYPO cyber range platform, and the DLMS/COSEM protocol used in the energy sector. The practical part deals with the virtualization of energy infrastructure using the OpenStack and KYPO cyber range platforms. A virtual environment was created using the Vagrant application, and the OpenStack and KYPO cyber range platforms were subsequently installed in this environment. Next, a sandbox definition was created; it defines a scenario with an energy infrastructure on the KYPO platform. The functionality of the energy infrastructure was verified using the Gurux DLMS library.
27

KTHFS Orchestration : PaaS orchestration for Hadoop

Lorente Leal, Alberto January 2013 (has links)
Platform as a Service (PaaS) has had a huge impact on how we can offer easy-to-use, scalable software that adapts to the needs of its users. It has made it possible for systems to configure themselves upon customer demand. Based on these features, large interest has emerged in offering virtualized Hadoop solutions on top of Infrastructure as a Service (IaaS) architectures, in order to easily deploy fully functional Hadoop clusters on platforms like Amazon EC2 or OpenStack. Throughout the thesis work, we studied the possibility of enhancing the capabilities of KTHFS, a modified Hadoop platform under development, to allow automatic configuration of a whole functional cluster on IaaS platforms. To achieve this, we study proposals for similar PaaS platforms from companies like VMware or Amazon, and analyze existing node orchestration techniques for configuring nodes at cloud providers like Amazon or OpenStack and later automating this process. This is the starting point for this work, which leads to the development of our own orchestration language for KTHFS and two artifacts: (i) a simple web portal to launch the KTHFS Dashboard on the supported IaaS platforms, and (ii) an integrated component in the Dashboard in charge of analyzing a cluster definition file and initializing the configuration and deployment of a cluster using Chef. Lastly, we discover new issues related to scalability and performance when integrating the new components into the Dashboard. This forces us to analyze solutions to optimize the performance of our deployment architecture, which allows us to reduce the deployment time by introducing a few modifications to the architecture. Finally, we conclude with a few words about ongoing and future work.
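The abstract refers to a cluster definition file and an orchestration language without showing their syntax. The Python sketch below illustrates the general idea, expanding a definition into per-node provisioning tasks whose run lists would be handed to Chef at bootstrap; the YAML field names are assumptions, not the actual KTHFS format.

```python
import yaml  # PyYAML

# Hypothetical cluster definition in the spirit of the orchestration
# language described in the abstract.
CLUSTER_DEF = """
name: kthfs-demo
provider: openstack          # or: ec2
nodes:
  - role: namenode
    count: 1
    flavor: m1.large
    recipes: [kthfs::namenode]
  - role: datanode
    count: 4
    flavor: m1.medium
    recipes: [kthfs::datanode]
"""

def expand(definition: str):
    """Expand a cluster definition into one provisioning task per node."""
    spec = yaml.safe_load(definition)
    tasks = []
    for group in spec["nodes"]:
        for i in range(group["count"]):
            tasks.append({
                "host": f'{spec["name"]}-{group["role"]}-{i}',
                "provider": spec["provider"],
                "flavor": group["flavor"],
                "run_list": group["recipes"],  # handed to Chef at bootstrap
            })
    return tasks

for task in expand(CLUSTER_DEF):
    print(task["host"], task["run_list"])
```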
28

A Study of OpenStack Networking Performance / En studie av Openstack nätverksprestanda

Olsson, Philip January 2016 (has links)
Cloud computing is a fast-growing sector among software companies. Cloud platforms provide services such as spreading storage and computational power over several geographic locations, on-demand resource allocation, and flexible payment options. Virtualization is a technology used in conjunction with cloud technology; it offers the possibility to share the physical resources of a host machine by hosting several virtual machines on the same physical machine. Each virtual machine runs its own operating system, which makes the virtual machines hardware-independent. The cloud and virtualization layers add additional layers of software to the server environment to provide these services, and the additional layers add latency, which can be problematic for latency-sensitive applications. The primary goal of this thesis is to investigate how the networking components impact latency in an OpenStack cloud compared to a traditional deployment. The networking components were benchmarked under different load scenarios, and the results indicate that the additional latency added by the networking components is not very significant in the network setup used. Instead, a significant performance degradation could be seen in the applications running in the virtual machine, which caused most of the added latency in the cloud environment.
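As a flavor of the kind of measurement involved, the following Python sketch probes TCP connect latency to a service endpoint and reports summary statistics. The thesis' actual benchmark tooling and load scenarios are not described in the abstract, so this is only an illustrative stand-in, and the target address is hypothetical.

```python
import socket
import statistics
import time

def connect_latency(host: str, port: int, samples: int = 100):
    """Measure TCP connect round-trip times in milliseconds."""
    rtts = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            rtts.append((time.perf_counter() - t0) * 1000.0)
    return rtts

if __name__ == "__main__":
    rtts = connect_latency("10.0.0.5", 80)  # hypothetical VM address
    print(f"median={statistics.median(rtts):.3f} ms "
          f"p99={sorted(rtts)[int(0.99 * len(rtts))]:.3f} ms")
```

Running the same probe against a bare-metal service and an OpenStack-hosted one, under increasing load, is the basic comparison the thesis draws.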
29

Investigation of an automatic deployment transformation method for OpenStack

Gudipati, Sai Vivek, Tatta, Vishwa Mithra January 2022 (has links)
Cloud computing is the on-demand availability of computer resources provided as a service over a network. OpenStack is open-source cloud computing software. Deploying and operating OpenStack manually is a tedious process. To address this, life-cycle management tools (LCMTs) have been developed. These tools automate the process of deploying OpenStack and can serve as operations and maintenance tools. As OpenStack follows a six-month release cycle, some life-cycle management tools cannot keep up with the releases and end up outdated due to a lack of support from the OpenStack community. This leaves older OpenStack deployments stuck on unsupported life-cycle management tools, which may have bugs and security issues and are often more complicated to manage than newer ones. One way to solve this is to move the OpenStack deployment from one LCMT to another, that is, to migrate the deployment itself. This thesis addresses the issue by identifying the currently popular LCMTs through a secondary analysis of a survey by the OpenStack Foundation, and the existing migration methods through a literature review. Furthermore, the effect of LCMTs on the OpenStack deployment is analysed, and controlled experiments are performed to test non-live migration between OpenStack deployments based on different LCMTs. The results from the OpenStack user survey show that Kolla-Ansible, followed by Puppet and OpenStack-Ansible, are the currently popular LCMTs, based on their usage among the survey participants. The literature review combined with experimentation shows that the existing migration models are limited to the LCMT environments, and that the LCMTs themselves affect the OpenStack deployment through deployment file locations and underlying technologies. We also propose an experimental method for migrating OpenStack from OpenStack-Ansible to Kolla-Ansible via a manual deployment, and vice versa, which can thereby be generalized.
30

Decentralized Authentication in OpenStack Nova : Integration of OpenID

Khan, Rasib Hassan January 2011 (has links)
The evolution of cloud computing is driving the next generation of internet services. OpenStack is one of the largest open-source cloud computing middleware development communities. Currently, OpenStack supports platform-specific signatures and tokens for user authentication. In this thesis, we aim to introduce a platform-independent, flexible, and decentralized authentication mechanism in OpenStack. We selected OpenID as an open-source authentication platform, as it allows a decentralized framework for user authentication. OpenID has its own advantages for web services, including improved usability and a seamless SSO experience for users. This thesis presents OpenID-Authentication-as-a-Service APIs in OpenStack for front-end GUI servers, and performs the authentication in the back-end at a single Policy Decision Point. The design was implemented in OpenStack, allowing users to use their OpenID identifiers from standard OpenID providers to log into the Dashboard/Django-Nova graphical interface of OpenStack.
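One concrete step a back-end Policy Decision Point performs in the OpenID 2.0 flow is stateless assertion verification (check_authentication), where the relying party replays the signed positive assertion to the provider and the provider answers is_valid:true or false. The Python sketch below shows that step; the surrounding service API is not detailed in the abstract, so the function boundary here is an assumption.

```python
import requests

def verify_assertion(op_endpoint: str, assertion_params: dict) -> bool:
    """Return True if the OpenID provider confirms the signed assertion.

    assertion_params are the openid.* fields received from the provider's
    positive assertion (openid.sig, openid.signed, openid.assoc_handle, ...).
    """
    params = dict(assertion_params)
    params["openid.mode"] = "check_authentication"  # replay for verification
    resp = requests.post(op_endpoint, data=params, timeout=10)
    resp.raise_for_status()
    # The provider responds with key:value lines, e.g. "is_valid:true".
    fields = dict(line.split(":", 1)
                  for line in resp.text.splitlines() if ":" in line)
    return fields.get("is_valid", "false").strip() == "true"
```

Centralizing this check at a single decision point, as the thesis describes, lets multiple front-end GUI servers stay free of authentication logic.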
