931 |
Feature-based Configuration Management of Applications in the Cloud / Feature-basierte Konfigurationsverwaltung von Cloud-Anwendungen Luo, Xi 27 June 2013 (has links) (PDF)
Complex business applications are increasingly offered as services over the Internet, so-called Software-as-a-Service (SaaS) applications. SAP NetWeaver Cloud offers an OSGi-based open platform that enables multi-tenant SaaS applications to run in the cloud. A multi-tenant SaaS application is designed so that a single application instance is used by several customers and their users. As different customers have different requirements for the functionality and quality of the application, the application instance must be configurable, and it must be possible to add new configurations to a multi-tenant SaaS application at run-time. In this thesis, we propose concepts for configuration management that are used for managing and creating client configurations of cloud applications. The concepts are implemented in a tool based on Eclipse and extended feature models. In addition, we evaluate our concepts and the applicability of the developed solution in the SAP NetWeaver Cloud using a cloud application as a concrete case example.
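To make the idea of checking a tenant configuration against an extended feature model concrete, here is a minimal sketch in Python. The feature names, attributes, and constraints are hypothetical illustrations, not taken from the thesis or from SAP NetWeaver Cloud; the tool described above works on extended feature models in Eclipse rather than on code like this.

```python
# Minimal sketch of validating a tenant configuration against an
# extended feature model (features with attributes and cross-tree
# constraints). Feature names and attributes are hypothetical.

FEATURES = {
    "Reporting": {"parent": None,        "mandatory": True,  "attributes": {"max_rows": 10_000}},
    "Export":    {"parent": "Reporting", "mandatory": False, "attributes": {}},
    "PdfExport": {"parent": "Export",    "mandatory": False, "attributes": {"dpi": 300}},
    "Audit":     {"parent": None,        "mandatory": False, "attributes": {"retention_days": 90}},
}

# Cross-tree constraints: (kind, feature_a, feature_b), purely illustrative.
CONSTRAINTS = [
    ("requires", "PdfExport", "Export"),
    ("excludes", "Audit", "PdfExport"),
]

def is_valid(selection: set[str]) -> bool:
    """Check a tenant's feature selection against the model."""
    # 1. Selected features must exist and their parents must be selected.
    for f in selection:
        if f not in FEATURES:
            return False
        parent = FEATURES[f]["parent"]
        if parent is not None and parent not in selection:
            return False
    # 2. Mandatory children of selected parents (and mandatory roots) must be selected.
    for name, spec in FEATURES.items():
        if spec["mandatory"] and (spec["parent"] is None or spec["parent"] in selection):
            if name not in selection:
                return False
    # 3. Cross-tree constraints.
    for kind, a, b in CONSTRAINTS:
        if kind == "requires" and a in selection and b not in selection:
            return False
        if kind == "excludes" and a in selection and b in selection:
            return False
    return True

print(is_valid({"Reporting", "Export", "PdfExport"}))  # True
print(is_valid({"Reporting", "PdfExport"}))            # False: missing Export
```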
|
932 |
線上遊戲之雲端服務平台發展研究 - 以台灣Z公司為例 / A research on development of the online game cloud service platform - a case study of Taiwan's Z Company 傅慧娟, Fu, Hui Chuan Unknown Date (has links)
本篇論文將針對線上遊戲產業之雲端運算服務平台之發展進行研究與探討。從線上遊戲產業發展的現況與趨勢中,看到線上遊戲仍是市場的主流,更是付費機制的成熟產品,但也看到新興遊戲如SNS網頁遊戲、App遊戲的衝擊下,線上遊戲已不再是大型遊戲或是只有大型公司才有能力開發製作的產品,而中小型團隊也開始蜂擁加入開發的行列。
在線上遊戲產品開發流程與產業價值鏈的文獻探討中,看到線上遊戲的開發與營運價值單元,且這些價值單元都是非常符合雲端運算的功能與架構,並再透過詳細解析產品開發流程與營運價值鏈的每個環節中去對應雲端運算架構與功能,分析雲端運算對線上遊戲之開發端及消費端所提昇之價值與效益面,以及線上遊戲之雲端運算服務平台的可能功能與服務模式。
線上遊戲因為具備文件、物件資料量龐大、即時運算需求極高、中小型開發團隊須降低軟硬體開發成本需求殷切以及需要共用之伺服器軟硬體環境,還有備援且不允許因維運、修改、除錯等而可能停機或當機之風險產生等特性,使得在這一波雲端運算的潮流與趨勢中,選擇提供線上遊戲開發與營運之雲端運算平台服務,應是一項可預期的發展機會。
本研究最後透過一家已推出線上遊戲雲端運算服務平台的個案研究,再呼應前面線上遊戲與雲端運算功能與架構的關聯框架及分析,最後提出對個案公司應以大量即時運算為核心能力,提供跨業服務、以核心技術發展技轉與合作營運模式、結合HTML5或Yahoo的YUI等技術,全力搶攻新興連線遊戲市場、研發雲端Server端引擎產品等建議外,也對線上遊戲之雲端運算服務平台提出可發展SaaS、PaaS、IaaS等各別專業及可具備彈性且可發展成機動組合的混合型雲端服務模式之結論與建議等。 / This thesis studies the development of cloud computing service platforms in the online game industry. From the current state and trends of the industry, we find that online games remain the mainstream of the market and a mature product with established payment mechanisms. However, under the impact of emerging games such as SNS web games and app games, online games are no longer necessarily large-scale products that only large companies can develop; small and medium-sized teams have also begun to flock into development.
From the literature on online game product development processes and the industry value chain, we identify the value units of online game development and operation and find that these value units fit well with the functions and architecture of cloud computing. By analysing each step of the product development process and the operations value chain and mapping it to the cloud computing architecture and its functions, we examine the value and benefits that cloud computing brings to both the development side and the consumer side of online games, as well as the possible functions and service models of a cloud computing service platform for online games.
Online games handle huge volumes of file and object data, have extremely high real-time computing demands, and involve small and medium-sized development teams with a pressing need to reduce software and hardware costs. They also require shared server software and hardware environments, as well as redundancy, since downtime or crashes caused by maintenance, modification, or debugging cannot be tolerated. Given these characteristics, offering a cloud computing platform service for developing and operating online games is a foreseeable development opportunity in the current wave of cloud computing.
Finally, the study presents a case study of a company that has already launched an online game cloud computing service platform, relating it back to the earlier framework linking online games to cloud computing functions and architecture. We recommend that the case company treat large-scale real-time computing as its core capability, offer cross-industry services, build technology-transfer and joint-operation business models around its core technology, combine technologies such as HTML5 or Yahoo's YUI to compete aggressively in the emerging connected-game market, and develop cloud server-side engine products. We also conclude that an online game cloud computing service platform can be developed as specialised SaaS, PaaS, and IaaS offerings, or as a flexible hybrid cloud service model whose components can be combined dynamically.
|
933 |
The awareness and perception of cloud computing technology by accounting firms in Cape Town Van den Bergh, Jacobus 11 1900 (has links)
Cloud accounting software (CAS) emerged as part of the overall development of cloud computing. The cloud, as it is referred to, has heralded a new age in information technology and offers new and unique opportunities and challenges for organisations of all sizes. The aim of this study was to determine the awareness and perception of cloud computing technology by accounting firms in Cape Town.
The findings of the survey reveal significant awareness of CAS among firm managers and accountants. In some respects there are significant differences between small and medium-large firms regarding their perceptions of CAS. Smaller firms seem to be more positive toward CAS and more agile and capable of deploying it than medium-large firms, and are thereby taking advantage of CAS more effectively.
It is evident from the study that there are opportunities for both small and medium-large firms to make use of CAS in their attempts to grow their businesses and it is important that they become familiar with CAS and the opportunities and threats which it presents. Marketers of CAS products need to consider the firm’s size, as well as the organisational decision-making process for CAS acquisition, which can aid them in their marketing designs. / Business Management / M. Com. (Business Management)
|
934 |
The role of cloud computing in addressing small, medium enterprise challenges in South Africa Kumalo, Nkosi Hugh 08 1900 (has links)
This thesis was motivated by Roberts (2010), who found that 63% of SMEs in South Africa do not make it past the second year of operation. To expand on this problem, we reviewed the literature to understand the key business challenges experienced by SMEs in South Africa that contribute to this high failure rate. The challenges include red tape, labour legislation, lack of skills, lack of innovation, the impact of crime, and lack of funds. The research project aimed to answer a key question: “How can information technology, in the form of Cloud Computing, be used to address the challenges faced by small and medium businesses in South Africa?”
To answer this question, data was collected from 265 SME companies and quantitatively analysed. It is important to note that the profile of SMEs targeted in this study is those that employ fewer than 200 employees, have a turnover of not less than 26 million rand per annum, and are registered with the South African Revenue Service (SARS) and the Companies and Intellectual Property Commission (CIPC) of South Africa. Over 60% of the firms that responded to the survey had been in business for more than 10 years, which means we are mainly dealing with data from businesses that have passed the survivalist stage and are mature businesses. These are businesses that can share the experiences and challenges they faced throughout their journey. The profile of SMEs in this study should not be confused with that of Very Small Medium Enterprise businesses.
The questionnaire was designed to address four themes: Demographic Profile, SME Business Environment, Threat of Survival, and Technology Adoption. A key finding of this research is that 60% of the panellists stated that red tape is the overriding challenge that small businesses contend with. 67% of the panellists confirmed that they had not invested in their businesses in the past year, and 53% stated that they had not applied for finance from a bank for fear of being rejected. Only 30% of the SME market were found to use enterprise resource planning (ERP) systems, and 62% do not have their own IT department. Of great concern is that 65% of the panellists experienced server downtime at least once in the past year. The inability to predict rising IT costs was cited as the main concern when running IT on premises. Cost predictability was also found to be a benefit enjoyed by the SMEs that use Cloud Computing.
The conclusion is that there is a relationship between Cloud Computing, Small and Medium Enterprise businesses, and the challenges they face in their business environment. To address the identified business challenges, technology adoption studies by Gumbi & Mnkandla (2015), Carcary, Doherty & Conway (2014), Lacovou et al (1995), Mohlomeane & Ruxwana (2014), Kshetri (2010), BMI Research (2018), Conway & Curry (2012), Li, Zhao & Yu (2015), Wernefeldt (1985), Schindehuitte & Morris (2001), and Tornatzy & Flesher (1991) were reviewed. From these publications, the Technology, Organisational and Environmental (TOE) framework was found to be relevant and of interest for use in answering the main research question.
This study developed the Cloud Adoption Framework, which anchors all of the identified SME challenges. A key contribution of the study is that the TOE model, which is predominantly used to understand the determinants of technology adoption (of various industry applications, infrastructure innovations, and so on), is now used to address specific challenges that have contributed to the high failure rate of SME businesses. This is the first time the TOE model has been aligned with the key SME challenges that contribute to firm failure. Specific technologies across the Software, Infrastructure and Platform service models are recommended for use by SMEs to mitigate these challenges and improve the chances of survival for SMEs operating in South Africa.
By following the recommended Cloud Adoption Framework, SMEs should be able to navigate both the complexities brought about by a tough operating environment and the technologies available to address those challenges. All six challenges have solutions in Cloud Computing, and SMEs are educated on these solutions and on how to access them on a pay-as-you-use model of consumption. / Business Management / D.B.L.
|
935 |
U-SEA: UM AMBIENTE DE APRENDIZAGEM UBÍQUO UTILIZANDO CLOUD COMPUTING / U-SEA: A UBIQUITOUS LEARNING ENVIRONMENT USING CLOUD COMPUTING Piovesan, Sandra Dutra 05 December 2011 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The spread of virtual learning environments shows great potential for developing applications that meet needs in the education area. In view of the importance of a more dynamic application that can continuously adapt itself to students' needs, U-SEA (Ubiquitous Adapted Teaching System) was proposed and developed. The system was built on the Moodle virtual learning environment and the Mle-Moodle module, made available on a Cloud Computing infrastructure, and its main purpose is adaptation to the student's computing context, considering technical characteristics such as adjusting the environment to the user's connection speed. The results obtained showed the feasibility of working with context-aware systems, improving students' access to materials and tools. / A difusão do uso dos ambientes virtuais de aprendizagem apresenta um grande potencial para desenvolvimento de aplicações que atendam necessidades na área da educação. Tendo em vista a importância de uma aplicação mais dinâmica e que consiga se adaptar continuamente as necessidades dos estudantes, foi proposto e desenvolvido o U-SEA (Sistema de Ensino Adaptado Ubíquo). Esse sistema foi construído com base no ambiente virtual de aprendizagem Moodle e no Módulo Mle-Moodle, disponibilizado em uma infraestrutura de Cloud Computing e tem como principal finalidade a adaptação ao contexto computacional do aluno, vislumbrando características técnicas como a adequação do ambiente a velocidade de conexão do usuário. Os resultados obtidos demonstraram a viabilidade de se trabalhar com sistemas sensíveis ao contexto, trazendo melhorias no acesso dos estudantes aos materiais e ferramentas.
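As an illustration of the kind of context adaptation U-SEA performs (adjusting the environment to the user's connection speed), the following sketch shows one possible adaptation rule. The bandwidth thresholds and content variants are hypothetical and are not taken from the thesis or from Moodle/Mle-Moodle.

```python
# Illustrative sketch of adapting delivered content to the user's measured
# connection speed, in the spirit of U-SEA's context adaptation.
# Thresholds and content variants are hypothetical.

def choose_content_variant(bandwidth_kbps: float) -> str:
    """Pick a content variant for the learning environment."""
    if bandwidth_kbps >= 2_000:      # fast connection
        return "full"                # video lectures, rich media
    if bandwidth_kbps >= 256:        # moderate connection
        return "light"               # slides and audio only
    return "text"                    # text-only materials

for kbps in (5_000, 512, 64):
    print(kbps, "->", choose_content_variant(kbps))
```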
|
936 |
Chiffrement homomorphe et recherche par le contenu sécurisé de données externalisées et mutualisées : Application à l'imagerie médicale et l'aide au diagnostic / Homomorphic encryption and secure content based image retrieval over outsourced data : Application to medical imaging and diagnostic assistance Bellafqira, Reda 19 December 2017 (has links)
La mutualisation et l'externalisation de données concernent de nombreux domaines y compris celui de la santé. Au-delà de la réduction des coûts de maintenance, l'intérêt est d'améliorer la prise en charge des patients par le déploiement d'outils d'aide au diagnostic fondés sur la réutilisation des données. Dans un tel environnement, la sécurité des données (confidentialité, intégrité et traçabilité) est un enjeu majeur. C'est dans ce contexte que s'inscrivent ces travaux de thèse. Ils concernent en particulier la sécurisation des techniques de recherche d'images par le contenu (CBIR) et de « machine learning » qui sont au cœur des systèmes d'aide au diagnostic. Ces techniques permettent de trouver des images semblables à une image requête non encore interprétée. L'objectif est de définir des approches capables d'exploiter des données externalisées et sécurisées, et de permettre à un « cloud » de fournir une aide au diagnostic. Plusieurs mécanismes permettent le traitement de données chiffrées, mais la plupart sont dépendants d'interactions entre différentes entités (l'utilisateur, le cloud voire un tiers de confiance) et doivent être combinés judicieusement de manière à ne pas laisser fuir d'information lors d'un traitement. Au cours de ces trois années de thèse, nous nous sommes dans un premier temps intéressés à la sécurisation à l'aide du chiffrement homomorphe, d'un système de CBIR externalisé sous la contrainte d'aucune interaction entre le fournisseur de service et l'utilisateur. Dans un second temps, nous avons développé une approche de « Machine Learning » sécurisée fondée sur le perceptron multicouches, dont la phase d'apprentissage peut être externalisée de manière sûre, l'enjeu étant d'assurer la convergence de cette dernière. L'ensemble des données et des paramètres du modèle sont chiffrés. Du fait que ces systèmes d'aides doivent exploiter des informations issues de plusieurs sources, chacune externalisant ses données chiffrées sous sa propre clef, nous nous sommes intéressés au problème du partage de données chiffrées. Un problème traité par les schémas de « Proxy Re-Encryption » (PRE). Dans ce contexte, nous avons proposé le premier schéma PRE qui permet à la fois le partage et le traitement des données chiffrées. Nous avons également travaillé sur un schéma de tatouage de données chiffrées pour tracer et vérifier l'intégrité des données dans cet environnement partagé. Le message tatoué dans le chiffré est accessible que l'image soit ou non chiffrée et offre plusieurs services de sécurité fondés sur le tatouage. / Cloud computing has emerged as a successful paradigm allowing individuals and companies to store and process large amounts of data without needing to purchase and maintain their own networks and computer systems. In healthcare, for example, different initiatives aim at sharing medical images and Personal Health Records (PHR) between health professionals or hospitals with the help of the cloud. In such an environment, data security (confidentiality, integrity and traceability) is a major issue. This thesis work takes place in that context; it concerns in particular the security of Content-Based Image Retrieval (CBIR) and machine learning (ML) techniques, which are at the heart of diagnostic decision support systems. These techniques make it possible to find images similar to a query image that has not yet been interpreted. The goal is to define approaches that can exploit secure outsourced data and enable a cloud to provide diagnostic support.
Several mechanisms allow the processing of encrypted data, but most depend on interactions between different entities (the user, the cloud, or a trusted third party) and must be combined judiciously so as not to leak information. During these three years of thesis work, we first focused on securing an outsourced CBIR system under the constraint of no interaction between the users and the service provider (the cloud). In a second step, we developed a secure machine learning approach based on the multilayer perceptron (MLP), whose learning phase can be outsourced in a secure way, the challenge being to ensure the convergence of the MLP. All the data and parameters of the model are encrypted using homomorphic encryption. Because these systems need to use information from multiple sources, each of which outsources its encrypted data under its own key, we are also interested in the problem of sharing encrypted data, a problem addressed by "Proxy Re-Encryption" (PRE) schemes. In this context, we have proposed the first PRE scheme that allows both the sharing and the processing of encrypted data. We also worked on a watermarking scheme for encrypted data in order to trace data and verify their integrity in this shared environment. The embedded message is accessible whether or not the image is encrypted and provides several watermarking-based security services.
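To illustrate the additive homomorphic property that such secure CBIR and secure MLP schemes build on, here is a toy Paillier example in Python: the product of two ciphertexts decrypts to the sum of the plaintexts. The tiny primes make it insecure and purely didactic; it is not the PRE or watermarking scheme proposed in the thesis.

```python
# Toy Paillier example of additive homomorphism: E(m1) * E(m2) mod n^2
# decrypts to m1 + m2. Tiny primes, purely didactic -- NOT secure and
# NOT the scheme proposed in the thesis. Requires Python 3.9+.
import math
import random

p, q = 293, 433                 # toy primes (insecure key size)
n = p * q
n2 = n * n
g = n + 1                       # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)    # Carmichael-style lambda

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (pow with -1 exponent)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(17), encrypt(25)
c_sum = (c1 * c2) % n2                # homomorphic addition on ciphertexts
assert decrypt(c_sum) == 17 + 25      # decrypts to 42
print("E(17) * E(25) decrypts to", decrypt(c_sum))
```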
|
937 |
Optimisation des performances dans les entrepôts distribués avec Mapreduce : traitement des problèmes de partitionnement et de distribution des données / Optimizing data management for large-scale distributed data warehouses using MapReduce Arres, Billel 08 February 2016 (has links)
Dans ce travail de thèse, nous abordons les problèmes liés au partitionnement et à la distribution des grands volumes d'entrepôts de données distribués avec Mapreduce. Dans un premier temps, nous abordons le problème de la distribution des données. Dans ce cas, nous proposons une stratégie d'optimisation du placement des données, basée sur le principe de la colocalisation. L'objectif est d'optimiser les traitements lors de l'exécution des requêtes d'analyse à travers la définition d'un schéma de distribution intentionnelle des données permettant de réduire la quantité des données transférées entre les noeuds lors des traitements, plus précisément lors de la phase de tri (shuffle). Nous proposons dans un second temps une nouvelle démarche pour améliorer les performances du framework Hadoop, qui est l'implémentation standard du paradigme Mapreduce. Celle-ci se base sur deux principales techniques d'optimisation. La première consiste en un pré-partitionnement vertical des données entreposées, réduisant ainsi le nombre de colonnes dans chaque fragment. Ce partitionnement sera complété par la suite par un autre partitionnement d'Hadoop, qui est horizontal, appliqué par défaut. L'objectif dans ce cas est d'améliorer l'accès aux données à travers la réduction de la taille des différents blocs de données. La seconde technique permet, en capturant les affinités entre les attributs d'une charge de requêtes et ceux de l'entrepôt, de définir un placement efficace de ces blocs de données à travers les noeuds qui composent le cluster. Notre troisième proposition traite le problème de l'impact du changement de la charge de requêtes sur la stratégie de distribution des données, du moment que cette dernière dépend étroitement des affinités des attributs des requêtes et de l'entrepôt. Nous avons proposé, à cet effet, une approche dynamique qui permet de prendre en considération les nouvelles requêtes d'analyse qui parviennent au système. Pour pouvoir intégrer l'aspect de "dynamicité", nous avons utilisé un système multi-agents (SMA) pour la gestion automatique et autonome des données entreposées, et cela, à travers la redéfinition des nouveaux schémas de distribution et de la redistribution des blocs de données. Enfin, pour valider nos contributions nous avons conduit un ensemble d'expérimentations pour évaluer nos différentes approches proposées dans ce manuscrit. Nous étudions l'impact du partitionnement et la distribution intentionnelle sur le chargement des données, l'exécution des requêtes d'analyses, la construction de cubes OLAP, ainsi que l'équilibrage de la charge (Load Balancing). Nous avons également défini un modèle de coût qui nous a permis d'évaluer et de valider la stratégie de partitionnement proposée dans ce travail. / In this manuscript, we address the problems of data partitioning and distribution for large-scale data warehouses distributed with MapReduce. First, we address the problem of data distribution. In this case, we propose a strategy to optimize data placement on distributed systems, based on the collocation principle. The objective is to optimize query performance through the definition of an intentional data distribution schema that reduces the amount of data transferred between nodes during processing, specifically during MapReduce's shuffle phase. Secondly, we propose a new approach to improve data partitioning and placement in distributed file systems, especially systems based on Hadoop, the standard implementation of the MapReduce paradigm.
The aim is to overcome the default data partitioning and placement policies, which do not take any relational data characteristics into account. Our proposal proceeds in two steps. Based on the query workload, it first defines an efficient partitioning schema. The system then defines a data distribution schema that best meets users' needs by collocating data blocks on the same or the closest nodes. The objective in this case is to optimize query execution and parallel processing performance by improving data access. Our third proposal addresses the problem of workload dynamicity, since users' analytical needs evolve over time. In this case, we propose the use of multi-agent systems (MAS) as an extension of our data partitioning and placement approach. Through the autonomy and self-control that characterize MAS, we developed a platform that automatically defines new distribution schemas as new queries reach the system and applies data rebalancing according to the new schema. This relieves the system administrator of the burden of managing load balance, besides improving query performance through careful data partitioning and placement policies. Finally, to validate our contributions we conduct a set of experiments to evaluate the different approaches proposed in this manuscript. We study the impact of intentional data partitioning and distribution on the data warehouse loading phase, the execution of analytical queries, OLAP cube construction, and load balancing. We also define a cost model that allows us to evaluate and validate the partitioning strategy proposed in this work.
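As a rough illustration of the collocation principle described above, the sketch below hashes fact and dimension records by their join key so that related blocks land on the same node and a join requires no shuffle. The node count, schemas, and keys are hypothetical; the actual approach also exploits the query workload and a cost model.

```python
# Minimal sketch of join-key collocation: fact and dimension records that
# share a key are assigned to the same partition/node, so joining them
# needs no shuffle. Node count, table schemas, and keys are hypothetical.
import zlib
from collections import defaultdict

NUM_NODES = 4

def node_for(key: str) -> int:
    # deterministic hash so every worker computes the same placement
    return zlib.crc32(key.encode()) % NUM_NODES

fact_rows = [("sale1", "cust42", 99.0), ("sale2", "cust7", 10.5)]
dim_rows  = [("cust42", "Alice"), ("cust7", "Bob")]

placement = defaultdict(list)
for row in fact_rows:
    placement[node_for(row[1])].append(("fact", row))   # partition by join key
for row in dim_rows:
    placement[node_for(row[0])].append(("dim", row))    # same key -> same node

for node, rows in sorted(placement.items()):
    print(f"node {node}: {rows}")
```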
|
938 |
On the mapping of distributed applications onto multiple Clouds / Contributions au placement d'applications distribuées sur multi-clouds De Souza Bento Da Silva, Pedro Paulo 11 December 2017 (has links)
Le Cloud est devenu une plate-forme très répandue pour le déploiement d'applications distribuées. Beaucoup d'entreprises peuvent sous-traiter leurs infrastructures d'hébergement et, ainsi, éviter des dépenses provenant d'investissements initiaux en infrastructure et de maintenance. Des petites et moyennes entreprises, en particulier, attirés par le modèle de coûts sur demande du Cloud, ont désormais accès à des fonctionnalités comme le passage à l'échelle, la disponibilité et la fiabilité, qui avant le Cloud étaient presque réservées à de grandes entreprises. Les services du Cloud peuvent être offerts aux utilisateurs de plusieurs façons. Dans cette thèse, nous nous concentrons sur le modèle d'Infrastructure sous Forme de Service. Ce modèle permet aux utilisateurs d'accéder à des ressources de calcul virtualisés sous forme de machine virtuelles (MVs). Pour installer une application distribuée, un client du Cloud doit d'abord définir l'association entre son application et l'infrastructure. Il est nécessaire de prendre en considération des contraintes de coût, de ressource et de communication pour pouvoir choisir un ensemble de MVs provenant d'opérateurs de Cloud publiques et privés le plus adaptés. Cependant, étant donné la quantité exponentiel de configurations, la définition manuelle de l'association entre application et infrastructure peut être un challenge dans des scénarios à large échelle ou ayant des contraintes importantes de temps. En effet, ce problème est une généralisation du problème de calcul de homomorphisme de graphes, qui est NP-complet. Dans cette thèse, nous adressons le problème de calculer des placements initiaux et de reconfiguration pour des applications distribuées sur potentiellement de multiples Clouds. L'objectif est de minimiser les coûts de location et de migration en satisfaisant des contraintes de ressources et communications. Pour cela, nous proposons des heuristiques performantes capables de calculer des placements de bonne qualité très rapidement pour des scénarios à petite et large échelles. Ces heuristiques, qui sont basées sur des algorithmes de partition de graphes et de vector packing, ont été évaluées en les comparant avec des approches de l'état de l'art comme des solveurs exactes et des méta-heuristiques. Nous montrons en utilisant des simulations que les heuristiques proposées arrivent à calculer des solutions de bonne qualité en quelques secondes tandis que des autres approches prennent des heures ou jours pour les calculer. / The Cloud has become a very popular platform for deploying distributed applications. Today, virtually any credit card holder can have access to Cloud services. There are many different ways of offering Cloud services to customers. In this thesis we focus especially on Infrastructure as a Service (IaaS), a model that usually offers virtualized computing resources to customers in the form of virtual machines (VMs). Thanks to its attractive pay-as-you-use cost model, it is easier for customers, especially small and medium companies, to outsource hosting infrastructures and benefit from savings related to upfront investments and maintenance costs. Customers also gain access to features such as scalability, availability, and reliability, which were previously almost exclusive to large companies. To deploy a distributed application, a Cloud customer must first consider the mapping between her application (or its parts) and the target infrastructure.
She needs to take cost, resource, and communication constraints into consideration to select the most suitable set of VMs from private and public Cloud providers. However, defining a mapping manually may be a challenge in large-scale or time-constrained scenarios, since the number of possible configurations explodes. Furthermore, when automating this process, scalability issues must be taken into account, given that this mapping problem is a generalization of the graph homomorphism problem, which is NP-complete. In this thesis we address the problem of calculating initial and reconfiguration placements for distributed applications over possibly multiple Clouds. Our objective is to minimize renting and migration costs while satisfying applications' resource and communication constraints. We concentrate on the mapping between applications and Cloud infrastructure. Using an incremental approach, we split the problem into three different parts and propose efficient heuristics that can compute good-quality placements very quickly for small and large scenarios. These heuristics are based on graph partitioning and vector packing heuristics and have been extensively evaluated against state-of-the-art approaches such as MIP solvers and meta-heuristics. We show through simulations that the proposed heuristics manage to compute, in a few seconds, solutions that would take other approaches many hours or days to compute.
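To give a feel for the vector-packing side of such placement heuristics, here is a first-fit-decreasing sketch that packs components with (CPU, RAM) demands into the cheapest VM offers that fit. The demands, VM types, and prices are hypothetical, and unlike the thesis heuristics this sketch ignores communication constraints and graph partitioning.

```python
# Sketch of the vector-packing half of the placement problem: components with
# (vCPU, GB RAM) demands are packed first-fit-decreasing into the cheapest VM
# offers that can hold them. Demands, offers, and prices are hypothetical.

components = {"web": (2, 4), "db": (4, 16), "cache": (1, 8), "worker": (2, 2)}
vm_offers  = [("small", (2, 4), 0.05), ("medium", (4, 16), 0.20), ("large", (8, 32), 0.45)]

def pack(components, vm_offers):
    placement, opened = {}, []          # opened: [vm_name, [free_cpu, free_ram], price]
    # Largest components first (first-fit decreasing on total demand).
    order = sorted(components, key=lambda c: sum(components[c]), reverse=True)
    for comp in order:
        cpu, ram = components[comp]
        vm = next((v for v in opened if v[1][0] >= cpu and v[1][1] >= ram), None)
        if vm is None:
            # Open the cheapest VM type that can hold the component.
            name, (c_cap, r_cap), price = min(
                (o for o in vm_offers if o[1][0] >= cpu and o[1][1] >= ram),
                key=lambda o: o[2])
            vm = [f"{name}-{len(opened)}", [c_cap, r_cap], price]
            opened.append(vm)
        vm[1][0] -= cpu
        vm[1][1] -= ram
        placement[comp] = vm[0]
    return placement, sum(v[2] for v in opened)

placement, cost = pack(components, vm_offers)
print(placement, f"hourly cost ~ {cost:.2f}")
```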
|
939 |
Vad är Cloud Computing? : En kvalitativ studie ur ett företagsperspektiv Nordlindh, Mattias, Suber, Kristoffer January 2010 (has links)
Cloud computing is a new buzzword within the IT industry and introduces a whole new way of working with IT. The technique delivers web-based services, so the user no longer needs to install an application locally on a computer. Since the application no longer runs on a local machine but in a data center operated by a service provider, users do not need any specific hardware beyond a computer with an Internet connection. Cloud computing also offers IT infrastructure and development environments as services; these three service types are better known as cloud services. Through the use of different types of cloud services, the need for maintenance and hardware is significantly reduced. The need for IT competence in a company is therefore also reduced, which allows the company to focus on its core business. A problem with cloud computing is that, because it is such a new phenomenon, there is no established definition, which makes the subject hard to understand and easily misunderstood. Cloud computing certainly seems to solve many of the reliability problems with systems and hardware that companies struggle with on a daily basis, but is it really that simple? The purpose of this thesis is to understand which company preconditions affect how well different types of cloud services can be integrated into a company's operations. We also clarify the concept of cloud computing by dividing it into, and describing, its different components. To investigate these preconditions and the companies' approach to cloud services, we have performed interviews at selected companies as part of our case study. The result shows that a cloud service can only be integrated into an organization if the organization possesses the right preconditions. We think that cloud services can bring great advantages to organizations that meet these preconditions and that cloud services have the potential to ease the way organizations work in the future. / Cloud computing är ett nytt trendord inom IT-branschen och innebär ett nytt sätt att arbeta med IT. Tekniken bygger på att användare av en applikation inte behöver installera en applikation på sin lokala dator utan applikationen förmedlas som en tjänst genom Internet. Då applikationen inte körs på någon lokal enhet utan i en datorhall hos tjänsteleverantören behöver inte användaren ha någon mer specifik hårdvara utöver en dator och en Internetanslutning för att ta del av tjänsten. Även IT-infrastruktur och utvecklingsmiljö som tjänst erbjuds på samma sätt inom Cloud computing, dessa tre typer av tjänster kallas för molntjänster. Genom att använda olika typer av molntjänster minskar den interna driften av system, underhåll av hårdvara och således behövs minimal IT-kompetens inom företag, detta tillåter företag att fokusera på sin kärnverksamhet. Då Cloud computing är ett nytt fenomen finns det ingen erkänd definition av begreppet ännu, detta resulterar i att ämnet blir svårförstått och misstolkas lätt. Cloud computing verkar onekligen lösa problematiken med driftsäkerhet som företag tvingas att handskas med dagligen, men är de verkligen så enkelt? Syftet med uppsatsen är att redogöra för vilka förutsättningar som företag besitter som påverkar hur bra olika typer av molntjänster kan integreras i företagets verksamhet. Uppsatsen ska även redogöra för begreppet Cloud computing genom att dela upp och beskriva de olika delar som begreppet består utav.
För att utreda detta har vi bedrivit en fallstudie genom intervjuer hos utvalda företag för att undersöka företagens förutsättningar och förhållningssätt gentemot olika typer av molntjänster. Resultatet visar att de rätta förutsättningarna krävs för att ett företag ska kunna integrera molntjänster i sin verksamhet. Vi anser att molntjänster kan medföra stora fördelar för de företagen som besitter dessa förutsättningar, och att molntjänster har potential att underlätta verksamheten för många organisationer i framtiden.
|
940 |
Autonomie, sécurité et QoS de bout en bout dans un environnement de Cloud Computing / Security, QoS and self-management within an end-to-end Cloud Computing environment Hamze, Mohamad 07 December 2015 (has links)
De nos jours, le Cloud Networking est considéré comme étant l'un des domaines de recherche innovants au sein de la communauté de recherche du Cloud Computing. Les principaux défis dans un environnement de Cloud Networking concernent non seulement la garantie de qualité de service (QoS) et de sécurité mais aussi sa gestion en conformité avec un accord de niveau de service (SLA) correspondant. Dans cette thèse, nous proposons un Framework pour l'allocation des ressources conformément à un SLA établi de bout en bout entre un utilisateur de services Cloud (CSU) et plusieurs fournisseurs de services Cloud (CSP) dans un environnement de Cloud Networking (architectures d'inter-Cloud Broker et Fédération). Nos travaux se concentrent sur les services Cloud de types NaaS et IaaS. Ainsi, nous proposons l'auto-établissement de plusieurs types de SLA ainsi que la gestion autonome des ressources de Cloud correspondantes en conformité avec ces SLA en utilisant des gestionnaires autonomes spécifiques de Cloud. De plus, nous étendons les architectures et les SLA proposés pour offrir un niveau de service intégrant une garantie de sécurité. Ainsi, nous permettons aux gestionnaires autonomes de Cloud d'élargir leurs objectifs de gestion autonome aux fonctions de sécurité (auto-protection) tout en étudiant l'impact de la sécurité proposée sur la garantie de QoS. Enfin, nous validons notre architecture avec différents scénarios de simulation. Nous considérons dans le cadre de ces simulations des applications de vidéoconférence et de calcul intensif afin de leur fournir une garantie de QoS et de sécurité dans un environnement de gestion autonome des ressources du Cloud. Les résultats obtenus montrent que nos contributions permettent de bonnes performances pour ce type d'applications. En particulier, nous observons que l'architecture de type Broker est la plus économique, tout en assurant les exigences de QoS et de sécurité. De plus, nous observons que la gestion autonome des ressources du Cloud permet la réduction des violations, des pénalités et limite l'impact de la sécurité sur la garantie de la QoS. / Today, Cloud Networking is one of the recent research areas within the Cloud Computing research community. The main challenges of Cloud Networking concern not only Quality of Service (QoS) and security guarantees but also their management in conformance with a corresponding Service Level Agreement (SLA). In this thesis, we propose a framework for resource allocation according to an end-to-end SLA established between a Cloud Service User (CSU) and several Cloud Service Providers (CSPs) within a Cloud Networking environment (Inter-Cloud Broker and Federation architectures). We focus on NaaS and IaaS Cloud services. We then propose the self-establishment of several kinds of SLAs and the self-management of the corresponding Cloud resources in conformance with these SLAs, using specific autonomic cloud managers. In addition, we extend the proposed architectures and the corresponding SLAs in order to deliver a service level that takes security guarantees into account. Moreover, we allow autonomic cloud managers to expand their self-management objectives to security functions (self-protection) while studying the impact of the proposed security on the QoS guarantee. Finally, our proposed architecture is validated through different simulation scenarios. Within these simulations, we consider videoconferencing and intensive computing applications in order to provide them with QoS and security guarantees in a Cloud self-management environment.
The results obtained show that our contributions achieve good performance for these applications. In particular, we observe that the Broker architecture is the most economical while still meeting QoS and security requirements. In addition, we observe that Cloud self-management reduces violations and penalties and limits the impact of security on the QoS guarantee.
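As a schematic illustration of what an autonomic manager enforcing such an SLA might do, the sketch below runs a monitor-analyze-plan-execute style step that scales out when an observed latency exceeds the agreed threshold. The metric, threshold, penalty, and scaling action are hypothetical and are not taken from the thesis.

```python
# Schematic sketch of an autonomic manager step that watches a latency target
# from an end-to-end SLA and reacts before violations accrue further penalties.
# Thresholds, the metric source, and the scaling action are hypothetical.
from dataclasses import dataclass

@dataclass
class Sla:
    max_latency_ms: float = 150.0
    penalty_per_violation: float = 5.0   # currency units

def autonomic_step(sla: Sla, observed_latency_ms: float, vm_count: int):
    """One MAPE-style iteration: monitor -> analyze -> plan -> execute."""
    if observed_latency_ms <= sla.max_latency_ms:
        return vm_count, 0.0                       # SLA respected, no action
    penalty = sla.penalty_per_violation
    new_vm_count = vm_count + 1                    # plan: scale out by one VM
    # execute: a real manager would call the provider's API here
    return new_vm_count, penalty

vms, total_penalty = 2, 0.0
for latency in (120, 180, 140, 200):               # simulated observations
    vms, penalty = autonomic_step(Sla(), latency, vms)
    total_penalty += penalty
    print(f"latency={latency}ms vms={vms} penalty so far={total_penalty}")
```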
|