121. Autonomic Failure Identification and Diagnosis for Building Dependable Cloud Computing Systems (Guan, Qiang, 05 1900)
The increasingly popular cloud-computing paradigm provides on-demand access to computing and storage with the appearance of unlimited resources. Users are given access to a variety of data and software utilities to manage their work. Users rent virtual resources and pay for only what they use. In spite of the many benefits that cloud computing promises, the lack of dependability in shared virtualized infrastructures is a major obstacle to its wider adoption, especially for mission-critical applications. Virtualization and multi-tenancy increase system complexity and dynamicity, introducing new sources of failure that degrade the dependability of cloud computing systems. To assure cloud dependability, in my dissertation research I develop autonomic failure identification and diagnosis techniques that are crucial for understanding emergent, cloud-wide phenomena and for self-managing resource burdens, enhancing cloud availability and productivity. We study runtime cloud performance data collected from a cloud test-bed and from traces of production cloud systems. We define cloud signatures comprising the metrics most relevant to failure instances. We exploit profiled cloud performance data in both the time and frequency domains to identify anomalous cloud behaviors, and leverage cloud metric subspace analysis to automate the diagnosis of observed failures. We implement a prototype of the anomaly identification system and conduct experiments on an on-campus cloud computing test-bed and with the Google datacenter traces. Our experimental results show that the proposed anomaly detection mechanism achieves 93% detection sensitivity while keeping the false positive rate as low as 6.1%, outperforming the other anomaly detection schemes tested. In addition, the anomaly detector adapts itself by recursively learning from newly verified detection results to refine future detection.
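The abstract includes no code; as a rough, hedged illustration of the time- and frequency-domain profiling it describes, the Python sketch below flags windows of a synthetic CPU trace whose magnitude spectrum deviates from a baseline learned on early windows. The window size, threshold, and injected fault are hypothetical, not the dissertation's actual signatures.

```python
import numpy as np

def spectral_signature(window: np.ndarray) -> np.ndarray:
    """Frequency-domain profile: magnitude spectrum of a de-meaned window."""
    return np.abs(np.fft.rfft(window - window.mean()))

def detect_anomalies(trace: np.ndarray, win: int = 64, threshold: float = 6.0):
    """Flag metric windows whose spectrum deviates from a learned baseline."""
    windows = [trace[i:i + win] for i in range(0, len(trace) - win + 1, win)]
    sigs = np.array([spectral_signature(w) for w in windows])
    mu, sd = sigs[:10].mean(axis=0), sigs[:10].std(axis=0) + 1e-9  # baseline
    return [np.max(np.abs((s - mu) / sd)) > threshold for s in sigs]

rng = np.random.default_rng(0)
cpu = 0.3 + 0.02 * rng.standard_normal(64 * 20)                  # 'normal' CPU trace
cpu[64 * 15: 64 * 16] += 0.2 * np.sin(np.linspace(0, 20 * np.pi, 64))  # injected fault
print([i for i, flagged in enumerate(detect_anomalies(cpu)) if flagged])
```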
122. Mise en oeuvre d’une plateforme de gestion et de dissémination des connaissances pour des réseaux autonomiques / A knowledge management and dissemination platform for autonomic networks (Souihi, Sami, 03 December 2013)
The growth of the Internet, the emergence of new needs driven by the advent of smart devices (smartphones, touchpads, etc.), and the development of new underlying applications are inducing many changes in the use of information technology in our everyday lives and in every sector. These new uses required rethinking the very foundation of the network architecture, which resulted in the emergence of new concepts based on a "use-centric" view instead of a "network-centric" view. Consequently, the control mechanisms of the transport network must exploit not only information from the data, control and management planes, but also knowledge, acquired or learned by inductive or deductive inference, about the current state of the network (traffic, resources, application rendering, etc.) in order to accelerate decision making by the network's control elements. This thesis deals with this latter aspect, which places it within the broader body of work on autonomic networks. It conceives and implements methods for the management, distribution and exploitation of the knowledge necessary for the proper functioning of the transport network.
The knowledge plane we implemented is based on two ideas: developing management within an adaptive hierarchical structure where only selected nodes are responsible for the dissemination of knowledge, and linking these nodes through a set of specialized spanning overlay networks to facilitate the exploitation of this knowledge. Compared to traditionally used platforms, the one developed in this thesis clearly shows the value of the proposed algorithms in terms of access time, distribution, and load sharing between the control nodes for knowledge management. For validation purposes, our platform was tested on two application examples: cloud computing and smart grids.
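A minimal sketch of the two ideas in this abstract: electing a subset of nodes as knowledge managers and linking them through a spanning overlay. The degree-based election and the MST-over-shortest-paths overlay are assumptions chosen for illustration, not the thesis's actual algorithms.

```python
import networkx as nx

def build_knowledge_overlay(g: nx.Graph, n_managers: int = 3) -> nx.Graph:
    """Elect the highest-degree nodes as knowledge managers, then link them
    with a spanning overlay: an MST over pairwise shortest-path distances."""
    ranked = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
    managers = [node for node, _ in ranked[:n_managers]]
    overlay = nx.Graph()
    for i, u in enumerate(managers):
        for v in managers[i + 1:]:
            overlay.add_edge(u, v, weight=nx.shortest_path_length(g, u, v))
    return nx.minimum_spanning_tree(overlay)

g = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=1)  # toy substrate network
overlay = build_knowledge_overlay(g)
print("managers:", sorted(overlay.nodes), "overlay links:", sorted(overlay.edges))
```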
123. La mobilité du code dans les systèmes embarqués / Code mobility in embedded systems (Djiken, Guy Lahlou, 14 December 2018)
With the advent of nomadism, mobile devices, virtualization and cloud computing in recent years, new problems have arisen concerning ecological considerations, energy management, quality of service, security standards and many other aspects of our societies. To address these problems, we define the concept of a Cloudlet as a local cloud in which devices and their embedded applications can be virtualized. We then design a distributed architecture based on this architectural pattern, tied to cloud computing and the virtualization of resources.
These notions allow us to position our work among other approaches to offloading mobile applications into a Cloudlet. Furthermore, a network of Cloudlets helps secure the activity carried out on a mobile device by offloading embedded applications into a virtual machine running in the Cloudlet, and also supports tracking users as they move. These definitions guided us in writing formal specifications via a higher-order process algebra. They allow the operational semantics to be computed for different case studies based on the Cloudlet concept, and they support a new vision of composing virtual devices that applies to all devices, sensors and actuators. The resulting set of equations constitutes a formal reference definition, not only for prototyping a Cloudlet but also for constructing timed automata. Following the structure of our specifications, we built a timed-automata model of a network of Cloudlets. Using model-checking techniques, we established temporal properties showing that any execution of a mobile application on a mobile device can be offloaded to a Cloudlet, provided the application has a suitable structure. This work led to technical choices resulting in a prototype of such a distributed architecture built on OSGi servers. We provide a software architecture for mobile applications, and we implement the principle of migration to a neighboring Cloudlet and back. Our experiments validate our initial choices and confirm the hypotheses of our work. They allow measurements that assess the cost of offloading to a Cloudlet at runtime, as well as tracking it as the user moves.
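The thesis formalizes offloading with a higher-order process algebra and timed automata; the sketch below only illustrates the underlying offloading decision with a simple latency cost model. All speeds and bandwidths are invented parameters.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required
    data_bytes: float    # state to transfer if offloaded

def should_offload(task: Task,
                   local_speed=1e9,      # device cycles/s (assumed)
                   cloudlet_speed=8e9,   # cloudlet cycles/s (assumed)
                   bandwidth=2e6):       # bytes/s to the cloudlet (assumed)
    """Offload when transfer plus remote execution beats local execution.

    A classic latency-only criterion; the thesis's timed-automata model
    also conditions offloading on the application's structure.
    """
    t_local = task.cycles / local_speed
    t_remote = task.data_bytes / bandwidth + task.cycles / cloudlet_speed
    return t_remote < t_local

print(should_offload(Task(cycles=5e9, data_bytes=1e6)))   # True: compute-heavy
print(should_offload(Task(cycles=1e8, data_bytes=5e7)))   # False: data-heavy
```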
124. Fiabilisation du change dans le Cloud au niveau Platform as a Service / Reliability of changes in cloud environments at PaaS level (Tao, Xinxiu, 29 January 2019)
Microservice architectures are considered a promising way to achieve DevOps in IT organizations, because they split applications into services that can be updated independently of each other. But to protect SLA (Service Level Agreement) properties when updating microservices, DevOps teams have to deal with complex and error-prone scripts of management operations. In this paper, we leverage an architecture-based approach to provide an easy and safe way to update microservices.
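The abstract does not detail the architecture-based update mechanism; as a hedged stand-in, the sketch below performs a canary-style rolling update that probes a hypothetical SLA check before each replacement and rolls everything back on a breach.

```python
import random

def sla_ok(instance: str) -> bool:
    """Hypothetical SLA probe, e.g. p99 latency below a threshold."""
    return random.random() > 0.1   # 90% chance the canary is healthy

def rolling_update(instances, new_version):
    """Replace instances one at a time, rolling back all of them on an SLA breach."""
    updated = []
    for i, old in enumerate(instances):
        candidate = f"{new_version}:{old}"       # stand-in for deploying a new VM
        if not sla_ok(candidate):                # probe before shifting traffic
            for j, previous in updated:          # undo everything done so far
                instances[j] = previous
            raise RuntimeError("SLA breached, update rolled back")
        updated.append((i, old))                 # remember the replaced instance
        instances[i] = candidate
    return instances

try:
    print(rolling_update(["svc-a", "svc-b", "svc-c"], "v2"))
except RuntimeError as err:
    print(err)
```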
125. Implantações de sistemas ERP em cloud computing: um estudo sobre os fatores críticos de sucesso em organizações brasileiras / Implementations of ERP on cloud computing: a study on the critical success factors in Brazilian organizations (Oliveira, Eduardo Thomazim de, 05 December 2012)
The history of ERP systems begins in the 1990s with their adoption by large corporations. Their use has intensified since then, along with their complementary functionality, with the goal of integrating a company's management processes within and beyond the traditional physical space. Their adoption by organizations has been driven by the goal of cost reduction, yet the cost of implementation itself has been a limiting factor. Using ERP in cloud computing proves a viable alternative because it reduces a number of deployment costs. However, implementing ERP in cloud computing influences the format in which the ERP is deployed, and it changes the factors relevant (critical to success) to its implementation and use. This work analyzes the critical success factors in the existing literature and how they proved relevant in cloud computing implementations at the three companies studied. It is a case study carried out using an interview script applied to those responsible, internally and externally, for the implementations at three Brazilian companies with different lines of business and different systems deployed. The work presents concepts related to ERP systems and the critical success factors available in the literature, as well as a characterization of this new cloud computing environment and its relationship with previously documented ERP implementations.
From these results, other studies can follow the evolution of cloud computing tied to ERP or, starting from a larger installed base, segment the analyses and even consolidate implementation methodologies for this new format. Because this is a case study, the conclusions cannot be generalized to all organizations; moreover, the existence of few suppliers and few ERP implementations in the cloud computing format, it being a very recent technology, constitutes another limitation of this study.
126. Resource Management Framework for Volunteer Cloud Computing (Mengistu, Tessema Mindaye, 01 December 2018)
The need for high computing resources is on the rise, despite the exponential increase in the computing capacity of workstations, the proliferation of mobile devices, and the omnipresence of data centers with massive server farms that house tens (if not hundreds) of thousands of powerful servers. This is mainly due to the unprecedented increase in the number of Internet users worldwide and to the Internet of Things (IoT). So far, cloud computing has provided the necessary computing infrastructure for applications, including IoT applications. However, current cloud infrastructures based on dedicated datacenters are expensive to set up: running the infrastructure requires expertise, a great deal of electrical power for cooling the facilities, and redundant supplies of everything in a data center to provide the desired resilience. Moreover, current centralized cloud infrastructures will not suffice for the IoT's network-intensive applications with very fast response requirements. Alternative cloud computing models that depend on the spare resources of volunteer computers are emerging, including volunteer cloud computing, alongside the conventional data-center-based clouds. These alternative cloud models have one characteristic in common: they do not rely on dedicated data centers to provide cloud services. Volunteer clouds are opportunistic cloud systems that run over the donated spare resources of volunteer computers. On the one hand, volunteer clouds claim numerous outstanding advantages: affordability, on-premise operation, self-provisioning, greener computing (owing to the consolidated use of existing computers), etc. On the other hand, a full-fledged implementation of volunteer cloud computing raises unique technical and research challenges: management of highly dynamic and heterogeneous compute resources, Quality of Service (QoS) assurance, meeting Service Level Agreements (SLAs), reliability, and security/trust, all made more difficult by the high dynamics and heterogeneity of the non-dedicated cloud hosts. This dissertation investigates the resource management aspect of volunteer cloud computing. Due to the intermittent availability and heterogeneity of the computing resources involved, resource management is one of the most challenging tasks in volunteer cloud computing. The dissertation focuses specifically on the resource discovery and VM placement tasks of resource management. The resource base on which volunteer cloud computing depends is the scavenged, sporadically available, aggregate computing power of individual volunteer computers. Delivering reliable cloud services over these unreliable nodes is a big challenge in volunteer cloud computing; the fault tolerance of the whole system rests on the reliability and availability of the infrastructure base. This dissertation discusses the modelling of fault-tolerant, prediction-based resource discovery in volunteer cloud computing. It presents a multi-state semi-Markov process model to predict the future availability and reliability of nodes in volunteer cloud systems. A volunteer node is modelled as a semi-Markov process whose future state depends only on its current state. This matches a key observation made in analyzing traces of personal computers in enterprises: the daily patterns of resource availability are comparable to those of the most recent days.
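As a simplified, hedged illustration of prediction-based resource discovery, the sketch below estimates a transition matrix from an observed availability trace and predicts the probability that a node is available a few steps ahead. It uses a plain discrete-time Markov chain; the dissertation's multi-state semi-Markov model additionally accounts for state holding times.

```python
import numpy as np

STATES = ["available", "busy", "offline"]

def estimate_transitions(seq):
    """Estimate a transition matrix from an observed state sequence."""
    idx = {s: i for i, s in enumerate(STATES)}
    counts = np.ones((3, 3))                      # Laplace smoothing
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def availability_in(p, current, steps):
    """Probability the node is 'available' after `steps` transitions."""
    v = np.zeros(3)
    v[STATES.index(current)] = 1.0
    return (v @ np.linalg.matrix_power(p, steps))[0]

trace = ["available"] * 50 + ["busy"] * 10 + ["available"] * 30 + ["offline"] * 5
P = estimate_transitions(trace)
print(f"P(available in 6 steps | busy now) = {availability_in(P, 'busy', 6):.2f}")
```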
The dissertation illustrates, with empirical evidence, how prediction-based resource discovery enables volunteer cloud systems to provide reliable cloud services over unreliable and non-dedicated volunteer hosts. VM placement algorithms play a crucial role in cloud computing in fulfilling its characteristics and achieving its objectives. In general, VM placement is a challenging problem that has been extensively studied in the conventional cloud computing context. Due to its divergent characteristics, volunteer cloud computing needs novel ways of solving existing cloud computing problems, including VM placement. The intermittent availability of nodes, unreliable infrastructure, and resource-constrained nodes are some of the characteristics of volunteer cloud computing that make the VM placement problem more complicated. In this dissertation, we model the VM placement problem as a Bounded 0-1 Multi-Dimensional Knapsack Problem. As this is a known NP-hard problem, the dissertation discusses heuristic-based algorithms that take the typical characteristics of volunteer cloud computing into consideration to solve the VM placement problem formulated as a knapsack problem. Three algorithms are developed to meet the objectives and constraints specific to volunteer cloud computing; tested on a real volunteer cloud computing test-bed, they showed good performance with respect to their optimization objectives. The dissertation also presents the design and implementation of a real volunteer cloud computing system, cuCloud, that bases its resource infrastructure on the donated computing resources of member computers. The need for the development of cuCloud stems from the lack of an experimentation platform, real or simulated, that specifically targets volunteer cloud computing. cuCloud is a system that can be called a genuine volunteer cloud computing system, which manifests the concept of "Volunteer Computing as a Service" (VCaaS), with particular significance for edge computing and related applications. In the course of this dissertation, empirical evaluations show that volunteer clouds can be used to execute a range of applications reliably and efficiently. Moreover, the physical proximity of volunteer nodes to where applications originate, at the edge of the network, helps reduce the round-trip latency of applications. However, the overall computing capability of volunteer clouds will not suffice to handle highly resource-intensive applications by itself. Based on these observations, the dissertation also proposes, as future work, the use of volunteer clouds as a resource fabric in the emerging edge computing paradigm.
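A hedged sketch of the knapsack view of VM placement: a greedy first-fit-decreasing heuristic over two resource dimensions that prefers hosts with higher predicted availability. This is an illustrative stand-in, not one of the dissertation's three algorithms.

```python
def place_vms(vms, hosts):
    """Greedy first-fit-decreasing placement over multi-dimensional capacity.

    vms:   list of dicts {"id", "cpu", "ram"}
    hosts: list of dicts {"id", "cpu", "ram", "avail"} where avail is the
           predicted availability in [0, 1] from the resource-discovery model.
    Returns {vm_id: host_id}; VMs that fit nowhere are omitted.
    """
    placement = {}
    hosts = sorted(hosts, key=lambda h: h["avail"], reverse=True)  # reliable first
    for vm in sorted(vms, key=lambda v: v["cpu"] + v["ram"], reverse=True):
        for h in hosts:
            if h["cpu"] >= vm["cpu"] and h["ram"] >= vm["ram"]:
                h["cpu"] -= vm["cpu"]
                h["ram"] -= vm["ram"]
                placement[vm["id"]] = h["id"]
                break
    return placement

vms = [{"id": "vm1", "cpu": 2, "ram": 4}, {"id": "vm2", "cpu": 4, "ram": 8},
       {"id": "vm3", "cpu": 1, "ram": 2}]
hosts = [{"id": "h1", "cpu": 4, "ram": 8, "avail": 0.7},
         {"id": "h2", "cpu": 4, "ram": 8, "avail": 0.9}]
print(place_vms(vms, hosts))   # vm2 lands on the most-available host h2
```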
127. Resource allocation and scheduling models for cloud computing / Management des données et ordonnancement des tâches sur architectures distribuées (Teng, Fei, 21 October 2011)
Cloud computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and reshaping the way hardware is designed and purchased. In this thesis, we review the new cloud computing technologies and identify the main challenges for their future development, among which the resource management problem stands out and attracts our attention. Building on current scheduling theory, we propose a cloud scheduling hierarchy to deal with the differing requirements of cloud services. On the theoretical side, we address three main research issues. First, we solve the resource allocation problem at the user level of cloud scheduling, proposing game-theoretical algorithms for user bidding and auctioneer pricing. With Bayesian learning prediction, resource allocation can reach a Nash equilibrium among non-cooperative users even when common knowledge is insufficient. Second, we address the task scheduling problem at the system level of cloud scheduling. We prove a new utilization bound for on-line schedulability testing that accounts for the sequential feature of MapReduce, and we deduce the relationship between the cluster utilization bound and the ratio of Map to Reduce.
This new schedulability bound with segmentation improves on the classic bound most widely used in industry. Third, we address the problem of comparing on-line schedulability tests in cloud computing. We propose the concept of test reliability to evaluate the probability that a random task set passes a given schedulability test: the larger the probability, the more reliable the test. From a system perspective, a test with high reliability can guarantee high system utilization. On the practical side, we have developed a simulator that models the MapReduce framework. This simulator offers a simulated environment that MapReduce researchers can use directly: users of SimMapReduce concentrate on their specific research issues without being concerned with the finer implementation details of diverse service models, accelerating the study of new cloud technologies.
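The thesis's MapReduce-specific bound is not reproduced in the abstract; to show what a utilization-bound schedulability test looks like, here is the classic Liu and Layland test for periodic tasks, which admits a task set when total utilization stays below n(2^(1/n) - 1).

```python
def liu_layland_bound(n: int) -> float:
    """Classic utilization bound for n periodic tasks under rate-monotonic scheduling."""
    return n * (2 ** (1.0 / n) - 1)

def schedulable(tasks) -> bool:
    """On-line admission test: tasks are (computation_time, period) pairs."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= liu_layland_bound(len(tasks))

jobs = [(1, 4), (2, 8), (1, 10)]            # (C_i, T_i): total utilization 0.60
print(schedulable(jobs))                    # True: 0.60 <= 3*(2^(1/3)-1) ~ 0.78
```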
128. Scientific High Performance Computing (HPC) Applications on the Azure Cloud Platform (Agarwal, Dinesh, 10 May 2013)
Cloud computing is emerging as a promising platform for compute- and data-intensive scientific applications. Thanks to its on-demand elastic provisioning capabilities, cloud computing has instigated curiosity among researchers from a wide range of disciplines. However, even though many vendors have rolled out commercial cloud infrastructures, the service offerings are usually only best-effort, without any performance guarantees. Utilization of these resources will be questionable if they cannot meet the performance expectations of deployed applications. Additionally, the lack of familiar development tools hampers the productivity of eScience developers writing robust scientific high performance computing (HPC) applications. There are no standard frameworks currently supported by any large set of vendors offering cloud computing services; consequently, application portability among different cloud platforms is hard to achieve for scientific applications. Among all clouds, the emerging Azure cloud from Microsoft remains a particular challenge for HPC program development, both due to its lack of support for traditional parallel programming models such as the Message Passing Interface (MPI) and map-reduce, and due to its evolving application programming interfaces (APIs). We have designed new frameworks and runtime environments to help HPC application developers by providing them with easy-to-use tools similar to those known from traditional parallel and distributed computing settings, such as MPI, for scientific application development on the Azure cloud platform. It is challenging to create an efficient framework for any cloud platform, including Windows Azure, as they are mostly offered to users as a black box with a set of application programming interfaces (APIs) to access the various service components. The primary contributions of this Ph.D. thesis are (i) creating a generic framework for bag-of-tasks HPC applications to serve as the basic building block for application development on the Azure cloud platform, (ii) creating a set of APIs for HPC application development over the Azure cloud platform that are similar to the message passing interface (MPI) of traditional parallel and distributed settings, and (iii) implementing Crayons using the proposed APIs as the first end-to-end parallel scientific application to parallelize fundamental GIS operations.
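The bag-of-tasks pattern named in contribution (i) consists of independent tasks pulled from a shared pool by workers. The sketch below shows the pattern with Python's standard library rather than the Azure APIs the thesis targets; the task function is a hypothetical stand-in.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def render_tile(tile_id: int) -> str:
    """Stand-in for an independent task, e.g. one piece of a GIS map overlay."""
    return f"tile-{tile_id}: done"

def run_bag_of_tasks(task_ids, n_workers=4):
    """Workers drain a bag of independent tasks; completion order does not
    matter, which is what makes the pattern a good fit for elastic clouds."""
    results = []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(render_tile, t) for t in task_ids]
        for f in as_completed(futures):
            results.append(f.result())
    return results

print(len(run_bag_of_tasks(range(100))))   # all 100 independent tasks completed
```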
129. Scalability and performance management of Internet applications in the cloud (Dawoud, Wesam, January 2013)
Cloud computing is a model for enabling on-demand access to a shared pool of computing resources. With virtually limitless on-demand resources, a cloud environment enables a hosted Internet application to cope quickly with increases in workload. However, the overhead of provisioning resources exposes the Internet application to periods of under-provisioning and performance degradation. Moreover, performance interference, due to consolidation in the cloud environment, complicates the performance management of Internet applications.
In this dissertation, we propose two approaches to mitigate the impact of the resource provisioning overhead. The first approach employs control theory to scale resources vertically and cope quickly with workload. It assumes that the provider has knowledge of and control over the platform running in the virtual machines (VMs), which limits it to Platform as a Service (PaaS) and Software as a Service (SaaS) providers. The second approach is a customer-side approach that deals with horizontal scalability in an Infrastructure as a Service (IaaS) model. It addresses the trade-off between cost and performance with a multi-goal optimization solution, finding the scaling thresholds that achieve the highest performance with the lowest increase in cost. Moreover, the second approach employs a proposed time-series forecasting algorithm to scale the application proactively and avoid under-utilization periods. Furthermore, to mitigate the impact of interference on Internet application performance, we developed a system that finds and eliminates the VMs suffering from performance interference. The developed system is a lightweight solution that does not require provider involvement.
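A hedged sketch combining the two ingredients described above, scale thresholds plus a time-series forecast, so that scaling is triggered on predicted rather than current load. The moving-average forecaster and the thresholds are placeholders for the dissertation's algorithm and its optimized values.

```python
def forecast(history, horizon=3, window=5):
    """Naive trend extrapolation; stands in for the thesis's forecasting algorithm."""
    trend = (history[-1] - history[-window]) / (window - 1)
    return history[-1] + trend * horizon

def autoscale(servers, cpu_history, up=70.0, down=30.0):
    """Proactive horizontal scaling: act on predicted CPU utilization."""
    predicted = forecast(cpu_history)
    if predicted > up:
        servers += 1          # add a VM before saturation hits
    elif predicted < down and servers > 1:
        servers -= 1          # release a VM to cut cost
    return servers, predicted

load = [40, 45, 52, 60, 66, 71]           # % CPU, rising workload
print(autoscale(servers=2, cpu_history=load))   # scales out on predicted ~90%
```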
To evaluate our approaches and the designed algorithms at large scale, we developed a simulator called ScaleSim. In the simulator, we implemented scalability components mirroring those of Amazon EC2; the current scalability implementation in Amazon EC2 is used as a reference point for evaluating the improvement in scalable application performance. ScaleSim is fed with realistic models of the RUBiS benchmark extracted from a real environment, with workload generated from the access logs of the 1998 World Cup website. The results show that optimizing the scalability thresholds and adopting proactive scalability can mitigate 88% of the impact of the resource provisioning overhead with only a 9% increase in cost.
130. Technology support and demand for cloud infrastructure services: the role of service providers (Retana Solano, German F., 13 January 2014)
Service providers have long recognized that their customers play a vital role in the service delivery process since they are not only recipients but also producers, or co-producers, of the service delivered. Moreover, in the particular context of self-service technology (SST) offerings, it is widely recognized that customers’ knowledge, skills and abilities in co-producing the service are key determinants of the services’ adoption and usage. However, despite the importance of customers’ capabilities, prior research has not yet paid much attention to the mechanisms by which service providers can influence them and, in turn, how the providers’ efforts affect customers’ use of the service.
This dissertation addresses research questions associated with the role of a provider’s technology support and education in influencing customer use of an SST, namely public cloud computing infrastructure services. The unique datasets used to answer these research questions were collected from one of the major global providers in the cloud infrastructure services industry. This research context offers an excellent opportunity to study the role of technology support since, when adapting the standardized and commoditized components of the cloud service to their individual needs, customers may face important co-production costs that can be mitigated by the provider’s assistance. Specifically, customers must configure their computing servers and deploy their software applications on their own, relying on their own capabilities. Moreover, the cloud’s offering of on-demand computing servers through a fully pay-per-use model allows us to directly observe variation in the actual use customers make of the service.
The first study of this dissertation examines how varying levels of technology support, which differ in the provider's level of participation and assistance in customers' service co-production process, influence the use that customers make of the service. The study matches and compares 20,179 firms that used the service between March 2009 and August 2012 and that over time accessed one of the two levels of support available: full and basic. Using fixed-effects panel data models and a difference-in-differences identification strategy, we find that customers who have access to full support, or accessed it in the past, use (i.e., consume) more of the service than customers who have only accessed basic support. Moreover, the provider's involvement in the co-production process is complementary with firm size, in the sense that larger firms use more of the service than smaller ones when they upgrade from basic to full support. Finally, the provider's co-participation through full support also has a positive influence on the effectiveness with which buyers make use of the service: firms that access full support are more likely to deploy computing architectures that leverage the cloud's advanced features.
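A toy sketch of the difference-in-differences estimation with two-way fixed effects, using statsmodels on a fabricated firm-month panel; the variable names and numbers are illustrative, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated panel: firm 'a' upgrades to full support at month 3, firm 'b' never does.
df = pd.DataFrame({
    "firm":  ["a"] * 6 + ["b"] * 6,
    "month": list(range(6)) * 2,
    "full_support": [0, 0, 0, 1, 1, 1] + [0] * 6,
    "usage": [10, 11, 10, 15, 16, 15] + [10, 10, 11, 10, 11, 10],
})

# Firm and month dummies absorb level differences; the full_support
# coefficient is the difference-in-differences estimate of the support effect.
model = smf.ols("usage ~ full_support + C(firm) + C(month)", data=df).fit()
print(model.params["full_support"])
```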
The second study examines the value of early proactive education, defined as any provider-initiated effort to increase customers' service co-production knowledge and skills immediately after service adoption. The study analyzes the outcome of a field experiment executed by the provider between October and November 2011, during which 366 randomly selected customers, out of the 2,673 customers who adopted during the field experiment period, received the early proactive education treatment. The treatment consisted of a short phone call followed by a support ticket through which the provider offered initial guidance on how to use the basic features of the service. We use survival analysis (i.e., hazard models) to compare the treatment's effect on customer retention, and find that it halves the number of customers who leave the service offering during the first week. We also use count data models to examine the treatment's effect on customers' demand for technology support, and find that treated customers ask about 19.55% fewer questions during the first week of their lifetimes than the controls.
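A minimal sketch of the retention comparison using Kaplan-Meier survival curves from the lifelines library, on synthetic lifetimes; the study itself uses hazard models on the provider's field-experiment data.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
# Synthetic customer lifetimes (days); treated customers churn later on average.
treated = rng.exponential(scale=60, size=300)
control = rng.exponential(scale=30, size=300)
observed = np.ones(300)            # 1 = churn observed (no censoring here)

kmf = KaplanMeierFitter()
kmf.fit(control, event_observed=observed, label="control")
p_control = kmf.predict(7)         # survival probability at day 7
kmf.fit(treated, event_observed=observed, label="treated")
p_treated = kmf.predict(7)
print(f"retention at one week: control={p_control:.2f}, treated={p_treated:.2f}")
```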