161

Model-driven software engineering for virtual machine images provisioning in cloud computing / L'ingénierie de logiciel dirigée par les modèles pour l'approvisionnement des images de machines virtuelles dans le cloud computing

Le, Nhan Tam, 10 December 2013
The Cloud Computing Infrastructure-as-a-Service (IaaS) layer provides a service for on-demand deployment of virtual machine images (VMIs). This service provides a flexible platform for cloud users to develop, deploy, and test their applications. Deploying a VMI typically involves booting the image and installing and configuring software packages. In the traditional approach, when a cloud user requests a new platform, the cloud provider selects an appropriate template image to clone and deploy on the cloud nodes. The template image contains pre-installed software packages. If it does not fit the requirements, it is customized, or a new image is created from scratch to fit the request. In the context of cloud service management, the traditional approach faces the difficult issues of handling the complexity of interdependencies between software packages and of scaling and maintaining the deployed image at runtime. Cloud providers would like to automate this process to improve the performance of VMI provisioning and to give cloud users more flexibility in selecting or creating appropriate images, while maximizing the benefits for providers in terms of time, resources, and operational cost. This thesis proposes a Model-Driven approach to manage the interdependencies of software packages, to model and automate the VMI deployment process, and to support VMI reconfiguration at runtime. We particularly address the following challenges: (1) modeling the variability of virtual machine image configurations; (2) reducing the amount of data transferred through the network; (3) optimizing the power consumption of virtual machines; (4) ease of use for cloud users; (5) automating the deployment of VMIs; (6) supporting the scaling and reconfiguration of VMIs at runtime; (7) handling complex VMI deployment topologies. In our approach, we use Model-Driven Engineering techniques to model abstract representations of VMI configurations and of the deployment and reconfiguration processes. We treat VMIs as a product line and use feature models to represent their configurations. We also define the deployment and reconfiguration processes and their factors (e.g., virtual machine images, software packages, platform, deployment topology) as models. Moreover, the Model-Driven approach relies on these high-level abstractions of VMI configuration and deployment to make the management of virtual images in the provisioning process more flexible and easier than in traditional approaches.
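To illustrate the feature-model idea sketched in this abstract — treating VMIs as a product line whose configurations must respect software package interdependencies — the following Python fragment validates a selected feature set against simple requires/excludes constraints. The feature names and constraints are invented for illustration and are not taken from the thesis.

```python
# Hypothetical feature model for a VMI product line: each feature may
# require or exclude other features (software package interdependencies).
FEATURES = {
    "base-os":    {"requires": set(),       "excludes": set()},
    "java":       {"requires": {"base-os"}, "excludes": set()},
    "tomcat":     {"requires": {"java"},    "excludes": set()},
    "mysql":      {"requires": {"base-os"}, "excludes": set()},
    "postgresql": {"requires": {"base-os"}, "excludes": {"mysql"}},
}

def validate_configuration(selected):
    """Check a VMI configuration (a set of features) against the model."""
    errors = []
    for feature in selected:
        spec = FEATURES[feature]
        missing = spec["requires"] - selected
        if missing:
            errors.append(f"{feature} requires {sorted(missing)}")
        conflicts = spec["excludes"] & selected
        if conflicts:
            errors.append(f"{feature} excludes {sorted(conflicts)}")
    return errors

# Example: a web-application image with an application server and a database.
config = {"base-os", "java", "tomcat", "mysql"}
print(validate_configuration(config) or "configuration is valid")
```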
162

Supporting system deployment decisions in public clouds

Khajeh-Hosseini, Ali, January 2013
Decisions to deploy IT systems on public Infrastructure-as-a-Service clouds can be complicated as evaluating the benefits, risks and costs of using such clouds is not straightforward. The aim of this project was to investigate the challenges that enterprises face when making system deployment decisions in public clouds, and to develop vendor-neutral tools to inform decision makers during this process. Three tools were developed to support decision makers: 1. Cloud Suitability Checklist: a simple list of questions to provide a rapid assessment of the suitability of public IaaS clouds for a specific IT system. 2. Benefits and Risks Assessment tool: a spreadsheet that includes the general benefits and risks of using public clouds; this provides a starting point for risk assessment and helps organisations start discussions about cloud adoption. 3. Elastic Cost Modelling: a tool that enables decision makers to model their system deployment options in public clouds and forecast their costs. These three tools collectively enable decision makers to investigate the benefits, risks and costs of using public clouds, and effectively support them in making system deployment decisions. Data was collected from five case studies and hundreds of users to evaluate the effectiveness of the tools. This data showed that the cost effectiveness of using public clouds is situation dependent rather than universally less expensive than traditional forms of IT provisioning. Running systems on the cloud using a traditional 'always on' approach can be less cost effective than on-premise servers, and the elastic nature of the cloud has to be considered if costs are to be reduced. Decision makers have to model the variations in resource usage and their systems' deployment options to obtain accurate cost estimates. Performing upfront cost modelling is beneficial as there can be significant cost differences between different cloud providers, and different deployment options within a single cloud. During such modelling exercises, the variations in a system's load (over time) must be taken into account to produce more accurate cost estimates, and the notion of elasticity patterns that is presented in this thesis provides one simple way to do this.
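To make the 'always on' versus elastic cost comparison concrete, here is a minimal cost-modelling sketch in Python. The hourly rate, load pattern and instance counts are illustrative assumptions only, not figures from the thesis, its case studies or any particular provider.

```python
HOURLY_RATE = 0.10      # assumed price per instance-hour (illustrative only)
HOURS_PER_MONTH = 730

# Assumed diurnal load pattern: instances needed per hour of the day.
instances_needed = [2] * 8 + [10] * 10 + [4] * 6   # 24 hourly values

# 'Always on': provision for the daily peak around the clock.
always_on_cost = max(instances_needed) * HOURLY_RATE * HOURS_PER_MONTH

# Elastic: pay only for the instance-hours actually used.
daily_instance_hours = sum(instances_needed)
elastic_cost = daily_instance_hours * HOURLY_RATE * (HOURS_PER_MONTH / 24)

print(f"always-on: ${always_on_cost:,.2f}/month")
print(f"elastic:   ${elastic_cost:,.2f}/month")
```

The gap between the two figures depends entirely on how peaked the load pattern is, which is the point the thesis makes about modelling load variations before committing to a deployment option.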
163

Sicheres Cloud Computing in der Praxis: Identifikation relevanter Kriterien zur Evaluierung der Praxistauglichkeit von Technologieansätzen im Cloud Computing Umfeld mit dem Fokus auf Datenschutz und Datensicherheit

Reinhold, Paul, 02 February 2017
This dissertation examines various requirements for secure cloud computing. In particular, it analyses existing research and solution approaches for protecting data and processes in cloud environments and assesses their suitability for practical use. The basis for comparability is a set of specified criteria against which the investigated technologies are evaluated. The main goal of this work is to show how technical research approaches can be compared in order to enable an assessment of their suitability in practice. To this end, relevant sub-areas of cloud computing security are first identified, their solution strategies are discussed in the context of this work, and state-of-the-art methods are evaluated. The statement on practical suitability is derived from the ratio of the potential benefit to the associated expected costs. The potential benefit is defined as the combination of the performance, security and functionality offered by the technology under investigation. For an objective evaluation, these three quantities are composed of specified criteria whose information is taken directly from the research works examined. The expected costs are derived from cost keys for technology, operation and development. This work explains and evaluates in detail the specified evaluation criteria as well as the relationship between the concepts introduced above. To better estimate suitability in practice, an adapted SWOT analysis is carried out for the identified relevant sub-areas. Alongside the definition of the practicability statement, this constitutes the second innovation of this work. The concrete goal of this analysis is to increase comparability between the sub-areas and thus to improve strategic planning for the development of secure cloud computing solutions.
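To make the benefit-to-cost relation concrete, here is a minimal Python sketch of a practicability score computed as the ratio of a weighted benefit (performance, security, functionality) to the summed cost keys (technology, operation, development). The scores, weights and cost values are invented for illustration; the dissertation's actual criteria and weighting are not reproduced here.

```python
# Illustrative benefit scores (0-10) for one evaluated technology,
# aggregated from performance, security and functionality criteria.
benefit_scores = {"performance": 6.5, "security": 8.0, "functionality": 5.0}
benefit_weights = {"performance": 0.4, "security": 0.4, "functionality": 0.2}

# Illustrative expected costs, split by the cost keys named in the abstract.
costs = {"technology": 3.0, "operation": 2.5, "development": 4.0}

potential_benefit = sum(benefit_scores[k] * benefit_weights[k] for k in benefit_scores)
expected_cost = sum(costs.values())

# Practical suitability as the ratio of potential benefit to expected cost.
practicability = potential_benefit / expected_cost
print(f"benefit={potential_benefit:.2f}, cost={expected_cost:.2f}, "
      f"practicability={practicability:.2f}")
```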
164

Application of Amazon Web Services in software development

Werlinder, Marcus; Tham, Emelie, January 2018
In recent years, cloud computing and cloud services have gained traction, most notably among companies. Amazon has proven to be one of the powerhouses in providing scalable and flexible cloud computing services. However, cloud computing is still a relatively new area, and from an outsider's point of view the overwhelming amount of information and available services can be difficult to become familiar with. The aim of this thesis is to explore how Amazon Web Services can be applied during software development and to observe how difficult these services might be to use. Three test applications that utilized different Amazon Web Services were implemented to gain insight into how Amazon Web Services can be applied from a cloud computing beginner's point of view. These applications were developed iteratively, with a case study performed on each application. At the start of each new iteration a literature study was conducted, in which sources were reviewed to see whether they provided essential information. In total, nine different Amazon Web Services were used to implement and test the three test applications. The results of the case study were interpreted and evaluated with regard to the learnability and applicability of Amazon Web Services. Issues identified during the development process showed that Amazon Web Services were not user-friendly for users with little to no experience of cloud computing services. Further research on other Amazon Web Services, such as Elastic Compute Cloud, as well as on other cloud computing platforms such as Google or IBM, may provide a deeper and more accurate insight into the application of cloud computing.
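The abstract does not name the nine services that were used, but as a flavour of what applying an Amazon Web Service from code can look like, here is a small sketch using the boto3 SDK with Amazon S3. The bucket and object names are invented, and configured AWS credentials are assumed.

```python
import boto3

# Create an S3 client (credentials and region are read from the environment
# or the standard AWS configuration files).
s3 = boto3.client("s3")

# Upload a local file to an (assumed, pre-existing) bucket.
s3.upload_file("report.pdf", "example-thesis-bucket", "reports/report.pdf")

# Generate a time-limited download link for the uploaded object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-thesis-bucket", "Key": "reports/report.pdf"},
    ExpiresIn=3600,
)
print(url)
```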
165

A Cloud Computing Framework for Computer Science Education

Aldakheel, Eman A., 06 December 2011
No description available.
166

Investigating performance and energy efficiency on a private cloud

Smith, James William, January 2014
Organizations are turning to private clouds due to concerns about security, privacy and administrative control. They are attracted by the flexibility and other advantages of cloud computing but are wary of breaking decades-old institutional practices and procedures. Private clouds can help to alleviate these concerns by retaining security policies and in-organization ownership and by providing increased accountability when compared with public services. This work investigates how it may be possible to develop an energy-aware private cloud system able to adapt workload allocation strategies so that overall energy consumption is reduced without loss of performance or dependability. Current literature focuses on consolidation as a method for improving the energy efficiency of cloud systems, but if consolidation is undesirable due to performance penalties, dependability or latency, then another approach is required. Given a private cloud in which the set of machines is constant, with no machines being powered down in response to changing workloads, and a set of virtual machines to run, each with different characteristics and profiles, it is possible to vary the virtual machine placement mix to reduce energy consumption or improve the performance of the VMs. Through a series of experiments this work demonstrates that workload mixes can have an effect on energy consumption and on the performance of applications running inside virtual machines. These experiments took the form of measuring the performance and energy usage of applications running inside virtual machines. The arrangement of these virtual machines on their hosts was varied to determine the effect of different workload mixes. The insights from these experiments have been used to create a proof-of-concept custom VM Allocator system for the OpenStack private cloud computing platform. Using CloudMonitor, a lightweight monitoring application that gathers data on system performance and energy consumption, the implementation uses a holistic view of the private cloud state to inform workload placement decisions.
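A deliberately simplified sketch of the workload-mixing idea: place VMs so that CPU-bound and I/O-bound profiles are interleaved across hosts rather than consolidated. This is not the thesis's OpenStack VM Allocator; the VM profiles and the greedy balancing heuristic are assumptions made for illustration.

```python
from collections import defaultdict

# Assumed VM profiles: name -> dominant resource demand.
vms = {
    "web-1": "cpu", "web-2": "cpu", "batch-1": "cpu",
    "db-1": "io", "db-2": "io", "log-1": "io",
}
hosts = ["host-a", "host-b", "host-c"]

def mix_aware_placement(vms, hosts):
    """Greedy placement: put each VM on the host with the fewest VMs of the
    same profile, so CPU-bound and I/O-bound work is spread evenly."""
    placement = defaultdict(list)
    for vm, profile in sorted(vms.items()):
        target = min(
            hosts,
            key=lambda h: sum(1 for v in placement[h] if vms[v] == profile),
        )
        placement[target].append(vm)
    return dict(placement)

for host, assigned in mix_aware_placement(vms, hosts).items():
    print(host, assigned)
```

With these inputs each host ends up with one CPU-bound and one I/O-bound VM, which is the kind of mixed arrangement the experiments in the thesis compare against consolidated placements.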
167

Ad hoc cloud computing

McGilvary, Gary Andrew, January 2014
Commercial and private cloud providers offer virtualized resources via a set of co-located and dedicated hosts that are exclusively reserved for the purpose of offering a cloud service. While both cloud models appeal to the mass market, there are many cases where outsourcing to a remote platform or procuring an in-house infrastructure may not be ideal or even possible. To offer an attractive alternative, we introduce and develop an ad hoc cloud computing platform to transform spare resource capacity from an infrastructure owner's locally available, but non-exclusive and unreliable, infrastructure into an overlay cloud platform. The foundation of the ad hoc cloud relies on transferring and instantiating lightweight virtual machines on demand upon near-optimal hosts, while virtual machine checkpoints are distributed in a P2P fashion to other members of the ad hoc cloud. Virtual machines found to be non-operational are restored elsewhere, ensuring the continuity of cloud jobs. In this thesis we investigate the feasibility, reliability and performance of ad hoc cloud computing infrastructures. We first show that the combination of volunteer computing and virtualization is the backbone of the ad hoc cloud. We outline the process of virtualizing the volunteer system BOINC to create V-BOINC. V-BOINC distributes virtual machines to volunteer hosts, allowing volunteer applications to be executed in a sandbox environment and addressing many of the shortcomings of BOINC; this also provides the basis for an ad hoc cloud computing platform to be developed. We detail the challenges of transforming V-BOINC into an ad hoc cloud and outline the transformational process and integrated extensions. These include a BOINC job submission system, cloud job and virtual machine restoration schedulers, and a periodic P2P checkpoint distribution component. Furthermore, as current monitoring tools are unable to cope with the dynamic nature of ad hoc clouds, a dynamic infrastructure monitoring and management tool called the Cloudlet Control Monitoring System is developed and presented. We evaluate each of our individual contributions as well as the reliability, performance and overheads associated with an ad hoc cloud deployed on a realistically simulated unreliable infrastructure. We conclude that the ad hoc cloud is not only a feasible concept but also a viable computational alternative that offers high levels of reliability and can at least offer reasonable performance, which at times may exceed the performance of a commercial cloud infrastructure.
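A toy sketch of the restoration scheduling idea described above — detect hosts that are no longer operational and restore their VMs elsewhere from previously distributed checkpoints. The host names, checkpoint bookkeeping and liveness test are invented; this is not the V-BOINC or ad hoc cloud implementation itself.

```python
import random

# Assumed cluster state: host -> set of running VM ids, plus a P2P-style
# map of which hosts hold a checkpoint for each VM.
running = {"host-a": {"vm-1", "vm-2"}, "host-b": {"vm-3"}, "host-c": set()}
checkpoints = {"vm-1": ["host-b"], "vm-2": ["host-c"], "vm-3": ["host-a"]}

def host_is_alive(host):
    """Stand-in liveness check (a real system would probe the host)."""
    return host != "host-a"          # pretend host-a has just failed

def restore_failed_vms(running, checkpoints):
    """Move VMs from dead hosts to a live host holding their checkpoint."""
    for host in list(running):
        if host_is_alive(host):
            continue
        for vm in list(running.pop(host)):
            candidates = [h for h in checkpoints.get(vm, []) if host_is_alive(h)]
            if not candidates:
                print(f"{vm}: no usable checkpoint, job must restart")
                continue
            target = random.choice(candidates)
            running.setdefault(target, set()).add(vm)
            print(f"{vm}: restored on {target} from its checkpoint")

restore_failed_vms(running, checkpoints)
```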
168

AUTOMATION OF A CLOUD HOSTED APPLICATION: Performance, Automated Testing, Cloud Computing

Penmetsa, Jyothi Spandana, January 2016
Context: Software testing is the process of assessing the quality of a software product to determine whether it matches the customer's existing requirements. Software testing is one of the "Verification and Validation" (V&V) software practices. The two basic techniques of software testing are black-box testing and white-box testing. Black-box testing focuses solely on the outputs generated in response to the inputs supplied, neglecting the internal components of the software, whereas white-box testing focuses on the internal mechanism of the software. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed in order to reduce manual effort and to perform testing continuously, thereby increasing the quality of the product. Objectives: In this research, a cloud-hosted application is automated using the TestComplete tool. The objective of this thesis is to verify the functionality of the cloud application, such as its test appliance library, through automation, and to measure the impact of the automation on the organisation's release cycles. Methods: Automation is implemented using Scrum, an agile software development process. Using Scrum, working software can be delivered to customers incrementally and empirically, with functionality updated in each increment. The test appliance library functionality is verified by deploying a testing device and keeping track of automatic software downloads to the device and of license updates on the device. Results: The test appliance functionality of the cloud-hosted application is automated using the TestComplete tool, and the impact on release cycles is a reduction of nearly 24%, thereby reducing manual effort and increasing the quality of delivery. Conclusion: Automating a cloud-hosted application removes manual effort, so that time can be utilised effectively and the application can be tested continuously, increasing efficiency.
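As a small illustration of the black-box idea above — testing purely through inputs and outputs, with no knowledge of internals — here is a hypothetical Python unittest sketch; the function under test and its specification are invented and unrelated to the thesis's TestComplete setup.

```python
import unittest

def discounted_price(total):
    """Hypothetical function under test (stand-in for the real system):
    orders of 100 or more get a 10% discount; negative totals are rejected."""
    if total < 0:
        raise ValueError("total must be non-negative")
    return total * 0.9 if total >= 100.0 else total

class DiscountBlackBoxTest(unittest.TestCase):
    """Black-box tests: written only against the specification above."""

    def test_no_discount_below_threshold(self):
        self.assertEqual(discounted_price(50.0), 50.0)

    def test_discount_applied_at_threshold(self):
        self.assertAlmostEqual(discounted_price(100.0), 90.0)

    def test_negative_total_is_rejected(self):
        with self.assertRaises(ValueError):
            discounted_price(-1.0)

if __name__ == "__main__":
    unittest.main()
```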
169

Workload characterization, controller design and performance evaluation for cloud capacity autoscaling

Ali-Eldin Hassan, Ahmed, January 2015
This thesis studies cloud capacity auto-scaling, or how to provision and release resources to a service running in the cloud based on its actual demand using an automatic controller. As the performance of server systems depends on the system design, the system implementation, and the workloads the system is subjected to, we focus on these aspects with respect to designing auto-scaling algorithms. Towards this goal, we design and implement two auto-scaling algorithms for cloud infrastructures. The algorithms predict the future load for an application running in the cloud. We discuss the different approaches to designing an auto-scaler that combines reactive and proactive control methods and that can handle long-running requests, e.g., tasks running for longer than the actuation interval, in a cloud. We compare the performance of our algorithms with state-of-the-art auto-scalers and evaluate the controllers' performance with a set of workloads. As any controller is designed with an assumption on the operating conditions and system dynamics, the performance of an auto-scaler varies with different workloads. In order to better understand workload dynamics and evolution, we analyze a 6-year-long workload trace of the sixth most popular Internet website. In addition, we analyze a workload from one of the largest Video-on-Demand streaming services in Sweden. We discuss the popularity of objects served by the two services, the spikes in the two workloads, and the invariants in the workloads. We also introduce a measure for the disorder in a workload, i.e., the amount of burstiness. The measure is based on Sample Entropy, an empirical statistic used in biomedical signal processing to characterize biomedical signals. The introduced measure can be used to characterize workloads based on their burstiness profiles. We compare our measure with the literature on quantifying burstiness in a server workload, and show its advantages. To better understand the tradeoffs between using different auto-scalers with different workloads, we design a framework to compare auto-scalers and give probabilistic guarantees on the performance in worst-case scenarios. Using different evaluation criteria and more than 700 workload traces, we compare six state-of-the-art auto-scalers that we believe represent the development of the field in the past 8 years. Knowing that the auto-scalers' performance depends on the workloads, we design a workload analysis and classification tool that assigns a workload to its most suitable elasticity controller out of a set of implemented controllers. The tool has two main components: an analyzer and a classifier. The analyzer analyzes a workload and feeds the analysis results to the classifier. The classifier assigns a workload to the most suitable elasticity controller based on the workload characteristics and a set of predefined business-level objectives. The tool is evaluated with a set of collected real workloads and a set of generated synthetic workloads. Our evaluation results show that the tool can help a cloud provider to improve the QoS provided to the customers.
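Sample Entropy itself is a standard statistic, so a direct implementation can illustrate the burstiness measure described above (the thesis builds its own measure on top of SampEn; that adaptation is not reproduced here). A minimal NumPy sketch, assuming a univariate workload trace such as requests per minute and the common defaults m = 2 and r = 0.2·std:

```python
import numpy as np

def sample_entropy(series, m=2, r=None):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates
    within tolerance r (Chebyshev distance) and A the same for length m+1."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)          # common default tolerance
    num_templates = n - m            # same template count for m and m+1

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(num_templates)])
        matches = 0
        for i in range(num_templates - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            matches += int(np.sum(dist <= r))
        return matches

    b = count_matches(m)
    a = count_matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -np.log(a / b)

# A regular trace typically scores lower (more predictable) than a noisy one.
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
bursty = np.random.default_rng(0).poisson(lam=5, size=500).astype(float)
print("regular:", round(sample_entropy(regular), 3))
print("noisy:  ", round(sample_entropy(bursty), 3))
```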
170

Google App Engine case study: a micro blogging site

Kajita, Marcos Suguru, 27 August 2010
Cloud computing refers to the combination of large-scale hardware resources at datacenters, integrated by system software that provides services, commonly known as Software-as-a-Service (SaaS), over the Internet. As a result of more affordable datacenters, cloud computing is slowly making its way into the mainstream business arena and has the potential to revolutionize the IT industry. As more cloud computing solutions become available, it is expected that there will be a shift to what is sometimes referred to as the Web Operating System. The Web Operating System, along with the sense of infinite computing resources on the "cloud", has the potential to bring new challenges in software engineering. The motivation of this report, which is divided into two parts, is to understand these challenges. The first part gives a brief introduction to and analysis of cloud computing. The second part focuses on Google's cloud computing platform and evaluates the implementation of a micro blogging site using Google App Engine.
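As a flavour of what such an implementation can look like, here is a minimal sketch in the style of the first-generation App Engine Python runtime, using the webapp2 framework and the ndb Datastore API (the 2010 report itself may have used the older webapp framework and a different data model); the handler, model and route names are invented.

```python
import webapp2
from google.appengine.ext import ndb

class Post(ndb.Model):
    """A single micro-blog entry stored in the App Engine Datastore."""
    author = ndb.StringProperty()
    content = ndb.StringProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

class TimelineHandler(webapp2.RequestHandler):
    def get(self):
        # Show the 20 most recent posts, newest first.
        posts = Post.query().order(-Post.created).fetch(20)
        self.response.headers["Content-Type"] = "text/plain"
        for post in posts:
            self.response.write("%s: %s\n" % (post.author, post.content))

    def post(self):
        # Create a new post from the submitted form fields.
        Post(author=self.request.get("author"),
             content=self.request.get("content")).put()
        self.response.write("ok")

app = webapp2.WSGIApplication([("/timeline", TimelineHandler)], debug=True)
```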
