591

Autonomic Failure Identification and Diagnosis for Building Dependable Cloud Computing Systems

Guan, Qiang
The increasingly popular cloud-computing paradigm provides on-demand access to computing and storage with the appearance of unlimited resources. Users are given access to a variety of data and software utilities to manage their work, and they rent virtual resources and pay for only what they use. In spite of the many benefits that cloud computing promises, the lack of dependability in shared virtualized infrastructures is a major obstacle to its wider adoption, especially for mission-critical applications. Virtualization and multi-tenancy increase system complexity and dynamicity, and they introduce new sources of failure that degrade the dependability of cloud computing systems. To assure cloud dependability, this dissertation develops autonomic failure identification and diagnosis techniques that are crucial for understanding emergent, cloud-wide phenomena and for self-managing resource burdens, enhancing cloud availability and productivity. We study runtime cloud performance data collected from a cloud test-bed and from traces of production cloud systems. We define cloud signatures that include the metrics most relevant to failure instances. We exploit profiled cloud performance data in both the time and frequency domains to identify anomalous cloud behaviors, and we leverage cloud metric subspace analysis to automate the diagnosis of observed failures. We implement a prototype of the anomaly identification system and conduct experiments on an on-campus cloud computing test-bed and with the Google datacenter traces. Our experimental results show that the proposed anomaly detection mechanism achieves 93% detection sensitivity while keeping the false positive rate as low as 6.1%, outperforming the other anomaly detection schemes tested. In addition, the anomaly detector adapts itself by recursively learning from newly verified detection results to refine future detection.
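The abstract above mentions profiling performance data in both the time and frequency domains to flag anomalous cloud behaviors. As a rough illustration of the frequency-domain idea only (not the dissertation's actual algorithm; the metric, window length and threshold below are assumptions), a minimal Python sketch might compare the spectral signature of a metric window against a baseline learned from fault-free windows:

    import numpy as np

    def spectral_signature(window: np.ndarray) -> np.ndarray:
        """Normalised magnitude spectrum of one metric window (e.g. CPU utilisation samples)."""
        spectrum = np.abs(np.fft.rfft(window - window.mean()))
        norm = np.linalg.norm(spectrum)
        return spectrum / norm if norm > 0 else spectrum

    def fit_baseline(healthy_windows):
        """Per-bin mean and spread of signatures taken from fault-free profiling runs."""
        sigs = np.array([spectral_signature(w) for w in healthy_windows])
        return sigs.mean(axis=0), sigs.std(axis=0) + 1e-9

    def is_anomalous(window, baseline_mean, baseline_std, threshold=3.0):
        """Flag a window whose spectrum deviates strongly from the healthy baseline."""
        z = np.abs(spectral_signature(window) - baseline_mean) / baseline_std
        return float(z.mean()) > threshold  # assumed decision rule, for illustration only

    rng = np.random.default_rng(0)
    t = np.arange(256)
    healthy = [np.sin(2 * np.pi * t / 32) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
    mean_sig, std_sig = fit_baseline(healthy)

    faulty = np.sin(2 * np.pi * t / 32) + 1.5 * np.sin(2 * np.pi * t / 5)  # injected oscillation
    print(is_anomalous(faulty, mean_sig, std_sig))      # True: spectrum deviates from baseline
    print(is_anomalous(healthy[0], mean_sig, std_sig))  # False: consistent with baseline

A full system would combine many such metric signatures, which is where the cloud metric subspace analysis described in the abstract would come in.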
592

Mixed-phase regime cloud thinning could help restore sea ice

Villanueva, Diego, Possner, Anna, Neubauer, David, Gasparini, Blaž, Lohmann, Ulrike, Tesche, Matthias 30 September 2024
Cloud geoengineering approaches aim to mitigate global warming by seeding aerosols into clouds to change their radiative properties and occurrence frequency. Ice-nucleating particles (INPs) can enhance droplet freezing in clouds, reducing their water content. Until now, the potential of these particles has mainly been studied for weather modification and cirrus cloud thinning. Here, using a cloud-resolving model and a climate model, we show that INPs could decrease the heat-trapping effect of mixed-phase regime clouds over the polar oceans during winter, slowing down sea-ice melting and partially offsetting the ice-albedo feedback. We refer to this concept as mixed-phase regime cloud thinning (MCT). We estimate that MCT could offset about 25% of the expected increase in polar sea-surface temperature due to a doubling of CO2. This is accompanied by an annual increase in sea-ice surface area of 8% around the Arctic and 14% around Antarctica.
593

Risks and rewards of cloud computing in the UK public sector: A reflection on three organisational case studies

Jones, S., Irani, Zahir, Sivarajah, Uthayasankar, Love, P.E.D. 04 December 2017
Government organisations have been shifting to cloud-based services in order to reduce their total investments in IT infrastructures and resources (e.g. data centers), as well as to capitalise on cloud computing’s numerous rewards. However, just like any other technology investment, there are also concerns over the potential risks of implementing cloud-based technologies. Such concerns, and the paucity of scholarly literature on cloud computing in a governmental context, confirm the need for exploratory research that draws lessons for government authorities and others in order to reduce costly mistakes. This paper therefore investigates the implementation of cloud computing in a practical setting and from an organisational user perspective via three UK local government authorities. Through these qualitative case study enquiries, the authors extrapolate perceived reward and risk factors, which are mapped against the literature so that emergent factors can be identified. All three cloud deployments resulted in varying outcomes, including key rewards such as improved information management and more flexible work practices, as well as risks such as loss of control and lack of data ownership for the organisations. These findings, derived from the aggregated organisational user perspectives, will benefit both academics and practitioners engaged in cloud computing research and its strategic implementation in the public sector.
594

Fuzzy-Logic Based Call Admission Control in 5G Cloud Radio Access Networks with Pre-emption

Sigwele, Tshiamo, Pillai, Prashant, Alam, Atm S., Hu, Yim Fun 31 August 2017
Fifth generation (5G) cellular networks will comprise millions of connected devices, such as wearable devices, Android and iOS phones, tablets and Internet of Things (IoT) devices, with a plethora of applications generating requests to the network. 5G cellular networks need to cope with such sky-rocketing traffic requests from these devices to avoid network congestion. As such, the cloud radio access network (C-RAN) has been considered a paradigm shift for 5G, in which requests from mobile devices are processed in the cloud with shared baseband processing. Despite call admission control (CAC) being one of the radio resource management techniques used to avoid network congestion, it has recently been overlooked by the community. The CAC technique in 5G C-RAN has a direct impact on the quality of service (QoS) of individual connections and on overall system efficiency. In this paper, a novel fuzzy-logic based CAC scheme with pre-emption in C-RAN is proposed. In this scheme, a cloud bursting technique is used during congestion, where some delay-tolerant, low-priority connections are pre-empted and outsourced to a public cloud with a penalty charge. Simulation results show that the proposed scheme has a low blocking probability below 5%, high throughput, low energy consumption and up to 95% return on revenue.
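As a hedged illustration of how a fuzzy-logic admission decision with pre-emption and outsourcing might look (the membership functions, inputs and rule base below are assumptions for the example, not the scheme proposed in the paper), consider the following Python sketch:

    def triangular(x, a, b, c):
        """Triangular membership function on [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def admission_decision(cloud_load, delay_tolerance):
        """Toy fuzzy CAC. Inputs are in [0, 1]; returns 'admit', 'outsource' or 'reject'.

        'outsource' stands for pre-empting a delay-tolerant, low-priority
        connection and bursting it to a public cloud at a penalty charge.
        """
        load_low = triangular(cloud_load, -0.5, 0.0, 0.6)
        load_high = triangular(cloud_load, 0.4, 1.0, 1.5)
        tolerant = triangular(delay_tolerance, 0.4, 1.0, 1.5)
        intolerant = triangular(delay_tolerance, -0.5, 0.0, 0.6)

        # Mamdani-style rules: min() for AND, the strongest rule wins.
        scores = {
            "admit": load_low,                      # plenty of C-RAN capacity left
            "outsource": min(load_high, tolerant),  # congested, but the call can wait
            "reject": min(load_high, intolerant),   # congested and the call cannot be outsourced
        }
        return max(scores, key=scores.get)

    print(admission_decision(cloud_load=0.2, delay_tolerance=0.1))  # admit
    print(admission_decision(cloud_load=0.9, delay_tolerance=0.9))  # outsource
    print(admission_decision(cloud_load=0.9, delay_tolerance=0.1))  # reject

Here "outsource" stands in for the cloud-bursting path described in the abstract, where a delay-tolerant connection is served from a public cloud at a penalty charge.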
595

Optimising Fault Tolerance in Real-time Cloud Computing IaaS Environment

Mohammed, Bashir, Kiran, Mariam, Awan, Irfan U., Maiyama, Kabiru M. 22 August 2016
Fault tolerance is the ability of a system to respond swiftly to an unexpected failure. Failures in a cloud computing environment are normal rather than exceptional, but fault detection and system recovery in a real-time cloud system is a crucial issue. To deal with this problem and to minimize the risk of failure, an optimal fault tolerance mechanism was introduced in which fault tolerance is achieved using a combination of the Cloud Master, Compute nodes, Cloud load balancer, Selection mechanism and Cloud Fault handler. In this paper, we propose an optimized fault tolerance approach in which a model is designed to tolerate faults based on the reliability of each compute node (virtual machine), and a node can be replaced if its performance is not optimal. Preliminary tests of our algorithm indicate that the rate of increase in pass rate exceeds the rate of decrease in failure rate, and the approach also supports forward and backward recovery using diverse software tools. Our results are demonstrated through experimental validation, laying a foundation for a fully fault-tolerant IaaS cloud environment and suggesting good performance of our model compared to existing approaches. / Petroleum Technology Development Fund (PTDF)
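A minimal sketch of the reliability-driven idea, assuming a simple success/failure bookkeeping per compute node and an arbitrary replacement threshold (the paper's actual selection mechanism and Cloud Fault handler are more elaborate):

    from dataclasses import dataclass

    @dataclass
    class ComputeNode:
        """A virtual machine whose pass/fail history is tracked by the fault handler."""
        name: str
        successes: int = 0
        failures: int = 0

        def record(self, ok: bool) -> None:
            if ok:
                self.successes += 1
            else:
                self.failures += 1

        @property
        def reliability(self) -> float:
            total = self.successes + self.failures
            return self.successes / total if total else 1.0  # optimistic prior for new nodes

    def select_node(nodes):
        """Selection mechanism: route new work to the most reliable node."""
        return max(nodes, key=lambda n: n.reliability)

    def replace_unreliable(nodes, threshold=0.8):
        """Swap out nodes whose reliability falls below the (assumed) threshold."""
        return [ComputeNode(n.name + "-replacement") if n.reliability < threshold else n
                for n in nodes]

    nodes = [ComputeNode("vm-1"), ComputeNode("vm-2")]
    for ok in (True, True, False, False, False):
        nodes[0].record(ok)
    nodes[1].record(True)
    print(select_node(nodes).name)                      # vm-2
    print([n.name for n in replace_unreliable(nodes)])  # ['vm-1-replacement', 'vm-2']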
596

Failure Analysis Modelling in an Infrastructure as a Service (IaaS) Environment

Mohammed, Bashir, Modu, Babagana, Maiyama, Kabiru M., Ugail, Hassan, Awan, Irfan U., Kiran, Mariam 30 October 2018
Failure prediction has long been known to be a challenging problem. With the evolving trend of technology and the growing complexity of high-performance cloud data centre infrastructure, focusing on failure becomes vital, particularly when designing systems for the next generation. Traditional runtime fault-tolerance (FT) techniques, such as data replication and periodic check-pointing, are not very effective for the current state-of-the-art emerging computing systems. This has necessitated the urgent need for a robust system with an in-depth understanding of system and component failures, as well as the ability to accurately predict potential future system failures. In this paper, we studied in-production fault data recorded over a five-year period at the National Energy Research Scientific Computing Center (NERSC). Using the data collected from the Computer Failure Data Repository (CFDR), we developed an effective failure prediction model focusing on high-performance cloud data centre infrastructure. Using the Auto-Regressive Moving Average (ARMA) model, our approach was able to predict potential future failures in the system. Our results show a failure prediction accuracy of 95%.
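For readers unfamiliar with ARMA forecasting, the following Python sketch shows the general shape of such a model applied to a failure-count series; the synthetic data, model order and forecast horizon are assumptions, not the authors' settings (in statsmodels, ARMA(p, q) is obtained from the ARIMA class with differencing order d = 0):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic stand-in for a monthly failure-count series; the CFDR/NERSC data
    # used in the paper is not reproduced here.
    rng = np.random.default_rng(1)
    months = np.arange(60)
    failures = (np.linspace(5, 8, months.size)
                + 2 * np.sin(2 * np.pi * months / 12)
                + rng.normal(0, 0.5, months.size))

    # ARMA(2, 1) corresponds to ARIMA order (p=2, d=0, q=1); the order is assumed here.
    fitted = ARIMA(failures, order=(2, 0, 1)).fit()

    # Forecast potential failures over the next six months.
    print(np.round(fitted.forecast(steps=6), 2))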
597

Analysis of cloud-based e-government services acceptance in Jordan: challenges and barriers

Alkhwaldi, Abeer F.A.H., Kamala, Mumtaz A., Qahwaji, Rami S.R. 11 September 2018
There is increasing evidence that cloud computing services have become a strategic direction for governments' IT work since the dawn of the third millennium. The inevitability of this computing technology has been recognized not only in developed countries such as the UK, the USA and Japan, but also in developing countries and regions such as the Middle East and Malaysia, which have launched migrations towards cloud platforms for more flexible, open and collaborative public services. In Jordan, the cloud-based e-government project has been deemed one of the high-priority areas for government agencies. In spite of this phenomenal evolution, various governmental cloud-based services still face the adoption challenges common to e-government projects, including technological, human, social and financial challenges, which need to be treated and considered carefully by any government agency contemplating implementation. While there have been extensive efforts to investigate e-government adoption from the citizens' perspective using different theories and models, none have paid adequate attention to security issues. This paper explores different perspectives on the extent to which these challenges inhibit the acceptance and use of cloud computing in the Jordanian public sector, and examines the effect of these challenges on participants' security perception. The empirical evidence comprises 220 valid responses to our online questionnaire from Jordanian citizens, including IT staff from different government sectors. Based on the data analysis, some significant challenges were identified. The results can help policy makers in the public sector to guide the successful acceptance and adoption of cloud-based e-government services in Jordan. / Mutah University - Jordan
598

Orchestrating cloud resources to optimize performance and cost

Raza, Ali 30 January 2025
In the last decade, Function-as-a-Service (FaaS) became one of the popular choices for building and deploying cloud applications. Compared to Infrastructure-as-a-Service (IaaS), FaaS offers an abstraction of backend management, an easy programming model, low cold starts, and a true “pay as you go” pricing model. While efficient and relatively simple, the cost and performance of an application deployed using FaaS can be adversely affected if the deployment is not properly managed and configured. Previous approaches have advocated limited use of FaaS while scaling out virtual machine (VM) based resources to avoid Service Level Objective (SLO) violations. However, these approaches miss out on potential long-term cost savings from employing FaaS consistently. Similarly, various machine learning and optimization techniques have been suggested to manage a FaaS deployment, but these techniques either have high costs or fall short because they fail to adapt to the dynamic nature of FaaS platforms. To this end, we present Thrifty, a hybrid approach to leveraging FaaS in conjunction with other cloud services to optimize both cost and performance. Thrifty consists of two main components: 1) LIBRA, a load-balancing framework that utilizes IaaS and FaaS resources efficiently; based on the demand, it decides to use either FaaS, IaaS, or both to maximize cost savings while meeting the SLOs; 2) xCOSE, a resource configuration and placement technique for FaaS deployments; it addresses the performance variability of FaaS platforms and meets the SLO by adapting resource configurations with minimal sampling cost, and it can configure single- and multi-function (service graph) applications. We evaluate Thrifty in extensive simulations and on the Amazon Web Services (AWS) cloud platform using real applications. Our evaluations show that consistent and opportunistic usage of FaaS through LIBRA can reduce SLO violations by up to 85% and cost by up to 53% compared to other approaches to deploying cloud applications. Furthermore, xCOSE can configure simple and complex FaaS applications with minimal sampling cost.
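As a toy illustration of the IaaS/FaaS trade-off that LIBRA exploits (the prices, capacities and decision rule below are invented for the example and are not the dissertation's algorithm), one can compare the hourly cost of serving demand on VMs versus bursting the overflow to FaaS:

    from dataclasses import dataclass

    @dataclass
    class Pricing:
        # Hypothetical figures for illustration only.
        vm_hourly: float = 0.10          # $ per VM-hour
        vm_capacity_rps: float = 50.0    # requests/s one VM can serve within the SLO
        faas_per_million: float = 12.0   # $ per million FaaS invocations

    def hourly_cost(demand_rps, vms, p):
        """Cost of one hour: the VMs absorb what they can, FaaS absorbs the overflow."""
        vm_served = min(demand_rps, vms * p.vm_capacity_rps)
        overflow = max(0.0, demand_rps - vm_served)
        return vms * p.vm_hourly + overflow * 3600 / 1e6 * p.faas_per_million

    def choose_vm_count(demand_rps, p, max_vms=20):
        """Pick the VM count with the lowest hourly cost; the remainder goes to FaaS."""
        return min(range(max_vms + 1), key=lambda n: hourly_cost(demand_rps, n, p))

    p = Pricing()
    for demand in (10.0, 120.0, 400.0):
        n = choose_vm_count(demand, p)
        print(f"{demand:6.1f} req/s -> {n} VM(s), ${hourly_cost(demand, n, p):.3f}/hour")

The sketch captures only the cost side; LIBRA additionally has to meet SLOs under changing demand, which is what makes the real problem harder.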
599

Service-based applications provisioning in the cloud

Yangui, Sami 02 October 2014
Cloud computing is an emerging paradigm for delivering large-scale, distributed IT services that run at geographically dispersed locations. It is increasingly used for hosting and executing applications in general and service-based applications in particular. Service-based applications are described according to Service Oriented Architecture (SOA) and consist of assembling a set of elementary, heterogeneous services using appropriate service composition specifications such as Service Component Architecture (SCA) or the Business Process Execution Language (BPEL). Provisioning an application in the cloud consists of (1) allocating the resources it needs to run, (2) deploying its source code onto the allocated resources, and (3) starting the application. However, existing cloud solutions are limited in terms of execution platforms and cannot always accommodate the strong heterogeneity of service-based application components. To address these issues, application provisioning mechanisms in the cloud must be reconsidered. They must be flexible enough to support this heterogeneity without requiring modifications or adaptations on the cloud provider side, and they should support automatic deployment. If the application to deploy is a single component, provisioning is performed automatically and in a uniform way regardless of the target cloud provider. If the application is composed of heterogeneous services, appropriate features must be made available to developers so they can define and create the resources required by the components before deploying the application. In this work, we propose an approach called SPD for provisioning service-based applications in the cloud. The SPD approach consists of three steps: (1) slicing the service-based application into a set of elementary, autonomous services, (2) packaging the services in dedicated micro-containers and (3) deploying the micro-containers in the cloud. For the slicing, we designed formal algorithms and established proofs that the application semantics are preserved. For the packaging, we implemented prototype service containers that provide only the minimal functionality needed to host services and manage their life cycle. For the deployment, both cases are treated, i.e. deployment on a cloud infrastructure (IaaS) and deployment on a cloud platform (PaaS). To automate the deployment process, we defined (i) a unified resource description model based on the Open Cloud Computing Interface (OCCI) standard, which describes an application and its required resources generically, independently of the target deployment platform, and (ii) a generic provisioning and management API called COAPS that implements this model and exposes generic operations regardless of the target platform.
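To make the slicing step more concrete, here is a deliberately simplified Python sketch that splits an SCA-like composite into standalone service descriptors, one per future micro-container; the composite structure and field names are assumptions for illustration, whereas the thesis defines the actual slicing algorithms formally and proves semantics preservation:

    # A toy SCA-like composite: components expose services and reference one another.
    composite = {
        "name": "order-app",
        "components": {
            "OrderService":   {"runtime": "java",   "references": ["PaymentService", "StockService"]},
            "PaymentService": {"runtime": "python", "references": []},
            "StockService":   {"runtime": "nodejs", "references": []},
        },
    }

    def slice_composite(composite):
        """Slice the composite into autonomous service descriptors.

        Each descriptor keeps the component's own runtime requirement plus the
        endpoints it must be wired to after deployment, so the composition
        semantics (who calls whom) survive the split into micro-containers.
        """
        return [
            {
                "service": name,
                "runtime": spec["runtime"],                  # drives the micro-container choice
                "required_endpoints": list(spec["references"]),
                "origin_composite": composite["name"],
            }
            for name, spec in composite["components"].items()
        ]

    for descriptor in slice_composite(composite):
        print(descriptor)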
600

Teaching Cloud Deployment

Farjami, Hannah, Agartz Nilbrink, Simon January 2019
In today's IT landscape, cloud computing is one of the hottest topics. There are many emerging uses and technologies for the cloud, and deployment of applications is one of its main uses today. This has led to companies giving developers more responsibility for deployment. Therefore, there is a need to update computer science curricula to include cloud deployment. For these reasons, this thesis attempts to give a reasonable proposal for how cloud deployment could be taught in a university course.

A literature study was conducted to gather information about topics surrounding cloud deployment, such as cloud computing, service models, building techniques and cloud services. Then a case study was conducted on three different cloud services, OpenShift, Cloud Foundry and Heroku, to learn how to deploy to them. Lastly, two interviews and a survey were conducted with people who have insight into the subject and could provide relevant information.

Based on our case study, interviews and survey, we concluded a reasonable approach to teaching deployment with cloud services. It can be taught with a theoretical and a practical part: the theoretical part could be a lecture introducing Heroku and OpenShift, followed by an assignment in which students deploy an application to them. The reasons we recommend Heroku and OpenShift are Heroku's simple and fast deployment and OpenShift's greater educational value.

We also realized that cloud deployment would work best as a stand-alone course, because during the degree project it became clear how broad cloud deployment is.
