  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Energy efficiency models and optimization algorithm to enhance on-demand resource delivery in a cloud computing environment / Thusoyaone Joseph Moemi

Moemi, Thusoyaone Joseph January 2013 (has links)
Online hosted services are what is referred to as Cloud Computing. Access to these services is via the internet. It shifts the traditional IT resource ownership model to renting. Thus, the high cost of infrastructure cannot limit the less privileged from experiencing the benefits that this new paradigm brings. Therefore, cloud computing provides flexible services to cloud users in the form of software, platform and infrastructure as services. The goal behind cloud computing is to provide computing resources on-demand to cloud users efficiently, while making data centers as friendly to the environment as possible by reducing data center energy consumption and carbon emissions. With the massive growth of high performance computational services and applications, huge investment is required to build large scale data centers with thousands of servers. Large scale data centers consume enormous amounts of electrical energy. The computational intensity involved in a data center is likely to dramatically increase the difference between the amount of energy required for peak periods and for off-peak periods in a cloud computing data center. In addition to the overwhelming operational cost, the overheating caused by high power consumption will affect the reliability of machines and hence reduce their lifetime. Therefore, in order to make the best use of precious electricity resources, it is important to know how much energy will be required under a certain circumstance in a data center. Consequently, this dissertation addresses the challenge by developing an energy-efficient model and a defragmentation algorithm. We further develop an efficient energy usage metric to calculate the power consumption, along with a Load Balancing Virtual Machine Aware (LBVMA) model for improving delivery of on-demand resources in a cloud computing environment. The load balancing model supports the reduction of energy consumption and helps to improve quality of service.
An experimental design was carried out using CloudAnalyst as a simulation tool. The results obtained show that the LBVMA model and the throttled load balancing algorithm consumed less energy. Also, the quality of service in terms of response time is much better for data centers that have more physical machines, but memory configurations at higher frequencies consume more energy. Additionally, when the LBVMA model is used in conjunction with the throttled load balancing algorithm, less energy is consumed, meaning less carbon is produced by the data center. / Thesis (M.Sc. (Computer Science)) North-West University, Mafikeng Campus, 2013
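The "throttled" policy referred to in the abstract is a standard load balancing algorithm in CloudAnalyst-style simulations; a minimal sketch of the idea, an availability table that admits a request only when an idle VM exists, might look like the following. All class and method names here are illustrative assumptions, not the thesis's LBVMA implementation.

```python
# Sketch of a throttled VM load balancer: keep an availability table and
# assign each request to an idle VM, returning None (request queued or
# rejected) when every VM is busy. Names are hypothetical.

class ThrottledBalancer:
    def __init__(self, vm_ids):
        # True means the VM is idle and may accept a request.
        self.available = {vm: True for vm in vm_ids}

    def allocate(self):
        """Return an idle VM id and mark it busy, or None if all are busy."""
        for vm, idle in self.available.items():
            if idle:
                self.available[vm] = False
                return vm
        return None

    def release(self, vm):
        """Mark a VM idle again once its request completes."""
        self.available[vm] = True
```

Because a busy VM is never handed a second request, the policy trades some queuing delay for avoiding VM overload, which is consistent with the energy and response-time behaviour the abstract reports.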

Stratus: Building and Evaluating a Private Cloud for a Real-World Financial Application

Bajpai, Deepak 16 August 2016 (has links)
Cloud computing technology has been emerging and spreading at a great pace due to its service-oriented architecture, elasticity, cost-effectiveness, etc. Many organizations are using Infrastructure-as-a-Service (IaaS) public Clouds to migrate data away from traditional IT infrastructure, but a few fields such as finance, hospitals and the military are reluctant to use public Clouds due to perceived security vulnerability. Enterprises in such fields feel more vulnerable to security breaches and feel secure using in-house IT infrastructure. The introduction of private Clouds is a solution for these businesses. Private Clouds have been substituted for traditional IT infrastructure due to their flexible "pay-as-you-go" model for departments within an organization, and their enhanced privacy relative to public Clouds in the form of administration control and supervision. The goal of my thesis is to build and evaluate a private Cloud that can provide virtual machines (VMs) as a service and applications as a service. To achieve this goal, in my thesis, I have built and evaluated a service-oriented IaaS model of a private Cloud. I have used off-the-shelf servers and open-source software for this purpose. I have proposed a new replication strategy using an OpenStack component called Cinder. My experiments show that efficient VM failure recovery, measured by "preparation delay" time, can be achieved using my strategy. I have studied a real-world application of option pricing from the finance market and have used that application to test my private Cloud for compute workload and accuracy of the pricing results. Later, I compared performance between Cloud VMs and standalone servers. The performance of a Cloud VM is found to be better than that of a standalone server as long as the number of virtual CPUs (vCPUs) is limited to a single node. Stratus clouds are groups of small clouds that collectively give a spectacular sight in the sky.
The private Cloud I have built uses multiple small modules to achieve the stated goal, and hence I named my private Cloud "Stratus". This Stratus private Cloud is now ready for deploying applications and for providing VMs on demand. / October 2016

Reducing communication overheads in a cloud environment through unix-like features

Pan, Wei 17 February 2011 (has links)
This thesis describes an approach to add functionality and improve performance in the Hadoop infrastructure for cloud computing. In particular, we have added code to the Hadoop source files to allow unix scripts to run on the task nodes of the cloud, from within the mapper phase of Hadoop execution. Our results show that the new approach is easier to program than other alternatives, easier for experienced UNIX programmers to understand, more powerful in terms of the kinds of computations that are possible, and as fast as or faster than the alternatives.
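The idea of invoking unix scripts from inside a mapper can be sketched, in the spirit of Hadoop Streaming, by handing a node's input lines to an arbitrary shell pipeline. This is an illustration only; the function name and pipeline strings are assumptions, not the thesis's actual Hadoop modifications.

```python
# Sketch: feed a mapper's input lines to a unix shell pipeline running on
# the task node, and collect the pipeline's output lines as the mapper's
# intermediate results. The pipeline is any string the user's script defines.
import subprocess

def run_unix_mapper(lines, pipeline):
    """Pipe input lines through a shell pipeline and return its output lines."""
    proc = subprocess.run(
        pipeline, shell=True, input="\n".join(lines),
        capture_output=True, text=True, check=True,
    )
    return proc.stdout.splitlines()
```

For example, `run_unix_mapper(records, "sort | uniq -c")` would let an experienced UNIX programmer express a count-by-key step without writing any Java, which is the ease-of-programming claim the abstract makes.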

Dynamic utility maximization for multi-cloud based services

Qiu, Xuanjia, 邱炫佳 January 2014 (has links)
More and more clouds with diversified properties have been built. Many of them span multiple geographical locations over the globe, imposing time-varying costs on and offering different service proximities to users. Clouds could be private or public, requiring different levels of administration effort and providing different levels of freedom to control. Hybrid clouds, which blend together multiple public and private clouds, possess properties of both types. Based on these diversified properties, multiple clouds have the potential to provide services with higher scalability, lower operational cost and better QoS. To exploit this potential, I examine the means of deploying services on multiple clouds that can maximize utility in dynamic environments. Firstly, I consider the migration of an important representative application, content distribution services, to geo-distributed hybrid clouds. I model the problem of joint content data migration and request dispatching as a unified optimization framework, and then design a dynamic control algorithm to solve it. The algorithm bounds the response times within the preset QoS target, and guarantees that the overall cost is within a small constant gap from the optimum that can be achieved by a T-slot look-ahead mechanism with known future information. Secondly, I study the problem of efficient scheduling for disparate MapReduce workloads on hybrid clouds. I build a fine-grained and tractable model to characterize the scheduling of heterogeneous MapReduce workloads. An online algorithm is proposed for joint task admission control for the private cloud, task outsourcing to the public cloud, and VM allocation to execute the admitted tasks on the private cloud, such that the time-averaged task outsourcing cost is minimized over the long run. The online algorithm features preemptive scheduling of the tasks, where a task executed partially on the on-premises infrastructure can be paused and scheduled to run later.
It also achieves such desirable properties as meeting a pre-set task admission ratio and bounding the worst-case task completion time. Thirdly, I consider a cloud computing resource market where a broker is employed that pools the spare resources of multiple private clouds and leases them to serve external users' jobs. I model the interaction between the broker and the private clouds as a two-stage Stackelberg game. As the leader in the game, the broker decides on the pricing for renting VMs from each private cloud. As a follower, each private cloud responds with the number of VMs that it is willing to lease. Combining all this with Lyapunov optimization theory, I design online algorithms for the broker to set the prices and schedule the jobs on the private clouds, and for each private cloud to decide the numbers of VMs to lease. The broker achieves a time-averaged profit that is close to the offline optimum with complete information on future job arrivals and resource availability, while each private cloud earns the best that it can. Through theoretical analysis and empirical study, I rigorously examine the cost or profit optimality, and QoS guarantee of my design, and show that they can indeed outperform existing solutions. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
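The two-stage Stackelberg interaction described above can be illustrated with a toy best-response computation: the broker (leader) quotes a unit price, each private cloud (follower) leases the spare VMs whose operating cost the price covers, and the broker picks the price that maximizes its resale profit. This is a deliberately simplified sketch with hypothetical cost figures, not the thesis's Lyapunov-based online algorithm.

```python
# Toy two-stage Stackelberg game between a broker (leader) and private
# clouds (followers). All prices and per-VM costs are made-up examples.

def follower_response(price, vm_costs):
    """A private cloud leases each spare VM whose operating cost the price covers."""
    return sum(1 for c in vm_costs if c <= price)

def broker_best_price(candidate_prices, clouds, resale_price):
    """The leader anticipates the followers' responses and quotes the
    profit-maximizing price, earning (resale_price - price) per leased VM."""
    def profit(p):
        vms = sum(follower_response(p, costs) for costs in clouds)
        return (resale_price - p) * vms
    return max(candidate_prices, key=profit)
```

Raising the price attracts more leased VMs but shrinks the per-VM margin, so the broker's profit is maximized at an interior price, which is the tension the Stackelberg formulation captures.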

The Impact of Cloud Computing on Organizations in Regard to Cost and Security

Dimitrov, Mihail, Osman, Ibrahim January 2012 (has links)
Throughout recent years cloud computing has gained large popularity in the information technology domain. Despite its popularity, many organizations lack a broader understanding of implementing and utilizing cloud computing for business and operating purposes, due to the existing vagueness regarding the associated cost and security effects. It is argued that the main attractiveness of cloud computing for organizations is its cost effectiveness, whilst the major concern relates to the risks for security. Accordingly, more effort has been made in exploring these aspects of cloud computing's impact. However, little effort has been focused on critically examining the cost risks and security benefits which cloud computing brings to organizations. By using a qualitative method this research examines in detail the essential benefits and risks of cloud computing utilization for organizations in terms of cost and security. Unlike prior studies, it also explores the cost risks and security benefits and shows that they should be taken into consideration by organizations. The findings are based on empirical data collected via interviews with IT professionals. The main cost risk identified is the lack of accurate and sophisticated cost models on the current cloud market. Among the identified security benefits are increased data safety, faster data recovery and transfer, centralization, and improved security software mechanisms and maintenance. Moreover, this research identifies several major implications that organizations should keep in mind while utilizing cloud computing and provides some suggestions on how to avoid the cost and security risks identified. At present, reduction of operational and administrative costs is seen by organizations as the most essential cost benefit. The results show that cloud computing is better suited to small- and medium-sized organizations and that the hybrid cloud is the most appropriate model for them.
Furthermore, the cost and security risks of cloud computing cannot be avoided without resolution of the problem with the lack of accurate cost models, international regulatory frameworks and interoperable security standards on supranational levels.

A Cloud Computing Based Platform for Geographically Distributed Health Data Mining

Guo, Yunyong 30 August 2013 (has links)
With cloud computing emerging in recent years, more and more interest has been sparked from a variety of institutions, organizations and individual users, as they intend to take advantage of web applications to share a huge amount of public and private data and information in a more affordable way and using a reliable IT architecture. In the area of healthcare, medical and health information systems based on cloud computing are desired, in order to realize the sharing of medical data and health information, coordination of clinical service, along with an effective and cost-contained clinical information system infrastructure via the implementation of a distributed and highly-integrated platform. The objective of this study is to discuss the challenges of adopting cloud computing for collaborative health research information management and provide recommendations to deal with the corresponding challenges. More specifically, the study will propose a cloud computing based platform according to these recommendations. The platform can be used to bring together health informatics researchers from different geographical locations to share medical data for research purposes, for instance, data mining used for improving liver cancer early detection and treatment. Findings from a literature review will be discussed to highlight challenges of applying cloud computing in a wide range of areas, and recommendations will be paired with each challenge. A proof-of-concept prototype research methodology will be employed to illustrate the proposed cross-national cloud computing model for geographically distributed health data mining applied to health informatics research. / Graduate / 0573

Efficient Data Access in Cloudlet-based Mobile Cloud Computing

Hou, Zhijun 14 August 2018 (has links)
The growth in mobile devices and applications has driven the emergence of mobile cloud computing, which allows access to services at any place and extends mobile computing. However, the current mobile network is a restricting factor in supporting such access because, from a global perspective, cloud servers are distant from most mobile users, which introduces significant latency and results in considerable delays for applications on mobile devices. Cloudlets, on the other hand, usually sit at the edge of mobile networks and can serve content to mobile users with high availability and high performance. This thesis reviews both traditional mobile cloud computing and the Cloudlet architecture. A taxonomy of the Cloudlet architecture is introduced and three related technologies are discussed. Based on user needs in this environment, a personal model, used to predict individual behaviour, and a group model, which considers caching popular data for several users, are proposed. Making use of these two models and the Cloudlet architecture, two data access schemes are designed based on model distribution and data pre-distribution. We have conducted experiments and analysis for both the models and the data access schemes. For the models, model efficiency and comparisons among different technologies are analysed. Simulation results for the data access schemes show that the proposed schemes outperform the existing method in terms of both battery consumption and performance.
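The "group model" idea, caching data that is popular across a cloudlet's users, can be sketched as a popularity-ranked cache. The item ids and the fixed capacity `k` are assumptions of the example, not the thesis's prediction models.

```python
# Sketch of a group-model cloudlet cache: count requests across all users
# served by the cloudlet and keep only the k most popular items locally,
# so common requests are answered at the network edge instead of the
# distant cloud. Names and capacity are illustrative.
from collections import Counter

def popular_items(requests, k):
    """Return the k most requested item ids, most popular first."""
    return [item for item, _ in Counter(requests).most_common(k)]

class CloudletCache:
    def __init__(self, k):
        self.k = k
        self.cached = set()

    def refresh(self, requests):
        """Re-cache the currently popular items from the observed request log."""
        self.cached = set(popular_items(requests, self.k))

    def hit(self, item):
        """True if the item can be served from the cloudlet without cloud access."""
        return item in self.cached
```

A cache hit avoids the round trip to the distant cloud server, which is where the battery and latency savings reported in the abstract would come from.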

Adaptation et cloud computing : un besoin d'abstraction pour une gestion transverse / Cloud computing : a need for abstraction to manage adaptation as an orthogonal concern

Daubert, Erwan 24 May 2013 (has links)
Le Cloud Computing est devenu l'un des grands paradigmes de l'informatique et propose de fournir les ressources informatiques sous forme de services accessibles au travers de l'Internet. Ces services sont généralement organisés selon trois types ou niveaux. On parle de modèle SPI pour “Software, Platform, Infrastructure” en anglais. De la même façon que pour les applications “standard”, les services de Cloud doivent être capables de s'adapter de manière autonome afin de tenir compte de l'évolution de leur environnement. À ce sujet, il existe de nombreux travaux tels que ceux concernant la consolidation de serveur et l'économie d'énergie. Mais ces travaux sont généralement spécifiques à l'un des niveaux et ne tiennent pas compte des autres. Pourtant, comme l'ont affirmé Kephart et al. en 2000, même s'il existe des adaptations a priori indépendantes les unes des autres, celles-ci ont un impact sur l'ensemble du système informatique dans lequel elles sont appliquées. De ce fait, une adaptation au niveau infrastructure peut avoir un impact au niveau plate-forme ou au niveau application. L'objectif de cette thèse est de fournir un support pour l'adaptation permettant de gérer celle-ci comme une problématique transverse aux différents niveaux afin d'assurer la cohérence et l'efficacité de l'adaptation. Pour cela, nous proposons une abstraction capable de représenter l'ensemble des niveaux et servant de support pour la définition des reconfigurations. Cette abstraction repose sur les techniques de modèle à l'exécution (Model at Runtime en anglais) qui proposent de porter les outils utilisés à la conception pour définir, valider et appliquer une nouvelle configuration pendant l'exécution du système lui-même.
Afin de montrer l'utilisabilité de cette abstraction, nous présentons trois expérimentations permettant de montrer l'extensibilité et la généricité de notre solution, de montrer que l'impact sur les performances du système est faible, et de montrer que cette abstraction permet de faire de l'adaptation multiniveaux. / Cloud Computing is becoming the new paradigm for information technology, providing resources as Internet-based services. These services are basically categorized according to three layers, also called the SPI model (Software, Platform, Infrastructure). In the same way as "non-Cloud" applications, Cloud services must be able to adapt themselves according to the evolution of their environment. There are many works on dynamic adaptation, such as server consolidation and green computing, but these works are generally specific to one layer and do not take the others into account. However, Kephart et al. explained in 2000 that even if adaptations are, in theory, independent, they have an impact on the overall system. Consequently, an adaptation at the infrastructure layer can have an impact at the platform or application layers. This thesis provides an abstraction to manage adaptation as an orthogonal concern over the Cloud layers. Based on Model at Runtime (M@R) techniques, which use design tools to build and validate a new configuration of the system at runtime, this abstraction is able to model all the Cloud layers. To show the usability of this abstraction, we provide three experimentations showing the extensibility and genericity of our approach, showing that the performance overhead on the system (infrastructure or platform) is low, and showing that the abstraction allows multi-layer adaptations to be built.
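The Model-at-Runtime loop this abstract describes can be sketched as: keep a model mirroring every Cloud layer, validate a candidate configuration against cross-layer constraints, and apply it only if it stays consistent. The layer names and the single constraint below are illustrative assumptions, not the thesis's metamodel.

```python
# Hypothetical sketch of cross-layer adaptation via a runtime model: the
# SPI layers are mirrored as plain dictionaries, and a candidate target
# model is checked before the running system is reconfigured, so an
# infrastructure change that would break the platform or software layer
# is rejected rather than applied.

def validate(model):
    """Cross-layer check: every platform must run on an existing VM,
    and every application must be deployed on an existing platform."""
    vms = set(model["infrastructure"]["vms"])
    if any(vm not in vms for vm in model["platform"].values()):
        return False
    platforms = set(model["platform"])
    return all(p in platforms for p in model["software"].values())

def adapt(current, target):
    """Apply the target model only if it keeps all layers consistent."""
    return target if validate(target) else current
```

Because validation happens on the model rather than on the live system, an inconsistent multi-layer reconfiguration is caught before it is ever executed, which is the coherence argument the thesis makes.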

Cloud Based Point-of-Sale / Molnbaserad kassalösning för butik

Lehndal, Anders January 2013 (has links)
A point-of-sale (POS) system is a transaction system allowing retail transactions to be completed. The purpose of this degree project was to examine the potential benefits and risks of implementing a POS as a cloud-based application in comparison to a traditional on-site POS solution. The main focus of this project was on identifying the benefits and possibilities of using a cloud solution, but effort was also geared towards exploring the functionality of an existing web-based POS prototype as a cloud client. A further aspect of this project was to address some of the problems associated with the use of peripheral equipment while maintaining a thin client. This has been achieved by studying cloud computing theory in modern literature, analysing and working with traditional POS solutions and doing hands-on testing with peripheral equipment. The project results support the notion that a cloud-based POS solution is not only feasible but may perhaps provide some benefits in comparison to a traditional POS solution. However, great care must be taken, both to avoid vendor lock-in and in designing a cloud-based system to allow continued operation of the client and eventual peripheral equipment in an Internet Service Provider (ISP) or Cloud Service Provider (CSP) outage situation. / Ett point-of-sale-system (POS) är ett system som möjliggör fullförande av transaktioner. Syftet med detta examensarbete var att undersöka möjliga fördelar och risker med att implementera ett POS-system som en molnbaserad applikation i jämförelse med en traditionell POS-lösning. Huvudfokus lades på att identifiera de fördelar och möjligheter en molnlösning medför, men arbete riktades även mot att utforska funktionaliteten hos en existerande webbaserad POS-prototyp i rollen som molnklient för detta projekt. En ytterligare aspekt var att adressera några av de problem som associeras med användandet av tredjeparts periferiutrustning tillsammans med en tunn klient.
Detta har gjorts genom att studera cloud computing i modern litteratur, analysera och arbeta med traditionella POS-lösningar och genom praktisk testning av periferiutrustning. Projektets slutsatser stödjer idén att en molnbaserad POS-lösning inte bara är genomförbar utan under vissa förhållanden kan medföra vissa fördelar gentemot en traditionell POS-lösning. Men viss eftertänksamhet är på sin plats vid övervägandet att bygga och använda ett molnbaserat system, för att garantera fortsatt drift av klient och periferiutrustning vid förlorad kontakt med internetleverantör och/eller molntjänstleverantör.

Reducing Communication Overhead and Computation Costs in a Cloud Network by Early Combination of Partial Results

Huang, Jun-neng 22 August 2011 (has links)
This thesis describes a method of reducing communication overheads within the MapReduce infrastructure of a cloud computing environment. MapReduce is a framework for parallelizing the processing of massive data sets stored across a distributed computer network. One of the benefits of MapReduce is that the computation is usually performed on a computer (node) that holds the data file. Not only does this approach achieve parallelism, but it also benefits from a characteristic common to many applications: the answer derived from a computation is often smaller than the input file. Our new method also benefits from this feature. We delay the transmission of individual answers out of a given node, so as to allow these answers to be combined locally first. This combination has two advantages. First, it allows a further reduction in the amount of data to ultimately transmit. Second, it allows additional computation across files (such as a merge-sort). There is a limit to the benefit of delaying transmission, however, because the reducer stage of MapReduce cannot begin its work until the nodes transmit their answers. We therefore provide a mechanism to allow the user to adjust the amount of delay before data is transmitted out of each node.
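The delayed-transmission idea can be sketched as a node-local combiner for word counts: partial results accumulate in a buffer, and only one combined message is emitted to the reducer once a configurable number of inputs has been absorbed. The threshold parameter stands in for the thesis's user-adjustable delay; all names are assumptions of this example.

```python
# Sketch of early local combination with an adjustable delay: instead of
# sending each input's partial word counts to the reducer immediately,
# the node merges them locally and transmits one combined (and therefore
# smaller) message after `max_delay` inputs.
from collections import Counter

class DelayedCombiner:
    def __init__(self, max_delay):
        self.max_delay = max_delay   # inputs to absorb before transmitting
        self.buffer = Counter()
        self.pending = 0

    def map_and_combine(self, words):
        """Merge one input's counts locally; flush once the delay expires."""
        self.buffer.update(words)
        self.pending += 1
        if self.pending >= self.max_delay:
            out = dict(self.buffer)   # one combined message to the reducer
            self.buffer = Counter()
            self.pending = 0
            return out
        return None                   # still holding the data back
```

A larger `max_delay` shrinks the total data transmitted but postpones the start of the reducer stage, which is exactly the trade-off the abstract says the user-adjustable delay controls.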
