191

A study on resource allocation strategies for cloud robotic systems / CUHK electronic theses & dissertations collection

January 2014 (has links)
The emerging approach of cloud robotics takes advantage of cloud computing as a vast resource pool for massively parallel computation and data sharing. In addition, a cloud robotic system removes the overhead of per-robot maintenance and updates, and reduces dependence on user middleware. This is of particular interest for service robots, because on-board computation entails additional power requirements that may reduce operating duration, constrain robot mobility, and increase cost. To exploit cloud technology in service robots, it is crucial to allow different types of robots to share information and to develop new skills on the cloud. / In general, this is cast as a dynamic resource allocation problem: given a set of resources and a sequence of agents, the goal is to distribute resources to agents optimally. Resource allocation is NP-hard in general. This thesis strives to minimize resource usage and task completion time by scheduling a number of requests from robots. However, actual realizations of fully distributed cloud robotic systems are rarely found in the community, and cloud resources are not unconstrained in practice. Therefore, the autonomous optimization of resource allocation is the primary focus of the thesis. / While a respectable amount of work has been done on both resource and task allocation, research is still needed on integrating the two problems in a typical cloud robotic system. To address the outlined difficulties, this thesis contributes novel research on the following aspects. First, the underlying architecture of Multi-Sensor Data Retrieval (MSDR) is implemented on Twisted-based sockets for asynchronous data transmission, and is also investigated as an effective decentralized method for multi-robot coordination, task assignment, and service-contract establishment. Second, a market-based scheduling mechanism is proposed for the dynamic resource allocation problem in cloud robotics. A set of criteria constituting empirical Quality of Service (QoS) is optimized; in particular, Time to Response (ToR) is minimized to fulfill the Firm Real-Time (FRT) requirement of robotic tasks. Third, a Link Quality Matrix (LQM) auction-based negotiation strategy is proposed to relieve competition among multi-robot systems in Mobile Ad-hoc Networks (MANETs), and an incremental auction-based strategy is proposed that considers hops, time delay, and link quality. Both fair allocation (without human interference) and biased allocation (when users have preferences) are optimized for multi-robot systems in MANETs. By tackling these issues, this thesis contributes to the general integration of cloud robotic systems into daily life. / Future research will focus on task-oriented problems, such as smart-home surveillance and guiding, which can be solved better by exploiting cloud robotics. Solutions will be proposed in a bidirectional way, considering both data uploading and downloading.
/ Wang, Lujia. / Thesis Ph.D. Chinese University of Hong Kong 2014. / Includes bibliographical references (leaves 115-127). / Abstracts also in Chinese. / Title from PDF title page (viewed on 07 October 2016).
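
To make the market-based scheduling idea in the record above concrete, here is a minimal Python sketch of auction-style allocation: robots bid on a request with an estimated response time and a link-quality score (in the spirit of an LQM), and the lowest weighted cost wins. The `Bid` fields and the cost weighting are illustrative assumptions, not the thesis's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    robot_id: str
    time_to_response: float  # estimated seconds until the request is served
    link_quality: float      # 0.0 (worst) to 1.0 (best), as in an LQM-style matrix

def allocate(requests, bids_per_request):
    """Greedy market-based allocation: each request goes to the bidder whose
    weighted cost (response time penalized by poor link quality) is lowest.
    The weighting is illustrative, not drawn from the thesis."""
    allocation = {}
    for req in requests:
        bids = bids_per_request[req]
        winner = min(bids, key=lambda b: b.time_to_response / max(b.link_quality, 1e-6))
        allocation[req] = winner.robot_id
    return allocation

if __name__ == "__main__":
    bids = {"map_update": [Bid("r1", 0.8, 0.9), Bid("r2", 0.5, 0.4)]}
    print(allocate(["map_update"], bids))  # r1 wins: better link outweighs slower response
```

A real FRT-aware scheduler would also reject bids whose estimated ToR exceeds the task's deadline; here the deadline check is omitted for brevity.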
192

A tunable version control system for virtual machines in an open-source cloud / CUHK electronic theses & dissertations collection

January 2013 (has links)
Open-source cloud platforms provide a feasible alternative for deploying cloud computing on low-cost commodity hardware and operating systems. To enhance the reliability of an open-source cloud, we design and implement CloudVS, a practical add-on system that enables version control for virtual machines (VMs). CloudVS targets a commodity cloud platform with limited available resources. It exploits content similarity across different VM versions using redundancy elimination (RE), so that only the non-redundant data chunks of a VM version are transmitted over the network and kept in persistent storage. Using RE as a building block, we propose a suite of performance-adaptation mechanisms that make CloudVS amenable to different commodity settings. Specifically, we propose a tunable mechanism to balance storage and disk-seek overheads, as well as various I/O optimization techniques to minimize interference with other co-resident processes. We further exploit a higher degree of content similarity by applying RE to multiple VM images simultaneously, and we support the copy-on-write image format. Using real-world VM snapshots, we evaluate CloudVS in an open-source cloud testbed built on Eucalyptus and demonstrate how CloudVS can be parameterized to balance the performance trade-offs between version control and normal VM operations. / Tang, Chung Pan. / Thesis M.Phil. Chinese University of Hong Kong 2013. / Includes bibliographical references (leaves 57-65). / Abstracts also in Chinese. / Title from PDF title page (viewed on 07 October 2016).
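
The redundancy-elimination idea is easy to sketch: fingerprint fixed-size chunks of each VM version and persist only chunks not seen before. CloudVS's actual chunking and storage layout are more sophisticated; the chunk size, hash choice, and in-memory store below are illustrative assumptions.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking; CloudVS's real chunking may differ

def dedup_chunks(image_bytes, store):
    """Split a VM image into chunks and keep only those whose fingerprint
    is not already in `store`. Returns the recipe (ordered fingerprints)
    needed to reconstruct this version."""
    recipe = []
    for off in range(0, len(image_bytes), CHUNK_SIZE):
        chunk = image_bytes[off:off + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:          # transmit/persist only novel chunks
            store[fp] = chunk
        recipe.append(fp)
    return recipe

store = {}
v1 = dedup_chunks(b"A" * 8192 + b"B" * 4096, store)
v2 = dedup_chunks(b"A" * 8192 + b"C" * 4096, store)  # only the "C" chunk is new
print(len(store))  # 3 unique chunks backing two full versions
```

The storage-versus-seek trade-off the abstract mentions corresponds to choices this sketch glosses over, e.g. how scattered the persisted chunks of one version are allowed to become.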
193

Failure Prediction using Machine Learning in a Virtualized HPC System and application

Mohammed, Bashir, Awan, Irfan U., Ugail, Hassan, Muhammad, Y. January 2019 (has links)
Failure is an increasingly important issue in high-performance computing and cloud systems. As large-scale systems continue to grow in scale and complexity, mitigating the impact of failure and providing accurate predictions with sufficient lead time remains a challenging research problem. Traditional fault-tolerance strategies, such as regular checkpointing and replication, are not adequate given the emerging complexity of high-performance computing systems. This necessitates an effective and proactive failure-management approach aimed at minimizing the effect of failure within the system. With the advent of machine learning techniques, the ability to learn from past information to predict future patterns of behaviour makes it possible to predict potential system failures more accurately. In this paper, we therefore explore the predictive abilities of machine learning by applying a number of algorithms to improve the accuracy of failure prediction. We have developed a failure prediction model using time series and machine learning, and performed comparison-based tests of prediction accuracy. The primary algorithms we considered are the Support Vector Machine (SVM), Random Forest (RF), k-Nearest Neighbors (KNN), Classification and Regression Trees (CART), and Linear Discriminant Analysis (LDA). Experimental results show that our model achieves an average prediction accuracy of 90% when using SVM, outperforming the other algorithms. This finding suggests that our method can effectively predict future system and application failures within the system.
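
As an illustration of the SVM-based approach, the following sketch trains a scikit-learn classifier on synthetic stand-in features; the feature set, hyperparameters, and data are invented for the example and are not the paper's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for node-health time-series windows: each row holds
# summary metrics (e.g. CPU load, memory, I/O wait); label 1 = failure followed.
rng = np.random.default_rng(0)
X_healthy = rng.normal(0.3, 0.10, size=(500, 6))
X_failing = rng.normal(0.7, 0.15, size=(500, 6))
X = np.vstack([X_healthy, X_failing])
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0)   # hyperparameters are illustrative
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

In a real deployment the lead time matters as much as accuracy: labels would mark windows some fixed horizon *before* a failure, so a positive prediction leaves time to checkpoint or migrate.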
194

Review of Getting Started with Cloud Computing: A LITA Guide

Tolley, Rebecca 01 May 2012 (has links)
Review of Getting Started with Cloud Computing: A LITA Guide. Eds. Edward M. Corrado and Heather Lea Moulaison. New York: Neal-Schuman Publishers, Inc., 2011. 214 p., alk. paper, $65 (ISBN 9781555707491).
195

Green Cloud - Load Balancing, Load Consolidation using VM Migration

Do, Manh Duc 01 October 2017 (has links)
Cloud computing is a trend in computer technology that has recently seen massive demand from clients. To meet this demand, many cloud data centers have been constructed since 2008, when Amazon launched its cloud service. Rapidly growing data centers consume a tremendous amount of energy; even though cloud computing has improved in performance and energy efficiency, cloud data centers still absorb an immense amount of energy. To raise annual income, cloud providers have started considering green-cloud concepts, which address how to optimize CPU usage while guaranteeing quality of service. Many cloud providers are paying attention to both load balancing and load consolidation, two significant components of a cloud data center. Load balancing is a vital part of managing incoming demand and improving the cloud system's performance, and live virtual machine migration is a technique for performing dynamic load balancing. To optimize the cloud data center, three issues are considered. First, how does the cloud cluster distribute virtual machine (VM) requests from clients across physical machines (PMs) when each machine has a different capacity? Second, how can the CPU usage of all PMs be kept nearly equal? Third, how should two extreme scenarios be handled: rapidly rising CPU usage on a PM due to a sudden massive workload, requiring immediate VM migration, and resource expansion in response to a substantial volume of VM requests? This chapter provides an approach to these issues, with implementation and results; the results indicate that the performance of the cloud cluster improved significantly. Load consolidation is the reverse process of load balancing: it aims to provide just enough cloud servers to handle the client requests. Building on live VM migration, the cloud data center can consolidate itself without interrupting cloud service, and superfluous PMs are switched to a power-saving mode to reduce energy consumption. This chapter also provides a solution for load consolidation, including implementation and simulation of cloud servers.
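
A minimal sketch of the threshold-triggered balancing described above: find the hottest physical machine and, if it exceeds an overload threshold, propose migrating its heaviest VM to the least-loaded machine. The threshold value is illustrative, and loads are assumed already normalized to each PM's capacity (the abstract's heterogeneous-capacity case).

```python
OVERLOAD = 0.85   # CPU fraction that triggers immediate migration (illustrative)

def pick_migration(pms):
    """If any physical machine exceeds the overload threshold, propose moving
    its busiest VM to the least-loaded PM. `pms` maps a PM name to a dict
    of {vm_name: cpu_fraction}."""
    load = {pm: sum(vms.values()) for pm, vms in pms.items()}
    src = max(load, key=load.get)
    if load[src] < OVERLOAD:
        return None                       # cluster is balanced enough
    dst = min(load, key=load.get)
    vm = max(pms[src], key=pms[src].get)  # heaviest VM on the hot machine
    return vm, src, dst

cluster = {"pm1": {"vm_a": 0.5, "vm_b": 0.45}, "pm2": {"vm_c": 0.2}}
print(pick_migration(cluster))  # ('vm_a', 'pm1', 'pm2')
```

Load consolidation would run the same loop in reverse: drain the least-loaded PM onto its peers and switch it to save mode once empty.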
196

HADOOP-EDF: LARGE-SCALE DISTRIBUTED PROCESSING OF ELECTROPHYSIOLOGICAL SIGNAL DATA IN HADOOP MAPREDUCE

Wu, Yuanyuan 01 January 2019 (has links)
A rapidly growing volume of electrophysiological signals is being generated for clinical research in neurological disorders. European Data Format (EDF) is a standard format for storing electrophysiological signals. However, a bottleneck of existing signal-analysis tools on large-scale datasets is the sequential loading of large EDF files before an analysis can begin. To overcome this, we developed Hadoop-EDF, a distributed signal-processing tool that loads EDF data in parallel using Hadoop MapReduce. Hadoop-EDF uses a robust data-partition algorithm that makes EDF data processable in parallel. We evaluated Hadoop-EDF's scalability and performance by leveraging two datasets from the National Sleep Research Resource and running experiments on Amazon Web Services clusters. On a 20-node cluster, Hadoop-EDF improves performance 27-fold and 47-fold over sequential processing for 200 small files and 200 large files, respectively. The results demonstrate that Hadoop-EDF is well suited to and effective at processing large EDF files.
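
The core trick, splitting one large file on record boundaries so workers can read byte ranges in parallel, can be sketched in a few lines. The record size, split logic, and multiprocessing pool below are illustrative stand-ins for Hadoop's InputFormat machinery, not Hadoop-EDF's actual code (real EDF headers define the record size per file).

```python
from multiprocessing import Pool

RECORD_SIZE = 1024  # bytes per EDF data record; assumed fixed for this sketch

def make_splits(file_size, n_workers):
    """Partition a file into byte ranges aligned on record boundaries,
    mimicking how a Hadoop InputFormat would split an EDF file."""
    records = file_size // RECORD_SIZE
    per = -(-records // n_workers)  # ceiling division
    return [(i * per * RECORD_SIZE, min((i + 1) * per, records) * RECORD_SIZE)
            for i in range(n_workers) if i * per < records]

def mapper(split):
    start, end = split
    # A real mapper would read and decode its byte range here.
    return (end - start) // RECORD_SIZE

if __name__ == "__main__":
    splits = make_splits(file_size=10_240_000, n_workers=4)
    with Pool(4) as pool:
        counts = pool.map(mapper, splits)
    print(sum(counts), "records processed across", len(counts), "splits")
```

Aligning splits on record boundaries is what lets each worker decode its range independently, with no worker needing to see its neighbor's bytes.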
197

Quality of service in cloud computing: Data model; resource allocation; and data availability and security

Akintoye, Samson Busuyi January 2019 (has links)
Philosophiae Doctor - PhD / Recently, a massive migration of enterprise applications to the cloud has been recorded in the Information Technology (IT) world. The number of cloud providers offering their services, and the number of cloud customers interested in using such services, is rapidly increasing. However, one of the challenges of cloud computing is Quality-of-Service management, which denotes the level of performance, reliability, and availability offered by cloud service providers. Quality of Service is fundamental to cloud service providers, who must find the right trade-off between Quality-of-Service levels and operational cost. To find the optimal trade-off, cloud service providers need to comply with service-level agreement (SLA) contracts, which define agreements between cloud service providers and cloud customers. SLAs are expressed in terms of quality-of-service (QoS) parameters such as availability, scalability, performance, and service cost. If the cloud service provider violates the SLA contract, the cloud customer can file for damages and claim penalties, which can result in revenue losses and possibly damage to the provider's reputation. Thus, the goal of any cloud service provider is to meet the SLAs while reducing the total cost of offering its services.
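
A toy model of the trade-off described above: spending more on reliability raises operating cost but avoids violation penalties. The penalty formula and all figures are assumptions for illustration only, not drawn from the thesis.

```python
def provider_profit(revenue, base_cost, uptime, sla_uptime, penalty_rate):
    """Toy SLA model: the provider pays a penalty proportional to how far
    measured uptime falls below the agreed SLA level."""
    shortfall = max(0.0, sla_uptime - uptime)
    penalty = penalty_rate * shortfall * revenue
    return revenue - base_cost - penalty

# Spending more on redundancy raises cost but avoids the violation penalty.
print(provider_profit(revenue=1000, base_cost=400, uptime=0.990,
                      sla_uptime=0.999, penalty_rate=50))   # 150.0
print(provider_profit(revenue=1000, base_cost=500, uptime=0.999,
                      sla_uptime=0.999, penalty_rate=50))   # 500.0
```

Here the cheaper configuration loses money to penalties; the optimal operating point is wherever the marginal cost of more reliability equals the marginal penalty avoided.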
198

Flexible Computing with Virtual Machines

Lagar Cavilla, Horacio Andres 30 March 2011 (has links)
This thesis is predicated upon a vision of the future of computing with a separation of functionality between core and edges, very similar to that governing the Internet itself. In this vision, the core of our computing infrastructure is made up of vast server farms with an abundance of storage and processing cycles. Centralization of computation in these farms, coupled with high-speed wired or wireless connectivity, allows for pervasive access to a highly-available and well-maintained repository for data, configurations, and applications. Computation in the edges is concerned with provisioning application state and user data to rich clients, notably mobile devices equipped with powerful displays and graphics processors. We define flexible computing as systems support for applications that dynamically leverage the resources available in the core infrastructure, or cloud. The work in this thesis focuses on two instances of flexible computing that are crucial to the realization of the aforementioned vision. Location flexibility aims to, transparently and seamlessly, migrate applications between the edges and the core based on user demand. This enables performing the interactive tasks on rich edge clients and the computational tasks on powerful core servers. Scale flexibility is the ability of applications executing in cloud environments, such as parallel jobs or clustered servers, to swiftly grow and shrink their footprint according to execution demands. This thesis shows how we can use system virtualization to implement systems that provide scale and location flexibility. To that effect we build and evaluate two system prototypes: Snowbird and SnowFlock. We present techniques for manipulating virtual machine state that turn running software into a malleable entity which is easily manageable, is decoupled from the underlying hardware, and is capable of dynamic relocation and scaling. This thesis demonstrates that virtualization technology is a powerful and suitable tool to enable solutions for location and scale flexibility.
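
Scale flexibility reduces, at its simplest, to a policy mapping backlog to worker count; SnowFlock-style VM cloning is what makes acting on such a policy cheap. The sketch below is a hypothetical policy for illustration, not the thesis's mechanism.

```python
def target_workers(queued_tasks, tasks_per_worker, max_workers):
    """Grow the footprint when the backlog exceeds capacity, shrink it
    when workers would sit idle; always keep at least one worker."""
    needed = -(-queued_tasks // tasks_per_worker) if queued_tasks else 1  # ceiling
    return max(1, min(needed, max_workers))

print(target_workers(queued_tasks=95, tasks_per_worker=10, max_workers=16))  # 10
print(target_workers(queued_tasks=5,  tasks_per_worker=10, max_workers=16))  # 1
```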
199

Replication, Security, and Integrity of Outsourced Data in Cloud Computing Systems

Barsoum, Ayad Fekry 14 February 2013 (has links)
In the current digital era, the amount of sensitive data produced by many organizations is outpacing their storage ability. The management of such a huge amount of data is quite expensive due to the requirements of high storage capacity and qualified personnel. Storage-as-a-Service (SaaS), offered by cloud service providers (CSPs), is a paid facility that enables organizations to outsource their data to be stored on remote servers. Thus, SaaS reduces the maintenance cost and mitigates the burden of large local data storage at the organization's end. For an increased level of scalability, availability, and durability, some customers may want their data to be replicated on multiple servers across multiple data centers. The more copies the CSP is asked to store, the more fees the customers are charged. Therefore, customers need a strong guarantee that the CSP is storing all data copies agreed upon in the service contract, and that these copies remain intact. In this thesis we address the problem of creating multiple copies of a data file and verifying those copies stored on untrusted cloud servers. We propose a pairing-based provable multi-copy data possession (PB-PMDP) scheme, which provides evidence that all outsourced copies are actually stored and remain intact. Moreover, it allows authorized users (i.e., those who have the right to access the owner's file) to seamlessly access the file copies stored by the CSP, and it supports public verifiability. We then direct our study to the dynamic behavior of outsourced data, where the data owner is capable of not only archiving and accessing the data copies stored by the CSP, but also updating and scaling these copies on the remote servers (using block operations: modification, insertion, deletion, and append). We propose a new map-based provable multi-copy dynamic data possession (MB-PMDDP) scheme that verifies the intactness and consistency of outsourced dynamic multiple data copies. To the best of our knowledge, the proposed scheme is the first to verify the integrity of multiple copies of dynamic data over untrusted cloud servers. As a complementary line of research, we consider protecting the CSP from a dishonest owner who attempts to obtain illegal compensation by falsely claiming data corruption on cloud servers. We propose a new cloud-based storage scheme that allows the data owner to benefit from the facilities offered by the CSP while enabling mutual trust between them. In addition, the proposed scheme ensures that authorized users receive the latest version of the outsourced data, and it enables the owner to grant or revoke access to the data stored by cloud servers.
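
The flavor of provable data possession can be shown with a simplified challenge-response audit. Note the simplification: a keyed hash gives only private verifiability (the verifier must hold the key), whereas PB-PMDP's pairing-based tags support public verifiability; the tagging and audit below are illustrative stand-ins.

```python
import hashlib, os, random

def tag_blocks(blocks, key):
    """Owner side: compute a keyed, position-bound tag per block before
    outsourcing. A keyed hash stands in for PB-PMDP's pairing-based tags."""
    return [hashlib.sha256(key + i.to_bytes(8, "big") + b).digest()
            for i, b in enumerate(blocks)]

def audit(server_blocks, tags, key, n_challenges=3):
    """Verifier side: spot-check randomly chosen block indices on a replica."""
    for idx in random.sample(range(len(tags)), n_challenges):
        expected = hashlib.sha256(key + idx.to_bytes(8, "big") + server_blocks[idx]).digest()
        if expected != tags[idx]:
            return False
    return True

key = os.urandom(16)
blocks = [os.urandom(64) for _ in range(10)]
tags = tag_blocks(blocks, key)
print(audit(blocks, tags, key))                   # True: replica intact
blocks[3] = b"\x00" * 64                          # simulate silent corruption
print(audit(blocks, tags, key, n_challenges=10))  # False: corruption caught
```

Binding the block index into each tag prevents a dishonest server from answering a challenge for block i with some other intact block; the multi-copy schemes extend this so each replica's tags are distinct and all copies must be held.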
