111

Cloud BI : a multi-party authentication framework for securing business intelligence on the Cloud

Al-Aqrabi, Hussain January 2016
Business intelligence (BI) has emerged as a key technology to be hosted on Cloud computing. BI offers a method to analyse data, thereby enabling informed decision making to improve business performance and profitability. However, within the shared domains of Cloud computing, BI is exposed to increased security and privacy threats because an unauthorised user may be able to gain access to highly sensitive, consolidated business information. The business process contains collaborating services and users from multiple Cloud systems in different security realms which need to be engaged dynamically at runtime. If the heterogeneous Cloud systems located in different security realms do not have direct authentication relationships, then it is technically difficult to enable secure collaboration. To address these security challenges, a new authentication framework is required to establish trust relationships among these BI service instances and users by distributing a common session secret to all participants of a session. The author addresses this challenge by designing and implementing a multi-party authentication framework for dynamic secure interactions when members of different security realms want to access services. The framework takes advantage of the trust relationship between session members in different security realms to enable a user to obtain security credentials to access Cloud resources in a remote realm. This mechanism helps Cloud session users authenticate their session membership, improving the authentication processes within multi-party sessions. The correctness of the proposed framework has been verified using BAN logic, and the performance and overhead have been evaluated via simulation in a dynamic environment. A prototype authentication system has been designed, implemented and tested based on the proposed framework. The research concludes that the proposed framework and its supporting protocols are an effective functional basis for practical implementation testing, as the framework achieves good scalability and imposes only a minimal performance overhead, comparable with other state-of-the-art methods.
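The framework's central mechanism — distributing a common session secret so that members from different security realms can prove session membership — can be illustrated with a minimal sketch. The `SessionAuthority`/`Member` names and the XOR-based key wrapping below are assumptions for illustration only (a real deployment would use authenticated encryption such as AES-GCM); this is not the thesis's actual protocol:

```python
import hmac, hashlib, os

# Hypothetical sketch: a session authority distributes a common session secret
# to members from different security realms, who can then prove membership.
# Names and the wrap/prove API are illustrative, not the thesis protocol.

def wrap(realm_key: bytes, payload: bytes) -> bytes:
    # Stand-in for realm-level key wrapping; XOR is used only so the sketch
    # stays short. A real system would use authenticated encryption.
    return bytes(a ^ b for a, b in zip(payload, hashlib.sha256(realm_key).digest()))

class SessionAuthority:
    def __init__(self, realm_keys: dict[str, bytes]):
        self.realm_keys = realm_keys          # pre-established per-realm trust
        self.session_secret = os.urandom(32)  # common secret for this session

    def credential_for(self, realm: str) -> bytes:
        return wrap(self.realm_keys[realm], self.session_secret)

class Member:
    def __init__(self, realm_key: bytes, credential: bytes):
        self.secret = wrap(realm_key, credential)  # unwrap (XOR is symmetric)

    def prove_membership(self, session_id: bytes) -> bytes:
        # Any holder of the session secret can authenticate its membership.
        return hmac.new(self.secret, session_id, hashlib.sha256).digest()

realms = {"realm-A": os.urandom(32), "realm-B": os.urandom(32)}
sa = SessionAuthority(realms)
alice = Member(realms["realm-A"], sa.credential_for("realm-A"))
bob = Member(realms["realm-B"], sa.credential_for("realm-B"))
assert alice.prove_membership(b"session-42") == bob.prove_membership(b"session-42")
```

The point of the pattern is that two members who share no direct authentication relationship can still verify each other, because each obtains the session secret through its own realm's trust anchor.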
112

Cloud Services : ett förslag på hur de kan användas inom e-handel / Cloud Services : a proposal on how they can be used in e-commerce

Skoglund, Erik, Auyeung, Ginwah January 2011
E-commerce companies are constantly striving for expansion. At the same time, they want an IT solution that is as cost-effective as possible, and security is something these companies value very highly. Cloud services can be used for this, but the e-commerce companies must first and foremost feel that they have something to gain from them, and security is the main obstacle here. In our interviews with e-commerce companies in the clothing industry in Borås, every company emphasised how important it was that confidential company information absolutely must not leak out. Security is the biggest question concerning the cloud, and responsibility for it lies with the company buying the service: even if a cloud provider offers security, backups and good uptime, it is still up to the company to make sure this is actually delivered. At the same time there is much discussion on the subject, and many argue that it is the novelty of the technology and the lack of knowledge that make companies afraid of the risks. If a company absolutely does not want outsiders to be able to access important company information, it can choose a combination of private and public clouds, in what is called a hybrid cloud. This works because private clouds are built and operated within the company, so data stored in the private cloud is not shared with other companies. The important thing is for the company to determine its own security requirements: if some parts require strict security but others do not, a hybrid cloud can be a solution. By interviewing three of the largest e-commerce companies in Borås, we have seen a tendency towards this. There are many areas in which the companies we interviewed could make use of cloud services. We have focused on the core business, the support systems they used, the website, communication and office software, because the companies we interviewed considered these parts the most important. / Program: Bachelor's programme in Informatics (Kandidatutbildning i informatik)
113

Live deduplication storage of virtual machine images in an open-source cloud.

January 2012
Deduplication is a technique that eliminates the storage of redundant data blocks. In particular, it has been shown to effectively reduce the disk space for storing multi-gigabyte virtual machine (VM) images. However, there remain challenging deployment issues in enabling deduplication in a cloud platform, where VM images are regularly inserted and retrieved. We propose a kernel-space deduplication file system called LiveDFS, which can serve as a VM image storage backend in an open-source cloud platform built on low-cost commodity hardware configurations. LiveDFS is built on several novel design features. Specifically, its main feature is to exploit spatial locality by placing deduplication metadata on disk with respect to the underlying file system layout. LiveDFS is POSIX-compliant and is implemented as a Linux kernel-space file system. We conduct testbed experiments of the read/write performance of LiveDFS using a dataset of 42 VM images of different Linux distributions. Our work justifies the feasibility of deploying LiveDFS in a cloud platform under commodity settings.

Ng, Chun Ho. Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. Includes bibliographical references (leaves 39-42). Abstract also in Chinese.

Contents: 1 Introduction (p.1); 2 LiveDFS Design (p.5): 2.1 File System Layout, 2.2 Deduplication Primitives, 2.3 Deduplication Process (2.3.1 Fingerprint Store, 2.3.2 Fingerprint Filter), 2.4 Prefetching of Fingerprint Stores, 2.5 Journaling, 2.6 Ext4 File System; 3 Implementation Details (p.18): 3.1 Choice of Hash Function, 3.2 OpenStack Deployment; 4 Experiments (p.21): 4.1 I/O Throughput, 4.2 OpenStack Deployment; 5 Related Work (p.34); 6 Conclusions and Future Work (p.37); Bibliography (p.39).
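The core deduplication idea — fingerprinting fixed-size blocks and storing each unique block exactly once — can be sketched in a few lines. This is a user-space illustration of the general technique only; LiveDFS itself is a kernel file system with an on-disk fingerprint store and fingerprint filter, which this sketch does not model:

```python
import hashlib

BLOCK_SIZE = 4096  # deduplicate at file-system block granularity

class DedupStore:
    """Toy block store: each unique block is kept once, indexed by fingerprint."""
    def __init__(self):
        self.blocks = {}    # fingerprint -> block data (the "fingerprint store")
        self.refcount = {}  # fingerprint -> number of references

    def write(self, data: bytes) -> list[str]:
        recipe = []  # per-image list of fingerprints, enough to rebuild it
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha1(block).hexdigest()  # content fingerprint (the
            if fp not in self.blocks:             # thesis discusses hash choice)
                self.blocks[fp] = block           # store only unseen blocks
            self.refcount[fp] = self.refcount.get(fp, 0) + 1
            recipe.append(fp)
        return recipe

    def read(self, recipe: list[str]) -> bytes:
        return b"".join(self.blocks[fp] for fp in recipe)

store = DedupStore()
image_a = b"\x00" * BLOCK_SIZE * 3 + b"unique-a" * 512
image_b = b"\x00" * BLOCK_SIZE * 3 + b"unique-b" * 512
ra, rb = store.write(image_a), store.write(image_b)
# The three zero blocks are shared across images, so far fewer blocks are
# stored than the logical total -- the effect LiveDFS exploits for VM images.
print(len(store.blocks), "unique blocks for", len(ra) + len(rb), "logical blocks")
assert store.read(ra) == image_a
```

VM images of the same Linux distribution share most of their blocks, which is why the thesis reports large space savings across its 42-image dataset.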
114

Comparison of Auto-Scaling Policies Using Docker Swarm / Jämförelse av autoskalningspolicies med hjälp av Docker Swarm

Adolfsson, Henrik January 2019
When deploying applications in the cloud, two similar software components are commonly used: Virtual Machines and Containers. In recent years containers have seen an increase in popularity and usage, in part because of tools such as Docker and Kubernetes. Virtual Machines (VMs) have also seen an increase in usage as more companies move to solutions in the cloud with services like Amazon Web Services, Google Compute Engine, Microsoft Azure and DigitalOcean. There are also solutions using auto-scaling, a technique where VMs are commissioned and deployed to as load increases in order to maintain application performance, and decommissioned as load decreases to reduce costs. In this thesis we implement and evaluate auto-scaling policies that use both Virtual Machines and Containers. We compare four different policies, including two baseline policies. For the non-baseline policies we define one policy that uses a single Container per Virtual Machine and one that uses several Containers per Virtual Machine. To compare the policies we deploy an image-serving application and run workloads against it. We find that the choice of deployment strategy and policy matters for response time and error rate. We also find that deploying applications as described in the method is estimated to take roughly 2 to 3 minutes.
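A threshold-based auto-scaling policy of the kind compared in the thesis can be sketched as a simple control loop. The thresholds, the cooldown value, and the `cluster` interface (`get_load`, `replicas`, `scale_to`) below are illustrative assumptions standing in for calls to the Swarm API or a VM provider, not the thesis's measured configuration or an actual Docker SDK signature:

```python
import time

SCALE_UP_AT = 0.75    # average utilisation above which we add a replica
SCALE_DOWN_AT = 0.25  # average utilisation below which we remove one
COOLDOWN_S = 60       # wait between scaling actions to avoid oscillation

def autoscale(cluster, min_replicas=1, max_replicas=10):
    """Grow or shrink a service one replica at a time based on average load."""
    while True:
        load = cluster.get_load()  # e.g. mean CPU utilisation across replicas
        n = cluster.replicas()
        if load > SCALE_UP_AT and n < max_replicas:
            cluster.scale_to(n + 1)  # commission a container (and a VM if needed)
        elif load < SCALE_DOWN_AT and n > min_replicas:
            cluster.scale_to(n - 1)  # decommission to cut cost
        time.sleep(COOLDOWN_S)
```

Under the one-container-per-VM policy, each `scale_to` step would also commission or decommission a VM, which is where the 2-to-3-minute deployment time noted above becomes the dominant cost of reacting to load.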
115

Access-pattern-aware data management in cloud platforms / CUHK electronic theses & dissertations collection

January 2015
Database outsourcing is an emerging paradigm for data management in which data are stored in third-party servers. With the advance of cloud computing, database outsourcing has become popular and widely adopted; as a result, many technical challenges have arisen. In this thesis, we study two problems with respect to these challenges, and propose solutions for each problem that take access patterns into consideration.

The first problem is raised from the viewpoint of service providers. We study the problem of data allocation in scalable distributed database systems for achieving the high-availability feature of cloud services. We propose a data allocation algorithm which uses time series models built from previous access patterns to perform load forecasting and reallocate data fragments to balance the workload within the system. Simulation results show that, with accurate forecasting, the proposed algorithm performs better than general threshold-based algorithms.

The second problem addresses the clients' concern that service providers may not be trustworthy. We first illustrate how service providers can infer sensitive information through query access patterns even when data are encrypted. We then propose techniques that break down large queries and randomize query access patterns so that service providers cannot infer sensitive information with a high degree of certainty. Experiments on benchmark data show that a high level of access privacy can be achieved by the proposed techniques with a reasonable overhead.

Li, Shun Pun. Thesis (M.Phil.)--Chinese University of Hong Kong, 2015. Includes bibliographical references (leaves 86-93). Abstract also in Chinese.
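The first solution's core loop — forecast each fragment's load from its access history, then move fragments from the hottest node toward the coolest — can be sketched as follows. The exponential-smoothing forecast and the greedy reallocation rule are illustrative assumptions; the thesis's actual time series models and allocation algorithm may differ:

```python
# Sketch of forecast-driven data reallocation. Assumes each fragment keeps a
# short history of per-period access counts; the smoothing forecast and the
# greedy move rule are illustrative, not the thesis's exact algorithm.

def forecast(history, alpha=0.5):
    """Exponential smoothing over past access counts (a simple time series model)."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def rebalance(placement, histories, imbalance=1.5):
    """placement: fragment -> node; histories: fragment -> [access counts]."""
    load = {}
    for frag, node in placement.items():
        load[node] = load.get(node, 0.0) + forecast(histories[frag])
    hot = max(load, key=load.get)
    cold = min(load, key=load.get)
    if load[hot] > imbalance * load[cold]:
        # Move the hot node's lightest fragment, disturbing the least data.
        frag = min((f for f, n in placement.items() if n == hot),
                   key=lambda f: forecast(histories[f]))
        placement[frag] = cold
    return placement

placement = {"f1": "n1", "f2": "n1", "f3": "n2"}
histories = {"f1": [90, 110, 130], "f2": [10, 12, 9], "f3": [20, 18, 22]}
print(rebalance(placement, histories))  # f2 expected to move from n1 to n2
```

The advantage over a purely threshold-based scheme is that the forecast lets the system move fragments before the imbalance materialises, rather than after.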
116

A study on resource allocation strategies for cloud robotic systems / CUHK electronic theses & dissertations collection

January 2014
The new approach of cloud robotics takes advantage of cloud computing as a vast resource pool for massively parallel computation and the sharing of data. In addition, a cloud robotic system removes overheads for maintenance and updates, and reduces dependence on user middleware. This is of particular interest for service robots, because on-board computation entails additional power requirements which may reduce operating duration and constrain robot mobility, as well as increase costs. In order to utilize cloud technology in service robots, it is crucial to allow different types of robots to share information and to develop new skills on the cloud.

In general, this is cast as a dynamic resource allocation problem: given a set of resources and a sequence of agents, the goal is to distribute resources to agents in an optimal manner. The resource allocation problem is NP-hard in general. This thesis strives to minimize resource usage and task completion time by scheduling a number of requests from robots. However, actual realizations of fully distributed cloud robotic systems are rarely found in the community, and unconstrained resources in the cloud are not commonly available in practice. Therefore, the optimization of autonomously implemented resource allocation is the primary focus of the thesis.

While a respectable amount of work has been done on both resource and task allocation, there is still a need for research towards the integration of these problems in a typical cloud robotic system. For the outlined difficulties, this thesis presents novel research on the following aspects. First, the underlying architecture of Multi Sensor Data Retrieval (MSDR) is implemented on a Twisted-based socket framework for asynchronous data transmission, which is also investigated as an effective decentralized method for multi-robot coordination, task assignment and the establishment of service contracts. Second, a market-based scheduling mechanism is proposed for the dynamic resource allocation problem in cloud robotics; a set of empirical Quality of Service (QoS) criteria is optimized, and in particular Time to Response (ToR) is minimized to fulfil the Firm Real-Time (FRT) requirement of robotic tasks. Third, a Link Quality Matrix (LQM) auction-based negotiation strategy is proposed to relieve competition among multi-robot systems in Mobile Ad-hoc Networks (MANETs), together with an incremental auction-based strategy that considers hops, time delay and link quality. Both fair allocation in the absence of human intervention and biased allocation when users have preferences are optimized for multi-robot systems in MANETs. By tackling these issues, this thesis contributes to the general integration of cloud robotic systems into daily life.

Future research will focus on task-oriented problems, such as smart home surveillance and guiding, which could be better solved by benefiting from cloud robotics. Solutions will be proposed in a bidirectional way, considering both data uploading and downloading.

Wang, Lujia. Thesis (Ph.D.)--Chinese University of Hong Kong, 2014. Includes bibliographical references (leaves 115-127). Abstract also in Chinese.
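The auction-based negotiation at the heart of the third contribution can be illustrated with a minimal single-item auction in which bids are scored by link quality, hops and delay. The scoring formula and its weights are invented for illustration; the thesis's LQM and incremental strategies are more elaborate:

```python
# Toy link-quality-weighted auction for assigning one task to one robot.
# The scoring formula (weights on link quality, hops, delay) is an invented
# illustration, not the thesis's actual LQM or incremental auction strategy.

def score(bid, w_link=0.6, w_hops=0.2, w_delay=0.2):
    # Higher link quality is better; more hops and higher delay are worse.
    return (w_link * bid["link_quality"]
            - w_hops * bid["hops"]
            - w_delay * bid["delay_ms"] / 100.0)

def run_auction(bids):
    winner = max(bids, key=score)
    return winner["robot"], score(winner)

bids = [
    {"robot": "r1", "link_quality": 0.9,  "hops": 1, "delay_ms": 30},
    {"robot": "r2", "link_quality": 0.7,  "hops": 3, "delay_ms": 10},
    {"robot": "r3", "link_quality": 0.95, "hops": 4, "delay_ms": 80},
]
robot, s = run_auction(bids)
print(robot, round(s, 3))  # r1 wins: good link, few hops, moderate delay
```

Biased allocation of the kind the thesis describes for user preferences could be modelled by adding a per-robot preference term to the score.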
117

Failure Prediction using Machine Learning in a Virtualized HPC System and application

Mohammed, Bashir, Awan, Irfan U., Ugail, Hassan, Muhammad, Y. January 2019
Failure is an increasingly important issue in high performance computing and cloud systems. As large-scale systems continue to grow in scale and complexity, mitigating the impact of failure and providing accurate predictions with sufficient lead time remains a challenging research problem. Traditional fault-tolerance strategies such as regular checkpointing and replication are not adequate because of the emerging complexities of high performance computing systems. This necessitates an effective and proactive failure management approach aimed at minimizing the effect of failure within the system. With the advent of machine learning techniques, the ability to learn from past information to predict future patterns of behaviour makes it possible to predict potential system failures more accurately. Thus, in this paper, we explore the predictive abilities of machine learning by applying a number of algorithms to improve the accuracy of failure prediction. We have developed a failure prediction model using time series and machine learning, and performed comparison-based tests on prediction accuracy. The primary algorithms we considered are the Support Vector Machine (SVM), Random Forest (RF), k-Nearest Neighbors (KNN), Classification and Regression Trees (CART) and Linear Discriminant Analysis (LDA). Experimental results show that the average prediction accuracy of our model when using SVM is 90%, which is effective compared to the other algorithms. This finding means that our method can effectively predict possible future system and application failures within the system.
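The five classifiers named in the abstract are all available in scikit-learn, so the shape of the comparison can be reproduced in outline. The synthetic dataset below is a stand-in for the paper's real failure logs, which are not included here, and the default hyperparameters are assumptions:

```python
# Sketch of the classifier comparison described above, using scikit-learn.
# X/y are synthetic stand-ins for the paper's failure-log features and labels.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier           # CART
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1],  # failures are the rare class
                           random_state=0)

models = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "CART": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

With imbalanced failure data, raw accuracy flatters the majority class, so a real evaluation would also report precision/recall or lead-time-aware metrics.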
118

HADOOP-EDF: LARGE-SCALE DISTRIBUTED PROCESSING OF ELECTROPHYSIOLOGICAL SIGNAL DATA IN HADOOP MAPREDUCE

Wu, Yuanyuan 01 January 2019 (has links)
A rapidly growing volume of electrophysiological signal data is being generated for clinical research in neurological disorders. European Data Format (EDF) is a standard format for storing electrophysiological signals. However, the bottleneck of existing signal analysis tools for handling large-scale datasets is the sequential loading of large EDF files before performing an analysis. To overcome this, we develop Hadoop-EDF, a distributed signal processing tool that loads EDF data in parallel using Hadoop MapReduce. Hadoop-EDF uses a robust data partition algorithm that makes EDF data processable in parallel. We evaluate Hadoop-EDF's scalability and performance by leveraging two datasets from the National Sleep Research Resource and running experiments on Amazon Web Services clusters. On a 20-node cluster, Hadoop-EDF performs 27 times and 47 times better than sequential processing of 200 small files and 200 large files, respectively. The results demonstrate that Hadoop-EDF is more suitable and effective for processing large EDF files.
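The split-then-process pattern that Hadoop-EDF applies to EDF files can be sketched in the Hadoop Streaming style, where a mapper keys each sample by channel and a reducer aggregates per channel. The "channel&lt;TAB&gt;value" record format and the per-channel mean statistic are illustrative assumptions; Hadoop-EDF's actual partitioner works on binary EDF headers and data blocks and is more involved:

```python
# Hadoop Streaming-style sketch of per-channel processing after a parallel
# EDF load. Assumes an upstream step has emitted text records
# "channel<TAB>sample_value" -- an illustrative format, not Hadoop-EDF's.
import sys
from itertools import groupby

def mapper(lines):
    # Key each sample by its channel so the shuffle groups channels together.
    for line in lines:
        channel, value = line.rstrip("\n").split("\t")
        yield channel, float(value)

def reducer(pairs):
    # Pairs arrive sorted by key, as after Hadoop's shuffle-and-sort phase.
    for channel, group in groupby(pairs, key=lambda kv: kv[0]):
        values = [v for _, v in group]
        yield channel, sum(values) / len(values)  # stand-in per-channel statistic

if __name__ == "__main__":
    mapped = sorted(mapper(sys.stdin))  # local stand-in for shuffle-and-sort
    for channel, mean in reducer(mapped):
        print(f"{channel}\t{mean:.4f}")
```

Run locally as `cat samples.tsv | python edf_stats.py`; under Hadoop Streaming the mapper and reducer would ship as separate scripts passed to the streaming jar, with the framework performing the shuffle across nodes.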
119

Flexible Computing with Virtual Machines

Lagar Cavilla, Horacio Andres 30 March 2011
This thesis is predicated upon a vision of the future of computing with a separation of functionality between core and edges, very similar to that governing the Internet itself. In this vision, the core of our computing infrastructure is made up of vast server farms with an abundance of storage and processing cycles. Centralization of computation in these farms, coupled with high-speed wired or wireless connectivity, allows for pervasive access to a highly-available and well-maintained repository for data, configurations, and applications. Computation in the edges is concerned with provisioning application state and user data to rich clients, notably mobile devices equipped with powerful displays and graphics processors. We define flexible computing as systems support for applications that dynamically leverage the resources available in the core infrastructure, or cloud. The work in this thesis focuses on two instances of flexible computing that are crucial to the realization of the aforementioned vision. Location flexibility aims to, transparently and seamlessly, migrate applications between the edges and the core based on user demand. This enables performing the interactive tasks on rich edge clients and the computational tasks on powerful core servers. Scale flexibility is the ability of applications executing in cloud environments, such as parallel jobs or clustered servers, to swiftly grow and shrink their footprint according to execution demands. This thesis shows how we can use system virtualization to implement systems that provide scale and location flexibility. To that effect we build and evaluate two system prototypes: Snowbird and SnowFlock. We present techniques for manipulating virtual machine state that turn running software into a malleable entity which is easily manageable, is decoupled from the underlying hardware, and is capable of dynamic relocation and scaling. This thesis demonstrates that virtualization technology is a powerful and suitable tool to enable solutions for location and scale flexibility.
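Scale flexibility of the kind SnowFlock provides is often described as a fork-like primitive that rapidly clones a running VM into many short-lived workers. The `vm_fork` function below is a hypothetical shorthand for that idea, using OS processes as stand-ins for cloned VMs; it is not SnowFlock's real interface:

```python
# Hypothetical sketch of fork-style scale flexibility: clone running state
# into n workers for a burst of parallel work, then shrink back to one.
# vm_fork and its semantics are illustrative, not SnowFlock's actual API.
from multiprocessing import Pool  # processes stand in for cloned VMs

def vm_fork(n, work, tasks):
    """'Clone' the running state n ways; each clone handles a slice of tasks."""
    with Pool(n) as clones:
        results = clones.map(work, tasks)  # clones live only for the burst
    return results                         # footprint shrinks back on exit

def render_frame(frame_id):
    return f"frame-{frame_id}-rendered"    # placeholder parallel workload

if __name__ == "__main__":
    # Grow the footprint to 4 workers only for the duration of the burst.
    print(vm_fork(4, render_frame, range(8)))
```

The analogy captures the thesis's point that when cloning is cheap, applications can treat their footprint as elastic, growing and shrinking per burst rather than provisioning for the peak.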
