31.
The Multi-tiered Future of Storage: Understanding Cost and Performance Trade-offs in Modern Storage Systems. Iqbal, Muhammad Safdar, 19 September 2017.
In the last decade, the landscape of storage hardware and software has changed considerably. Storage hardware has diversified from hard disk drives and solid state drives to include persistent memory (PMEM) devices such as phase change memory (PCM) and Flash-backed DRAM. On the software side, the increasing adoption of cloud services for building and deploying consumer and enterprise applications is driving the use of cloud storage services. Cloud providers have responded by offering a plethora of storage services, each of which has unique performance characteristics and pricing. We argue that this variety represents an opportunity for modern storage systems and can be leveraged to improve their operational costs.
We propose that storage tiering is an effective technique for balancing operational or deployment costs and performance in such modern storage systems. We demonstrate this via three key techniques. First, THMCache, which leverages tiering to conserve the lifetime of PMEM devices, hence saving hardware upgrade costs. Second, CAST, which leverages tiering between multiple types of cloud storage to deliver higher utility (i.e., performance per unit of cost) for cloud tenants. Third, we propose a dynamic pricing scheme for cloud storage services, which leverages tiering to increase the cloud provider's profit or offset their management costs. / Master of Science / Storage and retrieval of data is one of the key functions of any computer system. Improvements in storage hardware and software can help computer users (a) store data faster, which makes for faster overall performance, and (b) increase storage capacity, which helps hold the growing amount of data generated by modern computer users. Typically, most computers are equipped with either a hard disk drive (HDD) or the newer and faster solid state drive (SSD) for data storage. In the last decade, however, the landscape of data storage hardware and software has advanced considerably. On the hardware side, several hardware makers are introducing persistent memory (PMEM) devices, which provide very high speed, high capacity storage at reasonable price points. On the software side, the increasing adoption of cloud services by software developers building and operating consumer and enterprise applications is driving the use of cloud storage services. These services allow developers to store large amounts of data without having to manage any physical hardware, paying for the service under a usage-based pricing structure. However, not every application has the same speed and capacity needs; hence, cloud service providers have responded by offering a plethora of storage services, each of which has unique performance characteristics and pricing. We argue that this variety represents an opportunity for modern storage systems and can be leveraged to improve their operating costs.
Storage tiering is a classical technique that involves partitioning the stored data and placing each partition on a different storage device. This lets applications use multiple devices at once, taking advantage of each one's strengths and mitigating their weaknesses. We propose that storage tiering is a relevant and effective technique for balancing operational or deployment costs and performance in modern storage systems such as PMEM devices and cloud storage services. We demonstrate this via three key techniques. First, THMCache, which leverages tiering between multiple types of storage hardware to conserve the lifetime of PMEM devices, hence saving hardware upgrade costs. Second, CAST, which leverages tiering between multiple types of cloud storage services to deliver higher utility (i.e., performance per unit of cost) for software developers using these services. Third, we propose a dynamic pricing scheme for cloud storage services, which leverages tiering between multiple cloud storage services to increase the cloud service provider's profit or offset their management costs.
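To make the utility notion concrete, here is a minimal sketch of a utility-driven tier choice, assuming invented tier names and price/throughput figures; the actual CAST policy models workloads and costs in far more detail.

```python
# Illustrative only: a toy utility-based placement rule in the spirit of
# CAST, where utility = performance per unit of cost. The tier names and
# price/throughput figures below are invented, not real cloud prices.
tiers = {
    "object-store": {"throughput_mbps": 80,  "usd_per_gb_month": 0.023},
    "block-store":  {"throughput_mbps": 250, "usd_per_gb_month": 0.100},
    "ssd-tier":     {"throughput_mbps": 900, "usd_per_gb_month": 0.170},
}

def utility(tier, gb, access_share):
    """Delivered performance per dollar for one workload partition."""
    perf = tier["throughput_mbps"] * access_share   # weight by access frequency
    cost = tier["usd_per_gb_month"] * gb            # monthly capacity cost
    return perf / cost

# Place the hot partition (10% of a 500 GB dataset, 90% of accesses) on
# whichever tier maximizes utility.
best = max(tiers, key=lambda n: utility(tiers[n], gb=50, access_share=0.9))
print("hot partition ->", best)
```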
32.
Latency-Aware Pricing in the Cloud Market. Yang Zhang, 10 June 2019.
Latency is regarded as the Achilles heel of cloud computing. Pricing is an essential component of the cloud market, since it directly affects not only a cloud service provider's (CSP's) revenue but also a user's budget. This dissertation investigates latency-aware pricing schemes that provide rigorous performance guarantees for the cloud market. The research is conducted along the following major problems:

First, we address a major challenge confronting CSPs that utilize a tiered storage architecture (with cold storage and hot storage): how to maximize their overall profit over a variety of storage tiers that offer distinct characteristics, as well as over file placement and access request scheduling policies. To this end, we propose a scheme where the CSP offers a two-stage auction process for (a) requesting storage capacity and (b) requesting accesses with latency requirements. Our two-stage bidding scheme provides a hybrid storage and access optimization framework with the objective of maximizing the CSP's total net profit over four dimensions: file acceptance decision, placement of accepted files, file access decision, and access request scheduling policy. The proposed optimization is a mixed-integer nonlinear program that is hard to solve. We propose an efficient heuristic to relax the integer optimization and to solve the resulting nonlinear stochastic programs. The algorithm is evaluated under different scenarios and with different storage system parameters, and insightful numerical results are reported by comparing the proposed approach with other profit-maximization models. In certain simulation scenarios, we see a profit increase of over 60% for our method compared to other schemes.

Second, we resolve one of the challenges of using Amazon Web Services (AWS). Amazon Elastic Compute Cloud (EC2) provides the two most popular pricing schemes: (i) the costly on-demand instance, where the job is guaranteed to be completed, and (ii) the cheap spot instance, where a job may be interrupted. We consider a user who can select a combination of on-demand and spot instances to finish a task, and who thus needs to find the optimal bidding price for the spot instance and the portion of the job to be run on the on-demand instance. We formulate this as an optimization problem and seek the optimal solution. We consider three bidding strategies: one-time requests with an expected guarantee, one-time requests with a penalty for incomplete jobs that violate the deadline, and persistent requests. Even without a penalty on incomplete jobs, the optimization problem turns out to be non-convex. Nevertheless, we show that the portion of the job to be run on the on-demand instance is at most half. If the job has a longer execution time or a shorter deadline, the bidding price is higher, and vice versa. Additionally, the user never selects the on-demand instance if the execution time is smaller than the deadline. The numerical results illustrate the sensitivity of the effective portfolio to several of the model's parameters. Our empirical analysis of Amazon EC2 data shows that our strategies can be employed on real instances, where the expected total cost of the proposed scheme decreases by over 45% compared to the baseline strategy.
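As a rough illustration of the second problem, the sketch below grid-searches a toy on-demand/spot portfolio. The uniform spot-price model, the penalty term, and all prices are assumptions made for illustration, not the dissertation's actual formulation.

```python
import numpy as np

# A toy version of the on-demand/spot portfolio problem: run a fraction
# `alpha` of the job on-demand and bid `bid` for the remainder on the
# spot market.
rng = np.random.default_rng(0)
spot_prices = rng.uniform(0.02, 0.12, size=10_000)  # sampled $/hour (assumed)
ON_DEMAND_PRICE = 0.10  # $/hour (assumed)
HOURS = 8.0             # job execution time

def expected_cost(alpha, bid, penalty_rate=0.25):
    # When the bid clears the spot price, the spot share is billed at the
    # spot price; otherwise the unfinished share incurs a penalty (assumed).
    spot_share = np.where(spot_prices <= bid,
                          spot_prices * HOURS,
                          penalty_rate * HOURS)
    return alpha * HOURS * ON_DEMAND_PRICE + (1 - alpha) * spot_share.mean()

# Brute-force the cheapest (alpha, bid) pair on a coarse grid.
grid = [(a, b) for a in np.linspace(0, 1, 21)
               for b in np.linspace(0.02, 0.12, 21)]
alpha, bid = min(grid, key=lambda p: expected_cost(*p))
print(f"alpha={alpha:.2f} on-demand share, bid=${bid:.3f}/hour")
```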
33.
Decentraliserad datalagring baserad på blockkedjan : En studie som jämför Storj.io och Microsoft Azure Blob Storage / Decentralized data storage based on a blockchain : A comparative study between Storj.io and Microsoft Azure Blob Storage. Ay, Konstantin; George, Joshua, January 2018.
The majority of cloud storage platforms rely on a centralized structure, with the most popular being Microsoft Azure. Centralization forces consumers to rely on the provider to maintain the accessibility and security of their data. Platforms such as Storj.io, however, are based on a decentralized structure. To become decentralized, Storj.io uses blockchain technology as a means to create an automated consensus mechanism between the entities storing the data. There is, however, little research regarding performance and security issues on a decentralized platform based on blockchain technology. The purpose of this study is to identify the beneficial and non-beneficial aspects of using blockchain-based decentralized cloud storage as a substitute for centralized storage, focusing on performance and security. A comparative case study was executed, consisting of an experiment and a literature study. Quantitative data from the experiment was used in a hypothesis test to determine whether there were any performance differences between Microsoft Azure Blob Storage and Storj.io. A literature study generating qualitative data was then made to identify differences in security measures and, from that, to discuss potential security risks in a service like Storj.io. The study found that the performance of Storj.io was lower than that of Microsoft Azure Blob Storage; the cause was identified as the many additional steps during resource allocation in Storj.io compared to Blob Storage. The security risks identified in Storj.io through the literature study were generally connected to the consensus mechanism, although research shows that it is very unlikely for the consensus mechanism to be compromised. Because Microsoft Azure's service does not use a blockchain, these risks do not exist there; on the other hand, for secure data transfer to Azure's service, consumers have to implement client-side encryption manually. This study therefore could not conclude whether Storj.io is a safer alternative, since a consumer using the Microsoft Azure service is responsible for implementing security measures. The conclusions drawn from this study are intended as new knowledge in the field of blockchain-based decentralized cloud storage, and as an outset for deciding between centralized cloud storage and blockchain-based decentralized cloud storage from a performance and security perspective.
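The hypothesis test on the experiment's quantitative data could, for instance, take the form of a Welch's t-test on per-platform transfer times; the samples below are invented placeholders, not the thesis's measurements.

```python
from scipy import stats

# Invented placeholder samples (seconds to transfer the same test file);
# the thesis's real measurements are not reproduced here.
azure_times = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 4.1]
storj_times = [6.9, 7.4, 6.5, 7.1, 7.8, 6.8, 7.2, 7.0]

# Welch's t-test: do the platforms differ in mean transfer time?
t_stat, p_value = stats.ttest_ind(azure_times, storj_times, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.6f}")
# A small p (< 0.05) rejects the hypothesis of equal means.
```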
34.
Jämförelse mellan populära molnlagringstjänster : Ur ett hastighetsperspektiv / A comparison of popular cloud storage services : From a speed perspective. Malmborg, Rasmus; Ödalen Frank, Leonard, January 2014.
Cloud storage services have seen increasing usage and form an emerging market. This paper has focused on examining various cloud storage services from a speed perspective. When small files are exchanged between client and server, the speed of the service is of little importance; for larger transfers, however, the speed of the service plays a more important role. Regular speed measurements were carried out against the most popular cloud storage services. The tests were performed from Sweden and the USA, over several days and at different times of day, to determine whether speed differences exist. The results show that there are significant differences in speed between Sweden and the United States. In Sweden, Mega and Google Drive had the highest average speed. Within the United States, Google Drive had the highest average speed, but the variability between the services was not as great as in Sweden. Between different time periods it was difficult to discern a pattern, with the exception of Google Drive in Sweden, which consistently worked best during the night/morning. Mega also worked best during the night.
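A bare-bones version of such a repeated speed probe might look as follows; the test URL is a placeholder, and the thesis's actual measurement setup is not reproduced here.

```python
import time
import urllib.request

# A minimal timed-download probe in the spirit of the measurements above.
TEST_URL = "https://example.com/testfile.bin"  # hypothetical test object

def mean_download_speed(url, runs=5):
    speeds_mb_s = []
    for _ in range(runs):
        start = time.perf_counter()
        data = urllib.request.urlopen(url, timeout=60).read()
        elapsed = time.perf_counter() - start
        speeds_mb_s.append(len(data) / elapsed / 1e6)  # MB/s
    return sum(speeds_mb_s) / len(speeds_mb_s)

if __name__ == "__main__":
    print(f"mean speed: {mean_download_speed(TEST_URL):.2f} MB/s")
```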
35.
Tromos : a software development kit for virtual storage systems / Tromos : un cadre pour la construction de systèmes de stockage distribués. Nikolaidis, Fotios, 22 May 2019.
Modern applications tend to diverge both in their I/O profiles and their storage requirements. Matching a scientific or commercial application with a general-purpose system will most likely yield suboptimal performance. Even in the presence of purpose-specific systems, applications with multiple classes of workloads still need to disseminate each workload to the right system. This strategy, however, is not trivial, as different platforms aim at diversified goals and therefore require the application to incorporate multiple codepaths. Implementing such codepaths is non-trivial, requires a lot of effort and programming skill, and is error-prone. The hurdles get worse when applications need to leverage multiple data-stores in parallel. In this dissertation, we introduce "storage containers" as the next logical step in the storage evolution.
A "storage container" is a virtual infrastructure that decouples the application from the underlying data-stores, in the same way Docker decouples the application runtime from the physical servers. In other words, it is middleware that separates changes made to application code by science users from changes made to I/O actions by developers or administrators. To facilitate the development and deployment of a "storage container" we introduce a framework called Tromos. Through its lens, all it takes for an application architect to spin up a custom storage solution is to model the target environment in a definition file and let the framework handle the rest. Tromos comes with a repository of plugins from which the architect can choose to optimize the container for the application at hand. Available options include data transformations, data placement policies, data reconstruction methods, namespace management, and on-demand consistency handling. As a proof of concept, we use Tromos to prototype customized storage environments, which we compare against Gluster, a well-established and versatile storage system. The results show that application-tailored "storage containers", even if auto-produced, can outperform more mature general-purpose systems by merely removing the unnecessary overhead of unused features.
36.
Vklass datalagring : En studie om datalagring på ett kostnads- och prestandaeffektivt sätt / Vklass Data Storage : A study on data storage in a cost and performance effective way. Zalet, Ayman, January 2016.
The study in this report examines data storage for the application Vklass. The problem studied is that the application receives and stores a large number of files each month on a server located in Stockholm, which is regularly backed up to another server. The current data storage solution means that the server storing the files keeps growing without further structure, since all uploaded files are stored in the same folder; this yields more complex and less lucid storage and management of the data, and limits the application's performance when the backup copy needs to be restored. The owners of the application request a solution that manages all the stored files in a more cost- and performance-effective way. The solution should also enable storing data with a more lucid and convenient structure. The studied solution methods include storing data locally and data storage in the biggest public cloud services. The investigation and analysis of the chosen methods showed that data storage in cloud services fulfilled the requirements for the new solution, whereas the methods for storing data locally showed deficits and did not fulfill these requirements. The study identified Microsoft Azure Storage as the most suitable public cloud solution for the given problem, since data is stored in a performance- and structure-effective way. The solution is also cost-effective: even the most expensive storage option provided by Azure Storage lowers Vklass's data storage costs by 84 percent over the coming five years compared to the current storage method.
37.
LEIA: The Live Evidence Information Aggregator : A Scalable Distributed Hypervisor-based Peer-2-Peer Aggregator of Information for Cyber-Law Enforcement I. Homem, Irvin, January 2013.
The Internet in its most basic form is a complex information-sharing organism. There are billions of interconnected elements with varying capabilities that work together to support numerous activities (services) through this information sharing. In recent times, these elements have become portable, mobile, highly computationally capable, and more than ever intertwined with human controllers and their activities. They are also rapidly being embedded into other everyday objects, sharing more and more information in order to facilitate automation, signaling that the rise of the Internet of Things is imminent. In every human society there are miscreants who work against the common good and engage in illicit activity, and it is no different within the society interconnected by the Internet. Law enforcement attempts to curb the perpetrators of such activities, but this is immensely difficult when the Internet is the playing field: the amount of information that investigators must sift through is massive, and the prosecution timelines stated by law are prohibitively narrow. The main solution to this Big Data problem is seen to be the automation of the digital investigation process, encompassing the entire process from the detection of malevolent activity, through the seizure/collection of evidence and the analysis of the collected evidentiary data, to the presentation of valid postulates. This paper focuses mainly on the automation of the evidence capture process in an Internet of Things environment; however, in order to achieve this comprehensively, the surrounding procedures of detecting malevolent activity and analyzing the collected evidentiary data are also touched upon. To this effect we propose the Live Evidence Information Aggregator (LEIA) architecture, which aims to be a comprehensive automated digital investigation tool. LEIA is in essence a collaborative framework that hinges upon interactivity and the sharing of resources and information among participating devices, in order to achieve the necessary efficiency in data collection in the event of a security incident. It makes use of a variety of technologies to achieve its goals: crowdsourcing among devices for more accurate malicious-event detection; hypervisors with inbuilt intrusion detection capabilities for efficient data capture; peer-to-peer networks for rapid transfer of evidentiary data to a centralized data store; cloud storage for holding massive amounts of data; and the Resource Description Framework from Semantic Web technologies for interoperability of data storage formats among the heterogeneous devices. Within the description of the LEIA architecture, a peer-to-peer protocol based on the BitTorrent protocol is proposed, corresponding data storage and transfer formats are developed, and network security protocols are also taken into consideration. In order to demonstrate the LEIA architecture developed in this study, a small-scale prototype with limited capabilities has been built and tested. The prototype functionality focuses only on the secure, remote acquisition of the hard disk of an embedded Linux device over the Internet and its subsequent storage on a cloud infrastructure.
The successful implementation of this prototype shows that the architecture is feasible, and that automation makes the otherwise arduous evidence seizure process easy and quick to perform.
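As an aside on the interoperability point, the sketch below shows how an evidence record could be described in RDF using the rdflib library; the vocabulary is invented for illustration, and LEIA's actual schema is not reproduced here.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# How RDF can give heterogeneous devices one interoperable evidence
# format, as LEIA proposes. The EV vocabulary below is invented.
EV = Namespace("http://example.org/leia/evidence#")

g = Graph()
capture = URIRef("http://example.org/leia/capture/42")
g.add((capture, RDF.type, EV.DiskAcquisition))
g.add((capture, EV.sourceDevice, Literal("embedded-linux-node-7")))
g.add((capture, EV.imageSizeBytes, Literal(8_589_934_592, datatype=XSD.integer)))
g.add((capture, EV.sha256, Literal("9f86d081884c7d65...")))  # truncated digest

print(g.serialize(format="turtle"))
```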
38.
HUR DATALAGRING KAN MÖJLIGGÖRA OCH BEGRÄNSA VÄRDESKAPANDE MED BUSINESS INTELLIGENCE / How data storage can enable and limit value creation with Business Intelligence. Namér, Samuel; Shadman, Altai Jörgen; Svensson, Thomas, January 2021.
Organizations depend on IT for the successful completion of many organizational activities. In this paper, we aim to contribute to the research field and to awareness of the opportunities and limitations that data storage places on value creation with Business Intelligence. Thus, the research question asked in this thesis is: which opportunities and limitations does data storage place on value creation with Business Intelligence? A case study was conducted at an IT organization, along with two expert interviews, in order to answer the research question. Semi-structured interviews were held with developers and an IT architect of the IT organization. We conclude that there are situations where data storage affects BI systems, but factors such as BI maturity, time, and budget play a big part in how the value an IT organization aims to create can be realized. We identified that a migration to a graph database could give the IT organization more effective and optimized value creation through the BI system, owing to the advantages of graph databases for the type of data the IT organization works with.
39.
Framtidens produktionspersonal i den Smarta fabriken / The production staff of the future within the smart factory. Nilsson, Amanda; Lindqvist, Hanna, January 2016.
The project has explored the topic of the Smart factory, with a main focus on the future production staff. It aims to investigate how the production staff is affected as Volvo Cars Skövde Engine Plant (SkEP) becomes a Smart factory in the era of Industry 4.0. The Smart factory is defined by demands for mobile and wireless technologies, human orientation, the pursuit of flexible production with sustainable manufacturing, and the utilization of CPS (Cyber-Physical Systems), IoT (Internet of Things) and cloud storage. The current situation and the coming five to twenty years were examined in order to characterize the future production staff, by conducting an observational study and several interviews. The studies found that SkEP cannot yet be regarded as smart, since several of the demands in the definition are unmet, and five years is considered too short a time for the plant to fulfill them. However, according to the interviews and literature, SkEP is expected to become smart within twenty years, after refinement of existing technologies and implementation of new ones. The authors estimate leadership, information, IT and production layout to be the areas that require the most effort. The future production staff is expected to be flexible with regard to workplace and working hours and able to manage multiple variants. They should be included in self-supporting teams where every individual possesses an expertise, is motivated and participates. Production staff should perform complex, varied jobs with more responsibility, supported by decision support systems. The staff's competence should consist of technical education, strong basic and layout knowledge, and the ability to contribute to the collection of information and analyses. Interaction with technology is expected to expand, and the personnel must therefore have a well-established comprehension of technology. The concept of the Smart factory is extensive and relatively new, which means that it is constantly evolving; thus it is important for SkEP to stay updated and adjust to outside influences.
40.
混合雲帳號整合、檔案權限管理與同步系統之研究 / A Research into Account Integration, Authorization and Content Synchronization of Hybrid Cloud. 丁柏元, Unknown Date.
With the rapid growth of cloud computing and the Internet, a variety of public cloud service providers have appeared on the market; enterprises have more choices and more economical IT solutions, and are therefore willing to spend more on the public cloud.
Although the most significant cloud-based productivity service is file synchronization and sharing (FSS), enterprises cannot fully trust the public cloud; they therefore deploy file management systems on a hybrid cloud, storing less sensitive data on the public cloud and highly sensitive data on the private cloud.
This study aims to design a system for hybrid cloud deployments that solves the problem of managing multiple online accounts and synchronizes data and permissions between different devices and clouds to maintain consistency. We propose a method to integrate accounts across different clouds and design a file synchronization mechanism consisting of a three-message exchange and two-stage synchronization, which we use to implement the system; we also build a portal service.
Finally, we test the three main modules of the system to verify their correctness and stability; all tests pass. We also measure the synchronization time for files of different sizes to verify the system's effectiveness and practicality.
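A hypothetical rendering of the three-message exchange might look as follows; the message names and fields are invented, only stage one (negotiation) is sketched, and the real system's protocol details are not reproduced here.

```python
# Hypothetical three-message sync negotiation; stage two would then
# transfer the planned files and replay permission changes.
def client_offer(local_files):
    """Message 1: the client advertises its file versions."""
    return {"type": "OFFER",
            "versions": {path: meta["version"] for path, meta in local_files.items()}}

def server_plan(offer, remote_files):
    """Message 2: the server answers with what to upload and download."""
    upload, download = [], []
    for path, version in offer["versions"].items():
        remote_version = remote_files.get(path, {}).get("version", -1)
        if version > remote_version:
            upload.append(path)
        elif version < remote_version:
            download.append(path)
    return {"type": "PLAN", "upload": upload, "download": download}

def client_ack(plan):
    """Message 3: the client confirms the plan before stage two begins."""
    return {"type": "ACK", "accepted": plan["upload"] + plan["download"]}

local = {"/docs/a.txt": {"version": 3}, "/docs/b.txt": {"version": 1}}
remote = {"/docs/a.txt": {"version": 2}, "/docs/b.txt": {"version": 4}}
plan = server_plan(client_offer(local), remote)
print(plan)              # upload ['/docs/a.txt'], download ['/docs/b.txt']
print(client_ack(plan))
```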