61 |
PURIC: A Multimedia Uniform Resource Identifier Management System. 2013 June (has links)
The types of content being transferred over the Internet are becoming richer and larger, and the number of social media channels users must sift through to publish and find content is also increasing. Average users upload and download ever richer and larger media files as they feel the urge to share their content with others. This work explores a novel process for publishing personal media files on social applications, in which the publisher retains control over the media while the implementation follows the principles of the WWW. The Personal URI Channel (PURIC) system is introduced as a process that can operate alongside social applications such as email clients, social networking sites (e.g., Twitter and Facebook), and emerging decentralized social networking sites. The PURIC system is a media resource link management tool used for publishing and maintaining the links published on social applications. This work explores the feasibility, benefits, and drawbacks of the PURIC system. It reveals the modularity and scalability of the system, and shows how it complements social applications without placing undue load on network traffic and server-side CPU processing.
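The core idea above can be sketched as a link table in which the published URI stays stable while the owner re-points or withdraws the underlying media. This is a minimal illustrative sketch, not the thesis's implementation; the class and method names (`PuricChannel`, `publish`, `resolve`, `revoke`) are assumptions.

```python
# Hypothetical sketch of a PURIC-style link table: the shared URI is stable,
# but the owner retains control over what (if anything) it resolves to.
class PuricChannel:
    """Maps stable personal URIs to the media locations they currently serve."""

    def __init__(self, base):
        self.base = base          # e.g. "https://example.org/puric" (made-up host)
        self._links = {}          # slug -> current media URL

    def publish(self, slug, media_url):
        """Publish (or re-point) a stable URI for a media resource."""
        self._links[slug] = media_url
        return f"{self.base}/{slug}"   # this link is what gets shared on social apps

    def resolve(self, slug):
        """Return the current media location, or None if the owner withdrew it."""
        return self._links.get(slug)

    def revoke(self, slug):
        """Owner keeps control: the media disappears without reissuing new links."""
        self._links.pop(slug, None)
```

Because social applications only ever see the stable URI, the publisher can replace or revoke the media later without the shared links going stale in a way the publisher cannot control.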
|
62 |
Efficient Resource Management for Cloud Computing Environments. Zhang, Qi. 23 September 2013 (has links)
Cloud computing has recently gained popularity as a cost-effective model for hosting and delivering services over the Internet. In a cloud computing environment, a cloud provider packages the physical resources in its data centers into virtual resources and offers them to service providers under a pay-as-you-go pricing model. A service provider, in turn, uses the rented virtual resources to host its services. This large-scale, multi-tenant architecture of cloud computing systems raises key challenges regarding how data center resources should be controlled and managed by both service and cloud providers.
This thesis addresses several key challenges pertaining to resource management in cloud environments. From the perspective of service providers, we address the problem of selecting appropriate data centers for service hosting, taking into account resource prices, service quality, and dynamic reconfiguration costs. From the perspective of cloud providers, since workload in real data centers can typically be divided into server-based applications and MapReduce applications with different performance and scheduling criteria, we provide separate resource management solutions for each type of workload. For server-based applications, we provide a dynamic capacity provisioning scheme that adjusts the number of active servers to achieve the best trade-off between energy savings and scheduling delay, while considering the heterogeneous resource characteristics of both workloads and physical machines. For MapReduce applications, we first analyze the run-time resource consumption of a large variety of MapReduce jobs and discover that it can vary significantly over time, depending on the phase the task is currently executing. We then present a novel scheduling algorithm that controls task execution at the level of phases, with the aim of improving both job running time and resource utilization. Through detailed simulations and experiments on real cloud clusters, we find that our proposed solutions achieve substantial gains over current state-of-the-art resource management solutions, and therefore have strong implications for the design of real cloud resource management systems in practice.
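The energy/delay trade-off behind dynamic capacity provisioning can be sketched as choosing the active-server count that minimizes energy cost plus a queueing-delay penalty. This is an illustrative toy model, not the thesis's algorithm; the cost weights and the simplified delay term are assumptions.

```python
# Toy sketch of the dynamic capacity provisioning trade-off: more active
# servers cost more energy but reduce scheduling delay. The weights and the
# delay model are invented for illustration.
def provisioning_cost(servers, arrival_rate, service_rate,
                      energy_per_server=1.0, delay_weight=5.0):
    """Energy cost plus a crude delay penalty; infeasible configs cost infinity."""
    capacity = servers * service_rate
    if capacity <= arrival_rate:
        return float("inf")                    # queue would grow without bound
    delay = 1.0 / (capacity - arrival_rate)    # simplified queueing-delay proxy
    return servers * energy_per_server + delay_weight * delay

def best_server_count(arrival_rate, service_rate, max_servers=100):
    """Active-server count with the lowest combined cost for the current load."""
    return min(range(1, max_servers + 1),
               key=lambda n: provisioning_cost(n, arrival_rate, service_rate))
```

As the arrival rate rises, the minimizer shifts toward more active servers; as the delay weight falls, it shifts toward energy savings, which is the trade-off the abstract describes.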
|
63 |
Cloud Computing: The Adoption of Cloud Computing for Small and Medium Enterprises. Gustafsson, Bennet; Orrgren, Alexander. January 2012 (has links)
The objective of this research was to investigate and understand the adoption of cloud computing and to identify the process of adopting cloud services. Data were collected through interviews. To capture both the users' and the providers' perspectives, two cases were investigated: one user case and one provider case. The results were divided into two parts: first, a number of categories that emerged when comparing the user case to the provider case; second, a process that describes the adoption of cloud computing. The categories in the first part are: decision process, definition of cloud computing, integration and security, adoption, and future development. In analyzing the results we came to the conclusion that both users and providers strive for simplicity and security, and to move responsibility away from the user. The adoption of cloud computing is not as complex as many organizations have assumed, and by moving applications and hardware out of the organization, the user can focus on its core strategies.
|
64 |
Dynamic Scale-out Mechanisms for Partitioned Shared-Nothing Databases. Karyakin, Alexey. January 2011 (has links)
For a database system used in pay-per-use cloud environments, elastic scaling becomes an essential feature, allowing costs to be minimized while accommodating load fluctuations. One approach to scalability involves horizontal database partitioning and dynamic migration of partitions between servers. We define a scale-out operation as the combination of provisioning a new server followed by migrating one or more partitions to the newly allocated server.
In this thesis we study the efficiency of different implementations of the scale-out operation in the context of online transaction processing (OLTP) workloads. We designed and implemented three migration mechanisms featuring different data transfer strategies. The first is based on SnowFlock, a modification of the Xen hypervisor, and uses on-demand block transfers for both server provisioning and partition migration. The second is implemented inside a database management system (DBMS) and uses bulk transfers, optimized for high bandwidth utilization, for partition migration. The third is a conventional application that uses SQL commands to copy partitions between servers.
We perform an experimental comparison of those scale-out mechanisms for disk-bound and CPU-bound configurations. When comparing the mechanisms we analyze their impact on whole-system performance and on the experience of individual clients.
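The scale-out operation as defined above, provision a new server and then migrate partitions to it, can be sketched as follows. The greedy "move hottest partitions until the new server reaches its fair share" policy is an illustrative assumption, not one of the thesis's three mechanisms.

```python
# Sketch of a scale-out operation: step 1 provisions a server, step 2 picks
# partitions to migrate to it. The selection policy is invented for illustration.
def scale_out(servers, provision_server):
    """servers: {server_name: {partition_name: load}}.
    Returns a migration plan as (partition, source, destination) tuples."""
    new_server = provision_server()                       # step 1: provision
    total = sum(sum(parts.values()) for parts in servers.values())
    target = total / (len(servers) + 1)                   # fair share per server
    plan, moved = [], 0.0
    # Consider partitions hottest-first so few migrations restore balance.
    candidates = sorted(((load, src, part)
                         for src, parts in servers.items()
                         for part, load in parts.items()), reverse=True)
    for load, src, part in candidates:                    # step 2: migrate
        if moved + load <= target:
            plan.append((part, src, new_server))
            moved += load
    return plan
```

Whichever transfer mechanism executes the plan (hypervisor block transfer, DBMS bulk transfer, or SQL copy), the planning step above is the same, which is what makes the three mechanisms directly comparable.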
|
65 |
Application of MapReduce to Ranking SVM for Large-Scale Datasets. Hu, Su-Hsien. 10 August 2010 (has links)
Nowadays, search engines rely increasingly on machine learning techniques to construct ranking models for web pages, using past user queries and clicks as training data. Several learning-to-rank methods exist for information retrieval, and among them the ranking support vector machine (Ranking SVM) has attracted much attention in the information retrieval community. One difficulty with Ranking SVM is the high computational cost of constructing a ranking model, due to the huge number of training data pairs when the training dataset is large. We adopt the MapReduce programming model to address this difficulty. MapReduce is a distributed computing framework introduced by Google and commonly adopted in cloud computing centers. It can easily process large-scale datasets using a large number of computers. Moreover, it hides the messy details of parallelization, fault tolerance, data distribution, and load balancing from programmers, allowing them to focus only on the underlying problem to be solved. In this paper, we apply MapReduce to Ranking SVM for processing large-scale datasets. We specify the Map function to solve the dual subproblems involved in Ranking SVM, and the Reduce function to aggregate all outputs sharing the same intermediate key from the Map functions running on distributed machines. Experimental results show that our proposed approach improves the efficiency of Ranking SVM.
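The Map/Reduce split described above can be sketched with an in-process driver: each Map call works on one chunk of training pairs and emits keyed partial results, and Reduce aggregates everything sharing a key. The driver stands in for Hadoop/Google MapReduce, and the per-chunk computation here (counting margin violations among ranking pairs) is a placeholder for the thesis's dual-subproblem solver.

```python
# Sketch of the MapReduce decomposition for pairwise ranking work. Only the
# map/reduce structure mirrors the abstract; the math inside map_fn is a
# stand-in, not Ranking SVM's actual dual subproblem.
from collections import defaultdict

def map_fn(chunk):
    """chunk: list of (better_doc_score, worse_doc_score) training pairs.
    Emit the chunk's count of pairs violating a margin of 1 under key 'viol'."""
    violations = sum(1 for hi, lo in chunk if hi - lo < 1.0)
    yield ("viol", violations)

def reduce_fn(key, values):
    """Aggregate all partial results that share the same intermediate key."""
    return key, sum(values)

def run_mapreduce(chunks):
    grouped = defaultdict(list)
    for chunk in chunks:                      # "distributed" map phase
        for key, value in map_fn(chunk):
            grouped[key].append(value)
    return dict(reduce_fn(k, vals) for k, vals in grouped.items())
```

In a real deployment the chunks live on different machines and the framework performs the shuffle/group-by-key step, but the programmer-visible contract is exactly the two functions above.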
|
66 |
Detecting Attack Sequence in Cloud Based on Hidden Markov Model. Huang, Yu-Zhi. 26 July 2012 (has links)
Cloud computing offers businesses a new working paradigm with the benefits of cost reduction and resource sharing. Tasks from different users may be performed on the same machine, so one primary security concern is whether user data are secure in the cloud. On the other hand, hackers may exploit cloud computing to launch a wider range of attacks, for example by issuing port scans from virtual machines in the cloud. In addition, a hacker may perform a sequence of attacks in order to compromise a target system in the cloud, for example first invading an easy-to-exploit machine in the cloud and then using the compromised machine to attack the target. Such an attack plan may be stealthy or originate inside the computing environment, so intrusion detection systems and firewalls have difficulty identifying it.
The proposed detection system analyzes logs from the cloud to extract the intentions behind the actions they record. Stealthy reconnaissance actions are often neglected by administrators because of their insignificant number of violations. A hidden Markov model is adopted to model the sequences of attacks performed by hackers, so that stealthy events spread over a long time frame become significant in the state-aware model. Preliminary results show that the proposed system can identify such attack plans in a real network.
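The state-aware inference step can be sketched with Viterbi decoding over a two-state model ("benign" vs "attack"): given a sequence of log events, recover the most likely hidden state path. All probabilities below are invented illustration values, not parameters learned in the thesis.

```python
# Minimal Viterbi sketch: individually unremarkable events (a port scan here
# and there) become significant once the model tracks the hidden attack state
# across the whole sequence.
def viterbi(events, states, start_p, trans_p, emit_p):
    """Return the most probable hidden state path for the observed events."""
    # best[s] = (probability of best path ending in s, that path)
    best = {s: (start_p[s] * emit_p[s][events[0]], [s]) for s in states}
    for e in events[1:]:
        best = {s: max(((p * trans_p[prev][s] * emit_p[s][e], path + [s])
                        for prev, (p, path) in best.items()),
                       key=lambda t: t[0])
                for s in states}
    return max(best.values(), key=lambda t: t[0])[1]
```

A lone port scan after a normal login barely moves a per-event detector, but the decoded path flips to the attack state once scans repeat, which is the long-time-frame effect the abstract describes.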
|
67 |
A magnetic intruder detection system based on cloud computing. Sun, Rui-Ting. 21 November 2012 (has links)
Taiwan is surrounded by ocean, so ocean transportation is an essential support of Taiwan's economy. Motivated by this, this research presents a system based on cloud computing and distributed storage that processes the large amounts of data produced by many sensors at sea in order to detect possible magnetized intruders.
We use the Apache Hadoop platform to run distributed K-means clustering over data collected from many sensor nodes equipped with DGPS and magnetic sensors. From these data, it is possible to determine the existence and moving direction of a possible intruder, and the result can be returned to a remote monitoring terminal. The system detects irregularities on any axis of the magnetic field well using K-means clustering, and achieves good reliability and performance through the Hadoop platform.
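The clustering step can be sketched with a stdlib-only K-means over 3-axis magnetometer samples: a reading far from every learned cluster centre is flagged as a possible magnetic anomaly. This is a single-machine illustration of the idea (the thesis distributes it over Hadoop); the seeding strategy and distance threshold are assumptions.

```python
# Stdlib-only K-means over (x, y, z) magnetometer readings, plus an anomaly
# check against the learned cluster centres. Threshold is illustrative.
import math

def kmeans(points, k, iters=20):
    centres = points[:k]                        # simple deterministic seeding
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centres[c]))
            groups[nearest].append(p)
        # Recompute each centre as its group's mean (keep old centre if empty).
        centres = [tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres

def is_anomaly(reading, centres, threshold=5.0):
    """Flag a reading whose distance to every cluster centre exceeds threshold."""
    return min(math.dist(reading, c) for c in centres) > threshold
```

In the distributed version each Hadoop map task would assign its shard of readings to the nearest centre and the reduce step would recompute the centres, but the per-iteration logic is the same as above.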
|
68 |
Classification of encrypted cloud computing service traffic using data mining techniques. Qian, Cheng. 27 February 2012 (has links)
In addition to wireless network providers' need for traffic classification, the need is increasingly common in the cloud computing environment. A data center hosting cloud computing services needs to apply priority policies and Service Level Agreement (SLA) rules at the edge of its network. Growing requirements for user privacy protection and the trend of IPv6 adoption will contribute to significant growth in encrypted cloud computing traffic. This report presents experiments applying data-mining-based Internet traffic classification methods to encrypted cloud computing service traffic. By combining TCP session-level attributes, client and host connection patterns, and cloud computing service Message Exchange Patterns (MEP), the best method identified in this report achieves 89% overall accuracy.
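The key point, that session-level attributes remain observable even when payloads are encrypted, can be sketched with a nearest-centroid classifier over features such as packet count, mean packet size, and duration. Nearest-centroid is a stand-in for the report's data mining methods, and the feature values are invented.

```python
# Sketch: classify encrypted sessions from payload-independent attributes
# (packet count, mean packet size in bytes, duration in seconds).
import math

def centroids(training):
    """training: {label: [feature vectors]} -> {label: mean feature vector}."""
    return {label: tuple(sum(dim) / len(vecs) for dim in zip(*vecs))
            for label, vecs in training.items()}

def classify(session, cents):
    """Assign the session to the label whose centroid is nearest."""
    return min(cents, key=lambda label: math.dist(session, cents[label]))
```

Because none of these features require payload inspection, the same pipeline works unchanged on encrypted traffic, which is what makes this family of methods attractive at the data center edge.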
|
69 |
WAVNet: wide-area virtual networks for dynamic provisioning of IaaS. Xu, Zheming, 徐哲明. January 2010 (has links)
Published or final version / Computer Science / Master of Philosophy
|
70 |
Move my data to the cloud: an online cost-minimizing approach. Zhang, Linquan, 张琳泉. January 2012 (has links)
Cloud computing has rapidly emerged as a new computation paradigm, providing agile and scalable resource access in a utility-like fashion. Processing massive amounts of data has been a primary use of clouds in practice. While many efforts have been devoted to designing computation models (e.g., MapReduce), one important issue has been largely neglected: how do we efficiently move data, generated at different geographical locations over time, into a cloud for effective processing? The usual approach of shipping data on hard disks lacks flexibility and security. As the first dedicated effort, this work tackles this massive, dynamic data migration problem. Targeting a cloud encompassing disparate data centers with different resource charges, we model the cost-minimizing data migration problem and propose efficient offline and online algorithms that optimize both the routes of data into the cloud and the choice of data center for aggregating the data for processing, at any given time. Three online algorithms are proposed to practically guide data migration over time. Without requiring any future information on the data generation pattern, an online lazy migration (OLM) algorithm achieves a competitive ratio as low as 2.55 under typical system settings, and a work function algorithm (WFA) has a linear competitive ratio of 2K-1, where K is the number of data centers. The third, a randomized fixed horizon control (RFHC) algorithm, achieves a competitive ratio of 1 + (1/(l+1))(κ/λ) in theory with a lookahead window of l into the future, where κ and λ are protocol parameters. We conduct extensive experiments to evaluate our online algorithms under realistic cloud settings, using real-world meteorological data generation traces. Comparisons among the online and offline algorithms show close-to-offline-optimum performance and demonstrate the effectiveness of our online algorithms in practice.
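The "lazy" idea behind OLM can be sketched as: keep aggregating at the current data center and migrate only once the accumulated extra cost of staying put exceeds a multiple of the migration cost. This is a simplified illustration of the lazy principle, not the thesis's exact OLM algorithm; the break-even factor and cost model are assumptions.

```python
# Sketch of lazy online migration: migrations are deferred until the regret
# accumulated by not moving clearly outweighs the cost of moving.
def lazy_migration(per_step_costs, migration_cost, factor=1.0):
    """per_step_costs: list of {data_center: cost at that time step}.
    Returns the sequence of data centers chosen online (no future knowledge)."""
    current = min(per_step_costs[0], key=per_step_costs[0].get)
    choices, regret = [current], 0.0
    for costs in per_step_costs[1:]:
        best = min(costs, key=costs.get)
        regret += costs[current] - costs[best]   # extra cost of staying put
        if regret > factor * migration_cost:     # lazily migrate only now
            current, regret = best, 0.0
        choices.append(current)
    return choices
```

Deferring migration this way bounds how much an adversarial cost sequence can exploit the algorithm: it never pays a migration unless staying would have cost comparably much, which is the intuition behind the constant competitive ratio.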
Published or final version / Computer Science / Master of Philosophy
|