71 |
Intrusion Detection System as a Service : Providing intrusion detection system on a subscription basis for cloud deployment. Gade, Vaibhav, January 2015 (has links)
No description available.
|
72 |
An Investigation of CPU utilization relationship between host and guests in a Cloud infrastructure. Ahmadi Mehri, Vida, January 2015 (has links)
Cloud computing has emerged as a revolution in the IT world in recent years. This technology facilitates resource sharing by reducing hardware costs for business users and promises energy efficiency and better resource utilization to service providers. CPU utilization is a key metric considered in resource management across clouds. The main goal of this thesis study is to investigate CPU utilization behaviour with regard to host and guest, which would help us understand the relationship between them; it is expected that an understanding of these relationships would be helpful in resource management. Working towards this goal, the methodology adopted is experimental research, involving experimental modeling, measurements, and observations of the results. The experimental setup covers several complex scenarios, including a cloud and a standalone virtualization system. The results are further analyzed for a visual correlation. Results show that CPU utilization in the cloud and virtualization scenarios coincides. More experimental scenarios were designed based on the first observations; their results show irregular behaviour between the physical machine (PM) and the virtual machine (VM) under variable workload. CPU utilization retrieved from the cloud and from a standalone system is similar. At 100% workload, CPU utilization is constant and no correlation coefficient could be obtained. Lower workloads showed varying degrees of correlation in most of the cases in our correlation analysis. It is expected that a larger number of iterations could change the outcome. Further analysis of these relationships for proper resource management techniques will be considered.
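The kind of correlation analysis described above can be illustrated with a minimal sketch (not taken from the thesis; the sample values and the function are illustrative assumptions) that computes a Pearson correlation coefficient between host and guest CPU-utilization samples, returning no coefficient when a series is constant, as in the 100% workload case:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    std_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    if std_x == 0 or std_y == 0:
        return None  # constant series (e.g. at 100% load) has no defined coefficient
    return cov / (std_x * std_y)

# Hypothetical utilization samples (%) taken at the same instants on host (PM) and guest (VM).
host_cpu  = [35.2, 40.1, 38.7, 55.0, 61.3, 58.9]
guest_cpu = [30.0, 37.5, 36.0, 52.4, 59.8, 57.1]

print(round(pearson(host_cpu, guest_cpu), 3))
```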
|
73 |
POLICY-BASED MIDDLEWARE FOR MOBILE CLOUD COMPUTING. 2013 August 1900 (has links)
Mobile devices are the dominant interface for interacting with online services as well as an efficient platform for cloud data consumption. Cloud computing allows the delivery of applications/functionalities as services over the Internet and provides the software/hardware infrastructure to host these services in a scalable manner. In mobile cloud computing, the apps running on the mobile device use cloud-hosted services to overcome the resource constraints of the host device. This approach allows mobile devices to outsource resource-consuming tasks. Furthermore, as the number of devices owned by a single user increases, there is a growing demand for cross-platform application deployment to ensure a consistent user experience. However, mobile devices communicate through unstable wireless networks to access the data and services hosted in the cloud. The major challenges that mobile clients face when accessing services hosted in the cloud are network latency and synchronization of data.
To address the above-mentioned challenges, this research proposes an architecture built around a policy-based middleware that lets users access cloud-hosted digital assets and services via an application across multiple mobile devices in a seamless manner. The major contribution of this thesis is identifying the different pieces of information used to configure the behavior of the middleware towards reliable and consistent communication between mobile clients and the cloud-hosted services. Finally, the advantages of using the policy-based middleware architecture are illustrated by experiments conducted on a proof-of-concept prototype.
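As a loose illustration of the kind of configuration information such a policy-based middleware might consume, the sketch below shows a hypothetical client-side policy and a rule that selects an access mode from the current link quality; every field name, threshold, and function here is an assumption for illustration, not part of the thesis's design.

```python
# Hypothetical policy describing how a mobile client should reach cloud services.
SYNC_POLICY = {
    "network": {
        "min_bandwidth_kbps": 256,     # below this, defer large transfers
        "max_latency_ms": 400,         # above this, fall back to cached reads
    },
    "sync": {
        "strategy": "delta",           # send only changed records
        "retry_backoff_s": [1, 2, 5],  # reconnect schedule on unstable links
    },
    "devices": ["phone", "tablet"],    # devices sharing the same asset view
}

def choose_access_mode(bandwidth_kbps: int, latency_ms: int, policy=SYNC_POLICY) -> str:
    """Pick an access mode for a cloud-hosted asset based on current link quality."""
    net = policy["network"]
    if latency_ms > net["max_latency_ms"]:
        return "cached"         # serve the local replica, synchronize later
    if bandwidth_kbps < net["min_bandwidth_kbps"]:
        return "metadata-only"  # fetch listings now, bodies on demand
    return "online"             # talk to the cloud service directly

print(choose_access_mode(bandwidth_kbps=128, latency_ms=250))  # -> "metadata-only"
```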
|
74 |
Efficient Resource Management for Cloud Computing Environments. Zhang, Qi, 23 September 2013 (has links)
Cloud computing has recently gained popularity as a cost-effective model for hosting and delivering services over the Internet. In a cloud computing environment, a cloud provider packages its physical resources in data centers into virtual resources and offers them to service providers using a pay-as-you-go pricing model. Meanwhile, a service provider uses the rented virtual resources to host its services. This large-scale multi-tenant architecture of cloud computing systems raises key challenges regarding how data center resources should be controlled and managed by both service and cloud providers.
This thesis addresses several key challenges pertaining to resource management in cloud environments. From the perspective of service providers, we address the problem of selecting appropriate data centers for service hosting, taking into consideration resource price, service quality, and dynamic reconfiguration costs. From the perspective of cloud providers, since it has been reported that workloads in real data centers can typically be divided into server-based applications and MapReduce applications with different performance and scheduling criteria, we provide separate resource management solutions for each type of workload. For server-based applications, we provide a dynamic capacity provisioning scheme that dynamically adjusts the number of active servers to achieve the best trade-off between energy savings and scheduling delay, while considering the heterogeneous resource characteristics of both workloads and physical machines. For MapReduce applications, we first analyzed the run-time resource consumption of tasks across a large variety of MapReduce jobs and discovered that it can vary significantly over time, depending on the phase the task is currently executing. We then present a novel scheduling algorithm that controls task execution at the level of phases, with the aim of improving both job running time and resource utilization. Through detailed simulations and experiments using real cloud clusters, we have found that our proposed solutions achieve substantial gains compared to current state-of-the-art resource management solutions, and therefore have strong implications for the design of real cloud resource management systems in practice.
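The dynamic capacity provisioning idea can be made concrete with a deliberately simplified sketch (assumed parameters, not the scheme evaluated in the thesis) that picks how many servers to keep active so that utilization, and hence scheduling delay, stays bounded while idle servers can be powered down:

```python
import math

def target_active_servers(arrival_rate, service_rate_per_server,
                          max_servers, target_utilization=0.7):
    """Decide how many servers to keep powered on.

    arrival_rate:            measured incoming requests per second
    service_rate_per_server: requests per second one active server can absorb
    target_utilization:      keep servers below this load to bound scheduling delay
    """
    needed = arrival_rate / (service_rate_per_server * target_utilization)
    return min(max_servers, max(1, math.ceil(needed)))

# Example: 900 req/s arriving, each server handles 100 req/s, at most 20 servers.
print(target_active_servers(900, 100, 20))  # -> 13 active; the rest can be suspended
```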
|
75 |
High Availability for Database Systems in Geographically Distributed Cloud Computing Environments. Meng, Huangdong, January 2014 (has links)
In recent years, cloud storage systems have become very popular due to their good scalability and high availability. However, these storage systems provide limited transactional capabilities, which makes developing applications that use them substantially more difficult than developing applications that use a traditional SQL-based relational database management system (DBMS). There have been solutions that provide transactional SQL-based DBMS services on the cloud, including solutions that use cloud shared storage systems to store the data. However, none of these solutions take advantage of the shared cloud storage architecture to provide DBMS high availability. These solutions typically deal with the failure of a DBMS server by restarting the server and going through crash recovery based on the transaction log, which can lead to long DBMS service downtimes that are not acceptable to users. It is possible to run traditional DBMS high availability solutions in cloud environments. These solutions are typically based on shipping the transaction log from a primary server to a backup server and replaying the log at the backup server to keep it up to date with the primary. However, these solutions do not work well if the primary and backup are in different, geographically distributed data centers, due to the high latency of log shipping. Furthermore, they do not take advantage of the capabilities of the underlying shared storage system.
We present a new transparent high availability system for transactional SQL-based DBMSes on a shared storage architecture, which we call CAC-DB (Continuous Access Cloud DataBase). Our system is especially designed for eventually consistent cloud storage systems that run efficiently across multiple geographically distributed data centers. The database and transaction logs are stored in such a storage system, and therefore remain available after failures up to the loss of an entire data center (e.g., in a natural disaster). CAC-DB takes advantage of this shared storage to ensure that the DBMS service remains available and transactionally consistent in the face of failures up to the loss of one or more data centers. By taking advantage of shared storage, CAC-DB can run in a geographically distributed environment with minimal overhead compared to traditional log shipping solutions.
In CAC-DB, an active (primary) and a standby (backup) DBMS run on different servers in different data centers. The standby catches up with the active's memory state by replaying the shared log. When the active crashes, the standby can finish the failover process and reach peak throughput very quickly; the DBMS service experiences only several seconds of downtime. While the basic idea of replaying the log is simple and not new, the shared storage environment poses many new challenges, including the need for synchronization protocols, new buffer pool management mechanisms, approaches for guaranteeing strong consistency without sacrificing performance, and a new shared-storage-based failure detection mechanism. This thesis solves these challenges and presents a system that achieves the following goal: if a data center fails, not only does the persistent image of the database on the storage tier survive, but the DBMS service can also resume almost uninterrupted and reach peak throughput in a very short time. At the same time, the throughput of the DBMS service during normal processing is not negatively affected. Our experiments with CAC-DB running on EC2 confirm that it can achieve the above goals.
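A highly simplified sketch of the standby's replay-and-failover behavior described above follows; the log representation, heartbeat check, and timeout are hypothetical stand-ins rather than CAC-DB's actual mechanisms:

```python
import time

HEARTBEAT_TIMEOUT_S = 5.0   # assumed threshold for declaring the active dead

def replay(shared_log, state, next_lsn):
    """Apply every log record with lsn >= next_lsn to the in-memory state.
    shared_log is a list of (lsn, key, value) tuples kept on shared storage."""
    for lsn, key, value in shared_log:
        if lsn >= next_lsn:
            state[key] = value      # redo the change in the standby's memory
            next_lsn = lsn + 1
    return next_lsn

def standby_loop(shared_log, state, last_heartbeat, now=time.time):
    """Keep the standby warm by replaying the shared log; fail over once the
    active's heartbeat (also written to shared storage) goes stale."""
    next_lsn = 0
    while now() - last_heartbeat() <= HEARTBEAT_TIMEOUT_S:
        next_lsn = replay(shared_log, state, next_lsn)
        time.sleep(0.1)
    next_lsn = replay(shared_log, state, next_lsn)   # finish catching up
    print(f"failover complete at LSN {next_lsn}; standby now serves as active")

# Tiny illustration: two log records and an already-stale heartbeat.
log = [(0, "x", 1), (1, "y", 2)]
standby_loop(log, state={}, last_heartbeat=lambda: time.time() - 10)
```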
|
76 |
Algorithms and Systems for Virtual Machine Scheduling in Cloud Infrastructures. Li, Wubin, January 2014 (has links)
With the emergence of cloud computing, computing resources (i.e., networks, servers, storage, applications, etc.) are provisioned as metered on-demand services over networks, and can be rapidly allocated and released with minimal management effort. In the cloud computing paradigm, the virtual machine (VM) is one of the most commonly used resource units in which business services are encapsulated. VM scheduling optimization, i.e., finding optimal placement schemes for VMs and reconfigurations according to changing conditions, becomes a challenging issue for cloud infrastructure providers and their customers. The thesis investigates the VM scheduling problem in two scenarios: (i) single-cloud environments, where VMs are scheduled within a cloud aiming at improving criteria such as load balancing, carbon footprint, utilization, and revenue, and (ii) multi-cloud scenarios, where a cloud user (which could be the owner of the VMs or a cloud infrastructure provider) schedules VMs across multiple cloud providers, targeting optimization of investment cost, service availability, etc. For single-cloud scenarios, taking load balancing as the objective, an approach to optimal VM placement for predictable and time-constrained peak loads is presented. In addition, we also present a set of heuristic methods based on fundamental management actions (namely, suspending and resuming physical machines, VM migration, and suspending and resuming VMs), continuously optimizing the profit for the cloud infrastructure provider regardless of the predictability of the workload. For multi-cloud scenarios, we identify key requirements for service deployment in a range of common cloud scenarios (including private clouds, bursted clouds, federated clouds, multi-clouds, and cloud brokering), and present a general architecture to meet these requirements. Based on this architecture, a set of placement algorithms tuned for cost optimization under dynamic pricing schemes is evaluated. By explicitly specifying service structure, component relationships, and placement constraints, a mechanism is introduced that gives service owners the ability to influence placement. In addition, we also study how dynamic cloud scheduling using VM migration can be modeled using a linear integer programming approach. The primary contribution of this thesis is the development and evaluation of algorithms (ranging from combinatorial optimization formulations to simple heuristic algorithms) for VM scheduling in cloud infrastructures. In addition to scientific publications, this work also contributes software tools (in the OPTIMIS project funded by the European Commission's Seventh Framework Programme) that demonstrate the feasibility and characteristics of the approaches presented. / In cloud computing, computing resources (i.e., networks, servers, storage, applications, etc.) are provided as services accessible via the Internet. The resources, such as virtual machines (VMs), can be quickly and easily allocated and released as needed. The potentially rapid changes in how many and how large VMs are needed lead to challenging scheduling and configuration problems. The scheduling problems arise both for infrastructure providers, who need to choose which servers different VMs should be placed on within a cloud, and for their customers, who need to choose which clouds VMs should be placed on. The thesis focuses on the VM scheduling problem in these two scenarios, i.e., (i) individual clouds, where VMs are scheduled to optimize load balance, energy consumption, resource utilization, and economy, and (ii) situations where a cloud user must choose one or more clouds on which to place VMs, optimizing, for example, cost, performance, and availability for the application using the resources. For the former scenario, the thesis presents a scheduling method that, based on predictable load variations, optimizes the load balance across the physical computing resources. In addition, a set of heuristic methods, based on fundamental resource management actions, is presented for continuously optimizing the economic profit of a cloud provider without requiring the load variations to be predictable. For the multi-cloud case, we identify important requirements for how resource management services should be constructed to work well in a range of conceptually different multi-cloud scenarios. Based on these requirements, we also define a general architecture that can be adapted to these scenarios. Building on our architecture, we develop and evaluate a set of VM scheduling algorithms intended to minimize the cost of using cloud infrastructure with dynamic pricing. Through new functionality, the user is given the ability to explicitly specify relationships between the allocated VMs and other constraints on how they should be placed. We also demonstrate how linear integer programming can be used to optimize this scheduling problem. The main contribution of the thesis is the development and evaluation of new methods for VM scheduling in clouds, with solutions that include both combinatorial optimization and heuristic methods. Beyond the scientific publications, the work also contributes software for VM scheduling, developed within the OPTIMIS project funded by the European Commission's Seventh Framework Programme.
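As a rough illustration of cost-driven placement across providers under dynamic pricing, the following greedy sketch assigns VMs to the currently cheapest provider with spare capacity; the prices, capacities, and greedy rule are assumptions for illustration and not one of the thesis's algorithms:

```python
# Hypothetical hourly prices per provider and VM size (e.g. EUR/hour) and capacities.
prices = {
    "provider_a": {"small": 0.05, "large": 0.19},
    "provider_b": {"small": 0.06, "large": 0.16},
}
capacity = {"provider_a": 3, "provider_b": 2}   # VMs each provider will accept

def place_vms(vm_sizes, prices, capacity):
    """Greedily place each VM on the currently cheapest provider with spare capacity.
    Returns a mapping vm_index -> provider name."""
    remaining = dict(capacity)
    placement = {}
    # Handle the most expensive VM types first so they get the best prices.
    order = sorted(range(len(vm_sizes)),
                   key=lambda i: -min(p[vm_sizes[i]] for p in prices.values()))
    for i in order:
        size = vm_sizes[i]
        candidates = [prov for prov in prices if remaining[prov] > 0]
        if not candidates:
            raise RuntimeError("no provider has remaining capacity")
        best = min(candidates, key=lambda prov: prices[prov][size])
        placement[i] = best
        remaining[best] -= 1
    return placement

print(place_vms(["large", "small", "small", "large"], prices, capacity))
# -> {0: 'provider_b', 3: 'provider_b', 1: 'provider_a', 2: 'provider_a'}
```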
|
77 |
Mitigation of Virtunoid Attacks on Cloud Computing Systems. Forsell, Daniel McKinnon, January 2015 (has links)
Virtunoid is a proof-of-concept exploit abusing a vulnerability in the open source hardware virtualisation control program QEMU-KVM. The vulnerability stems from improper hotplugging of emulated embedded circuitry in the Intel PIIX4 southbridge, resulting in memory corruption and dangling pointers. The exploit can be used to compromise the availability of the virtual machine, or to escalate privileges, compromising the confidentiality of the resources in the host system. The research presented in this dissertation shows that the discretionary access control system, provided by default in most Linux operating systems, is insufficient to protect the QEMU-KVM hypervisor against the Virtunoid exploit. Further, it shows that the open source solutions AppArmor and grsecurity enhance the Linux operating system with additional protection against the Virtunoid exploit through mandatory access control, either via profiling or via role-based access control. The research also shows that the host intrusion prevention system PaX does not provide any additional protection against the Virtunoid exploit. The comprehensive and detailed hands-on approach of this dissertation can be reproduced and quantified for the comparisons needed in future research.
|
78 |
Dynamic Scale-out Mechanisms for Partitioned Shared-Nothing Databases. Karyakin, Alexey, January 2011 (has links)
For a database system used in pay-per-use cloud environments, elastic scaling becomes an essential feature, allowing costs to be minimized while accommodating fluctuations in load. One approach to scalability involves horizontal database partitioning and dynamic migration of partitions between servers. We define a scale-out operation as a combination of provisioning a new server followed by migration of one or more partitions to the newly allocated server.
In this thesis we study the efficiency of different implementations of the scale-out operation in the context of online transaction processing (OLTP) workloads. We designed and implemented three migration mechanisms featuring different strategies for data transfer. The first one is based on a modification of the Xen hypervisor, Snowflock, and uses on-demand block transfers for both server provisioning and partition migration. The second one is implemented in a database management system (DBMS) and uses bulk transfers for partition migration, optimized for higher bandwidth utilization. The third one is a conventional application, using SQL commands to copy partitions between servers.
We perform an experimental comparison of these scale-out mechanisms for disk-bound and CPU-bound configurations. When comparing the mechanisms, we analyze their impact on whole-system performance and on the experience of individual clients.
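The scale-out operation defined above, provisioning a new server and then migrating a partition to it, can be sketched as follows; the SQLite stand-in and the SQL-based copy correspond loosely to the third, application-level mechanism and are illustrative assumptions rather than the thesis's implementation:

```python
import sqlite3  # in-memory databases stand in for real DBMS servers

def provision_server():
    """Stand-in for allocating and starting a new database server."""
    return sqlite3.connect(":memory:")

def migrate_partition(source, target, table, partition_id):
    """Copy one horizontal partition with plain SQL, then drop it at the source.
    This mirrors the application-level, SQL-based strategy; bulk-transfer or
    hypervisor-level mechanisms would replace the body of this function."""
    target.execute(f"CREATE TABLE IF NOT EXISTS {table} (id INTEGER, pid INTEGER, payload TEXT)")
    rows = source.execute(f"SELECT id, pid, payload FROM {table} WHERE pid = ?",
                          (partition_id,)).fetchall()
    target.executemany(f"INSERT INTO {table} VALUES (?, ?, ?)", rows)
    source.execute(f"DELETE FROM {table} WHERE pid = ?", (partition_id,))
    source.commit()
    target.commit()

def scale_out(source, table, partition_id):
    """Scale-out operation: provision a new server, then move a partition onto it."""
    new_server = provision_server()
    migrate_partition(source, new_server, table, partition_id)
    return new_server

# Tiny demonstration with a two-partition table.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, pid INTEGER, payload TEXT)")
src.executemany("INSERT INTO orders VALUES (?, ?, ?)", [(1, 0, "a"), (2, 1, "b"), (3, 1, "c")])
dst = scale_out(src, "orders", partition_id=1)
print(dst.execute("SELECT COUNT(*) FROM orders").fetchone())  # -> (2,)
```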
|
79 |
Physical Resource Management and Access Mediation Within the Cloud Computing Paradigm. Betts, Hutson, 2012 August 1900 (has links)
Cloud computing has seen a surge over the past decade as corporations and institutions have sought to leverage the economies of scale achievable through this new computing paradigm. However, the rapid adoption of cloud computing technologies that implement the existing cloud computing paradigm threatens to undermine the long-term utility of the cloud model of computing. In this thesis we address how to accommodate the variety of access requirements and the diverse hardware platforms of cloud computing users by developing extensions to the existing cloud computing paradigm that afford consumer-driven access requirements and the integration of new physical hardware platforms.
|
80 |
Smart TV front-end application for cloud computing. Miguel Montero, Jaime, January 2012 (has links)
This master's project focuses on the development of a front-end application for cloud computing. Traditionally, televisions have been excluded from the always-connected world. With the appearance of smart televisions it is now possible to connect them to the Internet. However, there still exists a gap between televisions and services in the cloud. To solve the problem, we have developed a JavaScript application. This application allows the user to log into their CloudMe account from a Samsung SmartTV with multimedia support. The application is centered on improving the responsiveness of a cloud computing application. It also enhances the user experience by providing a user-friendly UI for a television. During the course of this thesis, the application and its functionalities have been studied, designed, developed, optimized and finally tested. We have also performed a set of measurements to validate the responsiveness of the proposed design. The development of this TV application shows that the TV is a potential target device for cloud computing services due to its better resources and capabilities in different areas such as multimedia reproduction.
|