231 |
Performance modeling of cloud computing centers. Khazaei, Hamzeh, 21 February 2013 (has links)
Cloud computing is a general term for system architectures that involve delivering hosted services over the Internet, made possible by significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet. A cloud service differs from traditional hosting in three principal aspects.
First, it is provided on demand, typically by the minute or the hour; second, it is elastic, since the user can have as much or as little of a service as they want at any given time; and third, the service is fully managed by the provider -- the user needs little more than a computer and Internet access. Typically, a contract is negotiated and agreed upon between a customer and a service provider; the service provider is required to execute service requests from the customer within negotiated quality of service (QoS) requirements for a given price.
Due to the dynamic nature of cloud environments, the diversity of users' requests, resource virtualization, and the time dependency of load, providing the expected quality of service while avoiding over-provisioning is not a simple task. To this end, cloud providers must have efficient and accurate techniques for performance evaluation of cloud computing centers. The development of such techniques is the focus of this thesis.
This thesis has two parts. In the first part (Chapters 2, 3, and 4), monolithic performance models are developed for cloud computing performance analysis. We begin with Poisson task arrivals, generally distributed service times, and a large number of physical servers. Later on, we extend our model to include finite buffer capacity, batch task arrivals, and virtualized servers with a large number of virtual machines in each physical machine.
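As a rough illustration of the quantities such queueing models yield, the sketch below computes the waiting probability and mean response time of an M/M/m system via the Erlang C formula. This is a deliberate simplification: the thesis's models allow generally distributed service times, batch arrivals, and finite buffers, none of which this toy captures, and all numbers are made up.

```python
import math

def erlang_c(m: int, lam: float, mu: float) -> float:
    """Probability that an arriving task must wait (Erlang C formula)
    in an M/M/m queue with arrival rate lam and service rate mu."""
    rho = lam / (m * mu)              # per-server utilization, must be < 1
    if rho >= 1:
        raise ValueError("unstable system: offered load exceeds capacity")
    a = lam / mu                      # offered load in Erlangs
    waiting = a**m / (math.factorial(m) * (1 - rho))
    below = sum(a**k / math.factorial(k) for k in range(m))
    return waiting / (below + waiting)

def mean_response_time(m: int, lam: float, mu: float) -> float:
    """Mean time in system (queueing delay plus service) for M/M/m."""
    wq = erlang_c(m, lam, mu) / (m * mu - lam)   # mean waiting time in queue
    return wq + 1.0 / mu

# 100 servers, 90 tasks/s arriving, each served at 1 task/s on average
print(mean_response_time(100, 90.0, 1.0))
```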
However, a monolithic model may suffer from intractability and poor scalability due to the large number of parameters. Therefore, in the second part of the thesis (Chapters 5 and 6), we develop and evaluate tractable functional performance sub-models for the different servicing steps in a complex cloud center, and the overall solution is obtained by iterating over the individual sub-model solutions. We also extend the proposed interacting analytical sub-models to capture other important aspects of today's cloud centers, including pool management, power consumption, the resource assignment process, and virtual machine deployment. Finally, a performance model suitable for cloud computing centers with heterogeneous requests and resources, using interacting stochastic models, is proposed and evaluated.
|
232 |
Multiple criteria decision analysis in autonomous computing: a study on independent and coordinated self-management. Yazir, Yagiz Onat, 26 August 2011 (has links)
In this dissertation, we focus on the problem of self-management in distributed systems. In this context, we propose a new methodology for reactive self-management based on multiple criteria decision analysis (MCDA). The general structure of the proposed methodology is extracted from the commonalities of well-established approaches previously applied in other problem domains. The main novelty of this work, however, lies in the use of MCDA during the reaction processes in the context of the two problems to which the proposed methodology is applied.
In order to provide a detailed analysis and assessment of this new approach, we have used the proposed methodology to design distributed autonomous agents that provide self-management in two outstanding problems. These two problems also represent the two distinct ways in which the methodology can be applied to self-management problems: 1) independent self-management, and 2) coordinated self-management. In the simulation case study on independent self-management, the methodology is used to design and implement a distributed resource consolidation manager for clouds, called IMPROMPTU. In IMPROMPTU, each autonomous agent is attached to a unique physical machine in the cloud, where it manages resource consolidation independently of the rest of the autonomous agents. The simulation case study on coordinated self-management, on the other hand, focuses on the problem of adaptive routing in mobile ad hoc networks (MANETs). The resulting system carries out adaptation through autonomous agents that are attached to each MANET node and act in a coordinated manner. In this context, each autonomous node agent expresses its opinion in the form of a decision about which routing algorithm should be used given the perceived conditions. The opinions are aggregated through coordination in order to produce a final decision that is shared by every node in the MANET.
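A minimal sketch of this decision flavour, assuming a simple weighted-sum scoring rule and plurality voting; the criteria, weights, and routing alternatives below are hypothetical, and the dissertation's actual MCDA and aggregation mechanisms may differ.

```python
from collections import Counter

def weighted_score(criteria: dict, weights: dict) -> float:
    """Score one alternative as a weighted sum of its criteria values
    (one of the simplest MCDA aggregation rules)."""
    return sum(weights[c] * v for c, v in criteria.items())

def agent_decision(alternatives: dict, weights: dict) -> str:
    """Each agent independently picks the best alternative under its
    own criteria weights."""
    return max(alternatives, key=lambda a: weighted_score(alternatives[a], weights))

def coordinated_decision(alternatives: dict, agent_weights: list) -> str:
    """Aggregate per-agent opinions by plurality vote, echoing the
    coordinated (MANET routing) case."""
    votes = Counter(agent_decision(alternatives, w) for w in agent_weights)
    return votes.most_common(1)[0][0]

# Hypothetical routing alternatives, criteria values normalised to [0, 1]
alts = {
    "AODV": {"latency": 0.7, "overhead": 0.9, "stability": 0.5},
    "OLSR": {"latency": 0.8, "overhead": 0.4, "stability": 0.8},
}
agents = [
    {"latency": 0.5, "overhead": 0.2, "stability": 0.3},
    {"latency": 0.2, "overhead": 0.5, "stability": 0.3},
    {"latency": 0.4, "overhead": 0.3, "stability": 0.3},
]
print(coordinated_decision(alts, agents))   # plurality choice of the 3 agents
```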
Although MCDA has been previously considered within the context of artificial intelligence -- particularly with respect to algorithms and frameworks that address different requirements of MCDA problems -- to the best of our knowledge, this dissertation is the first work in which MCDA is applied to these two problem domains, represented here as simulation case studies. / Graduate
|
233 |
Energy-oriented Partial Desktop Virtual Machine Migration. Bila, Nilton, 02 August 2013 (has links)
Modern offices are crowded with personal computers. While studies have shown these to be idle most of the time, they remain powered on, consuming up to 60% of their peak power. Hardware-based solutions offered by PC vendors (e.g., low-power states, Wake-on-LAN) have proven unsuccessful because, in spite of user inactivity, these machines often need to remain network-active in support of background applications that maintain network presence.
Recent solutions perform consolidation of idle desktop virtual machines. However, desktop VMs are often large, requiring gigabytes of memory. Consolidating such VMs creates large network transfers lasting on the order of minutes and uses server memory inefficiently. When multiple VMs migrate simultaneously, each VM's migration latency grows, which limits VM consolidation to environments in which only a few daily migrations are expected per VM. This thesis introduces partial VM migration, an approach that transparently migrates only the working set of an idle VM by migrating memory pages on demand. It creates a partial replica of the desktop VM on the consolidation server by copying only VM metadata, and transfers pages to the server as the VM accesses them. This approach places desktop PCs in a low-power state when inactive and resumes them to the running state when pages are needed by the VM running on the consolidation server.
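A minimal sketch of the on-demand page transfer idea, assuming a hypothetical fetch_remote callable that wakes the source PC and pulls one page; it illustrates the mechanism only and is not Jettison's implementation.

```python
class PartialReplica:
    """Sketch of a partial VM replica on the consolidation server: it
    holds only VM metadata plus the pages touched so far, and pulls
    missing pages from the (briefly woken) source PC on first access."""

    PAGE_SIZE = 4096

    def __init__(self, metadata: dict, fetch_remote):
        self.metadata = metadata      # VM descriptor copied at migration time
        self.pages = {}               # page number -> bytes, filled lazily
        self.fetch_remote = fetch_remote

    def read_page(self, page_no: int) -> bytes:
        if page_no not in self.pages:                         # fault on the replica
            self.pages[page_no] = self.fetch_remote(page_no)  # wake PC, pull page
        return self.pages[page_no]

    def migrated_bytes(self) -> int:
        """State actually transferred so far -- the VM's working set,
        typically far smaller than its full memory footprint."""
        return len(self.pages) * self.PAGE_SIZE

# Usage with a dummy source that serves zeroed pages
replica = PartialReplica({"vcpus": 1}, lambda n: bytes(PartialReplica.PAGE_SIZE))
replica.read_page(7)
print(replica.migrated_bytes())   # 4096: one page fetched on demand
```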
Jettison, our software prototype of partial VM migration for off-the-shelf PCs, can deliver 78% to 91% energy savings during idle periods lasting more than an hour, while providing low migration latencies of about 4 seconds and migrating minimal state that is an order of magnitude smaller than the VM's memory footprint. In shorter idle periods of up to thirty minutes, Jettison delivers savings of 7% to 31%.
We present two approaches that increase the energy savings attained with partial VM migration, especially in short idle periods. The first, Context-Aware Selective Resume, expedites PC resume and suspend cycle times by supplying a context identifier at desktop resume and initializing only the devices and code that are relevant to the context. CAESAR, the Context-Aware Selective Resume framework, enables applications to register context vectors that are invoked when the desktop is resumed with a matching context. CAESAR increases energy savings in short idle periods of five minutes to an hour by up to 66%.
The second approach, the low-power page cache, embeds network-accessible low-power hardware in the PC to enable serving of pages to the consolidation server while the PC is in a low-power state. We show that Oasis, our prototype page cache, addresses the shortcomings of energy-oriented on-demand page migration by increasing energy savings, especially during short idle periods. In periods of up to an hour, Oasis increases savings by up to twenty times.
|
235 |
Study and Implementation of Patient Data Collection and Presentation for an eHealth Application. Song, Qunying; Xu, Jingjing, January 2013 (has links)
This degree project is part of an information and communication technology supported self-care system for diabetes, focusing mainly on diabetes data collection and visualization. The report is organized in four main sections: investigation and Internet search, literature review, application design and implementation, and system test and evaluation. Existing applications and research studies have been compared, and a responsive web application has been developed with the aim of providing relevant functionality and services for diabetes self-management.
|
236 |
Replication, Security, and Integrity of Outsourced Data in Cloud Computing Systems. Barsoum, Ayad Fekry, 14 February 2013 (has links)
In the current digital era, the amount of sensitive data produced by many organizations is outpacing their storage ability. The management of such huge amounts of data is quite expensive due to the requirements of high storage capacity and qualified personnel. Storage-as-a-Service (SaaS), offered by cloud service providers (CSPs), is a paid facility that enables organizations to outsource their data to be stored on remote servers. Thus, SaaS reduces the maintenance cost and mitigates the burden of large local data storage at the organization's end.
For an increased level of scalability, availability, and durability, some customers may want their data to be replicated on multiple servers across multiple data centers. The more copies the CSP is asked to store, the more fees the customers are charged. Therefore, customers need a strong guarantee that the CSP is storing all the data copies agreed upon in the service contract and that these copies remain intact.
In this thesis we address the problem of creating multiple copies of a data file and verifying those copies stored on untrusted cloud servers. We propose a pairing-based provable multi-copy data possession (PB-PMDP) scheme, which provides evidence that all outsourced copies are actually stored and remain intact. Moreover, it allows authorized users (i.e., those who have the right to access the owner's file) to seamlessly access the file copies stored by the CSP, and it supports public verifiability.
We then direct our study to the dynamic behavior of outsourced data, where the data owner is capable of not only archiving and accessing the data copies stored by the CSP, but also updating and scaling these copies on the remote servers (using the block operations: modification, insertion, deletion, and append). We propose a new map-based provable multi-copy dynamic data possession (MB-PMDDP) scheme that verifies the intactness and consistency of the outsourced dynamic multiple data copies. To the best of our knowledge, the proposed scheme is the first to verify the integrity of multiple copies of dynamic data over untrusted cloud servers.
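A much-simplified sketch of the multi-copy challenge-response idea, using plain hashes in place of the schemes' pairing-based or map-based tags. Unlike PB-PMDP, the verifier here must keep the file locally and there is no public verifiability; the sketch only shows why replicas are made distinguishable.

```python
import hashlib, hmac, os

def make_copies(data: bytes, n: int):
    """Create n distinguishable replicas by prefixing a copy index, so a
    provider cannot answer challenges for all copies while storing one."""
    return [i.to_bytes(2, "big") + data for i in range(n)]

def prove(nonce: bytes, stored_copy: bytes) -> bytes:
    """Server-side proof: hash of a fresh nonce and the stored copy.
    (Real PMDP schemes use homomorphic tags so the owner need not keep
    the data locally; this toy assumes the owner does.)"""
    return hashlib.sha256(nonce + stored_copy).digest()

def verify(nonce: bytes, local_copy: bytes, proof: bytes) -> bool:
    """Owner recomputes the expected proof and compares in constant time."""
    return hmac.compare_digest(hashlib.sha256(nonce + local_copy).digest(), proof)

# One challenge round over all copies
copies = make_copies(b"outsourced file contents", 3)
nonce = os.urandom(16)                          # fresh challenge per round
proofs = [prove(nonce, c) for c in copies]      # computed by the CSP
assert all(verify(nonce, c, p) for c, p in zip(copies, proofs))
```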
As a complementary line of research, we consider protecting the CSP from a dishonest owner who attempts to obtain illegal compensation by falsely claiming data corruption on cloud servers. We propose a new cloud-based storage scheme that allows the data owner to benefit from the facilities offered by the CSP and enables mutual trust between them. In addition, the proposed scheme ensures that authorized users receive the latest version of the outsourced data, and it enables the owner to grant or revoke access to the data stored by cloud servers.
|
237 |
Δημιουργία υπολογιστικών κόμβων σε υποδομές cloud computing / Creation of computational nodes in cloud computing infrastructures. Ψιλόπουλος, Κωνσταντίνος, 05 March 2012 (has links)
The scope of this thesis is to study the technology of Cloud Computing and the Virtualization technology that supports it. It presents the history, a technical overview, and the origins of these technologies, and discusses practical applications to which they can be put and the purposes they can serve. A more detailed presentation is then given of two pieces of software (the Xen Hypervisor, for the virtualization layer, and Eucalyptus, as the platform for creating IaaS Clouds). Finally, quick how-to guides describe the procedure for installing a Cloud, along with the configuration used and the reasons for the specific setup.
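As a flavour of what such an installation guide configures, here is a minimal Xen (xl) guest configuration for a single virtual node; the name, paths, and sizes are illustrative and not taken from the thesis.

```
# Minimal Xen (xl) guest configuration for one cloud node.
# All names, paths, and sizes below are illustrative.
name   = "cloud-node-1"
memory = 2048                                            # MiB of guest RAM
vcpus  = 2
disk   = ['file:/var/lib/xen/images/cloud-node-1.img,xvda,w']
vif    = ['bridge=xenbr0']                               # attach to the host bridge
```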
|
238 |
Approches collaboratives pour la classification des données complexes / Collaborative approaches for complex data classification. Rabah, Mazouzi, 12 December 2016 (has links)
This thesis focuses on collaborative classification in the context of complex data, in particular Big Data. We draw on several computational paradigms to propose new approaches that exploit high-performance and large-scale computing technologies. In this setting, we build massive classifier ensembles, in the sense that the number of elementary classifiers making up the multiple classifier system can be very high. In this case, conventional methods of interaction between classifiers are no longer valid, and we had to propose new forms of interaction that are not constrained to take all classifiers' predictions into account when building an overall prediction. Accordingly, we faced two problems. The first is the ability of our approaches to scale. The second is the diversity that must be created and maintained within the system to ensure its performance. We therefore studied the distribution of classifiers in a cloud-computing environment; such a multiple classifier system can be massive, and its properties are those of a complex system. In terms of data diversity, we proposed an approach that enriches the training data by generating synthetic data from analytical models that describe part of the phenomenon under study; this mixture of data reinforces the training of the classifiers. The experiments conducted have shown great potential for substantial improvement of classification results.
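A toy sketch of the two ideas above: building a global prediction from only a subset of a massive ensemble, and enriching training data with synthetic points drawn from an analytical model. The voting rule and the threshold "classifiers" are illustrative, not the thesis's actual methods.

```python
import random
from collections import Counter

def subset_vote(classifiers, x, k):
    """Build a global prediction by polling only k randomly chosen
    classifiers instead of the whole (possibly massive) ensemble."""
    polled = random.sample(classifiers, k)
    votes = Counter(clf(x) for clf in polled)
    return votes.most_common(1)[0][0]

def enrich(training_data, synth_model, n_synthetic):
    """Augment real training data with synthetic (x, label) pairs drawn
    from an analytical model of part of the phenomenon under study."""
    return training_data + [synth_model() for _ in range(n_synthetic)]

# Tiny demo with threshold "classifiers" on a scalar input
classifiers = [lambda x, t=t: int(x > t) for t in (0.2, 0.4, 0.6, 0.8)]
print(subset_vote(classifiers, 0.5, k=3))   # majority label among 3 voters
```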
|
239 |
On Efficient and Scalable Attribute Based Security Systems. January 2011 (has links)
This dissertation is focused on building scalable Attribute Based Security Systems (ABSS), including efficient and privacy-preserving attribute based encryption schemes and applications to group communications and cloud computing. First, a Constant Ciphertext Policy Attribute Based Encryption (CCP-ABE) scheme is proposed. Existing Attribute Based Encryption (ABE) schemes usually incur large, linearly increasing ciphertexts. The proposed CCP-ABE dramatically reduces the ciphertext to a small, constant size; it is the first ABE scheme to achieve constant ciphertext size. The proposed CCP-ABE scheme is also fully collusion-resistant, so that users cannot combine their attributes to elevate their decryption capacity. Next, efficient ABE schemes are applied to construct optimal group communication and broadcast encryption schemes. An attribute based Optimal Group Key (OGK) management scheme that attains communication-storage optimality without collusion vulnerability is presented. Then, a novel broadcast encryption model, Attribute Based Broadcast Encryption (ABBE), is introduced, which exploits the many-to-many nature of attributes to dramatically reduce the storage complexity from linear to logarithmic and to enable expressive attribute based access policies. Privacy issues are also considered and addressed in ABSS. Firstly, a hidden policy based ABE scheme is proposed to protect receivers' privacy by hiding the access policy. Secondly, a new concept, Gradual Identity Exposure (GIE), is introduced to address the restrictions of hidden policy based ABE schemes. GIE's approach is to reveal the receivers' information gradually by allowing ciphertext recipients to decrypt the message using their possessed attributes one by one; if a receiver does not possess one attribute in this procedure, the remaining attributes stay hidden. Compared to hidden-policy based solutions, GIE provides significant performance improvement in terms of reducing both computation and communication overhead. Last but not least, ABSS are incorporated into mobile cloud computing scenarios. In the proposed secure mobile cloud data management framework, lightweight mobile devices can securely outsource expensive ABE operations and data storage to untrusted cloud service providers. The reported scheme includes two components: (1) a Cloud-Assisted Attribute-Based Encryption/Decryption (CA-ABE) scheme and (2) an Attribute-Based Data Storage (ABDS) scheme that achieves information-theoretic optimality. / Dissertation/Thesis / Ph.D. Computer Science 2011
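A small sketch of the access-structure semantics that ABE schemes enforce cryptographically; it models only policy satisfaction (who would be able to decrypt), not the encryption itself, and the attributes and policy shown are hypothetical.

```python
def satisfies(user_attrs: set, policy) -> bool:
    """Evaluate a simple AND/OR attribute policy tree.
    policy is either an attribute string or a tuple ('AND'|'OR', [subpolicies]).
    Real CP-ABE enforces this cryptographically; this sketch only models
    the access-structure semantics."""
    if isinstance(policy, str):
        return policy in user_attrs
    op, children = policy
    results = (satisfies(user_attrs, c) for c in children)
    return all(results) if op == "AND" else any(results)

# Hypothetical policy: physician AND (cardiology OR emergency)
policy = ("AND", ["physician", ("OR", ["cardiology", "emergency"])])
print(satisfies({"physician", "cardiology"}, policy))   # True
print(satisfies({"nurse", "cardiology"}, policy))       # False
```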
|
240 |
A prescriptive analytics approach for energy efficiency in datacentres. Panneerselvam, John, January 2018 (has links)
Given the evolution of Cloud Computing in recent years, the number of users and clients adopting Cloud Computing for both personal and business needs has increased at an unprecedented scale, which has naturally led to increased deployments of Cloud datacentres across the globe. As a consequence, Cloud datacentres have become massive energy consumers and environmental polluters. Whilst the energy implications of Cloud datacentres are being addressed from various research perspectives, predicting the future trends and behaviours of workloads at the datacentres, and thereby reducing the active server resources, is one particular dimension of green computing gaining the interest of researchers and Cloud providers. However, this involves various practical and analytical challenges imposed by the increased dynamism of Cloud systems. The behavioural characteristics of Cloud workloads and users are still not perfectly understood, which limits the reliability of the prediction accuracy of existing research in this context.
To this end, this thesis presents a comprehensive descriptive analytics of Cloud workload and user behaviours, uncovering the causes and energy-related implications of Cloud Computing. Furthermore, the characteristics of Cloud workloads and users are presented empirically, including latency levels, job heterogeneity, user dynamicity, straggling task behaviours, the energy implications of stragglers, job execution and termination patterns, and the inherent periodicity in Cloud workload and user behaviours. Driven by this descriptive analytics, a novel user behaviour forecasting framework has been developed, aimed at a tri-fold forecast of user behaviours: the session duration of users, the anticipated number of submissions, and the arrival trend of the incoming workloads. Furthermore, a novel resource optimisation framework has been proposed to provision the most appropriate level of resources for executing jobs with reduced server energy expenditure and fewer job terminations. This optimisation framework encompasses a resource estimation module to predict the anticipated resource consumption level for arriving jobs and a classification module to classify tasks based on their resource intensiveness. Both proposed frameworks have been verified theoretically and tested experimentally on Google Cloud trace logs. Experimental analysis demonstrates the effectiveness of the proposed frameworks in terms of the reliability of the forecast results and the reduction of server energy expenditure spent towards executing jobs at the datacentres.
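A minimal sketch of the roles of the two modules, assuming simple exponential smoothing for the arrival forecast and hypothetical thresholds for the resource-intensiveness classes; the thesis's actual estimators and classifier are more sophisticated.

```python
def exp_smooth_forecast(series, alpha=0.3):
    """One-step-ahead forecast of task arrivals via simple exponential
    smoothing -- a simple stand-in for the forecasting framework."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def classify_task(cpu_req, mem_req, cpu_thresh=0.5, mem_thresh=0.5):
    """Label a task by resource intensiveness, echoing the optimisation
    framework's classification module (thresholds are hypothetical)."""
    if cpu_req >= cpu_thresh and mem_req >= mem_thresh:
        return "both-intensive"
    if cpu_req >= cpu_thresh:
        return "cpu-intensive"
    if mem_req >= mem_thresh:
        return "memory-intensive"
    return "light"

arrivals = [120, 135, 128, 160, 170, 155]   # tasks per interval (made-up data)
print(exp_smooth_forecast(arrivals))        # forecast for the next interval
print(classify_task(0.8, 0.3))              # -> "cpu-intensive"
```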
|