171 |
ENERGY-AWARE OPTIMIZATION FOR EMBEDDED SYSTEMS WITH CHIP MULTIPROCESSOR AND PHASE-CHANGE MEMORY -- Li, Jiayin, 01 January 2012
Over the last two decades, the functions of embedded systems have evolved from simple real-time control and monitoring to more complicated services. Embedded systems equipped with powerful chips can provide the performance that computationally demanding information processing applications need. However, because of power constraints, the easy way of gaining performance by scaling up chip frequencies is no longer feasible. Low-power architecture design has therefore become the main trend in embedded systems.
In this dissertation, we present our approaches to addressing energy-related issues in embedded system design: thermal issues in the 3D chip multiprocessor (CMP), the endurance issue in phase-change memory (PCM), the battery issue in embedded systems, the impact of inaccurate information on embedded systems, and the use of cloud computing to move workloads to remote computing facilities.
We propose a real-time constrained task scheduling method to reduce the peak temperature on a 3D CMP, including an online 3D CMP temperature prediction model and a set of algorithms that schedule tasks to different cores in order to minimize the peak on-chip temperature. To address the challenges of applying PCM in embedded systems, we propose a PCM main memory optimization mechanism that utilizes the scratch pad memory (SPM). Furthermore, we propose an MLC/SLC configuration optimization algorithm to enhance the efficiency of the hybrid DRAM + PCM memory. We also propose an energy-aware task scheduling algorithm for parallel computing in battery-powered mobile systems.
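As a rough illustration of the kind of temperature-aware scheduling described above, the sketch below greedily assigns each task to the core with the lowest predicted temperature; the thermal model, coefficients, and task set are invented placeholders, not the dissertation's 3D CMP model or algorithms.

```python
# Greedy thermal-aware scheduler sketch: assign each task to the core whose
# predicted post-execution temperature is lowest. The thermal model here is a
# simple placeholder, not the dissertation's 3D CMP prediction model.

def predict_temperature(core_temp, task_power, alpha=0.3, ambient=45.0):
    """Toy prediction: temperature rises with task power and decays toward ambient."""
    return core_temp + alpha * task_power - 0.05 * (core_temp - ambient)

def schedule(tasks, core_temps):
    """tasks: list of (task_id, power, deadline); core_temps: per-core temperatures in Celsius."""
    assignment = {}
    # Schedule the most urgent tasks first (earliest deadline).
    for task_id, power, _deadline in sorted(tasks, key=lambda t: t[2]):
        predictions = [predict_temperature(t, power) for t in core_temps]
        core = min(range(len(core_temps)), key=lambda i: predictions[i])  # coolest predicted core
        assignment[task_id] = core
        core_temps[core] = predictions[core]
    return assignment, max(core_temps)

if __name__ == "__main__":
    tasks = [("t1", 8.0, 10), ("t2", 5.0, 4), ("t3", 12.0, 7)]
    mapping, peak = schedule(tasks, core_temps=[50.0, 55.0, 48.0, 52.0])
    print(mapping, round(peak, 1))
```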
When scheduling tasks in embedded systems, scheduling decisions are based on information such as the estimated execution time of tasks. We therefore design a method to evaluate the impact of inaccurate information on resource allocation in embedded systems. Finally, in order to move workload from embedded systems to remote cloud computing facilities, we present a resource optimization mechanism for heterogeneous federated multi-cloud systems, together with two online dynamic algorithms for resource allocation and task scheduling that take resource contention into account.
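Purely as an illustration of an online placement rule that accounts for resource contention (not the dissertation's algorithms), one might assign each arriving task to the cloud with the smallest estimated cost plus contention penalty:

```python
# Online greedy placement sketch: each arriving task goes to the cloud whose
# estimated cost, inflated by a contention penalty on oversubscribed capacity,
# is smallest. Cloud names, capacities, and the penalty weight are illustrative.

class Cloud:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # abstract compute units
        self.load = 0.0            # currently allocated units

    def estimated_cost(self, demand, contention_weight=2.0):
        utilization = (self.load + demand) / self.capacity
        contention = contention_weight * max(0.0, utilization - 1.0)  # penalty only when oversubscribed
        return demand / self.capacity + contention

def place(task_demand, clouds):
    best = min(clouds, key=lambda c: c.estimated_cost(task_demand))
    best.load += task_demand
    return best.name

if __name__ == "__main__":
    clouds = [Cloud("cloud-A", 100), Cloud("cloud-B", 60), Cloud("cloud-C", 80)]
    for demand in [30, 50, 20, 70, 10]:   # tasks arriving online
        print(demand, "->", place(demand, clouds))
```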
|
172 |
IDSAAS: INTRUSION DETECTION SYSTEM AS A SERVICE IN PUBLIC CLOUDS -- Alharkan, Turki, 11 January 2013
In a public cloud computing environment, consumers cannot always rely solely on the cloud provider’s security infrastructure. They may need to monitor and protect their virtual presence by implementing their own intrusion detection capabilities alongside other security technologies within the cloud fabric. Cloud consumers may also want to collect and log network traffic for further analysis, which can help them write tailor-made attack scenarios designed around the nature of the application they want to protect. Furthermore, consumers’ applications can be distributed among different regions of the cloud or in non-cloud locations. The need to protect all these assets from a centralized location is fundamental to many cloud consumers.
We provide a framework and implementation for an intrusion detection system that is suitable for the public cloud environment. Intrusion Detection as a Service (IDSaaS) targets security at the infrastructure level of a public cloud (IaaS) by providing intrusion detection technology that is highly elastic, portable, and fully controlled by the cloud consumer. These features allow cloud consumers to protect their cloud-based applications from security threats and unauthorized intruders. We developed a proof-of-concept prototype on the Amazon EC2 cloud and performed different experiments to evaluate its performance. The experimental results show that IDSaaS can provide the required protection in a reasonable and effective manner. / Thesis (Master, Computing) -- Queen's University, 2013-01-10 08:29:23.136
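The elasticity that IDSaaS emphasizes can be pictured with a simple threshold-based scaler; the sketch below is a generic illustration assuming boto3 access to EC2, with a placeholder sensor AMI, instance type, and per-sensor capacity figure, and it is not part of the IDSaaS prototype.

```python
# Threshold-based scaling sketch for IDS sensor instances on EC2.
# The AMI ID, instance type, region, and traffic figures are placeholders.
import boto3

SENSOR_AMI = "ami-0123456789abcdef0"   # placeholder AMI with an IDS sensor pre-installed
SENSOR_TYPE = "t3.medium"              # placeholder instance type

ec2 = boto3.client("ec2", region_name="us-east-1")

def scale_sensors(traffic_mbps, current_sensor_ids, mbps_per_sensor=500):
    """Launch or terminate sensor instances so capacity tracks observed traffic."""
    desired = max(1, -(-int(traffic_mbps) // mbps_per_sensor))  # ceiling division
    if desired > len(current_sensor_ids):
        ec2.run_instances(ImageId=SENSOR_AMI, InstanceType=SENSOR_TYPE,
                          MinCount=desired - len(current_sensor_ids),
                          MaxCount=desired - len(current_sensor_ids))
    elif desired < len(current_sensor_ids):
        ec2.terminate_instances(InstanceIds=current_sensor_ids[desired:])
    return desired
```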
|
173 |
Discovery, Personalization and Resource Provisioning of Mobile Services -- Elgazzar, Khalid, 04 September 2013
Mobile service provisioning is intended to serve interoperable functionality from mobile devices over the network. The mobile service paradigm shifts the role of mobile devices from consumers to providers, opening up new opportunities for a multitude of collaborative services and applications, ranging from sharing personal information to collaborative participatory sensing. Although many basic principles of the standard Web service approach continue to apply, the inherent limitations of mobile devices and broadband wireless access render the deployment of standard architectures in mobile environments inefficient. This research introduces two concepts that revolutionize mobile service provisioning: personal and cloud-assisted service provisioning. Personal services offer a range of user-centric data services to a limited set of consumers who are explicitly authorized by the user providing the service. Personal services facilitate prevailing trends such as social networking and mobile healthcare services without compromising personal privacy. Cloud-assisted service provisioning bridges the gap between the limited resources of mobile devices and the increasing resource demands of mobile applications. This approach provides reliable and efficient mobile services while alleviating the burden on limited mobile resources. Both approaches take advantage of the device's mobility and real-time access to various context information. Experimental results reveal that personal services offer personalization based on the user's context and preferences, while cloud-assisted service provisioning, in addition to optimizing the consumption of scarce mobile resources, offers significant improvements to the reliability and availability of mobile services. / Thesis (Ph.D, Computing) -- Queen's University, 2013-09-03 10:28:42.795
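The trade-off behind cloud-assisted provisioning is often framed as an offload decision rule; the sketch below is a generic illustration with assumed device, network, and cloud parameters, not the thesis's actual mechanism.

```python
# Offload-or-run-locally decision sketch: offload when the estimated remote
# time (round trip + upload + cloud execution + download) beats local execution.
# All device, cloud, and link parameters are illustrative placeholders.

def should_offload(cycles, data_in_bytes, data_out_bytes,
                   local_hz=1.0e9, cloud_hz=8.0e9,
                   uplink_bps=5.0e6, downlink_bps=20.0e6, rtt_s=0.05):
    local_time = cycles / local_hz
    remote_time = (rtt_s
                   + data_in_bytes * 8 / uplink_bps
                   + cycles / cloud_hz
                   + data_out_bytes * 8 / downlink_bps)
    return remote_time < local_time, local_time, remote_time

if __name__ == "__main__":
    offload, t_local, t_remote = should_offload(cycles=2.0e9,
                                                data_in_bytes=200_000,
                                                data_out_bytes=50_000)
    print(f"offload={offload} local={t_local:.2f}s remote={t_remote:.2f}s")
```

A similar comparison can be made for energy instead of time, with the same structure: local computation energy on one side, radio transmission plus idle-wait energy on the other.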
|
174 |
On the Prevention of Cache-Based Side-Channel Attacks in a Cloud Environment -- Godfrey, Michael, 26 September 2013
As Cloud services become more commonplace, recent works have uncovered vulnerabilities unique to such systems. Specifically, the paradigm promotes a risk of information leakage across virtual machine isolation via side-channels. Unlike conventional computing, the infrastructure supporting a Cloud environment allows mutually distrusting clients simultaneous access to the underlying hardware, a seldom-met requirement for a side-channel attack. This thesis investigates the current state of side-channel vulnerabilities involving the CPU cache and identifies the shortcomings of traditional defenses in a Cloud environment. It explores why solutions to non-Cloud cache-based side-channels cease to work in Cloud environments, and describes new mitigation techniques applicable to Cloud security. Specifically, it separates canonical cache-based side-channel attacks into two categories, Sequential and Parallel attacks, based on their implementation, and devises a unique mitigation technique for each. Applying these solutions to a canonical Cloud environment, this thesis demonstrates the validity of these Cloud-specific, cache-based side-channel mitigation techniques. Furthermore, it shows that they can be implemented together as a server-side approach to improve security without inconveniencing the client. Finally, it compares these solutions to the current state-of-the-art. / Thesis (Master, Computing) -- Queen's University, 2013-09-25 18:03:47.737
|
175 |
Dynamic Cloud Resource Management: Scheduling, Migration and Server Disaggregation -- Svärd, Petter, January 2014
A key aspect of cloud computing is the promise of infinite, scalable resources: cloud services should scale up and down on demand. This thesis investigates methods for dynamic resource allocation and management of services in cloud datacenters, introducing new approaches as well as improvements to established technologies.

Virtualization is a key technology for cloud computing, as it allows several operating system instances to run on the same Physical Machine (PM); a cloud service normally consists of a number of Virtual Machines (VMs) hosted on PMs. In this thesis, a novel virtualization approach is presented. Instead of running each PM in isolation, resources from multiple PMs in the datacenter are disaggregated and exposed to the VMs as pools of CPU, I/O and memory resources. VMs are provisioned by drawing the right amount of resources from each pool, enabling both VMs larger than any single PM can host and VMs with specifications tailor-made for their application.

Another important aspect of virtualization is live migration of VMs, i.e., moving VMs between PMs without interruption in service. Live migration allows for better PM utilization and is also useful for administrative purposes. In the thesis, two improvements to the standard live migration algorithm are presented: delta compression and page transfer reordering. These improvements can reduce migration downtime, i.e., the time that the VM is unavailable, as well as the total migration time. Postcopy migration, where the VM is resumed on the destination before the memory content is transferred, is also studied. Both userspace and in-kernel postcopy algorithms are evaluated in an in-depth study of live migration principles and performance.

Efficient mapping of VMs onto PMs is a key problem for cloud providers, as PM utilization directly impacts revenue. When services are accepted into a datacenter, a decision is made on which PMs should host the service VMs. This thesis presents a general approach to service scheduling that allows the same scheduling software to be used across multiple cloud architectures. A number of scheduling algorithms to optimize objectives like revenue or utilization are also studied. Finally, an approach to continuous datacenter consolidation is presented. As VM workloads fluctuate and server availability varies, any initial mapping is bound to become suboptimal over time. The continuous datacenter consolidation approach adjusts the VM-to-PM mapping during operation, based on combinations of management actions such as suspending/resuming PMs, live migrating VMs, and suspending/resuming VMs. Proof-of-concept software and a set of algorithms that allow cloud providers to continuously optimize their server resources are presented in the thesis.
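To give a sense of how delta compression can shrink retransmissions of dirty pages during pre-copy live migration, here is a minimal illustration using generic XOR-plus-zlib deltas and an assumed page size; it is not the thesis's implementation.

```python
# Delta compression sketch for pre-copy live migration: when a page is
# retransmitted, send a compressed XOR delta against the version already on
# the destination instead of the full page. Page size and contents are illustrative.
import zlib

PAGE_SIZE = 4096

def encode_page(new_page, old_page):
    """Return a tagged payload: full page on first transfer, compressed delta otherwise."""
    if old_page is None:
        return b"F" + zlib.compress(new_page)
    delta = bytes(a ^ b for a, b in zip(new_page, old_page))
    return b"D" + zlib.compress(delta)

def decode_page(payload, old_page):
    body = zlib.decompress(payload[1:])
    if payload[:1] == b"F":
        return body
    return bytes(a ^ b for a, b in zip(body, old_page))

if __name__ == "__main__":
    old = bytes(PAGE_SIZE)                      # page content as first sent to the destination
    new = bytearray(old)
    new[100:108] = b"modified"                  # the VM dirtied a few bytes before the next round
    wire = encode_page(bytes(new), old)
    assert decode_page(wire, old) == bytes(new)
    print("delta payload:", len(wire), "bytes vs full page:", PAGE_SIZE, "bytes")
```

The gain depends on how localized the dirty writes are: pages dirtied in only a few spots compress to a small fraction of the page size, which is what shortens the final synchronization rounds and hence the downtime.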
|
176 |
Head into the Cloud: An Analysis of the Emerging Cloud Infrastructure -- Chandrasekaran, Balakrishnan, January 2016
We are witnessing a paradigm shift in computing: people are increasingly using Web-based software for tasks that only a few years ago were carried out using software running locally on their computers. The increasing use of mobile devices, which typically have limited processing power, is catalyzing the idea of offloading computations to the cloud. It is within this context of cloud computing that this thesis attempts to address a few key questions: (a) With more computations moving to the cloud, what is the state of the Internet's core? In particular, do routing changes and consistent congestion in the Internet's core affect end users' experiences? (b) With software-defined networking (SDN) principles increasingly being used to manage cloud infrastructures, are the software solutions robust (i.e., resilient to bugs)? With service outage costs being prohibitively expensive, how can we support network operators in experimenting with novel ideas without crashing their SDN ecosystems? (c) How can we build a large-scale passive IP geolocation system to geolocate the entire IP address space at once so that cloud-based software can utilize the geolocation database in enhancing the end-user experience? (d) Why is the Internet so slow? Since a low-latency network allows more offloading of computations to the cloud, how can we reduce the latency in the Internet? / Dissertation
|
177 |
Efficient Bare Metal Backup and Restore in OpenStack Based Cloud Infrastructure Design: Implementation and Testing of a Prototype -- Tadesse, Addishiwot, January 2016
No description available.
|
178 |
A Parallel Genetic Algorithm for Placement and Routing on Cloud Computing Platforms -- Berlier, Jacob A., 05 May 2011
The design and implementation of today's most advanced VLSI circuits and multi-layer printed circuit boards would not be possible without automated design tools that assist with the placement of components and the routing of connections between them. In this work, we investigate how placement and routing can be implemented and accelerated using cloud computing resources. A parallel genetic algorithm approach is used to optimize component placement and the routing order supplied to a Lee's algorithm maze router. A study of mutation rate, dominance rate, and population size is presented to suggest favorable parameter values for arbitrarily sized printed circuit board problems. The algorithm is then used to successfully design a Microchip PIC18 breakout board and a Micrel Ethernet switch. Performance results demonstrate that a 50X runtime improvement over a serial approach is achievable using 64 cloud computing cores. The results further suggest that significantly greater performance could be achieved by requesting additional cloud computing resources at additional cost. It is our hope that this work will serve as a framework for future efforts to improve parallel placement and routing algorithms using cloud computing resources.
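The maze router at the heart of the flow described above is the classic Lee wave-expansion algorithm; the sketch below shows it on a tiny single-layer grid with unit costs and an invented obstacle map, and is not the thesis's code.

```python
# Lee's algorithm sketch: BFS wave expansion from the source, then backtrace
# from the target along decreasing distance labels. Grid and pins are illustrative.
from collections import deque

def lee_route(grid, src, dst):
    """grid: 2D list, 0 = free, 1 = blocked; src/dst: (row, col). Returns a path or None."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[src[0]][src[1]] = 0
    queue = deque([src])
    while queue:                                   # wave expansion
        r, c = queue.popleft()
        if (r, c) == dst:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    if dist[dst[0]][dst[1]] is None:
        return None                                # unroutable with the current obstacles
    path, (r, c) = [dst], dst                      # backtrace along decreasing labels
    while (r, c) != src:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == dist[r][c] - 1:
                r, c = nr, nc
                path.append((r, c))
                break
    return path[::-1]

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(lee_route(grid, (0, 0), (2, 0)))
```

Backtracing along strictly decreasing distance labels yields a shortest grid path whenever the target is reachable, which is why routing order, rather than the router itself, is the natural knob for the genetic algorithm to optimize.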
|
179 |
Projektový management s podporou nástroja MS Project Server, možnosti nasadenia a správy / Project management with support of MS Project Server, deployment and administration scenarios -- Hajžuš, Miroslav, January 2010
The primary goal of my thesis is a managerial recommendation that should ease the decision on whether AutoCont should add a private cloud (SaaS) option for project management software to its service portfolio. The practical part presents a Total Cost of Ownership analysis of both the on-premise and the private cloud option, using an SME as an example.
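A Total Cost of Ownership comparison of this kind typically reduces to summing up-front and recurring costs over a planning horizon; the sketch below uses invented figures purely to show the shape of the calculation, and none of the numbers come from the thesis.

```python
# Simple TCO comparison sketch: on-premise (up-front hardware and licenses plus
# yearly operations) versus private cloud / SaaS (subscription plus migration).
# All cost figures and the 5-year horizon are illustrative placeholders.

def tco_on_premise(years=5, hardware=12000, licenses=8000,
                   admin_per_year=6000, maintenance_per_year=2000):
    return hardware + licenses + years * (admin_per_year + maintenance_per_year)

def tco_private_cloud(years=5, users=25, fee_per_user_month=30, migration=4000):
    return migration + years * 12 * users * fee_per_user_month

if __name__ == "__main__":
    on_prem = tco_on_premise()
    cloud = tco_private_cloud()
    print("On-premise 5-year TCO:   ", on_prem)
    print("Private cloud 5-year TCO:", cloud)
    print("Cheaper option:", "private cloud" if cloud < on_prem else "on-premise")
```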
|
180 |
Flexible framework for elasticity in cloud computing / Un cadre flexible pour l’élasticité dans les nuages -- Al-Dhuraibi, Yahya, 10 December 2018
Le Cloud computing a gagné beaucoup de popularité et a reçu beaucoup d'attention des deux mondes, industriel et académique, puisque cela les libère de la charge et du coût de la gestion de centres de données locaux. Toutefois, le principal facteur motivant l'utilisation du Cloud est sa capacité de fournir des ressources en fonction des besoins du client. Ce concept est appelé l'élasticité. Adapter les applications Cloud lors de leur exécution en fonction des variations de la demande est un grand défi. En outre, l'élasticité du Cloud est diverse et hétérogène car elle englobe différentes approches, stratégies, objectifs, etc. Nous sommes intéressés à étudier : Comment résoudre le problème de sur/sous-approvisionnement ? Comment garantir la disponibilité des ressources et surmonter les problèmes d'hétérogénéité et de granularité des ressources ? Comment standardiser et unifier les solutions d'élasticité et modéliser leur diversité à un haut niveau d'abstraction ? Dans cette thèse, trois contributions majeures sont proposées. Tout d'abord, un état de l'art à jour de l'élasticité du Cloud, qui passe en revue les différents travaux relatifs à l'élasticité des machines virtuelles et des conteneurs. Deuxièmement, ElasticDocker, une approche permettant de gérer l'élasticité des conteneurs, notamment l'élasticité verticale, la migration et l'élasticité combinée. Troisièmement, MoDEMO, un nouveau cadre de gestion d'élasticité unifié, basé sur un standard, dirigé par les modèles, hautement extensible et reconfigurable, supportant plusieurs stratégies, différents types d'élasticité, différentes techniques de virtualisation et plusieurs fournisseurs de Cloud. / Cloud computing has been gaining popularity and has received a great deal of attention from both the industrial and academic worlds, since it frees them from the burden and cost of managing local data centers. However, the main factor motivating the use of the cloud is its ability to provide resources according to customer needs, which is referred to as elasticity. Adapting cloud applications during their execution according to demand variation is a challenging task. In addition, cloud elasticity is diverse and heterogeneous because it encompasses different approaches, policies, purposes, etc. We are interested in investigating: How to overcome the problem of over-provisioning/under-provisioning? How to guarantee resource availability and overcome the problems of heterogeneity and resource granularity? How to standardize and unify elasticity solutions and model their diversity at a high level of abstraction? In this thesis, we address these challenges and investigate many aspects of elasticity to manage cloud resources efficiently. Three contributions are proposed. Firstly, an up-to-date state of the art of cloud elasticity, reviewing the different works related to elasticity for both virtual machines and containers. Secondly, ElasticDocker, an approach to manage container elasticity, including vertical elasticity, live migration, and elasticity combination across different virtualization techniques. Thirdly, MoDEMO, a new unified, standard-based, model-driven, highly extensible and reconfigurable framework that supports multiple elasticity policies, vertical and horizontal elasticity, different virtualization techniques, and multiple cloud providers.
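To give a flavour of the vertical container elasticity that ElasticDocker addresses, here is a minimal threshold-based controller sketch using the Docker SDK for Python; the container name, thresholds, and step size are assumptions, and ElasticDocker's actual decision logic and live-migration support are not represented.

```python
# Vertical elasticity sketch for a single container: grow the memory limit when
# usage is high, shrink it when usage is low. Name, thresholds, and step size
# are illustrative placeholders, not ElasticDocker's policy.
import time
import docker

STEP = 128 * 1024 * 1024              # resize granularity: 128 MiB
MIN_LIMIT = 256 * 1024 * 1024
MAX_LIMIT = 4 * 1024 * 1024 * 1024

def memory_usage_ratio(container):
    stats = container.stats(stream=False)        # one-shot stats snapshot
    mem = stats["memory_stats"]
    return mem["usage"] / mem["limit"]

def adjust_memory(container, high=0.9, low=0.4):
    container.reload()                           # refresh cached attributes
    limit = container.attrs["HostConfig"]["Memory"] or MAX_LIMIT
    usage = memory_usage_ratio(container)
    if usage > high and limit + STEP <= MAX_LIMIT:
        new_limit = limit + STEP                 # scale up before the container is throttled
    elif usage < low and limit - STEP >= MIN_LIMIT:
        new_limit = limit - STEP                 # release memory the container does not need
    else:
        return limit
    container.update(mem_limit=new_limit, memswap_limit=new_limit)
    return new_limit

if __name__ == "__main__":
    client = docker.from_env()
    target = client.containers.get("web-app")    # placeholder container name
    while True:
        print("memory limit now:", adjust_memory(target))
        time.sleep(10)
```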
|