641 |
Investigating performance and energy efficiency on a private cloud. Smith, James William, January 2014.
Organizations are turning to private clouds due to concerns about security, privacy and administrative control. They are attracted by the flexibility and other advantages of cloud computing but are wary of breaking decades-old institutional practices and procedures. Private clouds can help to alleviate these concerns by retaining security policies and in-organization ownership, and by providing increased accountability compared with public services. This work investigates how it may be possible to develop an energy-aware private cloud system able to adapt workload allocation strategies so that overall energy consumption is reduced without loss of performance or dependability. Current literature focuses on consolidation as a method for improving the energy efficiency of cloud systems, but if consolidation is undesirable due to its performance, dependability or latency penalties, then another approach is required. Given a private cloud in which the set of machines is constant, with no machines being powered down in response to changing workloads, and a set of virtual machines to run, each with different characteristics and profiles, it is possible to vary the virtual machine placement to reduce energy consumption or improve the performance of the VMs. Through a series of experiments this work demonstrates that workload mixes can affect energy consumption and the performance of applications running inside virtual machines. These experiments took the form of measuring the performance and energy usage of applications running inside virtual machines. The arrangement of these virtual machines on their hosts was varied to determine the effect of different workload mixes. The insights from these experiments have been used to create a proof-of-concept custom VM Allocator system for the OpenStack private cloud computing platform. Using CloudMonitor, a lightweight monitoring application that gathers data on system performance and energy consumption, the implementation uses a holistic view of the private cloud state to inform workload placement decisions.
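The abstract does not describe the allocator's placement heuristic in detail, so the following is only a rough, hypothetical Python sketch of the general idea it points at: placing VMs on a fixed pool of always-on hosts so that the estimated power increase stays small and VMs with similar resource profiles are not stacked on one host. The `Host`/`VM` classes, the linear power model and all coefficients are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch of profile-aware VM placement on a fixed set of hosts.
# The power model and profiles are illustrative, not the thesis's actual model.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    idle_watts: float = 70.0        # assumed baseline draw of an always-on host
    watts_per_util: float = 60.0    # assumed slope of a linear power model
    vms: list = field(default_factory=list)

    def cpu_util(self):
        return min(1.0, sum(vm.cpu for vm in self.vms))

    def power(self):
        return self.idle_watts + self.watts_per_util * self.cpu_util()

@dataclass
class VM:
    name: str
    cpu: float   # fraction of one host's CPU this VM typically uses
    io: float    # fraction of host I/O bandwidth it typically uses

def place(vms, hosts):
    """Greedy placement: put each VM where the cluster-wide power increase is
    smallest, preferring hosts whose current mix is dissimilar (e.g. an
    I/O-heavy VM onto a CPU-heavy host) to reduce contention."""
    for vm in sorted(vms, key=lambda v: v.cpu + v.io, reverse=True):
        def cost(h):
            before = h.power()
            h.vms.append(vm)                 # tentatively place the VM
            delta = h.power() - before
            similarity = sum(min(vm.cpu, o.cpu) + min(vm.io, o.io) for o in h.vms[:-1])
            h.vms.pop()                      # undo the tentative placement
            return delta + 10.0 * similarity  # weight is an arbitrary example value
        best = min(hosts, key=cost)
        best.vms.append(vm)
    return hosts

hosts = [Host("node1"), Host("node2")]
vms = [VM("web", 0.2, 0.6), VM("batch", 0.7, 0.1), VM("db", 0.3, 0.5)]
for h in place(vms, hosts):
    print(h.name, [v.name for v in h.vms], f"{h.power():.0f} W")
```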
|
642 |
Ad hoc cloud computing. McGilvary, Gary Andrew, January 2014.
Commercial and private cloud providers offer virtualized resources via a set of co-located and dedicated hosts that are exclusively reserved for the purpose of offering a cloud service. While both cloud models appeal to the mass market, there are many cases where outsourcing to a remote platform or procuring an in-house infrastructure may not be ideal or even possible. To offer an attractive alternative, we introduce and develop an ad hoc cloud computing platform to transform spare resource capacity from an infrastructure owner’s locally available, but non-exclusive and unreliable infrastructure, into an overlay cloud platform. The foundation of the ad hoc cloud relies on transferring and instantiating lightweight virtual machines on demand on near-optimal hosts, while virtual machine checkpoints are distributed in a P2P fashion to other members of the ad hoc cloud. Virtual machines found to be non-operational are restored elsewhere, ensuring the continuity of cloud jobs. In this thesis we investigate the feasibility, reliability and performance of ad hoc cloud computing infrastructures. We first show that the combination of volunteer computing and virtualization is the backbone of the ad hoc cloud. We outline the process of virtualizing the volunteer system BOINC to create V-BOINC. V-BOINC distributes virtual machines to volunteer hosts, allowing volunteer applications to be executed in a sandboxed environment and addressing many of the shortcomings of BOINC; this also provides the basis for an ad hoc cloud computing platform to be developed. We detail the challenges of transforming V-BOINC into an ad hoc cloud and outline the transformational process and integrated extensions. These include a BOINC job submission system, cloud job and virtual machine restoration schedulers, and a periodic P2P checkpoint distribution component. Furthermore, as current monitoring tools are unable to cope with the dynamic nature of ad hoc clouds, a dynamic infrastructure monitoring and management tool called the Cloudlet Control Monitoring System is developed and presented. We evaluate each of our individual contributions as well as the reliability, performance and overheads associated with an ad hoc cloud deployed on a realistically simulated unreliable infrastructure. We conclude that the ad hoc cloud is not only a feasible concept but also a viable computational alternative that offers high levels of reliability and can offer at least reasonable performance, which at times may exceed the performance of a commercial cloud infrastructure.
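The restoration scheduler itself is not spelled out in this abstract; as a loose illustration of the idea, the Python sketch below picks a host for restarting a failed VM from the peers that hold one of its P2P-distributed checkpoints, balancing an availability estimate against current load. The host table, replica map and scoring rule are all hypothetical, not the thesis's algorithm.

```python
# Illustrative sketch (not the thesis's actual scheduler): choose a restoration
# host for a failed VM from the peers holding one of its P2P-distributed
# checkpoints, preferring reliable, lightly loaded hosts. All data is invented.
hosts = {
    # host name: (availability estimate 0..1, current load 0..1)
    "desk-a": (0.92, 0.35),
    "desk-b": (0.70, 0.10),
    "lab-7":  (0.98, 0.80),
}

checkpoint_replicas = {"vm-42": ["desk-a", "lab-7"]}   # hypothetical replica map

def restore_target(vm_id):
    candidates = checkpoint_replicas.get(vm_id, [])
    if not candidates:
        return None   # no surviving checkpoint; the cloud job must restart
    def score(host):
        availability, load = hosts[host]
        return availability * (1.0 - load)   # favour reliable, idle hosts
    return max(candidates, key=score)

def handle_failure(vm_id):
    target = restore_target(vm_id)
    print(f"{vm_id}: restore on {target}" if target else f"{vm_id}: no checkpoint found")

handle_failure("vm-42")   # picks desk-a here (0.92 * 0.65 > 0.98 * 0.20)
```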
|
643 |
Cloud-based design and manufacturing: a network perspective. Wu, Dazhong, 12 January 2015.
The motivation for this research is the need to reduce the time and cost associated with maintaining information and communication technology infrastructure for design and manufacturing in digitally networked environments, to enhance design communication and collaboration in distributed and collaborative design processes, and to adapt to rapidly changing market demands. The objective of this dissertation is to propose a new design and manufacturing paradigm, namely Cloud-Based Design and Manufacturing (CBDM), for enhancing collaborative product development in distributed settings. In this dissertation, the following challenges pertaining to CBDM are addressed: the systematic development of a conceptual framework and a holistic vision for CBDM systems; the development of a new approach for visualizing distributed and collaborative design processes, measuring tie strengths in a complex and large design collaboration network, and detecting design communities with common design activities in cloud-based design (CBD) settings from a social network perspective; and the development of a new approach that helps identify potential manufacturing bottlenecks and scale manufacturing capacity in cloud-based manufacturing (CBM) settings from a manufacturing network perspective. The contributions of this dissertation fall into three research domains: (1) proposing the first definition, a holistic vision, and an example application scenario for CBDM, (2) modeling and analyzing information flow in cloud-based design for improving design collaboration, and (3) modeling and analyzing material flow in cloud-based manufacturing for planning manufacturing scalability.
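As a minimal illustration of the social-network perspective mentioned above, the sketch below approximates tie strength between designers by the number of shared artifacts they both edited and groups strongly tied designers into crude communities. The data, the tie-strength proxy and the threshold are invented; the dissertation's actual network measures are not reproduced here.

```python
# Minimal sketch of the network idea: tie strength between two designers is
# approximated by how many shared files they both edited. Data is made up.
from collections import defaultdict
from itertools import combinations

edits = {                     # designer -> set of artifacts they touched
    "alice": {"bracket.stp", "housing.stp", "bom.xlsx"},
    "bob":   {"bracket.stp", "housing.stp"},
    "carol": {"pcb.brd", "bom.xlsx"},
    "dave":  {"pcb.brd"},
}

tie = defaultdict(int)
for a, b in combinations(edits, 2):
    tie[(a, b)] = len(edits[a] & edits[b])   # shared artifacts as a tie-strength proxy

def communities(threshold=1):
    """Crude community detection: connected components over ties at or above
    the threshold, merged incrementally."""
    groups = []
    for a, b in combinations(edits, 2):
        if tie[(a, b)] < threshold:
            continue
        merged = [g for g in groups if a in g or b in g]
        newg = {a, b}.union(*merged) if merged else {a, b}
        groups = [g for g in groups if g not in merged] + [newg]
    return groups

print(dict(tie))
print(communities())
```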
|
644 |
Evaluating and quantifying the feasibility and effectiveness of whole IT system moving target defenses. Bardas, Alexandru Gavril, January 1900.
Doctor of Philosophy / Computing and Information Sciences / Scott A. DeLoach / Xinming (Simon) Ou / The Moving Target Defense (MTD) concept has been proposed as an approach to rebalance the security landscape by increasing uncertainty and apparent complexity for attackers, reducing their window of opportunity, and raising the costs of their reconnaissance and attack efforts. Intuitively, the idea of applying MTD techniques to a whole IT system should provide enhanced security; however, little research has been done to show that it is feasible or beneficial to the system’s security. This dissertation presents an MTD platform at the whole IT system level in which any component of the IT system can be automatically and reliably replaced with a fresh new one. A component is simply a virtual machine (VM) instance or a cluster of instances. There are a number of security benefits when leveraging such an MTD platform. Replacing a VM instance with a new one running the most up-to-date operating system and applications eliminates security problems caused by unpatched vulnerabilities and all the privileges the attacker has obtained on the old instance. Configuration parameters for the new instance, such as IP address, port numbers for services, and credentials, can be changed from the old ones, invalidating the knowledge the attackers already obtained and forcing them to redo the work to re-compromise the new instance. In spite of these obvious security benefits, building a system that supports live replacement with minimal to no disruption to the IT system’s normal operations is difficult. Modern enterprise IT systems have complex dependencies among services, so changing even a single instance will almost certainly disrupt the dependent services. Therefore, the replacement of instances must be carefully orchestrated with updating the settings of the dependent instances. This orchestration of changes is notoriously error-prone if done manually, yet only limited tool support is available to automate the process. We designed and built a framework (ANCOR) that captures the requirements and needs of a whole IT system (in particular, dependencies among various services) and compiles them into a working IT system. ANCOR is at the core of the proposed MTD platform (ANCOR-MTD) and enables automated live instance replacements. In order to evaluate the platform’s practicality, this dissertation presents a series of experiments on multiple IT systems that show negligible (statistically non-significant) performance impacts. To evaluate the platform’s efficacy, this research analyzes costs versus security benefits by quantifying the outcome (sizes of potential attack windows) in terms of the number of adaptations, and demonstrates that an IT system deployed and managed using the proposed MTD platform will increase attack difficulty.
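ANCOR's own requirement model is not shown in this abstract; the sketch below only illustrates the orchestration problem it addresses: after one instance is replaced, every transitively dependent service must be reconfigured, in dependency order. The service graph is hypothetical, and Python's `graphlib` (3.9+) stands in for whatever dependency resolution the framework actually uses.

```python
# Hedged sketch of dependency-ordered replacement. The service graph below is
# hypothetical, not ANCOR's model.
from graphlib import TopologicalSorter   # standard library, Python 3.9+

# "a depends on b" is written depends["a"] = {"b"}
depends = {
    "web":   {"app"},
    "app":   {"db", "cache"},
    "db":    set(),
    "cache": set(),
}

def dependents_of(service):
    """Transitive closure of services that must be reconfigured after
    `service` is swapped for a fresh instance with new IP/credentials."""
    out, frontier = set(), {service}
    while frontier:
        frontier = {s for s, deps in depends.items() if deps & frontier} - out
        out |= frontier
    return out

def replacement_plan(service):
    affected = dependents_of(service) | {service}
    # static_order() yields dependencies before their consumers.
    return [s for s in TopologicalSorter(depends).static_order() if s in affected]

print(replacement_plan("db"))   # e.g. ['db', 'app', 'web']
```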
|
645 |
AUTOMATION OF A CLOUD HOSTED APPLICATION: Performance, Automated Testing, Cloud Computing. Penmetsa, Jyothi Spandana, January 2016.
Context: Software testing is the process of assessing the quality of a software product to determine whether it meets the customer’s existing requirements. Software testing is one of the “Verification and Validation,” or V&V, software practices. The two basic techniques of software testing are black-box testing and white-box testing. Black-box testing focuses solely on the outputs generated in response to the inputs supplied, neglecting the internal components of the software, whereas white-box testing focuses on the internal mechanism of the application. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed in order to reduce manual effort and to perform testing continuously, thereby increasing the quality of the product. Objectives: In this research, a cloud hosted application is automated using the TestComplete tool. The objective of this thesis is to verify the functionality of the cloud application’s test appliance library through automation and to measure the impact of the automation on the organisation’s release cycles. Methods: Automation is implemented using the Scrum methodology, an agile software development process. Using Scrum, working software can be delivered to customers incrementally and empirically, with functionality updated in each increment. Test appliance library functionality is verified by deploying a testing device and tracking automatic software downloads to, and license updates on, that device. Results: The test appliance functionality of the cloud hosted application is automated using the TestComplete tool, and the automation is found to reduce release cycles. Through automation of the cloud hosted application, a reduction of nearly 24% in the release cycle can be observed, reducing manual effort and increasing the quality of delivery. Conclusion: Automation of a cloud hosted application removes manual effort, allows time to be utilised effectively, and enables the application to be tested continuously, increasing efficiency.
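The thesis's TestComplete project is not reproduced here; purely as an illustration of the black-box style of check described above (asserting only on observable outputs such as download status and license state), here is a generic Python `unittest` sketch against a hypothetical `ApplianceLibrary` client, not the application's actual API.

```python
# Generic black-box style checks against a hypothetical ApplianceLibrary client.
# Only observable outputs are asserted; internals are deliberately ignored.
import unittest

class ApplianceLibrary:
    """Stand-in for the cloud application's test appliance library API."""
    def __init__(self):
        self._licenses = {"dev-board-1": "expired"}

    def download_firmware(self, device):
        return {"device": device, "status": "downloaded", "version": "2.4.1"}

    def refresh_license(self, device):
        self._licenses[device] = "valid"
        return self._licenses[device]

class ApplianceLibraryBlackBoxTest(unittest.TestCase):
    def setUp(self):
        self.lib = ApplianceLibrary()

    def test_firmware_download_reports_success(self):
        result = self.lib.download_firmware("dev-board-1")
        self.assertEqual(result["status"], "downloaded")

    def test_license_is_valid_after_refresh(self):
        self.assertEqual(self.lib.refresh_license("dev-board-1"), "valid")

if __name__ == "__main__":
    unittest.main()
```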
|
646 |
Workload characterization, controller design and performance evaluation for cloud capacity autoscaling. Ali-Eldin Hassan, Ahmed, January 2015.
This thesis studies cloud capacity auto-scaling, or how to provision and release resources to a service running in the cloud based on its actual demand using an automatic controller. As the performance of server systems depends on the system design, the system implementation, and the workloads the system is subjected to, we focus on these aspects with respect to designing auto-scaling algorithms. Towards this goal, we design and implement two auto-scaling algorithms for cloud infrastructures. The algorithms predict the future load for an application running in the cloud. We discuss the different approaches to designing an auto-scaler that combines reactive and proactive control methods and is able to handle long running requests, e.g., tasks running for longer than the actuation interval, in a cloud. We compare the performance of our algorithms with state-of-the-art auto-scalers and evaluate the controllers’ performance with a set of workloads. As any controller is designed with an assumption on the operating conditions and system dynamics, the performance of an auto-scaler varies with different workloads. In order to better understand the workload dynamics and evolution, we analyze a 6-year-long workload trace of the sixth most popular Internet website. In addition, we analyze a workload from one of the largest Video-on-Demand streaming services in Sweden. We discuss the popularity of objects served by the two services, the spikes in the two workloads, and the invariants in the workloads. We also introduce a measure for the disorder in a workload, i.e., the amount of burstiness. The measure is based on Sample Entropy, an empirical statistic used in biomedical signal processing to characterize biomedical signals. The introduced measure can be used to characterize workloads based on their burstiness profiles. We compare our measure with the literature on quantifying burstiness in a server workload, and show its advantages. To better understand the tradeoffs between using different auto-scalers with different workloads, we design a framework to compare auto-scalers and give probabilistic guarantees on their performance in worst-case scenarios. Using different evaluation criteria and more than 700 workload traces, we compare six state-of-the-art auto-scalers that we believe represent the development of the field in the past 8 years. Knowing that the auto-scalers’ performance depends on the workloads, we design a workload analysis and classification tool that assigns a workload to its most suitable elasticity controller out of a set of implemented controllers. The tool has two main components: an analyzer and a classifier. The analyzer analyzes a workload and feeds the analysis results to the classifier. The classifier assigns a workload to the most suitable elasticity controller based on the workload characteristics and a set of predefined business-level objectives. The tool is evaluated with a set of collected real workloads and a set of generated synthetic workloads. Our evaluation results show that the tool can help a cloud provider to improve the QoS provided to customers.
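Sample Entropy itself is a standard statistic, so a simplified, plain-Python sketch of it may help make the burstiness measure above concrete. The parameters used here (m = 2, r = 0.2 times the standard deviation) are common defaults rather than necessarily the thesis's exact parameterization, and the example workload trace is invented.

```python
# Simplified Sample Entropy: -ln(A/B), where B counts matching templates of
# length m and A counts matching templates of length m+1 (Chebyshev distance
# <= r, self-matches excluded). Larger values mean a less regular series.
import math

def sample_entropy(series, m=2, r=None):
    n = len(series)
    if r is None:
        mean = sum(series) / n
        r = 0.2 * math.sqrt(sum((x - mean) ** 2 for x in series) / n)

    def matches(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")   # no recurring patterns found at this tolerance
    return -math.log(a / b)

# Invented request-rate trace with a few spikes, just to exercise the function.
workload = [120, 118, 119, 500, 121, 117, 480, 119, 118, 122] * 6
print(f"SampEn of example workload: {sample_entropy(workload):.3f}")
```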
|
647 |
Google app engine case study: a micro blogging site. Kajita, Marcos Suguru, 27 August 2010.
Cloud computing refers to the combination of large-scale hardware resources at datacenters, integrated by system software that provides services, commonly known as Software-as-a-Service (SaaS), over the Internet. As a result of more affordable datacenters, cloud computing is slowly making its way into the mainstream business arena and has the potential to revolutionize the IT industry. As more cloud computing solutions become available, it is expected that there will be a shift to what is sometimes referred to as the Web Operating System. The Web Operating System, along with the sense of infinite computing resources on the “cloud”, has the potential to bring new challenges in software engineering. The motivation of this report, which is divided into two parts, is to understand these challenges. The first part gives a brief introduction to and analysis of cloud computing. The second part focuses on Google’s cloud computing platform and evaluates the implementation of a micro blogging site using Google’s App Engine.
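As a flavour of what the evaluated micro blogging site might look like on the App Engine stack of that era (the Python 2.7 runtime with `webapp2` and `ndb`), here is a minimal, hypothetical sketch of a post model and timeline handler. It requires the App Engine SDK to run and is not the report's actual implementation.

```python
# Hypothetical micro blogging sketch for the classic App Engine Python runtime.
# Runs only inside the App Engine SDK / runtime, not as a standalone script.
import webapp2
from google.appengine.ext import ndb

class Post(ndb.Model):
    author = ndb.StringProperty()
    content = ndb.StringProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

class TimelineHandler(webapp2.RequestHandler):
    def get(self):
        # Show the 20 most recent posts, newest first.
        for post in Post.query().order(-Post.created).fetch(20):
            self.response.write(u"{}: {}\n".format(post.author, post.content))

    def post(self):
        # Store a short post (truncated to 140 characters) and go back home.
        Post(author=self.request.get("author"),
             content=self.request.get("content")[:140]).put()
        self.redirect("/")

app = webapp2.WSGIApplication([("/", TimelineHandler)])
```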
|
648 |
ENERGY-AWARE OPTIMIZATION FOR EMBEDDED SYSTEMS WITH CHIP MULTIPROCESSOR AND PHASE-CHANGE MEMORY. Li, Jiayin, 01 January 2012.
Over the last two decades, the functions of embedded systems have evolved from simple real-time control and monitoring to more complicated services. Embedded systems equipped with powerful chips can provide the performance that computationally demanding information processing applications need. However, due to power constraints, the easy route of gaining performance by scaling up chip frequencies is no longer feasible. Recently, low-power architecture design has been the main trend in embedded system design.
In this dissertation, we present our approaches to attacking the energy-related issues in embedded system design, such as thermal issues in the 3D chip multiprocessor (CMP), the endurance issue in phase-change memory (PCM), the battery issue in embedded system designs, the impact of inaccurate information in embedded systems, and the use of cloud computing to move workloads to remote cloud computing facilities.
We propose a real-time constrained task scheduling method to reduce the peak temperature on a 3D CMP, including an online 3D CMP temperature prediction model and a set of algorithms for scheduling tasks to different cores in order to minimize the peak on-chip temperature. To address the challenging issues in applying PCM in embedded systems, we propose a PCM main memory optimization mechanism through the utilization of the scratch pad memory (SPM). Furthermore, we propose an MLC/SLC configuration optimization algorithm to enhance the efficiency of the hybrid DRAM + PCM memory. We also propose an energy-aware task scheduling algorithm for parallel computing in mobile systems powered by batteries.
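The online temperature prediction model and scheduling algorithms themselves are not reproduced in this abstract; the toy Python sketch below only illustrates the general shape of temperature-aware assignment, placing each task on the core whose predicted temperature after assignment is lowest under a made-up linear thermal proxy with a simple vertical-neighbor coupling term.

```python
# Toy illustration of temperature-aware task assignment on a multicore chip.
# The thermal estimate is a made-up linear proxy, not the dissertation's
# online 3D CMP temperature prediction model.
from dataclasses import dataclass

@dataclass
class Core:
    cid: int
    ambient: float = 45.0      # assumed idle temperature (degrees C)
    load: float = 0.0          # accumulated work units
    neighbors: tuple = ()      # vertically stacked neighbors heat each other

def predicted_temp(core, cores, extra=0.0):
    own = core.ambient + 2.0 * (core.load + extra)           # arbitrary coefficients
    coupling = sum(0.5 * cores[n].load for n in core.neighbors)
    return own + coupling

def schedule(tasks, cores):
    """Greedy: place each task on the core with the lowest predicted peak temp."""
    plan = []
    for name, work in sorted(tasks, key=lambda t: -t[1]):     # longest task first
        target = min(cores, key=lambda c: predicted_temp(c, cores, extra=work))
        target.load += work
        plan.append((name, target.cid))
    return plan

# Two stacked layers of two cores each; core i is a thermal neighbor of core i+2.
cores = [Core(0, neighbors=(2,)), Core(1, neighbors=(3,)),
         Core(2, neighbors=(0,)), Core(3, neighbors=(1,))]
tasks = [("fft", 4.0), ("encode", 3.0), ("filter", 2.0), ("log", 1.0)]
print(schedule(tasks, cores))
print([round(predicted_temp(c, cores), 1) for c in cores])
```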
When scheduling tasks in embedded systems, we make scheduling decisions based on information such as the estimated execution time of tasks. We therefore design an evaluation method for the impact of inaccurate information on resource allocation in embedded systems. Finally, in order to move workload from embedded systems to a remote cloud computing facility, we present a resource optimization mechanism for heterogeneous federated multi-cloud systems. We also propose two online dynamic algorithms for resource allocation and task scheduling, taking resource contention into account during scheduling.
|
649 |
Validation of scattering microwave radiative transfer models using an aircraft radiometer and ground-based radar. Jones, David C., January 1995.
No description available.
|
650 |
IDSAAS: INTRUSION DETECTION SYSTEM AS A SERVICE IN PUBLIC CLOUDS. Alharkan, Turki, 11 January 2013.
In a public cloud computing environment, consumers cannot always depend solely on the cloud provider’s security infrastructure. They may need to monitor and protect their virtual presence by implementing their own intrusion detection capabilities along with other security technologies within the cloud fabric. Cloud consumers may also want to collect network traffic and log it for further analysis. This can help them write tailor-made attack scenarios designed around the nature of the application they want to protect. Furthermore, consumers’ applications can be distributed across different regions of the cloud or in non-cloud locations. The need to protect all these assets from a centralized location is fundamental to many cloud consumers.
We provide a framework and implementation for an intrusion detection system that is suitable for the public cloud environment. Intrusion Detection as a Service (IDSaaS) targets security at the infrastructure level of a public cloud (IaaS) by providing intrusion detection technology that is highly elastic, portable and fully controlled by the cloud consumer. These features allow cloud consumers to protect their cloud-based applications from security threats and unauthorized intruders. We developed a proof-of-concept prototype on the Amazon EC2 cloud and performed different experiments to evaluate its performance. After examining the experimental results, we found that IDSaaS can provide the required protection in a reasonable and effective manner. / Thesis (Master, Computing) -- Queen's University, 2013.
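The EC2-based prototype is not included in this abstract; the sketch below is only a minimal, hypothetical example of the kind of signature-based check an IDSaaS sensor could apply to captured traffic or logs, in the spirit of the consumer-written, tailor-made rules mentioned above. The rules and log lines are invented, not the thesis prototype's rule set.

```python
# Minimal signature-matching sketch: scan captured request/log lines against a
# small, invented rule set and emit alerts for any matches.
import re

RULES = [
    ("sql-injection", re.compile(r"(union\s+select|or\s+1=1)", re.IGNORECASE)),
    ("path-traversal", re.compile(r"\.\./\.\./")),
    ("ssh-bruteforce", re.compile(r"Failed password for .* from (\S+)")),
]

def inspect(log_lines):
    alerts = []
    for line in log_lines:
        for name, pattern in RULES:
            if pattern.search(line):
                alerts.append((name, line.strip()))
    return alerts

sample_log = [
    "GET /item?id=1 UNION SELECT password FROM users",
    "GET /static/../../etc/passwd",
    "sshd[812]: Failed password for admin from 203.0.113.9",
    "GET /index.html",
]

for alert in inspect(sample_log):
    print("ALERT", alert)
```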
|