701 |
Green Cloud - Load Balancing, Load Consolidation using VM MigrationDo, Manh Duc 01 October 2017 (has links)
Cloud computing is a recent trend in computer technology with massive demand from clients. To meet this demand, many cloud data centers have been constructed since 2008, when Amazon launched its cloud service. Although cloud computing has improved in performance and energy efficiency, the rapidly growing data centers still consume a tremendous amount of energy. To raise their annual income, cloud providers have started considering green cloud concepts, which aim to optimize CPU usage while guaranteeing quality of service. Many cloud providers are therefore paying more attention to load balancing and load consolidation, two significant components of a cloud data center.
Load balancing is a vital part of managing incoming demand and improving the cloud system's performance, and live virtual machine migration is a technique for performing dynamic load balancing. To optimize the cloud data center, three issues are considered. First, how should the cloud cluster distribute virtual machine (VM) requests from clients across physical machines (PMs) when each machine has a different capacity? Second, how can the CPU usage of all PMs be kept nearly equal? Third, how should two extreme scenarios be handled: a PM whose CPU usage rises rapidly under a sudden massive workload and requires immediate VM migration, and resource expansion in response to a surge of VM requests to the cloud cluster? This chapter provides an approach to these issues, together with an implementation and results. The results indicate that the performance of the cloud cluster improved significantly.
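The three balancing concerns above can be illustrated with a minimal, hypothetical sketch (capacity-aware placement plus a migration trigger; the data layout, threshold, and function names are assumptions for illustration, not the thesis's implementation):

```python
def place_vm(pms, vm_load):
    """Assign an incoming VM to the PM with the lowest relative CPU
    utilization, respecting each PM's (heterogeneous) capacity.

    pms: dict of name -> {"capacity": float, "used": float}
    Returns the chosen PM name, or None if no PM can host the VM.
    """
    candidates = [n for n, p in pms.items()
                  if p["used"] + vm_load <= p["capacity"]]
    if not candidates:
        return None  # cluster is full: resource expansion is needed
    best = min(candidates, key=lambda n: pms[n]["used"] / pms[n]["capacity"])
    pms[best]["used"] += vm_load
    return best

def needs_migration(pm, threshold=0.9):
    """Flag a PM whose relative usage spikes past the threshold,
    triggering immediate live VM migration (threshold is assumed)."""
    return pm["used"] / pm["capacity"] > threshold
```

Picking the lowest *relative* utilization (rather than absolute load) is what keeps heterogeneous PMs near-equal in percentage terms, which is the second issue above.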
Load consolidation is the reverse process of load balancing: it aims to provide just enough cloud servers to handle the client requests. Thanks to live VM migration, a cloud data center can consolidate itself without interrupting the cloud service, and superfluous PMs are switched to sleep mode to reduce energy consumption. This chapter provides a load consolidation solution, including an implementation and a simulation of cloud servers.
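The consolidation idea (packing load onto as few PMs as possible so the rest can sleep) can be sketched with first-fit-decreasing bin packing; this is an illustration under an assumed uniform PM capacity, not the thesis's algorithm:

```python
def consolidate(vm_loads, pm_capacity):
    """First-fit-decreasing bin packing: pack VM loads onto as few
    equal-capacity PMs as possible. Returns the number of PMs that
    must stay active; the remainder are candidates for sleep mode."""
    pms = []  # remaining free capacity of each active PM
    for load in sorted(vm_loads, reverse=True):
        for i, free in enumerate(pms):
            if load <= free:
                pms[i] -= load  # migrate the VM onto an active PM
                break
        else:
            pms.append(pm_capacity - load)  # must keep another PM awake
    return len(pms)
```

In a live system the same packing decision would be realized through VM migration, which is what lets consolidation proceed without interrupting the service.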
|
702 |
HADOOP-EDF: LARGE-SCALE DISTRIBUTED PROCESSING OF ELECTROPHYSIOLOGICAL SIGNAL DATA IN HADOOP MAPREDUCEWu, Yuanyuan 01 January 2019 (has links)
A rapidly growing volume of electrophysiological signals is being generated for clinical research in neurological disorders. European Data Format (EDF) is a standard format for storing electrophysiological signals. However, existing signal analysis tools are bottlenecked on large-scale datasets by having to load large EDF files sequentially before performing an analysis. To overcome this, we develop Hadoop-EDF, a distributed signal processing tool that loads EDF data in parallel using Hadoop MapReduce. Hadoop-EDF uses a robust data partition algorithm that makes EDF data processable in parallel. We evaluate Hadoop-EDF's scalability and performance using two datasets from the National Sleep Research Resource and running experiments on Amazon Web Services clusters. On a 20-node cluster, Hadoop-EDF is 27 times and 47 times faster than sequential processing of 200 small files and 200 large files, respectively. The results demonstrate that Hadoop-EDF is well suited to processing large EDF files.
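The record-aligned partitioning idea can be sketched as follows. An EDF file consists of a header followed by fixed-size data records, so byte ranges that start and end on record boundaries can be processed independently by separate map tasks. The function below is a simplified illustration, not Hadoop-EDF's actual partitioner:

```python
def record_splits(header_bytes, record_bytes, n_records, target_split_bytes):
    """Partition an EDF file into byte ranges aligned on data-record
    boundaries, so each range is independently processable.
    Returns a list of (offset, length) pairs."""
    records_per_split = max(1, target_split_bytes // record_bytes)
    splits = []
    rec = 0
    while rec < n_records:
        count = min(records_per_split, n_records - rec)
        offset = header_bytes + rec * record_bytes
        splits.append((offset, count * record_bytes))
        rec += count
    return splits
```

Aligning splits on record boundaries is what removes the sequential-loading bottleneck: no map task ever needs bytes owned by another task's range.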
|
703 |
Cloud Security : Penetration Testing of Application in Micro-service architecture and Vulnerability Assessment.Kothawade, Prasad, Bhowmick, Partha Sarathi January 2019 (has links)
Software as a Service (SaaS) is a modern software delivery model that provides a dynamic platform for shipping features, communicating, and creating new functionality in a short amount of time. Cloud platforms provide an outstanding foundation for SaaS with their on-demand infrastructure and application services, and microservice architecture has become the natural architectural choice for cloud-hosted solutions. Microservice architecture is still maturing; it has only recently begun attracting industries that want to bring their products to market quickly by increasing automation throughout the product lifecycle [1]. The approach also introduces considerable new complexity, and a certain level of development maturity is needed to apply the architectural style confidently. The challenge we face is ensuring that the system stays safe and does not get hacked or leak data in this more complex and versatile cloud environment. Hence, we perform penetration testing on a newly developed application in a microservice architecture.
|
704 |
Quality of service in cloud computing: Data model; resource allocation; and data availability and securityAkintoye, Samson Busuyi January 2019 (has links)
Philosophiae Doctor - PhD / Recently, massive migration of enterprise applications to the cloud has been recorded in the Information Technology (IT) world. The number of cloud providers offering their services, and the number of cloud customers interested in using such services, is rapidly increasing. However, one of the challenges of cloud computing is Quality-of-Service management, which denotes the level of performance, reliability, and availability offered by cloud service providers. Quality-of-Service is fundamental to cloud service providers, who must find the right tradeoff between Quality-of-Service levels and operational cost. To find the optimal tradeoff, cloud service providers need to comply with service level agreement contracts, which define an agreement between cloud service providers and cloud customers. Service level agreements are expressed in terms of quality-of-service (QoS) parameters such as availability, scalability, performance, and service cost. On the other hand, if the cloud service provider violates the service level agreement contract, the cloud customer can file for damages and claim penalties, which can result in revenue losses and possibly damage to the provider's reputation. Thus, the goal of any cloud service provider is to meet the service level agreements while reducing the total cost of offering its services.
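The availability-versus-penalty tradeoff described above can be illustrated with a toy model; the linear penalty rate and its cap are assumptions for illustration, not a scheme from the thesis:

```python
def sla_penalty(measured_availability, agreed_availability, monthly_fee,
                penalty_rate=0.1):
    """Toy SLA penalty model (penalty_rate is an assumed parameter):
    credit the customer a fraction of the monthly fee per percentage
    point of availability shortfall, capped at the full fee."""
    shortfall = max(0.0, agreed_availability - measured_availability)
    return min(monthly_fee, monthly_fee * penalty_rate * shortfall * 100)
```

A provider choosing its QoS level would weigh penalties like this against the operational cost of the extra redundancy needed to meet the agreed availability.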
|
705 |
Achieving a Reusable Reference Architecture for Microservices in Cloud EnvironmentsLeo, Zacharias January 2019 (has links)
Microservices are a new trend in application development. They allow big monolithic applications to be broken down into smaller parts that can be updated and scaled independently. However, there are still many uncertainties about microservice standards, which can make creating or migrating system architectures costly and time-consuming. One of the more common ways of deploying microservices is through containers and a container orchestration platform, most commonly the open-source platform Kubernetes. To speed up creation or migration, it is possible to use a reference architecture that acts as a blueprint to follow when designing and implementing the architecture. Using a reference architecture leads to more standardized architectures, which in turn are more time- and cost-effective. This thesis proposes such a reference architecture for designing microservice architectures. The goal of the reference architecture is to provide a product that meets the needs and expectations of companies that already use microservices or might adopt them in the future. To achieve this goal, the work was divided into three main phases. First, a questionnaire was conducted and sent out to be answered by experts in microservices and system architectures. Second, literature studies were made on the state of the art and practice of reference architectures and microservice architectures. Third, the Kubernetes components described in the Kubernetes documentation were studied, evaluated, and chosen according to how well they reflected the needs of the companies. The thesis finally proposes a reference architecture with components chosen according to the needs and expectations of the companies identified through the questionnaire.
|
706 |
Vad är Cloud Computing? : En kvalitativ studie ur ett företagsperspektivNordlindh, Mattias, Suber, Kristoffer January 2010 (has links)
Cloud computing is a new buzzword within the IT industry and introduces a whole new way of working with IT. The technique delivers web-based services, so the user no longer needs to install an application locally on a computer. Since the application no longer runs on a local machine but in a data center operated by a service provider, users need no specific hardware beyond a computer with an internet connection. Cloud computing also offers IT infrastructure and development environments as services; these three service types are better known as cloud services. Through the use of different types of cloud services, the need for maintenance and hardware is significantly reduced. This reduces the IT competence a company needs and lets it focus on its core business strategy. A problem with cloud computing is that, because it is such a new phenomenon, there is no established definition. This makes the subject hard to understand and easily misunderstood.

Cloud computing certainly seems to solve many of the reliability problems with systems and hardware that companies struggle with on a daily basis, but is it really that simple? The purpose of this thesis is to understand which company preconditions affect the integration of cloud services. We also clarify the concept of cloud computing by dividing it into its components and describing each of them.

To investigate these company preconditions and their approach to cloud services, we performed interviews at different companies in association with our case study.

The result shows that a cloud service can only be integrated into an organization that possesses the right preconditions. We think that cloud services can bring great advantages to organizations that meet these preconditions, and that cloud services have the potential to ease the way of working for organizations in the future.
|
707 |
Quest for quiescent neutron star low mass X-ray binaries in the Small Magellanic CloudChowdhury, Md. Mizanul Huq 06 1900 (has links)
We present the first spectral search for neutron stars (NSs) in low-mass X-ray binaries (LMXBs) between outbursts in the Small Magellanic Cloud (SMC). We identify and discuss candidate LMXBs in quiescence in the SMC using deep Chandra X-ray observations of two portions of the SMC. We produce X-ray color-magnitude diagrams of the X-ray sources in these two fields and identify 10 candidates for quiescent NS LMXBs. Spectral fitting and searches for optical counterparts rule out five, leaving five candidate quiescent NS LMXBs. We estimate that we are sensitive to ~10% of quiescent NS LMXBs in our fields. Our fields include 4.4×10^7 M☉ of stellar mass, giving an upper limit of ~10^{-6} LMXBs per M☉ in the SMC. We place a lower limit on the average duty cycle of NS LMXBs of ~0.003.
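The order of magnitude of the quoted upper limit (about 10^-6 LMXBs per solar mass) follows from simple arithmetic: the candidate count divided by the surveyed stellar mass, corrected for the ~10% sensitivity. A quick check, treating the sensitivity as a plain completeness factor (a simplifying assumption):

```python
def lmxb_upper_limit(n_candidates, stellar_mass_msun, sensitivity):
    """Upper limit on quiescent NS LMXBs per solar mass: surviving
    candidates divided by (surveyed stellar mass x detection fraction)."""
    return n_candidates / (stellar_mass_msun * sensitivity)
```

With the abstract's numbers (5 candidates, 4.4×10^7 M☉, 10% sensitivity) this gives roughly 1.1×10^-6 per M☉.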
|
708 |
A Global Survey of Clouds by CloudSatRiley, Emily Marie 01 January 2009 (has links)
With the launch of CloudSat, direct observations of cloud vertical structure became possible on the global scale. This thesis utilizes over two years of CloudSat data to study large-scale variations of clouds. We compose a global data set of contiguous clouds (echo objects, EOs) and the individual pixels comprising each EO. For each EO many attributes are recorded. EOs are categorized according to cloud type, time of day, season, surface type, and region. From the categorization we first look at the gross global climatology of clouds. Maps of cloud cover are subdivided by EO (cloud) type, and results compare well with previous CloudSat work. The seasonality of cloud cover is also examined. Focus topics studied in this thesis include: (1) mid-level clouds, (2) stratocumulus clouds, and (3) clouds across the Madden-Julian Oscillation (MJO). The mid-level cloud work found an unexpected frequency peak in EO top heights between 7-8 km in the tropics, further shown to correspond to a global peak in EO top temperature between −15°C and −20°C. Hypotheses are discussed regarding the cause of this feature. Stratocumulus clouds are defined as low-level (tops < 4.5 km), wide (width > 11 km) EOs. Stratocumulus cloud cover agrees (with understandable differences) with other estimates (ISCCP and CALIPSO). The seasonal cycle of stratocumulus over the main stratocumulus decks is examined. The Peruvian and Namibian decks have increased cloud cover in austral spring in 2007 vs. 2006, corresponding sensibly to sea surface temperature differences and changes in lower static stability. Looking at rain and drizzle statistics, wider EOs are found to drizzle more. Clouds across the MJO are defined relative to temporally filtered OLR data. Cloud cover (volume) doubles (triples) from suppressed to active MJO phases, with some shifts of the relative contributions of different EO types from the front to back of the MJO. Pixel statistics in dBZ-height space correspond to these cloud-type shifts. High anvils and low clouds in front lead deep convection, followed by relatively lower anvils in the back.
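The EO categorization can be illustrated with the two explicit thresholds given above (tops below 4.5 km and widths above 11 km for stratocumulus); the other labels and the 8 km boundary in this toy classifier are placeholders, not the thesis's full scheme:

```python
def classify_eo(top_km, width_km):
    """Toy echo-object (EO) classifier. Only the stratocumulus rule
    (top < 4.5 km, width > 11 km) comes from the abstract; the other
    labels and the 8 km boundary are illustrative assumptions."""
    if top_km < 4.5:
        return "stratocumulus" if width_km > 11.0 else "other-low"
    if top_km < 8.0:
        return "mid-level"  # the abstract notes a tropical peak at 7-8 km
    return "high"
```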
|
709 |
Flexible Computing with Virtual MachinesLagar Cavilla, Horacio Andres 30 March 2011 (has links)
This thesis is predicated upon a vision of the future of computing with a separation of functionality between core and edges, very
similar to that governing the Internet itself. In this vision, the core of our computing infrastructure is made up of vast server farms with an abundance of storage and processing cycles. Centralization of
computation in these farms, coupled with high-speed wired or wireless connectivity, allows for pervasive access to a highly-available and well-maintained repository for data, configurations, and applications. Computation in the edges is concerned with provisioning application state and user data to rich clients, notably mobile devices equipped with powerful displays and graphics processors.
We define flexible computing as systems support for applications that dynamically leverage the resources available in the core
infrastructure, or cloud. The work in this thesis focuses on two instances of flexible computing that are crucial to the
realization of the aforementioned vision. Location flexibility aims to, transparently and seamlessly, migrate applications between
the edges and the core based on user demand. This enables performing the interactive tasks on rich edge clients and the computational tasks on powerful core servers. Scale flexibility is the ability of
applications executing in cloud environments, such as parallel jobs or
clustered servers, to swiftly grow and shrink their footprint according to execution demands.
This thesis shows how we can use system virtualization to implement systems that provide scale and location flexibility. To that effect we build and evaluate two system prototypes: Snowbird and SnowFlock. We present techniques for manipulating virtual machine state that turn running software into a malleable entity which is easily manageable, is decoupled from the underlying hardware, and is capable of dynamic relocation and scaling. This thesis demonstrates that virtualization technology is a powerful and suitable tool to
enable solutions for location and scale flexibility.
|
710 |
Replication, Security, and Integrity of Outsourced Data in Cloud Computing SystemsBarsoum, Ayad Fekry 14 February 2013 (has links)
In the current digital era, the amount of sensitive data produced by many organizations is outpacing their storage ability. Managing such a huge amount of data is quite expensive due to the requirements of high storage capacity and qualified personnel. Storage-as-a-Service (SaaS) offered by cloud service providers (CSPs) is a paid facility that enables organizations to outsource their data to be stored on remote servers. Thus, SaaS reduces the maintenance cost and mitigates the burden of large local data storage at the organization's end.
For an increased level of scalability, availability and durability, some customers may want their data to be replicated on multiple servers across multiple data centers. The more copies the CSP is asked to store, the more fees the customers are charged. Therefore, customers need to have a strong guarantee that the CSP is storing all data copies that are agreed upon in the service contract, and these copies remain intact.
In this thesis we address the problem of creating multiple copies of a data file and verifying those copies stored on untrusted cloud servers. We propose a pairing-based provable multi-copy data possession (PB-PMDP) scheme, which provides an evidence that all outsourced copies are actually stored and remain intact. Moreover, it allows authorized users (i.e., those who have the right to access the owner's file) to seamlessly access the file copies stored by the CSP, and supports public verifiability.
We then direct our study to the dynamic behavior of outsourced data, where the data owner is capable of not only archiving and accessing the data copies stored by the CSP, but also updating and scaling (using block operations: modification, insertion, deletion, and append) these copies on the remote servers. We propose a new map-based provable multi-copy dynamic data possession (MB-PMDDP) scheme that verifies the intactness and consistency of outsourced dynamic multiple data copies. To the best of our knowledge, the proposed scheme is the first to verify the integrity of multiple copies of dynamic data over untrusted cloud servers.
As a complementary line of research, we consider protecting the CSP from a dishonest owner, who attempts to get illegal compensations by falsely claiming data corruption over cloud servers. We propose a new cloud-based storage scheme that allows the data owner to benefit from the facilities offered by the CSP and enables mutual trust between them. In addition, the proposed scheme ensures that authorized users receive the latest version of the outsourced data, and enables the owner to grant or revoke access to the data stored by cloud servers.
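The core challenge-response idea behind provable data possession can be sketched in a simplified hash-based form. The actual PB-PMDP scheme uses bilinear pairings and supports public verifiability without the verifier retaining the file; the sketch below captures only the intuition of distinguishable copies and nonce-based challenges, and all function names are illustrative:

```python
import hashlib

def make_copy(data, copy_id):
    """Derive a distinguishable copy by prepending a copy identifier,
    so the CSP cannot use one stored copy to answer challenges for all."""
    return copy_id.to_bytes(4, "big") + data

def challenge_response(stored_copy, nonce):
    """CSP's proof: hash of the stored copy bound to a fresh nonce,
    so precomputed answers cannot be replayed."""
    return hashlib.sha256(nonce + stored_copy).hexdigest()

def verify(data, copy_id, nonce, proof):
    """Verifier recomputes the expected proof for copy `copy_id`."""
    expected = hashlib.sha256(nonce + make_copy(data, copy_id)).hexdigest()
    return proof == expected
```

In this toy version the verifier must still hold the original data; avoiding that (and allowing any authorized third party to verify) is precisely what the pairing-based homomorphic tags in PB-PMDP provide.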
|