631 |
Energy-aware adaptation in Cloud datacenters
Mahadevamangalam, Srivasthav January 2018 (has links)
Context: Cloud computing provides services and resources to customers on a pay-per-use basis. As demand for these services grows, Cloud providers operate thousands of data centers, which consume large amounts of energy; the power needed to cool them is particularly high. Recent research therefore focuses on models that reduce the energy consumed by data centers. One such approach is dynamic Virtual Machine Consolidation (VM Consolidation), in which VMs are migrated from one host to another so that energy can be saved: about 70% of a host's energy consumption is saved when an idle host is switched to sleep mode, and migrating its VMs elsewhere makes this possible. Many energy-adaptive heuristic algorithms exist for VM Consolidation. Its constituent heuristics, host overload detection, host underload detection, VM selection and VM placement, reduce the energy consumed by data centers while meeting Quality of Service (QoS) requirements. In this thesis, we propose new heuristic algorithms to reduce energy consumption.
Objectives: The objective of this research is to provide an energy-efficient model that reduces energy consumption by proposing new heuristic algorithms for the VM Consolidation technique. Presenting the advantages and disadvantages of the proposed heuristics is also an objective of our experiment.
Methods: A literature review was performed to gain knowledge about the workings and performance of existing VM Consolidation algorithms. We then proposed new host overload detection, host underload detection, VM selection and VM placement heuristics: 32 combinations of host overload detection and VM selection algorithms, and two VM placement algorithms. We also proposed a dynamic host underload detection algorithm that is used with all 32 combinations. The other research method chosen is experimentation: the performance of the proposed and existing algorithms is analysed in CloudSim simulations driven by PlanetLab workload traces.
Results: The following metrics were considered for the comparison: energy consumption, number of migrations, Performance Degradation due to VM Migrations (PDM), Service Level Agreement violation Time per Active Host (SLATAH), SLA Violation (SLAV), which combines PDM and SLATAH, and the combined Energy consumption and SLA Violation metric (ESV). T-tests and Cohen's d effect size were used to measure the significance of differences and the effect size between algorithms, and the results of the proposed algorithms were compared with the existing algorithm. Among the 32 combinations of host overload detection and VM selection heuristics, MADmedian_MaxR (Mean Absolute Deviation around the median with Maximum Requested RAM selection) using the Modified Worst Fit Decreasing (MWFD) VM placement algorithm, and MADmean_MaxR (Mean Absolute Deviation around the mean with Maximum Requested RAM selection) using the Modified Second Worst Fit Decreasing (MSWFD) VM placement algorithm, gave the best results, consuming the least energy with minimal SLA violation.
Conclusion: The comparisons show that the proposed algorithms perform better than the existing algorithm. Our aim was to propose a better energy-efficient model using VM Consolidation techniques that minimises power consumption while meeting the SLAs. We therefore proposed energy-efficient algorithms for the VM Consolidation technique, compared them with the existing algorithm, and showed that they perform better. We proposed 32 combinations of heuristic algorithms (host overload detection and VM selection) with two adaptive heuristic VM placement algorithms, together with a dynamic host underload detection algorithm used in all 32 combinations. Compared with the existing algorithm, 22 combinations of host overload detection and VM selection heuristics with MWFD (Modified Worst Fit Decreasing) placement and 20 combinations with MSWFD (Modified Second Worst Fit Decreasing) placement performed better. Thus, our proposed heuristic algorithms give better results, with minimum energy consumption and less SLA violation.
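To make the best-performing heuristic family above concrete, the following is a minimal, illustrative Java sketch of a MAD-based host overload check and MaxR VM selection. It follows the adaptive-threshold style used in earlier CloudSim-based consolidation work; the class, the method names and the safety parameter are assumptions for illustration and are neither the thesis code nor the CloudSim API.

    import java.util.Arrays;

    // Illustrative sketch: MAD-based overload detection and MaxR VM selection (names are hypothetical).
    class ConsolidationSketch {

        // Median of a CPU utilization history.
        static double median(double[] values) {
            double[] sorted = values.clone();
            Arrays.sort(sorted);
            int n = sorted.length;
            return (n % 2 == 1) ? sorted[n / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
        }

        // Mean absolute deviation around the median (the abstract's MADmedian).
        static double madAroundMedian(double[] history) {
            double med = median(history);
            double sum = 0.0;
            for (double u : history) sum += Math.abs(u - med);
            return sum / history.length;
        }

        // Assumed adaptive threshold: the host counts as overloaded when utilization exceeds 1 - s * MADmedian.
        static boolean isHostOverloaded(double[] history, double currentUtilization, double s) {
            return currentUtilization > 1.0 - s * madAroundMedian(history);
        }

        // MaxR selection: from an overloaded host, pick the VM with the largest requested RAM for migration.
        static int selectVmWithMaxRequestedRam(int[] requestedRamMb) {
            int best = 0;
            for (int vm = 1; vm < requestedRamMb.length; vm++) {
                if (requestedRamMb[vm] > requestedRamMb[best]) best = vm;
            }
            return best;
        }

        public static void main(String[] args) {
            double[] history = {0.62, 0.71, 0.68, 0.90, 0.74};        // recent CPU utilization samples
            System.out.println(isHostOverloaded(history, 0.93, 2.5)); // the safety factor 2.5 is illustrative
            System.out.println(selectVmWithMaxRequestedRam(new int[]{512, 2048, 1024})); // prints 1
        }
    }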
|
632 |
A comparison of energy efficient adaptation algorithms in cloud data centers
Penumetsa, Swetha January 2018 (has links)
Context: In recent years, Cloud computing has gained wide attention in both industry and academia, as Cloud services offer a pay-per-use model and the need for reliable computing keeps growing together with the immense growth of Cloud-based companies and the continuous expansion of their scale. However, the rise in the number of Cloud users has a negative impact on energy consumption, since Cloud data centers consume huge amounts of energy. To minimise energy consumption in virtualised data centers, researchers have proposed various energy-efficient resource-management strategies. Dynamic Virtual Machine Consolidation is a prominent technique and an active research area; it improves resource utilisation and reduces the electric power consumption of a data center. The technique monitors data-center utilisation, identifies overloaded and underloaded hosts, migrates some or all of their Virtual Machines (VMs) to other suitable hosts using VM selection and VM placement, and switches underloaded hosts to sleep mode.
Objectives: The objective of this study is to define and implement new energy-aware heuristic algorithms that save energy in Cloud data centers, identify the best-performing algorithm, and compare the performance of the proposed heuristics with existing ones.
Methods: Initially, a literature review was conducted to obtain knowledge about the adaptive heuristics previously proposed for energy-aware VM Consolidation and to find metrics for measuring their performance. Based on this knowledge, we propose 32 combinations of novel adaptive heuristics for host overload detection (8) and VM selection (4), one host underload detection algorithm, and two adaptive heuristic VM placement algorithms, which together minimise both the energy consumption and the overall Service Level Agreement (SLA) violation of a Cloud data center. An experiment was then conducted to measure the performance of all proposed heuristics. The CloudSim simulation toolkit was used for modelling, simulation and implementation, and the algorithms were evaluated using real PlanetLab VM workload traces.
Results: The results were measured using the following metrics: energy consumption of the data center (power model), Performance Degradation due to Migration (PDM), Service Level Agreement violation Time per Active Host (SLATAH), Service Level Agreement Violation (SLAV = PDM × SLATAH), and the combined Energy consumption and SLA Violation metric (ESV). For each of the four categories of VM Consolidation, we compared the performance of the proposed heuristics with each other and identified the best proposed algorithm in each category. We also compared the proposed heuristics with existing heuristics identified in the literature and report how many of the newly proposed algorithms work more efficiently than the existing ones. This comparative analysis was done using T-tests and Cohen's d effect size.
From the comparison of all proposed algorithms, we conclude that the Mean Absolute Deviation around the median (MADmedian) host overload detection algorithm combined with Maximum requested RAM VM selection (MaxR) and Modified First Fit Decreasing VM placement (MFFD), and the Standard Deviation (STD) host overload detection algorithm combined with MaxR VM selection and Modified Last Fit Decreasing VM placement (MLFD), performed better than the other 31 combinations of proposed overload detection and VM selection heuristics with regard to Energy consumption and Service Level Agreement Violation (ESV). Furthermore, in the comparative study between existing and proposed algorithms, 23 and 21 combinations of proposed host overload detection and VM selection algorithms, using the MFFD and MLFD placements respectively, performed more efficiently than the existing (baseline) heuristics considered in this study.
Conclusions: This thesis presents novel heuristic algorithms that minimise both energy consumption and SLA violation in virtualised data centers. It presents 23 new combinations of proposed host overload detection and VM selection algorithms using MFFD placement and 21 combinations using MLFD placement that consume less energy with less SLA violation than the existing algorithms. The work gives future researchers scope for improving resource utilisation and minimising the electric power consumption of a data center, and it can be extended by implementing the work on other Cloud software platforms and by developing even more efficient algorithms for all four categories of VM consolidation.
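For reference, the two abstracts above only state that SLAV combines PDM and SLATAH (SLAV = PDM × SLATAH) and that ESV combines energy consumption with SLAV. In the CloudSim-based consolidation literature these theses build on, the terms are commonly defined as follows; this is a reconstruction and is not taken from the abstracts themselves:

    \mathrm{SLATAH} = \frac{1}{N}\sum_{i=1}^{N}\frac{T_{s_i}}{T_{a_i}}, \qquad
    \mathrm{PDM} = \frac{1}{M}\sum_{j=1}^{M}\frac{C_{d_j}}{C_{r_j}}, \qquad
    \mathrm{SLAV} = \mathrm{SLATAH}\cdot\mathrm{PDM}, \qquad
    \mathrm{ESV} = E\cdot\mathrm{SLAV}

where N is the number of hosts, T_{s_i} is the time during which host i experienced 100% CPU utilization, T_{a_i} is the total time host i was active, M is the number of VMs, C_{d_j} is the CPU capacity VM j lost owing to migrations, C_{r_j} is the total CPU capacity requested by VM j, and E is the total energy consumed by the data center.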
|
633 |
Evaluation of Internet of Things Communication Protocols Adapted for Secure Transmission in Fog Computing Environments
Wiss, Thomas January 2018 (has links)
A current challenge in the Internet of Things is the search for conceptual structures to connect the presumably billions of devices of innumerable forms and capabilities. An emerging architectural concept, fog computing, moves the seemingly unlimited computational power of the distant cloud to the edge of the network, closer to the potentially computationally limited things, effectively reducing the experienced latency. To allow computationally constrained devices to take part in the network, they have to be relieved of the burden of constant availability and extensive computation. Establishing a publish/subscribe communication pattern on top of the popular Internet of Things application-layer protocol, the Constrained Application Protocol (CoAP), is one approach to overcoming this issue. In this project, a Java-based library that establishes a publish/subscribe communication pattern for the Constrained Application Protocol was developed. Furthermore, prototypes of several publish/subscribe application-layer protocols were built and assessed, running over plain as well as secured versions of standard and non-standard transport-layer protocols, in order to exercise, evaluate and compare the developed library. The results indicate that the standard protocol stacks are solid candidates, yet one non-standard protocol stack is considered the prime candidate, as it maintains a low response time while not adding a significant amount of communication overhead.
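The abstract does not show the developed library's API. As a hedged illustration of the publish/subscribe interaction it describes, built on CoAP's observe mechanism, the following Java sketch uses the Eclipse Californium CoAP client instead of the thesis library; the broker URI and the topic path are assumptions.

    import org.eclipse.californium.core.CoapClient;
    import org.eclipse.californium.core.CoapHandler;
    import org.eclipse.californium.core.CoapResponse;
    import org.eclipse.californium.core.coap.MediaTypeRegistry;

    // Illustrative CoAP publish/subscribe round trip via the observe mechanism (not the thesis library).
    public class CoapPubSubSketch {
        public static void main(String[] args) {
            // Subscriber: register an observe relation on a topic resource hosted by a broker.
            CoapClient subscriber = new CoapClient("coap://broker.example.org:5683/ps/temperature");
            subscriber.observe(new CoapHandler() {
                @Override public void onLoad(CoapResponse response) {
                    System.out.println("Received publication: " + response.getResponseText());
                }
                @Override public void onError() {
                    System.err.println("Observe relation failed");
                }
            });

            // Publisher: update the topic representation; the broker notifies all observers.
            CoapClient publisher = new CoapClient("coap://broker.example.org:5683/ps/temperature");
            publisher.put("22.5", MediaTypeRegistry.TEXT_PLAIN);
        }
    }

Running the same pattern over a DTLS-secured transport (coaps://) corresponds to the secured protocol stacks evaluated in the project.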
|
634 |
Protection of personal information in the South African cloud computing environment: a framework for cloud computing adoption
Skolmen, Dayne Edward January 2016 (has links)
Cloud Computing has advanced to the point where it may be considered an attractive proposition for an increasing number of South African organisations, yet the adoption of Cloud Computing in South Africa remains relatively low. Many organisations have been hesitant to adopt Cloud solutions owing to a variety of inhibiting factors and concerns that have created mistrust in Cloud Computing. One of the top concerns identified is security within the Cloud Computing environment. The approaching commencement of new data protection legislation in South Africa, known as the Protection of Personal Information Act (POPI), may provide an ideal opportunity to address the information security-related inhibiting factors and foster a trust relationship between potential Cloud users and Cloud providers. POPI applies to anyone who processes personal information and regulates how they must handle, store and secure that information. POPI is considered to be beneficial to Cloud providers as it gives them the opportunity to build trust with potential Cloud users through achieving compliance and providing assurance. The aim of this dissertation is, therefore, to develop a framework for Cloud Computing adoption that will assist in mitigating the information security-related factors inhibiting Cloud adoption by fostering a trust relationship through compliance with the POPI Act. It is believed that such a framework would be useful to South African Cloud providers and could ultimately assist in the promotion of Cloud adoption in South Africa.
|
635 |
Optimalizace datového úložiště / Optimization of data storage
Aulehlová, Barbora January 2015 (has links)
The theoretical part of this thesis introduces cloud computing, virtualization technology, data storage problems and Petri nets. The practical part follows: its first section discusses the individual virtualization technologies VMware ESX, Citrix Xen, Hyper-V, KVM and oVirt, and then summarises the characteristics of each technology; the output of this summary is a decision tree intended to help select the appropriate virtualization technology. The second section is dedicated to the implementation of an application used for storing data in a database depending on the sensitivity of the data. The thesis also includes a final economic evaluation and discussion.
|
636 |
Cloud computing v sektoru malých a středních podniků / Cloud computing in the sector of small and medium-sized enterprises
Havlíček, Tomáš January 2016 (has links)
The main objective of my master thesis is to implement a cloud computing solution in the small and medium-sized enterprise sector, as required by the customer company. The subsidiary targets of the literature part are a definition of small and medium-sized enterprises, an explanation of the terms related to cloud computing, an explanation of the concept of cloud computing itself and its deployment models, an analysis of cloud services, and a discussion of what the transition to a cloud environment involves for companies, including the migration contract.
The first part summarises information on cloud computing. The second part presents a specific implementation of a cloud computing solution in a corporate environment, based on empirical research and the requirements of the company.
|
637 |
Cloud Computing pro datová úložiště s využitím mobilních zařízení / Cloud Computing for data storage using mobile devices
Leština, Petr January 2016 (has links)
This thesis deals with the use of Cloud Computing for data storage on mobile devices. It covers both theoretical and practical aspects of the problem. The first section describes the theoretical principles of Cloud Computing.
The practical part compares selected cloud-based applications that are available for non-commercial use and compatible with the Android operating system. The tests are executed on a specific mobile device. Finally, recommendations for users are given.
|
638 |
Testování stability a bezpečnosti počítačové sítě v oblasti cloud computingu / Testing the stability and security of computer networks in cloud computing
Efimov, Igor Unknown Date (has links)
The purpose of this dissertation is the creation and automation of my own techniques for testing cloud security and stability.
The basis of the project is to explore and outline how well a virtual environment is protected against different types of security and stability network attacks.
The theoretical part of the paper presents the relevant concepts together with their definitions and references to research literature, scientific publications and online sources that best describe the testing methodology of cloud computing, focused on security and virtualization.
The practical part of the thesis demonstrates an understanding of the theories and concepts that have been studied. Based on the obtained research results, various automated tests have been developed that are run in a virtual environment in order to test its stability.
|
639 |
GeDaNIC: um framework para gerenciamento de banco de dados em nuvem baseado nas interações entre consultas / GeDaNIC: a framework for cloud database management based on interactions between queries
Siqueira Junior, Manoel Mariano January 2012 (has links)
SIQUEIRA JUNIOR, Manoel Mariano. GeDaNIC: um framework para gerenciamento de banco de dados em nuvem baseado nas interações entre consultas. Master's dissertation (Computer Science), Universidade Federal do Ceará, Fortaleza-CE, 2012. 95 f.
Cloud computing is a recent technology trend whose goal is to provide Information Technology (IT) services on demand, with payment based on use. One of the main services offered by a cloud computing platform is the data management service, or simply data service. This service takes responsibility for the installation, configuration and maintenance of database systems, as well as for efficient access to the stored data. This work presents a framework, called GeDaNIC, for managing cloud database systems. The proposed framework aims to provide the software infrastructure required to offer data services in cloud computing environments. In this sense, the proposed mechanism addresses some problems that are still open in the context of database systems in the cloud, such as query dispatching, query scheduling and resource provisioning. The approach extends previous work by adding important features: support for unforeseen workloads and the use of information about the interactions between queries. The support for seasonal workloads is related to one of the main properties of cloud computing: rapid elasticity. Interactions between queries, in turn, can have a significant impact on the performance of database systems. For this reason, GeDaNIC uses information about these interactions to reduce the execution time of the workloads submitted to the data service and thereby increase the profit of the service provider. To this end, three new approaches for modelling and measuring the interactions between query instances and query types are proposed. To demonstrate the efficiency of the proposed framework, an experimental evaluation using the TPC-H benchmark on PostgreSQL was performed. The results show that the proposed solution has the potential to increase the profit of the cloud data service provider.
|
640 |
Um serviço para flexibilização da tarifação em nuvens de infraestrutura / A service for flexible charging in infrastructure clouds
Viana, Nayane Ponte January 2013 (has links)
VIANA, Nayane Ponte. Um serviço para flexibilização da tarifação em nuvens de infraestrutura. Master's dissertation (Computer Science), Universidade Federal do Ceará, Fortaleza-CE, 2013. 106 f.
Cloud computing emerged in 2006 with the idea of turning computing into a utility service. This new paradigm is based on several technologies that have matured over time, such as distributed systems, grid computing and virtualization. In addition, cloud computing has its own characteristic features, such as customisation, elasticity and payment per use of the service. Being a new paradigm, the cloud still has many open questions that need to mature, such as security, availability and charging. Charging is one of the main characteristics of cloud computing: the customer pays for what is used, a model known as pay-per-use, which requires monitoring resource usage in order to charge according to actual utilisation. Cloud providers offer different types of services to their users, the main ones being (i) Software as a Service, (ii) Platform as a Service and (iii) Infrastructure as a Service (hardware). In the case of services offered by infrastructure clouds, it is necessary to measure the use of hardware resources in the cloud. However, many academic studies show that the charging models of cloud providers do not take into account requirements that are important for calculating the customer's invoice. Based on this, this work aims to improve the flexibility of charging in infrastructure clouds. To this end, a study of academia and industry was carried out to collect and classify the requirements for flexible charging of services in cloud computing. From this, a charging architecture and service, aCCountS (a Cloud aCCounting Service), and a domain-specific language (DSL) for defining pricing policies in clouds, called aCCountS-DSL, are proposed. The work defines (i) the proposed charging language and the requirements it supports, (ii) the service architecture and a description of its fundamental parts, and (iii) the proposed charging service and how it was developed. Finally, it describes the experimental evaluation performed to test the correctness of the proposed service and language: real tests were carried out by deploying the service in commercial clouds in order to test the features of aCCountS and aCCountS-DSL.
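The aCCountS-DSL syntax itself is not reproduced in the abstract, so the following Java sketch only illustrates the kind of pay-per-use invoice calculation that such a charging policy must express; the metered resources, the rates and the discount rule are hypothetical.

    import java.util.Map;

    // Hypothetical pay-per-use invoice calculation; all rates, units and the discount rule are illustrative.
    public class ChargingSketch {

        // Charge = sum over metered resources of (usage * rate), followed by one example policy rule.
        static double invoice(Map<String, Double> usage, Map<String, Double> ratePerUnit) {
            double total = 0.0;
            for (Map.Entry<String, Double> entry : usage.entrySet()) {
                total += entry.getValue() * ratePerUnit.getOrDefault(entry.getKey(), 0.0);
            }
            // Example of a rule a flexible charging policy could add: 10% discount once the bill exceeds 1000.
            return total > 1000.0 ? total * 0.9 : total;
        }

        public static void main(String[] args) {
            Map<String, Double> usage = Map.of("vcpu_hours", 720.0, "gb_ram_hours", 2880.0, "gb_storage_month", 100.0);
            Map<String, Double> rates = Map.of("vcpu_hours", 0.04, "gb_ram_hours", 0.01, "gb_storage_month", 0.10);
            System.out.printf("Invoice: %.2f%n", invoice(usage, rates));
        }
    }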
|