541

Computation offloading for algorithms in absence of the Cloud

Sthapit, Saurav January 2018 (has links)
Mobile cloud computing is a way of delegating complex algorithms from a mobile device to the cloud, to complete tasks quickly and save energy on the mobile device. However, the cloud may not always be available or suitable for helping. For example, in a battlefield scenario the cloud may not be reachable. This work considers neighbouring devices as alternatives to the cloud for offloading computation and presents three key contributions: a comprehensive investigation of the trade-off between computation and communication, a multi-objective-optimisation-based approach to offloading, and queuing-theory-based algorithms that demonstrate the benefits of offloading to neighbours. Initially, the states of neighbouring devices are assumed to be known, and the computation-offloading decision is posed as a multi-objective optimisation problem, for which novel Pareto-optimal solutions are proposed. Results on a simulated dataset show up to a 30% improvement in performance even when cloud computing is not available. However, information about the environment is seldom known completely. In Chapter 5, a more realistic environment is considered, with delayed node-state information and partially connected sensors. The network of sensors is modelled as a network of queues (an open Jackson network), and the offloading problem is posed as a minimum-cost problem and solved using linear solvers. In addition to the simulated dataset, the proposed solution is tested on a real computer-vision dataset. Experiments on the random-waypoint dataset showed up to a 33% performance boost, whereas on the real dataset, by exploiting the temporal and spatial distribution of the targets, a significantly larger improvement is achieved.
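The minimum-cost formulation described above lends itself to a compact sketch. The following is only an illustration of the idea, not the thesis's implementation: task classes are assigned to a local device, a neighbour, and the cloud by linear programming, and every cost, arrival rate, and capacity figure is invented for the example.

```python
# Illustrative sketch (not from the thesis): offloading as a minimum-cost
# assignment, in the spirit of the linear-solver formulation described above.
import numpy as np
from scipy.optimize import linprog

# Three processing options per task class: local device, a neighbour, the cloud.
# cost[i][j]: combined time/energy cost of running task class i on node j.
cost = np.array([
    [4.0, 2.5, 1.0],   # vision task: cheap on the cloud, dear locally
    [1.0, 1.5, 3.0],   # small task: network overhead dominates remotely
])
arrivals = np.array([10.0, 20.0])        # tasks/s per class (demand)
capacity = np.array([12.0, 15.0, 8.0])   # tasks/s per node (cloud is far away)

n_cls, n_nodes = cost.shape
c = cost.flatten()  # decision vars x[i,j] = rate of class i sent to node j

# Each class's arrival rate must be fully assigned (equality constraints).
A_eq = np.zeros((n_cls, n_cls * n_nodes))
for i in range(n_cls):
    A_eq[i, i * n_nodes:(i + 1) * n_nodes] = 1.0

# No node may be asked to serve more than its capacity (inequalities).
A_ub = np.zeros((n_nodes, n_cls * n_nodes))
for j in range(n_nodes):
    A_ub[j, j::n_nodes] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=arrivals,
              bounds=(0, None))
print(res.x.reshape(n_cls, n_nodes))  # optimal offloading rates per class/node
```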
542

Heuristic approaches for network problems

Stefanello, Fernando January 2015 (has links)
In our highly connected world, new technologies provide continuous changes in the speed and efficiency of telecommunication and transportation networks. Many of these technologies come from research on network optimization problems with applications in different areas. In this thesis, we investigate three combinatorial optimization problems that arise from optimization on networks. First, traffic-engineering problems in transportation networks are addressed. The main objective is to investigate the effects of changing the cost of a subset of links in the network, assuming the network's users follow a well-defined behavior. The goal is to control the flow so as to obtain a better distribution over the network, minimizing traffic congestion or maximizing the flow on a subset of links. The first problem considered is to install a fixed number of tollbooths and define the tariff values so as to minimize the average user travel time. The second is to define the tariff values so as to maximize the revenue collected on the tolled arcs. In both problems, users choose routes based on the least-cost paths from source to destination. From telecommunication networks, a placement problem subject to network conditions is considered. The main objective is to place a set of resources while minimizing the communication cost. An application from cloud computing is considered, where the resources are virtual machines that must be placed in a set of data centers; network conditions such as bandwidth and latency are taken into account to ensure service quality. For all these problems, mathematical models are presented and evaluated using a general-purpose commercial solver as an exact method. Furthermore, new heuristic approaches are proposed, including some based on the biased random-key genetic algorithm (BRKGA). Experimental results demonstrate the good performance of the proposed heuristic approaches, showing that BRKGA is an efficient tool for solving different kinds of combinatorial optimization problems, especially over network structures.
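To give a flavour of the BRKGA approach mentioned above, here is a minimal sketch, not the thesis's code: random-key vectors evolve under elite-biased crossover, and the toy decoder (ordering items by key to score a tour on a random symmetric distance matrix) and all parameter values are assumptions for illustration.

```python
# Minimal BRKGA sketch (illustrative, not the thesis implementation).
# A decoder maps a vector of random keys in [0,1) to a solution; here the
# example decoder orders items by key, a common trick for permutation problems.
import numpy as np

rng = np.random.default_rng(0)

def decode_cost(keys, dist):
    """Example decoder: keys -> tour order -> tour length (toy TSP)."""
    tour = np.argsort(keys)
    return dist[tour, np.roll(tour, -1)].sum()

def brkga(dist, pop=50, elite=10, mutants=10, rho=0.7, gens=200):
    n = dist.shape[0]
    P = rng.random((pop, n))
    for _ in range(gens):
        costs = np.array([decode_cost(ind, dist) for ind in P])
        P = P[np.argsort(costs)]                # best individuals first
        offspring = []
        for _ in range(pop - elite - mutants):
            e = P[rng.integers(elite)]          # elite parent
            o = P[rng.integers(elite, pop)]     # non-elite parent
            mask = rng.random(n) < rho          # biased coin favours elite
            offspring.append(np.where(mask, e, o))
        P = np.vstack([P[:elite], offspring, rng.random((mutants, n))])
    costs = np.array([decode_cost(ind, dist) for ind in P])
    return P[np.argmin(costs)], costs.min()

# Toy instance: symmetric random distance matrix.
n = 12
D = rng.random((n, n)); D = (D + D.T) / 2; np.fill_diagonal(D, 0)
best_keys, best_cost = brkga(D)
print(best_cost)
```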
543

Cloud computing adoption: organizational and environmental issues with the use of the TAM-TOE model in large companies.

Nemer Alberto Zaguir 19 April 2017 (has links)
Cloud computing is a model that has brought revolutionary changes in the way Information Technology (IT) is distributed. Among its benefits, it stands out for the feasibility of fast access, from anywhere, to resources made available as on-demand services, helping to create new business models. However, with IT assets outside the organization, interest in adoption studies has increased. The literature describes the use of several adoption models, among them the Technology Acceptance Model (TAM) and the Technology-Organization-Environment (TOE) framework. One study used the TAM-TOE combination and revealed a good degree of prediction of adoption by the model, but indicated the need for case studies to better understand adoption in other contexts, raising the question: how does the process of adopting cloud computing occur with respect to organizational and environmental issues? A systematic literature review was conducted to confirm research gaps and to extend the TAM-TOE model, highlighting elements of institutional theory and their influence in the adoption process. This is a qualitative, descriptive study structured as multiple case studies, with the unit of analysis defined as the process of adopting a cloud service in a large company characterized as "support" in the IT strategic grid model. Seven units were analyzed, addressing institutional pressures on top management, service evaluations, and contract licensing terms. The study contributes to elucidating the different ways institutional pressures act on top management in the adoption decision, with emphasis on coercive mechanisms. It exposes situations in which service management requires the participation of IT in the traditional way and discusses contractual aspects of service licensing. Finally, a reflection on the use of the model, the method, and the limitations of the research is presented, indicating future studies to deepen the contributions in other contexts.
544

IaaS-cloud security enhancement: an intelligent attribute-based access control model and implementation

Al-Amri, Shadha M. S. January 2017 (has links)
The cloud computing paradigm enables efficient utilisation of huge computing resources by multiple users, with minimal expense and deployment effort compared to traditional computing facilities. Although cloud computing has considerable benefits, some governments and enterprises remain hesitant to move their computing to the cloud because of the associated security challenges. Security is, therefore, a significant factor in cloud computing adoption. Cloud services consist of three layers: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Cloud computing services are accessed through network connections and used by multiple users, who share the resources through virtualisation technology. Accordingly, an efficient access control system is crucial to prevent unauthorised access. This thesis mainly investigates IaaS security enhancement from an access control point of view.
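Since the thesis proposes an intelligent attribute-based access control (ABAC) model, a minimal sketch of ABAC-style policy evaluation may help fix the idea. The attributes, rules, and permit-overrides strategy below are invented for illustration and are not the model developed in the thesis.

```python
# Illustrative attribute-based access control (ABAC) check: decisions depend
# on subject, resource, action, and environment attributes rather than fixed
# roles alone. Policy and request contents here are toy examples.
from dataclasses import dataclass, field

@dataclass
class Request:
    subject: dict   # e.g. role, department, clearance
    resource: dict  # e.g. department, sensitivity
    action: str
    environment: dict = field(default_factory=dict)  # e.g. time, location

# A policy is a list of rules; each rule is a predicate over the request.
POLICY = [
    # Admins may do anything to resources of their own department.
    lambda r: r.subject.get("role") == "admin"
              and r.subject.get("department") == r.resource.get("department"),
    # Any authenticated user may read low-sensitivity resources.
    lambda r: r.action == "read" and r.resource.get("sensitivity") == "low",
]

def is_permitted(request: Request) -> bool:
    """Permit-overrides: grant access if any rule matches, else deny."""
    return any(rule(request) for rule in POLICY)

req = Request(subject={"role": "admin", "department": "ops"},
              resource={"department": "ops", "sensitivity": "high"},
              action="start_vm")
print(is_permitted(req))  # True under the toy policy above
```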
545

Collaborative technologies for mobile workers and virtual project teams

McAndrew, Sean T. January 2009 (has links)
Information Technology is advancing at a frightening pace. Cloud computing and its subset, Software as a Service (SaaS), are rapidly challenging traditional thinking about enterprise-level application and infrastructure provision. The project-centric nature of the construction industry provides an environment where the use of SaaS is commercially appropriate, given its ability to provide rapid set-up and predictable costs at the outset. Through project extranets, the construction industry has been, unusually for it as an industry sector, an early adopter of this cloud computing model. However, findings from the research highlight a gap in the information and documents that pass from the construction phase into the operational phase of a building. This research considers examples of the SaaS IT model and how it has been used within a construction and facilities-management industry context. A prototype system was developed to address the requirements of the facilities-management work-order logging and tracking process. These requirements were gathered during detailed case studies of organisations in both the construction and facilities-management sectors, with a view to continuing the use of building-specific information through its full life cycle. The thesis includes a summary of the lessons learnt through system implementation within the construction-contracting organisation Taylor Woodrow, and it concludes with an IT strategy proposal developed on a cloud computing model.
546

VIPLE Extensions in Robotic Simulation, Quadrotor Control Platform, and Machine Learning for Multirotor Activity Recognition

January 2018 (has links)
abstract: Machine learning tutorials often employ an application- and runtime-specific solution to a given problem, and expect users to have a broad understanding of data analysis and software programming. This thesis focuses on designing and implementing a new, hands-on approach to teaching machine learning by streamlining the process of generating Inertial Measurement Unit (IMU) data from multirotor flight sessions, training a linear classifier, and applying that classifier to solve Multirotor Activity Recognition (MAR) problems in an online lab setting. MAR labs leverage cloud computing and data storage technologies to host a versatile environment capable of logging, orchestrating, and visualizing the solution to an MAR problem through a user interface. MAR labs extend Arizona State University's Visual IoT/Robotics Programming Language Environment (VIPLE) as a control platform for the multirotors used in data collection. VIPLE is a platform developed for teaching computational thinking, visual programming, Internet of Things (IoT), and robotics application development. As part of this education platform, this work also develops a 3D simulator capable of simulating the programmable behaviors of a robot within a maze environment and builds a physical quadrotor for use in MAR lab experiments. / Dissertation/Thesis / Masters Thesis Computer Science 2018
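A hands-on flavour of the MAR pipeline described above, sketched under assumptions rather than taken from the thesis: synthetic six-axis IMU streams are windowed into mean/std features and fed to a linear classifier. The activity names, feature choices, and data generator are all made up for the example.

```python
# Illustrative MAR-style pipeline: window raw IMU samples, extract simple
# features, train a linear classifier. Synthetic data stands in for flights.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def make_session(activity, n=2000):
    """Fake 6-axis IMU stream (accel xyz + gyro xyz) for one activity."""
    base = {"hover": 0.1, "circle": 0.6, "ascend": 1.2}[activity]
    return base * np.sin(np.linspace(0, 60, n))[:, None] + rng.normal(
        0, 0.2, (n, 6))

def windows(stream, size=100):
    """Mean/std features per fixed-size window, a common IMU baseline."""
    chunks = stream[: len(stream) // size * size].reshape(-1, size, 6)
    return np.hstack([chunks.mean(axis=1), chunks.std(axis=1)])

X, y = [], []
for label, act in enumerate(["hover", "circle", "ascend"]):
    feats = windows(make_session(act))
    X.append(feats); y += [label] * len(feats)
X, y = np.vstack(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print(f"test accuracy: {clf.score(Xte, yte):.2f}")
```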
547

Dynamic superscalar grid for technical debt reduction

Killian, Rudi January 2018 (has links)
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2018. / Organizations and private individuals look to technology advancements to increase their ability to make informed decisions, the motivation for technology adoption sprouting from an innate need for value generation. The technology currently heralded as the future platform for value addition is popularly termed cloud computing. The move to cloud computing, however, may conceivably accelerate the obsolescence cycle for currently retained Information Technology (IT) assets. Obsolescence is used here to mean the inability to repurpose or scale an information-system resource for needed functionality. The incapacity to reconfigure, grow, or shrink an IT asset, be it hardware or software, is a well-known narrative of technical debt. Emergent technical debt is professed to be all but inevitable in the light of Moore's Law, as technology must inexorably advance. Of more imminent concern, however, is that the major accelerating factors of technical debt are deemed to be non-holistic conceptualization and design conventions. Should management of IT assets fail to address technical debt continually, the technology platform would predictably require replacement. The unrealized value, the functional and fiscal loss, and the resultant e-waste generated by technical debt are meaningfully unattractive. Historically, the cloud milieu evolved from the grid and clustering paradigms, which allowed information sourcing across multiple and often dispersed computing platforms. Parallel operations in distributed computing environments are inherently value-adding, as they permit more effective use of resources and greater efficiency in data handling. The predominant information-processing solutions that implement parallel operations in distributed environments are abstracted constructs styled as High Performance Computing (HPC) or High Throughput Computing (HTC). Regardless of the underlying distributed environment, the archetypes of HPC and HTC differ radically in standard implementation. The foremost contrasting factors, parallelism granularity, failover, and locality in data handling, have recently been the subject of greater academic discourse towards a possible fusion of the two technologies. In this research, we identify probable platforms of future technical debt and subsequently recommend redeployment alternatives. The suggested alternatives take the form of scalable grids, which should align with the contemporary nature of individual information-processing needs. The potential of grids as efficient and effective information-sourcing solutions across geographically dispersed heterogeneous systems is envisioned to reduce or delay aspects of technical debt. As part of an experimental investigation to test the plausibility of these concepts, artefacts are designed to generically implement HPC and HTC; the design features exposed by the experimental artefacts could provide insights towards an amalgamation of HPC and HTC.
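As a rough illustration of the HPC/HTC contrast drawn above (not code from the thesis's artefacts), the sketch below sets an HTC-style farm of independent, coarse-grained jobs against an HPC-style tightly coupled iteration; both workloads are invented stand-ins.

```python
# Toy contrast between the HTC and HPC archetypes, standard library only.
# HTC jobs are independent, so failover is per-job; HPC steps exchange data
# each iteration, so parallelism is fine-grained and tightly coupled.
from concurrent.futures import ProcessPoolExecutor
import math

def htc_job(seed: int) -> float:
    """Independent job: no shared state, so a failed job is simply re-run."""
    return sum(math.sin(i * seed) for i in range(100_000))

def hpc_step(values: list[float]) -> list[float]:
    """Coupled step: each cell averages with its neighbours, so every
    iteration needs boundary exchange between workers."""
    last = len(values) - 1
    return [(values[max(i - 1, 0)] + values[i] + values[min(i + 1, last)]) / 3
            for i in range(len(values))]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        print(sum(pool.map(htc_job, range(8))))   # task farm (HTC)
    state = [float(i) for i in range(10)]
    for _ in range(5):                            # lock-step iterations (HPC)
        state = hpc_step(state)
    print(state[:3])
```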
548

Energy management for cloud computing environment.

Nascimento, Viviane Tavares 08 August 2017 (has links)
As one of the major energy consumers in the world, the Information and Communication Technology (ICT) sector searches for efficient ways to cope with the energy expenditure of its infrastructure. Cloud computing service providers, in an area that tends to grow in the coming years, look for approaches that change the pattern of energy expenditure while reducing operational costs. The most common strategy for coping with energy consumption concerns its efficiency; however, there is an opportunity to encourage a new demand pattern based on energy supply and price variation. A management approach is proposed that takes the fluctuation of the energy supply into account when negotiating the allocation of contracts. Contractible service terms regarding the powering of services are established to enable the proposed approach, and a new service layer able to deal with energy requirements is defined as an element of the cloud computing environment. The existing literature does not address the different terms of the energy supply and the management of contracts simultaneously. The proposed method includes a description of the service terms, the definition of the energy-related service layer, and a framework for its implementation. A model designed to validate the approach applies a use case that simulates data centers (DCs) spread through the metropolitan area of São Paulo. The results show the model's ability to manage contract allocation so as to best exploit the self-generated energy. Taking the range of assignment costs into account, for both the user and the service provider, the method negotiates the most affordable contract assignment given the variation in energy supply.
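A toy rendering of the energy-aware allocation idea, with all prices, capacities, and contract demands invented: each contract is greedily placed on the feasible data centre with the cheapest current energy price. The thesis's actual method negotiates contracts against supply variation; this sketch only conveys the flavour.

```python
# Toy energy-aware contract allocation: place each service contract on the
# data centre whose current energy price yields the lowest assignment cost.
dcs = {"dc_north": {"price": 0.42, "free_kw": 120},
       "dc_south": {"price": 0.35, "free_kw": 60},
       "dc_east":  {"price": 0.55, "free_kw": 200}}

contracts = [("crm", 50), ("batch", 80), ("web", 40)]  # (name, kW demand)

# Greedy: most power-hungry contracts first, cheapest feasible DC each time.
for name, kw in sorted(contracts, key=lambda c: -c[1]):
    feasible = [(d, v) for d, v in dcs.items() if v["free_kw"] >= kw]
    dc, info = min(feasible, key=lambda dv: dv[1]["price"])
    info["free_kw"] -= kw
    print(f"{name}: {dc} at {info['price'] * kw:.1f} cost units")
```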
549

Inferring models from cloud APIs and reasoning over them: a tooled and formal approach

Challita, Stéphanie 21 December 2018 (has links)
With the advent of cloud computing, different cloud providers with heterogeneous cloud services and Application Programming Interfaces (APIs) have emerged. This heterogeneity complicates the implementation of an interoperable multi-cloud system. Among the multi-cloud interoperability solutions, Model-Driven Engineering (MDE) has proven quite advantageous and is the most widely adopted methodology for raising the level of abstraction and masking the heterogeneity of the cloud. However, most existing MDE solutions for the cloud are not representative of the cloud APIs and lack formalization. To address these shortcomings, I present in this thesis an approach based on the Open Cloud Computing Interface (OCCI) standard, MDE, and formal methods. I provide two major contributions, implemented in the context of the OCCIware project. First, I propose a reverse-engineering approach to extract knowledge from the ambiguous textual documentation of cloud APIs and to enhance its representation using MDE techniques. This approach is applied to Google Cloud Platform (GCP), for which I provide GCP Model, a precise model-driven specification automatically inferred from GCP's textual documentation. Second, I propose the fclouds framework to achieve semantic interoperability in multi-clouds, i.e., to identify the common concepts between cloud APIs and to reason over them. The fclouds language is a formalization of OCCI concepts and operational semantics in the Alloy formal specification language. To demonstrate the effectiveness of the fclouds language, I formally specify thirteen case studies and verify their properties.
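The real verification work here is done in Alloy; purely as an illustration of the kind of structural property such a formalization can check, the following sketch tests one invented invariant over a toy OCCI-like configuration.

```python
# Minimal, illustrative flavour of a structural property over a cloud-API
# model (the thesis uses Alloy; this Python sketch only mimics one invariant
# over an invented toy model).
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    rid: str
    kind: str

@dataclass(frozen=True)
class Link:
    source: str
    target: str

# Toy OCCI-like configuration: resources plus links between them.
resources = {Resource("vm1", "compute"), Resource("net1", "network")}
links = {Link("vm1", "net1")}

def well_formed(resources, links) -> bool:
    """Invariant: every link endpoint names an existing resource, and no
    resource links to itself."""
    ids = {r.rid for r in resources}
    return all(l.source in ids and l.target in ids and l.source != l.target
               for l in links)

print(well_formed(resources, links))  # True for the toy configuration
```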
550

PIPEL: an elasticity management model for pipeline-organized applications

Meyer, Vinícius 23 August 2016 (has links)
Workflows have become a growing standard for many scientific experiments in computing environments. Scientific workflows consist of several applications structured in an activity flow, where the output of one becomes the input of another. A pipeline application is a type of workflow that receives a set of tasks, each of which must pass through all stages of the application sequentially, which can lead to prohibitive execution times. Given this problem, pipeline applications can benefit from using distinct resources for each stage, that is, from executing on distributed platforms. However, specific dependencies and distributed computing problems arise from the interaction between the processing stages and the large amount of data that must be processed. The input stream for applications that use pipeline patterns can be intense, erratic, or irregular. Depending on the behavior of the task flow, some stages may suffer degraded performance, delaying subsequent stages and ultimately hurting the application's performance. One alternative is to allocate the maximum available resources (over-provisioning) at each application stage; however, this technique can generate high infrastructure costs, and at times resources may remain idle. Elasticity in cloud computing environments thus appears as an alternative, exploiting the pay-as-you-go concept. In this context, we propose an elasticity model based on the cloud PaaS (Platform as a Service) layer, named Pipel. This model allows pipeline applications to take advantage of the dynamic resource provisioning of cloud computing infrastructure. Pipel uses a reactive approach, with thresholds for elasticity decisions based on the CPU load of the virtual machines at each application stage. Each stage has a load balancer (called the stage controller) and a number of operating resources. The stage controller receives the tasks the stage should run, places them in a queue, and distributes them among the virtual machines available at its stage. According to established rules, Pipel performs elasticity actions on the cloud environment. To validate this proposal, we developed a prototype and tested it in two scenarios: (i) without elasticity and (ii) with elasticity. In each scenario we used four processing loads: (i) increasing, (ii) decreasing, (iii) constant, and (iv) oscillating. The results show a 38% reduction in the application's execution time using the elasticity provided by Pipel.
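The reactive threshold mechanism described above can be sketched in a few lines. The thresholds, stage names, and monitoring stub below are assumptions for illustration, not Pipel's actual values or code.

```python
# Illustrative reactive-elasticity loop in the spirit of Pipel: per-stage CPU
# thresholds drive scale-out/scale-in decisions.
import random

SCALE_OUT_AT = 0.80   # add a VM when mean stage CPU exceeds this
SCALE_IN_AT = 0.30    # remove a VM when mean stage CPU drops below this

stages = {"decode": 2, "detect": 2, "track": 1}  # stage -> current VM count

def mean_cpu(stage: str) -> float:
    """Stand-in for a real monitor; returns the stage's mean CPU load."""
    return random.uniform(0.1, 1.0)

def elasticity_step(stages: dict) -> None:
    for stage, vms in stages.items():
        load = mean_cpu(stage)
        if load > SCALE_OUT_AT:
            stages[stage] = vms + 1          # provision one more VM
        elif load < SCALE_IN_AT and vms > 1:
            stages[stage] = vms - 1          # release an idle VM
        print(f"{stage}: load={load:.2f} vms={stages[stage]}")

for _ in range(3):        # a few control iterations
    elasticity_step(stages)
```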
