About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Investigating the effects of load on the XIFI node

Guduru, Manish Reddy January 2015 (has links)
Having a good understanding of the load requirements in a datacenter improves the ability to provision the available resources effectively to meet the demands and objectives of application services. In a large project like XIFI this aspect becomes even more critical because of the limited availability of resources and the complexity of the various entities present. In this study we frame a structure that provides deep insight into the XIFI infrastructure. Further, we model the user requests that arrive at the node asking for resources to run their applications, and we aim to clarify the different aspects involved in this modelling. The objective of the present study is to investigate the effect of load on the XIFI node. To achieve this objective, we model the XIFI node by examining the various entities involved in it, and we give an overview of what constitutes load on the XIFI node. We conduct a detailed specification study, after which we identify the entities required for modelling both the XIFI node and the requests. We examine the model by simulating it in CloudSim for two scenarios with differing specifications. We simulated the designed structure for 30 iterations and analysed 10,000 user requests in two cases, where the total RAM of the node is increased in the second case compared to the first. We analyse the CPU, RAM, bandwidth and storage usage in both cases and examine the effects of the user requests on each of them. The results provided evidence that the load indices on the host depend on each other, and that the request model has an impact on the load of the host. It can also be concluded that resource provisioning can be effective if the user behaviour is known.
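As a loose illustration of the kind of experiment this abstract describes, the sketch below admits 10,000 synthetic requests against a single node and tracks the four load indices; all capacities and request sizes are invented here and are not the thesis's actual XIFI node specification.

```java
import java.util.Random;

/** Minimal sketch of a request-vs-capacity experiment. All numbers (host
 *  capacity, request sizes) are illustrative assumptions. */
public class NodeLoadSketch {
    public static void main(String[] args) {
        double cpuCap = 64, ramCap = 256, bwCap = 10_000, storCap = 20_000; // assumed capacities
        double cpu = 0, ram = 0, bw = 0, stor = 0;
        int admitted = 0, rejected = 0;
        Random rnd = new Random(42);
        for (int i = 0; i < 10_000; i++) {
            // Each synthetic request asks for a random slice of every resource.
            double rc = 1 + rnd.nextInt(4), rr = 1 + rnd.nextInt(8);
            double rb = 10 + rnd.nextInt(90), rs = 5 + rnd.nextInt(45);
            if (cpu + rc <= cpuCap && ram + rr <= ramCap
                    && bw + rb <= bwCap && stor + rs <= storCap) {
                cpu += rc; ram += rr; bw += rb; stor += rs; admitted++;
            } else {
                rejected++; // node saturated on at least one load index
            }
        }
        System.out.printf("admitted=%d rejected=%d%n", admitted, rejected);
        System.out.printf("CPU %.1f%% RAM %.1f%% BW %.1f%% Storage %.1f%%%n",
                100 * cpu / cpuCap, 100 * ram / ramCap, 100 * bw / bwCap, 100 * stor / storCap);
    }
}
```

Even this toy version shows the coupling the thesis reports: a request is rejected as soon as any one index saturates, so the usable headroom of each resource depends on the others.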
2

Experimentation on dynamic congestion control in Software Defined Networking (SDN) and Network Function Virtualisation (NFV)

Kamaruddin, Amalina Farhan January 2017 (has links)
In this thesis, a novel framework for dynamic congestion control is proposed. The study concerns congestion control in broadband communication networks. Congestion results when demand temporarily exceeds capacity, and it leads to severe degradation of Quality of Service (QoS) and possibly loss of traffic. Since traffic is stochastic in nature, high demand may arise anywhere in a network and cause congestion. There are different ways to mitigate the effects of congestion: by rerouting, by aggregation to take advantage of statistical multiplexing, and by discarding too-demanding traffic, which is known as admission control. This thesis tries to accommodate as much traffic as possible, and studies the effect of routing and aggregation on a rather general mix of traffic types. Software Defined Networking (SDN) and Network Function Virtualisation (NFV) are concepts that allow dynamic configuration of network resources by decoupling control from payload data and by allocating network functions to the most suitable physical node. This allows implementation of a centralised control that takes the state of the entire network into account and configures nodes dynamically to avoid congestion. It is assumed that node controls can be expressed in commands supported by OpenFlow v1.3. Due to state dependencies in space and time, the network dynamics are very complex, so the study resorts to a simulation approach. The load in the network depends on many factors, such as traffic characteristics, the traffic matrix, the topology and node capacities. To be able to study the impact of the control functions, some parts of the environment are fixed, such as the topology and the node capacities, while the traffic distribution in the network is statistically averaged over randomly generated traffic matrices. The traffic consists of approximately equal intensities of smooth, bursty and long-memory traffic. An algorithm is designed that routes traffic and configures queue resources so that delay is minimised; delay is chosen as the optimisation parameter because it is additive and real-time applications are delay sensitive. The optimisation is studied both with respect to total end-to-end delay and maximum end-to-end delay. The delays are used as link weights and paths are determined by Dijkstra's algorithm. Furthermore, nodes are configured to serve the traffic optimally, which in turn depends on the routing. The proposed algorithm is a fixed-point system of equations that iteratively evaluates routing, aggregation and delay until an equilibrium point is found. Three strategies are compared: static node configuration, where each queue is allocated 1/3 of the node resources and there is no aggregation; aggregation of real-time (taken as smooth and bursty) traffic onto the same queue; and dynamic aggregation based on the entropy of the traffic streams and their aggregates. The simulation study shows good results, with gains of 10-40% in the QoS parameters, demonstrating the positive effects of the proposed routing and aggregation strategy and the usefulness of the algorithm. The proposed algorithm constitutes the central control logic, and the resulting control actions are realisable through the SDN/NFV architecture.
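The routing step named in the abstract, Dijkstra's algorithm over delay-valued link weights, can be sketched as follows. The 4-node topology and delay values are invented for illustration; in the full method the delays would be re-derived from the queue configuration and the whole routing-aggregation-delay loop iterated to a fixed point.

```java
import java.util.*;

/** Sketch of minimum-delay routing: per-link delays are the edge weights
 *  and shortest paths come from Dijkstra's algorithm. Topology is assumed. */
public class DelayRouting {
    public static void main(String[] args) {
        double[][] delay = { // delay[i][j] in ms; Infinity = no link (assumed topology)
            {0, 2, 8, Double.POSITIVE_INFINITY},
            {2, 0, 3, 10},
            {8, 3, 0, 1},
            {Double.POSITIVE_INFINITY, 10, 1, 0}};
        int n = delay.length, src = 0;
        double[] dist = new double[n];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[src] = 0;
        boolean[] done = new boolean[n];
        for (int k = 0; k < n; k++) {
            int u = -1;
            for (int v = 0; v < n; v++)   // pick the closest unfinished node
                if (!done[v] && (u < 0 || dist[v] < dist[u])) u = v;
            done[u] = true;
            for (int v = 0; v < n; v++)   // relax all links out of u
                dist[v] = Math.min(dist[v], dist[u] + delay[u][v]);
        }
        System.out.println("min end-to-end delay from node 0: " + Arrays.toString(dist));
    }
}
```

Because delay is additive along a path, summing link delays as Dijkstra weights directly yields the total end-to-end delay the thesis optimises.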
3

Avaliação mista de aplicações do tipo Bag of Tasks sobre infraestruturas de nuvem física limitada e virtual escalada com a utilização do OpenStack e do CloudSim / Mixed evaluation of Bag of Tasks applications over limited physical and virtual scheduled cloud infrastructures with OpenStack and CloudSim Utilization

Angelin, Fernando 30 August 2017 (has links)
Cloud Computing has shown extraordinary growth in recent years, in terms of the quantity and variety of services offered, and these services are taking a ubiquitous form in everyday life. As a result, users who generally require high availability of processing look to the Cloud for solutions that reduce specific costs, such as building and maintaining a private infrastructure. One way out is to rent infrastructure in a Cloud, or even to use the Cloud to dimension one's own infrastructure so that it meets demand without under- or over-provisioning. This dissertation presents a mixed simulation model, which seeks to compare a limited physical infrastructure to a simulated virtual infrastructure with the same characteristics. For this, tests were performed on a limited physical infrastructure and simulation tests were run using CloudSim, scaling the size of the Bag of Tasks (BoT) workloads and the number of hosts and processing cores. For these tests, algorithms were implemented that transform the BoT input for execution on the physical infrastructure and in the simulation. Also, classes were prototyped to complement CloudSim, both for reading the transformed BoTs and for creating the simulated infrastructure. With the tests carried out, we observed the stability of the system when simulating small, medium and large BoTs on infrastructures that, in our case, were classified as small, medium and large. Another important observation was that when the infrastructure carries load external to the desired execution (use by another user, for example), the final execution time of the BoTs increases in proportion to how much of the infrastructure is in use. We also observed that the granularity of the tasks impacts execution. With regard to scalability, it was noticed that BoTs classified as large for infrastructures categorized as small behaved as small on infrastructures identified as large.
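A minimal sketch of the BoT-transformation idea: a bag described by a task count and a per-task length is expanded into per-task records that either a physical launcher or a simulator could consume. The record fields and the MI/MIPS units are assumptions loosely modelled on CloudSim conventions, not the dissertation's actual input format.

```java
import java.util.*;

/** Hedged sketch: expand a Bag-of-Tasks description into identical,
 *  independent task records. All fields and units are assumed. */
public class BotTransform {
    record Task(int id, long lengthMI, int cores) {}

    static List<Task> expand(int bagSize, long lengthMI, int coresPerTask) {
        List<Task> tasks = new ArrayList<>();
        for (int i = 0; i < bagSize; i++)
            tasks.add(new Task(i, lengthMI, coresPerTask)); // BoT tasks are independent and identical
        return tasks;
    }

    public static void main(String[] args) {
        // A "small" bag on an assumed small infrastructure: 100 tasks of 10^4 MI each.
        List<Task> bag = expand(100, 10_000, 1);
        long totalWork = bag.stream().mapToLong(Task::lengthMI).sum();
        int cores = 8; long mipsPerCore = 2_000; // assumed physical-node capacity
        double idealSeconds = (double) totalWork / (cores * mipsPerCore);
        System.out.printf("tasks=%d, ideal makespan on %d cores: %.1f s%n",
                bag.size(), cores, idealSeconds);
    }
}
```

The ideal-makespan figure is the zero-overhead baseline; the dissertation's observation that external load stretches execution time proportionally corresponds to shrinking the effective `cores * mipsPerCore` term.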
4

Energy-aware adaptation in Cloud datacenters

Mahadevamangalam, Srivasthav January 2018 (has links)
Context: Cloud computing provides services and resources to customers on a pay-per-use basis. As demand for services grows, Cloud computing relies on thousands of data centers, which consume large amounts of energy; the power consumed for cooling the data centers is particularly high. Recent research therefore seeks models that reduce the energy consumed by data centers. One way to minimise energy consumption is dynamic Virtual Machine Consolidation (VM Consolidation), in which VMs are migrated from one host to another so that energy can be saved: up to 70% of a host's energy consumption is saved when an idle host is switched to sleep mode after its VMs have been migrated away. There are many energy-adaptive heuristic algorithms for VM Consolidation. Host overload detection, host underload detection, VM selection and VM placement are the heuristic sub-problems of VM Consolidation; solving them well results in lower energy consumption in the data center while meeting Quality of Service (QoS) requirements. In this thesis, we propose new heuristic algorithms to reduce energy consumption. Objectives: The objective of this research is to provide an energy-efficient model that reduces energy consumption by proposing new heuristic algorithms for the VM Consolidation technique that consume less energy. Presenting the advantages and disadvantages of the proposed heuristic algorithms is also an objective of our experiment. Methods: A literature review was performed to gain knowledge about the workings and performance of existing VM Consolidation algorithms. We then proposed new host overload detection, host underload detection, VM selection and VM placement heuristic algorithms. In our work, we obtained 32 combinations of host overload detection and VM selection heuristics, together with two VM placement heuristic algorithms. We also proposed a dynamic host underload detection algorithm, which is used in all 32 combinations. The other research method chosen is experimentation, to analyse the performance of both the proposed and existing algorithms using PlanetLab workload traces. The simulation is done using CloudSim. Results: To compare the algorithms, the following parameters were considered: energy consumption, number of migrations, Performance Degradation due to VM Migrations (PDM), Service Level Agreement violation Time per Active Host (SLATAH), SLA Violation (SLAV), and the combined Energy consumption and SLA Violation metric (ESV), derived from the PDM, SLATAH, energy consumption and SLA Violation values. We conducted a T-test and computed Cohen's d effect size to measure the significance and magnitude of the differences between algorithms. For the performance analysis, the results obtained from the proposed algorithms were compared with the existing algorithm. Of the 32 combinations of host overload detection and VM selection heuristics, MADmedian_MaxR (Mean Absolute Deviation around the median (MADmedian) with Maximum Requested RAM (MaxR)) using the Modified Worst Fit Decreasing (MWFD) VM placement algorithm, and MADmean_MaxR (Mean Absolute Deviation around the mean (MADmean) with Maximum Requested RAM (MaxR)) using the Modified Second Worst Fit Decreasing (MSWFD) VM placement algorithm, gave the best results, consuming the least energy with minimum SLA violation. Conclusion: From the comparisons it is concluded that the proposed algorithms perform better than the existing algorithm.
Our aim was to propose a better energy-efficient model using VM Consolidation techniques that minimises power consumption while meeting the SLAs. We therefore proposed energy-efficient algorithms for the VM Consolidation technique, compared them with the existing algorithm, and showed that our proposed algorithms perform better. We proposed 32 combinations of heuristic algorithms (host overload detection and VM selection) with two adaptive heuristic VM placement algorithms, together with a dynamic host underload detection algorithm used in all 32 combinations. When the proposed algorithms are compared with the existing algorithm, 22 combinations of host overload detection and VM selection heuristics with MWFD (Modified Worst Fit Decreasing) VM placement, and 20 combinations with MSWFD (Modified Second Worst Fit Decreasing) VM placement, show better performance than the existing algorithm. Thus, our proposed heuristic algorithms give better results, with minimum energy consumption and less SLA violation.
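Two of the heuristics named in this abstract can be sketched compactly: MADmedian host-overload detection, which in its usual formulation sets an adaptive utilisation threshold of 1 - s * MAD around the median of recent CPU samples, and MaxR VM selection, which evicts the VM with the largest requested RAM. The safety parameter s and the sample history below are illustrative assumptions, not the thesis's tuned values.

```java
import java.util.*;

/** Sketch of MADmedian overload detection and MaxR VM selection. */
public class ConsolidationHeuristics {
    static double median(double[] v) {
        double[] s = v.clone();
        Arrays.sort(s);
        int n = s.length;
        return n % 2 == 1 ? s[n / 2] : (s[n / 2 - 1] + s[n / 2]) / 2;
    }

    /** True if current utilisation exceeds the adaptive MAD-based threshold. */
    static boolean isOverloaded(double[] history, double current, double s) {
        double med = median(history);
        double[] dev = new double[history.length];
        for (int i = 0; i < history.length; i++) dev[i] = Math.abs(history[i] - med);
        double mad = median(dev);
        double threshold = 1.0 - s * mad; // tighter threshold when load is volatile
        return current > threshold;
    }

    /** MaxR: pick the VM with the maximum requested RAM for migration. */
    static int selectVm(int[] requestedRamMb) {
        int best = 0;
        for (int i = 1; i < requestedRamMb.length; i++)
            if (requestedRamMb[i] > requestedRamMb[best]) best = i;
        return best;
    }

    public static void main(String[] args) {
        double[] history = {0.55, 0.80, 0.95, 0.60, 0.75}; // recent CPU utilisation samples
        System.out.println("overloaded: " + isOverloaded(history, 0.92, 2.5));
        System.out.println("evict VM index: " + selectVm(new int[]{512, 4096, 1024}));
    }
}
```

The design intuition: the more erratic a host's recent utilisation, the larger the MAD and the lower the overload threshold, so volatile hosts are declared overloaded earlier and relieved before SLA violations accumulate.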
5

A comparison of energy efficient adaptation algorithms in cloud data centers

Penumetsa, Swetha January 2018 (has links)
Context: In recent years, Cloud computing has gained wide attention in both industry and academia, as Cloud services offer a pay-per-use model and the need for reliability and large-scale computation keeps growing with the continuous expansion of Cloud-based companies. However, the rise in Cloud computing usage has a negative impact on energy consumption, as Cloud data centers consume a huge amount of energy. In order to minimise the energy consumption of virtualised data centers, researchers have proposed various energy-efficient resource-management strategies. Dynamic Virtual Machine Consolidation is one of the prominent techniques and an active research area in recent times, used to improve resource utilisation and minimise the electric power consumption of a data center. The technique monitors data-center utilisation, identifies overloaded and underloaded hosts, migrates some or all Virtual Machines (VMs) to other suitable hosts using VM selection and VM placement, and switches underloaded hosts to sleep mode. Objectives: The objective of this study is to define and implement new energy-aware heuristic algorithms that save energy in Cloud data centers, to identify the best-performing algorithm, and to compare the performance of the proposed heuristics with existing ones. Methods: Initially, a literature review was conducted to identify the adaptive heuristic algorithms previously proposed for energy-aware VM Consolidation and to find the metrics used to measure their performance. Based on this knowledge, we proposed 32 combinations of novel adaptive heuristics for host overload detection (8) and VM selection (4), one host underload detection algorithm, and two adaptive heuristic VM placement algorithms, which together help minimise both the energy consumption and the overall Service Level Agreement (SLA) violation of a Cloud data center. An experiment was then conducted to measure the performance of all proposed heuristic algorithms. We used the CloudSim simulation toolkit for the modelling, simulation and implementation of the proposed heuristics, and evaluated the algorithms using real workload traces of PlanetLab VMs. Results: The results were measured using the following metrics: energy consumption of the data center (power model), Performance Degradation due to Migration (PDM), Service Level Agreement violation Time per Active Host (SLATAH), Service Level Agreement Violation (SLAV = PDM x SLATAH), and the combined Energy consumption and SLA Violation metric (ESV). For all four categories of VM Consolidation we compared the performance of the proposed heuristics with each other and identified the best proposed heuristic algorithm in each category. We also compared the proposed heuristic algorithms with the existing heuristics identified in the literature and report how many of the newly proposed algorithms work more efficiently than the existing ones. This comparative analysis was done using a T-test and Cohen's d effect size.
From the comparison of all proposed algorithms, we concluded that the Mean Absolute Deviation around the median (MADmedian) host overload detection algorithm equipped with Maximum requested RAM VM selection (MaxR) using Modified First Fit Decreasing VM placement (MFFD), and the Standard Deviation (STD) host overload detection algorithm equipped with MaxR using Modified Last Fit Decreasing VM placement (MLFD), performed better than the other 31 combinations of proposed overload detection and VM selection heuristics with regard to Energy consumption and SLA Violation (ESV). Moreover, in the comparative study between existing and proposed algorithms, 23 and 21 combinations of proposed host overload detection and VM selection algorithms, using the MFFD and MLFD VM placements respectively, performed more efficiently than the existing (baseline) heuristic algorithms considered in this study. Conclusions: This thesis presents novel heuristic algorithms that minimise both energy consumption and SLA Violation in virtualised data centers: 23 combinations of proposed host overload detection and VM selection algorithms using MFFD VM placement, and 21 combinations using MLFD VM placement, consume the minimum amount of energy with minimal SLA violation compared to the existing algorithms. The study offers scope for future research on improving resource utilisation and minimising the electric power consumption of a data center, and it can be extended by implementing the work on other Cloud software platforms and by developing even more efficient algorithms for all four categories of VM Consolidation.
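The composite metrics used to rank these heuristics follow directly from the definitions in the abstract: SLAV = PDM x SLATAH and ESV = energy x SLAV. The sketch below only demonstrates the arithmetic, with figures invented for illustration rather than taken from the thesis.

```java
/** Sketch of the composite SLA/energy metrics; all sample figures are assumed. */
public class SlaMetrics {
    public static void main(String[] args) {
        double pdm = 0.0008;      // performance degradation due to migrations (fraction)
        double slatah = 0.05;     // fraction of time active hosts spent at 100% CPU
        double energyKwh = 150.0; // total data-center energy over the run (assumed)

        double slav = pdm * slatah;    // combined SLA-violation metric
        double esv = energyKwh * slav; // energy-and-SLA trade-off metric
        System.out.printf("SLAV = %.2e, ESV = %.2e%n", slav, esv);
    }
}
```

Multiplying the two terms makes ESV a trade-off metric: an algorithm cannot score well by saving energy at the cost of SLA violations, or vice versa, since either factor growing inflates the product.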
6

Autoscaling through Self-Adaptation Approach in Cloud Infrastructure. A Hybrid Elasticity Management Framework Based Upon MAPE (Monitoring-Analysis-Planning-Execution) Loop, to Ensure Desired Service Level Objectives (SLOs)

Butt, Sarfraz S. January 2019 (has links)
The project proposes a MAPE-based hybrid elasticity management framework on the basis of insights accrued during a systematic analysis of the relevant literature. Each stage of the MAPE process acts independently as a black box in the proposed framework while dealing with neighbouring stages. Being modular in nature, the underlying algorithms in any stage can thus be replaced with more suitable ones without affecting any other stage. The hybrid framework enables proactive and reactive autoscaling approaches to be implemented simultaneously within the same system. The proactive approach is incorporated as the core decision-making logic operating on forecast data, while the reactive approach, based on actual data, acts as a damage-control measure activated only if the proactive approach runs into problems. Thus the benefits of both worlds, pre-emption as well as reliability, can be achieved through the proposed framework. It uses time-series analysis (the moving-average method / exponential smoothing) and threshold-based static rules (with multiple monitoring intervals and dual threshold settings) during the analysis and planning phases of the MAPE loop, respectively. The mathematical illustration of the framework incorporates multiple parameters, namely VM initiation delay / release criterion, network latency, system oscillations, threshold values, smart kill, etc. The research concludes that the recommended parameter settings depend primarily on the particular autoscaling objective and are often conflicting in nature; thus no single autoscaling system with fixed values can possibly meet all objectives simultaneously, irrespective of the reliability of the underlying framework. The project successfully implements a complete cloud infrastructure and autoscaling environment over the experimental platforms, i.e. OpenStack and CloudSim Plus. In a nutshell, the research provides a solid understanding of the autoscaling phenomenon, devises a MAPE-based hybrid elasticity management framework, and explores its implementation potential over OpenStack and CloudSim Plus.
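One pass through the MAPE loop described above might look like the following sketch, with exponential smoothing in the analysis phase and dual thresholds in the planning phase. The smoothing factor, the thresholds and the workload trace are assumptions for illustration, not the project's tuned settings.

```java
/** Sketch of one Monitor-Analyse-Plan-Execute pass. Settings are assumed. */
public class MapeLoopSketch {
    public static void main(String[] args) {
        double[] observedUtil = {0.40, 0.55, 0.70, 0.85, 0.90}; // Monitor: CPU utilisation samples
        double alpha = 0.5, upper = 0.80, lower = 0.30;          // assumed smoothing/threshold settings
        int vms = 2;

        double forecast = observedUtil[0];
        for (double u : observedUtil)                 // Analyse: exponential smoothing
            forecast = alpha * u + (1 - alpha) * forecast;

        if (forecast > upper) vms++;                  // Plan: proactive scale-out
        else if (forecast < lower && vms > 1) vms--;  // Plan: scale-in (cf. "smart kill")

        System.out.printf("forecast=%.2f -> Execute: run %d VM(s)%n", forecast, vms);
    }
}
```

The dual thresholds create a dead band between `lower` and `upper` in which no action is taken, which is one standard way to damp the system oscillations the abstract mentions.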
7

Projeto e avaliação de um broker como agente de intermediação e QoS em uma nuvem computacional híbrida / Design and evaluation of a broker as QoS and intermediation agent in hybrid cloud computing

Pardo, Mario Henrique de Souza 16 June 2016 (has links)
This doctoral thesis proposes a cloud broker architecture for hybrid cloud computing environments. A cloud broker performs mediation between clients and providers, receiving customer requests and forwarding them to the provider service that best suits the requested Quality of Service (QoS) requirements. The proposed QoS-aware service broker architecture is called QBroker; implementation features of its mode of operation, as well as its interaction with the virtual resources of a cloud environment, are presented. The cloud deployment model considered is a hybrid cloud with a service-oriented architecture (SOA) characterization, in which remote services are made available to customers. The task-scheduling policy developed for QBroker is service intermediation, taking into account QoS negotiation, differentiation of service instances (SOA) and dynamic allocation of services. Moreover, the entire characterization of QBroker's mode of operation is based on the intermediation concept of the NIST cloud reference model. The QBroker component was introduced into the BEQoS (Bursty, Energy and Quality of Service) cloud computing architecture, developed in the Laboratory of Distributed Systems and Concurrent Programming at ICMC-USP in São Carlos. Performance evaluations of the QBroker architecture were conducted through simulation programs using the CloudSim simulator API and the CloudSim-BEQoS architecture. Three experimental scenarios were evaluated and, according to the analysis of the results, it was possible to validate that the architectural features implemented in QBroker have a significant impact on the response variables considered. Thus, it was possible to show that the use of QBroker as a mediation mechanism in hybrid cloud environments with SOA yields performance gains for the cloud system and improves the quality of the services offered.
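The intermediation idea can be pictured with a small sketch: the broker receives a request carrying a QoS requirement (here, a maximum response time) and forwards it to the cheapest provider service instance that satisfies it. The catalogue and its cost and latency fields are invented for illustration; QBroker's actual negotiation and scheduling logic is richer than this.

```java
import java.util.*;

/** Hedged sketch of QoS-aware broker intermediation. Catalogue data is assumed. */
public class BrokerSketch {
    record Service(String provider, double maxResponseMs, double costPerCall) {}

    static Optional<Service> intermediate(List<Service> catalog, double requiredMs) {
        return catalog.stream()
                .filter(s -> s.maxResponseMs() <= requiredMs)            // meets the QoS requirement
                .min(Comparator.comparingDouble(Service::costPerCall));  // cheapest qualifying instance
    }

    public static void main(String[] args) {
        List<Service> catalog = List.of(
                new Service("private-cloud", 120, 0.002),
                new Service("public-A", 80, 0.005),
                new Service("public-B", 40, 0.009));
        intermediate(catalog, 100).ifPresentOrElse(
                s -> System.out.println("route to " + s.provider()),
                () -> System.out.println("no provider meets the QoS requirement"));
    }
}
```

In a hybrid-cloud setting this filter-then-rank step is what lets the broker spill work from the private infrastructure to public providers only when the QoS requirement forces it.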
8

IoT Workload Characterisation for Next Generation Cloud Systems

Mirza, Fatema January 2022 (has links)
The integration of the Internet of Things and cloud computing has led to the emergence of new classes of applications, ranging from smart healthcare, smart and precision agriculture and smart manufacturing to smart environmental monitoring. The rapid surge in the use of these applications is expected to generate massive amounts of data with characteristics that are as yet unstudied. It can be hypothesised that each IoT-enabled application exhibits a diverse range of characteristics that, if modelled correctly, may lead to efficient distributed systems. This thesis studies the traffic characteristics of an IoT-enabled healthcare application in order to build intelligent policies for scalable IoT-cloud systems, employing workload prediction and load balancing demonstrated on the CloudSim Plus platform. The realistic incoming traffic of the SSiO IoT healthcare application system is studied, developed and modelled. Workload prediction algorithms are developed based on ARIMA and SARIMA, then run and extensively evaluated to select the one with the best performance; SARIMA outperformed ARIMA by 200% on the basis of MAE, RMSE and MAPE. Based on the SARIMA prediction two time periods in advance, the load-balancing algorithm pre-emptively performs horizontal scaling. The results reveal that the load balancer with SARIMA prediction outperforms round-robin and active load balancers in response time and cost by at least 64% in the worst-case scenario. To conclude, the thesis reflects on load balancing for IoT systems and the directions it could take in the future towards a more holistic, sustainable approach on real-life platforms.
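A SARIMA model is too heavy to reproduce here, so the sketch below substitutes a seasonal-naive forecast (repeat the value from one season earlier) to show how a two-step-ahead prediction can pre-empt horizontal scaling, as the abstract describes. The trace, season length and per-VM capacity are assumptions invented for illustration.

```java
/** Predictive horizontal scaling with a seasonal-naive stand-in for SARIMA. */
public class PredictiveScaling {
    public static void main(String[] args) {
        int[] requestsPerMin = {20, 50, 40, 90, 22, 55, 42, 95, 21, 52}; // assumed seasonal trace
        int season = 4, horizon = 2, capacityPerVm = 40, vms = 2;

        // Seasonal-naive stand-in for SARIMA: y(t+h) = y(t+h-season).
        int t = requestsPerMin.length - 1;
        int predicted = requestsPerMin[t + horizon - season];

        int needed = (int) Math.ceil(predicted / (double) capacityPerVm);
        if (needed > vms) {
            System.out.printf("predicted %d req/min -> scale out to %d VM(s) ahead of time%n",
                    predicted, needed);
        } else {
            System.out.println("predicted load fits current capacity");
        }
    }
}
```

Forecasting two periods ahead buys the system the VM start-up time, so new capacity is already online when the predicted peak arrives instead of reacting after response times have degraded.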
9

Optimizing Cloudlet Scheduling and Wireless Sensor Localization using Computational Intelligence Techniques

Al-Olimat, Hussein S. 19 December 2014 (has links)
No description available.
