1 |
Scheduling and deployment of large-scale applications on Cloud platforms. Muresan, Adrian, 10 December 2012 (has links) (PDF)
Infrastructure as a Service (IaaS) Cloud platforms are increasingly used in the IT industry. IaaS platforms provide virtual resources from a catalogue of predefined types. Improvements in virtualization technology make it possible to create and destroy virtual machines on the fly, with low overhead. As a result, the great benefit of IaaS platforms is the ability to scale a virtual platform on the fly, while paying only for the resources actually used. From a research point of view, IaaS platforms raise new questions about how to make efficient virtual platform scaling decisions and how to schedule applications efficiently on such dynamic platforms. This thesis is a step towards exploring and answering these questions. The first contribution of this work focuses on resource management: automatically scaling cloud client applications to match changing platform usage. Various studies have shown self-similarities in web platform traffic, which implies the existence of usage patterns that may or may not be periodic. We have developed an automatic platform scaling strategy that predicts platform usage by identifying non-periodic usage patterns and extrapolating future usage from them. Next, we have focused on extending an existing grid platform with on-demand resources from an IaaS platform. We have developed an extension to the DIET (Distributed Interactive Engineering Toolkit) middleware that uses a virtual-market-based approach to resource allocation: each user is given a sum of virtual currency to spend on running their tasks. This mechanism helps ensure fair platform sharing between users. The third and final contribution targets application management for IaaS platforms. We have studied and developed allocation strategies for budget-constrained workflow applications that target IaaS Cloud platforms. The workflow abstraction is very common among scientific applications; examples are easy to find in any field, from bioinformatics to geology. In this work we consider a general model of workflow applications that comprise parallel tasks and permit non-deterministic transitions, and we have elaborated two budget-constrained allocation strategies for this type of workflow. The problem is a bi-criteria optimization problem, as we optimize both budget and workflow makespan. This work has been validated in practice by implementing it on top of the Nimbus open-source cloud platform and the DIET MADAG workflow engine, and by testing it with RAMSES, a cosmological simulation workflow application. RAMSES is a parallel MPI application that, as part of this work, was ported for execution on dynamic virtual platforms. Both theoretical simulations and practical experiments have shown encouraging results and improvements.
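To illustrate the pattern-extrapolation idea, here is a minimal Python sketch, not the thesis's actual algorithm: the window length, distance metric, and data are assumptions. It forecasts future usage by finding the historical window most similar to the most recent one and returning what followed it.

    import numpy as np

    def predict_usage(history, window=24, horizon=6):
        """Forecast future platform usage by pattern matching.

        Scans the usage history for the past window most similar
        (in Euclidean distance) to the most recent one, then returns
        the values that followed that best-matching window as the
        extrapolated forecast.
        """
        recent = history[-window:]
        best_dist, best_end = float("inf"), None
        # Slide over all complete window+horizon segments in the past.
        for end in range(window, len(history) - horizon):
            candidate = history[end - window:end]
            dist = np.linalg.norm(candidate - recent)
            if dist < best_dist:
                best_dist, best_end = dist, end
        if best_end is None:
            return np.repeat(recent[-1], horizon)  # fallback: flat forecast
        return history[best_end:best_end + horizon]

    # Example: hourly request rates; forecast the next 6 hours.
    rates = np.array([100, 120, 300, 280, 150, 90] * 20, dtype=float)
    print(predict_usage(rates))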
|
2 |
Window-based Cost-effective Auto-scaling Solution with Optimized Scale-in Strategy. Perera, Ashansa, January 2016 (has links)
Auto-scaling is a major way of minimizing the gap between the demand for and the availability of computing resources for applications with dynamic workloads. Even though much effort has been put into addressing the auto-scaling requirements of distributed systems, most of the available solutions are application-specific and focus only on fulfilling application-level requirements. Today, with the pay-as-you-go model of cloud computing, cloud providers offer many different price plans, which makes resource price an important decision-making criterion at the time of auto-scaling. One major step is using spot instances, which are advantageous for elasticity in terms of cost. However, using spot instances for auto-scaling must be handled carefully to avoid their drawbacks, since spot instances can be terminated at any time by the infrastructure provider. Although some cloud providers such as Amazon Web Services and Google Compute Engine have their own auto-scaling solutions, these do not pursue the goal of cost-effectiveness. In this work, we introduce an auto-scaling solution targeted at middle layers between the cloud and the application, such as Karamel. Our work combines minimizing the cost of the deployment with meeting the demand for resources. Our solution is a rule-based system built on top of resource utilization metrics, which serve as a more general measure of workload. Further, machine terminations and the billing periods of instances are taken into account as cloud source events. Different strategies, such as window-based profiling, dynamic event profiling, and an optimized scale-in strategy, are used to achieve our main goal of providing a cost-effective auto-scaling solution for cloud-based deployments. With the help of our simulation methodology, we explore the parameter space to find the best values under different workloads. Moreover, our cloud-based experiments show that our solution performs much more economically compared to the available cloud-based auto-scaling solutions.
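A minimal sketch of a billing-period-aware scale-in rule in the spirit described above (the hourly billing period, spot preference, and selection rule are illustrative assumptions, not the thesis's exact strategy): when scaling in, terminate the instance that wastes the least already-paid-for time.

    import time

    BILLING_PERIOD = 3600  # seconds; hourly billing assumed

    def pick_instance_to_terminate(instances, now=None):
        """Choose a scale-in victim that wastes the least paid time.

        Each instance is a dict with 'id', 'launch_time', and 'is_spot'.
        Prefer spot instances (they may be reclaimed anyway); among
        candidates, pick the one with the least time remaining in its
        current billing period.
        """
        now = now or time.time()

        def remaining_in_period(inst):
            elapsed = (now - inst["launch_time"]) % BILLING_PERIOD
            return BILLING_PERIOD - elapsed

        # Sort key: spot first, then least remaining paid time.
        return min(instances,
                   key=lambda i: (not i["is_spot"], remaining_in_period(i)))

    # Example: three instances launched at different times.
    now = time.time()
    fleet = [
        {"id": "i-1", "launch_time": now - 3500, "is_spot": False},
        {"id": "i-2", "launch_time": now - 1800, "is_spot": False},
        {"id": "i-3", "launch_time": now - 900,  "is_spot": True},
    ]
    print(pick_instance_to_terminate(fleet, now)["id"])  # -> "i-3"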
|
3 |
AWS Flap Detector: An Efficient way to detect Flapping Auto Scaling Groups on AWS Cloud. Chandrasekar, Dhaarini, 07 June 2016 (has links)
No description available.
|
4 |
Comparison of Recommendation Systems for Auto-scaling in the Cloud Environment. Boyapati, Sai Nikhil, January 2023 (has links)
Background: Cloud computing's rapid growth has highlighted the need for efficient resource allocation. While cloud platforms offer scalability and cost-effectiveness for a variety of applications, managing resources to match dynamic workloads remains a challenge. Auto-scaling, the dynamic allocation of resources in response to real-time demand and performance metrics, has emerged as a solution. Traditional rule-based methods struggle with the increasing complexity of cloud applications. Machine Learning models offer promising accuracy by learning from performance metrics and adapting resource allocations accordingly. Objectives: This thesis addresses auto-scaling recommendations for cloud environments, emphasizing the integration of Machine Learning models and significant application metrics. Its primary objectives are determining the critical metrics for accurate recommendations and evaluating the best recommendation techniques for auto-scaling. Methods: The study initially identifies, through thorough experimentation and analysis, the crucial metrics, such as CPU usage and memory consumption, that have a substantial impact on auto-scaling decisions. Machine Learning (ML) techniques are selected based on a literature review and then evaluated through further experimentation and analysis. These findings establish a foundation for the subsequent evaluation of ML techniques for auto-scaling recommendations. Results: The performance of Random Forests (RF), K-Nearest Neighbors (KNN), and Support Vector Machines (SVM) is investigated in this research. The results show that RF has higher accuracy, precision, and recall, which is consistent with the significance of the metrics identified earlier. Conclusions: This thesis enhances the understanding of auto-scaling recommendations by combining the findings on metric importance and recommendation technique performance. The findings reveal the complex interactions between metrics and recommendation methods, paving the way for adaptive auto-scaling systems that improve resource efficiency and application functionality.
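A short sketch of this kind of classifier comparison using scikit-learn (the synthetic data stands in for the thesis's metrics dataset; the feature semantics and labels are assumptions):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    # Synthetic stand-in: rows of (cpu_usage, memory_usage, ...) metrics,
    # labeled with a scaling decision (0 = no action, 1 = scale out).
    X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                               random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                        random_state=42)

    models = {
        "RF": RandomForestClassifier(random_state=42),
        "KNN": KNeighborsClassifier(),
        "SVM": SVC(),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        print(f"{name}: acc={accuracy_score(y_test, pred):.3f} "
              f"prec={precision_score(y_test, pred):.3f} "
              f"rec={recall_score(y_test, pred):.3f}")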
|
5 |
Scheduling and deployment of large-scale applications on Cloud platforms / Ordonnancement et déploiement d'applications de gestion de données à grande échelle sur des plates-formes de type Clouds. Muresan, Adrian, 10 December 2012 (has links)
Infrastructure as a Service (IaaS) Cloud platforms are increasingly used in the IT industry. IaaS platforms provide virtual resources from a catalogue of predefined types. Improvements in virtualization technology make it possible to create and destroy virtual machines on the fly, with low overhead. As a result, the great benefit of IaaS platforms is the ability to scale a virtual platform on the fly, while paying only for the resources actually used. From a research point of view, IaaS platforms raise new questions about how to make efficient virtual platform scaling decisions and how to schedule applications efficiently on such dynamic platforms. This thesis is a step towards exploring and answering these questions. The first contribution of this work focuses on resource management: automatically scaling cloud client applications to match changing platform usage. Various studies have shown self-similarities in web platform traffic, which implies the existence of usage patterns that may or may not be periodic. We have developed an automatic platform scaling strategy that predicts platform usage by identifying non-periodic usage patterns and extrapolating future usage from them. Next, we have focused on extending an existing grid platform with on-demand resources from an IaaS platform. We have developed an extension to the DIET (Distributed Interactive Engineering Toolkit) middleware that uses a virtual-market-based approach to resource allocation: each user is given a sum of virtual currency to spend on running their tasks. This mechanism helps ensure fair platform sharing between users. The third and final contribution targets application management for IaaS platforms. We have studied and developed allocation strategies for budget-constrained workflow applications that target IaaS Cloud platforms. The workflow abstraction is very common among scientific applications; examples are easy to find in any field, from bioinformatics to geology. In this work we consider a general model of workflow applications that comprise parallel tasks and permit non-deterministic transitions, and we have elaborated two budget-constrained allocation strategies for this type of workflow. The problem is a bi-criteria optimization problem, as we optimize both budget and workflow makespan. This work has been validated in practice by implementing it on top of the Nimbus open-source cloud platform and the DIET MADAG workflow engine, and by testing it with RAMSES, a cosmological simulation workflow application. RAMSES is a parallel MPI application that, as part of this work, was ported for execution on dynamic virtual platforms. Both theoretical simulations and practical experiments have shown encouraging results and improvements.
|
6 |
Towards auto-scaling in the cloud. Yazdanov, Lenar, 16 January 2017 (has links) (PDF)
Cloud computing provides easy access to computing resources. Customers can acquire and release resources at any time. However, it is not trivial to determine when and how many resources to allocate. Many applications running in the cloud face workload changes that affect their resource demand. The first thought is to plan capacity either for the average load or for the peak load. In the first case less cost is incurred, but performance suffers whenever the peak load occurs. The second case leads to wasted money, since the resources remain underutilized most of the time. Therefore there is a need for more sophisticated resource provisioning techniques that can automatically scale an application's resources according to workload demand and performance constraints.
Large cloud providers such as Amazon, Microsoft, and RightScale offer auto-scaling services. However, without proper configuration and testing, such services can do more harm than good. In this work I investigate application-specific online resource allocation techniques that dynamically adapt to the incoming workload, minimize the cost of virtual resources, and meet user-specified performance objectives.
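For context, a minimal sketch of the naive reactive threshold scaler that such application-specific techniques improve upon (the thresholds, limits, and metric source are illustrative assumptions):

    def desired_replicas(current, cpu_util, low=0.3, high=0.7,
                         min_n=1, max_n=20):
        """Simple threshold rule: add a machine when average CPU
        utilization is high, remove one when it is low."""
        if cpu_util > high:
            return min(current + 1, max_n)
        if cpu_util < low:
            return max(current - 1, min_n)
        return current

    # Example: utilization spikes, then falls off.
    n = 2
    for util in [0.45, 0.80, 0.85, 0.60, 0.20, 0.15]:
        n = desired_replicas(n, util)
        print(f"util={util:.2f} -> replicas={n}")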
|
7 |
On exploiting location flexibility in data-intensive distributed systems. Yu, Boyang, 12 October 2016 (has links)
With the fast growth of data-intensive distributed systems today, more novel and principled approaches are needed to improve system efficiency, ensure service quality that satisfies user requirements, and lower the system's running cost. This dissertation studies design issues in data-intensive distributed systems, which are differentiated from other systems by their heavy data-movement workload and by the fact that the destination of each data flow is limited to a subset of the available locations, such as the servers holding the requested data. Moreover, even within the feasible subset, different locations may yield different performance.
The studies in this dissertation improve data-intensive systems by exploiting the flexibility of data storage locations. They address how to reasonably determine data placement based on measured request patterns, using the proposed hypergraph models for data placement, in order to improve a series of performance metrics such as data access latency, system throughput, and various costs. To implement the proposal with lower overhead, a sketch-based data placement scheme is presented, which constructs a sparsified hypergraph under a distributed, streaming-based system model and achieves a good approximation of the performance improvement. As the network can become the bottleneck of distributed data-intensive systems due to frequent data movement among storage nodes, online data placement by reinforcement learning is proposed, which intelligently determines the storage location of each data item at the moment the item is written or updated, with joint awareness of network conditions and request patterns. Meanwhile, noticing that distributed memory caches are an effective means of lowering the load on backend storage systems, the auto-scaling of memory cache clusters is studied, balancing the energy cost of the service against the performance it must ensure.
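As a loose illustration of the online placement idea, the sketch below uses a simple epsilon-greedy bandit as a stand-in for the dissertation's reinforcement-learning scheme; the node names, the latency-based reward, and epsilon are assumptions:

    import random

    class OnlinePlacer:
        """Pick a storage node for each newly written item, learning
        from observed access latency (lower latency = higher reward)."""

        def __init__(self, nodes, epsilon=0.1):
            self.nodes = nodes
            self.epsilon = epsilon
            self.avg_reward = {n: 0.0 for n in nodes}
            self.count = {n: 0 for n in nodes}

        def choose(self):
            if random.random() < self.epsilon:  # explore
                return random.choice(self.nodes)
            # exploit: node with best average reward so far
            return max(self.nodes, key=lambda n: self.avg_reward[n])

        def feedback(self, node, latency_ms):
            # Incremental mean of the reward (negative latency).
            self.count[node] += 1
            reward = -latency_ms
            self.avg_reward[node] += (reward - self.avg_reward[node]) / self.count[node]

    placer = OnlinePlacer(["node-a", "node-b", "node-c"])
    for _ in range(100):
        node = placer.choose()
        latency = {"node-a": 12, "node-b": 5, "node-c": 30}[node] + random.random()
        placer.feedback(node, latency)
    print(max(placer.avg_reward, key=placer.avg_reward.get))  # likely "node-b"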
As the outcome of this dissertation, the designed schemes and methods help improve the running efficiency of data-intensive distributed systems. They can therefore either improve the user-perceived service quality at the same level of system resource investment, or lower the monetary expense and energy consumption of maintaining the system at the same performance standard. From these two perspectives, both end users and system providers can benefit from the results of these studies. / Graduate
|
8 |
Towards auto-scaling in the cloud: online resource allocation techniques. Yazdanov, Lenar, 26 September 2016 (has links)
Cloud computing provides easy access to computing resources. Customers can acquire and release resources at any time. However, it is not trivial to determine when and how many resources to allocate. Many applications running in the cloud face workload changes that affect their resource demand. The first thought is to plan capacity either for the average load or for the peak load. In the first case less cost is incurred, but performance suffers whenever the peak load occurs. The second case leads to wasted money, since the resources remain underutilized most of the time. Therefore there is a need for more sophisticated resource provisioning techniques that can automatically scale an application's resources according to workload demand and performance constraints.
Large cloud providers such as Amazon, Microsoft, and RightScale offer auto-scaling services. However, without proper configuration and testing, such services can do more harm than good. In this work I investigate application-specific online resource allocation techniques that dynamically adapt to the incoming workload, minimize the cost of virtual resources, and meet user-specified performance objectives.
|
9 |
Auto-scaling Prediction using Machine Learning Algorithms: Analysing Performance and Feature Correlation. Ahmed, Syed Saif, Arepalli, Harshini Devi, January 2023 (has links)
Despite its drawbacks, Covid-19 has recently contributed to highlighting the significance of cloud computing. The great majority of enterprises and organisations have shifted to a hybrid mode that enables users or workers to access their work environment from any location. This made it possible for businesses to save on-premises costs by moving their operations to the cloud. It has become essential to allocate resources effectively, especially through predictive auto-scaling. Although many algorithms have been studied for predictive auto-scaling, further analysis and validation remain to be done. The objectives of this thesis are to implement machine-learning algorithms for predicting auto-scaling and to compare their performance on common ground. The secondary objective is to find connections among the features within the dataset and evaluate their correlation coefficients. The methodology adopted for this thesis is experimentation, selected so that the auto-scaling algorithms can be tested in practical situations and their results compared to identify the best algorithm under the selected metrics; the experiment can also help determine whether the algorithms operate as predicted. Metrics such as Accuracy, F1-Score, Precision, Recall, Training Time, and Root Mean Square Error (RMSE) are calculated for the chosen algorithms: Random Forest (RF), Logistic Regression, Support Vector Machine, and Naive Bayes Classifier. The correlation coefficients of the features in the data are also measured, which helped increase the accuracy of the machine learning model. In conclusion, the features related to the target variable (CPU usage, p95_scaling) often had high correlation coefficients compared to other features; the relationships between these variables could potentially be influenced by other variables unrelated to the target. The experimentation also shows that the optimal algorithm for determining how cloud resources should be scaled is the Random Forest Classifier.
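A small sketch of the feature-correlation step using pandas (the synthetic frame and column names, apart from p95_scaling mentioned above, are assumptions; the thesis's dataset is not reproduced):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 500

    # Synthetic stand-in for the metrics dataset.
    cpu = rng.uniform(0, 100, n)
    mem = 0.6 * cpu + rng.normal(0, 10, n)  # correlated with CPU
    noise = rng.uniform(0, 1, n)            # unrelated feature
    scale_out = (cpu > 70).astype(int)      # hypothetical target label

    df = pd.DataFrame({"cpu_usage": cpu, "memory_usage": mem,
                       "disk_noise": noise, "p95_scaling": scale_out})

    # Rank features by absolute correlation with the target.
    corr = df.corr()["p95_scaling"].drop("p95_scaling")
    print(corr.abs().sort_values(ascending=False))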
|
10 |
Evaluation and comparison of a RabbitMQ broker solution on Amazon Web Services and Microsoft Azure / Evaluering och jämförelse av en RabbitMQ broker-lösning på Amazon Web Services och Microsoft Azure. Järvelä, Andreas, Lindmark, Sebastian, January 2019 (has links)
In this thesis, a scalable, highly available, and reactive RabbitMQ cluster is implemented on Amazon Web Services (AWS) and Microsoft Azure. An alternative solution was created on AWS using the CloudFormation service. These solutions are performance-tested with the RabbitMQ PerfTest tool by simulating high loads with varied parameters. The test results are used to analyze the throughput and the price-performance ratio of a chosen set of instances on the respective cloud platforms. How performance varies between instance family types and cloud platforms is tested and discussed. Additional conclusions are presented regarding general differences in infrastructure performance between AWS and Microsoft Azure.
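For a flavor of what such a load test measures, below is a minimal publish-throughput probe written with the pika Python client as a simplified stand-in for PerfTest; the broker endpoint, queue name, payload size, and message count are assumptions, and PerfTest itself offers far richer load shapes:

    import time
    import pika

    # Connect to a broker; "localhost" is a placeholder for the
    # cluster endpoint on AWS or Azure.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="perf-test")

    n_messages = 10_000
    body = b"x" * 1024  # 1 KiB payload

    start = time.time()
    for _ in range(n_messages):
        channel.basic_publish(exchange="", routing_key="perf-test", body=body)
    elapsed = time.time() - start

    print(f"published {n_messages} msgs in {elapsed:.2f}s "
          f"({n_messages / elapsed:.0f} msg/s)")
    connection.close()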
|