1 |
Performance Analysis of the Impact of Vertical Scaling on Application Containerized with Docker : Kubernetes on Amazon Web Services - EC2
Midigudla, Dhananjay, January 2019 (has links)
Containers are widely used as a base technology for packaging applications, and microservice architecture is gaining popularity for deploying large-scale applications, with containers running different aspects of the application. Because the load on a service is dynamic, the compute resources allocated to the containerized application need to be scaled up or down in order to maintain application performance. Objectives: To evaluate the impact of vertical scaling on the performance of a containerized application deployed with Docker containers and Kubernetes, including identification of the performance metrics that are most affected, and hence to characterize any eventual negative effect of vertical scaling. Method: A literature study on Kubernetes and Docker containers, followed by a proposed vertical scaling solution that can add or remove compute resources such as CPU and memory for the containerized application. Results and Conclusions: Latency and connect time were the performance metrics analyzed for the containerized application. From the obtained results, it was concluded that vertical scaling has no significant impact on the performance of a containerized application in terms of latency and connect time.
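A hedged illustration of the kind of experiment this abstract describes: apply a vertical scaling step to a containerized deployment and sample connect time and total latency before and after. The deployment name `web-app`, namespace, target host, and resource values are assumptions for the sketch, not details from the thesis, and `kubectl set resources` is only one possible way to change a container's CPU/memory allocation.

```python
# Sketch: vertically scale a Kubernetes deployment, then sample latency and
# connect time. Names, host, and resource values are illustrative assumptions.
import subprocess
import time
import http.client
import statistics

def set_resources(deployment: str, cpu: str, memory: str, namespace: str = "default") -> None:
    """Apply a vertical scaling step by changing CPU/memory requests and limits."""
    subprocess.run(
        ["kubectl", "set", "resources", f"deployment/{deployment}",
         "-n", namespace,
         f"--limits=cpu={cpu},memory={memory}",
         f"--requests=cpu={cpu},memory={memory}"],
        check=True,
    )

def sample(host: str, port: int, path: str = "/", n: int = 50):
    """Measure median connect time and total request latency over n requests."""
    connects, latencies = [], []
    for _ in range(n):
        start = time.perf_counter()
        conn = http.client.HTTPConnection(host, port, timeout=5)
        conn.connect()
        connects.append(time.perf_counter() - start)
        conn.request("GET", path)
        conn.getresponse().read()
        latencies.append(time.perf_counter() - start)
        conn.close()
    return statistics.median(connects), statistics.median(latencies)

if __name__ == "__main__":
    print("before scaling:", sample("app.example.com", 80))
    set_resources("web-app", cpu="1000m", memory="1Gi")  # scale up
    time.sleep(30)                                       # let pods roll over
    print("after scaling:", sample("app.example.com", 80))
```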
|
2 |
A Framework For Elastic Execution of Existing MPI Programs
Raveendran, Aarthi, 08 September 2011 (has links)
No description available.
|
3 |
Network Latency Estimation Leveraging Network Path Classification
Omer Mahgoub Saied, Khalid, January 2018 (has links)
With the development of the Internet, new network services with strict network latency requirements have become possible. These services are implemented as distributed systems deployed across multiple geographical locations. To provide low response time, these services require knowledge about the current network latency. Unfortunately, network latency among geo-distributed sites often changes, so distributed services rely on continuous network latency measurements. One goal of such measurements is to differentiate momentary latency spikes from relatively long-term latency changes. The differentiation is achieved through statistical processing of the collected samples. This approach of high-frequency network latency measurements has high overhead, is slow to identify network latency changes, and lacks accuracy. We propose a novel approach for network latency estimation that correlates network paths with network latency. We demonstrate that network latency can be accurately estimated by first measuring and identifying the network path in use and then fetching the expected latency for that network path based on a previous set of measurements. Based on these principles, we introduce Sudan traceroute, a network latency estimation tool. Sudan traceroute can be used both to reduce the latency estimation time and to reduce the overhead of network path measurements. Sudan traceroute uses an improved path detection mechanism that sends only a few carefully selected probes in order to identify the current network path. We have developed and evaluated Sudan traceroute in a test environment and evaluated its feasibility on real-world networks using Amazon EC2. Using Sudan traceroute we have shortened the time it takes for hosts to identify network latency level changes compared to existing approaches.
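A minimal sketch of the path-keyed estimation idea described above, not the actual Sudan traceroute implementation: identify the current path with one probe per hop, reuse the latency recorded for that path if it has been seen before, and fall back to a direct measurement otherwise. The system traceroute binary, the target host, and the in-memory table are assumptions of the sketch.

```python
# Illustrative path-keyed latency estimation: a previously observed hop
# sequence maps directly to an expected latency from earlier measurements.
import re
import subprocess
import time
import socket

# Expected latency (seconds) per previously observed path; a real system would
# populate this table from earlier measurement rounds.
expected_latency: dict[tuple, float] = {}

def identify_path(host: str) -> tuple:
    """Return the sequence of hop IPs toward host, one probe per hop."""
    out = subprocess.run(
        ["traceroute", "-n", "-q", "1", host],
        capture_output=True, text=True, check=True,
    ).stdout
    hops = re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", out, flags=re.M)
    return tuple(hops)

def measure_latency(host: str, port: int = 80) -> float:
    """Direct measurement (TCP connect time), used when a path is new."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter() - start

def estimate_latency(host: str) -> float:
    path = identify_path(host)
    if path in expected_latency:        # known path: reuse the earlier estimate
        return expected_latency[path]
    rtt = measure_latency(host)         # unknown path: measure and remember
    expected_latency[path] = rtt
    return rtt

if __name__ == "__main__":
    print(estimate_latency("ec2.eu-west-1.amazonaws.com"))
```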
|
4 |
A Cloud-Based Execution Environment for a Pandemic Simulator
Basile, Maurizio, Raciti, Massimiliano Gabriele, January 2009 (links)
The aim of this thesis is to develop a flexible distributed platform designed to execute a disease outbreak simulator quickly over many types of platforms and operating systems. The architecture is realized using the Elastic Compute Cloud (EC2) supplied by Amazon, with Condor as middleware among the various types of OS. The second part of the report describes the realization of a web application that allows users to easily manage the various parts of the architecture, to launch the simulations, and to view some statistics of the respective results.
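A hedged sketch of the EC2 side of such an architecture, written with today's boto3 client rather than the tooling available in 2009: launch a handful of worker instances from an AMI assumed to be pre-configured to join a Condor pool at boot. The AMI ID, region, instance type, and tags are placeholders.

```python
# Hypothetical worker-node launcher for a Condor pool on EC2. The AMI is
# assumed to contain a configured Condor installation; all identifiers below
# are placeholders, not values from the thesis.
import boto3

def launch_condor_workers(ami_id: str, count: int, instance_type: str = "t3.medium"):
    ec2 = boto3.client("ec2", region_name="eu-west-1")
    response = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "condor-worker"}],
        }],
    )
    return [i["InstanceId"] for i in response["Instances"]]

if __name__ == "__main__":
    print(launch_condor_workers("ami-0123456789abcdef0", count=4))
```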
|
5 |
Evaluation and Optimization of Turnaround Time and Cost of HPC Applications on the Cloud
Marathe, Aniruddha Prakash, January 2014 (links)
The popularity of Amazon's EC2 cloud platform has increased in the commercial and scientific high-performance computing (HPC) application domains in recent years. However, many HPC users consider dedicated high-performance clusters, typically found in large compute centers such as those in national laboratories, to be far superior to EC2 because of the latter's significant communication overhead. We find this view to be quite narrow, and the proper metrics for comparing high-performance clusters to EC2 are turnaround time and cost. In this work, we first compare an HPC-grade EC2 cluster to top-of-the-line HPC clusters based on turnaround time and total cost of execution. When measuring turnaround time, we include expected queue wait time on HPC clusters. Our results show that although standard HPC clusters are, as expected, superior in raw performance, they suffer from potentially significant queue wait times. We show that EC2 clusters may produce better turnaround times due to typically lower queue wait times. To estimate cost, we developed a pricing model, relative to EC2's node-hour prices, to set node-hour prices for (currently free) HPC clusters. We observe that the cost-effectiveness of running an application on a cluster depends on raw performance and application scalability. However, despite the potentially lower queue wait and turnaround times, the primary barrier to using clouds for many HPC users is the cost. Amazon EC2 provides a fixed-cost option (called on-demand) and a variable-cost, auction-based option (called the spot market). The spot market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete. Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 spot market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to 7x cheaper than using the on-demand market and up to 44% cheaper than the best non-redundant spot-market algorithm. Finally, we extend our adaptive algorithm to exploit several opportunities for cost savings on the EC2 spot market. First, we incorporate application scalability characteristics into our adaptive policy. We show that the adaptive algorithm, when informed with application scalability characteristics, achieves up to 56% cost savings compared to the expected cost of the base adaptive algorithm run at a fixed, user-defined scale. Second, we demonstrate the potential for obtaining considerable free computation time on the spot market enabled by its hour-boundary pricing model.
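A toy version of the cost trade-off described in this abstract, not the thesis' adaptive algorithm: compare the expected cost of on-demand nodes against spot nodes that may be interrupted and pay a checkpoint/recovery overhead, subject to a deadline. All prices, interruption rates, and overheads are invented example values.

```python
# Toy spot-vs-on-demand cost model under a deadline. Every numeric parameter
# below is an invented example, not a value from the thesis.

def pick_market(work_hours: float, deadline_hours: float) -> str:
    od_price, spot_price = 1.00, 0.30          # $/node-hour (examples)
    interrupt_rate = 0.05                      # expected interruptions per hour
    recovery_hours = 0.5                       # hours lost per interruption

    # Expected extra runtime on spot due to restarts from the last checkpoint.
    expected_spot_hours = work_hours + interrupt_rate * work_hours * recovery_hours
    spot_cost = expected_spot_hours * spot_price
    od_cost = work_hours * od_price

    if expected_spot_hours > deadline_hours:   # spot would miss the deadline
        return "on-demand"
    return "spot" if spot_cost < od_cost else "on-demand"

if __name__ == "__main__":
    print(pick_market(work_hours=10, deadline_hours=12))  # -> "spot" with these values
```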
|
6 |
Cloud Computing as a Tool to Secure and Manage Information Flow in Swedish Armed Forces Networks
Ali, Muhammad Usman, January 2012 (links)
In the last few years cloud computing has created much hype in the IT world. It has provided new strategies to cut down costs and provide better utilization of resources. Despite these advantages, cloud infrastructure has long been discussed for its vulnerabilities and security issues. There is a long list of service providers and clients who have implemented different service structures using cloud infrastructure. Despite all these efforts, many organizations, especially those with higher security concerns, have doubts about data privacy and theft protection in the cloud. This thesis aims to encourage Swedish Armed Forces (SWAF) networks to move to cloud infrastructures, as this is the technology that will make a huge difference and revolutionize service delivery models in the IT world. Organizations that avoid it risk lagging behind, but at the same time organizations should adopt the cloud strategy that is most reliable and compatible with their requirements. This document provides an insight into different technologies and tools implemented specifically for monitoring and security in the cloud. Much emphasis is given to virtualization technology because cloud computing relies heavily on it. The Amazon EC2 cloud is analyzed from a security point of view. An intensive survey has also been conducted to understand market trends and people's perceptions of cloud implementation, security threats, cost savings, and the reliability of the different services provided.
|
7 |
A Cloud-Based Execution Environment for a Pandemic Simulator
Basile, Maurizio, Raciti, Massimiliano Gabriele, January 2009 (links)
The aim of this thesis is to develop a flexible distributed platform designed to execute a disease outbreak simulator quickly over many types of platforms and operating systems. The architecture is realized using the Elastic Compute Cloud (EC2) supplied by Amazon, with Condor as middleware among the various types of OS. The second part of the report describes the realization of a web application that allows users to easily manage the various parts of the architecture, to launch the simulations, and to view some statistics of the respective results.
|
8 |
Prestandajämförelse mellan Amazon EC2 och privat datacenter / Performance comparison between Amazon EC2 and private computer center
Johansson, Daniel, Jibing, Gustav, Krantz, Johan, January 2013 (links)
For some years now, public clouds have been an alternative for companies to use instead of local data centers. What public clouds offer is a service that makes it possible for companies and private individuals to rent compute capacity, which means they no longer need to spend money on resources that are not used. Instead of buying a large amount of hardware and estimating how much capacity is needed, one can now gradually expand as needed or scale down if desired. Companies therefore do not need to spend money on unused hardware, or end up with too little compute capacity, which could result in large batch jobs not finishing in time and the company losing potential customers. However, potential problems can arise when virtualizing in a cloud and trying to distribute compute capacity among several thousand instances, where scalability is also supposed to have no limits, according to the cloud providers. In this report we have used various benchmarks to analyze the performance of the largest public cloud provider on the market, Amazon, and their EC2 and S3 services. We have performed performance tests on system memory, MPI, and hard disk I/O, as these are some of the factors preventing public clouds from taking over the market, according to the article Above The Clouds - A Berkeley View of Cloud Computing [3]. We then compared the results with the performance of a private cloud in a data center. Our results indicate that performance in the public cloud is not predictable and needs a substantial boost before large companies have a reason to start using it.
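A simplified sketch of the disk I/O portion of such a benchmark comparison. The thesis relied on established benchmark tools; this sketch only measures sequential write and read throughput of one test file, and the read pass may be served from the OS page cache, so its result is optimistic.

```python
# Minimal sequential disk throughput check (illustration only, not the
# benchmark suite used in the thesis).
import os
import time

def disk_throughput(path: str = "bench.tmp", size_mb: int = 256, block_kb: int = 1024):
    block = os.urandom(block_kb * 1024)
    blocks = size_mb * 1024 // block_kb

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())          # ensure the data actually reaches disk
    write_mbps = size_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(path, "rb") as f:       # note: may be served from the page cache
        while f.read(block_kb * 1024):
            pass
    read_mbps = size_mb / (time.perf_counter() - start)

    os.remove(path)
    return write_mbps, read_mbps

if __name__ == "__main__":
    w, r = disk_throughput()
    print(f"write: {w:.1f} MB/s, read: {r:.1f} MB/s")
```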
|
9 |
KTHFS Orchestration : PaaS orchestration for Hadoop
Lorente Leal, Alberto, January 2013 (links)
Platform as a Service (PaaS) has had a huge impact on how we can offer easy-to-use and scalable software that adapts to the needs of its users. It has made it possible for systems to configure themselves upon customer demand. Based on these features, large interest has emerged in offering virtualized Hadoop solutions on top of Infrastructure as a Service (IaaS) architectures in order to easily deploy completely functional Hadoop clusters on platforms like Amazon EC2 or OpenStack. Throughout the thesis work, we studied the possibility of enhancing the capabilities of KTHFS, a modified Hadoop platform in development, to allow automatic configuration of a whole functional cluster on IaaS platforms. To achieve this, we study different proposals for similar PaaS platforms from companies like VMware and Amazon, and analyze existing node orchestration techniques for configuring nodes in cloud providers like Amazon or OpenStack and automating this process. This is the starting point for the work, which leads to the development of our own orchestration language for KTHFS and two artifacts: (i) a simple web portal to launch the KTHFS Dashboard on the supported IaaS platforms, and (ii) an integrated component in the Dashboard in charge of analyzing a cluster definition file and initializing the configuration and deployment of a cluster using Chef. Lastly, we discover new issues related to scalability and performance when integrating the new components into the Dashboard. This forces us to analyze solutions for optimizing the performance of our deployment architecture, which allows us to reduce the deployment time by introducing a few modifications to the architecture. Finally, we conclude with a few words about ongoing and future work.
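A hypothetical sketch of the "parse a cluster definition, then configure each node" step. The JSON format, the `kthfs` cookbook name, and the remote chef-solo invocation are assumptions for illustration, not the orchestration language or deployment pipeline developed in the thesis.

```python
# Hypothetical cluster-definition parser that drives per-node Chef runs.
# Assumes SSH keys, chef-solo, and the referenced cookbooks are already
# installed on each node; everything here is an illustrative stand-in.
import json
import subprocess

CLUSTER_DEF = """
{
  "cluster": "kthfs-test",
  "provider": "ec2",
  "nodes": [
    {"host": "10.0.0.10", "roles": ["namenode", "dashboard"]},
    {"host": "10.0.0.11", "roles": ["datanode"]},
    {"host": "10.0.0.12", "roles": ["datanode"]}
  ]
}
"""

def deploy(definition: str) -> None:
    spec = json.loads(definition)
    for node in spec["nodes"]:
        # Build a per-node attribute file mapping roles to a Chef run list.
        attrs = {"run_list": [f"recipe[kthfs::{role}]" for role in node["roles"]]}
        attr_file = f"{node['host']}.json"
        with open(attr_file, "w") as f:
            json.dump(attrs, f)
        # Copy the attributes to the node and apply them with chef-solo.
        subprocess.run(["scp", attr_file, f"{node['host']}:/tmp/{attr_file}"], check=False)
        subprocess.run(
            ["ssh", node["host"], "sudo", "chef-solo", "-j", f"/tmp/{attr_file}"],
            check=False,
        )

if __name__ == "__main__":
    deploy(CLUSTER_DEF)
```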
|
10 |
Integrating the Meta Attack Language in the Cybersecurity Ecosystem: Creating new Security Tools Using Attack Simulation Results
Grönberg, Frida, Thiberg, Björn, January 2022 (links)
Cyber threat modeling and attack simulations are new methods to assess and analyze the cybersecurity of IT environments. The Meta Attack Language (MAL) was created to formalize the underlying attack logic of such simulations by providing a framework to create domain-specific languages (DSLs). DSLs can be used in conjunction with modeling software to simulate cyber attacks. The goal of this project was to examine how MAL can be integrated in a wider cybersecurity context by directly combining attack simulation results with other tools in the cybersecurity ecosystem. The result was a proof of concept where a small DSL is created for Amazon EC2. Information is gathered about a certain EC2 instance and used to create a model and run an attack simulation. The resulting attack path was used to perform an offensive measure in Pacu, an AWS exploitation framework. The result was examined to arrive at conclusions about the proof of concept itself and about integrating MAL in the cybersecurity ecosystem in a more general sense. It was found that while the project was successful in showing that integrating MAL results in such a manner is possible, the CAD modeling process is not an optimal route, and domains other than the cloud environment could be targeted. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
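A conceptual sketch of the attack-path-to-Pacu step, under the assumption that the simulation output can be reduced to an ordered list of attack-step names. Both the step names and the Pacu module names in the mapping are illustrative assumptions, not the project's actual DSL or integration code.

```python
# Translate simulated attack steps into suggested Pacu modules. The mapping
# below is a hypothetical example; verify module names against the installed
# Pacu version before use.
from typing import List

STEP_TO_MODULE = {
    "Instance.connect":             "ec2__enum",
    "Credentials.obtain":           "iam__enum_users_roles_policies_groups",
    "Instance.privilegeEscalation": "iam__privesc_scan",
}

def suggest_pacu_modules(attack_path: List[str]) -> List[str]:
    """Return Pacu modules in attack-path order, skipping steps with no
    known offensive counterpart and avoiding duplicates."""
    modules: List[str] = []
    for step in attack_path:
        module = STEP_TO_MODULE.get(step)
        if module and module not in modules:
            modules.append(module)
    return modules

if __name__ == "__main__":
    path = ["Instance.connect", "Credentials.obtain", "Instance.privilegeEscalation"]
    print(suggest_pacu_modules(path))
```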
|