11. Prestandajämförelse mellan Amazon EC2 och privat datacenter / Performance comparison between Amazon EC2 and private computer center. Johansson, Daniel; Jibing, Gustav; Krantz, Johan. January 2013.
Public clouds have in recent years become an alternative for companies to use instead of local data centers. What public clouds offer is a service that lets companies and private individuals rent computing capacity, so that they no longer need to spend money on resources that sit unused. Instead of buying a large amount of hardware and estimating how much capacity will be needed, one can start small and scale up on demand, or scale down if desired. Companies therefore neither have to spend money on idle hardware nor end up with too little computing capacity, which could cause large batch jobs to miss their deadlines and cost the company potential customers. Problems can arise, however, when a cloud virtualizes and tries to distribute computing capacity across several thousand instances, while scalability, according to the cloud providers, is supposed to have no limits. In this report we use various benchmarks to analyze the performance of the largest public cloud provider on the market, Amazon, and its EC2 and S3 services. We run performance tests on system memory, MPI and hard disk I/O, since these are some of the factors that keep public clouds from taking over the market, according to the article Above the Clouds: A Berkeley View of Cloud Computing [3]. We then compare the results with the performance of a private cloud in a data center. Our results indicate that the performance of the public cloud is not predictable and needs a substantial boost before large companies have a reason to start using it.
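The thesis relies on established benchmark suites; purely as an illustration of the kind of micro-benchmark it describes (this is not code from the report), the Python sketch below estimates memory-copy bandwidth and sequential disk-write throughput and could be run unchanged on an EC2 instance and on a private server for a first, rough comparison:

```python
import os
import time
import numpy as np

def memory_bandwidth_gbps(n_floats=20_000_000, repeats=5):
    """Rough STREAM-copy-style estimate: time repeated array copies."""
    src = np.ones(n_floats, dtype=np.float64)
    dst = np.empty_like(src)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.copyto(dst, src)
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 2 * src.nbytes          # one read + one write per element
    return bytes_moved / best / 1e9

def disk_write_mbps(path="io_probe.bin", total_mb=512, block_mb=4):
    """Sequential write throughput for one file, fsync'd at the end."""
    block = os.urandom(block_mb * 1024 * 1024)
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - t0
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"memory copy bandwidth: {memory_bandwidth_gbps():.1f} GB/s")
    print(f"sequential disk write: {disk_write_mbps():.0f} MB/s")
```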
12. KTHFS Orchestration: PaaS orchestration for Hadoop. Lorente Leal, Alberto. January 2013.
Platform as a Service (PaaS) has had a huge impact on how we can offer easy, scalable software that adapts to the needs of its users, by allowing systems to configure themselves on demand. Based on these features, a large interest has emerged in offering virtualized Hadoop solutions on top of Infrastructure as a Service (IaaS) architectures, in order to easily deploy fully functional Hadoop clusters on platforms like Amazon EC2 or OpenStack. Throughout the thesis work, we studied the possibility of enhancing the capabilities of KTHFS, a modified Hadoop platform under development, to allow automatic configuration of a complete, functional cluster on IaaS platforms. To achieve this, we study proposals for similar PaaS platforms from providers such as VMware and Amazon, analyze existing node orchestration techniques for configuring nodes on cloud providers like Amazon or OpenStack, and then automate this process. This is the starting point of the work, which leads to the development of our own orchestration language for KTHFS and two artifacts: (i) a simple web portal to launch the KTHFS Dashboard on the supported IaaS platforms, and (ii) a component integrated into the Dashboard that analyzes a cluster definition file and initializes the configuration and deployment of a cluster using Chef. Lastly, we discover new issues related to scalability and performance when integrating the new components into the Dashboard, which forces us to analyze solutions for optimizing the performance of our deployment architecture; a few modifications to the architecture allow us to reduce deployment time. Finally, we conclude with a few words about ongoing and future work.
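The abstract does not show the KTHFS cluster definition format or the orchestration language itself, so the sketch below is purely illustrative, with hypothetical field names: it shows how a declarative cluster definition could be expanded into per-node Chef run lists before deployment on EC2 or OpenStack.

```python
import json

# Hypothetical cluster definition: the real KTHFS file format is not shown
# in the abstract, so these field names are illustrative only.
cluster_definition = """
{
  "cluster": "kthfs-demo",
  "provider": "ec2",
  "groups": [
    {"name": "namenode", "size": 1,  "recipes": ["kthfs::namenode"]},
    {"name": "datanode", "size": 10, "recipes": ["kthfs::datanode"]}
  ]
}
"""

def expand_nodes(definition_json):
    """Turn each group into concrete node specs that an orchestrator could
    hand to the IaaS API and to Chef (one run list per node)."""
    spec = json.loads(definition_json)
    nodes = []
    for group in spec["groups"]:
        for i in range(group["size"]):
            nodes.append({
                "name": f'{spec["cluster"]}-{group["name"]}-{i}',
                "provider": spec["provider"],
                "run_list": [f"recipe[{r}]" for r in group["recipes"]],
            })
    return nodes

if __name__ == "__main__":
    for node in expand_nodes(cluster_definition):
        print(node["name"], node["run_list"])
```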
13. Integrating the Meta Attack Language in the Cybersecurity Ecosystem: Creating new Security Tools Using Attack Simulation Results. Grönberg, Frida; Thiberg, Björn. January 2022.
Cyber threat modeling and attack simulations are new methods to assess and analyze the cybersecurity of IT environments. The Meta Attack Language (MAL) was created to formalize the underlying attack logic of such simulations by providing a framework for creating domain-specific languages (DSLs). DSLs can be used in conjunction with modeling software to simulate cyber attacks. The goal of this project was to examine how MAL can be integrated in a wider cybersecurity context by directly combining attack simulation results with other tools in the cybersecurity ecosystem. The result was a proof of concept in which a small DSL was created for Amazon EC2. Information is gathered about a given EC2 instance and used to create a model and run an attack simulation. The resulting attack path was then used to perform an offensive measure in Pacu, an AWS exploitation framework. The result was examined to draw conclusions about the proof of concept itself and about integrating MAL in the cybersecurity ecosystem in a more general sense. It was found that while the project succeeded in showing that integrating MAL results in this manner is possible, the CAD modeling process is not an optimal route, and domains other than the cloud environment could also be targeted. / Bachelor's degree project in electrical engineering 2022, KTH, Stockholm.
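As a sketch of the information-gathering step the abstract mentions, the snippet below uses boto3 (the AWS SDK for Python, assumed installed and configured with credentials allowing ec2:DescribeInstances) to pull a few facts about a single EC2 instance of the kind an attack-simulation model would consume. The instance id, region and selected fields are illustrative; this is not code from the project.

```python
import boto3

def collect_instance_facts(instance_id, region="eu-north-1"):
    """Collect facts about one EC2 instance that a threat model could use
    (public exposure, attached security groups, IAM profile)."""
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    instance = reservations[0]["Instances"][0]
    return {
        "instance_id": instance["InstanceId"],
        "public_ip": instance.get("PublicIpAddress"),  # None if not exposed
        "security_groups": [sg["GroupId"] for sg in instance["SecurityGroups"]],
        "iam_profile": instance.get("IamInstanceProfile", {}).get("Arn"),
    }

if __name__ == "__main__":
    # Hypothetical instance id.
    print(collect_instance_facts("i-0123456789abcdef0"))
```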
14. Performance Evaluation of Cassandra Scalability on Amazon EC2. Srinadhuni, Siddhartha. January 2018.
Context. In the fields of communication systems and computer science, Infrastructure as a Service provides the building blocks for cloud computing together with robust network features. AWS is one such infrastructure as a service, offering several services, of which Elastic Compute Cloud (EC2) is used to deploy virtual machines across several data centers and provides fault-tolerant storage for applications in the cloud. Apache Cassandra is one of the many NoSQL databases that provide fault tolerance and elasticity across servers. Its ring structure makes communication between the nodes of a cluster effective, and it is robust in the sense that there is no downtime when new Cassandra nodes are added to an existing cluster. Objectives. This study quantifies the latency of adding Cassandra nodes to Amazon EC2 instances and assesses the impact of replication factors (RF) and consistency levels (CL) on autoscaling. Methods. First, a literature review is conducted on how an experiment with the above constraints can be carried out. An experiment is then conducted to measure the latency and the effects of autoscaling. A 3-node Cassandra cluster runs on Amazon EC2 with Ubuntu 14.04 LTS as the operating system. A threshold value is identified for each Cassandra-specific configuration, and the cluster is scaled out to five nodes on AWS using the cassandra-stress benchmarking tool. This procedure is repeated for a 5-node Cassandra cluster and for each configuration, with a mixed workload of equal reads and writes. Results. The latency of adding Cassandra nodes on Amazon EC2 instances has been identified, and the impacts of replication factors and consistency levels on autoscaling have been quantified. Conclusions. Latency decreases after autoscaling for all Cassandra configurations, and changing the replication factors and consistency levels also changes Cassandra's performance.
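The two Cassandra parameters studied enter the client API at different points: the replication factor is fixed when a keyspace is created, while the consistency level is chosen per request. A minimal sketch with the Python cassandra-driver, assuming a reachable cluster at the illustrative IP addresses below:

```python
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

# Contact points are illustrative; in the experiment these would be the
# addresses of the EC2-hosted Cassandra nodes.
cluster = Cluster(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
session = cluster.connect()

# Replication factor is set per keyspace...
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS bench
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.set_keyspace("bench")
session.execute("CREATE TABLE IF NOT EXISTS kv (k int PRIMARY KEY, v text)")

# ...while consistency level is chosen per request.
write = SimpleStatement(
    "INSERT INTO kv (k, v) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM)
session.execute(write, (1, "hello"))

read = SimpleStatement(
    "SELECT v FROM kv WHERE k = %s",
    consistency_level=ConsistencyLevel.ONE)
print(session.execute(read, (1,)).one().v)

cluster.shutdown()
```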
15. AES-kryptering med CUDA: Skillnader i beräkningshastighet mellan AES-krypteringsmetoderna ECB och CTR vid implementering med Cuda-ramverket. Vidén, Pontus; Henningsson, Viktor. January 2020.
Purpose – The purpose of this study is partly to show how the AES encryption modes ECB and CTR affect computational speed when using the GPGPU framework CUDA, and partly to clarify the advantages and disadvantages of the different AES encryption modes. Method – A preliminary study was conducted to obtain empirical data on the AES encryption modes ECB and CTR. Data from the study has been analyzed and compared to characterize the various aspects of the modes and to create a basis for determining the advantages and disadvantages between them. The preliminary study was carried out systematically by searching databases for scientific work on the subject. An experiment was used to obtain execution-time data for the GPGPU framework CUDA when processing the AES encryption modes; an experiment was chosen as the method in order to gain control over the variables included in the study and to see how they change when deliberately influenced. Findings – The preliminary study shows that CTR is more secure than ECB, but also considerably more complex, which can lead to integrity risks when it is implemented incorrectly. In the experiment, timings are measured for the transfer from CPU memory to GPU memory, for the encryption on the GPU, and for the transfer from GPU memory back to CPU memory, for both CTR and ECB, in encryption as well as decryption. The analysis shows that ECB is faster than CTR in both encryption and decryption. Implications – The experiment shows that CTR is slower than ECB, but for both modes most of the time is spent on the transfers between CPU memory and GPU memory. Limitations – The files tested only go up to about 1 gigabyte, which gives short computation times.
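The two modes can be illustrated without a GPU: the sketch below uses the pycryptodome library on the CPU to time ECB and CTR over the same buffer. Absolute numbers are not comparable to the thesis's CUDA measurements; the point is only that both modes process independent blocks (ECB directly, CTR via independent counter blocks), which is what makes them attractive for GPU parallelization.

```python
import os
import time
from Crypto.Cipher import AES  # pycryptodome

key = os.urandom(16)
data = os.urandom(64 * 1024 * 1024)  # 64 MiB, a multiple of the 16-byte block size

def time_mode(make_cipher):
    """Encrypt the whole buffer once and return the elapsed time in seconds."""
    t0 = time.perf_counter()
    make_cipher().encrypt(data)
    return time.perf_counter() - t0

ecb_s = time_mode(lambda: AES.new(key, AES.MODE_ECB))
ctr_s = time_mode(lambda: AES.new(key, AES.MODE_CTR, nonce=os.urandom(8)))

print(f"ECB: {len(data) / ecb_s / 1e6:.0f} MB/s")
print(f"CTR: {len(data) / ctr_s / 1e6:.0f} MB/s")
```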
16. Testing Lifestyle Store Website Using JMeter in AWS and GCP. Tangella, Ankhit; Katari, Padmaja. January 2022.
Background: As cloud computing has grown over the last decades, several cloud services have become available on the market, and users may prefer those that are more flexible and efficient. Based on this, we chose to evaluate cloud services in terms of which serves a user better at delivering the requested data from a chosen website, using JMeter for performance testing with different numbers of simulated users. The user interfaces of GCP and AWS are also compared while performing several compute-engine-related operations. Objectives: This thesis aims to test the performance of a website after deploying it on two distinct cloud platforms. After creating instances in AWS, a domain in GCP and a storage bucket, the website files are uploaded to the bucket, and the GCP and AWS instances are connected to the Lifestyle Store website. Performance testing of the selected website is done on both services, and the outcomes are then compared using the testing tool JMeter. Methods: We chose experimentation as our research methodology: the website is deployed separately on the two cloud platforms and its performance is tested with JMeter on each of them. The results are visualized in aggregate graphs, ordinary graphs and summary reports, using the metrics throughput, average response time, median, percentiles and standard deviation. Results: The results come from JMeter performance tests of the selected website on the two cloud platforms, presented in aggregate graphs, and show which service lets users obtain a response from the website for requested data in the shortest time. We considered 500 and 1000 users and compared throughput, average response time, standard deviation and percentiles; the 1000-user results were used to determine which cloud platform performs better. Conclusions: According to the results for 1000 users, AWS has a higher throughput and a lower average response time than GCP. Thus, AWS outperforms GCP in terms of performance in this experiment.
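JMeter can write its samples to a CSV results file (.jtl), from which the aggregate-report metrics named above can be recomputed offline. A sketch assuming the default timeStamp and elapsed columns, with hypothetical file names for the AWS and GCP runs:

```python
import csv
import statistics

def summarize_jtl(path):
    """Compute aggregate-report style metrics from a JMeter CSV (.jtl) file,
    assuming the default 'timeStamp' and 'elapsed' columns are present."""
    elapsed, stamps = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            elapsed.append(int(row["elapsed"]))   # response time in ms
            stamps.append(int(row["timeStamp"]))  # epoch ms when sample started
    elapsed.sort()
    duration_s = (max(stamps) - min(stamps)) / 1000 or 1

    def pct(p):
        return elapsed[min(len(elapsed) - 1, int(p / 100 * len(elapsed)))]

    return {
        "samples": len(elapsed),
        "throughput_per_s": len(elapsed) / duration_s,
        "average_ms": statistics.mean(elapsed),
        "median_ms": statistics.median(elapsed),
        "p90_ms": pct(90),
        "p95_ms": pct(95),
        "p99_ms": pct(99),
        "stdev_ms": statistics.pstdev(elapsed),
    }

if __name__ == "__main__":
    # Hypothetical file names for the two 1000-user runs.
    for name in ("aws_1000_users.jtl", "gcp_1000_users.jtl"):
        print(name, summarize_jtl(name))
```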
17. Function as a Service: En fallstudie av Pennan & Svärdet och dess applikation Warstories. Neterowicz, Martin; Johansson, Jacob. January 2017.
Every year a tremendous amount of resources is lost on failed IT systems, so it is of interest to explore potentially cost-saving technologies. One such technology, which has been around for many years, is cloud computing. Cloud computing can potentially lower the costs of IT projects by, for example, eliminating the need to maintain server hardware. One of the more recent additions to the cloud computing assortment is Function as a Service (FaaS). What is becoming increasingly problematic about this assortment is knowing which type of cloud computing service is best suited for a given company or project. This study therefore examines FaaS to answer the following questions: what value does FaaS add for developers when building applications, how does implementing FaaS differ from implementing IaaS, and what are potential motives behind the usage of FaaS, thereby providing guidance when choosing a cloud computing service. To analyze the results, the LEAN Software Development (LSD) model has been applied to identify where FaaS reduces, and potentially adds, waste in software development. A case study has been made of a small company (fewer than 50 employees) that is experimenting with Amazon Web Services' FaaS implementation, Lambda. The conclusion of the study is that even though not all aspects of LSD are applicable to every company or project, Lambda's advantageous payment model motivates organizations to explore the technology for themselves.
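For reference, a FaaS deployment unit is just a function handed to the provider; a minimal AWS Lambda handler in Python looks like the sketch below (illustrative only, with deployment configuration and API Gateway wiring omitted). The provider runs it on demand and bills per invocation, which is the payment model the study highlights.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda entry point: no server is provisioned or kept idle;
    the platform invokes this function only when an event arrives."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```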
18. Att driftsätta i molnet: En undersökning i kostnader och skalningsmöjligheter. Blom, Tryggve. January 2012.
When a new web application is to be launched and put into production, it is difficult to know in advance what traffic and load the service needs to be dimensioned for. The report follows a web application that is not prepared for scaling up as it is separated into different components for increased scalability and operational reliability. The report also includes a comparative study of different types of cloud services offering infrastructure (IaaS), platform (PaaS) and software (SaaS) as a service. The goal of the study was to find a cost-effective method for expanding the application's infrastructure and moving the deployment to the cloud. The results and conclusion show that the most expensive solution is not always the best, and that in the end companies may pay for resources they do not use.
19. 透過Spark平台實現大數據分析與建模的比較:以微博為例 / Accomplish Big Data Analytic and Modeling Comparison on Spark: Weibo as an Example. 潘宗哲 (Pan, Zong Jhe). Unknown date.
The rapid growth and change of data, together with ever-evolving analysis tools, increase the challenge of data analytics. Through a complete machine learning workflow, this thesis offers a reference blueprint for academia and companies considering the adoption of big data analytics. We propose Apache Spark as the big data computing framework; its MLlib library contains two packages, spark.ml and spark.mllib, for building machine learning models, helping to solve problems that may be encountered in traditional data analysis. Within the pipeline we compare which Spark analysis modules are suitable in which situations. We first develop the analytics project on a local cluster and then submit the jobs to AWS EC2 clusters to accelerate modeling and analysis performance. The blueprint is demonstrated on a 2012 Weibo dataset provided by the Journalism and Media Studies Centre at the University of Hong Kong: RDDs, Spark SQL and GraphX are used to extract features from Weibo users' posts, and a Random Forest model is built to predict whether a user is officially verified (a binary classification).
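A condensed sketch of the modeling step with spark.ml, using a tiny in-memory DataFrame with hypothetical, pre-extracted features in place of the thesis's real Weibo feature extraction:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("weibo-verified").getOrCreate()

# Hypothetical per-user features; the thesis derives its features with RDDs,
# Spark SQL and GraphX from the raw Weibo posts.
df = spark.createDataFrame(
    [(120, 3400, 0.8, 1.0), (5, 40, 0.1, 0.0), (60, 900, 0.5, 1.0), (2, 10, 0.0, 0.0)],
    ["posts", "followers", "repost_ratio", "verified"],
)

assembler = VectorAssembler(
    inputCols=["posts", "followers", "repost_ratio"], outputCol="features")
rf = RandomForestClassifier(labelCol="verified", featuresCol="features",
                            numTrees=50)
model = Pipeline(stages=[assembler, rf]).fit(df)

# Evaluate on the training data just to show the API; a real study would
# hold out a test set.
auc = BinaryClassificationEvaluator(labelCol="verified").evaluate(model.transform(df))
print(f"training AUC: {auc:.2f}")
spark.stop()
```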
20. Empirical Performance Analysis of High Performance Computing Benchmarks Across Variations in Cloud Computing. Mani, Sindhu. 01 January 2012.
High Performance Computing (HPC) applications are data-intensive scientific software requiring significant CPU and data storage capabilities. Researchers have examined the performance of the Amazon Elastic Compute Cloud (EC2) environment across several HPC benchmarks; however, an extensive HPC benchmark study comparing Amazon EC2 and Windows Azure (Microsoft's cloud computing platform) on metrics such as memory bandwidth, Input/Output (I/O) performance, and communication and computational performance is largely absent. The purpose of this study is to perform an exhaustive HPC benchmark comparison on the EC2 and Windows Azure platforms.

We implement existing benchmarks to evaluate and analyze the performance of two public clouds spanning both the IaaS and PaaS types. We use Amazon EC2 and Windows Azure as platforms for hosting HPC benchmarks, with variations in instance type, number of nodes, hardware and software. This is accomplished by running the STREAM, IOR and NPB benchmarks on these platforms with varying numbers of nodes for small and medium instance types. These benchmarks measure memory bandwidth, I/O performance, and communication and computational performance. Benchmarking cloud platforms provides useful, objective measures of their worthiness for HPC applications, in addition to assessing their consistency and predictability in supporting them.
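STREAM reports a "Best Rate MB/s" per kernel (Copy, Scale, Add, Triad); a small helper like the one below, with a hypothetical file-naming scheme for runs across clouds, instance types and node counts, is enough to aggregate such output into a comparison table:

```python
import re
from pathlib import Path
from statistics import mean

STREAM_LINE = re.compile(r"^(Copy|Scale|Add|Triad):\s+([\d.]+)", re.MULTILINE)

def parse_stream(text):
    """Pull the 'Best Rate MB/s' column out of standard STREAM output."""
    return {name: float(rate) for name, rate in STREAM_LINE.findall(text)}

def summarize(results_dir="results"):
    """Aggregate runs stored as <cloud>_<instance>_<nodes>nodes_runN.txt
    (a hypothetical naming scheme) into mean Triad bandwidth per setup."""
    runs = {}
    for path in Path(results_dir).glob("*.txt"):
        cloud, instance, nodes, _ = path.stem.split("_", 3)
        key = (cloud, instance, nodes)
        runs.setdefault(key, []).append(parse_stream(path.read_text())["Triad"])
    for (cloud, instance, nodes), rates in sorted(runs.items()):
        print(f"{cloud:8} {instance:10} {nodes:>10}: {mean(rates):10.1f} MB/s (Triad)")

if __name__ == "__main__":
    summarize()
```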