1

Performance Analysis of the Impact of Vertical Scaling on Application Containerized with Docker : Kubernetes on Amazon Web Services - EC2

Midigudla, Dhananjay January 2019 (has links)
Containers are widely used as a base technology for packaging applications, and microservice architectures are gaining popularity for deploying large-scale applications, with containers running different aspects of the application. Because the load on a service is dynamic, compute resources allocated to containerized applications must be scaled up or down to maintain application performance. Objectives: To evaluate the impact of vertical scaling on the performance of a containerized application deployed with Docker and Kubernetes, including identification of the performance metrics that are most affected, and hence to characterize any eventual negative effect of vertical scaling. Method: A literature study on Kubernetes and Docker containers, followed by a proposed vertical-scaling solution that can add or remove compute resources such as CPU and memory for the containerized application. Results and Conclusions: Latency and connect times were the performance metrics analyzed for the containerized application. From the obtained results, it was concluded that vertical scaling has no significant impact on the performance of a containerized application in terms of latency and connect times.
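The vertical-scaling decision described in the abstract can be sketched as a simple control loop over a container's resource limits. This is a minimal illustrative sketch, not the thesis's actual solution; the function name, thresholds, and scaling factor are all assumptions.

```python
# Hypothetical vertical-scaling decision for one container: grow limits when
# utilization is high, shrink them when it is low. All thresholds, the step
# factor, and the floors are illustrative assumptions.

def scale_vertically(current_cpu_m, current_mem_mi, cpu_util, mem_util,
                     step=1.5, high=0.8, low=0.3):
    """Return new (cpu_millicores, memory_MiB) limits for a container.

    Scales a limit up when its utilization crosses `high`, down when it
    drops below `low`; otherwise leaves it unchanged.
    """
    cpu, mem = current_cpu_m, current_mem_mi
    if cpu_util > high:
        cpu = int(cpu * step)
    elif cpu_util < low:
        cpu = max(int(cpu / step), 100)   # keep a 100m floor
    if mem_util > high:
        mem = int(mem * step)
    elif mem_util < low:
        mem = max(int(mem / step), 64)    # keep a 64 MiB floor
    return cpu, mem
```

In a Kubernetes setting, the returned values would be applied by patching the container's resource limits; the loop itself stays this simple.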
2

SpotLight: An Information Service for the Cloud

Ouyang, Xue 13 July 2016 (has links)
Infrastructure-as-a-Service cloud platforms are incredibly complex: they rent hundreds of different types of servers across multiple geographical regions under a wide range of contract types that offer varying tradeoffs between risk and cost. Unfortunately, the internal dynamics of cloud platforms are opaque in several dimensions. For example, while the risk of servers not being available when requested is critical in optimizing these risk-cost tradeoffs, it is not typically made visible to users. Thus, inspired by prior work on Internet bandwidth probing, we propose actively probing cloud platforms to explicitly learn such information, where each "probe" is a request for a particular type of server. We model the relationships between different contract types to develop a market-based probing policy, which leverages the insight that real-time prices in cloud spot markets loosely correlate with the supply (and availability) of fixed-price on-demand servers. That is, the higher the spot price for a server, the more likely the corresponding fixed-price on-demand server is not available. We incorporate market-based probing into SpotLight, an information service that enables cloud applications to query this and other data, and use it to monitor the availability of more than 4500 distinct server types across 9 geographical regions in Amazon's Elastic Compute Cloud over a 3-month period. We analyze this data to reveal interesting observations about the platform's internal dynamics. We then show how SpotLight enables two recently proposed derivative cloud services to select a better mix of servers to host applications, which improves their availability from 70-90% to near 100% in practice.
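The market-based probing insight above — that an elevated spot price hints at scarce on-demand capacity — can be sketched as a probe-selection policy. This is a hedged illustration of the idea, not SpotLight's actual policy; the function name, inputs, and prices are assumptions.

```python
# Illustrative market-based probing policy: spend a limited probe budget on
# the server types whose current spot price sits highest above its recent
# average, since those are the most likely to be unavailable on-demand.
# All server-type names and prices below are made-up placeholders.

def pick_probe_targets(spot_prices, avg_prices, budget):
    """Return up to `budget` server types to probe, ranked by the ratio of
    current spot price to recent average spot price (highest first)."""
    ratios = {t: spot_prices[t] / avg_prices[t] for t in spot_prices}
    return sorted(ratios, key=ratios.get, reverse=True)[:budget]

current = {"m4.large": 0.30, "c4.xlarge": 0.10, "r4.large": 0.25}
average = {"m4.large": 0.10, "c4.xlarge": 0.10, "r4.large": 0.05}
targets = pick_probe_targets(current, average, budget=2)
```

Each selected type would then be probed with an actual server request; types whose spot price sits at its usual level are skipped, which is where the overhead saving comes from.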
3

Användbarheten av Beam EC2 inom betongelement : En jämförelsestudie mellan beräkningsprogrammen Beam EC2 och Concrete Beam / The useability of Beam EC2 in concrete elements

Pålsson, Nils, Karlsson, Hanna January 2023 (has links)
Concrete is a versatile building material used all over the world. Today's building and civil engineering structures have high requirements and a complexity that demand carefully executed dimensioning during planning. Calculation programs that work according to standards are a tool for designers in this calculation work. Complicated programs, or ambiguities within them, can consume valuable time, so good and clear programs are necessary for time efficiency. This thesis compares the calculation programs Concrete Beam and Beam EC2, both of which are used for dimensioning concrete beams. Concrete Beam is used by most structural designers in Sweden, while Beam EC2 is a new calculation program that is not yet used to the same extent. There is therefore a need to evaluate the two programs, in order to increase knowledge in the construction industry about their differences and to open up the possibility of using more calculation programs in the future. The aim of the thesis is to investigate whether there are differences between Concrete Beam and Beam EC2 when dimensioning a concrete beam. To do this, differences in factors such as ease of use, available functions, and calculation results are examined. A smaller sensitivity analysis is also performed to analyze the stability of the programs' values.

The thesis is carried out as a case study with input from a completed reference project by Sweco Sverige AB. The reference project contributes realistic values that are used as input data in the study. The values from the reference project are supplemented with smaller hand calculations for the loads acting on the beam, for example the dead weight and the wind load, because some values are missing from the reference project.
The implementation involves applying the specified values from the case study and the hand calculations in the two programs. The results show how each program assesses whether the concrete beam withstands the ultimate and serviceability limit states, how the programs chose to dimension the reinforcement in the beam, and the outcome of the sensitivity analysis. Based on the results, certain differences between the calculation programs could be established, for example in how they chose to arrange the reinforcement in the beam, which in turn led to differences in crack width and deformations. In contrast, the results for moment and shear force were almost identical. In the sensitivity analysis, Beam EC2 showed stable values while Concrete Beam produced a more uncertain result. Based on the results and the sensitivity analysis, the conclusion is that there are clear differences between the calculation programs. Usability is discussed based on the authors' opinions but needs further study before it can serve as a general conclusion. The results are limited by the scope and time frame of the work, but there is an opportunity for further studies on the differences between the programs and their usability in the industry.
4

A Framework For Elastic Execution of Existing MPI Programs

Raveendran, Aarthi 08 September 2011 (has links)
No description available.
5

Network Latency Estimation Leveraging Network Path Classification

Omer Mahgoub Saied, Khalid January 2018 (has links)
With the development of the Internet, new network services with strict network latency requirements have become possible. These services are implemented as distributed systems deployed across multiple geographical locations. To provide low response times, these services require knowledge of the current network latency. Unfortunately, network latency among geo-distributed sites often changes, so distributed services rely on continuous network latency measurements. One goal of such measurements is to differentiate momentary latency spikes from relatively long-term latency changes. The differentiation is achieved through statistical processing of the collected samples. This approach of high-frequency network latency measurement has high overhead, is slow to identify network latency changes, and lacks accuracy. We propose a novel approach to network latency estimation that correlates network paths with network latency. We demonstrate that network latency can be accurately estimated by first measuring and identifying the network path in use and then fetching the expected latency for that path based on a previous set of measurements. Based on these principles, we introduce Sudan traceroute, a network latency estimation tool. Sudan traceroute can be used both to reduce the latency estimation time and to reduce the overhead of network path measurements. Sudan traceroute uses an improved path detection mechanism that sends only a few carefully selected probes to identify the current network path. We have developed and evaluated Sudan traceroute in a test environment and evaluated its feasibility on real-world networks using Amazon EC2. Using Sudan traceroute, we have shortened the time it takes for hosts to identify changes in network latency levels compared to existing approaches.
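The core idea — identify the path first, then look up its expected latency from earlier samples — can be sketched in a few lines. This is an illustrative sketch, not Sudan traceroute itself; the function name and data layout are assumptions.

```python
# Path-based latency estimation sketch: the current network path (a hop
# sequence, e.g. from a few traceroute probes) is used as a key into a
# history of earlier latency samples for that exact path. The data layout
# and function name are illustrative assumptions.

def estimate_latency(current_path, path_history):
    """Return the expected latency (ms) for `current_path`, computed as the
    mean of earlier samples recorded for that exact hop sequence; None if
    the path has not been seen before (a full measurement is then needed)."""
    samples = path_history.get(tuple(current_path))
    if not samples:
        return None
    return sum(samples) / len(samples)

history = {("r1", "r2", "r3"): [10.0, 12.0, 14.0]}
known = estimate_latency(["r1", "r2", "r3"], history)      # path seen before
unknown = estimate_latency(["r1", "rX", "r3"], history)    # new path
```

The saving comes from the lookup: once a path is known, a handful of path-identification probes replace continuous end-to-end latency sampling.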
6

Comparing the Cost-effectiveness of Image Recognition for Elastic Cloud Computing : A cost comparison between Amazon Web Services EC2 instances / Jämför kostnadseffetiviten av bildigenkänning för Elastic Cloud Computing : En kostnadsjämförelse mellan Amazon Web Services EC2 instanser

Gauffin, Christopher, Rehn, Erik January 2021 (has links)
With the rise of AI, the need for computing power has grown exponentially. This has made cloud computing a popular option thanks to its cost-effective and highly scalable capabilities. However, due to its popularity there are thousands of possible services to choose from, making it hard to find the right tool for the job. The purpose of this thesis is to provide a methodological approach for evaluating which alternative is best for machine learning applications deployed in the cloud. Nine different instances on a major cloud provider were evaluated and compared on their performance relative to their cost. This was accomplished by developing a cost evaluation model together with a test environment for image recognition models. The environment can be used on any type of cloud instance to aid decision-making. The results, derived from the specific premises used in this study, indicate that the higher an instance's hourly cost, the less cost-effective it was. However, when the same comparison is made within an instance family of similar machines, the same conclusion cannot be drawn. Regardless of the conclusions made in this thesis, the problem addressed remains, as the domain is too large to cover in one report. But the methodology used holds great value, as it can act as guidance for similar evaluations with a different set of premises.
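The kind of performance-per-cost comparison described above reduces to a single ratio per instance type. The sketch below illustrates that ratio; the instance names, throughput figures, and prices are made-up placeholders, not measurements from the thesis.

```python
# Cost-effectiveness as throughput per dollar: images processed per hour
# divided by hourly price. All figures are illustrative placeholders.

def images_per_dollar(images_per_hour, price_per_hour):
    return images_per_hour / price_per_hour

instances = {
    "small-cpu": (1800, 0.10),    # (images/hour, USD/hour)
    "large-cpu": (5400, 0.40),
    "gpu":       (36000, 3.00),
}

# Rank instance types from most to least cost-effective.
ranked = sorted(instances,
                key=lambda name: images_per_dollar(*instances[name]),
                reverse=True)
```

With these placeholder numbers the cheapest instance wins on cost-effectiveness even though the GPU instance has by far the highest raw throughput, which mirrors the study's observation that higher hourly cost tended to mean lower cost-effectiveness.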
7

A Cloud-Based Execution Environment for a Pandemic Simulator

Basile, Maurizio, Raciti, Massimiliano Gabriele January 2009 (has links)
The aim of this thesis is to develop a flexible distributed platform designed to execute a disease-outbreak simulator quickly across many types of platforms and operating systems. The architecture is realized using the Elastic Compute Cloud (EC2) supplied by Amazon, with Condor as middleware among the various types of OS. The second part of the report describes the realization of a web application that allows users to easily manage the various parts of the architecture, launch the simulations, and view statistics of the corresponding results.
8

Evaluation and Optimization of Turnaround Time and Cost of HPC Applications on the Cloud

Marathe, Aniruddha Prakash January 2014 (has links)
The popularity of Amazon's EC2 cloud platform has increased in the commercial and scientific high-performance computing (HPC) domain in recent years. However, many HPC users consider dedicated high-performance clusters, typically found in large compute centers such as those in national laboratories, to be far superior to EC2 because of the latter's significant communication overhead. We find this view to be quite narrow; the proper metrics for comparing high-performance clusters to EC2 are turnaround time and cost. In this work, we first compare an HPC-grade EC2 cluster to top-of-the-line HPC clusters based on turnaround time and total cost of execution. When measuring turnaround time, we include expected queue wait time on HPC clusters. Our results show that, although standard HPC clusters are superior in raw performance as expected, they suffer from potentially significant queue wait times. We show that EC2 clusters may produce better turnaround times due to typically lower queue wait times. To estimate cost, we developed a pricing model, relative to EC2's node-hour prices, to set node-hour prices for (currently free) HPC clusters. We observe that the cost-effectiveness of running an application on a cluster depends on raw performance and application scalability. However, despite the potentially lower queue wait and turnaround times, the primary barrier to using clouds for many HPC users is cost. Amazon EC2 provides a fixed-cost option (called on-demand) and a variable-cost, auction-based option (called the spot market). The spot market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete.
Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 spot market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to 7x cheaper than using the on-demand market and up to 44% cheaper than the best non-redundant spot-market algorithm. Finally, we extend our adaptive algorithm to exploit several opportunities for cost savings on the EC2 spot market. First, we incorporate application scalability characteristics into our adaptive policy. We show that the adaptive algorithm, informed with the scalability characteristics of applications, achieves up to 56% cost savings compared to the expected cost of the base adaptive algorithm run at a fixed, user-defined scale. Second, we demonstrate the potential for obtaining considerable free computation time on the spot market, enabled by its hour-boundary pricing model.
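The on-demand versus spot tradeoff the abstract describes can be sketched as an expected-cost comparison: spot is cheaper per hour, but each interruption forces redoing the work since the last checkpoint. The interruption model below is a simplification assumed for illustration, not the thesis's actual algorithm.

```python
# Simplified expected-cost model for a checkpointed job on the spot market,
# compared with the fixed on-demand price. The uniform interruption
# probability and the "redo half a checkpoint interval on average"
# assumption are illustrative simplifications.

def on_demand_cost(hours, price_per_hour):
    return hours * price_per_hour

def expected_spot_cost(hours, spot_price, interrupt_prob_per_hour,
                       checkpoint_interval_h=1.0):
    """Expected spot-market cost: each expected interruption redoes, on
    average, half a checkpoint interval of work."""
    expected_interrupts = hours * interrupt_prob_per_hour
    redone_hours = expected_interrupts * checkpoint_interval_h / 2
    return (hours + redone_hours) * spot_price

fixed = on_demand_cost(10, 0.40)
spot = expected_spot_cost(10, 0.10, interrupt_prob_per_hour=0.2)
```

An adaptive policy in this spirit would evaluate both expressions (and redundant-spot variants) under current prices and pick whichever fits the deadline at the lowest expected cost.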
9

Cloud Computing as a Tool to Secure and Manage Information Flow in Swedish Armed Forces Networks

Ali, Muhammad Usman January 2012 (has links)
In the last few years, cloud computing has created much hype in the IT world. It has provided new strategies to cut costs and achieve better utilization of resources. At the same time, cloud infrastructure has long been discussed for its vulnerabilities and security issues. There is a long list of service providers and clients who have implemented different service structures using cloud infrastructure. Despite all these efforts, many organizations, especially those with higher security concerns, have doubts about data privacy and theft protection in the cloud. This thesis aims to encourage the Swedish Armed Forces (SWAF) to move their networks to cloud infrastructures, as this is a technology that will make a huge difference and revolutionize service delivery models in the IT world. Organizations that avoid it risk lagging behind, but at the same time they should adopt the cloud strategy most reliable and compatible with their requirements. This document provides insight into different technologies and tools implemented specifically for monitoring and security in the cloud. Much emphasis is placed on virtualization technology because cloud computing relies heavily on it. The Amazon EC2 cloud is analyzed from a security point of view. An extensive survey has also been conducted to understand market trends and people's perceptions of cloud implementation, security threats, cost savings, and the reliability of the different services provided.
