251 |
Performance Evaluation and Elastic Scaling of an IP Multimedia Subsystem Implemented in a Cloud. Umair, Muhammad. January 2013
The IP Multimedia Subsystem (IMS) framework is a Next Generation Network (NGN) technology that enables telecommunication operators to provide multimedia services over fixed and mobile networks. All of the IMS infrastructure protocols work over IP, which makes IMS easy to deploy on a cloud platform. The purpose of this thesis is to analyze a novel technique for “cloudifying” the OpenIMS core infrastructure. The primary goal of running OpenIMS in the cloud is to enable a highly available and horizontally scalable Home Subscriber Server (HSS); the resulting database should offer both high availability and high scalability. The prototype developed in this thesis project demonstrates a virtualized OpenIMS core with an integrated horizontally scalable HSS. Functional and performance measurements of the system under test (i.e. the virtualized OpenIMS core with horizontally scalable HSS) were conducted. The results of this testing include an analysis of benchmarking scenarios, the CPU utilization, and the available memory of the virtual machines. Based on these results we conclude that it is both feasible and desirable to deploy the OpenIMS core in a cloud.
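The abstract does not describe how subscriber records are distributed across a horizontally scaled HSS. As an illustration only (not the thesis's design), such databases commonly partition records by hashing subscriber identities onto nodes with consistent hashing, so that adding a node relocates only a small fraction of the keys; all node and subscriber names below are hypothetical:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map subscriber IDs (e.g. IMS private identities) to database nodes
    so that adding or removing a node moves only a few keys."""

    def __init__(self, nodes, vnodes=64):
        self.vnodes = vnodes          # virtual nodes per physical node
        self._ring = []               # sorted list of (hash, node)
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        for i in range(self.vnodes):
            self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    def node_for(self, subscriber_id: str) -> str:
        h = self._hash(subscriber_id)
        # first ring position clockwise from the key's hash
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["hss-1", "hss-2", "hss-3"])
node = ring.node_for("alice@ims.example.net")
```

The same lookup always returns the same node, so any front-end replica can route a query without coordination.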
|
252 |
Evaluation of Industrial Controllers’ Connectivity using MQTT Message Protocol. Opacin, Selma; Rizvanovic, Lejla. January 2022
The rise of the Industrial Internet of Things (IIoT) and Industry 4.0 has led to many interconnections in control systems. IIoT has begun its evolution and development in the scientific community and in industrial application fields. As communication between IIoT devices and other equipment needs to be implemented over the network, it is necessary to provide reliable and straightforward data transmission. This thesis investigates how to develop a middleware that provides connectivity between specific industrial environments. In particular, a prototype design is created that connects an industrial controller with information of interest using the Message Queuing Telemetry Transport (MQTT) protocol. We set up experiments to evaluate the features of the implemented prototype. The experiments examine the developed prototype's end-to-end response time and scalability characteristics when receiving different numbers of messages from the stimulator. By measuring the end-to-end response time, the experiments showed that when many input/output (I/O) signals arrive at the connectivity service, the implemented prototype scales well under the examined circumstances. Overall, the middleware prototype gave acceptable results in terms of response time. It also gave us a picture of how different network settings can affect the estimated end-to-end response time of a message.
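MQTT, the protocol named above, routes each published message by matching its topic against subscription filters with `+` (single-level) and `#` (multi-level) wildcards. A simplified sketch of that matching rule (topic names are made up; real MQTT has extra edge cases, e.g. `$`-prefixed topics):

```python
def topic_matches(subscription: str, topic: str) -> bool:
    """Return True if an MQTT topic matches a subscription filter.
    Supports '+' (exactly one level) and '#' (all remaining levels)."""
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    for i, s in enumerate(sub_levels):
        if s == "#":
            return True               # '#' must be last; matches the rest
        if i >= len(top_levels):
            return False              # topic ran out of levels
        if s != "+" and s != top_levels[i]:
            return False              # literal level mismatch
    return len(sub_levels) == len(top_levels)

# e.g. a controller publishing I/O signals per device:
assert topic_matches("plant/+/signals/#", "plant/ctrl1/signals/io/42")
```

A broker applies this test for every (subscription, published topic) pair to decide which clients receive the message.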
|
253 |
Simulating Propeller and Propeller-Hull Interaction in OpenFOAM. Mehdipour, Reza. January 2014
This is a master’s thesis performed at the Department of Shipping and Marine Technology research group in Hydrodynamics at Chalmers University of Technology, and written for the Center for Naval Architecture at the Royal Institute of Technology, KTH. In order to meet increased requirements on efficient ship propulsion with low noise levels, it is important to consider the complete system, with both the hull and the propeller, in the simulation. OpenFOAM (Open Field Operation and Manipulation) provides different techniques to simulate a rotating propeller with different physical and computational properties. MRF (the Multiple Reference Frame model) is perhaps the easiest, and a computationally efficient, technique to model a rotating frame of reference. Sliding-grid techniques provide a more complex way to simulate the propeller: the propeller and its surrounding region rotate, with interpolation at the interface to capture transient effects. AMI (Arbitrary Mesh Interface) is a sliding-grid implementation available in recent versions of OpenFOAM, introduced in the official releases after v2.1.0. In this study, the main objective is to compare these two techniques, MRF and AMI, in computing the open-water characteristics of the propeller with Reynolds-Averaged Navier-Stokes (RANS) computations, and to study the accuracy, the parallel performance, and the benefits of each approach. More specifically, a self-propelled ship is simulated to study the interaction between the hull and the propeller. In order to simplify and decrease the computational complexity, the free surface is not considered. The ship under investigation is a 7000 DWT chemical tanker which is the subject of a collaborative R&D project called STREAMLINE (strategic research for innovative marine propulsion concepts). In the self-propelled condition, the transient forces on the propeller are evaluated. This study compares the results of the experimental work with advanced CFD for accurate analysis and design of the propulsion system. In this thesis, all simulations are conducted using parallel computing; therefore, a scalability analysis is performed to find out how the average computational time is affected by using different numbers of nodes.
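The open-water characteristics mentioned above are conventionally expressed through the advance ratio and the non-dimensional thrust and torque coefficients. A small sketch of these standard formulas (the numerical inputs below are made up for illustration, not values from the thesis):

```python
import math

def open_water_characteristics(T, Q, V_a, n, D, rho=1000.0):
    """Standard open-water propeller coefficients.
    T: thrust [N], Q: torque [N*m], V_a: advance speed [m/s],
    n: rotation rate [rev/s], D: diameter [m], rho: water density [kg/m^3]."""
    J = V_a / (n * D)                         # advance ratio
    KT = T / (rho * n**2 * D**4)              # thrust coefficient
    KQ = Q / (rho * n**2 * D**5)              # torque coefficient
    eta0 = (J / (2.0 * math.pi)) * (KT / KQ)  # open-water efficiency
    return J, KT, KQ, eta0

# Hypothetical model-scale measurement:
J, KT, KQ, eta0 = open_water_characteristics(
    T=1500.0, Q=180.0, V_a=2.0, n=10.0, D=0.25, rho=998.0)
```

In an open-water comparison like the MRF-versus-AMI study, these coefficients are what the two simulation techniques are evaluated against over a range of advance ratios.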
|
254 |
Evaluation of Cloud Native Solutions for Trading Activity Analysis. Johansson, Jonas. January 2021
Cloud computing has become increasingly popular over recent years, allowing computing resources to be scaled on demand. Cloud native applications are specifically created to run on the cloud service model. Currently, there is a research gap regarding the design and implementation of cloud native applications, especially regarding how design decisions affect metrics such as execution time and the scalability of systems. The problem investigated in this thesis is whether the execution time and quality scalability ηt of cloud native solutions are affected when the functionality of multiple use cases is housed within the same cloud native application. In this work, a cloud native application for trading data analysis is presented, in which the functionality of three use cases is implemented: (1) creating reports of trade prices, (2) anomaly detection, and (3) analysis of relation diagrams of trades. The execution time and scalability of the application are evaluated and compared to readily available solutions, which serve as a baseline for the evaluation. The results of use cases 1 and 2 are compared to Amazon Athena, while use case 3 is compared to Amazon Neptune. The results suggest that combining functionalities into the same application can improve both the execution time and the scalability of the system; the impact depends on the use case and hardware configuration. When executing the use cases in a sequence, the mean execution time of the implemented system decreased by up to 17.2%, while the quality scalability score improved by 10.3% for use case 2. The implemented application had significantly lower execution time than Amazon Neptune but did not surpass Amazon Athena for the respective use cases. The scalability of the systems varied depending on the use case. While not surpassing the baseline in all use cases, the results show that the execution time of a cloud native system can be improved by housing the functionality of multiple use cases within one system. However, the potential performance gains differ depending on the use case and might be smaller than the performance gains of choosing another solution.
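The abstract's quality-scalability metric ηt is not defined here, so its exact formula is unknown. As a generic illustration only, scalability scores of this kind are usually derived from measured execution times; the classic pair is speedup and parallel efficiency:

```python
def speedup_and_efficiency(t_baseline: float, t_scaled: float, workers: int):
    """Classic scalability measures from two timing measurements.
    t_baseline: runtime on one worker, t_scaled: runtime on `workers` workers."""
    speedup = t_baseline / t_scaled
    efficiency = speedup / workers   # 1.0 would be ideal linear scaling
    return speedup, efficiency

# Made-up timings: 120 s on 1 node versus 40 s on 4 nodes.
s, e = speedup_and_efficiency(120.0, 40.0, 4)
# s == 3.0, e == 0.75
```

Comparing such scores across hardware configurations is the kind of evaluation the thesis performs against the Athena and Neptune baselines.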
|
255 |
Dynamic scaling of a web-based application in a Cloud Architecture. Hossain, Md. Iqbal. January 2014
With the constant growth of internet applications, such as social networks, online media, various online communities, and mobile applications, website user traffic has grown, become very dynamic, and is oftentimes unpredictable. This unpredictable nature of the traffic has led to many new and unique challenges which must be addressed by solution architects, application developers, and technology researchers. All of these actors must continually innovate to create new attractive applications and new system architectures to support the users of these new applications. In addition, increased traffic increases the demand for resources, while users demand even faster response times, despite the ever-growing datasets underlying many of these new applications. Several concepts and best practices have been introduced to build highly scalable applications by exploiting cloud computing, as no one who expects to be or remain a leader in business today can afford to ignore it. Cloud computing has emerged as a platform upon which innovation, flexibility, availability, and faster time-to-market can be supported by new small and medium sized enterprises. Cloud computing is enabling these businesses to create massively scalable applications, some of which handle tens of millions of active users daily. This thesis concerns the design, implementation, demonstration, and evaluation of a highly scalable cloud-based architecture designed for high performance and rapid evolution for new businesses, such as Ifoodbag AB, in order to meet the requirements for their web-based application. This thesis examines how to scale resources both up and down dynamically, since there is no reason to allocate more or fewer resources than actually needed. Apart from implementing and testing the proposed design, this thesis presents several guidelines, best practices, and recommendations for optimizing the auto-scaling process, including a cost analysis. The test results and analysis presented in this thesis clearly show that the proposed architecture model is capable of supporting high-demand applications, provides greater flexibility, and enables rapid market-share growth for new businesses, without the need to invest in expensive infrastructure.
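Scaling resources both up and down, as described above, is commonly driven by utilization thresholds. A minimal hypothetical autoscaling policy (all thresholds and limits are illustrative, not the thesis's configuration):

```python
def autoscale(current_instances: int, cpu_util: float,
              scale_out_at: float = 0.70, scale_in_at: float = 0.30,
              min_instances: int = 1, max_instances: int = 20) -> int:
    """Return the new instance count given average CPU utilization (0..1).
    Scale out when the fleet is hot, scale in when it is mostly idle."""
    if cpu_util > scale_out_at:
        return min(current_instances + 1, max_instances)
    if cpu_util < scale_in_at:
        return max(current_instances - 1, min_instances)
    return current_instances   # within the comfort band: do nothing

# A traffic spike pushes average utilization to 85%: add one instance.
assert autoscale(4, 0.85) == 5
```

The gap between the two thresholds acts as hysteresis, which prevents the fleet from oscillating when utilization hovers near a single cutoff; the cost analysis the thesis mentions is essentially about tuning these bounds.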
|
256 |
Dependable eventual consistency with replicated data types. Zawirski, Marek. 14 January 2015
Eventually consistent replicated databases offer excellent responsiveness and fault tolerance, but expose applications to the complexity of concurrency and failures. Recent databases encapsulate these problems behind a stronger interface, supporting causal consistency, which protects the application from ordering anomalies, and/or Replicated Data Types (RDTs), which ensure convergent semantics of concurrent updates through an object interface. However, dependable algorithms for RDTs and causal consistency come at a cost in metadata size. This thesis studies the design of such algorithms with minimized metadata, and the limits of the design space. Our first contribution is a study of the metadata complexity of RDTs. RDTs use metadata to provide rich semantics; many existing RDT implementations incur high overhead in storage space. We design optimized set and register RDTs with metadata overhead reduced to the number of replicas. We also demonstrate metadata lower bounds for six RDTs, thereby proving the optimality of four implementations. Our second contribution is the design of SwiftCloud, a replicated causally-consistent RDT object database for client-side applications. We devise algorithms to support high numbers of client-side partial replicas backed by the cloud, in a fault-tolerant manner, with small metadata. We demonstrate how to support availability and consistency at the expense of some slight data staleness; i.e., our approach trades freshness for scalability (small metadata, parallelism) and availability (the ability to fail over between data centers). We validate our approach with experiments involving thousands of client replicas.
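The optimized set and register constructions are the thesis's own; they are not reproduced here. A textbook example of a state-based RDT whose metadata is likewise one entry per replica is the grow-only counter, sketched below to illustrate what "convergent semantics of concurrent updates" means:

```python
class GCounter:
    """State-based grow-only counter CRDT: one non-negative entry per
    replica; merge is an elementwise max, so replicas that exchange
    state in any order converge to the same value."""

    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {}

    def increment(self, amount: int = 1) -> None:
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter") -> None:
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("A"), GCounter("B")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5   # concurrent updates converge
```

Because merge is idempotent, commutative, and associative, messages can be duplicated or reordered without breaking convergence; the metadata cost is the per-replica map, which is exactly the kind of overhead the thesis's lower bounds characterize.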
|
257 |
Exploring the aesthetical qualities of scaled game maps through Human-AI Collaboration. Rignell, Petter; Sjösvärd, Christian. January 2023
The primary objective is to explore the scalability of two-dimensional game maps while preserving certain aesthetic qualities in the scaled representations. When the maps are either upscaled or downscaled, features of the map that induce these aesthetic qualities may diminish. For instance, there could be alterations in the layout of corridors, rooms, characters, and treasures, as well as variations in their quantity. To address this, AI technology has been used as a means of preserving the features originally introduced by the designer when creating an alternatively scaled representation. The exploration is made possible by utilizing a game design tool, the Evolutionary Dungeon Designer (EDD), to design maps, scale them, and generate AI-based solutions through an evolutionary algorithm. Furthermore, evaluations through both a user study and a controlled experiment were performed to analyze the scalability of game maps and the AI-generated representations. The user study showed some divided results regarding whether the scaled or the AI-generated maps were superior. The AI-scaled maps were often regarded as dissimilar to the original map. However, the AI could, to some extent, provide the same prevalence of some of the wanted features, but in a different design. This was also evident in the controlled experiment, where the AI managed to retain a specific feature to the same degree, but lacked the capability to make the maps similar.
|
258 |
Learning from Structured Data: Scalability, Stability and Temporal Awareness. Pavlovski, Martin (ORCID 0000-0003-1495-2128). January 2021
A plethora of high-impact applications involve predictive modeling of structured data. In various domains, from hospital readmission prediction in the medical realm, through weather forecasting and event detection in power systems, up to conversion prediction in online businesses, the data holds a certain underlying structure. Building predictive models from such data calls for leveraging the structure as an additional source of information. Thus, a broad range of structure-aware approaches have been introduced, yet certain common challenges in many structured learning scenarios remain unresolved. This dissertation revolves around addressing the challenges of scalability, algorithmic stability, and temporal awareness in several scenarios of learning from either graphically or sequentially structured data.
Initially, the first two challenges are discussed from a structured regression standpoint. The studies addressing these challenges aim at designing scalable and algorithmically stable models for structured data, without compromising their prediction performance. It is further examined whether such models can be applied to both static and dynamic (time-varying) graph data. To that end, a structured ensemble model is proposed to scale with the size of temporal graphs, while making stable and reliable yet accurate predictions in a real-world application involving gene expression prediction. In the case of static graphs, a theoretical insight is provided into the relationship between algorithmic stability and generalization in a structured regression setting. A stability-based objective function is designed to indirectly control the stability of a collaborative ensemble regressor, yielding generalization performance improvements on structured regression applications as diverse as predicting housing prices based on real-estate transactions and readmission prediction from hospital records.
Modeling data that holds a sequential rather than a graphical structure requires addressing temporal awareness as one of the major challenges. In that regard, a model is proposed to generate time-aware representations of user activity sequences, intended to be seamlessly applicable across different user-related tasks, while sidestepping the burden of task-driven feature engineering. The quality and effectiveness of the time-aware user representations led to predictive performance improvements over state-of-the-art models on multiple large-scale conversion prediction tasks.
Sequential data is also analyzed from the perspective of a high-impact application in the realm of power systems. Namely, detecting and further classifying disturbance events, as an important aspect of risk mitigation in power systems, is typically centered on the challenges of capturing structural characteristics in sequential synchrophasor recordings. Therefore, a thorough comparative analysis was conducted by assessing various traditional as well as more sophisticated event classification models under different domain-expert-assisted labeling scenarios. The experimental findings provide evidence that hierarchical convolutional neural networks (HCNNs), capable of automatically learning time-invariant feature transformations that preserve the structural characteristics of the synchrophasor signals, consistently outperform traditional model variants. Their performance is observed to further improve as more data are inspected by a domain expert, while smaller fractions of solely expert-inspected signals are already sufficient for HCNNs to achieve satisfactory event classification accuracy. Finally, insights into the impact of the domain expertise on the downstream classification performance are also discussed. / Computer and Information Science
|
259 |
Performance Analysis and Evaluation of Divisible Load Theory and Dynamic Loop Scheduling Algorithms in Parallel and Distributed Environments. Balasubramaniam, Mahadevan. 14 August 2015
High performance parallel and distributed computing systems are used to solve large, complex, and data parallel scientific applications that require enormous computational power. Data parallel workloads, which require performing similar operations on different data objects, are present in a large number of scientific applications, such as N-body simulations and Monte Carlo simulations, and are expressed in the form of loops. Data parallel workloads that lack precedence constraints are called arbitrarily divisible workloads, and are amenable to easy parallelization. Load imbalance that arises from various sources, such as application, algorithmic, and systemic characteristics, during the execution of scientific applications degrades performance. Scheduling arbitrarily divisible workloads to address load imbalance, in order to obtain better utilization of computing resources, is a major area of research. Divisible load theory (DLT) and dynamic loop scheduling (DLS) algorithms are two algorithmic approaches employed in the scheduling of arbitrarily divisible workloads. Despite sharing the same goal of achieving load balancing, the two approaches are fundamentally different. Divisible load theory algorithms are linear, deterministic, and platform dependent, whereas dynamic loop scheduling algorithms are probabilistic and platform agnostic. Divisible load theory algorithms have traditionally been used for performance prediction in environments characterized by known or expected variation in the system characteristics at runtime. Dynamic loop scheduling algorithms are designed to simultaneously address all the sources of load imbalance that stochastically arise at runtime from application, algorithmic, and systemic characteristics. In this dissertation, an analysis and performance evaluation of DLT and DLS algorithms are presented in the form of a scalability study and a robustness investigation. The effect of network topology on their performance is also studied.
A hybrid scheduling approach is also proposed that integrates DLT and DLS algorithms. The hybrid approach combines the strengths of the DLT and DLS algorithms and improves the performance of scientific applications running in large-scale parallel and distributed computing environments, delivering performance superior to that obtained by applying DLT algorithms in isolation. The range of conditions for which the hybrid approach is useful is also identified and discussed.
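In the simplest DLT setting (a linear cost model with no communication cost), the classic closed-form result is that all workers finish simultaneously when each receives a load fraction inversely proportional to its time per unit of load. A small sketch under exactly those assumptions (the worker speeds are hypothetical):

```python
def dlt_load_fractions(unit_times):
    """Divisible-load split for heterogeneous workers, ignoring
    communication cost: worker i with time-per-unit-load w_i gets
    a fraction proportional to 1/w_i, so all finish at the same time."""
    inv = [1.0 / w for w in unit_times]
    total = sum(inv)
    return [x / total for x in inv]

# Hypothetical workers taking 1, 2 and 4 seconds per unit of load:
alpha = dlt_load_fractions([1.0, 2.0, 4.0])
# Each worker's finish time alpha_i * w_i is identical (4/7 here).
```

This is the deterministic, platform-dependent flavor of scheduling described above: the split is computed once from known system parameters, in contrast to DLS methods, which adapt chunk sizes probabilistically at runtime.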
|
260 |
Scalable Deep Reinforcement Learning for a Multi-Agent Warehouse System. Khan, Akib; Loberg, Marcus. January 2022
This report presents an application of reinforcement learning to the problem of controlling multiple robots performing the task of moving boxes in a warehouse environment. The robots make autonomous decisions individually and avoid colliding with each other and with the walls of the warehouse. The problem is defined as a dynamical multi-agent system and a solution is reached by applying the DQN algorithm. The solution is designed for scalability, meaning that the trained robots are flexible enough to be deployed in simulated environments of different sizes and alongside a different number of robots. This was successfully achieved by feature engineering. / Bachelor's thesis in electrical engineering, 2022, KTH, Stockholm
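The report attributes scalability to feature engineering without detailing the features. A common way (hypothetical here, not necessarily the report's design) to make a policy independent of warehouse size is to give each robot egocentric observations, such as the relative offset to its target and to the nearest other robot:

```python
def egocentric_features(self_pos, target_pos, other_positions):
    """Map-size-independent observation for one robot: the relative
    offset to its target plus the offset to the nearest other robot,
    so the same trained policy can run in warehouses of any size."""
    dx, dy = target_pos[0] - self_pos[0], target_pos[1] - self_pos[1]
    nearest = min(
        other_positions,
        key=lambda p: abs(p[0] - self_pos[0]) + abs(p[1] - self_pos[1]),
    )
    ox, oy = nearest[0] - self_pos[0], nearest[1] - self_pos[1]
    return (dx, dy, ox, oy)

# Robot at (2, 2), box target at (5, 4), other robots at (3, 2) and (9, 9):
feats = egocentric_features((2, 2), (5, 4), [(3, 2), (9, 9)])
# feats == (3, 2, 1, 0)
```

Because the feature vector has a fixed length regardless of grid dimensions or robot count, the same DQN input layer works across all the deployment scenarios the report describes.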
|