  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

End-to-end latency and cost impact of function segregation and customized memory allocation in FaaS environments

Fredriksson, Desireé January 2021 (has links)
Function as a service (FaaS) is a type of serverless cloud computing intended to facilitate development by abstracting away infrastructure management and to offer a more flexible, pay-as-you-go billing model based on execution time and memory allocation. FaaS functions are deployed to the cloud provider either as single units or chained to form a pipeline of multiple functions that call each other. As each step in the pipeline might have different requirements, it could be beneficial to split larger functions into smaller parts. This would enable customized provisioning according to each function's needs and could potentially result in a lower rate. However, decreased memory entails lower CPU performance, which directly affects computation time. A test application was created and executed on Google Cloud services to investigate what impact function segregation, with provisioning tailored to each sub-function's requirements, has on end-to-end latency and total cost. In conclusion, no trivial relation between cost and performance was found. In this experiment, segregating functions and adjusting provisioning to the required memory was cheaper in some cases, but not all; it was, however, always significantly slower. Beyond weighing price against workload behavior, it was found that aspects such as the level of control over management and hardware configuration have to be weighed in when deciding whether FaaS is a suitable alternative for a given situation.
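The pay-as-you-go billing model described in the abstract, where cost is driven by execution time and allocated memory, can be sketched with a toy calculation. This is a hedged illustration: the rates and durations below are placeholder assumptions, not actual Google Cloud prices. The point is only that splitting a pipeline and right-sizing memory can change cost, while low-memory steps run on slower CPUs and take longer.

```python
# Illustrative FaaS cost model: cost depends on execution time and
# allocated memory. The rates below are hypothetical placeholders,
# not actual provider prices.

GB_SECOND_RATE = 0.0000025   # assumed price per GB-second
INVOCATION_RATE = 0.0000004  # assumed price per invocation

def invocation_cost(duration_ms: float, memory_mb: int) -> float:
    """Cost of one invocation given its duration and memory allocation."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * GB_SECOND_RATE + INVOCATION_RATE

# A segregated pipeline may be cheaper if low-memory steps dominate,
# even though each low-memory step runs longer on a slower CPU.
monolith = invocation_cost(duration_ms=800, memory_mb=2048)
segregated = (invocation_cost(duration_ms=500, memory_mb=2048)
              + invocation_cost(duration_ms=900, memory_mb=256))
```

With these illustrative numbers the segregated variant is cheaper despite its longer total runtime, mirroring the abstract's finding that cost and performance are not trivially related.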
2

Investigating differences in performance between monolithic and serverless based architectures / Undersöker skillnader i prestanda mellan monolitisk och serverlös arkitektur

Manousian, Jonathan January 2022 (has links)
With the growth of cloud computing, various delivery models have emerged to attract developers looking for scalable and cost-effective infrastructures for their software. Traditionally, applications are developed as a monolith with a single codebase repository, and they are easily deployed, for example as Platform as a Service (PaaS). However, monolithic applications have been criticized for inefficient resource handling when deployed on the cloud; therefore, new delivery models have been introduced as alternatives. Recent research points towards Function as a Service (FaaS) as a potential solution to inefficient resource handling and, therefore, a way to reduce costs. Furthermore, since multiple distinct development strategies and delivery models exist, it becomes increasingly important to choose the right strategy from the beginning, as migrating to another development strategy or deployment model later is rather expensive. This thesis load tests monolithic and serverless applications to determine which development approach best suits performance, scalability, and cost requirements. The findings showed that an application implemented with a serverless architecture can be the better strategy if the application needs to handle sudden, large up-scaling. Otherwise, both architectures showed similar results under stable workloads. Regarding costs, the serverless architecture optimized costs at a smaller scale, but further analysis showed that it can exceed the costs of a monolithic architecture once the application passes a threshold of requests per month.
3

Improving Availability of Stateful Serverless Functions in Apache Flink / Förbättring av Tillgänglighet För Tillståndsbaserade Serverlösa Funktioner i Apache Flink

Gustafson, Christopher January 2022 (has links)
Serverless computing and Function-as-a-Service are rising in popularity due to their ease of use, built-in scalability, and cost-efficient billing model. One such platform is Apache Flink Stateful Functions. It allows application developers to run serverless functions whose state is persisted by the underlying stream processing engine, Apache Flink. Stateful Functions uses an embedded RocksDB state backend, where state is stored locally at each worker. One downside of this architecture is that state is lost if a worker fails. To recover, a recent snapshot of the state is fetched from a persistent file system, which can be a costly operation if the state is large. In this thesis, we designed and developed a new decoupled state backend for Apache Flink Stateful Functions, with the goal of increasing availability while measuring potential performance trade-offs. It extends an existing decoupled state backend for Flink, FlinkNDB, to support the operations of Stateful Functions. FlinkNDB stores state in a separate highly available database, RonDB, instead of locally at the worker nodes. This allows fast recovery, since large state does not have to be transferred between nodes. Two new recovery methods were developed: eager and lazy recovery. The results show that lazy recovery can decrease recovery time by up to 60% compared to RocksDB when the state is large. Eager recovery did not provide any recovery time improvements. The measured performance was similar between RocksDB and FlinkNDB. Checkpointing times in FlinkNDB were, however, longer, which causes short periodic performance degradation. The evaluation of FlinkNDB suggests that decoupled state can be used to improve availability, but that it may come with performance trade-offs. The proposed solution could thus be a viable option for applications with high availability requirements and lower performance requirements.
4

Using Function as a Service for Dynamic Application Scaling in the Cloud

Abrahamsson, Andreas January 2018 (has links)
Function as a Service is a new addition to cloud services that allows a user to execute code, in the form of a function, in the cloud. All underlying complexity is handled by the cloud provider, and the user pays only per use. Cloud services have grown significantly over the past years, and many companies want to take advantage of the benefits of the cloud. Cloud services deliver computing resources as a service over a network connection, often the Internet. To benefit from the cloud, one cannot simply move an application there and expect everything to work out; the application first needs to be optimized to take advantage of the cloud. Therefore, together with Tieto, a microservice architecture has been the main architectural pattern while Function as a Service was evaluated. A major problem with applications, whether built with a monolithic or a microservice architecture, is handling great amounts of information flow. An application may have scaling issues when an information flow becomes too large. A user of Function as a Service does not have to buy, rent, or maintain their own servers. However, Function as a Service has certain memory and runtime restrictions, so an entire application cannot be moved to Function as a Service as-is. This thesis examines the possibility of using Function as a Service in different architectural environments and estimates its cost. Since Function as a Service is a new addition to cloud services, cloud providers are also compared and evaluated in terms of their Function as a Service functionality. Function as a Service has been tested directly on various cloud platforms and also developed and executed locally, encapsulated in containers. The results show that Function as a Service is a good complement to an application architecture. The results also show that Function as a Service is highly flexible and cost-effective, and that it is advantageous compared to physical servers and virtual machines. Depending on how a function is built, the developer can lower the cost even further by choosing the cloud supplier that best fits their use case. With the flexibility of Function as a Service, applications can handle greater information flows without bottlenecks in the infrastructure and therefore become more efficient and cost-effective.
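As a hedged illustration of the deployment unit this abstract discusses, the sketch below shows a minimal FaaS-style function: a stateless handler that receives an event and returns a response, and that can also be invoked locally (for example, inside a container) during development. The handler signature imitates a common AWS Lambda-like interface; the event fields are assumptions made for the example, not part of the thesis.

```python
# Minimal sketch of a FaaS-style function: a stateless handler that
# receives an event dict and returns a response dict. The shape of
# the interface mimics common FaaS platforms; field names here are
# illustrative assumptions.
import json

def handler(event: dict, context: object = None) -> dict:
    """Process one record from an information flow and return a result."""
    payload = event.get("payload", "")
    return {
        "statusCode": 200,
        "body": json.dumps({"length": len(payload)}),
    }

# Locally, the same handler can be invoked directly, which is how a
# container-encapsulated development workflow can exercise it without
# deploying to a cloud platform.
response = handler({"payload": "hello"})
```

Because the handler is stateless and self-contained, the same code can be deployed to a FaaS platform or tested locally, which is the portability the abstract highlights.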
5

Architectural Implications of Serverless and Function-as-a-Service / 无服务器和功能服务化的架构含义

Andell, Oscar January 2020 (has links)
Serverless, or Function-as-a-Service (FaaS), is a recent architectural style based on the principles of abstracting infrastructure management and scaling to zero, meaning application instances are dynamically started and shut down to accommodate load. This concept of no idling servers and inherent autoscaling comes with benefits but also drawbacks. This study presents an evaluation of the performance and implications of the serverless architecture and contrasts it with so-called monolith architectures. Three distinct architectures were implemented and deployed on the FaaS platform Microsoft Azure Functions as well as the PaaS platform Azure Web App. Results were produced through experiments measuring cold starts, response times, and scaling for the tested architectures, as well as observations of traits such as cost and vendor lock-in. The results indicate that the serverless architecture, while subject to drawbacks such as vendor lock-in and cold starts, provides several benefits to a system, such as reliability and cost reduction.
6

Creating and Deploying Metamorphic Services for SWMM Community Based on FaaS Architecture

Lin, Xuanyi 29 September 2021 (has links)
No description available.
7

Serverless Strategies and Tools in the Cloud Computing Continuum

Risco Gallardo, Sebastián 15 January 2024 (has links)
Thesis by compendium / [EN] In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity led to new services to solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services in recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users worrying about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model. FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract the management of servers from the users, allowing them to focus their efforts solely on the development of applications.
The problem with FaaS is that it focuses on microservices and tends to have limitations regarding the execution time and the computing capabilities (e.g. lack of support for acceleration hardware such as GPUs). However, it has been demonstrated that the self-provisioning capability and high degree of parallelism of these services can be well suited to broader applications. In addition, their inherent event-driven triggering makes functions perfectly suitable to be defined as steps in file processing workflows (e.g. scientific computing workflows). Furthermore, the rise of smart and embedded devices (IoT), innovations in communication networks and the need to reduce latency in challenging use cases have led to the concept of Edge computing. Edge computing consists of conducting the processing on devices close to the data sources to improve response times. The coupling of this paradigm together with Cloud computing, involving architectures with devices at different levels depending on their proximity to the source and their compute capability, has been coined as Cloud Computing Continuum (or Computing Continuum). Therefore, this PhD thesis aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved. / Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Doctoral thesis]. Universitat Politècnica de València.
https://doi.org/10.4995/Thesis/10251/202013 / Compendio
8

Function as a Service : En fallstudie av Pennan & Svärdet och dess applikation Warstories

Neterowicz, Martin, Johansson, Jacob January 2017 (has links)
Every year a tremendous amount of resources is lost on failed IT systems. It is therefore of interest to explore potential cost-saving technologies. One such technology, which has been around for many years, is Cloud Computing. Cloud Computing can potentially lower the costs of IT projects by, for example, eliminating the need to maintain server hardware. One of the more recent additions to the Cloud Computing assortment is Function as a Service (FaaS). What is becoming increasingly problematic about the assortment of Cloud Computing services is knowing which service is best suited for a company or project. This study therefore examines FaaS to answer the following questions: what value does FaaS add for developers when building applications, how does implementing FaaS differ from IaaS, and what are potential motives behind the usage of FaaS, thereby providing guidance when choosing a Cloud Computing service. To analyze the results, the LEAN Software Development (LSD) model has been used to identify where FaaS reduces, and potentially adds, waste in software development. A case study has been made of a small organization (fewer than 50 employees) that is experimenting with Amazon Web Services' implementation of FaaS, Lambda. The conclusion of the study is that even though not all aspects of LSD are applicable to all companies or projects, the payment model of Lambda makes it advantageous for organizations to try it out for themselves.
9

A FaaS Instance Management Algorithm for Minimizing Cost subject to Response Time / Algoritm för hantering av FaaS-instanser för att minimera kostnaderna med hänsyn till svarstiden

Zhang, Tianyu January 2022 (has links)
With the development of cloud computing technologies, the concept of Function as a Service (FaaS) has become increasingly popular over the years. Developers can choose to create applications in the form of functions and delegate the deployment and management of the infrastructure to the FaaS provider. Before a function can be executed on the infrastructure of the FaaS provider, an environment to execute the function needs to be initialized; this environment initialization is known as a cold start. Loading and maintaining a function is costly for FaaS providers, especially the cold start process, which consumes more system resources, such as CPU and memory, than keeping functions alive. Therefore, it is essential to prevent cold starts whenever possible, since they increase both response time and cost. An instance management policy needs to be implemented to reduce the probability of cold starts while minimizing costs. This project's objective is to develop an instance management algorithm that minimizes total cost while meeting response time requirements. By investigating three widely used instance management algorithms, we found that none of them utilizes the dependencies that exist between functions. We believe these dependencies can be used to reduce response time and cold start probability by predicting upcoming invocations. Leveraging this observation, we propose a novel Dependency Based Algorithm (DBA). Using extensive simulations, we show that the proposed algorithm solves the problem and provides low response times at low cost compared to the baselines.
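The abstract does not specify the Dependency Based Algorithm itself, so the sketch below only illustrates the instance-management trade-off it addresses, using a simple fixed keep-alive policy (one of the widely used baselines in this area): keeping an instance warm after each invocation avoids cold starts for closely spaced requests but consumes resources while idle. The latency constants and the resource-cost proxy are assumptions for the illustration.

```python
# Hedged sketch of the instance-management trade-off: a fixed
# keep-alive policy avoids cold starts for bursty traffic at the
# price of resources spent keeping instances warm. Constants are
# illustrative assumptions, not measurements from the thesis.

COLD_START_MS = 500   # assumed cold-start penalty
WARM_START_MS = 5     # assumed warm-start latency

def simulate(arrivals_ms, keep_alive_ms):
    """Replay invocation timestamps under a fixed keep-alive window.

    Returns (total response-time overhead in ms, total keep-alive time
    scheduled in ms — a rough proxy for memory cost).
    """
    total_latency = 0.0
    keep_alive_budget = 0.0
    warm_until = -1.0  # instance starts cold
    for t in arrivals_ms:
        if t <= warm_until:
            total_latency += WARM_START_MS   # instance still warm
        else:
            total_latency += COLD_START_MS   # cold start required
        warm_until = t + keep_alive_ms       # extend the warm window
        keep_alive_budget += keep_alive_ms
    return total_latency, keep_alive_budget

# A longer keep-alive window trades resource cost for fewer cold starts.
bursty = [0, 100, 200, 10_000]
lat_short, cost_short = simulate(bursty, keep_alive_ms=50)
lat_long, cost_long = simulate(bursty, keep_alive_ms=1000)
```

A dependency-aware policy like the thesis's DBA would instead pre-warm a function when an upstream function it depends on is invoked; the fixed-window baseline above is only the simplest point in that design space.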
