1 |
Fireworks: A Fast, Efficient and Safe Serverless Framework Shin, Wonseok 01 June 2021 (has links)
Serverless computing is a new paradigm that is rapidly gaining popularity in Cloud computing. It has an interesting, unique property: the unit of deployment and execution is a serverless function. Moreover, it introduces a new economic model, pay-as-you-go billing, which provides high economic benefit through highly elastic resource provisioning for the application.
However, serverless computing also brings new challenges: (1) start-up latency that is large relative to the typically short function execution times, (2) high security risk from the highly consolidated environment, and (3) poor memory efficiency caused by unpredictable function invocations. These problems not only degrade performance but also lower the economic benefits for Cloud providers.
In this work, we propose the VM-level pre-JIT snapshot and develop Fireworks to solve these three challenges without compromise. The key idea behind the VM-level pre-JIT snapshot is to leverage pre-JITted serverless function code to reduce both the start-up and execution time of a function, and to improve memory efficiency by sharing the pre-JITted code. Fireworks also provides strong isolation by storing the pre-JITted code in the microVM's snapshot. Our evaluation shows that Fireworks outperforms state-of-the-art serverless platforms by up to 20.6× in start-up latency and improves memory efficiency by up to 7.3×. / Master of Science / Serverless computing is one of the most popular paradigms in cloud computing. Contrary to its name, developers still write and run their code on servers, but those servers are managed by cloud providers. The number of servers and the amount of CPU and memory required are automatically adjusted in proportion to the incoming traffic. Users also pay only for what they use, and this pay-as-you-go model is attracting attention as a new kind of infrastructure. Serverless computing continues to evolve, with active research in both industry and academia. Many efforts target the cold start, the delay incurred in creating the necessary resources the first time a serverless program runs. Serverless platforms prepare resources in advance or provide lighter cloud resources, but this can waste resources or increase the security risk. In this work, we propose a fast, efficient, and safe serverless framework. We use Just-In-Time (JIT) compilation, which can improve the performance of the interpreted languages widely used in serverless computing, and we keep the JIT-generated machine code in a snapshot for reuse. Security is guaranteed by the VM-level snapshot, and because the snapshot can be shared, memory efficiency increases as well. Through our implementation and evaluation, we show that Fireworks improves cold start performance by up to 20 times and memory efficiency by more than 7 times over state-of-the-art serverless platforms. We believe our research opens a new way to use JIT compilation and snapshots in serverless computing.
|
2 |
Kan man spara tid och pengar genom att migrera till serverless computing? Wahlman, Christoffer, Wallin, Philip January 2023 (has links)
In the IT world, WebJobs are very often used to carry out smaller tasks on the internet. A WebJob uses a server that runs in the background. In 2016, Amazon, Microsoft, and other large companies introduced "Serverless Computing", which can execute jobs over the internet without the developer having to think about how many resources are used, or about keeping a server active in the background at all times. With Microsoft's Azure Functions, a job is executed when a trigger has been activated, and when the job is finished nothing keeps running in the background for the user. This means that resources are allocated dynamically and that you are only billed for the resources the function actually used. Together with Visma SPCS, we work on a product that moves users' information to a cloud server. We then convert this product from an Azure WebJob to an Azure Function, making it serverless. With this we want to analyze whether execution becomes faster when the user does not need to allocate resources, and whether the company can save money by performing this type of migration. Serverless computing is much discussed in the IT world right now, and there are many debates and articles arguing for or against making this move. We therefore want to make the move ourselves and analyze whether it pays off for companies in the long run to spend the time it takes to convert a working program to a new, modern architecture. To find out, we analyzed Visma's current program, reviewed which algorithms and functions we needed to rewrite to make the conversion, and added a logging system so that we could easily analyze the time each execution of the job takes.
From the data we collected, we can conclude that the cost went from about 1,200 kr/month to 0 kr/month. This is because the job that executes is so small, and because Visma's old job executed every five minutes looking for files to move, whereas our job is only started when there is something new to transfer. Microsoft also provides one million free executions per month for licensed users, and Visma reaches about 8,000 executions per month. We also found that execution became 6 times faster than the old solution, because Microsoft itself allocates the resources best optimized for this particular job.
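The billing argument above can be made concrete with some back-of-the-envelope arithmetic (our own sketch, not Visma's or Microsoft's exact figures): a WebJob polling every five minutes runs on a fixed schedule regardless of work, while a triggered Azure Function runs only per event, and both volumes fall far inside an assumed free tier of one million executions per month.

```python
# Back-of-the-envelope sketch of the cost comparison: a WebJob polling
# every 5 minutes versus an event-triggered function, under an assumed
# free tier of 1,000,000 executions per month (consumption-plan style).

FREE_TIER = 1_000_000  # assumed free executions per month

def polling_executions_per_month(interval_minutes, days=30):
    """A polling job runs on its schedule whether or not there is work."""
    return (24 * 60 // interval_minutes) * days

def billed_executions(executions, free_tier=FREE_TIER):
    """Only executions beyond the free tier are billed."""
    return max(0, executions - free_tier)

polled = polling_executions_per_month(5)  # every 5 minutes
triggered = 8_000                         # roughly Visma's monthly events

print(polled)                        # 8640 schedule-driven runs, mostly no-ops
print(billed_executions(polled))     # 0 -> well under the free tier
print(billed_executions(triggered))  # 0 -> the observed 0 kr/month
```

Either way the monthly bill is zero here; the migration's saving comes from retiring the always-on server behind the WebJob, not from per-execution charges.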
|
3 |
Serverless Computing som Function-as-a-Service : Skillnader i prestanda mellan GCP, Azure och AWS Kristiansson, Albin January 2022 (has links)
The pace of digitalization is ever-increasing. To fill society's need for digitalization, a digital workforce is needed, as well as the infrastructure to support said workforce. In the wake of digitalization, cloud computing and cloud providers have become an integrated part of software production. An abstraction layer that builds on top of cloud computing has gained traction over the last couple of years: serverless computing. This is an abstraction layer provided by cloud providers that takes away the responsibility of scaling and maintaining servers. This study constructs a framework to benchmark performance of serverless infrastructure on three large cloud providers. The framework is a grey-box implementation of a recursive algorithm that calculates the 45th number in a Fibonacci series. Said algorithm is tested in Python, Java and NodeJS. The tests are conducted on the cloud providers Google Cloud Platform, Amazon Web Services and Microsoft Azure. The purpose of the study is to show any differences in execution time and memory consumption, for the given algorithm, across all three platforms and between the programming languages. The study shows that there are statistically significant differences in execution time as well as memory consumption, for all coding languages, between all three platforms. The biggest difference is observed for NodeJS, followed by Java and lastly Python. On an aggregated level, there are greater differences in memory consumption than in execution time.
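The benchmark workload described above is easy to sketch (the study runs fib(45); we time a much smaller n here so the sketch finishes quickly, and the timing harness is ours, not the study's exact framework):

```python
import time

# Sketch of the CPU-bound benchmark workload: naive recursive Fibonacci.
# The study computes fib(45); we use a small n so this runs in milliseconds.

def fib(n):
    """Naive recursion: exponential work, ideal as a CPU-bound benchmark."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def timed(n):
    """Return (result, elapsed seconds) for one benchmark invocation."""
    start = time.perf_counter()
    result = fib(n)
    return result, time.perf_counter() - start

result, seconds = timed(20)
print(result)       # 6765
print(seconds > 0)  # True
```

Deployed as a function on each platform, repeated invocations of such a workload expose exactly the two quantities the study compares: execution time and peak memory.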
|
4 |
EdgeFn: A Lightweight Customizable Data Store for Serverless Edge Computing Paidiparthy, Manoj Prabhakar 01 June 2023 (has links)
Serverless Edge Computing is an extension of the serverless computing paradigm that enables the deployment and execution of modular software functions on resource-constrained edge devices. However, it poses several challenges due to the edge network's dynamic nature and serverless applications' latency constraints. In this work, we introduce EdgeFn, a lightweight distributed data store for serverless edge computing systems. While serverless computing platforms simplify the development and automated management of software functions, running serverless applications reliably on resource-constrained edge devices poses multiple challenges. These challenges include a lack of flexibility, minimal control over management policies, high data-shipping costs, and cold start latencies. EdgeFn addresses these challenges by providing distributed data storage for serverless applications and allows users to define custom policies that affect the life cycle of serverless functions and their objects. First, we study the challenges existing serverless systems face in adapting to the edge environment. Second, we propose a distributed data store on top of a Distributed Hash Table (DHT) based Peer-to-Peer (P2P) overlay, which achieves data locality by co-locating a function and its data. Third, we implement programmable callbacks for storage operations, which users can leverage to define custom policies for their applications. We also describe some use cases that can be built using these callbacks. Finally, we evaluate EdgeFn's scalability and performance using an industry-generated trace workload and real-world edge applications. / Master of Science
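The two mechanisms named above, hash-based co-location and programmable storage callbacks, can be sketched as follows (our simplification for illustration, not EdgeFn's implementation; node names and the key scheme are invented):

```python
import hashlib

# Illustrative sketch: keys hash onto a fixed set of edge nodes, objects
# are placed by their function's name so function and data are co-located,
# and user-defined callbacks fire on storage operations (the
# "programmable policies"). All names here are hypothetical.

NODES = ["edge-a", "edge-b", "edge-c"]  # hypothetical edge nodes

def owner(key, nodes=NODES):
    """Consistent placement: hash the key onto one of the nodes."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

def placement_key(object_key):
    """Objects are placed by their function's name (the prefix before '/'),
    so a function and all of its objects land on the same node."""
    return object_key.split("/", 1)[0]

class Store:
    """A node-local object store with programmable put-callbacks."""
    def __init__(self):
        self.objects = {}
        self.on_put = []  # user-defined policy hooks

    def put(self, key, value):
        self.objects[key] = value
        for cb in self.on_put:  # e.g. replication or TTL policies
            cb(key, value)

events = []
store = Store()
store.on_put.append(lambda k, v: events.append(k))  # a toy audit policy

store.put("thumbnail/42", b"...")
print(events)                                                      # ['thumbnail/42']
print(owner("thumbnail") == owner(placement_key("thumbnail/42")))  # True: co-located
```

A real DHT would use a ring with virtual nodes and handle churn, but the locality argument is the same: placing the function and its objects under one key avoids shipping data across the edge network.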
|
5 |
Punching Holes in the Cloud: Direct Communication between Serverless Functions Using NAT Traversal Moyer, Daniel William 04 June 2021 (has links)
A growing use for serverless computing is large parallel data processing applications that take advantage of its on-demand scalability. Because individual serverless compute nodes, which are called functions, run in isolated containers, a major challenge with this paradigm is transferring temporary computation data between functions. Previous works have performed inter-function communication using object storage, which is slow, or in-memory databases, which are expensive. We evaluate the use of direct network connections between functions to overcome these limitations. Although function containers block incoming connections, we are able to bypass this restriction using standard NAT traversal techniques. By using an external server, we implement TCP hole punching to establish direct TCP connections between functions. In addition, we develop a communications framework to manage NAT traversal and data flow for applications using direct network connections. We evaluate this framework with a reduce-by-key application compared to an equivalent version that uses object storage for communication. For a job with 100+ functions, our TCP implementation runs 4.7 times faster at almost half the cost. / Master of Science / Serverless computing is a branch of cloud computing where users can remotely run small programs, called "functions," and pay only based on how long they run. A growing use for serverless computing is running large data processing applications that use many of these serverless functions at once, taking advantage of the fact that serverless programs can be started quickly and on-demand. Because serverless functions run on isolated networks from each other and can only make outbound connections to the public internet, a major challenge with this paradigm is transferring temporary computation data between functions. Previous works have used separate types of cloud storage services in combination with serverless computing to allow functions to exchange data. 
However, hard-drive-based storage is slow and memory-based storage is expensive. We evaluate the use of direct network connections between functions to overcome these limitations. Although functions cannot receive incoming network connections, we are able to bypass this restriction by using a standard networking technique called Network Address Translation (NAT) traversal. We use an external server as an initial relay to set up a network connection between two functions such that once the connection is established, the functions can communicate directly with each other without using the server anymore. In addition, we develop a communications framework to manage NAT traversal and data flow for applications using direct network connections. We evaluate this framework with an application for combining matching data entries and compare it to an equivalent version that uses storage based on hard drives for communication. For a job with over 100 functions, our implementation using direct network connections runs 4.7 times faster at almost half the cost.
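The rendezvous step that precedes hole punching can be modeled in a few lines (a toy simulation, not the thesis framework; real hole punching then requires both sides to open actual TCP sockets simultaneously):

```python
# Toy simulation of the rendezvous step: each function registers its
# public endpoint with an external server and learns its peer's endpoint.
# With both endpoints known, the two sides dial each other at the same
# time, which opens matching NAT mappings and lets a direct TCP
# connection succeed. Only the endpoint exchange is modeled here.

class RendezvousServer:
    def __init__(self):
        self.endpoints = {}  # job_id -> {function_id: (ip, port)}

    def register(self, job_id, function_id, endpoint):
        """A function reports the public (ip, port) the NAT assigned it."""
        self.endpoints.setdefault(job_id, {})[function_id] = endpoint

    def peer_of(self, job_id, function_id):
        """Return the endpoint of the other function in the pair, if known."""
        peers = self.endpoints.get(job_id, {})
        others = [ep for fid, ep in peers.items() if fid != function_id]
        return others[0] if others else None

server = RendezvousServer()
server.register("job-1", "mapper-0", ("54.1.2.3", 40001))
server.register("job-1", "reducer-0", ("54.9.8.7", 40002))

print(server.peer_of("job-1", "mapper-0"))   # ('54.9.8.7', 40002)
print(server.peer_of("job-1", "reducer-0"))  # ('54.1.2.3', 40001)
```

After the exchange the server drops out of the data path entirely, which is why the approach avoids both the latency of object storage and the cost of an in-memory database.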
|
6 |
A Comparative Study on Container Orchestration and Serverless Computing Platforms Kushkbaghi, Nick January 2024 (has links)
This report compares the performance of container orchestration architecture and serverless computing platforms within cloud computing. The focus is on their application in managing real-time communications for electric vehicle (EV) charging systems using the Open Charge Point Protocol (OCPP). With the growing demand for efficient and scalable cloud solutions, especially in sectors using Internet of Things (IoT) and real-time communication technologies, this study investigates how different architectures handle high-load scenarios and real-time data transmission. Through systematic load testing of Kubernetes (for container orchestration) and Azure Functions (for serverless computing), the report measures and analyzes response times, throughput, and error rates at various demand levels. The findings indicate that while Kubernetes performs robustly under consistent loads, Azure Functions excel in managing dynamic, high-load conditions, showcasing superior scalability and efficiency. A controlled experiment method ensures a precise and objective assessment of performance differences. The report concludes by proposing a hybrid model that leverages the strengths of both architectures to optimize cloud resource utilization and performance.
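The three metrics the load tests report can be computed with a small helper (our own sketch, not the study's tooling; the sample data is invented):

```python
# Sketch of the load-test metrics named above: response-time percentile,
# throughput, and error rate, computed from (latency_seconds, status)
# samples collected during a test window.

def percentile(latencies, p):
    """Nearest-rank percentile of a non-empty latency list."""
    ordered = sorted(latencies)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

def summarize(samples, duration_seconds):
    latencies = [lat for lat, _ in samples]
    errors = sum(1 for _, status in samples if status >= 500)
    return {
        "p95_s": percentile(latencies, 95),
        "throughput_rps": len(samples) / duration_seconds,
        "error_rate": errors / len(samples),
    }

# e.g. 8 OCPP messages observed over a 2-second window (invented data):
samples = [(0.10, 200), (0.12, 200), (0.11, 200), (0.50, 503),
           (0.09, 200), (0.13, 200), (0.11, 200), (0.10, 200)]
stats = summarize(samples, duration_seconds=2.0)
print(stats["throughput_rps"])  # 4.0
print(stats["error_rate"])      # 0.125
```

Comparing these summaries across demand levels is exactly how the consistent-load (Kubernetes) versus bursty-load (Azure Functions) distinction shows up in practice.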
|
7 |
Cloud Computing Pricing and Deployment Efforts : Navigating Cloud Computing Pricing and Deployment Efforts: Exploring the Public-Private Landscape / Prissättning och Implementeringsinsatser för Molntjänster : Att Navigera Molntjänsters Prissättning och Implementeringsinsatser: Utforska det Offentlig-Privata Landskapet Kristiansson, Casper, Lundström, Fredrik January 2023 (has links)
The expanding adoption of cloud computing services by businesses has transformed IT infrastructure and data management in the computing space. Cloud computing offers advantages such as availability, scalability, and cost-effectiveness, making it a favored choice for businesses of all sizes. The aim of this thesis is to compare private and public cloud computing services in terms of pricing and implementation effort, as well as to compare the cloud providers to each other. The top three cloud providers examined are Google GCP, Microsoft Azure, and Amazon AWS. The study examines different pricing models and evaluates their effectiveness in different business scenarios. In addition, the thesis discusses the challenges associated with building and maintaining private infrastructure, and the deployment of applications to cloud computing services is examined. The research methodology involves data collection, analysis, and a case study of developing and deploying a ticketing system application on different cloud platforms. The ticketing system provides a realistic example for investigating the cloud providers. The findings will help companies make informed decisions regarding the selection of the most appropriate cloud computing service based on pricing models and implementation efforts. The thesis provides valuable information on private and public cloud computing and recommends appropriate pricing models for different scenarios. This study adds to existing knowledge by analyzing current pricing models and deployment concepts in cloud computing. The thesis does not propose new solutions but follows a structured format, compiling information on private and public cloud computing and a comprehensive review of cloud computing pricing models and marketing efforts.
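The public-versus-private pricing comparison reduces to a break-even calculation, sketched below with invented numbers (these are illustrative placeholders, not GCP, Azure, or AWS list prices):

```python
# Simplified sketch of the pricing comparison: monthly cost of a
# pay-per-use (public FaaS) model versus a fixed-cost (private or
# self-hosted) model, and the break-even request volume. All prices
# here are invented for illustration.

def pay_per_use_cost(requests, price_per_million=0.20,
                     gb_seconds=0, price_per_gb_second=0.0000166667):
    """Cost scales with usage: per-request plus optional compute charges."""
    return (requests / 1_000_000 * price_per_million
            + gb_seconds * price_per_gb_second)

def fixed_cost(servers, price_per_server=120.0):
    """Cost is flat: amortized hardware and upkeep, regardless of traffic."""
    return servers * price_per_server

def break_even_requests(servers, price_per_server=120.0,
                        price_per_million=0.20):
    """Requests/month at which pay-per-use catches up with fixed cost."""
    return fixed_cost(servers, price_per_server) / price_per_million * 1_000_000

print(pay_per_use_cost(5_000_000))           # 1.0
print(fixed_cost(2))                         # 240.0
print(break_even_requests(1) > 100_000_000)  # True: huge volume needed
```

The qualitative conclusion such a model supports is the usual one: low or bursty traffic favors pay-per-use pricing, while sustained high volume can favor fixed-cost private infrastructure.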
|
8 |
Serverless Computing Strategies on Cloud Platforms Naranjo Delgado, Diana María 08 February 2021 (has links)
[EN] With the development of Cloud Computing, the delivery of virtualized resources over the Internet has greatly grown in recent years. Functions as a Service (FaaS), one of the newest service models within Cloud Computing, allows the development and implementation of event-based applications that cover managed services in public and on-premises Clouds. Public Cloud Computing providers adopt the FaaS model within their catalog to provide event-driven highly-scalable computing for applications.
On the one hand, developers specialized in this technology focus on creating open-source serverless frameworks to avoid the lock-in with public Cloud providers. Despite the development achieved by serverless computing, there are currently fields related to data processing and execution performance optimization where the full potential has not been explored.
In this doctoral thesis, three serverless computing strategies are defined that demonstrate the benefits of this technology for data processing. The implemented strategies enable data analysis with the integration of accelerator devices for the efficient execution of scientific applications on public and on-premises Cloud platforms.
Firstly, the CloudTrail-Tracker platform was developed to extract and process learning analytics in the Cloud. CloudTrail-Tracker is an event-driven open-source platform for serverless data processing that can automatically scale up and down, featuring the ability to scale to zero for minimizing the operational costs.
Next, the integration of GPUs in an event-driven on-premises serverless platform for scalable data processing is discussed. The platform supports the execution of applications as serverless functions in response to the loading of a file in a file storage system, which allows the parallel execution of applications according to available resources. This processing is managed by an elastic Kubernetes cluster that automatically grows and shrinks according to the processing needs. Certain approaches based on GPU virtualization technologies such as rCUDA and NVIDIA-Docker are evaluated to speed up the execution time of the functions.
Finally, another solution based on the serverless model is implemented to run the inference phase of previously trained machine learning models on the Amazon Web Services platform and in a private platform with the OSCAR framework. The system grows elastically according to demand and is scaled to zero to minimize costs. On the other hand, the front-end provides the user with a simplified experience in obtaining the prediction of machine learning models.
To demonstrate the functionalities and advantages of the solutions proposed during this thesis, several case studies are collected covering different fields of knowledge such as learning analytics and Artificial Intelligence. This shows the wide range of applications where serverless computing can bring great benefits. The results obtained endorse the use of the serverless model in simplifying the design of architectures for the intensive data processing in complex applications. / Naranjo Delgado, DM. (2021). Serverless Computing Strategies on Cloud Platforms [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/160916
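The elastic, scale-to-zero behavior that recurs through these strategies can be sketched as a tiny control rule (our simplification, not OSCAR's or AWS's actual control loop; the thresholds are invented): desired replicas follow the event-queue depth and drop to zero when there is no work, so an idle service incurs no cost.

```python
# Sketch of a scale-to-zero autoscaling rule: replicas grow in proportion
# to pending events, are capped by the cluster size, and drop to zero
# when the queue is empty. Thresholds are invented for illustration.

def desired_replicas(queue_depth, events_per_replica=10, max_replicas=8):
    """Scale in proportion to pending events, capped, with scale-to-zero."""
    if queue_depth == 0:
        return 0                                      # idle: nothing billed
    needed = -(-queue_depth // events_per_replica)    # ceiling division
    return min(needed, max_replicas)

print(desired_replicas(0))    # 0 -> scaled to zero while idle
print(desired_replicas(25))   # 3 -> grows with demand
print(desired_replicas(500))  # 8 -> capped at the cluster limit
```

In the thesis's platforms this same loop is driven by events such as file uploads, with an elastic Kubernetes cluster growing and shrinking underneath the functions.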
|