51 |
Evaluating machine learning strategies for classification of large-scale Kubernetes cluster logs
Sarika, Pawan, January 2022
Kubernetes is a free, open-source container orchestration system for deploying and managing Docker containers that host microservices. Its cluster logs are extremely helpful in determining the root cause of a failure. However, as systems become more complex, locating failures becomes more difficult and time-consuming. This study aims to identify classification algorithms that accurately classify the given log data while requiring fewer computational resources. Because the data is quite large, we begin with expert-based feature selection to reduce the data size. Following that, TF-IDF feature extraction is performed, and finally we compare five classification algorithms (SVM, KNN, random forest, gradient boosting, and MLP) using several metrics. The results show that random forest produces good accuracy while requiring fewer computational resources than the other algorithms.
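As a hedged sketch of the described pipeline (TF-IDF features followed by a random forest), using scikit-learn; the log lines, labels, and hyperparameters are illustrative placeholders, not the study's data:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical log lines and failure-class labels standing in for the thesis dataset.
logs = [
    "kubelet: failed to pull image registry.example/app:1.2",
    "oom-killer: killed process 1234 (java) in pod billing-7f9c",
    "kubelet: back-off pulling image registry.example/app:1.3",
    "oom-killer: killed process 4321 (node) in pod search-5d2a",
]
labels = ["image_pull_error", "oom_kill", "image_pull_error", "oom_kill"]

X_train, X_test, y_train, y_test = train_test_split(
    logs, labels, test_size=0.5, random_state=0, stratify=labels)

# TF-IDF feature extraction, as in the study.
vectorizer = TfidfVectorizer(max_features=5000)
X_train_tfidf = vectorizer.fit_transform(X_train)
X_test_tfidf = vectorizer.transform(X_test)

# Random forest: the best accuracy/cost trade-off according to the results.
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(X_train_tfidf, y_train)
print(classification_report(y_test, clf.predict(X_test_tfidf)))
```

The same skeleton extends to the other four classifiers by swapping `clf` for the corresponding scikit-learn estimator.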
|
52 |
Transformation of Directed Acyclic Graphs into Kubernetes Deployments with Optimized Latency / Transformation av riktade acykliska grafer till Kubernetes-distributioner med optimerad latens
Almgren, Robert; Lidekrans, Robin, January 2022
In telecommunications, there is currently a lot of work being done to migrate to the cloud, and a lot of specialized hardware is being exchanged for virtualized solutions. One important part of telecommunication networks that is yet to be moved to the cloud is the base-band unit, which sits between the antennas and the core network. The base-band unit has very strict latency requirements, making it unsuitable for out-of-the-box cloud solutions. Ericsson is therefore investigating whether cloud solutions can be customized in such a way that base-band unit functionality can be virtualized as well. One such customization is to describe the functionality of a base-band unit using a directed acyclic graph (DAG) and deploy it to a cloud environment using Kubernetes. This thesis sets out to take applications represented as a DAG and deploy them using Kubernetes in such a way that the network latency is reduced compared to the deployment generated by the default Kubernetes scheduler. The problem of placing the applications onto the available hardware resources was formulated as an integer linear programming (ILP) problem, implemented using Pyomo, and solved with the open-source solver GLPK to obtain an optimized placement. This placement was then used to generate a configuration file for deploying the applications with Kubernetes. A mock application was developed to evaluate the optimized placement. The evaluation carried out in this thesis shows that the optimized placement obtained from the solution can improve the average round-trip latency of applications represented as a DAG by up to 30% compared to the default Kubernetes scheduler.
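The thesis's exact formulation isn't given in the abstract; the following Pyomo model is a minimal sketch of the placement idea, with a toy DAG, a hypothetical latency matrix and node capacities, and the standard linearization of the quadratic placement term, solved with GLPK:

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           Binary, SolverFactory, minimize)

apps = ["a1", "a2", "a3"]             # DAG vertices (applications)
edges = [("a1", "a2"), ("a2", "a3")]  # DAG edges (traffic flows)
nodes = ["n1", "n2"]                  # worker nodes
lat = {("n1", "n1"): 0, ("n1", "n2"): 5, ("n2", "n1"): 5, ("n2", "n2"): 0}
cap = {"n1": 2, "n2": 2}              # pods per node (hypothetical)

m = ConcreteModel()
m.x = Var(apps, nodes, domain=Binary)  # x[a,n] = 1 if app a runs on node n

# Each application is placed on exactly one node; nodes have limited capacity.
m.place = Constraint(apps, rule=lambda m, a: sum(m.x[a, n] for n in nodes) == 1)
m.cap = Constraint(nodes, rule=lambda m, n: sum(m.x[a, n] for a in apps) <= cap[n])

# y[e,u,v] = 1 if edge e has its endpoints on nodes u and v (linearization).
E = range(len(edges))
m.y = Var(E, nodes, nodes, domain=Binary)
m.l1 = Constraint(E, nodes, nodes, rule=lambda m, e, u, v: m.y[e, u, v] <= m.x[edges[e][0], u])
m.l2 = Constraint(E, nodes, nodes, rule=lambda m, e, u, v: m.y[e, u, v] <= m.x[edges[e][1], v])
m.l3 = Constraint(E, nodes, nodes, rule=lambda m, e, u, v:
                  m.y[e, u, v] >= m.x[edges[e][0], u] + m.x[edges[e][1], v] - 1)

# Minimize total inter-node latency along DAG edges.
m.obj = Objective(expr=sum(lat[u, v] * m.y[e, u, v]
                           for e in E for u in nodes for v in nodes),
                  sense=minimize)

SolverFactory("glpk").solve(m)
for a in apps:
    for n in nodes:
        if m.x[a, n].value > 0.5:
            print(f"{a} -> {n}")
```

The resulting assignment can then be emitted as node selectors or affinity rules in a Kubernetes manifest, which is essentially the configuration-file generation step the abstract describes.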
|
53 |
Performance Evaluation of Kubernetes Autoscaling strategies on GKE clusters / Prestandautvärdering av autoskalningsstrategier på GKE-kluster
Nilsen, Johanna, January 2023
Cloud computing and containerisation have experienced significant growth in recent years. With cloud providers requiring users to specify resource limits and requests, the need for performance and resource optimisation has emerged in the cloud computing domain. This thesis examines three autoscaling approaches in the Kubernetes container orchestrator: Hybrid Pod Autoscaler, Vertical Pod Autoscaler (VPA), and Horizontal Pod Autoscaler (HPA). To conduct the analysis, a production-grade microservice was deployed on a GKE cluster, replicating the workload of the host company Nordnet Bank AB, a pan-Nordic platform for savings and investments. The main objective was to investigate the impact of the different autoscalers on the 50th and 99th percentile response times. The study also aimed to investigate whether a hybrid pod autoscaler, combining VPA and HPA, could outperform HPA and VPA in terms of response time and CPU usage, and to identify the service metrics that an orchestrator can use to achieve response times similar to those obtained when resources are over-provisioned. The research findings indicate that response times varied significantly depending on the autoscaling strategy. While the 50th percentile response times remained consistent, the 99th percentile exhibited greater variation. Among the strategies, HPA demonstrated consistent performance, albeit with greater variability in the 99th percentile response times. The VPA strategy, in contrast, resulted in higher response times for both the 50th and 99th percentiles compared to the baseline. The hybrid approach generally outperformed VPA in terms of response times while showing performance comparable to HPA, although with slightly greater variability. CPU usage patterns of the hybrid approach were more closely aligned with HPA than VPA. CPU usage and request rate were effectively used as service metrics for orchestrators in achieving acceptable 99th percentile response times, as demonstrated by both HPA and the hybrid approach. Nevertheless, these findings are contingent on the specific autoscaler configuration, microservice, and workload model used in this study and may not be universally applicable.
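The evaluated configurations are not reproduced here; as a hedged illustration, the sketch below creates an HPA like the ones studied, using the official Kubernetes Python client (the deployment name, namespace, replica bounds, and CPU target are hypothetical):

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

# autoscaling/v1 HPA: scale a Deployment on average CPU utilization.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="demo-hpa", namespace="default"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="demo-service"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% average CPU
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```

A hybrid setup like the one studied would pair such an HPA with a VPA object managing the same workload's resource requests, which is exactly the combination whose interaction the thesis measures.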
|
54 |
IMPROVING MICROSERVICES OBSERVABILITY IN CLOUD-NATIVE INFRASTRUCTURE USING EBPF
Bhavye Sharma (15345346), 26 April 2023
Microservices have emerged as a popular pattern for developing large-scale applications in cloud environments for their flexibility, scalability, and agility benefits. However, microservices make management more complex due to their scale, multiple languages, and distributed nature. Orchestration and automation tools like Kubernetes help deploy microservices running simultaneously, but it can be difficult for an operator to understand their behaviors, interdependencies, and interactions. In such a complex and dynamic environment, performance problems (e.g., slow application responses and high resource usage) require significant human effort spent on diagnosis and recovery. Moreover, manual diagnosis of cloud microservices tends to be tedious, time-consuming, and impractical. Effective and automated performance analysis and anomaly detection require an observable system, which means an application's internal state can be inferred by observing and tracking metrics, traces, and logs. Traditional application performance monitoring (APM) uses libraries and SDKs to improve application monitoring and tracing, but carries the additional overhead of rewriting, recompiling, and redeploying the applications' code base. Therefore, there is a critical need for a standardized, automated microservices observability solution that does not require rewriting or redeploying the application to keep up with the agility of microservices.
This thesis studies observability for microservices and implements an automated Extended Berkeley Packet Filter (eBPF) based observability solution. eBPF is a Linux feature that allows us to write extensions to the Linux kernel for security and observability use cases. eBPF does not require modifying the application layer or instrumenting the individual microservices. Instead, it instruments the kernel-level API calls, which are common across all hosts in the cluster. eBPF programs provide observability information from the lowest-level system calls and can export data without additional performance overhead. The Prometheus time-series database is leveraged to store all the captured metrics and traces for analysis. With the help of our tool, a DevOps engineer can easily identify abnormal behavior of microservices and enforce appropriate countermeasures. Using Chaos Mesh, we inject anomalies at the network and host layers, which we can identify, including their root cause, using the proposed solution. The Chameleon cloud testbed is used to deploy our solution and test its capabilities and limitations.
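The thesis's tool itself is not reproduced here; as a hedged sketch of the underlying idea, the following bcc program counts connect() system calls per process from the kernel side, without touching application code (bcc and a recent kernel are assumed; the exported table could be scraped into Prometheus):

```python
from bcc import BPF

# eBPF program (C, compiled by bcc): count connect() syscalls per PID.
prog = r"""
BPF_HASH(counts, u32, u64);

int trace_connect(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    u64 zero = 0, *val = counts.lookup_or_try_init(&pid, &zero);
    if (val) { (*val)++; }
    return 0;
}
"""

b = BPF(text=prog)
# Attach to the connect() syscall entry point; no application changes needed.
b.attach_kprobe(event=b.get_syscall_fnname("connect"), fn_name="trace_connect")

print("Tracing connect() per process... Ctrl-C to stop")
try:
    while True:
        pass  # in a real exporter, periodically read b["counts"] here
except KeyboardInterrupt:
    for pid, count in b["counts"].items():
        print(pid.value, count.value)
```

This illustrates the abstract's central point: the probe lives at the kernel API boundary shared by every container on the host, so one program observes all microservices regardless of language.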
|
55 |
Scaling cloud-native Apache Spark on Kubernetes for workloads in external storages
Mrowczynski, Piotr, January 2018
CERN Scalable Analytics Section currently offers shared YARN clusters to its users for monitoring, security, and experiment operations. YARN clusters with data in HDFS are difficult to provision and complex to manage and resize. This imposes new data and operational challenges to satisfy future physics data processing requirements. As of 2018, there were over 250 PB of physics data stored in CERN's mass storage, called EOS. The Hadoop-XRootD Connector allows reading data stored in CERN EOS over the network. CERN's on-premise private cloud, based on OpenStack, allows provisioning of on-demand compute resources. The emergence of technologies such as Containers-as-a-Service in OpenStack Magnum, and support for Kubernetes as a native resource scheduler for Apache Spark, creates an opportunity to increase workflow reproducibility on different compute infrastructures through the use of containers, reduce the operational effort of maintaining a computing cluster, and increase resource utilization via elastic cloud resource provisioning. This trades off the operational features against the data locality known from traditional systems such as Spark/YARN with data in HDFS. In the proposed architecture of cloud-managed Spark/Kubernetes with data stored in external storage systems such as EOS, Ceph S3, or Kafka, physicists and other CERN communities can spawn and resize Spark/Kubernetes clusters on demand, with fine-grained control of Spark applications. This work focuses on a Kubernetes CRD operator for idiomatically defining and running Apache Spark applications on Kubernetes, with automated scheduling and on-failure resubmission of long-running applications. The Spark Operator was introduced with the design principle of making Spark on Kubernetes easy to deploy, scale, and maintain, with usability similar to Spark/YARN. An analysis of concerns related to non-cluster-local persistent storage and memory handling has been performed. The scalability of the architecture has been evaluated on the use case of a sustained workload, physics data reduction, with files in ROOT format stored in CERN's mass storage, EOS. A series of microbenchmarks has been performed to evaluate the architecture's properties compared to the state-of-the-art Spark/YARN cluster at CERN. Finally, Spark-on-Kubernetes workload use cases have been classified, and possible bottlenecks and requirements identified.
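As a hedged illustration of the external-storage pattern (not the thesis's ROOT/XRootD pipeline), the PySpark job below reads from an S3-compatible store such as Ceph S3 instead of cluster-local HDFS; the endpoint, credentials, bucket, and schema are placeholders:

```python
from pyspark.sql import SparkSession

# Spark with data in external S3-compatible storage rather than HDFS.
spark = (
    SparkSession.builder
    .appName("physics-data-reduction")
    .config("spark.hadoop.fs.s3a.endpoint", "https://s3.example.org")
    .config("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# A data-reduction step: filter and project, then write the reduced dataset back.
events = spark.read.parquet("s3a://physics/events/")  # placeholder dataset
reduced = events.filter(events.energy > 100.0).select("run", "event", "energy")
reduced.write.mode("overwrite").parquet("s3a://physics/reduced/")
```

Giving up HDFS data locality in this way is precisely the trade-off the abstract names: every read crosses the network, which the microbenchmarks against Spark/YARN quantify.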
|
56 |
Evaluation and Improvement of Application Deployment in Hybrid Edge Cloud Environment : Using OpenStack, Kubernetes, and Spinnaker
Jendi, Khaled, January 2020
Traditional mechanisms for deploying different applications can be costly in terms of time and resources, especially when an application requires a specific environment to run in and carries various kinds of dependencies; to set up such an application, an expert is needed to identify all required dependencies. In addition, it is difficult to deploy applications with efficient usage of the resources available in the distributed environment of the cloud, and deploying different projects on the same resources is a challenge. To address this problem, we evaluated different deployment mechanisms using heterogeneous infrastructure-as-a-service (IaaS) offerings, OpenStack and Microsoft Azure. We also used a platform-as-a-service, Kubernetes. Finally, to automate and auto-integrate deployments, we used Spinnaker as the continuous delivery framework. The goal of this thesis work is to evaluate and improve different deployment mechanisms in terms of edge cloud performance. Performance depends on achieving efficient usage of cloud resources, reducing latency, scalability, replication and rolling upgrades, load balancing between data nodes, high availability, and zero downtime for deployed applications. These problems are addressed by designing and deploying an infrastructure and platform in which Kubernetes (PaaS) runs on top of OpenStack (IaaS). In addition, the usage of Docker containers rather than regular virtual machines (container orchestration) has a large impact. The report concludes by demonstrating and discussing the results, along with various test cases regarding the different deployment methods, and presenting the deployment process. It also includes suggestions for developing more reliable and secure deployments in the future with a heterogeneous container orchestration infrastructure.
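As a hedged sketch of one zero-downtime ingredient mentioned above, rolling upgrades, the following uses the official Kubernetes Python client to create a Deployment whose rolling-update strategy never removes a replica before its replacement passes its readiness probe; the names, image, and probe path are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        # Zero-downtime rolling update: add one new pod at a time, never drop
        # below the desired replica count while upgrading.
        strategy=client.V1DeploymentStrategy(
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(
                max_surge=1, max_unavailable=0),
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="demo-app",
                image="registry.example.com/demo-app:1.0",
                # Traffic only shifts once the new pod reports ready.
                readiness_probe=client.V1Probe(
                    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
                    period_seconds=5),
            )]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```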
|
57 |
Enhancing the performance of mobile networks using Kubernetes : Load balancing traffic by utilizing workload estimation / Lastbalansering av trafik i ett Kuberneteskluster med hjälp av arbetsbelastningestimering
Laukka, Lucas; Fransson, Carl, January 2023
As global mobile network usage increases rapidly and users demand lower latency, the importance of stable 5G networks is more critical than ever. One way to orchestrate mobile network backends is by using Kubernetes. Kubernetes allows for automatic restarts and scaling of containers and provides an easy way to route incoming connections to applications running in containers. By routing the incoming connections using different load-balancing algorithms, it is possible to reduce latency through more efficient usage of worker nodes. This thesis aims to identify ways to use load balancing inside a Kubernetes cluster to increase throughput and reduce latency in a mobile network system. We perform a literature study on possible ways to implement load balancing in Kubernetes and possible algorithms to use in the load balancing. Using the study results, we model a simplified mobile network system in a Kubernetes cluster and implement a load balancer at the Service level. By running simulations on this model, we compare three algorithms existing in Kubernetes as well as a dynamic algorithm using estimated workloads in terms of latency and throughput. The existing algorithms that are compared include Round Robin, Least Connections, and Random. The results show a potential to reduce latency by up to 31% compared to the native Random algorithm when utilizing a dynamic load balancer at the Service level.
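The dynamic algorithm's exact workload estimator isn't spelled out in the abstract; the sketch below shows one plausible scheme (an assumption, not the thesis's implementation): route each request to the backend with the lowest product of in-flight requests and an exponentially weighted moving average (EWMA) of observed service time.

```python
import random
import time
from collections import defaultdict

class WorkloadEstimatingBalancer:
    """Pick the backend with the lowest estimated workload (sketch)."""

    def __init__(self, backends, alpha=0.2):
        self.backends = backends
        self.alpha = alpha                      # EWMA smoothing factor
        self.in_flight = defaultdict(int)       # requests currently being served
        self.ewma_latency = {b: 0.001 for b in backends}  # optimistic prior (s)

    def pick(self):
        # Estimated workload = queued work * typical service time.
        return min(self.backends,
                   key=lambda b: self.in_flight[b] * self.ewma_latency[b])

    def start(self, backend):
        self.in_flight[backend] += 1
        return time.monotonic()

    def finish(self, backend, started):
        self.in_flight[backend] -= 1
        observed = time.monotonic() - started
        old = self.ewma_latency[backend]
        self.ewma_latency[backend] = (1 - self.alpha) * old + self.alpha * observed

# Usage sketch
lb = WorkloadEstimatingBalancer(["pod-a", "pod-b", "pod-c"])
backend = lb.pick()
t0 = lb.start(backend)
time.sleep(random.uniform(0.001, 0.005))  # stand-in for handling a request
lb.finish(backend, t0)
```

Unlike the static Round Robin, Least Connections, and Random policies compared in the thesis, a scheme like this adapts to heterogeneous worker nodes, which is the intuition behind the reported latency gains.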
|
58 |
Tracing Control with Linux Tracing Toolkit, next generation in a Containerized Environment
Ravi, Vikhram, January 2021
5G is becoming reality, with companies rolling out the technology around the world. In 5G, the Radio Access Network (RAN) is moving from a monolithic architecture to a cloud-based microservice architecture in order to simplify deployment and manageability and to explore scalability and flexibility. Thus, the transition of functionality from proprietary hardware-based systems to more distributed and flexible virtualized systems is ongoing. In such systems, legacy methods of performance monitoring remain relevant, and system tracing plays an important role: it is essential for performance analysis of any given system. However, current tools were designed with monolithic architectures in mind, so for new distributed architectures, new tracing tools need to be developed.

System tracing often requires special permissions to be executed in applications running in a virtualized third-party environment. Unfortunately, not all applications running in a distributed virtualized environment can be given such special access without risking the security and stability of the system. Yet tracing data still needs to be collected from applications running in such environments. This thesis addresses the challenge of remotely configuring and controlling a system tracing tool, using LTTng as the example, for applications that run as part of a distributed virtualized environment with Kubernetes. We explore the problem of remotely controlling and configuring system tracing as well as optimizing data collection. The main outcome is a tool able to remotely control and configure system tracing tools, together with a proof-of-concept and working demos for basic system tracing commands.

It was discovered that a relay-based solution can be exposed outside the cluster via a NodePort, which can relay incoming requests onwards to any number of microservices. However, discovery of the microservices that are running system tracing tools is critical, so service discovery mechanisms were introduced to the system for the purpose of discovering microservices with system tracing tools. Tracing data that is saved locally can be extracted by the user through the relay-based solution or sent directly to any remote system using the LTTng relay daemon functionality. The response times of commands executed directly in a bash shell and through the remote CLI were compared: overall, the response time of both Linux and LTTng commands sent through the remote CLI is 1.96 times longer than executing the same commands directly in a bash shell. This is because commands are sent over the network within the Kubernetes cluster, which is the cost of being able to remotely control and configure system tracing tools. That said, there are still many steps that can be taken to improve the solution and develop a more production-ready version.
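The thesis's remote CLI is not reproduced here; below is a hedged sketch of the relay idea: a small HTTP service (Flask is an assumption) that runs a whitelist of LTTng commands on the pod it lives in and returns their output. The command names follow the standard lttng CLI, while the endpoint layout and session handling are hypothetical:

```python
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)

# Whitelist of LTTng actions the relay will run; anything else is rejected so
# arbitrary commands cannot be injected from outside the cluster.
ALLOWED = {
    "create":  lambda s: ["lttng", "create", s],
    "enable":  lambda s: ["lttng", "enable-event", "--kernel", "--session", s, "sched_switch"],
    "start":   lambda s: ["lttng", "start", s],
    "stop":    lambda s: ["lttng", "stop", s],
    "destroy": lambda s: ["lttng", "destroy", s],
}

@app.route("/trace/<action>", methods=["POST"])
def trace(action):
    if action not in ALLOWED:
        return jsonify(error="unknown action"), 400
    payload = request.get_json(silent=True) or {}
    session = payload.get("session", "remote-session")
    result = subprocess.run(ALLOWED[action](session),
                            capture_output=True, text=True)
    return jsonify(returncode=result.returncode,
                   stdout=result.stdout, stderr=result.stderr)

if __name__ == "__main__":
    # In the deployment described above, this would sit behind a NodePort
    # Service and fan requests out to discovered tracing-enabled pods.
    app.run(host="0.0.0.0", port=8080)
```

The extra network hop through such a relay is exactly where the reported 1.96x response-time overhead comes from.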
|
59 |
Autonomic Management and Orchestration Strategies in MEC-Enabled 5G Networks
Subramanya, Tejas, 26 October 2021
5G and beyond mobile network technology promises to deliver unprecedented ultra-low latency and high data rates, paving the way for many novel applications and services. Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC) are two technologies expected to play a vital role in achieving the ambitious Quality of Service requirements of such applications. While NFV provides flexibility by enabling network functions to be dynamically deployed and inter-connected to realize Service Function Chains (SFC), MEC brings computing capability to the mobile network's edges, thus reducing latency and alleviating the transport network load. However, adequate mechanisms are needed to meet dynamically changing network service demands (i.e., in single and multiple domains) and to optimally utilize network resources while ensuring that the end-to-end latency requirement of services is always satisfied. In this dissertation work, we break the problem into three separate stages and present solutions for each of them.

Firstly, we apply Artificial Intelligence (AI) techniques to drive NFV resource orchestration in MEC-enabled 5G architectures for single- and multi-domain scenarios. We propose three deep learning approaches to perform horizontal and vertical Virtual Network Function (VNF) auto-scaling: (i) Multilayer Perceptron (MLP) classification and regression (single-domain), (ii) centralized Artificial Neural Network (ANN), centralized Long Short-Term Memory (LSTM) and centralized Convolutional Neural Network-LSTM (CNN-LSTM) (single-domain), and (iii) federated ANN, federated LSTM and federated CNN-LSTM (multi-domain). We evaluate the performance of each of these deep learning models trained over a commercial network operator dataset and investigate the pros and cons of the different approaches for VNF auto-scaling. For the first approach, our results show that both the MLP classifier and MLP regressor models have strong predictive capability for auto-scaling, with the MLP regressor outperforming the MLP classifier in terms of accuracy. For the second approach with one-step prediction, CNN-LSTM performs best for the QoS-prioritized objective and LSTM performs best for the cost-prioritized objective; with multi-step prediction, the encoder-decoder CNN-LSTM model outperforms the encoder-decoder LSTM model for both QoS- and cost-prioritized objectives. For the third approach, both the federated LSTM and federated CNN-LSTM models perform better than the federated ANN model. It was also noted that, in general, federated learning approaches perform worse than centralized learning approaches.

Secondly, we employ Integer Linear Programming (ILP) techniques to formulate and solve a joint user association and SFC placement problem, where each SFC represents a service requested by a user with end-to-end latency and data rate requirements. We also develop a comprehensive end-to-end latency model considering radio delay, backhaul network delay, and SFC processing delay for 5G mobile networks. We evaluated the proposed model using simulations based on a real operator's network topology and real-world latency values. Our results show that the average end-to-end latency is reduced significantly when SFCs are placed at the ME hosts according to their latency and data rate demands. Furthermore, we propose a heuristic algorithm to address the issue of scalability in ILP that can solve the above association/mapping problem in seconds rather than hours.

Finally, we introduce lightMEC, a lightweight MEC platform for deploying mobile edge computing functionalities, which allows hosting of low-latency and bandwidth-intensive applications at the network edge. Measurements conducted in a real-life test demonstrated that lightMEC can support practical MEC applications without requiring any change to the functionality of existing mobile network nodes in the access and core network segments. The significant benefits of adopting the proposed architecture are analyzed based on a proof-of-concept demonstration of the content caching use case. Furthermore, we introduce the AI-driven Kubernetes orchestration prototype that we implemented by leveraging the lightMEC platform and assess the performance of the proposed deep learning models (from the first stage) in an experimental setup. The prototype evaluations confirm the simulation results achieved in the first stage of the thesis.
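As a hedged illustration of the one-step prediction idea (not the thesis's architecture, hyperparameters, or dataset), a minimal Keras LSTM that forecasts the next load sample for a VNF and triggers scale-out above a threshold:

```python
import numpy as np
import tensorflow as tf

# Sliding-window one-step prediction: given the last WINDOW load samples,
# predict the next one. Window size, layer sizes, and the scale-out threshold
# are illustrative placeholders.
WINDOW = 12

def make_windows(series, window=WINDOW):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., np.newaxis], series[window:]

# Synthetic daily-periodic CPU-load series standing in for the operator dataset.
t = np.arange(2000, dtype=np.float32)
load = (0.5 + 0.3 * np.sin(2 * np.pi * t / 288)
        + 0.05 * np.random.randn(2000)).astype(np.float32)

X, y = make_windows(load)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

predicted = model.predict(X[-1:], verbose=0)[0, 0]
if predicted > 0.8:  # illustrative scale-out threshold
    print("predicted load", predicted, "-> scale out")
```

The centralized CNN-LSTM variant would prepend convolutional layers to this stack, and the federated variants would train copies of it per domain and aggregate the weights.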
|
60 |
Emerging Paradigms in the Convergence of Cloud and High-Performance Computing
Araújo De Medeiros, Daniel, January 2023
Traditional HPC scientific workloads are tightly coupled, while emerging scientific workflows exhibit even more complex patterns, consisting of multiple characteristically different stages that may be IO-intensive, compute-intensive, or memory-intensive. New high-performance computer systems are evolving to adapt to these new requirements and are motivated by the need for performance and efficiency in resource usage. On the other hand, cloud workloads are loosely coupled, and their systems have matured technologies under different constraints from HPC. In this thesis, the use of cloud technologies designed for loosely coupled dynamic and elastic workloads is explored, repurposed, and examined in the landscape of HPC in three major parts. The first part deals with the deployment of HPC workloads in cloud-native environments through the use of containers and analyses the feasibility and trade-offs of elastic scaling. The second part relates to the use of workflow management systems in HPC workflows; in particular, a molecular docking workflow executed through Airflow is discussed. Finally, object storage systems, a cost-effective and scalable solution widely used in the cloud, and their usage in HPC applications through MPI I/O are discussed in the third part of this thesis.
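The molecular docking workflow itself isn't detailed in the abstract; as a hedged sketch, here is a minimal Airflow DAG (Airflow 2.4+ assumed) with three hypothetical stages, where the task commands and file names are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A sketch of a docking pipeline as an Airflow DAG: prepare ligands, run the
# docking engine, then score and rank the poses.
with DAG(
    dag_id="molecular_docking",
    start_date=datetime(2023, 1, 1),
    schedule=None,   # triggered manually per ligand batch
    catchup=False,
) as dag:
    prepare = BashOperator(
        task_id="prepare_ligands",
        bash_command="python prepare_ligands.py --input ligands.smi",
    )
    dock = BashOperator(
        task_id="dock",
        bash_command="dock_binary --receptor receptor.pdbqt --ligands prepared/",
    )
    score = BashOperator(
        task_id="score_and_rank",
        bash_command="python rank_poses.py --poses docked/ --top 100",
    )
    prepare >> dock >> score  # linear dependency chain
```

Expressing the stages this way is what lets a cloud workflow manager schedule, retry, and monitor each step independently, the property the thesis examines for HPC workflows.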
|