  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Aplikace pro monitorování sítí / Application for Monitoring of IP Networks

Šmalec, Ondřej January 2019 (has links)
This master's thesis describes the development of an application for monitoring network devices. The results are presented in a graphical user interface together with a rendered network topology. The application is written largely in Python, and the SNMP and SSH protocols are used to gather information from the topology. The main goal is to create an application that monitors network devices and renders this topology in a graphical user interface. The application reacts dynamically to changes in the monitored topology.
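To illustrate the kind of SNMP-based data collection this thesis describes, here is a minimal Python sketch using the pysnmp library. It is only an assumed example of polling one device attribute; the host address, community string, and queried OID are placeholders, not values from the thesis.

    # Minimal SNMP polling sketch (assumes the pysnmp 4.x hlapi API; host and community are placeholders).
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    def get_sysname(host, community="public"):
        # Query the device's sysName (SNMPv2-MIB) and return it as a string.
        error_indication, error_status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData(community),
            UdpTransportTarget((host, 161), timeout=2, retries=1),
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysName", 0))))
        if error_indication or error_status:
            raise RuntimeError(f"SNMP query failed: {error_indication or error_status}")
        return str(var_binds[0][1])

    if __name__ == "__main__":
        print(get_sysname("192.0.2.1"))

A monitoring application like the one described would run such queries periodically against each discovered device and feed the results into the topology view.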
42

Aplikace platformy OpenShift pro testování studentských projektů / Application for OpenShift Platform for Testing of Student Projects

Országh, Marián January 2020 (has links)
The goal of this thesis is to design a service for requirements-based automated testing of student programming projects and then to implement this service using OpenShift, Python and Git. Creating such a service lays the foundation for a unified process of testing student projects that runs test suites in isolated Linux containers. The improved testing process should simplify grading for teachers and also improve students' results on these assignments. The thesis explains the basics of software testing with a focus on requirements-based testing, provides an insight into container technology, and clarifies how these topics were incorporated into the design of the service and how their use was reflected in its requirements. In addition, the implementation of the service is analysed in detail so that it can serve as reference material for any future extensions. The implemented service is able to perform the basic operations, including parallel testing of student projects in isolated containers, creation of a containerized debugging environment, and automatic building of a container image for a particular assignment.
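As a rough illustration of the workflow described above (check out a student project and run its test suite in an isolated container), here is a hedged Python sketch. It substitutes a plain docker run for the OpenShift machinery the thesis actually uses, and the image name, test command and paths are assumptions.

    # Simplified sketch of per-project containerized testing (not the thesis implementation).
    import subprocess, tempfile

    def test_student_project(repo_url, image="python:3.11", test_cmd="pytest -q"):
        with tempfile.TemporaryDirectory() as workdir:
            # Fetch the student's repository.
            subprocess.run(["git", "clone", "--depth", "1", repo_url, workdir], check=True)
            # Run the test suite inside an isolated Linux container with the project mounted read-only.
            result = subprocess.run(
                ["docker", "run", "--rm", "-v", f"{workdir}:/project:ro", "-w", "/project",
                 image, "sh", "-c", test_cmd],
                capture_output=True, text=True)
            return result.returncode == 0, result.stdout + result.stderr

Running many such jobs in parallel, one container per submission, is what makes the grading process uniform and reproducible.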
43

Elasticity of Elasticsearch

Tsaousi, Kleivi Dimitris January 2021 (has links)
Elasticsearch has evolved from an experimental, open-source NoSQL database for full-text documents into an easily scalable search engine that can handle a large number of documents. This evolution has enabled companies to deploy Elasticsearch as an internal search engine for information retrieval (logs, documents, etc.). Later it was offered as a cloud service, and the latest development allows a containerized, serverless deployment of the application using Docker and Kubernetes. This research examines the behaviour of the system by comparing single-term and multiple-term queries, the scaling behaviour of the cluster, and the security of the service. The application is deployed on Google Cloud Platform as a Kubernetes cluster hosting containerized Elasticsearch images that work as database nodes of a larger database cluster. As input data, a collection of JSON-formatted documents containing the title and abstract of published papers in the field of computer science was used inside a single index. All plots were produced with the Kibana visualization software. The results showed that multiple-term queries put more stress on the system than single-term queries, and that the number of simultaneous users querying the system is a major factor affecting its behaviour. Scaling up the number of Elasticsearch nodes inside the cluster indicated that more simultaneous requests could be served by the system.
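The single-term versus multiple-term comparison can be pictured with a short Python sketch using the official Elasticsearch client. This assumes an 8.x-style client, a locally reachable cluster, and an index and field layout similar to the one described; none of these details come from the thesis itself.

    # Sketch of single-term vs. multi-term match queries with the elasticsearch-py client (assumed setup).
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    def search_abstracts(terms):
        # A match query on the abstract field; more terms generally mean more work per request.
        return es.search(index="papers", query={"match": {"abstract": terms}}, size=10)

    single = search_abstracts("kubernetes")
    multi = search_abstracts("kubernetes container orchestration scalability")
    print(single["hits"]["total"], multi["hits"]["total"])

Issuing many such requests concurrently, while varying the number of Elasticsearch data nodes, is the essence of the load experiments summarised above.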
44

Model-driven development for Microservices : A domain-specific modeling language for Kubernetes

Johansson, Daniel January 2022 (has links)
In the digital age we live in today, we depend on numerous web applications and services for tasks ranging from banking and booking flights to handling our taxes. We expect these applications and services to provide high availability, data loss prevention, and fast response times. Microservices is a design pattern that supports faster software change, and it also supports other non-functional attributes such as scalability and high availability. One way to deploy software as microservices is to use containers and run them on a container cluster such as Kubernetes. Writing Kubernetes deployment files is widely regarded as complex and repetitive. This project examines how model-driven development can assist with the creation of Kubernetes deployment files. The project implements a domain-specific modeling language for Kubernetes that can model the application's desired state, and by using model transformation the tool can generate deployable Kubernetes files.
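The core idea, a model transformed into a Kubernetes manifest, can be sketched in a few lines of Python. This is a toy model-to-text transformation, not the thesis's modeling language: the model fields, image name and port are illustrative assumptions.

    # Toy model-to-text transformation: a small application model rendered as a Kubernetes Deployment.
    import yaml

    model = {"name": "orders", "image": "registry.example.com/orders:1.0", "replicas": 3, "port": 8080}

    def to_deployment(m):
        # Map the abstract model onto the concrete Deployment schema.
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": m["name"]},
            "spec": {
                "replicas": m["replicas"],
                "selector": {"matchLabels": {"app": m["name"]}},
                "template": {
                    "metadata": {"labels": {"app": m["name"]}},
                    "spec": {"containers": [{
                        "name": m["name"],
                        "image": m["image"],
                        "ports": [{"containerPort": m["port"]}],
                    }]},
                },
            },
        }

    print(yaml.safe_dump(to_deployment(model), sort_keys=False))

The appeal of the model-driven approach is that the repetitive boilerplate (selectors, labels, nesting) lives once in the transformation rather than in every hand-written deployment file.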
45

Framework to set up a generic environment for applications / Ramverk för uppsättning av generisk miljö för applikationer

Das, Ruben January 2021 (has links)
Infrastructure is a common word used to express the basic equipment and structures that are needed, e.g. for a country or organisation to function properly. The same concept applies in the field of computer science: without infrastructure one would have problems operating software at scale. Provisioning and maintaining infrastructure through manual labour is a common occurrence in the "iron age" of IT. As the world progresses towards the "cloud age" of IT, systems are decoupled from physical hardware, enabling anyone who is software savvy to automate the provisioning and maintenance of infrastructure. This study aims to determine how a generic environment can be created for applications that run on Unix platforms and how that underlying infrastructure can be provisioned effectively. The results show that by utilising OS-level virtualisation, also known as "containers", one can deploy and serve any application that can use the Linux kernel in the sense that is needed. To further support realising the generic environment, hardware virtualisation was applied to provide the infrastructure needed to use containers. This was done by provisioning a set of virtual machines on different cloud providers with a lightweight operating system that could support the required container runtime. To manage these containers at scale, a container orchestration tool was installed onto the cluster of virtual machines. To provision this environment effectively, the principles of infrastructure as code (IaC) were used to create a "blueprint" of the desired infrastructure. Using the metric mean time to environment (MTTE), it was noted that a cluster of virtual machines with a container orchestration tool installed onto it could be provisioned in under 10 minutes on four different cloud providers.
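A minimal sketch of how MTTE could be measured around an infrastructure-as-code run is shown below. The specific commands (a Terraform apply followed by an orchestrator install script) are placeholder assumptions standing in for whatever blueprint and cloud provider is used; only the timing pattern is the point.

    # Hedged sketch: time the provisioning steps of an IaC run to obtain mean time to environment (MTTE).
    import subprocess, time

    STEPS = [
        ["terraform", "apply", "-auto-approve"],   # provision the virtual machines (placeholder)
        ["./install-orchestrator.sh"],             # install the container orchestration tool (placeholder)
    ]

    def provision_and_time():
        start = time.monotonic()
        for cmd in STEPS:
            subprocess.run(cmd, check=True)        # fail fast if any provisioning step breaks
        return time.monotonic() - start

    if __name__ == "__main__":
        print(f"MTTE: {provision_and_time():.1f} s")

Repeating the run per cloud provider and averaging the elapsed times gives the per-provider MTTE figures the study compares.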
46

Designing an AI-driven System at Scale for Detection of Abusive Head Trauma using Domain Modeling

January 2020 (has links)
Traumatic injuries are the leading cause of death in children under 18, with head trauma being the leading cause of death in children below 5. A large but unknown number of traumatic injuries are non-accidental, i.e. inflicted. The lack of sensitivity and specificity required to diagnose Abusive Head Trauma (AHT) from radiological studies puts these children at risk of re-injury and death. Modern deep learning techniques can be utilized to detect Abusive Head Trauma using Computed Tomography (CT) scans. Training models with these techniques is only a part of building AI-driven computer-aided diagnostic systems; there are also challenges in deploying the models to make them highly available and scalable. The thesis models the domain of Abusive Head Trauma using deep learning techniques and builds an AI-driven system at scale using software engineering best practices. It was done in collaboration with Phoenix Children's Hospital (PCH). The thesis breaks AHT down into the sub-domains of medical knowledge, data collection, data pre-processing, image generation, image classification, building APIs, containers, and Kubernetes. Data collection and pre-processing were done at PCH with the help of trauma researchers and radiologists. Experiments are run using deep learning models such as DCGAN (for image generation) and pretrained 2D and custom 3D CNN classifiers for the classification tasks. The trained models are exposed as APIs using the Flask web framework, containerized using Docker, and deployed on a Kubernetes cluster. The results are analyzed based on the accuracy of the models, the feasibility of their implementation as APIs, and load testing of the Kubernetes cluster. They suggest the need for data annotation at the slice level for CT scans and an expansion of the data collection process. Load testing reveals the auto-scaling capability of the cluster when serving a high number of requests. / Dissertation/Thesis / Masters Thesis Software Engineering 2020
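The "expose the trained model as an API" step mentioned above can be sketched with Flask, which the thesis names. This is a minimal, assumed example: the endpoint name, payload format and the model's predict call are placeholders, not the actual PCH system.

    # Minimal Flask sketch of serving a trained classifier behind an HTTP endpoint (assumed layout).
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    model = None  # placeholder: load the trained CNN here, e.g. when the container starts

    @app.route("/predict", methods=["POST"])
    def predict():
        scan = request.get_json()              # e.g. preprocessed CT-scan features sent by the client
        score = model.predict(scan) if model else 0.0
        return jsonify({"aht_probability": float(score)})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)     # packaged in a Docker image, scaled by Kubernetes

Running several replicas of such a container behind a Kubernetes Service is what the load tests exercise when they probe the cluster's auto-scaling behaviour.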
47

Scalability of Kubernetes Running Over AWS - A Performance Study while deploying CPU intensive application containers

MOGALLAPU, RAJA January 2019 (has links)
Background: Nowadays many companies enjoy the benefits of Kubernetes by running their containerized applications on it. AWS is one of the leading cloud computing service providers, and many well-known companies are its clients. Much research has been conducted on Kubernetes, Docker containers and cloud computing platforms, but confusion remains about how best to deploy applications in Kubernetes. In particular, there is a research gap concerning the impact of CPU limits and requests when deploying applications in Kubernetes. This thesis therefore analyzes the performance of a CPU-intensive containerized application, which can help companies avoid this confusion when deploying their applications on Kubernetes. Objectives: We measure the scalability of Kubernetes under a CPU-intensive containerized application running on AWS and study the impact of changing CPU limits and requests in the deployment. Methods: We use a blend of literature study and experimentation to conduct the research. Results and Conclusion: The experiments show that the application performs better when the CPU limit is set higher than the CPU request than when equal CPU requests and limits are specified in the deployment file. CPU metrics collected from SAR and the Kubernetes metrics server are similar. For better performance, it is preferable to allocate pods with a CPU limit higher than the CPU request rather than with equal requests and limits. Keywords: Kubernetes, CPU intensive containerized application, AWS, Stress-ng.
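The distinction between CPU requests and limits discussed above shows up as two fields in the pod spec. The hedged Python sketch below builds such a deployment with the official Kubernetes client; the names, namespace, image and concrete values are illustrative assumptions rather than the thesis configuration.

    # Sketch of setting CPU requests and limits with the official Kubernetes Python client (assumed values).
    from kubernetes import client, config

    config.load_kube_config()

    resources = client.V1ResourceRequirements(
        requests={"cpu": "250m"},   # lower request ...
        limits={"cpu": "1"},        # ... with a higher limit: the combination reported to perform better
    )
    container = client.V1Container(name="stress", image="alexeiled/stress-ng", resources=resources)
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "stress"}),
        spec=client.V1PodSpec(containers=[container]))
    deployment = client.V1Deployment(
        api_version="apps/v1", kind="Deployment",
        metadata=client.V1ObjectMeta(name="stress-test"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "stress"}),
            template=template))

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

The stress-ng image name is an assumption; the thesis keywords only indicate that Stress-ng was used as the CPU-intensive workload.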
48

Performance evaluation of wireguard in kubernetes cluster

Gunda, Pavan, Voleti, Sri Datta January 2021 (has links)
Containerization has gained popularity for deploying applications in a lightweight environment. Kubernetes and Docker have gained a lot of dominance for scalable deployments of applications in containers. Usually, Kubernetes clusters are deployed within a single shared network. For high availability of the application, multiple Kubernetes clusters are deployed in multiple regions, due to which the number of Kubernetes clusters keeps on increasing over time. Maintaining and managing multiple Kubernetes clusters is a challenging and time-consuming process for system administrators or DevOps engineers. These issues can be addressed by deploying a Kubernetes cluster in a multi-region environment. A multi-region Kubernetes deployment reduces the hassle of handling multiple Kubernetes masters by having only one master with worker nodes spread across multiple regions. In this thesis, we investigated a multi-region Kubernetes cluster's network performance by deploying a multi-region Kubernetes cluster with worker nodes across multiple OpenStack regions, tunneled using WireGuard (a VPN protocol). A literature review on the common factors that influence network performance in a multi-region deployment is conducted for the network performance metrics. Then, we compared the request-response time of this multi-region Kubernetes cluster with the regular Kubernetes cluster to evaluate the performance of the deployed multi-region Kubernetes cluster. The results obtained show that a Kubernetes cluster with worker nodes in a single shared network has an average request-response time of 2 ms. In contrast, the Kubernetes cluster with worker nodes in different OpenStack projects and regions has an average request-response time of 14.804 ms. This thesis aims to provide a performance comparison of the Kubernetes cluster with and without WireGuard, factors affecting the performance, and an in-depth understanding of concepts related to Kubernetes and WireGuard.
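The request-response measurement behind the 2 ms versus 14.804 ms figures can be approximated with a short Python sketch: issue repeated HTTP requests against a service exposed by each cluster and average the round-trip time. The URL and sample count below are assumptions, not the thesis setup.

    # Sketch of averaging request-response time against a cluster-exposed endpoint (placeholder URL).
    import time
    import requests

    def average_response_time(url, n=100):
        samples = []
        for _ in range(n):
            start = time.perf_counter()
            requests.get(url, timeout=5)
            samples.append((time.perf_counter() - start) * 1000.0)  # milliseconds
        return sum(samples) / len(samples)

    if __name__ == "__main__":
        print(f"avg: {average_response_time('http://cluster.example.com/ping'):.3f} ms")

Running the same measurement once against the single-network cluster and once against the WireGuard-tunneled multi-region cluster yields the comparison reported above.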
49

Comparing various methods for improving resource allocation on a single node cluster in Kubernetes

Sopi, Abaied, Andrei, Plotoaga January 2023 (has links)
When dealing with latency-critical applications in Kubernetes, a common strategy is to over-allocate resources to ensure the application can meet its latency guarantees during traffic surges. However, this practice often leads to resource underutilization, as the application will not fully utilize its reserved resources, and the Kubernetes scheduler cannot initiate new workloads on the node because of the perceived full resource utilization. This study explored the utility of two existing methods, CRI Resource Manager (CRI-RM), which we configured to use the 'balloon policy', and the Vertical Pod Autoscaler (VPA), in addressing resource underutilization problems on single-node Kubernetes clusters while maintaining the latency guarantees of certain pods. Utilizing tc-sim, a network traffic simulator, we deployed four latency-critical and two non-latency-critical pods, all subject to over-allocation. Our findings reveal that VPA was ineffective in detecting and addressing the underutilization of resources because of its slow response in adjusting requests inside the pods; moreover, it worsened the underutilization issues of the node. Our configuration of the 'balloon policy' failed to detect the over-allocation issues and further led to performance degradation in the simulator, potentially due to the overhead introduced by CRI-RM. These results underscore the intricacy of over-allocation challenges in latency-critical applications, emphasizing the need for purpose-designed solutions that enable quick and dynamic exchange of resources between pods.
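To make the underutilization problem concrete, the hedged sketch below compares each pod's CPU request with its live usage from the metrics API. It assumes metrics-server is installed, and the namespace and "underutilized" threshold are arbitrary; it is not part of the study's tooling.

    # Hedged sketch: flag pods whose CPU usage is far below their request (possible over-allocation).
    from kubernetes import client, config

    def cpu_to_millicores(value):
        # Kubernetes CPU quantities: "250m" -> 250, "1" -> 1000, "12345n" -> nanocores.
        if value.endswith("n"):
            return int(value[:-1]) / 1_000_000
        if value.endswith("m"):
            return float(value[:-1])
        return float(value) * 1000

    config.load_kube_config()
    core = client.CoreV1Api()
    metrics = client.CustomObjectsApi().list_namespaced_custom_object(
        "metrics.k8s.io", "v1beta1", "default", "pods")
    usage = {m["metadata"]["name"]: sum(cpu_to_millicores(c["usage"]["cpu"]) for c in m["containers"])
             for m in metrics["items"]}

    for pod in core.list_namespaced_pod("default").items:
        requested = sum(cpu_to_millicores(c.resources.requests.get("cpu", "0"))
                        for c in pod.spec.containers if c.resources and c.resources.requests)
        used = usage.get(pod.metadata.name, 0.0)
        if requested and used < 0.5 * requested:   # assumed threshold for "underutilized"
            print(f"{pod.metadata.name}: requested {requested:.0f}m, using {used:.0f}m")

The gap this script surfaces is exactly what VPA and the CRI-RM balloon policy are meant to reclaim, and what the study found them slow or unable to reclaim in practice.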
50

Evaluating machine learning strategies for classification of large-scale Kubernetes cluster logs

Sarika, Pawan January 2022 (has links)
Kubernetes is a free, open-source container orchestration system for deploying and managing Docker containers that host microservices. Its cluster logs are extremely helpful in determining the root cause of a failure. However, as systems become more complex, locating failures becomes more difficult and time-consuming. This study aims to identify the classification algorithms that accurately classify the given log data while requiring fewer computational resources. Because the data is quite large, we begin with expert-based feature selection to reduce the data size. Following that, TF-IDF feature extraction is performed, and finally we compare five classification algorithms, SVM, KNN, random forest, gradient boosting and MLP, using several metrics. The results show that random forest produces good accuracy while requiring fewer computational resources than the other algorithms.
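The TF-IDF-plus-classifier pipeline described above maps directly onto scikit-learn. The sketch below uses a handful of made-up log lines and labels as placeholders for the real cluster-log dataset; only the pipeline structure reflects the methodology.

    # Sketch of TF-IDF feature extraction followed by a random forest classifier (toy data, assumed labels).
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    logs = [
        "Failed to pull image from registry",
        "Back-off pulling image for container",
        "Liveness probe failed with timeout",
        "Readiness probe failed: connection refused",
        "Successfully assigned pod to node",
        "Started container successfully",
    ]
    labels = ["image_error", "image_error", "probe_error", "probe_error", "ok", "ok"]

    X_train, X_test, y_train, y_test = train_test_split(logs, labels, test_size=0.33, random_state=42)
    model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100, random_state=42))
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

Swapping RandomForestClassifier for SVM, KNN, gradient boosting or an MLP inside the same pipeline is how the five algorithms can be compared on identical features.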
