  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Semantic Segmentation of Iron Ore Pellets in the Cloud

Lindberg, Hampus January 2021 (has links)
This master's thesis evaluates data annotation, semantic segmentation and Docker for use in AWS. The provided data has to be annotated and is to be used as a dataset for training a neural network. Different neural network models are then compared based on performance. AWS offers the option to use Docker containers, so that option is examined, and lastly the tools available in AWS SageMaker are analyzed for bringing a neural network to the cloud. Images were annotated in Ilastik, yielding a dataset of 276 images. A neural network was then created in PyTorch using the Segmentation Models PyTorch library, which made it possible to try different models. The network was initially developed in a Google Colab notebook for a quick setup and easy testing. The dataset was uploaded to AWS S3, and the notebook was moved from Colab to an AWS instance where the dataset could be loaded from S3. A Docker container was created and packaged with the necessary packages and libraries as well as the training and inference code, and then pushed to the ECR (Elastic Container Registry). This container could then be used to run training jobs in SageMaker, resulting in a trained model stored in S3; the hyperparameter tuning tool was also examined to obtain a better-performing model. The two deployment methods in SageMaker were then investigated to understand the entire machine learning solution. The images annotated in Ilastik were deemed sufficient, as the neural network results were satisfactory. The created network was able to use all of the models accessible from Segmentation Models PyTorch, which enabled a lot of options. By using a Docker container, all of the tools available in SageMaker could be used with the created neural network packaged in the container and pushed to the ECR. Training jobs were run in SageMaker using the container to obtain a trained model, which could be saved to AWS S3.
Hyperparameter tuning produced better results than the manually tested parameters, yielding the best neural network produced. The model deemed best was Unet++ in combination with the Dpn98 encoder. The two deployment methods in SageMaker were explored; each is believed to be beneficial in different ways, so the choice has to be reconsidered for each project. Based on the analysis, the cloud solution was deemed the better alternative compared to an in-house solution in all three aspects measured: price, performance and scalability.
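Model comparisons like the Unet++/Dpn98 choice above are typically ranked with a segmentation metric such as intersection-over-union (IoU). The thesis does not reproduce its evaluation code, so the following is only a minimal sketch of per-mask IoU on binary masks:

```python
def iou(pred, target):
    """Intersection-over-union for two binary masks given as flat 0/1 lists."""
    intersection = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return intersection / union if union else 1.0  # two empty masks count as a perfect match

# Example: two 2x2 masks flattened to length-4 lists.
pred = [1, 1, 0, 0]
target = [1, 0, 0, 1]
print(iou(pred, target))  # 1 overlapping pixel out of 3 in the union
```

In practice the score would be averaged over all validation images and classes when comparing encoder/decoder combinations.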
62

Elasticity of Elasticsearch

Tsaousi, Kleivi Dimitris January 2021 (has links)
Elasticsearch has evolved from an experimental, open-source, NoSQL database for full-text documents to an easily scalable search engine that can handle a large amount of documents. This evolution has enabled companies to deploy Elasticsearch as an internal search engine for information retrieval (logs, documents, etc.). Later on, it was transformed into a cloud service, and the latest development allows a containerized, serverless deployment of the application using Docker and Kubernetes. This research examines the behaviour of the system by comparing the length and appearance of single-term and multiple-term queries, the scaling behaviour and the security of the service. The application is deployed on Google Cloud Platform as a Kubernetes cluster hosting containerized Elasticsearch images that work as database nodes of a bigger database cluster. As input data, a collection of JSON-formatted documents containing the title and abstract of published papers in the field of computer science was used inside a single index. All the plots were extracted using the Kibana visualization software. The results showed that multiple-term queries put a bigger stress on the system than single-term queries. The number of simultaneous users querying the system is also a big factor affecting its behaviour. Scaling up the number of Elasticsearch nodes inside the cluster indicated that more simultaneous requests could be served by the system.
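For context, a single-term and a multiple-term query in Elasticsearch's query DSL differ only in the request body. A hedged sketch follows; the index and field names are hypothetical, not taken from the thesis:

```python
# Single-term full-text query against one field.
single_term = {"query": {"match": {"abstract": "docker"}}}

# Multiple-term query spread across several fields; queries of this shape
# are the kind the thesis found to put more stress on the cluster.
multi_term = {
    "query": {
        "multi_match": {
            "query": "docker kubernetes elasticsearch",
            "fields": ["title", "abstract"],
        }
    }
}

# With the official Python client these bodies would be sent as, e.g.:
#   es.search(index="papers", body=single_term)
print(single_term)
print(multi_term)
```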
63

Využití virtualizace v podnikovém prostředí / Using Virtualization in the Enterprise Environment

Bartík, Branislav January 2016 (has links)
This diploma thesis proposes a solution for the fictitious company XYZ s.r.o. to save costs by building a training environment for its employees, with the aim of developing their skills and experience in the given field. The solution can also be used by university staff and students to test enterprise software for educational purposes. The author highlights the benefits of using cloud computing and open-source software, as well as the use of Docker container technology in combination with commercial software such as IBM WebSphere Application Server.
64

Virtualizace operačních systémů / Virtualization of operating systems

Král, Jan January 2016 (has links)
The diploma thesis 'Virtualization of Operating Systems' deals with a general description of virtualization technology and briefly discusses its use cases and advantages. The thesis also mentions examples of different types of virtualization technologies and tools, including a more thorough description of the two technologies used for measurement: Docker and KVM. The second part of the thesis describes the preparation, installation and configuration of all the tools and services necessary to measure the influence of the aforementioned virtualization technologies on network services running on the virtual machines, including analysis and discussion of the resulting data. Moreover, a custom application for fully automated measurement of the parameters of network services was created and is also described in this thesis. The conclusion summarizes and discusses the achieved results and confirms the influence of virtualization on network services, where the Docker application containers, whose low overhead is comparable to a "bare" system without any virtualization, achieved much better performance results than the traditional virtual machines on KVM.
65

Testování aplikací s využitím Linuxových kontejnerů / Testing Applications Using Linux Containers

Marhefka, Matúš January 2016 (has links)
This thesis discusses software containers (Docker containers in particular) as a variant of server virtualization. Instead of virtualizing hardware, software containers rest on top of a single operating system instance and are much more efficient than hypervisors in terms of system resources. Docker containers make it easy to package and ship applications, and guarantee that applications will always run the same, regardless of the environment they are running in. There is a whole range of use cases for containers; this work examines their usage in the field of software testing. The thesis proposes three main use case categories for running software systems in Docker containers. It introduces aspects for applications running in containers, which should give a better overview of an application's setting within a container infrastructure. Subsequently, possible issues with testing software systems running inside Docker containers are discussed, and testing methods which address the presented issues are proposed. One proposed testing method was also used in the implementation of a framework for testing software running in Docker containers, which was developed within this work.
66

A Comparative Study on the Performance Isolation of Virtualization Technologies

January 2019 (has links)
Virtualization technologies are widely used in modern computing systems to deliver shared resources to heterogeneous applications. Virtual Machines (VMs) are the basic building blocks for Infrastructure as a Service (IaaS), and containers are widely used to provide Platform as a Service (PaaS). Although it is generally believed that containers have less overhead than VMs, an important tradeoff which has not been thoroughly studied is the effectiveness of performance isolation, i.e., to what extent the virtualization technology prevents the applications from affecting each other's performance when they share the resources using separate VMs or containers. Such isolation is critical to provide performance guarantees for applications consolidated using VMs or containers. This paper provides a comprehensive study of performance isolation for three widely used virtualization technologies (full virtualization, para-virtualization, and operating-system-level virtualization), using Kernel-based Virtual Machine (KVM), Xen, and Docker containers as the representative implementations of these technologies. The results show that containers generally have less performance loss (up to 69% and 41% compared to KVM and Xen in network latency experiments, respectively) and better scalability (up to 83.3% and 64.6% faster compared to KVM and Xen when increasing the number of VMs/containers to 64, respectively), but they also suffer from much worse isolation (up to 111.8% and 104.92% slowdown compared to KVM and Xen when adding a disk stress test in TeraSort experiments under the full usage (FU) scenario, respectively). Resource reservation tools help virtualization technologies achieve better performance (up to 85.9% better disk performance in TeraSort under the FU scenario), but cannot help them avoid all impacts. / Dissertation/Thesis / Masters Thesis Computer Science 2019
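The slowdown percentages quoted above follow the usual relative-slowdown convention. A small sketch of that arithmetic; the numbers used here are illustrative, not taken from the study:

```python
def slowdown_pct(baseline, contended):
    """Relative slowdown of a contended run versus an uncontended baseline, in percent."""
    return (contended - baseline) / baseline * 100.0

# E.g. a TeraSort run taking 21.18 s under disk stress versus 10 s alone
# is a 111.8% slowdown, the magnitude reported for containers above.
print(slowdown_pct(10.0, 21.18))
```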
67

Container performance benchmark between Docker, LXD, Podman & Buildah

Emilsson, Rasmus January 2020 (has links)
Virtualization is a much-used technology by small and big companies alike, as running several applications on the same server is a flexible and resource-saving measure. Containers, another way of virtualizing, have become a popular choice for companies in the past years, seeing even more flexibility and use cases in continuous integration and continuous development. This study aims to explore how the leading container solutions perform in relation to one another in a test scenario that replicates a continuous integration use case: compiling a big project from source, in this case Firefox. The tested containers are Docker, LXD, Podman and Buildah, whose CPU and RAM usage is measured along with the time to complete the compilation. The containers perform almost on par with bare metal, except for Podman/Buildah, which perform worse during compilation, falling a few minutes behind.
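The benchmark above boils down to wall-clock timing of the same compile job under each container runtime. The thesis does not publish its measurement script, so this is only a minimal, hypothetical harness:

```python
import time

def time_job(job, *args):
    """Run a callable and return (result, elapsed wall-clock seconds)."""
    start = time.perf_counter()
    result = job(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Stand-in for "compile Firefox inside container X": here just a cheap loop.
# In the real benchmark the callable would invoke the build inside each runtime.
result, elapsed = time_job(sum, range(1_000_000))
print(f"job took {elapsed:.4f} s")
```

CPU and RAM usage would be sampled separately (e.g. from cgroup statistics) while the timed job runs.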
68

Implementácia inovácií a rozšírenie funkcionality systému Microsoft Dynamics NAV podľa aktuálnych trendov / Implementation of Innovations and Extension of the Functionality of the Microsoft Dynamics NAV System According to Current Trends

Mudronček, Ivan January 2019 (has links)
This master's thesis focuses on the design and creation of extensions for the Microsoft Dynamics NAV information system. The thesis includes an analysis of the system's internal structure, its objects, the possibilities of extending the system, and its distribution as a Docker container. Three versions of the extension were created, based on the previously mentioned methods and the customer's requests. Subsequently, the specific versions are evaluated with regard to the possibilities of their further development and deployment.
69

Predicting Service Metrics from Device Statistics in a Container-Based Environment

Jiang, Zuoying January 2015 (has links)
Service assurance is critical for high-demand services running on telecom clouds. Since service performance metrics may not always be available in real time to telecom operators or service providers, service performance prediction becomes an important building block for such a system; however, it is generally hard to achieve. In this master thesis, we propose a machine-learning based method that enables performance prediction for services running in virtualized environments with Docker containers. The method is service agnostic, and the prediction models built by it use only device statistics collected from the server machine and from the containers hosted on it to predict the values of the service-level metrics experienced on the client side. The evaluation results from the testbed, which runs a Video-on-Demand service using containerized servers, show that such a method can accurately predict different service-level metrics under various scenarios and that, by applying suitable preprocessing techniques, the performance of the prediction models can be further improved. In this thesis, we also present the design of a proof-of-concept Real-Time Analytics Engine that uses online learning methods to predict the service-level metrics in real time in a container-based environment.
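The prediction task above maps device statistics to client-side service metrics. As a stand-in for the thesis's (unspecified) models, here is a one-feature ordinary least-squares fit in plain Python; the feature and metric names are hypothetical:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on one device-statistic feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Toy data: a client-side metric (e.g. video frame rate) degrading
# linearly with server CPU load.
cpu_load = [10, 20, 30, 40]
frame_rate = [30, 25, 20, 15]
a, b = fit_line(cpu_load, frame_rate)
print(a, b)  # slope -0.5, intercept 35.0
```

A real deployment would use many device statistics at once and, as the thesis notes, online learning so the model tracks the system in real time.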
70

Experimental Investigation of Container-based Virtualization Platforms For a Cassandra Cluster

Sulewski, Patryk, Jesper, Hallborg January 2017 (has links)
Context. Cloud computing is growing fast and has established itself as the next generation software infrastructure. A major role in cloud computing is the virtualization of hardware to isolate systems from each other. This virtualization is often done with Virtual Machines that emulate both hardware and software, which in turn makes the process isolation expensive. New techniques, known as Microservices or containers, have been developed to deal with the overhead. The infrastructure is conjoint with storing, processing and serving vast and unstructured data sets. The overall cloud system needs to have high performance while providing scalability and easy deployment. Microservices can be introduced for all kinds of applications in a cloud computing network, and be a better fit for certain products.
Objectives. In this study we investigate how a small system consisting of a Cassandra cluster performs while encapsulated in LXC and Docker containers, compared to a non-virtualized structure. A specific loader is built to stress the cluster to find the limits of the containers.
Methods. We constructed an experiment on a three-node Cassandra cluster. Test data is sent with the Cassandra-loader from another server in the network. The Cassandra processes are then deployed in the different architectures and tested. During these tests the metrics CPU, disk I/O and network I/O are monitored on the four servers. The data from the metrics is used in statistical analysis to find significant deviations.
Results. Three experiments were conducted and monitored. The cluster test pointed out that isolated Docker containers show major latency during disk reads. A local stress test further confirmed those results. The step-wise test, in turn, implied that the disk read latencies happen because isolated Docker containers need to read more data to handle these requests. All Microservices introduce some overhead, but fall behind the most for read requests.
Conclusions. The results in this study show that virtualization of Cassandra nodes in a cluster brings latency in comparison to a non-virtualized solution for write operations. However, those latencies can be neglected if scalability in a system is the main focus. For read operations all microservices had reduced performance, and isolated Docker containers brought the highest overhead. This is due to the file system used in those containers, which makes disk I/O slower compared to the other structures. If a Cassandra cluster is to be launched in a container environment, we recommend a Docker container with mounted disks to bypass Docker's file system, or an LXC solution.
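The recommendation above (bypassing Docker's layered file system with mounted disks) corresponds to bind-mounting Cassandra's data directory at container start. A hedged sketch that assembles such a command; the host path and container name are hypothetical:

```python
def cassandra_run_cmd(name, host_data_dir):
    """Build a 'docker run' command that bind-mounts the Cassandra data
    directory so disk I/O bypasses Docker's copy-on-write file system."""
    return [
        "docker", "run", "-d",
        "--name", name,
        # /var/lib/cassandra is the data directory in the official image.
        "-v", f"{host_data_dir}:/var/lib/cassandra",
        "cassandra",
    ]

cmd = cassandra_run_cmd("cass-node-1", "/srv/cassandra/data")
print(" ".join(cmd))
```

With the bind mount, reads and writes to the data directory go straight to the host disk rather than through the container's overlay layers, which is the slow path the study identified.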
