About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Evolving geospatial applications: from silos and desktops to Microservices and DevOps

Gao, Bing 30 April 2019 (has links)
The evolution of software applications from single desktops to sophisticated cloud-based systems is challenging. In particular, applications that involve massive data sets, such as geospatial and data science applications, are challenging for domain experts who are suddenly constructing these sophisticated code bases. Relatively new software practices, such as Microservice infrastructure and DevOps, give us an opportunity to improve development, maintenance and efficiency across the entire software lifecycle. Microservices and DevOps have been adopted by software developers in the past few years, as they relieve many of the burdens associated with software evolution. Microservices is an architectural style that structures an application as a collection of services. DevOps is a set of practices that automates the processes between software development and IT teams, in order to build, test, and release software faster and more reliably. Combined with lightweight virtualization solutions, such as containers, this technology will not only improve response rates in cloud-based solutions but also drastically improve the efficiency of software development. This thesis studies two domain-specific applications that apply Microservices and DevOps. The advantages and disadvantages of the Microservices architecture and DevOps are evaluated through design and development on two different platforms---a batch-based cloud system and a general-purpose cloud environment. / Graduate
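The "collection of services" idea above can be sketched very compactly: each microservice is a small, independently deployable process with its own HTTP API and a health endpoint for the DevOps pipeline to probe. The sketch below uses only the Python standard library; the "tile-metadata" service name and the port are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch of one microservice (a hypothetical "tile-metadata"
# service for a geospatial application), stdlib only. A real deployment
# would package this as a container image and let DevOps tooling
# build, test and release it.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class TileMetadataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Health endpoint probed by the orchestrator / CI pipeline.
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging in this sketch

def serve(port=8901):
    """Start the service in a background thread and return the server."""
    server = HTTPServer(("127.0.0.1", port), TileMetadataHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = serve()
    with urllib.request.urlopen("http://127.0.0.1:8901/health") as resp:
        print(resp.status, json.loads(resp.read()))  # 200 {'status': 'ok'}
    srv.shutdown()
```

An application structured this way is a set of such processes, each scaled and released independently, which is exactly what makes the container-plus-DevOps combination attractive.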
22

Comparison of Auto-Scaling Policies Using Docker Swarm / Jämförelse av autoskalningspolicies med hjälp av Docker Swarm

Adolfsson, Henrik January 2019 (has links)
When deploying software engineering applications in the cloud, two similar software components are used: Virtual Machines and Containers. In recent years containers have seen an increase in popularity and usage, in part because of tools such as Docker and Kubernetes. Virtual Machines (VMs) have also seen an increase in usage as more companies move to solutions in the cloud with services like Amazon Web Services, Google Compute Engine, Microsoft Azure and DigitalOcean. There are also solutions using auto-scaling, a technique where VMs are commissioned and deployed as load increases in order to maintain application performance, and decommissioned as load decreases in order to reduce costs. In this thesis we implement and evaluate auto-scaling policies that use both Virtual Machines and Containers. We compare four different policies, including two baseline policies. For the non-baseline policies we define a policy where we use a single Container for every Virtual Machine and a policy where we use several Containers per Virtual Machine. To compare the policies we deploy an image-serving application and run workloads to test them. We find that the choice of deployment strategy and policy matters for response time and error rate. We also find that deploying applications as described in the method is estimated to take roughly 2 to 3 minutes.
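The commission/decommission loop described above can be sketched as a simple threshold policy. The thresholds, replica limits and load values below are illustrative assumptions, not the values used in the thesis; in a real setup the decision would drive e.g. `docker service scale` or a cloud provider's VM API.

```python
# Minimal sketch of a threshold-based auto-scaling policy of the kind
# compared in the thesis. All numeric thresholds here are assumptions
# for illustration only.
def desired_replicas(current, cpu_load, low=0.2, high=0.8,
                     min_replicas=1, max_replicas=10):
    """Scale out when average CPU load is high, scale in when it is low."""
    if cpu_load > high and current < max_replicas:
        return current + 1       # commission one more container/VM
    if cpu_load < low and current > min_replicas:
        return current - 1       # decommission to reduce cost
    return current               # load is within the acceptable band

if __name__ == "__main__":
    replicas = 2
    for load in [0.9, 0.95, 0.5, 0.1, 0.1]:
        replicas = desired_replicas(replicas, load)
        print(f"load={load:.2f} -> {replicas} replicas")
```

The "one Container per VM" and "several Containers per VM" policies differ only in what a "replica" maps to; the decision logic stays the same.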
23

Utvärdering av containerbaserad virtualisering för telekomsignalering / Evaluation of container-based virtualization for telecom signaling

Arvidsson, Jonas January 2018 (has links)
New and innovative technologies that improve on techniques already in use are constantly being developed. This project evaluated whether containers could be something for the IT company Tieto to use in its telecommunications products. Containers are portable, standalone, lightweight executable packages of software that also contain everything needed to run that software. Containers are a very hot topic right now and a fast-growing technology. Tieto wanted an investigation of the technology, carried out with certain requirements, the main one being a working, executable protocol stack in a container environment. In the investigation, a proof of concept was developed; a proof of concept is a realization of a certain method or idea in order to demonstrate its feasibility. The proof of concept led to Tieto wanting additional experiments carried out on containers. The experiments investigated whether performance equal to Tieto's current virtual machine-based method could be achieved with containers. The experiments observed a small reduction in performance, but also showed benefits such as higher flexibility. Further development of the container method could provide an equally good solution. The project can therefore be seen as successful, as both the proof of concept developed and the experiments carried out point to this new technology becoming part of Tieto's product development in the future.
24

The Telecommuting Software Developer

Norin, Niklas January 2018 (has links)
This thesis designs, and partially implements, an architecture for running an embedded Linux application on a regular PC, without access to the target device. It shows how a standard Linux user-space filesystem, in the right environment, can be used to emulate the most common user-space GPIO interface in Linux, SysFS. Furthermore, it sets up a template for how this architecture can be used to run both the embedded application and an application emulating the connected hardware.
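The core trick described above is that the SysFS GPIO interface is just file I/O, so ordinary files in an ordinary directory can stand in for `/sys/class/gpio` on a PC. The sketch below shows this idea with a temporary directory; the layout follows the Linux SysFS convention (`gpioN/value`, `gpioN/direction`), but the emulator itself is a hypothetical illustration, not the thesis's implementation.

```python
# Hedged sketch: emulate the SysFS GPIO layout with plain files so an
# embedded application's file-based GPIO reads work unmodified on a PC.
import os
import tempfile

def make_fake_gpio(root, pin):
    """Create a SysFS-style directory for one GPIO pin."""
    pin_dir = os.path.join(root, f"gpio{pin}")
    os.makedirs(pin_dir, exist_ok=True)
    for name, default in [("value", "0"), ("direction", "in")]:
        with open(os.path.join(pin_dir, name), "w") as f:
            f.write(default)
    return pin_dir

def read_pin(pin_dir):
    """What the embedded application would do, unchanged."""
    with open(os.path.join(pin_dir, "value")) as f:
        return int(f.read().strip())

if __name__ == "__main__":
    root = tempfile.mkdtemp(prefix="fake-sysfs-")
    pin = make_fake_gpio(root, 17)
    print(read_pin(pin))                       # 0: pin starts low
    # The hardware-emulating application "toggles" the pin by writing:
    with open(os.path.join(pin, "value"), "w") as f:
        f.write("1")
    print(read_pin(pin))                       # 1: application sees the change
```

Because both the embedded application and the hardware emulator only touch files, neither needs to know it is not running on the target device.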
25

The Effects of Parallelizing Builds in Continuous Integration Software

Lindblom, William, Johnsson, Jesper January 2018 (has links)
Quick feedback with regard to build times is important in Continuous Integration. If builds take too long, this can hurt the rate of software development. There are multiple methods to reduce build times; one commonly suggested method is to parallelize builds. This thesis aims to investigate the effects of parallelizing builds in Continuous Integration software and to provide support for whether parallelizing is a good way of reducing build times or not. We conducted an experiment consisting of running tests on different Continuous Integration software with different configurations. These configurations changed how many tests were executed and how many parallel build agents were used. The aspects that were observed and analyzed were how build time, average CPU usage and CPU time were affected. What we found was that parallelizing a Continuous Integration build drastically improves build time, while RAM usage and CPU time remain similar. This entails that there are no major consequences to parallelizing other than utilizing more threads and therefore more of the available CPU resources.
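The finding above — wall-clock build time drops while total CPU time stays similar — can be illustrated with a toy build whose test jobs run serially and then across several workers. The per-job durations below are made-up illustration values, not measurements from the thesis.

```python
# Illustrative sketch: the same set of "test jobs" executed with one
# build agent versus several. Wall-clock time shrinks with more agents;
# the total work (sum of job durations) is unchanged.
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(duration):
    time.sleep(duration)          # stand-in for compiling / running tests
    return duration

def build(durations, agents=1):
    """Run all jobs with `agents` workers; return wall-clock seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=agents) as pool:
        list(pool.map(run_test, durations))
    return time.perf_counter() - start

if __name__ == "__main__":
    tests = [0.05] * 8            # eight equally sized test jobs
    serial = build(tests, agents=1)
    parallel = build(tests, agents=4)
    print(f"serial: {serial:.2f}s, 4 agents: {parallel:.2f}s")
```

With perfectly divisible jobs the speedup approaches the agent count; real CI builds see less because of dependencies between build steps.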
26

Deployment of AI Model inside Docker on ARM-Cortex-based Single-Board Computer : Technologies, Capabilities, and Performance

WoldeMichael, Helina Getachew January 2018 (has links)
IoT has become tremendously popular. It provides information access, processing and connectivity for a huge number of devices or sensors. IoT systems, however, often do not process the information locally, but rather send it to remote locations in the Cloud. As a result, this adds a huge amount of data traffic to the network and additional delay to data processing. The latter can have a significant impact on applications that require fast response times, such as sophisticated artificial intelligence (AI) applications including augmented reality, face recognition, and object detection. Consequently, the edge computing paradigm, which enables computation of data near its source, has gained significant importance in recent years as a way of achieving fast response times. IoT devices can be employed to provide computational resources at the edge of the network, near the sensors and actuators. The aim of this thesis work is to design and implement a kind of edge computing concept that brings AI models to a small embedded IoT device through the use of virtualization concepts. Virtualization technology enables the easy packing and shipping of applications to different hardware platforms, and additionally enables the mobility of AI models between edge devices and the Cloud. We implement an AI model inside a Docker container, which is deployed on a Firefly-RK3399 single-board computer (SBC). Furthermore, we conduct CPU and memory performance evaluations of Docker on the Firefly-RK3399. The methodology adopted to reach our goal is experimental research. First, different literature was studied to demonstrate, by implementation, the feasibility of our concept. Then we set up an experiment that covers measurement of performance metrics by applying synthetic load in multiple scenarios. Results are validated by repeating the experiment and by statistical analysis.
The results of this study show that an AI model can successfully be deployed and executed inside a Docker container on an ARM-Cortex-based single-board computer. A Docker image of the OpenFace face recognition model is built for the ARM architecture of the Firefly SBC. The performance evaluation reveals that the overhead of Docker in terms of CPU and memory is negligible. The research work describes the mechanisms by which AI applications can be containerized on the ARM architecture. We conclude that the methods can be applied to containerize software applications on ARM-based IoT devices. Furthermore, the insignificant overhead introduced by Docker facilitates deployment of applications inside containers with little performance cost. The functionality of the IoT device, i.e. the Firefly-RK3399, is exploited in this thesis; it is shown that the device is capable and powerful, which gives an insight for further studies.
27

Impact of Cassandra Compaction on Dockerized Cassandra’s performance : Using Size Tiered Compaction Strategy

Mohanty, Biswajeet January 2016 (has links)
Context. Cassandra is a NoSQL database which handles large amounts of data simultaneously and provides high availability for the data present. Compaction in Cassandra is a process of removing stale data and making data more available to the user. This thesis focuses on analyzing the impact of Cassandra compaction on Cassandra's performance when running inside a Docker container. Objectives. In this thesis, we investigate the impact of Cassandra compaction on database performance when Cassandra is used within a Docker-based container platform. We further fine-tune Cassandra's compaction settings to arrive at a sub-optimal scenario which maximizes its performance while operating within a Docker container. Methods. A literature review is performed to enlist the different compaction-related metrics and parameters which have an effect on Cassandra's performance. Further, experiments are conducted using different sets of mixed workload to estimate the impact of compaction on database performance within a Docker container. Once these experiments are conducted, we modify compaction settings while operating under a write-heavy workload and assess database performance in each of these scenarios to identify a sub-optimal value of each parameter for maximum database performance. Finally, we use these sub-optimal parameters to perform an experiment and assess the database performance. Results. The Cassandra and operating-system related parameters and metrics which affect Cassandra compaction are listed, and their effect on Cassandra's performance has been tested in experiments. Based on these experiments, sub-optimal values are proposed for the listed parameters. Conclusions. It can be concluded that, for better performance of Dockerized Cassandra, the proposed values for each of the parameters in the results (i.e. 5120 for memtable_heap_size_in_mb, 24 for concurrent_compactors, 16 for compaction_throughput_mb_per_sec, 6 for memtable_flush_writers and 0.14 for memtable_cleanup_threshold) can be chosen separately, but not the union of those proposed values (confirmed by the experiment performed). The metrics and parameters affecting Cassandra performance are also listed in this thesis.
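For reference, a sketch of where these per-parameter values live in Cassandra's configuration. Key names are as listed in the conclusion (in a stock cassandra.yaml the heap setting appears as memtable_heap_space_in_mb); note the thesis found each value helps when tuned separately, not when all are applied together.

```yaml
# Illustrative cassandra.yaml fragment -- apply ONE value at a time,
# since the thesis found the union of the proposed values does not help:
memtable_heap_space_in_mb: 5120
concurrent_compactors: 24
compaction_throughput_mb_per_sec: 16
memtable_flush_writers: 6
memtable_cleanup_threshold: 0.14
```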
28

Jails vs Docker : A performance comparison of different container technologies

Ryding, Christian, Johansson, Rickard January 2020 (has links)
Virtualization is used extensively in enterprise IT architecture and cloud computing; it is used to provide customers a share of hardware resources as a service. Container technology is the new generation of virtualization and provides performance benefits due to less overhead. Earlier research has compared the performance of different container technologies, including Docker, which is the most popular container technology. Most of this research has focused on Linux-based container technologies, even though there is interest in knowing how container technologies under other operating systems perform. In this study we explore the performance of Docker in contrast to the performance of a contending container technology named Jails. We present how well each container technology performs running one or multiple containers, in the areas of CPU, memory, read from disk, write to disk, network and startup time efficiency. The comparison was done using collected statistics from different benchmarking tools. Results from this study have shown that Docker utilizes shared resources more efficiently and has better stability compared to Jails. We also discuss what unexplored benefits Docker and Jails could gain by implementing each other's unique features. Future work could consist of write-to-disk or read-from-disk performance tests under one common filesystem, e.g. the ZFS file system.
29

Odhad rychlosti automobilů ve videu / Vehicle Speed Estimation from Video

Hájek, Pavel January 2017 (has links)
This master's thesis describes the design and development of an application for estimating vehicle speed from both a recorded video file and a camera stream. It explains the process of camera calibration, vehicle detection and tracking, and describes the Robot Operating System (ROS) as a target platform. The application uses the OpenCV library for most tasks; to access video it uses the FFmpeg library. Results can be printed to the terminal window, logged to a file, or published in ROS. The application is written in C++, with some parts in Python.
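Once camera calibration yields a ground-plane scale, the speed estimate itself is simple arithmetic over a vehicle's tracked positions. The sketch below shows that final step in isolation; the scale factor, frame rate and track are made-up illustration values, and a real pipeline would obtain the per-frame positions from detection and tracking (e.g. with OpenCV) rather than from a hard-coded list.

```python
# Hedged sketch of the speed-estimation arithmetic: pixel displacement
# between frames, converted to metres via the calibration scale, over
# the elapsed time given by the frame rate.
def speed_kmh(track, metres_per_pixel, fps):
    """track: list of (x, y) pixel positions, one per frame."""
    if len(track) < 2:
        return 0.0
    dist_px = 0.0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dist_px += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    metres = dist_px * metres_per_pixel
    seconds = (len(track) - 1) / fps
    return metres / seconds * 3.6   # m/s -> km/h

if __name__ == "__main__":
    # A vehicle moving 10 px/frame at 25 fps, with a calibrated scale
    # of 0.05 m/px, covers 12.5 m in one second:
    track = [(i * 10, 100) for i in range(26)]
    print(round(speed_kmh(track, 0.05, 25)))  # 45
```

In practice the metres-per-pixel scale varies across the image, which is why the calibration step (a homography from image to road plane) matters so much for accuracy.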
30

Zabezpečená archivace dat s využitím cloudového výpočtu / Secure data archiving using cloud computing

Šulič, Martin January 2021 (has links)
This master's thesis is focused on a detailed analysis of the possibilities of implementing a private cloud and secure long-term data archiving using open-source tools. It describes the individual standards and processes of data preparation, as well as the OAIS reference model for long-term preservation. From the analyzed information, a complete design of the final solution is created, with a description of its functionality and the method of deployment in an environment of Docker containers. The implementation of the design and the main functionality of individual systems such as Archivematica and Nextcloud are thoroughly described, and the hardware requirements and cryptographic security are evaluated.
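One building block of the data-preparation and long-term-preservation processes mentioned above is fixity checking: recording checksums for archived files and later verifying they are unchanged. Tools like Archivematica do this while assembling an archival package; the stdlib-only sketch below illustrates the idea and is not their implementation.

```python
# Minimal fixity-check sketch for long-term preservation: build a
# checksum manifest for archived files, then verify it later.
import hashlib

def checksum(path, algo="sha256"):
    """Stream a file through a hash so large archives fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def make_manifest(paths):
    """Record the current digest of every archived file."""
    return {p: checksum(p) for p in paths}

def verify(manifest):
    """Return the files whose content no longer matches the manifest."""
    return [p for p, digest in manifest.items() if checksum(p) != digest]
```

A periodic `verify` run over the manifest is what turns stored bytes into a preservation guarantee: silent corruption or tampering shows up as a non-empty result.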
