161

INVESTIGATING ESCAPE VULNERABILITIES IN CONTAINER RUNTIMES

Michael J. Reeves, 14 May 2021
Container adoption has exploded in recent years, with over 92% of companies using containers as part of their cloud infrastructure. This explosion is partly due to the easy orchestration and lightweight operation of containers compared to traditional virtual machines. As container adoption increases, servers hosting containers become more attractive targets for adversaries looking to gain control of a container to steal trade secrets, exfiltrate customer data, or hijack hardware for cryptocurrency mining. To control a container host, an adversary can exploit a vulnerability that enables them to escape from the container onto the host. This kind of attack is termed a "container escape" because the adversary is able to execute code on the host from within the isolated container. The vulnerabilities that allow container escape exploits originate from three main sources: (1) container profile misconfiguration, (2) the host's Linux kernel, and (3) the container runtime. While the first two cases have been studied in the literature, to the best of the author's knowledge there is at present no work that investigates the impact of container runtime vulnerabilities. To fill this gap, a survey of container runtime vulnerabilities was conducted, investigating 59 CVEs across 11 different container runtimes. Because CVE data alone would limit the analysis, the investigation focused on the 28 CVEs with publicly available proof-of-concept (PoC) exploits. To facilitate this analysis, each exploit was broken down into a series of high-level commands executed by the adversary, called "steps". Using the steps of each CVE's corresponding exploit, a seven-class taxonomy of these 28 vulnerabilities was constructed, revealing that 46% of the CVEs had a PoC exploit that enabled a container escape.
Since container escapes were the most frequently occurring category, the nine corresponding PoC exploits were further analyzed to reveal that the underlying cause of these container escapes was a host component leaking into the container. This survey provides new insight into system vulnerabilities exposed by container runtimes thereby informing the direction of future research.
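The underlying cause identified above, a host component leaking into the container, can be illustrated with a minimal and purely defensive sketch. The path prefixes below are hypothetical heuristics for this illustration, not taken from the surveyed PoC exploits:

```python
import os

# Path prefixes that, seen from inside a container, often indicate a
# handle onto the host rather than the container's own filesystem.
# These prefixes are illustrative guesses, not from the surveyed PoCs.
SUSPICIOUS_PREFIXES = ("/var/lib/docker", "/dev/sd", "/proc/1/root")

def find_leaked_fds():
    """Return (fd, target) pairs for open file descriptors whose link
    targets look like host-side paths leaked into this process."""
    fd_dir = "/proc/self/fd"
    if not os.path.isdir(fd_dir):  # non-Linux systems have no procfs
        return []
    leaked = []
    for name in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, name))
        except OSError:
            continue  # descriptor was closed between listdir and readlink
        if target.startswith(SUSPICIOUS_PREFIXES):
            leaked.append((int(name), target))
    return leaked
```

Several of the real container-escape PoCs invert this idea: rather than auditing descriptors, they hunt for one that the runtime accidentally left pointing at the host and reuse it to write outside the container.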
162

Investigating differences in response time and error rate between a monolithic and a microservice based architecture

Johansson, Gustav, January 2019
With great advancements in cloud computing, the microservice architecture has become a promising architectural style for enterprise software. It has been proposed to cope with problems of the traditional monolithic architecture, which include slow release cycles, limited scalability, and low developer productivity. This thesis therefore investigates the affordances and challenges of adopting microservices, as well as the difference in performance compared to the monolithic approach, at one of Sweden's largest banks, SEB (Skandinaviska Enskilda Banken). The investigation consisted of a literature study of research papers and official documentation on microservices. Moreover, two applications were developed and deployed using two different system architectures: a monolithic architecture and a microservice architecture. Performance tests were executed on both systems to gather quantitative data for analysis. The two metrics investigated in this study were response time and error rate. The results indicate that the microservice architecture has a significantly higher error rate and a slower response time than the monolithic approach, further strengthening the results of Ueda et al. [47] and Villamizar et al. [48]. The findings are then discussed with regard to the challenges and complexity involved in implementing distributed systems. From this study it becomes clear that the complexity shifts from inside the application out towards the infrastructure with a microservice architecture. Microservices should therefore not be seen as a silver bullet; the choice of architecture is highly dependent on the scope of the project and the size of the organization.
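The two metrics compared above can be gathered with a harness of roughly the following shape. This is an illustrative sketch, not the load-testing setup used in the thesis; the URL and request count are placeholders:

```python
import time
import urllib.request
from statistics import mean

def load_test(url, requests=100, timeout=5):
    """Issue sequential requests; record per-request latency and errors."""
    latencies, errors = [], 0
    for _ in range(requests):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                resp.read()
        except Exception:
            errors += 1  # any failure (timeout, 5xx, DNS) counts as an error
        else:
            latencies.append(time.perf_counter() - start)
    return {
        "mean_response_time": mean(latencies) if latencies else None,
        "error_rate": errors / requests,
    }
```

Running the same harness against both deployments, e.g. `load_test("http://monolith.example/api")` versus `load_test("http://gateway.example/api")`, yields directly comparable response-time and error-rate figures.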
163

Parametric design and optimisation of thin-walled structures for food packaging

Ugail, Hassan, January 2003
In this paper the parametric design and functional optimisation of thin-walled plastic structures for food packaging is considered. These objects are produced in such vast numbers each year that one important task in their design is to minimise the amount of plastic used, subject to functional constraints, in order to reduce production costs and conserve raw materials. By performing an automated optimisation over the possible shapes of the food containers, where the geometry is parametrised succinctly, a strategy for creating the optimal container design subject to a given set of functional constraints is demonstrated.
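As a toy illustration of the idea, not the paper's PDE-based parametric geometry, one can search a one-parameter family of shapes for the design that uses the least plastic at a fixed capacity. The shape (an open cylindrical tub), capacity, and wall thickness below are invented for the example:

```python
import math

def plastic_volume(radius, capacity=0.5e-3, thickness=0.4e-3):
    """Plastic used (m^3) for an open cylinder holding `capacity` m^3:
    (base area + wall area) multiplied by the wall thickness."""
    height = capacity / (math.pi * radius ** 2)
    surface = math.pi * radius ** 2 + 2 * math.pi * radius * height
    return surface * thickness

# Grid search over radii from 20 mm to 120 mm in 1 mm steps.
radii = [r / 1000 for r in range(20, 121)]
best_radius = min(radii, key=plastic_volume)
```

For this simplified model the analytic optimum is r = (V/pi)^(1/3), roughly 54 mm for the half-litre capacity assumed here, and the grid search recovers it; the thesis's optimisation plays the same game over a far richer, constraint-laden shape parametrisation.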
164

Cost-Effective Large-Scale Digital Twins Notification System with Prioritization Consideration

Vrbaski, Mira, 19 December 2023
A Large-Scale Digital Twins Notification System (LSDTNS) monitors a Digital Twin (DT) cluster for a predefined critical state and, once it detects such a state, sends a Notification Event (NE) to a predefined recipient. Additionally, the time from producing the DT's Complex Event (CE) to sending an alarm has to be less than a predefined deadline. However, addressing scalability and multiple objectives, such as deployment cost, resource utilization, and meeting the deadline, on top of process scheduling, presents a complex challenge. This thesis therefore presents a methodology consisting of three contributions that address system scalability, multi-objectivity, and the scheduling of CE processes using Reinforcement Learning (RL). The first contribution proposes an IoT Notification System Architecture based on a micro-service notification methodology that allows running, and seamlessly switching between, various CE reasoning algorithms; it addresses the scalability issue in state-of-the-art CE recognition systems. The second contribution proposes a novel methodology for multi-objective optimization for cloud provisioning (MOOP). MOOP is the first work dealing with multiple optimization objectives for microservice notification applications, where the notification load is variable and depends on the results of previous microservice subtasks. MOOP provides a multi-objective mathematical cloud resource deployment model and demonstrates its effectiveness through a case study. Finally, the thesis presents a Scheduler for large-scale Critical Notification applications based on Deep Reinforcement Learning (SCN-DRL), a scheduling approach for LSDTNS using RL. SCN-DRL is the first work dealing with multi-objective optimization for critical microservice notification applications using RL. In the performance evaluation, SCN-DRL demonstrates better performance than state-of-the-art heuristics and shows steady performance as the notification workload increases from 10% to 90%. In addition, SCN-DRL, tested with three neural networks, is resilient to a sudden 10% drop in container resources, an important attribute of a distributed system.
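The multi-objective trade-off described above can be illustrated with a hypothetical scalarized reward of the kind RL schedulers commonly optimize. The weights, normalization, and cost cap here are invented for the illustration and are not from the thesis:

```python
def scheduling_reward(cost, utilization, met_deadline,
                      weights=(0.4, 0.3, 0.3), max_cost=100.0):
    """Combine three objectives into one scalar reward: lower deployment
    cost, higher resource utilization, and meeting the notification
    deadline. All terms are normalized into [0, 1]."""
    w_cost, w_util, w_deadline = weights
    cost_term = 1.0 - min(cost / max_cost, 1.0)   # cheaper is better
    util_term = max(0.0, min(utilization, 1.0))   # fuller nodes are better
    deadline_term = 1.0 if met_deadline else 0.0  # the hard requirement
    return w_cost * cost_term + w_util * util_term + w_deadline * deadline_term
```

An RL agent trained against such a reward implicitly balances the three objectives; moving weight onto the deadline term pushes the learned policy toward schedules that sacrifice cost for timeliness.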
165

A Comparison of CI/CD Tools on Kubernetes

Johansson, William, January 2022
Kubernetes is a fast-emerging technological platform for developing and operating modern IT applications. The capacity to deploy new apps and change old ones at a faster rate, with less chance of error, is one of the key value propositions of the Kubernetes platform. A continuous integration and continuous deployment (CI/CD) pipeline is a crucial component of the technology. Such pipelines compile all updated code, run specific tests, and may then automatically deploy the produced code artifacts to a running system. There is a thriving ecosystem of CI/CD tools, which can be divided into two types: integrated and standalone. Integrated tools are used for both pipeline phases, CI and CD. Standalone tools are used for just one of the phases, which requires two independent programs to build up the pipeline. Some tools predate Kubernetes and have been adapted to operate on Kubernetes, while others are new and designed specifically for use with Kubernetes clusters. CD systems are classified as push-style (artifacts from outside the cluster are pushed into the cluster) or pull-style (a CD tool running inside the cluster pulls built artifacts into the cluster). Pull- and push-style pipelines affect how cluster credentials are managed and whether they ever need to leave the cluster. This thesis investigates the deployment time, fault tolerance, and access security of pipelines. Using a simple microservices application, a testing setup is created to measure the metrics of the pipelines. Drone, Argo Workflows, ArgoCD, and GoCD are the tools compared in this study, coupled to form various pipelines. The pipeline using the Kubernetes-specific tools, Argo Workflows and ArgoCD, is the fastest; the pipeline with GoCD is somewhat slower; and the Drone pipeline is the slowest. The pipeline that used Argo Workflows and ArgoCD could also withstand failures, whereas the other pipelines, using Drone and GoCD, were unable to recover and timed out. Pull pipelines handle Kubernetes access differently from push pipelines: the cluster credentials do not have to leave the cluster, whereas push pipelines need the cluster credentials in the external environment where the CD tool runs.
166

Mechanical behaviour and durability of disposable food containers / Egenskaper och hållbarhet av engångsförpackningar för livsmedel

Johansson, Frida, January 2024
A large proportion of the food that is consumed daily is bought ready-made and is served in some sort of disposable container. ConServ AB develops and produces sustainable food packaging made from Areca palm leaves. The company wants to know how durable the product is, so that it can further investigate on its own what form the product should have for the best durability, and how durable the product is during usage. The aim is to conduct a pilot study to investigate and identify trends regarding the material's durability and mechanical behaviour during, and as a result of, simulated usage. The goal is to use tensile tests and photographic methods to produce a data-based foundation on the material's behaviour for ConServ. In order to evaluate the durability and behaviour of the product, a systematic study was carried out in which tensile tests were performed on test pieces exposed to a food simulant in the form of water or a vinegar solution. The test pieces were exposed for 0, 1, 6, 24, or 48 hours, and tests were performed immediately after exposure. Experimental data show that the durability of the product depends to a large extent on the fibre direction: test pieces taken perpendicular to the fibre direction performed worse in the tensile test. The mechanical behaviour of the material is affected by the time it is exposed to liquid, becoming more ductile with time.
167

The Effect of Pallets and Unitization on the Efficiency of Intercontinental Product Movement Using Ocean Freight Containers

Hagedorn, Alexander, 31 August 2009
Global industrialization developed in response to both consumers' and manufacturers' demand for lower product prices and availability of goods and services. As a result, products are transported greater distances, and shipping constitutes the majority of costs in the export/import supply chain. Shippers and buyers commonly attempt to offset these costs by maximizing the capacity of ocean freight containers (cube or weight). Boxes (usually constructed of corrugated fiberboard) containing consumer-grade products are commonly floor-loaded into containers to maximize capacity; boxes that are not floor-loaded are likely to be unitized on pallets in containers. Beyond maximizing a container with cargo, there is no defined decision procedure for determining which method of loading is most efficient in regard to cost and time. For this research, field studies were conducted and questionnaires were distributed to identify the variables that influence efficiency. A method for making an efficient decision was developed by incorporating the variables into a model. The model compares the overall export/import supply chain efficiency of boxes that are floor-loaded with that of boxes unitized on pallets in containers. The recommended decision is determined by comparing the shipping and handling costs and the receiving dock door capability for the two loading methods. The results of this research reveal that floor-loading boxes can provide a higher value per container due to increased capacity, which often reduces the number of containers needed to meet daily demand. However, since manual labor is used for the loading/unloading process, more time is required, which results in higher labor costs and restricted product throughput. Unitized boxes loaded in containers on pallets can limit container capacity, but allow for faster loading/unloading times (if no incompatibilities exist between product and pallet, or between pallet and material handling equipment), reduced labor costs, and the potential for increased product throughput. Importing boxes unitized on pallets commonly requires more containers to meet demand, but fewer receiving dock doors; utilizing fewer dock doors allows otherwise occupied doors to be available to receive additional product. The decision to floor-load or unitize exports/imports needs to be made on a per-SKU basis meeting daily demand, not only per container capacity. Labor cost, pallet cost, the magnitude of box-count variation between loading methods, and the ability of the receiver to process containers are all influencing factors in determining which loading method is most efficient overall. Given the current cost of containerized shipments, and considering all costs, most consumer goods are more efficiently shipped floor-loaded. When additional containers would be needed to meet demand for product unitized on pallets, floor loading will be more efficient. When there is only a small difference in box count between floor loading and palletizing, palletizing will be more efficient; this will often occur when loads meet container weight capacity before reaching volume capacity. If the product is too heavy to move manually, it will be palletized. / Ph. D.
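The cost comparison at the heart of the decision can be sketched as a toy model. This is a simplified illustration with made-up parameters, not the dissertation's full model, which also weighs dock-door capability and throughput:

```python
import math

def cheaper_loading(daily_demand_boxes,
                    boxes_per_container_floor, boxes_per_container_pallet,
                    shipping_cost_per_container,
                    labor_cost_floor, labor_cost_pallet,
                    pallet_cost_per_container):
    """Compare the total cost of meeting daily demand with floor-loaded
    versus palletized containers and return the cheaper method."""
    floor_containers = math.ceil(daily_demand_boxes / boxes_per_container_floor)
    pallet_containers = math.ceil(daily_demand_boxes / boxes_per_container_pallet)
    floor_total = floor_containers * (shipping_cost_per_container + labor_cost_floor)
    pallet_total = pallet_containers * (shipping_cost_per_container
                                        + labor_cost_pallet
                                        + pallet_cost_per_container)
    if floor_total <= pallet_total:
        return ("floor", floor_total)
    return ("palletized", pallet_total)
```

Even this toy model reproduces the qualitative findings above: when palletizing forces extra containers, floor loading wins; when box counts are nearly equal, the lower handling cost of pallets wins.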
168

[pt] OS RECENTES AVANÇOS DA MULTIMODALIDADE NO BRASIL / [en] THE RECENT ADVANCES IN BRAZILIAN MULTIMODALITY

Adriana Ferreira Pedreira, 02 August 2006
Transport is generally the most important element of logistics costs for most companies: according to Ballou (1993), freight movement absorbs between one and two thirds of total logistics costs. Against this background, this study of multimodality was developed. The focus is on the facilities and services that make up the multimodal transport system, with special emphasis on port infrastructure, on rates (costs), and on the performance of the various services involved. The main objective of the work is to understand how the privatization of transport structures (ports, railroads, and highways) made possible the evolution of multimodality as an alternative for national transport, a relatively new subject, still little explored academically and with scarce bibliographical references. To understand how the privatization process transformed the transport structures, especially Brazilian port activity, enabling multimodal transport and the advantages that follow from it, related topics had to be addressed, such as logistics and the current scenarios of the road, rail, and maritime (cabotage) transport modes. The respective privatization processes, the current state of these structures, and the logistics operators are also analyzed, all of which contribute to a better understanding of the questions concerning the Multimodal Transport Operator.
169

Towards a Flexible High-efficiency Storage System for Containerized Applications

Zhao, Nannan, 08 October 2020
Due to their tight isolation, low overhead, and efficient packaging of the execution environment, Docker containers have become a prominent solution for deploying modern applications. Consequently, a large number of Docker images are created, and this massive image dataset presents challenges to the registry and container storage infrastructure that have so far remained largely unexplored. Hence, there is a need for Docker image characterization that can help optimize and improve storage systems for containerized applications. Moreover, existing deduplication techniques significantly degrade the performance of registries, which slows down container startup; there is therefore growing demand for high-efficiency, high-performance registry storage systems. Last but not least, different storage systems can be integrated with containers as backend storage systems and provide persistent storage for containerized applications, so it is important to analyze the performance of different backend storage systems and storage drivers and draw out the implications for container storage system design. These observations and challenges motivate my dissertation. In this dissertation, we aim to improve the flexibility, performance, and efficiency of storage systems for containerized applications. To this end, we focus on three important aspects: Docker images, the Docker registry storage system, and Docker container storage drivers with their backend storage systems. Specifically, this dissertation adopts three steps: (1) analyzing the Docker image dataset; (2) deriving the design implications; and (3) designing a new storage framework for Docker registries and proposing optimizations for container storage systems.
In the first part of this dissertation (Chapter 3), we analyze over 167TB of uncompressed Docker Hub images, characterize them using multiple metrics, and evaluate the potential of file-level deduplication in Docker Hub. In the second part (Chapter 4), we conduct a comprehensive performance analysis of container storage systems based on the key insights from our image characterization and derive several design implications. In the third part (Chapter 5), we propose DupHunter, a new Docker registry architecture which not only natively deduplicates layers for space savings but also reduces layer restore overhead. DupHunter supports several configurable deduplication modes, which provide different levels of storage efficiency, durability, and performance, to support a range of uses. In the fourth part (Chapter 6), we explore an innovative holistic approach, Chameleon, that employs data redundancy techniques such as replication and erasure coding, coupled with endurance-aware write offloading, to mitigate wear-level imbalance in distributed SSD-based storage systems. This high-performance flash cluster can be used by registries to speed up performance. / Doctor of Philosophy / The number of Docker images stored in Docker registries is increasing rapidly and presents challenges for the underlying storage infrastructure. Before optimizing the storage system, we should first analyze this big Docker image dataset. To this end, in this dissertation we perform the first large-scale characterization and redundancy analysis of the images and layers stored in the Docker Hub registry. Based on the findings, this dissertation presents a series of practical and efficient techniques, algorithms, and optimizations to achieve a high-performance, flexible, and space-efficient storage system for containerized applications. The experimental evaluation demonstrates the effectiveness of our optimizations and techniques in making storage systems flexible and space-efficient.
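The deduplication idea underlying registry space savings can be reduced to a toy content-addressed store. This sketch illustrates the general technique of keeping identical content once; it is not DupHunter's actual architecture:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are stored once,
    keyed by their SHA-256 digest."""

    def __init__(self):
        self.chunks = {}   # digest -> bytes, one physical copy per digest
        self.logical = 0   # total bytes ever written (pre-dedup)

    def put(self, data: bytes) -> str:
        """Store a chunk and return its digest; duplicates cost nothing."""
        digest = hashlib.sha256(data).hexdigest()
        self.logical += len(data)
        self.chunks.setdefault(digest, data)
        return digest

    def physical_bytes(self) -> int:
        """Bytes actually stored after deduplication."""
        return sum(len(c) for c in self.chunks.values())
```

The ratio `logical / physical_bytes()` is the deduplication ratio; the characterization in Chapter 3 asks, in effect, how large that ratio is for real Docker Hub layers, while DupHunter's challenge is to get the savings without paying the layer-restore cost on every pull.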
170

The state of WebAssembly in distributed systems : With a focus on Rust and Arc-Lang / En utvärdering av WebAssembly inom Distribuerade system : Med fokus på Rust och Arc-Lang

Moise, Theodor-Andrei, January 2023
With the current developments in modern web browsers, WebAssembly has been a rising trend over the last four years. Aimed at replacing bits of JavaScript functionality, it attempts to bring extra features to achieve portability and sandboxing through virtualisation. After the release of the WebAssembly System Interface, more and more projects have been working on using it outside web pages and browsers, in scenarios such as embedded, serverless, or distributed computing. It is thus relevant not only to the web and its clients, but also to applications in distributed systems. Considering the novelty of the topic, there is currently very little related scientific literature; with constant changes in development, proposals, and goals, there is a large gap in relevant research. We aim to help bridge this gap by focusing on Rust and Arc-Lang, a domain-specific language for data analytics, in order to provide an overview of how far the technology has progressed, what runtimes there are, and how they work. We investigate what kinds of use cases WebAssembly could have in the context of distributed systems, as well as how it can benefit data processing pipelines. Even though the technology still looks immature at first glance, it is worth checking whether its proposals have been implemented, and how its performance compared to that of native Rust can affect data processing in a pipeline. We show this by benchmarking a filter program as part of a distributed container environment, while looking at different WebAssembly compilers such as Cranelift and LLVM. We then compare the resulting statistics to native Rust and present a synopsis of the state of WebAssembly in a distributed context.
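The benchmarking approach described above reduces to a harness of roughly the following shape. This is an illustrative Python stand-in; the thesis benchmarks a Rust/WebAssembly filter, and the workload and repeat count here are invented:

```python
import time

def bench(fn, data, repeats=5):
    """Run `fn(data)` several times and return the best wall-clock time,
    the usual way to suppress scheduling noise in micro-benchmarks."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(data)
        timings.append(time.perf_counter() - start)
    return min(timings)

def keep_even(values):
    """A trivial filter standing in for the thesis's filter program."""
    return [v for v in values if v % 2 == 0]

native_time = bench(keep_even, list(range(100_000)))
```

In the thesis's setting the same filter would be compiled twice, to native code and to WebAssembly under each runtime/compiler (e.g. Cranelift, LLVM), and the ratio of the measured times characterizes the overhead of the sandbox.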
