151. A study of drug-plastic interactions in a variety of plastic containers
Smith, Charles Arthur, 01 January 1979
In most hospitals today, plastic devices are replacing traditional metal, glass and rubber ones. The increased use of polymeric materials as implanted prosthetic devices, catheters, disposable equipment and for the administration of blood, intravenous fluids and drugs has been widely accepted by the medical profession. The present study was designed to evaluate different types of plastics for potential use as large-volume parenteral containers. Using several different therapeutic agents, a variety of plastic containers were examined for possible drug-plastic interactions. A commercially available plasticized product used for the administration of parenteral solutions was included so that the results for commercial products could be compared with those for non-commercial products.
152. Využití virtualizace v podnikovém prostředí / Using Virtualization in the Enterprise Environment
Bartík, Branislav, January 2016
The diploma thesis proposes a solution for a fictitious company, XYZ s.r.o., for reducing costs by building a training environment in which its employees can develop their skills and experience in the field. The solution can also be used by university employees and students to test enterprise software for educational purposes. The author highlights the benefits of using cloud computing and open-source software, as well as the use of Docker container technology in combination with commercial software such as IBM WebSphere Application Server.
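For illustration, a minimal sketch of how such a training environment could be brought up programmatically with the Docker SDK for Python; the image tag, container name and port mapping are assumptions, not details taken from the thesis.

```python
# Sketch: start a WebSphere Liberty container for a training lab.
# Assumes Docker and the `docker` Python SDK (pip install docker);
# the image tag and port mapping are illustrative only.
import docker

client = docker.from_env()

# Pull and run the public Liberty image, exposing its default HTTP port.
container = client.containers.run(
    "websphere-liberty:latest",   # assumed image name
    name="training-liberty",
    ports={"9080/tcp": 9080},
    detach=True,
)

print(f"Container {container.short_id} is running; open http://localhost:9080")
```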
153. Virtualizace operačních systémů / Virtualization of operating systems
Král, Jan, January 2016
The diploma thesis "Virtualization of Operating Systems" gives a general description of virtualization technology and briefly discusses its use cases and advantages. It also mentions examples of different types of virtualization technologies and tools, including a more thorough description of the two technologies used for measurement: Docker and KVM. The second part of the thesis describes the preparation, installation and configuration of all the tools and services necessary for measuring the influence of the aforementioned virtualization technologies on network services running on the virtual machines, including analysis and discussion of the resulting data. Moreover, a custom application for fully automated measurement of the parameters of network services was created and is also described in the thesis. The conclusion summarizes and discusses the achieved results and confirms the influence of virtualization on network services: the Docker application containers, whose low overhead makes them comparable to a "bare" system without any virtualization, achieved much better performance results than the traditional virtual machines on KVM.
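As a hedged sketch of the kind of measurement involved (not the custom application described in the thesis), the following times repeated HTTP requests against a service running in a container or a KVM guest; the address and request count are assumptions.

```python
# Sketch: measure mean and p95 response time of an HTTP service running
# inside a container or a KVM guest. URL and request count are assumptions.
import statistics
import time
import urllib.request

URL = "http://192.168.122.10:8080/"   # hypothetical service address
REQUESTS = 100

samples = []
for _ in range(REQUESTS):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()
    samples.append(time.perf_counter() - start)

print(f"mean latency: {statistics.mean(samples) * 1000:.2f} ms")
print(f"p95 latency:  {sorted(samples)[int(0.95 * REQUESTS)] * 1000:.2f} ms")
```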
154. Testování aplikací s využitím Linuxových kontejnerů / Testing Applications Using Linux Containers
Marhefka, Matúš, January 2016
This thesis discusses software containers (Docker containers in particular) as a variant of server virtualization. Instead of virtualizing hardware, software containers run on top of a single operating system instance and are far more efficient than hypervisors in terms of system resources. Docker containers make it easy to package and ship applications and guarantee that applications always run the same way, regardless of the environment they run in. Containers have a whole range of use cases; this work examines their usage in the field of software testing. The thesis proposes three main use-case categories for running software systems in Docker containers. It introduces aspects of applications running in containers that give a better overview of how an application is set up within a container infrastructure. Subsequently, possible issues with testing software systems running inside Docker containers are discussed, and testing methods addressing these issues are proposed. One of the proposed testing methods was also used in the implementation of the framework for testing software running in Docker containers that was developed within this work.
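One common way a test can manage the system under test in a container is sketched below with pytest and the Docker SDK for Python; the image, port and wait strategy are placeholders, and this is not the framework implemented in the thesis.

```python
# Sketch: a pytest fixture that runs the system under test in a Docker
# container and tears it down afterwards. Image name and port are
# illustrative; a real setup would poll readiness instead of sleeping.
import time
import urllib.request

import docker
import pytest


@pytest.fixture
def web_app():
    client = docker.from_env()
    container = client.containers.run(
        "nginx:alpine",              # placeholder for the application image
        ports={"80/tcp": 8080},
        detach=True,
    )
    time.sleep(2)                    # naive wait for the service to come up
    yield "http://localhost:8080"
    container.stop()
    container.remove()


def test_app_responds(web_app):
    with urllib.request.urlopen(web_app) as response:
        assert response.status == 200
```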
155. Machine Learning Approach to Forecasting Empty Container Volumes
Liu, Yuan, January 2019
Background: With the development of global trade, the volume of goods transported around the world is growing. Over 90% of world trade is carried by the shipping industry, and container shipping is the most important mode. With growing trade imbalances, the repositioning of empty containers has become an important issue for shipping; accurately predicting empty-container volumes would greatly assist repositioning planning. Objectives: The main aim of this study is to explore how well machine learning predicts empty-container volumes and to compare its performance with existing empirical methods and mathematical-statistical methods. Methods: The main method of this study is experimentation. Suitable algorithm models were chosen, trained and tested on the same data sources as the industrial approach, and evaluated with the same metrics, so that the performance of machine learning methods and industrial methods could be compared. Results: The experiments yielded the forecasting performance of five machine learning algorithms, including LASSO regression, on the Port of Los Angeles and Port of Long Beach datasets. The metrics are mean squared error (MSE) and mean absolute error (MAE). Conclusions: LASSO regression and Ridge regression are the best machine learning algorithms for predicting empty-container volumes. Compared with empirical methods, a single machine learning algorithm performs better and has better accuracy. However, compared with mature statistical methods such as time series analysis, a single machine learning algorithm performs worse. Combining multiple models or selecting more highly correlated features would be needed to improve machine learning performance on this prediction problem.
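A minimal sketch of the LASSO/Ridge comparison with MSE and MAE, using scikit-learn on a synthetic lagged monthly series; the actual port statistics used in the thesis are not reproduced here, and the lag structure and hyperparameters are assumptions.

```python
# Sketch: fit LASSO and Ridge regressors on lagged monthly container
# volumes and compare MSE/MAE. The data is synthetic; the thesis used
# Port of Los Angeles and Port of Long Beach statistics.
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
volumes = 300_000 + 50_000 * np.sin(np.arange(120) / 6) + rng.normal(0, 10_000, 120)

# Simple lag matrix: predict month t from the previous 12 months.
X = np.array([volumes[i:i + 12] for i in range(len(volumes) - 12)])
y = volumes[12:]
X_train, X_test, y_train, y_test = X[:-24], X[-24:], y[:-24], y[-24:]

for model in (Lasso(alpha=1.0), Ridge(alpha=1.0)):
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(type(model).__name__,
          f"MSE={mean_squared_error(y_test, pred):.0f}",
          f"MAE={mean_absolute_error(y_test, pred):.0f}")
```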
156. Efficient parallel installation of software collections on a PaaS
Boraie, Alexander, January 2021
This master's thesis analyses and investigates how to speed up the deployment of a suite of services to a Platform as a Service. The project uses IBM's Cloud Pak for Applications together with Red Hat's OpenShift to provide insights into the factors that influence the deployment process. In this thesis, the installer was modified so that the deployment instructions were sent in parallel instead of sequentially. Besides the parallel modification, the thesis also investigates different options for applying constraints to the CPU and what the consequences are. At the end of the report, the reader will also see how deployment times are affected by cluster scaling. An implementation of the parallel deployment showed that the installation time of Cloud Pak for Applications could be decreased. It was also shown that the CPU was not fully utilized and that there is significant CPU saturation during deployment. The evaluation of the scaling analysis showed that, within the scope of this thesis, it is more beneficial both time-wise and cost-wise to scale horizontally rather than vertically.
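The parallel idea can be sketched generically as follows: a set of manifests applied concurrently with the standard OpenShift CLI (`oc apply -f`) via a thread pool. The manifest file names are hypothetical, and this is not the modified Cloud Pak for Applications installer itself.

```python
# Sketch: apply deployment manifests in parallel instead of sequentially
# using the standard `oc` CLI. Manifest names are placeholders.
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

MANIFESTS = ["servicea.yaml", "serviceb.yaml", "servicec.yaml"]  # placeholders


def apply(manifest: str) -> str:
    subprocess.run(["oc", "apply", "-f", manifest], check=True)
    return manifest


start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(MANIFESTS)) as pool:
    for done in pool.map(apply, MANIFESTS):
        print(f"applied {done}")
print(f"parallel deployment took {time.perf_counter() - start:.1f} s")
```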
157. Container performance benchmark between Docker, LXD, Podman & Buildah
Emilsson, Rasmus, January 2020
Virtualization is a much-used technology by small and big companies alike, as running several applications on the same server is a flexible and resource-saving measure. Containers, which are another way of virtualizing, have become a popular choice for companies in the past years, offering even more flexibility and use cases in continuous integration and continuous development. This study aims to explore how the different leading container solutions perform in relation to one another in a test scenario that replicates a continuous integration use case: compiling a big project, in this case Firefox, from source. The tested containers are Docker, LXD, Podman and Buildah, whose CPU and RAM usage are measured along with the time to complete the compilation. The containers perform almost on par with bare metal, except Podman/Buildah, which perform worse during compilation, falling a few minutes behind.
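A hedged sketch of this kind of measurement for the Docker case: time a build command in a running container while sampling its CPU and memory with `docker stats`. The container name, build command and sampling interval are placeholders, not the exact benchmark setup of the study.

```python
# Sketch: time a build inside a running Docker container and sample its
# CPU/memory usage with `docker stats`. Names and commands are placeholders.
import subprocess
import time

CONTAINER = "build-env"                                    # assumed container
BUILD_CMD = ["docker", "exec", CONTAINER, "make", "-j8"]   # placeholder build

start = time.perf_counter()
build = subprocess.Popen(BUILD_CMD)

while build.poll() is None:
    stats = subprocess.run(
        ["docker", "stats", "--no-stream", "--format",
         "{{.CPUPerc}} {{.MemUsage}}", CONTAINER],
        capture_output=True, text=True,
    )
    print(stats.stdout.strip())
    time.sleep(30)                                         # sample every 30 s

print(f"compilation finished in {time.perf_counter() - start:.0f} s")
```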
158. Framework to set up a generic environment for applications / Ramverk för uppsättning av generisk miljö för applikationer
Das, Ruben, January 2021
Infrastructure is a common word for the basic equipment and structures that are needed, e.g. for a country or organisation to function properly. The same concept applies in the field of computer science: without infrastructure, one would have problems operating software at scale. Provisioning and maintaining infrastructure through manual labour is a common occurrence in the "iron age" of IT. As the world progresses towards the "cloud age" of IT, systems are decoupled from physical hardware, enabling anyone who is software-savvy to automate the provisioning and maintenance of infrastructure. This study aims to determine how a generic environment can be created for applications that run on Unix platforms, and how the underlying infrastructure can be provisioned effectively. The results show that by utilising OS-level virtualisation, also known as "containers", one can deploy and serve any application that can use the Linux kernel. To realise the generic environment, hardware virtualisation was applied to provide the infrastructure needed to run containers: a set of virtual machines was provisioned on different cloud providers with a lightweight operating system that could support the required container runtime. To manage these containers at scale, a container orchestration tool was installed onto the cluster of virtual machines. To provision this environment effectively, the principles of infrastructure as code (IaC) were used to create a "blueprint" of the desired infrastructure. Using the metric mean time to environment (MTTE), it was noted that a cluster of virtual machines with a container orchestration tool installed could be provisioned in under 10 minutes on four different cloud providers.
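The MTTE metric can be illustrated by timing repeated provisioning runs of an IaC blueprint. The sketch below assumes Terraform as the IaC tool purely for illustration; the thesis only states that IaC principles were used, and the run count is arbitrary.

```python
# Sketch: measure "mean time to environment" (MTTE) by timing repeated
# provisioning runs. Terraform is an assumed IaC tool, not taken from
# the thesis; each run tears the environment down again before repeating.
import statistics
import subprocess
import time

RUNS = 3
durations = []

for _ in range(RUNS):
    start = time.perf_counter()
    subprocess.run(["terraform", "apply", "-auto-approve"], check=True)
    durations.append(time.perf_counter() - start)
    subprocess.run(["terraform", "destroy", "-auto-approve"], check=True)

print(f"MTTE over {RUNS} runs: {statistics.mean(durations) / 60:.1f} minutes")
```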
159. Replacing Virtual Machines and Hypervisors with Container Solutions
Alndawi, Tara, January 2021
We live in a world that is constantly evolving, where new technologies and innovations are being introduced. This progress results partly in new technologies and partly in improvements of current ones. Docker containers are one of these new technologies: a virtualization method that has become a hot topic around the world, as it is said to be a better alternative to today's virtual machines. One aspect that has contributed to this claim is that, unlike virtual machines, containers isolate individual processes rather than an entire operating system. The company Saab AB wants to be at the forefront of today's technology and is interested in investigating the possibilities of container technology. The purpose of this thesis work is partly to investigate whether the container solution is in fact an alternative to traditional VMs and what the differences between these methods are. This is done with the help of an in-depth literature study of comparative studies between containers and VMs. The results of the comparative studies showed that containers are in fact a better alternative than VMs in certain respects, such as performance and scalability, and are worthwhile for the company. Thus, in the second part of this thesis work, a proof-of-concept implementation was made by recreating a part of the company's subsystem TactiCall in containers, to ensure that this transition is possible for the concrete use case and that the container solution works as intended. This work has succeeded in highlighting the benefits of containers and in showing, through a proof of concept, that there is an opportunity for the company to transition from VMs to containers.
160. A performance study for autoscaling big data analytics containerized applications: Scalability of Apache Spark on Kubernetes
Vennu, Vinay Kumar; Yepuru, Sai Ram, January 2022
Container technologies are rapidly changing how distributed applications are executed and managed on cloud computing resources. As containers can be deployed at large scale, there is a tremendous need for container orchestration tools like Kubernetes that automate deployment, scaling and management. In recent times, the adoption of container technologies like Docker has risen in internal usage, commercial offerings, and various application fields ranging from high-performance computing to geo-distributed (edge or IoT) applications. Big data analytics is another field with a trend towards running applications (e.g., Apache Spark) as containers for elastic workloads and multi-tenant service models, leveraging container orchestration tools like Kubernetes. Despite the abundant research on the performance impact of containerizing big data applications, to the best of our knowledge, studies that focus on specific aspects like scalability and resource management are largely missing, which leaves a research gap. This research studies the performance impact of autoscaling a big data analytics application on Kubernetes with the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA). These state-of-the-art autoscaling mechanisms for scaling containerized applications on Kubernetes, and the available big data benchmarking tools for generating workloads on frameworks like Spark, are identified through a literature review. Apache Spark is selected as a representative big data application due to its ecosystem and industry-wide adoption by enterprises. In particular, a series of experiments is conducted by adjusting resource parameters (such as CPU requests and limits) and autoscaling mechanisms while measuring run-time metrics like execution time and CPU utilization. Our experiment results show that while Spark achieves better execution time when configured to scale with VPA, it also exhibits overhead in CPU utilization. In contrast, autoscaling big data applications with HPA adds overhead in terms of both execution time and CPU utilization. The research from this thesis can be used by researchers and other cloud practitioners running big data applications to evaluate autoscaling mechanisms and obtain better performance and resource utilization.
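For orientation, a minimal sketch of creating a CPU-based HPA with the official Kubernetes Python client; the deployment name, namespace and thresholds are illustrative, and the thesis' exact Spark and VPA configuration is not reproduced here.

```python
# Sketch: create a CPU-based Horizontal Pod Autoscaler for a deployment
# using the official Kubernetes Python client (autoscaling/v1).
# Deployment name, namespace and thresholds are assumptions.
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="spark-executor-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1",
            kind="Deployment",
            name="spark-executor",          # assumed deployment name
        ),
        min_replicas=1,
        max_replicas=5,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="spark", body=hpa
)
```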