131 |
Diseño de una infraestructura virtual sobre tecnología hiperconvergente en la gerencia de experiencia al cliente / Design of a Virtual Infrastructure on Hyperconverged Technology for Customer Experience Management. Cervantes Villavicencio, André 27 February 2020 (has links)
The Customer Experience Management unit faces high demand for technology-based services, so the speed with which the organization serves its customers must adapt to current requirements. This business perspective motivated the present thesis project: the design of a virtual infrastructure on hyperconverged technology able to support the workload of the applications and servers that the new design will host within the data center. The new infrastructure is designed following the standards, recommendations, and best practices that each of the manufacturers mentioned publishes in its data sheets and research.
First, we carry out an analysis to define the problem this project solves, examine that problem in detail, describe the specific objectives, and explain the indicators or metrics for each objective.
Second, the conceptual framework of the project is developed to give theoretical and technical support to every element of the proposed design.
Third, an in-depth analysis of the identified problem is presented, with concrete, quantifiable data, together with the impact and causes of the problem in the organization.
Fourth, the technical specifications of the proposed solution are given for the new virtual infrastructure on hyperconverged technology, including the virtualization platform, an analysis of the solution's storage, and the processing capacity required. Finally, the results and validations are presented, which demonstrate fulfillment of the objectives according to the metrics set out in the indicators of Chapter 1. / Thesis
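The sizing exercise the abstract describes (storage analysis plus required processing capacity) can be illustrated with a back-of-the-envelope calculation. All workload and node figures below are hypothetical placeholders, not values from the thesis:

```python
import math

def required_nodes(total_vcpus, total_ram_gib, total_storage_tib,
                   node_vcpus, node_ram_gib, node_storage_tib,
                   replication_factor=2, spare_nodes=1):
    """Node count needed to host a workload on a hyperconverged cluster,
    accounting for storage replication and N+1 failover capacity."""
    cpu_nodes = math.ceil(total_vcpus / node_vcpus)
    ram_nodes = math.ceil(total_ram_gib / node_ram_gib)
    # Replicated storage consumes raw capacity on multiple nodes.
    storage_nodes = math.ceil(total_storage_tib * replication_factor
                              / node_storage_tib)
    return max(cpu_nodes, ram_nodes, storage_nodes) + spare_nodes

# e.g. 200 vCPUs, 1 TiB RAM, 20 TiB usable storage on 64-vCPU nodes
nodes = required_nodes(200, 1024, 20, 64, 512, 10)
```

The dominant dimension (here CPU) sets the cluster size; the spare node covers maintenance or failure, which is the usual vendor recommendation.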
|
132 |
Emulating Trust zone feature in Android emulator by extending QEMU. Muthu, Arun January 2013 (has links)
The arrival of smartphones has created a new era of communication between users and the internet. Smartphone users can run their own applications alongside enterprise applications. Personal applications are mostly downloaded from public markets, which challenges the security framework with the threat of losing sensitive user data. ARM therefore introduced a hardware-level virtualization technique that keeps sensitive application processes completely isolated from the normal world. However, the ARM architecture and its internal workings remain a black box for users as well as developers. In this thesis, using a qualitative approach (examining prior open-source research on ARM TrustZone, white papers, and internal knowledge from the Sony security team), we take a deep look at the hardware-level architecture of ARM TrustZone to analyze and evaluate its implementation. We describe the design and implementation of TrustZone features in the Android emulator, discuss their advantages and disadvantages in the analysis and results phase, and conclude with notes on a suitable design for future use that would enhance secure processing in the Android emulator to benefit the user and developer communities. The contribution of this thesis can be summarized as follows: 1) reviewing current practices and theories on the implementation of ARM TrustZone; 2) creating a common methodology for handling the research problem; 3) proposing step-by-step approaches that compare the actual hardware-level working of TrustZone with the design and idea of the emulated one; 4) analyzing and designing an appropriate model to solve the research question.
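As a rough mental model of the isolation being emulated, the sketch below separates a "secure world" holding a secret from a "normal world" that can reach it only through a monitor call. This is an illustrative simplification, not ARM's actual architecture or the thesis's QEMU implementation:

```python
class SecureWorld:
    """Holds a secret and exposes only whitelisted services,
    analogous to the secure world behind an SMC instruction."""
    def __init__(self):
        self._key = 0x5A  # secret that never leaves the secure world
    def handle_smc(self, service, payload):
        if service == "mac":
            # Toy keyed checksum; the key itself is never returned.
            return sum(payload) ^ self._key
        raise PermissionError("unknown secure service")

class NormalWorld:
    """Can only invoke the secure world through the monitor interface."""
    def __init__(self, monitor):
        self._monitor = monitor
    def request(self, service, payload):
        return self._monitor.handle_smc(service, payload)
```

The point of the model is that the normal world obtains results computed with the secret without ever reading the secret itself.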
|
133 |
Containers & Virtual machines : A performance, resource & power consumption comparison. Lindström, Martin January 2022 (has links)
Due to the growth of cloud computing in recent years, the use of virtualization has exploded. Virtual machines (VMs) and containers are both virtualization technologies used to create isolated computing environments. While VMs are created and managed by hypervisors and need their own full guest operating system, containers share the kernel of the host computer and do not. Containers are therefore reputed to carry less overhead, yielding higher performance and lower resource usage than VMs. In this paper we perform a literature study along with an empirical study to examine the differences between containers and virtual machines in CPU, memory, and disk performance; CPU and memory resource utilization; and power consumption. To answer the performance question, a series of benchmarks was run inside both a container and a VM. During these benchmarks the resource utilization of the host machine was also measured to answer the second question, and to answer the third and final question the power draw was measured while some of the benchmarks were running. The results showed that CPU performance was extremely similar between the two, and memory performance was similar for the most part, though some benchmarks showed fairly large differences in favor of either technology. In disk performance the container was 15-50% faster depending on the benchmark. As for resource usage, CPU usage was the same for both technologies, but memory usage differed greatly in favor of the container: the VM used 3-4 GiB and the container 70 MiB to 2.5 GiB depending on the benchmark. The power draw was the same for both technologies under CPU and memory load, but when idle the VM proved to draw around 40% more power.
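Relative differences of the kind reported above can be reproduced from raw benchmark scores with a one-line calculation. The scores below are hypothetical, higher-is-better placeholders, not measurements from the study:

```python
def pct_faster(container_score, vm_score):
    """Percent by which the container outperforms the VM.
    Positive means the container is faster; scores are higher-is-better."""
    return (container_score - vm_score) / vm_score * 100

# Hypothetical disk-benchmark scores (e.g. MB/s):
diff = pct_faster(container_score=115.0, vm_score=100.0)  # 15% faster
```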
|
134 |
Differences in performance between containerization & virtualization : With a focus on HTTP requests. Berggren, Johannes, Karlsson, Jens January 2022 (has links)
Containerization and virtualization are two of the keystones of cloud computing. Neither technology is a new invention, but neither became widely used until new implementations revived it: virtualization regained popularity with the founding of VMware, and containerization has become vastly popular in the last decade with Docker. A service from a Cloud Service Provider today will more than likely be utilizing one of these technologies. This study compares the performance of the two technologies when used to host an API and examines how they utilize their provided hardware resources to handle HTTP requests. A series of load tests was conducted on an API developed and hosted on both technologies to measure the hardware performance, response time, and throughput of each. Hyper-V was used for virtualization and Docker for containerization. Data was collected on resource utilization, response time, and throughput, and compared against related research to validate it. The results of the experiment showed that, in our implementation, virtualization was superior to containerization in every measured aspect. We conclude that the containerization setup we chose has a bottleneck that impedes the container's network performance, so the container cannot process as many HTTP requests as the virtualized environment. The number of processed HTTP requests in relation to CPU usage, however, is better for the container than for the virtualized environment, which leads us to believe the container could be superior were it not for the network performance.
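A minimal load test in the spirit of the experiment can be sketched with the standard library alone. The in-process echo server below stands in for the container- or VM-hosted API, which is an assumption of this sketch; a real run would point the URL at the deployed endpoint:

```python
import threading, time, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Echo(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # silence per-request logging

def load_test(url, n_requests):
    """Fire sequential requests; report mean latency and throughput."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {"mean_latency_s": sum(latencies) / len(latencies),
            "throughput_rps": n_requests / elapsed}

server = HTTPServer(("127.0.0.1", 0), Echo)  # port 0 = auto-assign
threading.Thread(target=server.serve_forever, daemon=True).start()
stats = load_test(f"http://127.0.0.1:{server.server_port}/", 50)
server.shutdown()
```

Running the same driver against both deployments, and sampling host CPU/memory meanwhile, yields the three metrics the study compares.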
|
135 |
Designing Cybersecurity Competitions in the Cloud: A Framework and Feasibility Study. Newby, Chandler Ryan 10 December 2018 (has links)
Cybersecurity is an ever-expanding field. In order to stay current, training, development, and constant learning are necessary. One of these training methods has historically been competitions. Cybersecurity competitions provide a method for competitors to experience firsthand cybersecurity concepts and situations. These experiences can help build interest in, and improve skills in, cybersecurity. While there are diverse types of cybersecurity competitions, most are run with on-premise hardware, often centralized at a specific location, and are usually limited in scope by available hardware. This research focuses on the possibility of running cybersecurity competitions, specifically CCDC style competitions, in a public cloud environment. A framework for running cybersecurity competitions in general was developed and is presented in this research. The framework exists to assist those who are considering moving their competition to the cloud. After the framework was completed, a CCDC style competition was developed and run entirely in a public cloud environment. This allowed for a test of the framework, as well as a comparison against traditional, on-premise hosting of a CCDC. The cloud-based CCDC created was significantly less expensive than running a comparable size competition in on-premise hardware. Performance problems—typically endemic in traditionally-hosted CCDCs—were virtually non-existent. Other benefits, as well as potential contraindications, are also discussed. Another CCDC style competition, this one originally built for on-premise hardware, was then ported to the same public cloud provider. This porting process helped to further evaluate and enrich the framework. The porting process was successful, and data was added to the framework.
|
136 |
Towards a Traffic-aware Cloud-native Cellular Core. Amit Kumar Sheoran (11184387) 26 July 2021 (has links)
Advances in virtualization technologies have revolutionized the design of the core of cellular networks. However, the adoption of microservice design patterns and the migration of services from purpose-built hardware to virtualized hardware have adversely affected the delivery of latency-sensitive services.
In this dissertation, we make a case for cloud-native (microservice container packaged) network functions in the cellular core by proposing domain-knowledge-driven, traffic-aware orchestration frameworks that make network placement decisions. We begin by evaluating the suitability of virtualization technologies for the cellular core and demonstrating that container-driven deployments can significantly outperform other virtualization technologies, such as virtual machines, for control- and data-plane applications.
To support the deployment of latency-sensitive applications on virtualized hardware, we propose using Virtual Network Function (VNF) bundles (aggregates) to handle transactions. Specifically, we design Invenio to leverage a combination of network traces and domain knowledge to identify the VNFs involved in processing a specific transaction, which are then collocated by a traffic-aware orchestrator. By ensuring that a user request is processed by a single aggregate of collocated VNFs, Invenio can significantly reduce end-to-end latencies and improve user experience.
Finally, to understand the challenges of using container-driven deployments in real-world applications, we develop and evaluate a novel caller-ID spoofing detection solution for Voice over LTE (VoLTE) calls. Our proposed solution, NASCENT, cross-validates the caller-ID used during voice-call signaling with a previously authenticated caller-ID to detect caller-ID spoofing. Our evaluation with traditional and container-driven deployments shows that container-driven deployment can not only support complex cellular services but also outperform traditional deployments.
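The collocation idea can be sketched as a greedy first-fit placement that keeps each transaction's VNF aggregate on one node. The VNF names and the greedy strategy below are illustrative assumptions, not Invenio's actual algorithm:

```python
def place_aggregates(transactions, node_capacity):
    """transactions: {txn_name: [vnf, ...]} derived from traces;
    each VNF costs one capacity unit. All VNFs of a transaction are
    kept on a single node so the request never crosses node boundaries."""
    nodes = []       # list of (used_capacity, hosted_vnf_set)
    placement = {}   # txn_name -> node index
    for txn, vnfs in transactions.items():
        for i, (used, hosted) in enumerate(nodes):
            if used + len(vnfs) <= node_capacity:
                nodes[i] = (used + len(vnfs), hosted | set(vnfs))
                placement[txn] = i
                break
        else:
            # No existing node fits the whole aggregate: open a new one.
            nodes.append((len(vnfs), set(vnfs)))
            placement[txn] = len(nodes) - 1
    return placement, nodes

placement, nodes = place_aggregates(
    {"attach": ["mme", "hss"], "call": ["pcscf", "scscf", "tas"]},
    node_capacity=4)
```

Because the "call" aggregate does not fit beside "attach", it gets its own node; every transaction is still served by collocated VNFs.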
|
137 |
Kubernetes for Game Development : Evaluation of the Container-Orchestration Software. Lundgren, Jonas January 2021 (has links)
Kubernetes is software for managing clusters of containerized applications and has recently risen in popularity in the tech industry. This popularity, however, seems not to have spread to the game development industry, prompting the author to investigate whether the reason is a technical limitation. The investigation is done by creating a proof-of-concept of a simple system for running a game server in Kubernetes, consisting of the Kubernetes cluster itself, the game server to be run in the cluster, and a matchmaker server that manages client requests and creates game server instances. Thanks to the successful proof-of-concept, it can be concluded that there is no inherent technical limitation behind Kubernetes' infrequent use in game development; the causes are most likely habitual, combined with how new Kubernetes is.
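The matchmaker's role can be sketched as queuing client requests and spawning one game-server instance per full match. The `spawn` callable below stands in for the Kubernetes API call that would create the server Pod; that stub, and the two-player match size, are assumptions of this sketch:

```python
class Matchmaker:
    """Queues joining players; once a match is full, asks the
    orchestrator (stubbed here) for a fresh game-server instance."""
    def __init__(self, players_per_match, spawn):
        self.queue = []
        self.players_per_match = players_per_match
        self.spawn = spawn  # would create a Pod via the Kubernetes API

    def join(self, player_id):
        self.queue.append(player_id)
        if len(self.queue) >= self.players_per_match:
            match = self.queue[:self.players_per_match]
            self.queue = self.queue[self.players_per_match:]
            return self.spawn(match)  # address of the new game server
        return None  # still waiting for more players

spawned = []
def fake_spawn(match):
    spawned.append(match)
    return f"server-{len(spawned)}"

mm = Matchmaker(players_per_match=2, spawn=fake_spawn)
first = mm.join("alice")    # waits
second = mm.join("bob")     # match full: server created
```

In the real proof-of-concept the spawn step would translate to creating a Pod (or a custom resource) and returning its service address to the clients.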
|
138 |
Security implications for docker container environments deploying images from public repositories : A systematic literature review. Tyresson, Dennis January 2020 (has links)
Because of their ease of use and effectiveness, Docker containers have become immensely popular among system administrators worldwide. Docker elegantly packages an entire application within a single software entity called an image, allowing fast and consistent deployment across different host systems. It is not without drawbacks, however, as the close interaction with the operating system kernel gives rise to security concerns. The conducted systematic literature review addresses concerns regarding the use of images from unknown sources. Multiple search terms were applied to a set of four scientific databases to find peer-reviewed articles that fulfill certain selection criteria. A final set of 13 articles was selected and evaluated using thematic coding. The analysis showed that users need to be wary of which images are used to deploy containers, as they might contain malicious code or other weaknesses. Automatic vulnerability detection, using static and dynamic detection, could help protect the user from bad images.
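A user wary of public images might start with simple static checks of the kind that automated scanners perform. The rules below are illustrative assumptions, not drawn from the reviewed articles, and real static analyzers go much further:

```python
def lint_dockerfile(lines):
    """Flag a few common risk patterns in Dockerfile lines.
    Returns (line_number, warning) pairs."""
    warnings = []
    for n, line in enumerate(lines, 1):
        s = line.strip()
        if s.startswith("FROM") and (":" not in s or s.endswith(":latest")):
            # Unpinned tags mean the base image can change underneath you.
            warnings.append((n, "unpinned base image tag"))
        if s.startswith("USER root"):
            warnings.append((n, "container runs as root"))
        if s.startswith("ADD http"):
            warnings.append((n, "downloads remote content at build time"))
    return warnings

report = lint_dockerfile([
    "FROM ubuntu:latest",
    "RUN apt-get update",
    "USER root",
])
```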
|
139 |
Virtualization performance in private cloud computing. Thovheyi, Khathutshelo Nicholas 04 October 2019 (has links)
M. Tech. (Department of Information Communication Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology. / Virtualization is the main technology that powers today's cloud computing systems. Virtualization provides isolation as well as resource control, enabling multiple workloads to run efficiently on a single shared machine; servers that traditionally required multiple physical machines can thus be consolidated onto a single, cost-effective physical machine using virtual machines or containers. When performance-improving techniques such as hardware acceleration are not used, and concurrent virtual machines run without properly configured resource controls, cloud computing suffers problems of scalability and service provisioning (degraded response time, resource contention, and reduced functionality or usability) that stem from the configuration of the virtualized system. Virtualization performance is a critical factor in data-center and cloud-computing service delivery. This study evaluates virtualization performance, determines which virtual machine configuration provides effective performance, and examines how to allocate and distribute resources equally among virtual machines. Data-center purposed servers together with Type 1 (bare-metal) hypervisors, VMware ESXi 5.5 and Proxmox 5.3, were used to evaluate virtualization performance. The experiments were conducted on a Cisco UCS B200 M4 server as the host machine, with the virtual environment encapsulated within the physical layer hosting the guest virtual machines, consisting of virtual hardware, guest OSs, and third-party applications. Each guest ran one operating system, CentOS 7 64-bit, and for evaluation purposes each guest was configured with the same amount of virtual system resources.
Various workload/benchmarking tools were used for network, CPU, memory, and disk performance: Iperf, UnixBench, RAMspeed, and IOzone, respectively. In the IOzone tests, VMware was more than twice as fast as Proxmox; although CPU utilization in Proxmox was not noticeably affected, considerably lower CPU utilization was observed in VMware. In the RAMspeed memory tests, VMware performed 16 to 26% better than Proxmox, and in the case of writing, 31 to 51% better. In the network tests, Proxmox performed very close to the level of a bare-metal setup. The results of the performance tests show that the additional operations required by virtualization can be measured with test programs, and that the number and type of these additional operations determine the overhead. In the memory and disk areas, where the virtualization procedure is straightforward, the results show that the overhead is small. Processor and network virtualization, on the other hand, are more complex, so the overhead is more significant. When the overall performance of a virtual machine running on VMware ESXi Server is compared with a conventional system, virtualization causes a performance overhead of approximately 33%. Because of the difficulty of providing optimal real-system configurations, workloads/benchmarks can approximate real application systems for better results. The tests demonstrate that virtualization depends immensely on the host system and the virtualization software. Given the tests, both VMware ESXi Server and Proxmox are capable of providing optimal performance.
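Expressed as a formula, a figure like the roughly 33% above corresponds to the share of bare-metal performance lost to virtualization. The scores below are hypothetical, higher-is-better placeholders, not the thesis measurements:

```python
def overhead_pct(bare_metal_score, virtualized_score):
    """Percent of bare-metal performance lost to virtualization
    (higher-is-better scores)."""
    return (bare_metal_score - virtualized_score) / bare_metal_score * 100

# A VM scoring 100 against a bare-metal 150 loses one third:
loss = overhead_pct(bare_metal_score=150.0, virtualized_score=100.0)
```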
|
140 |
Finding the Sweet Spot: Optimizing Kubernetes for Scalability and Resilience : A Comprehensive Study on Improving Resource Utilization and Performance in Containerized Environments. Rör, Adam January 2023 (has links)
Modern technology is expanding rapidly and efficiently; looking at the largest companies by market cap, one finds enterprises like Apple, Microsoft, Alphabet, and Meta. Given the complexity of modern software, there is a need for a software architecture that is both scalable and adaptable, a demand that has made microservices a preferred approach for building complex, distributed applications. Managing microservices effectively is a difficult task, however, which is why Google created an orchestration tool called Kubernetes (K8). The primary purpose of this thesis is to extend the knowledge of the characteristics of a K8 cluster by monitoring its performance in various scenarios. There is substantial documentation on how K8 works and why it is used, but insufficient information on how K8 performs in different scenarios. Extensive testing was therefore carried out: parameters such as the number of Pods, containers, mounts, and CPU cores were tested thoroughly, as were container load, CPU limitation, container distribution, and memory allocation. The core results comprise startup time and CPU utilization. Startup time is essential in a K8 cluster because of its ephemeral character: each Pod is short-lived and restarts frequently. CPU utilization testing is essential to analyze how K8 allocates resources and performs with different amounts of resources. The results show that the most significant parameters for startup time are, as one might expect, the number of containers, CPUs, and Pods, and the load in each Pod; the complexity of a Pod, for instance its number of mount points, has significantly less effect on the cluster than expected.
Regarding CPU utilization, the results show that K8 lowers CPU usage when possible, resulting in equal CPU usage even with different numbers of CPUs; the most significant parameter for CPU usage is the load of the application. This thesis fills some gaps in how a K8 cluster behaves under various circumstances, for instance with varying numbers of Pods, containers, or CPUs. Several aspects must be considered while designing a K8 cluster, and not all of them have been considered here; as the use of K8 grows daily, this thesis will hopefully be one of many reports investigating how a K8 cluster behaves and what to consider when building one.
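Startup latency of the kind measured above can be derived from Pod creation and ready timestamps. The RFC 3339-style timestamps below mirror Kubernetes' output format but are made up for illustration:

```python
from datetime import datetime

def startup_seconds(created, ready, fmt="%Y-%m-%dT%H:%M:%SZ"):
    """Seconds between a Pod's creation and its Ready condition."""
    return (datetime.strptime(ready, fmt)
            - datetime.strptime(created, fmt)).total_seconds()

# Hypothetical (creationTimestamp, readyTimestamp) pairs for two Pods:
pods = [("2023-05-01T10:00:00Z", "2023-05-01T10:00:04Z"),
        ("2023-05-01T10:00:00Z", "2023-05-01T10:00:06Z")]
latencies = [startup_seconds(c, r) for c, r in pods]
mean_startup = sum(latencies) / len(latencies)
```

In a live cluster the timestamps would come from the Pod's metadata and status conditions; repeating the measurement across varying Pod counts, loads, and CPU limits reproduces the study's startup-time axis.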
|