91

Scalable, Pluggable, and Fault Tolerant Multi-Modal Situational Awareness Data Stream Management Systems

Partin, Michael 18 May 2020 (has links)
No description available.
92

Detection of Denial of Service Attacks on the Open Radio Access Network Intelligent Controller through the E2 Interface

Radhakrishnan, Vikas Krishnan 03 July 2023 (has links)
Open Radio Access Networks (Open RANs) enable flexible cellular network deployments by adopting open-source software and white-box hardware to build reference architectures customizable to innovative target use cases. The Open Radio Access Network (O-RAN) Alliance defines specifications introducing new Radio Access Network (RAN) Intelligent Controller (RIC) functions that leverage open interfaces between disaggregated RAN elements to provide precise RAN control and monitoring capabilities using applications called xApps and rApps. Multiple xApps targeting novel use cases have been developed by the O-RAN Software Community (OSC) and incubated on the Near-Real-Time RIC (Near-RT RIC) platform. However, the Near-RT RIC has, so far, been demonstrated to support only a single xApp capable of controlling the RAN elements. This work studies the scalability of the OSC Near-RT RIC to support simultaneous control signaling by multiple xApps targeting the RAN element. We particularly analyze its internal message routing mechanism and experimentally expose the design limitations of the OSC Near-RT RIC in supporting simultaneous xApp control. To this end, we extend an existing open-source RAN slicing xApp and prototype a slice-aware User Equipment (UE) admission control xApp implementing the RAN Control E2 Service Model (E2SM) to demonstrate a multi-xApp control signaling use case and assess the control routing capability of the Near-RT RIC through an end-to-end O-RAN experiment using the OSC Near-RT RIC platform and an open-source Software Defined Radio (SDR) stack. We also propose and implement a tag-based message routing strategy for disambiguating multiple xApps to enable simultaneous xApp control. Our experimental results prove that our routing strategy ensures 100% delivery of control messages between multiple xApps and E2 Nodes while guaranteeing control scalability and xApp non-repudiation. Using the improved Near-RT RIC platform, we assess the security posture and resiliency of the OSC Near-RT RIC in the event of volumetric application layer Denial of Service (DoS) attacks exploiting the E2 interface and the E2 Application Protocol (E2AP). We design a DoS attack agent capable of orchestrating a signaling storm attack and a high-intensity resource exhaustion DoS attack on the Near-RT RIC platform components. Additionally, we develop a latency monitoring xApp solution to detect application layer signaling storm attacks. The experimental results indicate that signaling storm attacks targeting the E2 Terminator on the Near-RT RIC cause control loop violations over the E2 interface affecting service delivery and optimization for benign E2 Nodes. We also observe that a high-intensity E2 Setup DoS attack results in unbridled memory resource consumption leading to service interruption and application crash. Our results also show that the E2 interface at the Near-RT RIC is vulnerable to volumetric application layer DoS attacks, and robust monitoring, load-balancing, and DoS mitigation strategies must be incorporated to guarantee resiliency and high reliability of the Near-RT RIC. / Master of Science / Telecommunication networks need sophisticated controllers to support novel use cases and applications. Cellular base stations can be managed and optimized for better user experience through an intelligent radio controller called the Near-Real-Time Radio Access Network (RAN) Intelligent Controller (RIC) (Near-RT RIC), defined by the Open Radio Access Network (O-RAN) Alliance. 
This controller supports simultaneous connections to multiple base stations through the E2 interface and allows simple radio applications called xApps to control the behavior of those base stations. In this research work, we study the performance and behavior of the Near-RT RIC when a malicious or compromised base station tries to overwhelm the controller through a Denial of Service (DoS) attack. We develop a solution to determine the application layer communication delay between the controller and the base station to detect potential attacks trying to compromise the functionality and availability of the controller. To implement this solution, we also upgrade the controller to support multiple radio applications to interact and control one or more base stations simultaneously. Through the developed solution, we prove that the O-RAN Software Community (OSC) Near-RT RIC is highly vulnerable to DoS attacks from malicious base stations targeting the controller over the E2 interface.
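The tag-based routing strategy described above lends itself to a compact illustration. The following is a minimal sketch; all names (RouteTable, ControlMessage, the tag field) are hypothetical, and the actual Near-RT RIC routes E2AP messages over its internal messaging layer rather than Python objects.

```python
# Hypothetical sketch of tag-based control-message routing, loosely modeled
# on the disambiguation strategy the thesis describes for the OSC Near-RT RIC.
# All names and message shapes are illustrative, not the real E2AP formats.
import uuid
from dataclasses import dataclass, field

@dataclass
class ControlMessage:
    e2_node_id: str
    payload: bytes
    tag: str = field(default_factory=lambda: uuid.uuid4().hex)

class RouteTable:
    """Maps in-flight message tags back to the xApp that issued them."""
    def __init__(self):
        self._routes: dict[str, str] = {}

    def register_request(self, msg: ControlMessage, xapp_id: str) -> None:
        # Record which xApp owns this tag before forwarding to the E2 Node.
        self._routes[msg.tag] = xapp_id

    def route_response(self, tag: str) -> str:
        # A response carrying an unknown tag is rejected rather than
        # delivered to the wrong xApp.
        if tag not in self._routes:
            raise KeyError(f"no xApp registered for tag {tag}")
        return self._routes.pop(tag)

# Two xApps issuing control requests toward the same E2 Node are
# disambiguated purely by tag, independent of message type or node ID.
table = RouteTable()
msg_a = ControlMessage("enb-001", b"slice-config")
msg_b = ControlMessage("enb-001", b"ue-admission")
table.register_request(msg_a, "slicing-xapp")
table.register_request(msg_b, "admission-xapp")
assert table.route_response(msg_a.tag) == "slicing-xapp"
assert table.route_response(msg_b.tag) == "admission-xapp"
```

Keying routes on a per-message tag rather than on message type or E2 Node ID is what allows several xApps to control the same node simultaneously, which is the limitation the thesis exposes in the original design.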
93

Cloud-native storage solutions for Kubernetes : A performance comparison

Andersson, Filip January 2023 (has links)
Kubernetes is a container orchestration system that has been rising in popularity in recent years. The modular nature of Kubernetes allows the use of different storage solutions, and for cloud environments, cloud-native distributed storage solutions may be attractive due to their redundant nature. There are many tools for cloud-native distributed storage available on the market today, with differing features and performance, and choosing the correct one for an organisation can be difficult. Organisations utilising Kubernetes in cloud environments want to be as performance-efficient as possible to save on costs and resources. This study aims to offer a benchmark and analysis of some of the most popular tools, to help organisations choose the ‘best’ solution for their operational needs from a performance perspective. The benchmarks compare three cloud-native distributed storage solutions, OpenEBS, Portworx, and Rook-Ceph, on both Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS). For a baseline comparison, the study also benchmarks the cloud providers' own solutions: Azure Disk Storage and Amazon Elastic Block Storage. The study compares these solutions on three key metrics: bandwidth, latency, and IOPS, in both read and write performance. / There is additional digital material (e.g. film, image, or audio files) or models/artifacts belonging to the thesis that must be sent to the archive.
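The abstract does not name a benchmarking tool; the sketch below assumes fio, a common disk benchmark, and shows how the three compared metrics could be collected per storage solution. Paths, volume layout, and the JSON field names are assumptions (fio's JSON layout varies by version), so this is illustrative rather than the study's actual harness.

```python
# A minimal sketch of the kind of benchmark run the study describes: drive
# fio against a volume mounted from each storage solution and extract the
# three metrics compared (bandwidth, latency, IOPS).
import json
import subprocess

def run_fio(mount_path: str, mode: str = "randread") -> dict:
    cmd = [
        "fio", "--name=bench", f"--filename={mount_path}/fio.testfile",
        "--size=1G", f"--rw={mode}", "--bs=4k", "--iodepth=16",
        "--runtime=60", "--time_based", "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True)
    job = json.loads(out.stdout)["jobs"][0]
    side = "read" if "read" in mode else "write"
    return {
        "bandwidth_kib_s": job[side]["bw"],              # KiB/s
        "iops": job[side]["iops"],
        "mean_lat_ms": job[side]["lat_ns"]["mean"] / 1e6,
    }

# One volume per solution (OpenEBS, Portworx, Rook-Ceph, and the managed
# baseline) would be mounted at these illustrative paths in turn.
for solution, path in {"openebs": "/mnt/openebs", "rook-ceph": "/mnt/ceph"}.items():
    print(solution, run_fio(path))
```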
94

Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing

López Huguet, Sergio 02 September 2021 (has links)
Thesis by compendium / [EN] Scientific applications generally imply a variable and unpredictable computational workload that institutions must address by dynamically adjusting the allocation of resources to their different computational needs. Scientific applications may require high capacity, e.g. the concurrent use of computational resources for processing several independent jobs (High Throughput Computing, or HTC), or high capability, by means of using high-performance resources to solve complex problems (High Performance Computing, or HPC). The computational resources required by this type of application usually carry a very high cost that may exceed the availability of the institution's resources, or those resources may not be well suited to the scientific applications, especially in the case of infrastructures prepared for the execution of HPC applications. Indeed, the different parts that compose an application may require different types of computational resources. Nowadays, cloud service platforms have become an efficient solution to meet the needs of HTC applications, as they provide a wide range of computing resources accessible on demand. For this reason, the number of hybrid computational infrastructures, which combine infrastructures hosted on cloud platforms with computational resources hosted by the institutions themselves (named on-premise infrastructures), has increased in recent years. As scientific applications can be processed on different infrastructures, application delivery has become a key issue. Containers are probably the most popular technology for application delivery today, as they ease reproducibility, traceability, versioning, isolation, and portability.
The main objective of this thesis is to provide an architecture and a set of services to build hybrid processing infrastructures that fit the needs of different workloads. The thesis therefore considers aspects such as elasticity and federation. Vertical and horizontal elasticity are addressed by developing a proof of concept that provides vertical elasticity on top of an elastic cloud architecture for data analytics. Afterwards, an elastic cloud architecture comprising heterogeneous computational resources was implemented for medical image processing, using multiple processing queues for jobs with different requirements. The development of this architecture was framed in a collaboration with a company called QUIBIM.
In the last part of the thesis, the previous work was evolved to design and implement an elastic, multi-site and multi-tenant cloud architecture for medical image processing within the framework of the European project PRIMAGE. This architecture uses distributed storage and integrates external services for authentication and authorization based on OpenID Connect (OIDC). The tool kube-authorizer was developed to provide access control to the resources of the processing infrastructure automatically, using the information obtained in the authentication process to create the corresponding policies and roles. Finally, another tool, hpc-connector, was developed to enable the integration of HPC processing infrastructures into cloud infrastructures without requiring modifications to either the HPC infrastructure or the cloud architecture. It should be noted that, during this thesis, contributions were made to open-source container and job-management technologies by developing open-source tools and components, along with recipes for the automated configuration of the designed architectures from a DevOps perspective. The results obtained support the feasibility of vertical elasticity combined with horizontal elasticity to implement QoS policies based on a deadline, as well as the feasibility of the federated authentication model to combine public and on-premise clouds. / López Huguet, S. (2021). Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/172327 / Compendio
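The vertical-elasticity idea at the core of the proof of concept can be sketched as a simple control loop; the thresholds, bounds, and function names below are illustrative assumptions, not the thesis's implementation.

```python
# A simplified sketch of a vertical-elasticity loop: observe a container's
# recent memory usage and resize its limit with a headroom factor, within
# fixed bounds. The resize action itself (patching the container runtime
# or job manager) is out of scope here.
def next_memory_limit_mib(recent_usage_mib: list[float],
                          current_limit_mib: float,
                          headroom: float = 1.2,
                          floor_mib: float = 256,
                          ceiling_mib: float = 16384) -> float:
    peak = max(recent_usage_mib)
    target = peak * headroom
    # Only act when the container is close to its limit (scale up) or
    # wasting a large share of it (scale down), to avoid churn.
    if target > current_limit_mib or target < 0.5 * current_limit_mib:
        return min(max(target, floor_mib), ceiling_mib)
    return current_limit_mib

usage_window = [900, 1100, 1450]   # MiB, sampled over the last interval
print(next_memory_limit_mib(usage_window, current_limit_mib=1536))  # -> 1740.0
```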
95

Deployment Cost Optimization For Real-Time Risk SaaS Services In The Cloud

Engström, Joel January 2024 (has links)
Microservice-based applications thrive when each service performs a specific, well-defined task independently, contributing to the overall system. However, what happens when these independent services overlap? Could consolidating some of these services yield substantial benefits? This thesis delves into the securities overlap among services within the Nasdaq Risk Platform. By evaluating metrics such as memory consumption, JVM overhead, and CPU utilization, it identifies potential candidates for consolidation. A proof-of-concept consolidation of actual Nasdaq Risk Platform services was deployed and compared against a non-consolidated version. The analysis revealed that the consolidated version performs similarly but exhibits a more uniform and stable usage curve. Additionally, by reducing the number of JVMs and combining the service securities cache, the total RAM usage decreased by 28%.
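The consolidation argument reduces to simple arithmetic: merging services removes per-JVM overhead and lets them share one securities cache. The sketch below uses invented numbers purely for illustration; the thesis measured real Nasdaq Risk Platform services.

```python
# Back-of-the-envelope estimate of RAM saved by consolidating JVM services.
# All figures (heap sizes, overhead, cache size) are made up for illustration.
def ram_mib(services, jvm_overhead_mib=300, shared_cache_mib=800):
    # Separate deployment: each service pays its own JVM overhead and
    # keeps a private copy of the securities cache.
    separate = sum(s["heap"] + jvm_overhead_mib + s["cache"] for s in services)
    # Consolidated: one JVM, one shared cache, summed heaps.
    merged = sum(s["heap"] for s in services) + jvm_overhead_mib + shared_cache_mib
    return separate, merged

services = [
    {"name": "var-calc", "heap": 1200, "cache": 800},
    {"name": "margin-calc", "heap": 900, "cache": 800},
]
before, after = ram_mib(services)
print(f"separate={before} MiB, consolidated={after} MiB, "
      f"saving={(1 - after / before):.0%}")  # in the ballpark of the 28% reported
```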
96

A Comparative Study on Container Orchestration and Serverless Computing Platforms

Kushkbaghi, Nick January 2024 (has links)
This report compares the performance of container orchestration architecture and serverless computing platforms within cloud computing. The focus is on their application in managing real-time communications for electric vehicle (EV) charging systems using the Open Charge Point Protocol (OCPP). With the growing demand for efficient and scalable cloud solutions, especially in sectors using Internet of Things (IoT) and real-time communication technologies, this study investigates how different architectures handle high-load scenarios and real-time data transmission. Through systematic load testing of Kubernetes (for container orchestration) and Azure Functions (for serverless computing), the report measures and analyzes response times, throughput, and error rates at various demand levels. The findings indicate that while Kubernetes performs robustly under consistent loads, Azure Functions excels in managing dynamic, high-load conditions, showcasing superior scalability and efficiency. A controlled experiment method ensures a precise and objective assessment of performance differences. The report concludes by proposing a hybrid model that leverages the strengths of both architectures to optimize cloud resource utilization and performance.
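A load test of the kind described can be sketched briefly; the URL, request shape, and load profile below are placeholders, since the study drove OCPP traffic against Kubernetes- and Azure-Functions-hosted handlers.

```python
# A minimal load-test sketch: fire N concurrent requests at an endpoint
# and derive the three metrics the study compares (response time,
# throughput, error rate). Stdlib only, so it runs as-is.
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def one_request(url: str) -> tuple[float, bool]:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return time.perf_counter() - start, ok

def load_test(url: str, total: int = 500, concurrency: int = 50) -> dict:
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: one_request(url), range(total)))
    wall = time.perf_counter() - t0
    latencies = [lat for lat, _ in results]
    errors = sum(1 for _, ok in results if not ok)
    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "throughput_rps": total / wall,
        "error_rate": errors / total,
    }

print(load_test("http://charge-point-handler.example/ocpp"))  # placeholder URL
```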
97

Predictive vertical CPU autoscaling in Kubernetes based on time-series forecasting with Holt-Winters exponential smoothing and long short-term memory / Prediktiv vertikal CPU-autoskalning i Kubernetes baserat på tidsserieprediktion med Holt-Winters exponentiell utjämning och långt korttidsminne

Wang, Thomas January 2021 (has links)
Private and public clouds require users to specify requests for resources such as CPU and memory (RAM) to be provisioned for their applications. The values of these requests do not necessarily relate to the application's run-time requirements; they only help the cloud infrastructure's resource manager map requested virtual resources to physical resources. If an application exceeds these values, it might be throttled or even terminated. Consequently, requested values are often overestimated, resulting in poor resource utilization in the cloud infrastructure. Autoscaling is a technique used to overcome these problems. In this research, we formulated two new predictive CPU autoscaling strategies for Kubernetes containerized applications, using time-series analysis based on Holt-Winters exponential smoothing and long short-term memory (LSTM) artificial recurrent neural networks. The two approaches were analyzed, and their performance was compared to that of the default Kubernetes Vertical Pod Autoscaler (VPA). Efficiency was evaluated in terms of CPU resource wastage and insufficient CPU (both percentage and amount) for container workloads from Alibaba Cluster Trace 2018, among others. In our experiments, we observed that the VPA tended to perform poorly on workloads that change periodically. Our results showed that, compared to VPA, predictive methods based on Holt-Winters exponential smoothing (HW) and LSTM can decrease CPU wastage by over 40% while avoiding CPU insufficiency for various CPU workloads. Furthermore, LSTM was shown to generate more stable predictions than HW, which allowed for more robust scaling decisions.
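The Holt-Winters side of the approach can be sketched with statsmodels; the seasonal period, forecast horizon, and headroom factor below are illustrative choices, not the thesis's tuning.

```python
# Sketch of Holt-Winters-based CPU provisioning: forecast the next window
# of CPU usage from recent samples and set the request to the forecast
# peak plus a safety margin, to avoid throttling while limiting waste.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def recommend_cpu_request(usage: np.ndarray, season_len: int,
                          horizon: int = 12, headroom: float = 1.15) -> float:
    """usage: recent per-interval CPU usage (e.g. millicores per 30 s)."""
    model = ExponentialSmoothing(
        usage, trend="add", seasonal="add",
        seasonal_periods=season_len,
    ).fit()
    forecast = model.forecast(horizon)
    # Provision for the forecast peak, with headroom against underestimation.
    return float(forecast.max()) * headroom

# Synthetic periodic workload: four "days" of a sine pattern with noise,
# the kind of periodically changing load on which VPA performed poorly.
rng = np.random.default_rng(0)
t = np.arange(4 * 48)
usage = 500 + 200 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 20, t.size)
print(f"recommended CPU request: {recommend_cpu_request(usage, 48):.0f} millicores")
```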
98

Hybrid Cloud Migration Challenges. A case study at King

Boronin, Mikhail January 2020 (has links)
Migration to the cloud has been a popular topic in industry and academia in recent years. Despite the many benefits that the cloud presents, such as high availability and scalability, most on-premise application architectures are not ready to fully exploit these benefits, and adapting them to this environment is a non-trivial task. Therefore, many organizations consider a gradual move to the cloud with a hybrid cloud architecture. In this paper, the author analyzes a particular enterprise case across cloud migration topics such as cloud deployment, cloud architecture, and cloud management. This paper aims to identify, classify, and compare existing challenges in cloud migration, illustrate approaches to resolving these challenges, and discover best practices in cloud adoption and in the process of converting teams to the cloud.
99

Container Orchestration : the Migration Path to Kubernetes

Andersson, Johan, Norrman, Fredrik January 2020 (has links)
As IT platforms grow larger and more complex, so does the underlying infrastructure. Virtualization is an essential factor for more efficient resource allocation, improving both management and environmental impact. It allows more robust solutions and facilitates the use of IaC (infrastructure as code). Many systems developed today consist of containerized microservices. Considered the standard for container orchestration, Kubernetes is the natural next step for many companies. But how do we move on from previous solutions to a Kubernetes cluster? We found that few sufficiently detailed guidelines are available, and set out to gain more knowledge by diving into the subject, implementing prototypes that act as a foundation for a resulting guideline of how it can be done.
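One early step on such a migration path is describing an existing containerized service as Kubernetes objects kept under version control. The sketch below generates a minimal Deployment manifest; the image name, port, and replica count are placeholders.

```python
# Sketch of generating a minimal Kubernetes Deployment as IaC.
# Requires PyYAML (pip install pyyaml).
import yaml

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    "ports": [{"containerPort": 8080}],
                }]},
            },
        },
    }

# Rendered to YAML, the manifest can be version-controlled and applied
# with `kubectl apply -f`, which is the workflow migration prototypes
# like those in the thesis build toward.
print(yaml.safe_dump(deployment_manifest("billing", "registry.example/billing:1.0")))
```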
100

An Empirical Study on AI Workflow Automation for Positioning / En empirisk undersökning om automatiserat arbetsflöde inom AI för positionering

Jämtner, Hannes, Brynielsson, Stefan January 2022 (has links)
The maturing capabilities of Artificial Intelligence (AI) and Machine Learning (ML) have resulted in increased attention in research and development on adopting AI and ML in 5G and future networks. With this increased maturity, the use of AI/ML models in production is becoming more widespread, and maintaining these systems is more complex and likely to incur technical debt compared to standard software, since they inherit all the complexities of traditional software in addition to ML-specific ones. To handle these complexities, the field of ML Operations (MLOps) has emerged. The goal of MLOps is to extend DevOps to AI/ML and thereby speed up development and ease maintenance of AI/ML-based software, for example by supporting automatic deployment, monitoring, and continuous re-training of models. This thesis investigates how to construct an MLOps workflow by selecting a number of tools and using them to implement a workflow. Additionally, different approaches for triggering re-training are implemented and evaluated, resulting in a comparison of the triggers with regard to execution time, memory and CPU consumption, and the average performance of the machine learning model.
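The re-training triggers compared can be sketched as three small predicates; the thresholds and the drift statistic below are illustrative stand-ins for whatever the implemented workflow used.

```python
# Sketch of three common re-training trigger families: fixed schedule,
# performance degradation, and input-data drift. Thresholds are illustrative.
import time

def schedule_trigger(last_train_ts: float, period_s: float = 24 * 3600) -> bool:
    # Re-train on a fixed cadence regardless of model behavior.
    return time.time() - last_train_ts >= period_s

def performance_trigger(recent_error: float, baseline_error: float,
                        tolerance: float = 0.10) -> bool:
    # Re-train once live error degrades past the baseline by a margin.
    return recent_error > baseline_error * (1 + tolerance)

def drift_trigger(train_mean: float, live_mean: float, train_std: float,
                  z_threshold: float = 3.0) -> bool:
    # Crude drift check: a live feature mean drifts k sigmas from training.
    return abs(live_mean - train_mean) / train_std > z_threshold

# Each trigger trades compute cost against model staleness, which is why
# the thesis compares them on execution time, resource use, and accuracy.
print(performance_trigger(recent_error=0.24, baseline_error=0.20))  # True
```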
