91

Scalable, Pluggable, and Fault Tolerant Multi-Modal Situational Awareness Data Stream Management Systems

Partin, Michael 18 May 2020 (has links)
No description available.
92

Detection of Denial of Service Attacks on the Open Radio Access Network Intelligent Controller through the E2 Interface

Radhakrishnan, Vikas Krishnan 03 July 2023 (has links)
Open Radio Access Networks (Open RANs) enable flexible cellular network deployments by adopting open-source software and white-box hardware to build reference architectures customizable to innovative target use cases. The Open Radio Access Network (O-RAN) Alliance defines specifications introducing new Radio Access Network (RAN) Intelligent Controller (RIC) functions that leverage open interfaces between disaggregated RAN elements to provide precise RAN control and monitoring capabilities using applications called xApps and rApps. Multiple xApps targeting novel use cases have been developed by the O-RAN Software Community (OSC) and incubated on the Near-Real-Time RIC (Near-RT RIC) platform. However, the Near-RT RIC has, so far, been demonstrated to support only a single xApp capable of controlling the RAN elements. This work studies the scalability of the OSC Near-RT RIC to support simultaneous control signaling by multiple xApps targeting the RAN element. We particularly analyze its internal message routing mechanism and experimentally expose the design limitations of the OSC Near-RT RIC in supporting simultaneous xApp control. To this end, we extend an existing open-source RAN slicing xApp and prototype a slice-aware User Equipment (UE) admission control xApp implementing the RAN Control E2 Service Model (E2SM) to demonstrate a multi-xApp control signaling use case and assess the control routing capability of the Near-RT RIC through an end-to-end O-RAN experiment using the OSC Near-RT RIC platform and an open-source Software Defined Radio (SDR) stack. We also propose and implement a tag-based message routing strategy for disambiguating multiple xApps to enable simultaneous xApp control. Our experimental results prove that our routing strategy ensures 100% delivery of control messages between multiple xApps and E2 Nodes while guaranteeing control scalability and xApp non-repudiation. 
Using the improved Near-RT RIC platform, we assess the security posture and resiliency of the OSC Near-RT RIC in the event of volumetric application layer Denial of Service (DoS) attacks exploiting the E2 interface and the E2 Application Protocol (E2AP). We design a DoS attack agent capable of orchestrating a signaling storm attack and a high-intensity resource exhaustion DoS attack on the Near-RT RIC platform components. Additionally, we develop a latency monitoring xApp solution to detect application layer signaling storm attacks. The experimental results indicate that signaling storm attacks targeting the E2 Terminator on the Near-RT RIC cause control loop violations over the E2 interface, affecting service delivery and optimization for benign E2 Nodes. We also observe that a high-intensity E2 Setup DoS attack results in unbridled memory resource consumption, leading to service interruption and application crash. Our results also show that the E2 interface at the Near-RT RIC is vulnerable to volumetric application layer DoS attacks, and that robust monitoring, load-balancing, and DoS mitigation strategies must be incorporated to guarantee resiliency and high reliability of the Near-RT RIC. / Master of Science / Telecommunication networks need sophisticated controllers to support novel use cases and applications. Cellular base stations can be managed and optimized for better user experience through an intelligent radio controller called the Near-Real-Time RAN Intelligent Controller (Near-RT RIC), defined by the Open Radio Access Network (O-RAN) Alliance. This controller supports simultaneous connections to multiple base stations through the E2 interface and allows simple radio applications called xApps to control the behavior of those base stations.
In this research work, we study the performance and behavior of the Near-RT RIC when a malicious or compromised base station tries to overwhelm the controller through a Denial of Service (DoS) attack. We develop a solution to determine the application layer communication delay between the controller and the base station to detect potential attacks trying to compromise the functionality and availability of the controller. To implement this solution, we also upgrade the controller to support multiple radio applications to interact and control one or more base stations simultaneously. Through the developed solution, we prove that the O-RAN Software Community (OSC) Near-RT RIC is highly vulnerable to DoS attacks from malicious base stations targeting the controller over the E2 interface.
93

Cloud-native storage solutions for Kubernetes : A performance comparison

Andersson, Filip January 2023 (has links)
Kubernetes is a container orchestration system that has been rising in popularity in recent years. The modular nature of Kubernetes allows the usage of different storage solutions, and for cloud environments, cloud-native distributed storage solutions may be attractive due to their redundant nature. There are many tools for cloud-native distributed storage available on the market today with differing features and performance. Choosing the correct one for an organisation can be difficult. Organisations utilising Kubernetes in cloud environments would like to be as performance efficient as possible to save on costs and resources. This study aims to offer a benchmark and analysis for some of the most popular tools, to help organisations choose the ‘best’ solution for their operational needs, from a performance perspective. The benchmarks compare three cloud-native distributed storage solutions, OpenEBS, Portworx, and Rook-Ceph, on both Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS). For a baseline comparison, the study also benchmarks the cloud providers’ own solutions: Azure Disk Storage and Amazon Elastic Block Storage. The study compares these solutions on three key metrics, bandwidth, latency, and IOPS, in both read and write performance. / There is other digital material (e.g. film, image, or audio files) or models/artifacts belonging to the thesis that need to be archived.
94

Deployment Cost Optimization For Real-Time Risk SaaS Services In The Cloud

Engström, Joel January 2024 (has links)
Microservice-based applications thrive when each service performs a specific, well-defined task independently, contributing to the overall system. However, what happens when these independent services overlap? Could consolidating some of these services yield substantial benefits? This thesis delves into the securities overlap among services within the Nasdaq Risk Platform. By evaluating metrics such as memory consumption, JVM overhead, and CPU utilization, it identifies potential candidates for consolidation. A proof-of-concept consolidation of actual Nasdaq Risk Platform services was deployed and compared against a non-consolidated version. The analysis revealed that the consolidated version performs similarly but exhibits a more uniform and stable usage curve. Additionally, by reducing the number of JVMs and combining the service securities cache, the total RAM usage decreased by 28%.
95

A Comparative Study on Container Orchestration and Serverless Computing Platforms

Kushkbaghi, Nick January 2024 (has links)
This report compares the performance of container orchestration architecture and serverless computing platforms within cloud computing. The focus is on their application in managing real-time communications for electric vehicle (EV) charging systems using the Open Charge Point Protocol (OCPP). With the growing demand for efficient and scalable cloud solutions, especially in sectors using Internet of Things (IoT) and real-time communication technologies, this study investigates how different architectures handle high-load scenarios and real-time data transmission. Through systematic load testing of Kubernetes (for container orchestration) and Azure Functions (for serverless computing), the report measures and analyzes response times, throughput, and error rates at various demand levels. The findings indicate that while Kubernetes performs robustly under consistent loads, Azure Functions excels in managing dynamic, high-load conditions, showcasing superior scalability and efficiency. A controlled experiment method ensures a precise and objective assessment of performance differences. The report concludes by proposing a hybrid model that leverages the strengths of both architectures to optimize cloud resource utilization and performance.
96

Predictive vertical CPU autoscaling in Kubernetes based on time-series forecasting with Holt-Winters exponential smoothing and long short-term memory / Prediktiv vertikal CPU-autoskalning i Kubernetes baserat på tidsserieprediktion med Holt-Winters exponentiell utjämning och långt korttidsminne

Wang, Thomas January 2021 (has links)
Private and public clouds require users to specify requests for resources such as CPU and memory (RAM) to be provisioned for their applications. The values of these requests do not necessarily relate to the application’s run-time requirements, but only help the cloud infrastructure resource manager to map requested virtual resources to physical resources. If an application exceeds these values, it might be throttled or even terminated. Consequently, requested values are often overestimated, resulting in poor resource utilization in the cloud infrastructure. Autoscaling is a technique used to overcome these problems. In this research, we formulated two new predictive CPU autoscaling strategies for Kubernetes containerized applications, using time-series analysis based on Holt-Winters exponential smoothing and long short-term memory (LSTM) artificial recurrent neural networks. The two approaches were analyzed, and their performances were compared to that of the default Kubernetes Vertical Pod Autoscaler (VPA). Efficiency was evaluated in terms of CPU resource wastage, and insufficient CPU percentage and amount, for container workloads from Alibaba Cluster Trace 2018, among others. In our experiments, we observed that the Kubernetes Vertical Pod Autoscaler (VPA) tended to perform poorly on workloads that change periodically. Our results showed that, compared to VPA, predictive methods based on Holt-Winters exponential smoothing (HW) and long short-term memory (LSTM) can decrease CPU wastage by over 40% while avoiding CPU insufficiency for various CPU workloads. Furthermore, LSTM was shown to generate stabler predictions than HW, which allowed for more robust scaling decisions.
97

Hybrid Cloud Migration Challenges. A case study at King

Boronin, Mikhail January 2020 (has links)
Migration to the cloud has been a popular topic in industry and academia in recent years. Despite the many benefits that the cloud presents, such as high availability and scalability, most on-premise application architectures are not ready to fully exploit the benefits of this environment, and adapting them to it is a non-trivial task. Therefore, many organizations consider a gradual move to the cloud with a Hybrid Cloud architecture. In this paper, the author analyzes a particular enterprise case across cloud migration topics such as cloud deployment, cloud architecture, and cloud management. This paper aims to identify, classify, and compare existing challenges in cloud migration, illustrate approaches to resolve these challenges, and discover best practices in cloud adoption and in the process of converting teams to the cloud.
98

Container Orchestration : the Migration Path to Kubernetes

Andersson, Johan, Norrman, Fredrik January 2020 (has links)
As IT platforms grow larger and more complex, so does the underlying infrastructure. Virtualization is an essential factor for more efficient resource allocation, improving both management and environmental impact. It allows more robust solutions and facilitates the use of IaC (infrastructure as code). Many systems developed today consist of containerized microservices. Considered the standard for container orchestration, Kubernetes is the natural next step for many companies. But how do we move from previous solutions to a Kubernetes cluster? We found that there are not enough sufficiently detailed guidelines available, and set out to gain more knowledge by diving into the subject, implementing prototypes that act as a foundation for a resulting guideline of how it can be done.
99

An Empirical Study on AI Workflow Automation for Positioning / En empirisk undersökning om automatiserat arbetsflöde inom AI för positionering

Jämtner, Hannes, Brynielsson, Stefan January 2022 (has links)
The maturing capabilities of Artificial Intelligence (AI) and Machine Learning (ML) have resulted in increased attention in research and development on adopting AI and ML in 5G and future networks. With this increased maturity, the usage of AI/ML models in production is becoming more widespread, and maintaining these systems is more complex and likely to incur technical debt when compared to standard software. This is due to inheriting all the complexities of traditional software in addition to ML-specific ones. To handle these complexities, the field of ML Operations (MLOps) has emerged. The goal of MLOps is to extend DevOps to AI/ML and thereby speed up development and ease maintenance of AI/ML-based software, for example by supporting automatic deployment, monitoring, and continuous re-training of models. This thesis investigates how to construct an MLOps workflow by selecting a number of tools and using them to implement a workflow. Additionally, different approaches for triggering re-training are implemented and evaluated, resulting in a comparison of the triggers with regard to execution time, memory and CPU consumption, and the average performance of the machine learning model.
