1

Latency Aware SmartNIC based Load Balancer (LASLB)

Kadwadkar, Shivanand January 2021 (has links)
In the 21st century, CPU processing power is no longer growing at the pace it did in the previous century, while data volumes and the demand for higher speeds keep increasing. Meeting this demand requires multiple middlebox instances in order to scale. With recent progress in virtualization, middleboxes are increasingly virtualized and deployed as software Network Functions (NFs) running on commodity CPUs. Many systems perform Load Balancing (LB) in software, which consumes extra CPU on the NF side. Past research has tried to move LB functionality from software to hardware, but the majority of hardware-based load balancers provide only basic LB functionality and depend on the NF to supply current performance statistics. Providing statistics feedback to the LB consumes processing power at the NF and creates an inter-dependency.

In this thesis work, we explore moving the load-balancing functionality to a Smart Network Interface Card (smartNIC). Our load balancer distributes traffic among the set of CPUs on which NF instances run. We use the P4 and C programming languages in our design, which gives us a combination of high-speed parallel packet processing and the ability to implement relatively complex load-balancing features. Our LB approach uses the latency experienced by packets as an estimate of the current CPU load: in our design, higher latency indicates a busier CPU. The Latency Aware smartNIC based Load Balancer (LASLB) also aims to reduce tail latency by moving traffic from CPUs where packets experience high latency to CPUs that process traffic at low latency. The approach does not require any statistics feedback from the NF, which avoids tightly binding the LB to the NF.

Our experiments on different traffic profiles show that LASLB can save ~30% of the NF's CPU. In terms of fairness of CPU loading, our evaluation indicates that under imbalanced traffic LASLB distributes load more evenly than the other evaluated smartNIC-based LB methods. Our evaluation also shows that LASLB can reduce 95th-percentile tail latency by ~22% compared to software load balancing.
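A minimal C sketch of the latency-aware selection idea summarized above, assuming each CPU keeps an exponentially weighted moving average of observed packet latency and new flows go to the CPU with the lowest estimate. All names (laslb_update, laslb_pick_cpu, NUM_CPUS) are hypothetical; this is an illustration, not code from the thesis, which targets a smartNIC in P4 and Micro-C.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_CPUS 4

/* Per-CPU latency estimate in nanoseconds (EWMA with 1/8 weight on new samples). */
static uint32_t ewma_latency_ns[NUM_CPUS];

/* Update the estimate when a packet returns from NF processing on `cpu`. */
static void laslb_update(int cpu, uint32_t observed_latency_ns)
{
    uint64_t old = ewma_latency_ns[cpu];
    ewma_latency_ns[cpu] = (uint32_t)((old * 7 + observed_latency_ns) / 8);
}

/* Pick the CPU that currently looks least busy for a newly arriving flow. */
static int laslb_pick_cpu(void)
{
    int best = 0;
    for (int cpu = 1; cpu < NUM_CPUS; cpu++)
        if (ewma_latency_ns[cpu] < ewma_latency_ns[best])
            best = cpu;
    return best;
}

int main(void)
{
    /* Simulated observations: CPU 2 is heavily loaded. */
    laslb_update(0, 5000);
    laslb_update(1, 6000);
    laslb_update(2, 40000);
    laslb_update(3, 7000);

    printf("new flow -> CPU %d\n", laslb_pick_cpu());
    return 0;
}
```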
2

Towards Machine Learning Inference in the Data Plane

Langlet, Jonatan January 2019 (has links)
Recently, machine learning has been considered an important tool for various networking-related use cases such as intrusion detection, flow classification, etc. Traditionally, machine-learning-based classification algorithms run on dedicated machines outside of the fast path, e.g. on Deep Packet Inspection boxes. This imposes additional latency in order to detect threats or classify flows.

With the recent advance of programmable data planes, implementing advanced functionality directly in the fast path is now a possibility. In this thesis, we propose to implement Artificial Neural Network inference together with flow metadata extraction directly in the data plane of P4-programmable switches, routers, or Network Interface Cards (NICs).

We design a P4 pipeline and optimize the memory and computational operations for our data plane target, a programmable NIC with Micro-C external support. The results show that neural networks of a reasonable size (i.e. 3 hidden layers with 30 neurons each) can process flows totaling over a million packets per second, while the packet latency impact from extracting a total of 46 features is 1.85 μs.
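A rough C sketch of the kind of integer-only feed-forward inference the abstract describes (46 input features, 3 hidden layers of 30 neurons, ReLU), written as a data-plane target without floating point would require. The weights are zero placeholders and every name is an assumption, not taken from the thesis pipeline.

```c
#include <stdint.h>
#include <stdio.h>

#define N_FEATURES 46
#define N_HIDDEN   30
#define Q_SHIFT    8            /* weights stored as Q8 fixed point */

/* Placeholder parameters; a real deployment would load trained weights. */
static int16_t w0[N_HIDDEN * N_FEATURES], w1[N_HIDDEN * N_HIDDEN], w2[N_HIDDEN * N_HIDDEN];
static int32_t b0[N_HIDDEN], b1[N_HIDDEN], b2[N_HIDDEN];
static int16_t w_out[2 * N_HIDDEN];
static int32_t b_out[2];

/* One fully connected layer in fixed point: out = (W*in + b) >> Q_SHIFT. */
static void fc_layer(const int32_t *in, int n_in,
                     const int16_t *w, const int32_t *b,
                     int32_t *out, int n_out)
{
    for (int o = 0; o < n_out; o++) {
        int64_t acc = b[o];
        for (int i = 0; i < n_in; i++)
            acc += (int64_t)w[o * n_in + i] * in[i];
        out[o] = (int32_t)(acc >> Q_SHIFT);
    }
}

/* Clamp negative activations to zero (ReLU); applied to hidden layers only. */
static void relu(int32_t *v, int n)
{
    for (int i = 0; i < n; i++)
        if (v[i] < 0)
            v[i] = 0;
}

/* Classify one flow from its 46 extracted features; returns 1 for the "positive" class. */
static int classify_flow(const int32_t features[N_FEATURES])
{
    int32_t h0[N_HIDDEN], h1[N_HIDDEN], h2[N_HIDDEN], logits[2];

    fc_layer(features, N_FEATURES, w0, b0, h0, N_HIDDEN); relu(h0, N_HIDDEN);
    fc_layer(h0, N_HIDDEN, w1, b1, h1, N_HIDDEN);          relu(h1, N_HIDDEN);
    fc_layer(h1, N_HIDDEN, w2, b2, h2, N_HIDDEN);          relu(h2, N_HIDDEN);
    fc_layer(h2, N_HIDDEN, w_out, b_out, logits, 2);

    return logits[1] > logits[0];
}

int main(void)
{
    int32_t features[N_FEATURES] = {0};   /* dummy feature vector */
    printf("class = %d\n", classify_flow(features));
    return 0;
}
```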
3

Offloading Virtual Network Functions – Hierarchical Approach

Langlet, Jonatan January 2020 (has links)
Next-generation mobile networks are designed to run in a virtualized environment, enabling rapid infrastructure deployment and high flexibility for coping with increasing traffic demands and new service requirements. Such network function virtualization imposes additional packet latencies and potential bottlenecks not present when legacy network equipment runs on dedicated hardware; such bottlenecks include PCIe transfer delays, virtualization overhead, and the use of commodity server hardware that is not optimized for packet processing.

Through recent developments in P4-programmable networking devices, it is possible to implement complex packet processing pipelines directly in the network data plane, allowing critical traffic flows to be offloaded and flexibly hardware-accelerated on programmable packet processing hardware before entering the virtualized environment.

In this thesis, we design and implement a novel hybrid NFV processing architecture that integrates programmable NICs and commodity server hardware and is capable of offloading virtual network functions for specified traffic flows directly to the server network card. These flows completely bypass the softwarization overhead, while less sensitive traffic is processed on the underlying host server. A sketch of the per-packet offload decision appears below.

An evaluation in a testbed with customized traffic generators shows that accelerated flows have significantly lower jitter and latency than flows processed on commodity server hardware. Our evaluation gives important insights into the design of such hardware-accelerated virtual network deployments, showing that hybrid network architectures are a viable solution for enabling infrastructure scalability without sacrificing critical flow performance.
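A hedged C sketch of the per-packet decision such a hybrid NIC/host design makes: flows matching an installed offload rule stay in the NIC pipeline, everything else is sent to the host VNF chain. The structures and function names (offload_rule, classify_packet) are illustrative assumptions, not the thesis implementation.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct five_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

enum action { TO_HOST_VNF, OFFLOAD_ON_NIC };

struct offload_rule {
    struct five_tuple match;
    enum action       act;      /* what to do with matching packets */
};

#define MAX_RULES 256
static struct offload_rule rules[MAX_RULES];
static int n_rules;

static bool tuple_eq(const struct five_tuple *a, const struct five_tuple *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->proto == b->proto;
}

/* Control plane installs an offload rule for a latency-critical flow. */
static int install_offload_rule(const struct five_tuple *ft)
{
    if (n_rules == MAX_RULES)
        return -1;
    rules[n_rules].match = *ft;
    rules[n_rules].act   = OFFLOAD_ON_NIC;
    n_rules++;
    return 0;
}

/* Per-packet decision: offloaded flows bypass the host entirely. */
static enum action classify_packet(const struct five_tuple *ft)
{
    for (int i = 0; i < n_rules; i++)
        if (tuple_eq(&rules[i].match, ft))
            return rules[i].act;
    return TO_HOST_VNF;          /* default: full software NF chain */
}

int main(void)
{
    struct five_tuple critical = { 0x0a000001, 0x0a000002, 5000, 80, 6 };  /* 10.0.0.1 -> 10.0.0.2, TCP */
    struct five_tuple other    = { 0x0a000003, 0x0a000002, 6000, 80, 6 };

    install_offload_rule(&critical);
    printf("critical flow -> %s\n", classify_packet(&critical) == OFFLOAD_ON_NIC ? "NIC" : "host");
    printf("other flow    -> %s\n", classify_packet(&other)    == OFFLOAD_ON_NIC ? "NIC" : "host");
    return 0;
}
```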
4

SNIC-DSM: SmartNIC based DSM Infrastructure for Heterogeneous-ISA Machines

Ramesh, Hemanth 14 August 2023 (has links)
Heterogeneous computing is increasingly used in today's datacenters to meet the growing computational demands of applications. Heterogeneous hardware typically includes CPUs, GPUs, ASICs, and FPGAs, among others. An important emerging trend is instruction-set-architecture (ISA) heterogeneity: high-end x86 servers with attached SmartNICs and SmartSSDs that incorporate general-purpose CPUs, typically of the RISC ISA family (e.g., ARM, RISC-V). To alleviate resource congestion on server computing nodes, application workloads can be scaled out across server x86 CPUs and SmartNIC ARM CPUs using the distributed shared memory (DSM) abstraction. We present SNIC-DSM, a SmartNIC-based DSM infrastructure for heterogeneous-ISA machines. SNIC-DSM implements a low-latency messaging layer, which enables inter-node communication across multi-ISA CPUs, and a DSM protocol processor that provides memory coherency among these nodes, both implemented in the SmartNIC's FPGA logic. SNIC-DSM is reconfigurable and allows the implementation of different memory consistency protocols. Our experimental studies using compute-intensive benchmarks reveal that SNIC-DSM outperforms the state-of-the-art software DSM (Popcorn Linux's DSM) when server resource congestion is high. / Master of Science / The availability of heterogeneous computing architectures has led to the development of distributed shared memory (DSM) systems, which allow compute-intensive applications to run in a distributed manner on different types of computing devices such as graphics processors, reconfigurable logic devices, and custom integrated circuits. Adopting such a heterogeneous computing strategy yields better performance and improves power consumption. Generally, these DSM systems use a software-based approach, which offers great flexibility but suffers from software overheads. Hardware-based approaches overcome these limitations but generally do not offer flexibility. This thesis presents SNIC-DSM, a reconfigurable implementation of the DSM framework. SNIC-DSM provides a platform for the host and smart networking devices such as SmartNICs to communicate with each other and enables application execution in a distributed manner by providing memory coherency. Our experimental evaluation using High-Performance Computing benchmarks reveals that SNIC-DSM improves performance compared with software-based DSM.
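A small C sketch of an MSI-style per-page state machine, to make the memory-coherency idea behind a DSM protocol processor concrete. The states, message types, and function names are assumptions for illustration only; the SNIC-DSM protocol itself is implemented in the SmartNIC's FPGA logic, not in host C code.

```c
#include <stdint.h>
#include <stdio.h>

enum page_state { INVALID, SHARED, MODIFIED };
enum dsm_msg    { READ_REQ, WRITE_REQ, INVALIDATE, DATA_REPLY, INV_ACK };

struct page_entry {
    enum page_state state;
    uint64_t        owner_node;      /* node holding the MODIFIED copy, if any */
    uint64_t        sharers_bitmap;  /* nodes holding SHARED copies */
};

/* Handle one coherence message for a page and return the reply to send. */
static enum dsm_msg handle_request(struct page_entry *p, enum dsm_msg req, uint64_t from_node)
{
    switch (req) {
    case READ_REQ:
        /* Grant a shared copy and remember the new sharer. */
        p->sharers_bitmap |= (1ULL << from_node);
        if (p->state == INVALID)
            p->state = SHARED;
        return DATA_REPLY;

    case WRITE_REQ:
        /* Requester becomes exclusive owner; previous sharers must drop their copies. */
        p->state          = MODIFIED;
        p->owner_node     = from_node;
        p->sharers_bitmap = (1ULL << from_node);
        return INVALIDATE;

    case INVALIDATE:
        p->state          = INVALID;
        p->sharers_bitmap = 0;
        return INV_ACK;

    default:
        return INV_ACK;              /* DATA_REPLY / INV_ACK need no further reply here */
    }
}

int main(void)
{
    struct page_entry page = { INVALID, 0, 0 };

    enum dsm_msg r1 = handle_request(&page, READ_REQ, 1);   /* node 1 reads the page */
    enum dsm_msg r2 = handle_request(&page, WRITE_REQ, 2);  /* node 2 wants to write */

    printf("after read:  reply=%d state=%d\n", (int)r1, (int)page.state);
    printf("after write: reply=%d state=%d\n", (int)r2, (int)page.state);
    return 0;
}
```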
5

Large-Message Nonblocking Allgather and Broadcast Offload via BlueField-2 DPU

Sarkauskas, Nicholas Robert 09 August 2022 (has links)
No description available.
