11. Zpracování paketů pomocí knihovny DPDK / Packet Processing Using DPDK Library. Procházka, Aleš, January 2019.
This master's thesis focuses on filtering and forwarding packets in high-speed networks. First, the DPDK framework, used for fast packet processing, is introduced. The thesis then presents the design of an application for high-speed packet filtering, together with tools that make it easier to work with that application. Finally, the implementation of this design is described and tested, and the results are compared with those of a standard firewall.
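As a rough, hypothetical illustration of the DPDK receive-and-filter pattern such an application builds on (not the thesis's actual code), the C sketch below polls one port with rte_eth_rx_burst() and drops packets whose IPv4 source address matches a blocked entry; EAL, port, and memory-pool initialisation are omitted, and the queue numbers and blocked address are placeholders.

```c
/* Minimal DPDK RX/filter loop sketch. Assumes rte_eal_init(),
 * rte_eth_dev_configure(), and mbuf-pool setup have already been done. */
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_byteorder.h>

#define BURST_SIZE 32

static void filter_loop(uint16_t port_id, rte_be32_t blocked_src)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);

        for (uint16_t i = 0; i < nb_rx; i++) {
            struct rte_ether_hdr *eth =
                rte_pktmbuf_mtod(bufs[i], struct rte_ether_hdr *);

            /* untagged IPv4 assumed; VLAN handling omitted for brevity */
            if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) {
                struct rte_ipv4_hdr *ip = (struct rte_ipv4_hdr *)(eth + 1);

                if (ip->src_addr == blocked_src) {   /* filter rule matched */
                    rte_pktmbuf_free(bufs[i]);
                    continue;
                }
            }
            /* forward unmatched packets back out of the same port/queue */
            if (rte_eth_tx_burst(port_id, 0, &bufs[i], 1) == 0)
                rte_pktmbuf_free(bufs[i]);
        }
    }
}
```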
12. An evaluation of eXpress Data Path from a 5G perspective : Offloading packet processing functions of a 5G simulator to a driver context / En utvärdering av eXpress Data Path från ett 5G-perspektiv. Byström, Adrian; Salo, Mattias, January 2022.
The world of computer networks is constantly evolving towards more efficient algorithms and more effective ways of using hardware resources. One of these evolutions is the eXpress Data Path (XDP), an interrupt-based data path in the Linux kernel that runs JIT-compiled programs in a virtual machine in a device driver context. Through XDP, fast packet processing is possible while keeping the functionality of the Linux kernel intact. This thesis therefore aims to illuminate possible use cases for XDP in 5G simulators, as this real-world application of XDP is of interest, specifically use cases that require fast packet processing. The use cases are evaluated through a performance evaluation of XDP and a literature review of 5G simulators, XDP, and related technologies. The evaluation indicates that XDP is a candidate for packet processing in 5G simulators, particularly compared with the performance that is currently achievable. Based on the performance evaluation and the literature review, the thesis argues that XDP can be used for small programs, preferably for data ingestion, in 5G simulators.
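As a hedged illustration of the kind of small, driver-level program the thesis evaluates (not code taken from the thesis), the sketch below is a minimal XDP filter that parses the Ethernet and IPv4 headers and drops UDP packets, passing everything else to the normal Linux stack; the section name and drop rule are illustrative only.

```c
/* Minimal XDP sketch: drop IPv4/UDP, pass everything else.
 * Built with clang -O2 -target bpf and attached with ip(8) or libbpf. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_drop_udp(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)        /* bounds check for the verifier */
        return XDP_PASS;

    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    if (ip->protocol == IPPROTO_UDP)
        return XDP_DROP;                      /* example filtering decision */

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```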
13. Evaluation of embedded processors for next generation asic : Evaluation of open source Risc-V processors and tools ability to perform packet processing operations compared to Arm Cortex M7 processors / Utvärdering av inbyggda processorer för nästa generation asic : Utvärdering av öppen källkod Risc-V processorer och verktyg’s förmåga att utföra databehandlingsfunktioner i jämförelse med en Arm Cortex M7 processor. Musasa Mutombo, Mike, January 2021.
Nowadays, network processors are an integral part of information technology. With the deployment of 5G networks ramping up around the world, numerous new devices are going to take advantage of their processing power and programming flexibility. Information technology providers such as Ericsson spend a great amount of financial resources on licensing deals to use processors with proprietary instruction set architecture designs from companies like Arm Holdings. A new, non-proprietary instruction set architecture known as Risc-V is being developed. There are many open-source processors based on the Risc-V architecture, but it is still unclear how well an open-source Risc-V processor performs network packet processing tasks compared with an Arm-based processor. The main purpose of this thesis is to design a test model that simulates and evaluates how well an open-source Risc-V processor performs packet processing compared to an Arm Cortex M7 processor. This was done by writing C code that simulates some key packet processing functions over 50 randomly generated 72-byte data packets. The following functions were tested: framing, parsing, pattern matching, and classification. The code was ported and executed on both an Arm Cortex M7 processor and emulated open-source Risc-V processors. A working packet processing test code was built and evaluated on an Arm Cortex M7 processor. Three different open-source Risc-V processors were tested: Arianne, SweRV core, and Rocket-chip. The execution times of the two cases were analyzed and compared; the execution time of the test code on Arm was 67.5 ns. Based on the results, it can be argued that open-source Risc-V processor tools are not yet fully reliable and ready to be used for packet processing applications. Further evaluation should be performed on this topic, with a more in-depth look at the SweRV core processor and at physical open-source Risc-V hardware instead of emulators. / Network processors are an important building block of information technology today. As 5G networks are rolled out around the world, many more devices will be able to take advantage of their powerful performance and programming flexibility. Information technology companies such as Ericsson spend considerable financial resources on licences to use processors based on proprietary instruction set architectures from Arm Holdings. Continuing to buy licences is very costly, since these architectures are a building block in the design of many processors and other components. Today there is a promising new processor instruction set architecture that is not licensed, called Risc-V. Thanks to Risc-V, many proprietary and open-source processors have been developed. However, very little is known about how well they perform in network applications. Can an open-source Risc-V processor perform network packet processing functions as well as a proprietary Arm Cortex M7 processor? The main purpose of this work is to build a test model that investigates how well an open-source Risc-V based processor performs packet processing operations compared with an Arm Cortex M7 processor. This was carried out by developing C code that simulates the reception and processing of 72-byte data packets. The following functions were tested: framing, parsing, pattern matching, and classification. The code was compiled and tested on both an Arm Cortex M7 processor and three different emulated open-source Risc-V processors: Arianne, SweRV core, and Rocket-chip. After testing several open-source Risc-V processors and running the test code on an Arm Cortex M7 processor, it can be argued that the open-source Risc-V processor tools are not yet sufficiently reliable. This report suggests that open-source Risc-V emulators and tools need further development before they can be used in network applications. There is a need for further investigation of this topic in the future, for example a deeper study of the SweRV core processor or of physical open-source Risc-V hardware.
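The abstract names the benchmarked functions (framing, parsing, pattern matching, classification) but does not reproduce the test code; the plain C sketch below is only a hypothetical illustration of such a parse-and-classify step over a fixed 72-byte frame, assuming an Ethernet + IPv4 layout, and is not the thesis's actual benchmark. It uses no external libraries, so it could be compiled for a Cortex-M or Risc-V target.

```c
/* Illustrative parse-and-classify routine over a fixed 72-byte frame. */
#include <stdint.h>
#include <stddef.h>

#define FRAME_LEN   72
#define ETH_HDR_LEN 14

enum pkt_class { PKT_OTHER, PKT_IPV4_TCP, PKT_IPV4_UDP };

static uint16_t rd16(const uint8_t *p)           /* big-endian 16-bit read */
{
    return (uint16_t)((p[0] << 8) | p[1]);
}

enum pkt_class classify(const uint8_t frame[FRAME_LEN])
{
    /* framing/parsing: EtherType field sits at offset 12 */
    if (rd16(frame + 12) != 0x0800)              /* not IPv4 */
        return PKT_OTHER;

    const uint8_t *ip  = frame + ETH_HDR_LEN;
    size_t         ihl = (size_t)(ip[0] & 0x0F) * 4;   /* IPv4 header length */
    if (ETH_HDR_LEN + ihl > FRAME_LEN)
        return PKT_OTHER;

    /* pattern matching/classification on the IPv4 protocol field */
    switch (ip[9]) {
    case 6:  return PKT_IPV4_TCP;
    case 17: return PKT_IPV4_UDP;
    default: return PKT_OTHER;
    }
}
```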
14. Intelligent based Packet Scheduling Scheme using Internet Protocol/Multi-Protocol Label Switching (IP/MPLS) Technology for 5G. Design and Investigation of Bandwidth Management Technique for Service-Aware Traffic Engineering using Internet Protocol/Multi-Protocol Label Switching (IP/MPLS) for 5G. Mustapha, Oba Z., January 2019.
Multi-Protocol Label Switching (MPLS) makes use of traffic engineering (TE) techniques and a variety of protocols to establish pre-determined, highly efficient routes in a Wide Area Network (WAN). Unlike IP networks, in which routing decisions have to be made through header analysis on a hop-by-hop basis, MPLS uses a short bit sequence that indicates the forwarding equivalence class (FEC) of a packet together with a predefined routing table to handle packets of a specific FEC type. Header analysis of packets is therefore not required, resulting in lower latency, and packets with similar characteristics can be routed in a consistent manner; for example, packets carrying real-time information can be routed over low-latency paths across the network. The key to the success of MPLS is thus to efficiently control and distribute the available bandwidth between applications across the network.
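As a concrete, hypothetical illustration of the label-based forwarding described above, the C sketch below looks up a pre-installed label forwarding entry instead of performing per-hop header analysis; the table contents, structure, and field names are assumptions and are not taken from the thesis.

```c
/* Illustrative MPLS-style lookup: the incoming label selects a pre-computed
 * forwarding entry, so no per-hop IP header analysis is needed. */
#include <stdint.h>
#include <stddef.h>

struct lfib_entry {            /* label forwarding information base entry */
    uint32_t in_label;
    uint32_t out_label;        /* label swapped in on the way out */
    uint8_t  out_port;
};

static const struct lfib_entry lfib[] = {
    { .in_label = 100, .out_label = 200, .out_port = 1 },  /* e.g. low-latency LSP */
    { .in_label = 101, .out_label = 300, .out_port = 2 },
};

const struct lfib_entry *lfib_lookup(uint32_t label)
{
    for (size_t i = 0; i < sizeof(lfib) / sizeof(lfib[0]); i++)
        if (lfib[i].in_label == label)
            return &lfib[i];
    return NULL;               /* unknown label: drop or punt to control plane */
}
```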
Considerable research effort has already been devoted to bandwidth management in MPLS networks. However, with the imminent roll-out of 5G, MPLS is seen as a key technology for the mobile backhaul, and to cope with 5G's demands for rich, context-aware, multimedia-based user applications, more efficient bandwidth management solutions need to be derived.
This thesis focuses on the design of bandwidth management algorithms, more specifically QoS scheduling, in MPLS networks for the 5G mobile backhaul. The aim is to ensure the reliability and speed of packet transfer across the network. As 5G is expected to greatly improve the user experience with innovative and high-quality services, users' perceived quality of service (QoS) needs to be taken into account when deriving such bandwidth management solutions. QoS expectations from users are often subjective and vague, so this thesis proposes a fuzzy logic based solution to provide service-aware and user-centric bandwidth management that satisfies the requirements imposed by the network and its users.
Unfortunately, the disadvantage of fuzzy logic is scalability, since the number of required fuzzy rules and membership functions grows as the complexity of the system being modelled increases. To resolve this issue, this thesis proposes the use of neuro-fuzzy techniques to solicit interpretable IF-THEN rules. The algorithms are implemented and tested through NS2 and Matlab simulations, and their performance is evaluated and compared with conventional algorithms in terms of average throughput, delay, reliability, cost, packet loss ratio, and utilization rate. Simulation results show that the neuro-fuzzy based algorithm performs better than the fuzzy and other conventional packet scheduling algorithms using IP and IP over MPLS technologies. / Tertiary Education Trust Fund (TETFUND)
15. Toward Highly-efficient GPU-centric Networking / Mot Högeffektiva GPU-centrerade Nätverk. Girondi, Massimo, January 2024.
Graphics Processing Units (GPUs) are emerging as the most popular accelerator for many applications, powering the core of machine learning applications and many computing-intensive workloads. GPUs have typically been considered as accelerators, with Central Processing Units (CPUs) in charge of the main application logic, data movement, and network connectivity. In these architectures, the input and output data of network-based, GPU-accelerated applications typically traverse the CPU and the operating system network stack multiple times, being copied across the system's main memory. These traversals increase application latency and require expensive CPU cycles, reducing the power efficiency of systems and increasing overall response times. The inefficiencies become even more important in latency-bound deployments or at high throughput, where copy times can easily inflate the response time of modern GPUs. The main contribution of this dissertation is a step towards a GPU-centric network architecture that allows GPUs to initiate network transfers without the intervention of CPUs. We focus on commodity hardware, using NVIDIA GPUs and Remote Direct Memory Access over Converged Ethernet (RoCE) to realize this architecture, removing the need for highly homogeneous clusters and ad-hoc network architectures required by many other similar approaches. By porting some rdma-core posting routines to the GPU runtime, we can saturate a 100-Gbps link without any CPU cycles, reducing the overall system response time while increasing power efficiency and improving application throughput. The second contribution concerns the analysis of Clockwork, a state-of-the-art inference serving system, showing the limitations imposed by controller-centric, CPU-mediated architectures. We then propose an alternative architecture for this system based on an RDMA transport and study the performance gains that such a system would introduce. An integral component of an inference system is to account for and track user flows and to distribute them across multiple worker nodes. Our third contribution aims to understand the challenges of connection tracking applications running at 100 Gbps, in the context of a stateful load balancer running on commodity hardware.
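For context, the sketch below shows the conventional CPU-side way of posting a one-sided RDMA WRITE with the rdma-core/libibverbs API; the dissertation's contribution is to move this posting step into GPU code. The queue pair, memory region, and the exchange of the remote address and rkey are assumed to have been set up elsewhere, and the helper name is hypothetical.

```c
/* CPU-side sketch of posting a one-sided RDMA WRITE with libibverbs. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                    void *local_buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* local buffer registered in mr */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;   /* request a completion */
    wr.wr.rdma.remote_addr = remote_addr;         /* obtained out of band */
    wr.wr.rdma.rkey        = rkey;

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);       /* 0 on success */
}
```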