  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Fast Multi-pattern Matching Algorithm for Network Processors

Wu, Pao-chin 10 September 2006 (has links)
There are more and more Internet services, such as video on demand, voice over IP, and blogs, and network quality is important for providing them well. P2P technology decentralizes bandwidth usage, so a server can provide services with lower bandwidth; however, if the use of P2P applications is not limited, P2P traffic fills the available bandwidth. We therefore need a service controller that can limit P2P traffic to preserve quality for other applications. Traditional network systems use either software or hardware solutions: software solutions offer flexibility but low performance, while hardware solutions offer the highest speed but are inflexible and expensive to modify or upgrade. A third option is the network processor, which is programmable and optimized for packet processing. To limit P2P traffic we first need a good service classifier, and the performance of a signature-based service classifier is dominated by the speed of its pattern matching algorithm. In this paper, we propose a fast multi-pattern matching algorithm that improves the WM algorithm. Several algorithms are implemented on the IXP2400 network processor for performance evaluation, and our proposed algorithm outperforms the others when its parameters are properly set.
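The WM algorithm referred to above is the Wu-Manber multi-pattern matcher. As a rough illustration of the shift-table idea it is built on (not of the improved algorithm proposed in the thesis), the following C sketch uses a block size of one character and no hash or prefix tables; the patterns and the text are made-up examples.

/* Simplified multi-pattern search in the spirit of Wu-Manber: build a
 * shift table from the shortest-pattern-length prefix of every pattern
 * (block size 1, no hash/prefix tables), then slide a window over the text.
 * Illustrative sketch only, not the algorithm proposed in the thesis. */
#include <stdio.h>
#include <string.h>

#define ALPHABET 256

static size_t min_len(const char **pats, int n) {
    size_t m = strlen(pats[0]);
    for (int i = 1; i < n; i++) {
        size_t l = strlen(pats[i]);
        if (l < m) m = l;
    }
    return m;
}

static void search(const char *text, const char **pats, int n) {
    size_t m = min_len(pats, n);            /* window length = shortest pattern */
    size_t shift[ALPHABET];
    for (int c = 0; c < ALPHABET; c++) shift[c] = m;   /* default: skip a whole window */
    for (int i = 0; i < n; i++)
        for (size_t j = 0; j < m; j++) {    /* smallest safe shift per character */
            size_t s = m - 1 - j;
            unsigned char c = (unsigned char)pats[i][j];
            if (s < shift[c]) shift[c] = s;
        }

    size_t tlen = strlen(text);
    size_t pos = 0;
    while (pos + m <= tlen) {
        unsigned char last = text[pos + m - 1];
        if (shift[last] == 0) {             /* window may end a pattern prefix: verify */
            for (int i = 0; i < n; i++)
                if (strncmp(text + pos, pats[i], strlen(pats[i])) == 0)
                    printf("pattern \"%s\" at offset %zu\n", pats[i], pos);
            pos++;
        } else {
            pos += shift[last];             /* safe skip */
        }
    }
}

int main(void) {
    const char *patterns[] = { "bittorrent", "edonkey", "gnutella" };
    search("GET /announce?info_hash=... bittorrent handshake", patterns, 3);
    return 0;
}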
2

SMART : an innovative multimedia computer architecture for processing ATM cells in real-time

Cashman, Neil January 1998 (has links)
No description available.
3

Realization of Differentiated Quality of Service for Wideband Code Division Multiple Access Core Network

Fang, Yechang 05 March 2010 (has links)
The development of 3G (third-generation telecommunication) value-added services brings higher requirements for Quality of Service (QoS). Wideband Code Division Multiple Access (WCDMA) is one of the three 3G standards, and enhancing QoS in the WCDMA Core Network (CN) is becoming more and more important for users and carriers. This dissertation focuses on enhancing QoS for the WCDMA CN, with the aim of realizing the DiffServ (Differentiated Services) QoS model on it. Based on the parallelism characteristic of Network Processors (NPs), NP programming models are classified as Pool of Threads (POTs) and Hyper Task Chaining (HTC). In this study, an integrated programming model that combines the two was designed. The model is both efficient and flexible, and it also addresses the problems of sharing conflicts and packet ordering. We used this model as the programming model to realize DiffServ QoS for the WCDMA CN. The realization of the DiffServ model mainly consists of buffer management, packet scheduling, and packet classification algorithms based on NPs. First, we proposed an adaptive buffer management algorithm called Packet Adaptive Fair Dropping (PAFD), which takes both fairness and throughput into consideration and has smooth service curves. Second, an improved packet scheduling algorithm called Priority-based Weighted Fair Queuing (PWFQ) was introduced to ensure fairness in packet scheduling and to reduce the queuing time of data packets, while keeping delay and jitter within a small range. Third, a multi-dimensional packet classification algorithm called Classification Based on Network Processors (CBNPs) was designed; it effectively reduces memory accesses and storage space and has lower time and space complexity. Lastly, an integrated hardware and software system implementing the DiffServ QoS model for the WCDMA CN was proposed and implemented on the IXP2400 NP. According to the experimental results, the proposed system significantly enhances QoS for the WCDMA CN: it improves response-time consistency, reduces display distortion, keeps sound and image synchronized, and thus increases network efficiency and conserves network resources.
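For readers unfamiliar with the scheduling baseline, the C sketch below shows the textbook weighted-fair-queuing computation of virtual finish times that PWFQ builds on; the priority extensions and parameters of the dissertation's PWFQ are not reproduced, and the weights and packet sizes are arbitrary.

/* Minimal weighted fair queuing sketch: each flow's packet gets a virtual
 * finish time and the scheduler transmits packets in increasing finish-time
 * order, so higher-weight flows are served proportionally more often. */
#include <stdio.h>

#define NFLOWS 3

typedef struct {
    double weight;        /* share of link capacity */
    double last_finish;   /* virtual finish time of the flow's previous packet */
} flow_t;

/* Compute and remember the virtual finish time of a newly queued packet. */
static double enqueue(flow_t *f, double virtual_now, double pkt_bytes) {
    double start = (f->last_finish > virtual_now) ? f->last_finish : virtual_now;
    f->last_finish = start + pkt_bytes / f->weight;
    return f->last_finish;
}

int main(void) {
    flow_t flows[NFLOWS] = { {0.5, 0.0}, {0.3, 0.0}, {0.2, 0.0} };
    /* Three 1500-byte packets arrive at virtual time 0, one per flow. */
    for (int i = 0; i < NFLOWS; i++) {
        double ft = enqueue(&flows[i], 0.0, 1500.0);
        printf("flow %d (weight %.1f): virtual finish time %.1f\n",
               i, flows[i].weight, ft);
    }
    return 0;
}

With these weights the flow with weight 0.5 gets the earliest finish time and is served first, which is exactly the proportional-share behaviour a fair queuing scheduler is meant to provide.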
4

A CAM-Based, High-Performance Classifier-Scheduler for a Video Network Processor

Tarigopula, Srivamsi 05 1900 (has links)
Classification and scheduling are key functionalities of a network processor. Network processors are equipped with application-specific integrated circuits (ASICs), so that as IP (Internet Protocol) packets arrive, they can be processed directly without using the central processing unit. A new network processor, called the video network processor (VNP), is proposed for real-time broadcasting of video streams for IP television (IPTV). This thesis explores the challenges of designing a combined classification and scheduling module for a VNP. I propose and design the classifier-scheduler module which classifies and schedules data for the VNP. The proposed module discriminates between IP packets and video packets: video packets are further processed for digital rights management (DRM), while IP packets carrying regular traffic traverse the module without any modification. The basic architecture of the VNP and the architecture of the classifier-scheduler module, based on content addressable memory (CAM) and random access memory (RAM), are proposed. The module was designed and simulated in Xilinx ISE 9.1i using the ISE simulator, achieving a throughput of 1.79 Mbps and a maximum operating frequency of 111.89 MHz at a power dissipation of 33.6 mW. The code was translated and mapped for the Spartan and Virtex device families.
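To illustrate the CAM-plus-RAM lookup the module is built around, here is a small C sketch that emulates a ternary match table in software and uses the matching index to select an action from a RAM array. The field layout, port values, and actions are assumptions made for the example, not the thesis design, and a real CAM compares all entries in parallel in one cycle rather than scanning them.

/* CAM + RAM classification sketch: a (value, mask) table stands in for the
 * CAM, and the index of the first match selects an action from "RAM". */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t value;   /* key bits to match (e.g. protocol/port fields) */
    uint32_t mask;    /* 1 = bit must match, 0 = don't care */
} cam_entry_t;

enum action { FORWARD_PLAIN, SEND_TO_DRM, DROP };

static const cam_entry_t cam[] = {
    { 0x000011D4, 0x0000FFFF },   /* hypothetical video port 0x11D4 -> DRM path */
    { 0x00000050, 0x0000FFFF },   /* port 80 -> plain IP forwarding */
};
static const enum action ram[] = { SEND_TO_DRM, FORWARD_PLAIN };

static enum action classify(uint32_t key) {
    for (unsigned i = 0; i < sizeof cam / sizeof cam[0]; i++)
        if ((key & cam[i].mask) == (cam[i].value & cam[i].mask))
            return ram[i];         /* first (highest-priority) match wins */
    return DROP;                   /* default action for unmatched packets */
}

int main(void) {
    printf("key 0x11D4 -> action %d\n", classify(0x000011D4));  /* SEND_TO_DRM */
    printf("key 0x0050 -> action %d\n", classify(0x00000050));  /* FORWARD_PLAIN */
    printf("key 0x1234 -> action %d\n", classify(0x00001234));  /* DROP */
    return 0;
}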
5

WASP : Lightweight Programmable Ephemeral State on Routers to Support End-to-End Applications

Martin, Sylvain 07 November 2007 (has links)
We present WASP (World-friendly Active packets for ephemeral State Processing), a novel active networks architecture that enables ephemeral storage of information on routers in order to ease distributed application synchronisation and co-operation. We aimed at a design compatible with modern router hardware and with network operators' goals: our solution has to scale with the number of interfaces of the device and to support throughputs of several Gbps. Throughout this thesis we searched for the best trade-off between features (platform flexibility) and guarantees (platform safety), with as little performance sacrifice as possible. We picked the Ephemeral State Processing (ESP) router, developed by K. Calvert's team at the University of Kentucky, as a starting point and extended it with our own virtual processor (VPU) to offer higher flexibility to the network programmer. The VPU is a minimalist bytecode interpreter that manipulates the content of the "ephemeral state store" of the router according to a microprogram present in packets. It ultimately allows the microprogram to drop or forward the packet on any router, so WASP routers act as remotely programmable filters around unmodified IP routing cores. We developed two implementations of WASP: a "reference" module for the Linux kernel and, based on that prototype experience, a WASP filter application for the IXP2400 network processor that demonstrates the feasibility of our platform at higher speeds. We extensively tested these two implementations against their ESP counterparts in order to estimate the overhead of our approach. High-speed tests on the IXP were also performed to ensure WASP's robustness, and they were rich in lessons for future development on programmable network devices. The nature of WASP makes it a platform of choice for detecting properties of the network along a given path. Thanks to per-flow variables (even if ephemeral) and its ability to sustain custom processing at wire speed, we can, for instance, implement lightweight measurement of QoS parameters or enforce application-specific congestion control. We have, however, opted -- in the context of this thesis -- to focus on another use of the platform: using the ephemeral state to advertise and detect members of distributed applications (e.g. grid computing or peer-to-peer systems) in a purely decentralised way. To evaluate the benefits of this approach, we propose a model of a peer-to-peer community where peers try to rejoin former neighbours, and we show through simulations how efficiency and quality of user experience evolve as more WASP routers are present in the network.
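As a rough picture of what an ephemeral state store does, the following C sketch keeps tagged counters that expire a fixed time after their last write, the kind of one-shot update a passing packet's microprogram might request. The 10-second lifetime, the store size, and the single "count" operation are assumptions for illustration, not the actual WASP/ESP instruction set.

/* Toy ephemeral state store: entries expire LIFETIME seconds after the
 * last write; a packet can read-increment-write the entry matching its tag. */
#include <stdio.h>
#include <time.h>

#define STORE_SIZE 64
#define LIFETIME   10            /* seconds an entry stays valid (assumed) */

typedef struct {
    unsigned long tag;           /* key carried in the packet */
    long value;
    time_t written;              /* last write time, for expiry */
    int used;
} entry_t;

static entry_t store[STORE_SIZE];

/* Increment the counter tagged `tag`, creating it if absent; return new value. */
static long esp_count(unsigned long tag, time_t now) {
    entry_t *free_slot = NULL;
    for (int i = 0; i < STORE_SIZE; i++) {
        if (store[i].used && now - store[i].written > LIFETIME)
            store[i].used = 0;                      /* lazily expire stale entries */
        if (store[i].used && store[i].tag == tag) {
            store[i].value++;
            store[i].written = now;
            return store[i].value;
        }
        if (!store[i].used && !free_slot) free_slot = &store[i];
    }
    if (free_slot) {                                /* first packet carrying this tag */
        free_slot->tag = tag; free_slot->value = 1;
        free_slot->written = now; free_slot->used = 1;
        return 1;
    }
    return -1;                                      /* store full: degrade gracefully */
}

int main(void) {
    time_t now = time(NULL);
    printf("count = %ld\n", esp_count(0xABCD, now));       /* 1 */
    printf("count = %ld\n", esp_count(0xABCD, now + 2));   /* 2 */
    printf("count = %ld\n", esp_count(0xABCD, now + 20));  /* expired, back to 1 */
    return 0;
}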
6

Overlay Architectures for FPGA-Based Software Packet Processing

Labrecque, Martin 16 June 2011 (has links)
Packet processing is the enabling technology of networked information systems such as the Internet and is usually performed with fixed-function custom-made ASIC chips. As communication protocols evolve rapidly, there is increasing interest in adapting features of the processing over time and, since software is the preferred way of expressing complex computation, we are interested in finding a platform to execute packet processing software with the best possible throughput. Because FPGAs are widely used in network equipment and can implement processors, we are motivated to investigate executing software directly on the FPGAs. Off-the-shelf soft processors on FPGA fabric are currently geared towards performing embedded sequential tasks; in contrast, network processing is most often inherently parallel between packet flows, if not between individual packets. Our goal is to allow multiple threads of execution in an FPGA to reach a higher aggregate throughput than commercially available shared-memory soft multiprocessors via improvements to the underlying soft processor architecture. We study a number of processor pipeline organizations to identify which ones can scale to a larger number of execution threads and find that tuning multithreaded pipelines can provide compact cores with high throughput. We then perform a design space exploration of multicore soft systems, compare single-threaded and multithreaded designs to identify scalability limits, and develop processor architectures that allow threads to execute with as few architectural stalls as possible, in particular with instruction replay and static hazard detection mechanisms. To further reduce wait times, we allow threads to execute speculatively by leveraging transactional memory. Our multithreaded multiprocessor, along with our compilation and simulation framework, makes the FPGA easy to use for an average programmer, who can write an application as a single thread of computation with coarse-grained synchronization around shared data structures. Compared with multithreaded processors using lock-based synchronization, we measure up to 57% additional throughput with transactional-memory-based synchronization. Given our applications, gigabit interfaces, and a 125 MHz system clock rate, our results suggest that soft processors can process packets in software at high throughput and low latency, while capitalizing on the FPGAs already available in network equipment.
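The throughput gain quoted above comes from replacing pessimistic locking with speculative execution. The C sketch below contrasts the two in miniature, using a single-word compare-and-swap retry loop as a stand-in for transactional memory; real transactional memory covers multi-word critical sections, which this example does not.

/* Pessimistic (lock) versus optimistic (speculate, retry on conflict)
 * updates of shared state.  Build with -pthread. */
#include <stdio.h>
#include <pthread.h>
#include <stdatomic.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long pessimistic_counter = 0;
static atomic_long optimistic_counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        /* Coarse-grained, lock-based: every thread serialises here. */
        pthread_mutex_lock(&lock);
        pessimistic_counter++;
        pthread_mutex_unlock(&lock);

        /* Optimistic: read, compute, publish only if nobody interfered. */
        long seen, next;
        do {
            seen = atomic_load(&optimistic_counter);
            next = seen + 1;
        } while (!atomic_compare_exchange_weak(&optimistic_counter, &seen, next));
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("lock-based: %ld, optimistic: %ld\n",
           pessimistic_counter, atomic_load(&optimistic_counter));
    return 0;
}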
7

Design and Implementation of a High Performance Network Processor with Dynamic Workload Management

Duggisetty, Padmaja 23 November 2015 (has links)
The Internet plays a crucial part in today's world. Be it personal communication, business transactions, or social networking, the Internet is used everywhere, and hence the speed of the communication infrastructure plays an important role. As the number of users grows, network usage increases: network data rates have ramped up from a few Mb/s to Gb/s in less than a decade. The network infrastructure therefore needed a major upgrade to support such high data rates. Technological advancements have enabled communication links such as optical fibres to support these high bandwidths, but the processing speed at the nodes remained constant. This created a need for specialised packet-processing devices that could match the increasing line rates, which led to the emergence of network processors, devices that are both programmable and flexible. To support the growing number of Internet applications, the single-core network processor has evolved into a multi-/many-core network processor with multiple cores on a single chip. This improves packet-processing speed and hence the performance of a network node. Multi-core network processors cater to the needs of high-bandwidth networks by exploiting the inherent packet-level parallelism of network traffic, but they still face intrinsic challenges such as load balancing: to maximise the throughput of a multi-core network processor, it is important to distribute the traffic evenly across all the cores. This thesis describes a multi-core network processor with dynamic workload management. A multi-core network processor that performs multiple applications is designed to act as a test bed, and a workload management algorithm is designed to distribute the workload evenly across all the available cores and hence maximise the performance of the network processor. Runtime statistics of all the cores are collected and updated at run time to decide which application each core should perform, enabling an even distribution of workload among the cores; when a core is detected to be overloaded, the applications assigned to the cores are re-assigned. For testing purposes, we built a flexible and reusable platform on the NetFPGA-10G board, which uses an FPGA-based approach to prototyping network devices. The performance of the designed workload management algorithm is tested by measuring the throughput of the system under varying workloads.
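The reassignment step can be pictured with a short C sketch: poll per-core load counters and, when one core is loaded well beyond the others, assign its application to the least-loaded core as well so the traffic can be split. The core count, the 2x threshold, and the load metric are illustrative assumptions rather than the thesis parameters.

/* Dynamic workload management sketch: find the hottest and coldest cores
 * and reassign work when the imbalance exceeds a threshold. */
#include <stdio.h>

#define NCORES 4

typedef struct {
    int load;   /* e.g. packets queued or busy cycles in the last interval */
    int app;    /* id of the application currently mapped to this core */
} core_t;

static void rebalance(core_t cores[NCORES]) {
    int hot = 0, cold = 0;
    for (int i = 1; i < NCORES; i++) {
        if (cores[i].load > cores[hot].load)  hot = i;
        if (cores[i].load < cores[cold].load) cold = i;
    }
    if (cores[hot].load > 2 * cores[cold].load) {   /* reassign only when worth the cost */
        printf("assigning app %d (core %d, load %d) also to core %d (load %d)\n",
               cores[hot].app, hot, cores[hot].load, cold, cores[cold].load);
        cores[cold].app = cores[hot].app;
    }
}

int main(void) {
    core_t cores[NCORES] = { {120, 0}, {35, 1}, {80, 2}, {40, 3} };
    rebalance(cores);   /* core 0 carries more than twice core 1's load */
    return 0;
}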
8

Micro-Network Processor : A Processor Architecture for Implementing NoC Routers

Martin Rovira, Julia, Manuel Fructoso Melero, Francisco January 2007 (has links)
<p>Routers are probably the most important component of a NoC, as the performance of the whole network is driven by the routers’ performance. Cost for the whole network in terms of area will also be minimised if the router design is kept small. A new application specific processor architecture for implementing NoC routers is proposed in this master thesis, which will be called µNP (Micro-Network Processor). The aim is to offer a solution in which there is a trade-off between the high performance of routers implemented in hardware and the high level of flexibility that could be achieved by loading a software that routed packets into a GPP. Therefore, a study including the design of a hardware based router and a GPP based router has been conducted. In this project the first version of the µNP has been designed and a complete instruction set, along with some sample programs, is also proposed. The results show that, in the best case for all implementation options, µNP was 7.5 times slower than the hardware based router. It has also behaved more than 100 times faster than the GPP based router, keeping almost the same degree of flexibility for routing purposes within NoC.</p>