1 |
Fairness-Aware Uplink Packet Scheduling Based on User Reciprocity for Long Term Evolution. Wu, Hsuan-Cheng, 3 August 2011.
(No abstract available.)
|
2 |
Concurrent Implementation of Packet Processing Algorithms on Network Processors. Groves, Mark, January 2006.
Network Processor Units (NPUs) are a compromise between software-based and hardwired packet processing solutions. While slower than hardwired solutions, NPUs have the flexibility of software-based solutions, allowing them to adapt more quickly to changes in network protocols.

Network processors contain multiple processing engines, so several packets can be processed simultaneously within the NPU. Each engine is also multi-threaded, with hardware support built in to reduce the cost of concurrency. This design lets the NPU handle multiple packets concurrently: while one thread waits for a memory access to complete, another thread can process a different packet. By handling several packets at once, an NPU can achieve processing power similar to that of traditional packet processing hardware, but with greater flexibility.

That flexibility, however, is also one of the NPU's drawbacks. Programming a network processor requires an in-depth understanding of the hardware as well as a solid foundation in concurrent design and programming. This thesis explores the challenges of programming a network processor, the Intel IXP2400, using a single-threaded packet scheduling algorithm as a sample case. The algorithm is a GPS approximation scheduler with constant-time execution. The thesis examines the process of implementing the algorithm in a multi-threaded environment and discusses its scalability and load-balancing aspects. In addition, the scheduler implementation is optimized to improve potential concurrency. The synchronization primitives available on the network processor are also examined, since they play a significant part in minimizing the overhead of synchronizing the algorithm's memory accesses.
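The abstract does not spell out the constant-time GPS approximation it uses. Deficit Round Robin is one classic constant-time approximation of Generalized Processor Sharing and can serve as an illustrative sketch; the class name, quantum value, and overall shape below are assumptions, not the thesis's actual design.

```python
from collections import deque

class DRRScheduler:
    """Deficit Round Robin: a constant-time-per-packet approximation of
    GPS. Illustrative sketch only; the thesis's scheduler may differ."""

    def __init__(self, quantum=500):
        self.quantum = quantum   # bytes of credit granted per round
        self.queues = {}         # flow id -> deque of packet sizes
        self.deficit = {}        # flow id -> accumulated byte credit
        self.active = deque()    # round-robin order of backlogged flows

    def enqueue(self, flow, size):
        self.queues.setdefault(flow, deque())
        self.deficit.setdefault(flow, 0)
        if flow not in self.active:
            self.active.append(flow)
        self.queues[flow].append(size)

    def dequeue(self):
        """Serve the next backlogged flow; return the packets it sends."""
        while self.active:
            flow = self.active[0]
            self.deficit[flow] += self.quantum
            sent, q = [], self.queues[flow]
            # Send head-of-line packets while the flow has enough credit.
            while q and q[0] <= self.deficit[flow]:
                size = q.popleft()
                self.deficit[flow] -= size
                sent.append((flow, size))
            self.active.popleft()
            if q:
                self.active.append(flow)  # still backlogged: next round
            else:
                self.deficit[flow] = 0    # idle flows keep no credit
            if sent:
                return sent
        return []
```

Each `dequeue` call touches one flow and does constant work per packet sent, which is what makes DRR a constant-time GPS approximation.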
|
4 |
A Pre-Scheduling Mechanism for LTE Handover. Su, Wei-Ming, 19 July 2012.
(No abstract available.)
|
5 |
Evaluation of Incentive-compatible Differentiated Scheduling for Packet-switched Networks. Lin, Yunfeng, January 2005.
Communication applications have diverse network service requirements. For instance, Voice over IP (VoIP) demands short end-to-end delay, whereas the File Transfer Protocol (FTP) benefits more from high throughput than from short delay. The Internet, however, delivers a uniform best-effort service, and much research has therefore been conducted on enhancing it to provide service differentiation. Most existing proposals require additional access-control mechanisms, such as admission control and pricing, which are complicated to implement and keep those proposals from being incrementally deployable. Incentive-compatible Differentiated Scheduling (ICDS) instead gives applications an incentive to choose a service class according to their burst characteristics, without any additional access-control mechanism.

This thesis investigates the behaviour of ICDS with different types of traffic through analysis and extensive simulations. The results provide evidence that ICDS achieves its design goal. In addition, the thesis revises the initial ICDS algorithm to provide fast convergence for TCP traffic.
|
7 |
Implementation of Dynamic Queuing Scheduler for DiffServ Networks on Linux Platform. Wu, Wei-Cheng, 10 July 2002.
Edge and core routers in DiffServ networks require an effective scheduling mechanism. In this thesis, we design and implement a DiffServ scheduler on the Linux platform to provide QoS for different PHB requirements.
We first modify the PDD model proposed by Dovrolis and then develop two new scheduling algorithms: Priority Queue with Quantum (PQWQ) and Average Delay Queue (ADQ). PQWQ provides lower delay for EF traffic than Deficit Round Robin (DRR) and higher network utilization than a Priority Queue (PQ) with an EF token bucket. In addition, PQWQ can guarantee a minimum bandwidth for the AF and Default PHBs, avoiding starvation of the low-priority PHBs.
The second scheduler, ADQ, is designed to provide different levels of delay for the AF classes. The average delays of the four AF classes can be made proportional by adjusting the Delay Differentiation Parameter (DDP). This proportional scheme allows a higher-priority class to send packets more quickly and therefore achieve higher QoS.
Finally, we implement both schedulers, PQWQ and ADQ, on the Linux platform. We adopt a shared-buffer scheme for the AF PHB; shared-buffer management effectively improves buffer utilization and avoids unnecessary packet drops caused by unfair buffer allocation. The experimental results show that the new DiffServ schedulers not only provide lower delay and higher bandwidth utilization for the EF PHB but also achieve proportional delay among the AF classes.
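The proportional-delay idea behind ADQ can be sketched as a selection rule: among backlogged AF classes, serve the one whose measured average delay is furthest above what its DDP entitles it to. This is an illustrative sketch only; the function and field names are assumptions, and the thesis's actual ADQ algorithm may differ.

```python
def pick_af_class(backlog, avg_delay, ddp):
    """Proportional-delay sketch: with DDPs ddp[c], the target is
    avg_delay[i] / avg_delay[j] == ddp[i] / ddp[j], so the class with the
    largest normalized delay avg_delay[c] / ddp[c] is lagging most behind
    its target and should be served next. Illustrative only."""
    candidates = [c for c in backlog if backlog[c]]  # non-empty queues
    if not candidates:
        return None
    return max(candidates, key=lambda c: avg_delay[c] / ddp[c])
```

Serving the class with the largest normalized delay continually pulls the measured delay ratios back toward the configured DDP ratios.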
|
8 |
A Jitter Minimization Mechanism with Credit/Deficit Adjustment in IPv6-Based DiffServ Networks. Shiu, Yi-Min, 13 August 2003.
In a DiffServ network, edge and core routers classify traffic flows into different PHBs and provide different QoS for the classified flows. Many packet schedulers have been proposed to achieve satisfactory QoS guarantees, but the IETF has not formally standardized an appropriate and effective packet scheduler that minimizes jitter for real-time traffic.
In the IETF specifications, EF flows are characterized by low latency, low packet loss rate, and low jitter, so real-time traffic is usually classified into the EF flow. Given the characteristics of real-time traffic, it is inappropriate to forward packets either too fast or too slow. Hence, in this thesis we propose a mechanism in which each packet carries its own per-hop queuing delay. If a packet is forwarded within its per-hop queuing delay, it arrives early and accumulates credit; if it is forwarded beyond that delay, it arrives late and accumulates deficit. The credit/deficit information is stored in an IPv6 optional header so that it can be carried across the whole network. Minimizing the credit/deficit minimizes the jitter as well. Our design is based on a modified WFQ extended with queuing-delay estimation and dynamic class changes; the dynamic class changes allow EF packets to switch among queues to achieve lower jitter and nearly constant delay.
We first implement the traditional WFQ scheduler on the Linux platform and then implement the Credit/Deficit WFQ (CDWFQ). The experimental results show that CDWFQ provides nearly constant queuing delay, a lower packet loss rate, and lower jitter for EF traffic flows.
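The credit/deficit bookkeeping and the dynamic class change described above can be sketched in a few lines. The function names, queue labels, and thresholds are illustrative assumptions, not the thesis's CDWFQ implementation.

```python
def update_balance(balance, budget, actual):
    """Per-hop Credit/Deficit update sketch: a packet forwarded faster
    than its per-hop queuing-delay budget gains credit (positive balance);
    one forwarded slower accumulates deficit (negative balance). The
    running balance would ride in an IPv6 optional header."""
    return balance + (budget - actual)

def choose_queue(balance, queues=('fast', 'normal', 'slow')):
    """Dynamic class change sketch: packets running early (credit) are
    demoted to a slower queue, late packets (deficit) are promoted, so
    end-to-end delay stays near constant and jitter shrinks."""
    if balance > 0:
        return queues[2]   # early: let it wait
    if balance < 0:
        return queues[0]   # late: hurry it along
    return queues[1]
```

Driving the balance toward zero at every hop is what makes the end-to-end delay nearly constant, which is exactly the jitter-minimization goal.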
|
9 |
Soft Real-Time Switched Ethernet: Best-Effort Packet Scheduling Algorithm, Implementation, and Feasibility Analysis. Wang, Jinggang, 10 October 2002.
In this thesis, we present a MAC-layer packet scheduling algorithm, called the Best-effort Packet scheduling Algorithm (BPA), for real-time switched Ethernet networks. BPA considers a message model in which application messages have trans-node timeliness requirements specified using Jensen's benefit functions. The algorithm seeks to maximize aggregate message benefit by allowing message packets to inherit the benefit functions of their parent messages and scheduling packets to maximize aggregate packet-level benefit. Since the packet scheduling problem is NP-hard, BPA heuristically computes schedules with a worst-case cost of O(n^2), faster than the O(n^3) cost of the best known algorithm for the same problem, Chen and Muhlethaler's Algorithm (CMA). Our simulation studies show that BPA performs as well as or significantly better than CMA.
We also construct a real-time switched Ethernet by prototyping an Ethernet switch on a personal computer (PC) and implementing BPA in the network protocol stack of the Linux kernel for packet scheduling. Our performance measurements of BPA on this implementation confirm the effectiveness of the algorithm.
Finally, we derive timeliness feasibility conditions for real-time switched Ethernet systems that use BPA. These feasibility conditions allow real-time distributed systems to be constructed with BPA and guaranteed soft timeliness. (Master of Science thesis.)
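Benefit-function scheduling of the kind the abstract describes can be illustrated with a one-step greedy rule: transmit the packet whose inherited benefit function yields the most benefit per unit of transmission time. This is a hedged sketch of the general idea, not the thesis's actual O(n^2) BPA algorithm, and all field names are assumptions.

```python
def pick_packet(packets, now):
    """Greedy sketch of benefit-driven selection: each packet inherits its
    parent message's benefit function (benefit as a function of completion
    time) and the scheduler transmits the packet with the highest benefit
    per unit of transmission time. Illustrative heuristic only."""
    def benefit_density(p):
        finish = now + p['tx_time']           # when this packet completes
        return p['benefit'](finish) / p['tx_time']
    return max(packets, key=benefit_density, default=None)
```

A step benefit function (full benefit before a deadline, zero after) recovers deadline scheduling as a special case, while shaped functions express softer timeliness requirements.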
|
10 |
QoS-Aware Packet Scheduling by Looking Ahead Approach (有助於提高服務品質的前瞻式封包排程機制). Wen, Yung-Chuan, date unknown.
Running time-sensitive multimedia services such as Voice over IP (VoIP) and Video on Demand (VoD) over All-IP networks may deliver lower quality than the traditional circuit-switched networks.
Influenced by many factors, packets transported over a packet-switched network may suffer from long delay, large jitter, and a high packet loss rate. When a packet arrives at its destination late, there is no way to correct the problem. It would therefore be beneficial if routers forwarded packets based on their timeliness and importance, rather than in First-In-First-Out (FIFO) order, giving important late packets proper precedence; overall QoS satisfaction can then be improved significantly.
In this thesis, we develop a simple and effective scheduling policy based on this idea for environments where packets follow a predefined hop-by-hop time schedule. We consider routers with two different queue architectures: an ideal single preemptive queue and a practical set of multiple FIFO queues. To forward a packet, a router first assigns it a suitable profit function based on its timeliness and importance, together with the loading status of the succeeding routers along its predefined path, and then inserts the packet at an appropriate position in the output queues. Taking the load of succeeding routers into account allows a more accurate prediction of whether the packet can reach its destination on time.
We first study the single service class environment to learn the characteristics of this new scheduling policy, and then, building on that knowledge, design the multiple service class version. The challenge is to assign proper profit functions to the different classes of packets so that resources are used wisely, e.g. urgent and important packets get precedence.
We evaluate the performance of this approach through simulation with the NS-2 network simulator. The results show that our approach outperforms our earlier version, which does not take the loading status of succeeding routers into account, and that it can differentiate service classes to raise overall QoS satisfaction. Under heavy load, and by our evaluation metrics for the multi-class experiments, our method improves overall satisfaction by at least 34% compared with the Simulated Priority Queue scheduling algorithm.
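The look-ahead step, predicting arrival time from downstream load and discounting the profit of packets expected to miss their deadline, can be sketched as follows. Every field name, the load model, and the decay factor are illustrative assumptions, not the thesis's actual formulation.

```python
def predicted_arrival(now, remaining_hops):
    """Look-ahead sketch: predict when the packet reaches its destination
    by summing, over the routers left on its predefined path, the hop's
    fixed transmission delay plus a queuing estimate scaled by that
    router's reported load."""
    return now + sum(h['tx_delay'] + h['load'] * h['queue_delay']
                     for h in remaining_hops)

def profit(packet, now):
    """Assign a scheduling profit: full profit while the packet is
    predicted to arrive by its deadline, and a geometrically decayed
    profit once it is predicted late, so important late packets still
    receive some precedence."""
    eta = predicted_arrival(now, packet['remaining_hops'])
    if eta <= packet['deadline']:
        return packet['full_profit']
    return packet['full_profit'] * 0.5 ** (eta - packet['deadline'])
```

A router would then order its output queue by this profit value, which is how late-but-important packets overtake packets that still have slack.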
|