1

An Efficient Algorithm and Architecture for Network Processors

Batra, Shalini 11 August 2007 (has links)
A buffer management algorithm plays an important role in determining the packet loss ratio in a computer network. Two types of packet buffer management algorithms, static and dynamic, can be used in a Network Interface Card (NIC) of a network terminal. In general, dynamic algorithms are more efficient than static algorithms. However, once the buffer space allocated to an application is filled, further incoming packets for that application are rejected. We propose a history-based scheme called the History Based Dynamic Algorithm (HBDA), which reduces the packet loss ratio by monitoring whether or not an application is active. For the average network traffic load, HBDA improves the packet loss ratio by 15.9% and 11% (for load = 0.7) compared to the Dynamic Algorithm (DA) and the Dynamic Algorithm with Different Thresholds (DADT), respectively. For the heavy traffic load, the improvement is 16.2% and 11.7% (for load = 0.7), and for the actual traffic load it is 12.7% and 7.1% (for load = 0.7), over DA and DADT respectively. We also developed a new architecture for the Network Interface Card. The new architecture supports multi-processor systems and gives more consideration to the application with the highest priority. It has two control units for processing incoming packets in parallel. For the traffic mix with average network traffic loads, the new architecture improves the packet loss ratio for the priority application by a significant amount.
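The abstract does not spell out the exact HBDA rule, so the following is only a minimal sketch of the idea it describes: a dynamic-threshold buffer allocator in which an application's admission threshold is reduced when the application has not been recently active. The class name, activity window, alpha factor, and idle scaling below are illustrative assumptions rather than the thesis' actual parameters.

```python
# Minimal sketch of a history-based dynamic buffer allocator.
# The exact HBDA rule is not given in the abstract; the activity window,
# the alpha factor, and the idle scaling below are illustrative assumptions.

class HistoryBufferManager:
    def __init__(self, total_space, alpha=1.0, history_window=100, idle_scale=0.25):
        self.total = total_space          # total shared buffer space (bytes)
        self.alpha = alpha                # dynamic-threshold proportionality factor
        self.window = history_window      # how many recent arrivals count as "active"
        self.idle_scale = idle_scale      # threshold reduction for idle applications
        self.queue_len = {}               # per-application buffered bytes
        self.last_arrival = {}            # per-application time of last packet
        self.now = 0

    def _threshold(self, app):
        free = self.total - sum(self.queue_len.values())
        t = self.alpha * free             # classic dynamic-threshold bound
        # Applications with no recent arrivals are treated as inactive and
        # get a smaller share, leaving room for active ones.
        if self.now - self.last_arrival.get(app, -self.window) > self.window:
            t *= self.idle_scale
        return t

    def on_packet(self, app, size):
        """Admit or drop an arriving packet; returns True if admitted."""
        self.now += 1
        self.last_arrival[app] = self.now
        if self.queue_len.get(app, 0) + size <= self._threshold(app):
            self.queue_len[app] = self.queue_len.get(app, 0) + size
            return True
        return False                      # threshold exceeded -> packet dropped

    def on_departure(self, app, size):
        self.queue_len[app] = max(0, self.queue_len.get(app, 0) - size)
```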
2

Increasing the efficiency of network interface card

Uppal, Amit 15 December 2007 (has links)
A Network Interface Card (NIC) is used for receiving packets, processing them, passing them to the host processor, and sending packets to other computers in a network. The NIC uses a buffer management algorithm to distribute the buffer space among different applications. An application may use the User Datagram Protocol (UDP) or the Transmission Control Protocol (TCP), depending upon its type. A buffer management algorithm for UDP-based applications may be completely different from one for TCP-based applications, since in UDP-based applications the receiver does not send acknowledgements back to the sender. This thesis proposes two buffer management algorithms: 1) the Fairly Shared Dynamic Algorithm (FSDA) for UDP-based applications; 2) the Evenly Based Dynamic Algorithm (EBDA) for both UDP- and TCP-based applications. FSDA utilizes the full buffer memory and reduces packet losses significantly. EBDA reduces packet losses by taking the packet size factor in summation rather than multiplication, which also helps in maintaining fairness among different applications. For the average network traffic load, the FSDA algorithm improves the packet loss ratio by 18.5% over the dynamic algorithm and by 13.5% over the DADT, while EBDA improves it by 16.7% over the dynamic algorithm and by 11.8% over the DADT. For the heavy network traffic load, the FSDA algorithm improves the packet loss ratio by 16.8% over the dynamic algorithm and by 12.5% over the DADT, while EBDA improves it by 16.8% over the dynamic algorithm and by 12.6% over the DADT. For the actual traffic load, the improvement over DA and DADT is 13.6% and 7.5% for FSDA and 7.6% and 1.9% for EBDA.
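The distinction the abstract draws for EBDA, taking the packet size factor in summation rather than multiplication, can be illustrated with a short sketch. The exact FSDA/EBDA formulas are not given here, so the two threshold shapes, alpha, and size_factor below are assumptions for illustration only.

```python
# Illustrative contrast between a multiplicative and an additive ("summation")
# packet-size adjustment in a dynamic-threshold admission test.  The exact
# FSDA/EBDA formulas are not given in the abstract; alpha, size_factor, and
# the two threshold shapes below are assumptions for illustration only.

def threshold_multiplicative(alpha, free_space, size_factor):
    # packet-size factor scales the whole threshold
    return alpha * size_factor * free_space

def threshold_additive(alpha, free_space, size_factor):
    # packet-size factor enters as an added term (the "summation" form),
    # so the size adjustment no longer multiplies away a queue's base share
    return alpha * free_space + size_factor

def admit(queue_length, packet_size, threshold):
    """Generic admission test: accept while the queue stays under its threshold."""
    return queue_length + packet_size <= threshold
```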
3

Packet Data Flow Control in Evolved WCDMA Networks / Flödeskontroll av Paketdata i Vidareutvecklade WCDMA Nätverk

Bergström, Andreas January 2005 (has links)
The key idea of the new shared high-capacity channel, HSDPA, is to adapt the transmission rate to fast variations in the current radio conditions, thus enabling download peak data rates much higher than what WCDMA can offer today. This has induced a need for data that traverses the mobile network to be intermediately buffered in the Radio Base Station (RBS). A scheduling algorithm then basically selects the user with the most beneficial instantaneous radio conditions for access to the high-speed channel and transmission of its data over the air interface.

The purpose of this thesis is to design a flow control algorithm for the transmission of data packets between the network node directly above the RBS, the RNC, and the RBS. This flow control algorithm should keep the buffers in the RBS at such a level that the air interface may be fully utilized. Yet large buffers are not desirable since, for example, they induce longer round-trip times as well as the loss of all data in the buffers whenever the user moves to another cell and a handover is performed. Theoretical arguments and simulations show that both of these requirements may be met, even though it is a balancing act.

Suggested is a control-theoretic framework in which the level in the RBS buffers is kept sufficiently large by taking into account predictions of future outflow over the air and by using methods to compensate for outstanding data on the transport network. This makes it possible to keep the buffer levels stable and high enough to fully utilize the air interface. By using a more flexible adaptive control algorithm, it is shown possible to reach an even higher utilization of the air interface with the same or even lower buffering, which reduces the amount of data lost upon handovers. This loss is shown to be reduced even further by also taking system messages about upcoming handover events into account.
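As a rough illustration of the flow-control idea described above (keep the RBS buffer near a target level using a prediction of the outflow over the air interface, while compensating for data already in flight on the transport network), here is a minimal sketch. The gain, target level, and control interval are illustrative assumptions, not the thesis' tuned controller.

```python
# A minimal sketch of the flow-control idea: keep the RBS buffer near a target
# level by granting data from the RNC based on the predicted outflow over the
# air interface, compensating for data already in flight on the transport
# network.  Gain, target and interval values are illustrative assumptions.

def flow_control_grant(buffer_level, target_level, in_flight,
                       predicted_outflow, interval, gain=0.5):
    """Return how many bytes the RNC may send in the next control interval."""
    expected_level = buffer_level + in_flight - predicted_outflow * interval
    error = target_level - expected_level        # positive -> buffer will run low
    grant = predicted_outflow * interval + gain * error
    return max(0.0, grant)

# Example: 40 kB buffered, 60 kB target, 10 kB in flight,
# predicted drain of 3 kB/ms over a 10 ms control interval.
print(flow_control_grant(40e3, 60e3, 10e3, 3e3, 10))
```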
4

QoS provisioning in mobile ad hoc network by improving buffer management

Lin, Yo-Ho 04 August 2009 (has links)
none
5

關鍵鏈專案管理中多重專案排程與控制之緩衝管理方法研究 / Buffer Management for Multi Project Scheduling and Control in Critical Chain Project Management

吳敬賢, Nuntasukasame, Noppadon Unknown Date (has links)
None / Critical Chain Project Management (CCPM) has emerged in the last few years as a novel approach for managing projects. While many previous studies have examined CCPM for single-project management, CCPM multi-project management has received little attention, especially the capacity-constraint buffer sizing approach. The few papers that have examined and illustrated CCPM in a multi-project environment assumed that all the subprojects were identical, even though such a situation is impractical. The purpose of this dissertation is to compare the cut and paste method (C&PM) with the root square error method (RSEM) for sizing the project buffer, feeding buffers and capacity-constraint buffer, and to vary subproject parameters that affect the project schedule in multi-project scheduling. Keywords: Critical chain project management, Multi Project Scheduling, Buffer Management, Capacity constraint buffer, Buffer sizing method.
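For reference, the two buffer-sizing rules being compared can be written down concretely. A common statement of C&PM is that the buffer is half the length of the chain built from aggressive estimates, while RSEM takes the square root of the summed squares of the safety removed from each task. The sketch below assumes those standard formulations, and the task durations are made up for illustration.

```python
# Sketch of the two buffer-sizing rules applied to one chain of tasks.
# "safe" is the low-risk estimate, "aggressive" the 50%-probability estimate;
# the task durations are made up for illustration.
from math import sqrt

def buffer_cut_and_paste(aggressive):
    # C&PM: the buffer is half the chain length built from aggressive estimates
    return 0.5 * sum(aggressive)

def buffer_root_square_error(safe, aggressive):
    # RSEM: root of the summed squares of the safety removed from each task
    return sqrt(sum((s - a) ** 2 for s, a in zip(safe, aggressive)))

safe       = [10, 8, 12, 6]   # days, illustrative
aggressive = [5, 4, 6, 3]
print(buffer_cut_and_paste(aggressive))            # 9.0
print(buffer_root_square_error(safe, aggressive))  # ~9.27
```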
6

Online algoritmy pro rozvrhování paketů / Online Algorithms for Packet Scheduling

Veselý, Pavel January 2018 (has links)
We study online scheduling policies for buffer management models, in which packets arrive over time at the buffer of a network switch to be sent through its single output port. However, the bandwidth of the port is limited and some packets need to be dropped, based on their weights. The goal of the scheduler is to maximize the weighted throughput, that is, the total weight of the packets transmitted. Due to the natural lack of information about the future, optimal performance cannot be achieved; we thus pursue competitive analysis and its refinements to analyze online algorithms on worst-case inputs. Specifically, in the first part of the thesis, we focus on a simple online scheduling model with unit-size packets and deadlines, called Bounded-Delay Packet Scheduling. We design an optimal φ-competitive deterministic algorithm for the problem, where φ ≈ 1.618 is the golden ratio. It is based on a detailed understanding of an optimal schedule of pending packets, called the plan, which may be of independent interest. We also propose a semi-online setting with lookahead that allows the algorithm to see a little bit of the future, namely, packets arriving in the next few steps. We provide an algorithm with lookahead for instances in which each packet can be scheduled in at most two consecutive slots and prove lower...
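To make the Bounded-Delay model concrete, the sketch below simulates it with the standard greedy baseline (always transmit the heaviest pending, non-expired packet), which is known to be 2-competitive for this problem. It is not the plan-based φ-competitive algorithm of the thesis; the greedy rule is shown only to pin down the model.

```python
# The Bounded-Delay model in code: unit-size packets with weights and deadlines
# arrive over integer time slots, and one pending packet can be sent per slot.
# This is the standard greedy baseline (send the heaviest pending packet),
# not the plan-based phi-competitive algorithm developed in the thesis.
import heapq

def greedy_throughput(packets, horizon):
    """packets: list of (arrival_slot, deadline_slot, weight); returns weight sent."""
    by_arrival = sorted(packets)
    pending, total, i = [], 0.0, 0
    for t in range(horizon):
        while i < len(by_arrival) and by_arrival[i][0] <= t:
            a, d, w = by_arrival[i]
            heapq.heappush(pending, (-w, d))      # max-heap on weight
            i += 1
        while pending:
            w_neg, d = heapq.heappop(pending)
            if d >= t:                            # still before its deadline
                total += -w_neg
                break                             # one packet per slot
    return total

print(greedy_throughput([(0, 0, 1.0), (0, 2, 0.6), (1, 1, 0.9)], 3))  # 2.5
```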
7

Improving Fairness among TCP Flows by Cross-layer Stateless Approach

Tsai, Hsu-Sheng 26 July 2008 (has links)
Transmission Control Protocol (TCP) has been recognized as the most important transport-layer protocol for the Internet. It is distinguished by its reliable transmission, flow control, and congestion control. However, the issue of fair bandwidth-sharing among competing flows was not properly addressed in TCP. As web-based applications and interactive applications grow more popular, the number of short-lived flows conveyed on the Internet continues to rise. With conventional TCP, short-lived flows are unable to obtain a fair share of the available bandwidth. As a result, short-lived flows suffer from longer delays and a lower service rate. It is essential for the Internet to come up with an effective solution to this problem in order to accommodate the new traffic patterns. With a more equitable sharing of bottleneck bandwidth as its goal, two cross-layer stateless queue management schemes featuring Drop Maximum (DM) and Early Drop Maximum (EDM) are developed and presented in this dissertation. The fundamental idea is to drop packets from those flows having more than an equal share of bandwidth and to retain a low level of queue occupancy. The congestion window size of a TCP sender is carried in the options field of each packet. The proposed schemes are exercised on routers, which make their packet-dropping decisions according to the congestion windows. In case of link congestion, the queued packet with the largest congestion window is dropped from the queue. This lowers the sending rate of its sender and releases part of the occupied bandwidth for use by other competing flows. By so doing, the entire system approaches an equilibrium point with a rapid and fair distribution of bandwidth. As a stateless approach, the proposed schemes inherit numerous advantages in implementation and scalability. Extensive simulations were conducted to verify the feasibility and effectiveness of the proposed schemes. For the simple proposed packet discard scheme, Drop Maximum outperforms the other two stateless buffer management schemes, i.e., Drop Tail and Random Early Drop, in the scenario of homogeneous flows. However, with heterogeneous flows, Random Early Drop gains superiority over the packet discard schemes due to its additional buffer occupancy control mechanism. To overcome the lack of proper buffer occupancy control, Early Drop Maximum is thus proposed. As shown in the simulation results, this proposed scheme outperforms existing stateless techniques, including Drop Tail, Drop Maximum and Random Early Drop, in many respects, such as a fair sharing of available bandwidth and a short response time for short-lived flows.
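A minimal sketch of the Drop Maximum idea described above: on overflow the router drops the queued packet whose sender reports the largest congestion window, the cwnd value carried in the TCP options field. The early-drop variant shown simply starts evicting once occupancy crosses a fraction of capacity; that threshold and the eviction-on-enqueue structure are simplifications, not the dissertation's exact EDM policy.

```python
# Sketch of a Drop Maximum (DM) queue: each queued packet carries its sender's
# congestion window (cwnd, read from the TCP options field); on overflow, the
# packet with the largest cwnd is dropped, throttling the most aggressive flow.
# With early_threshold < 1.0, eviction starts before the queue is completely
# full (a simplified stand-in for Early Drop Maximum).

class DropMaxQueue:
    def __init__(self, capacity, early_threshold=1.0):
        self.capacity = capacity
        self.limit = int(capacity * early_threshold)  # 1.0 -> plain DM
        self.queue = []                               # list of (cwnd, packet_id)

    def enqueue(self, cwnd, packet_id):
        self.queue.append((cwnd, packet_id))
        dropped = []
        while len(self.queue) > self.limit:
            victim = max(self.queue)                  # largest advertised cwnd
            self.queue.remove(victim)
            dropped.append(victim)                    # its sender will slow down
        return dropped

    def dequeue(self):
        return self.queue.pop(0) if self.queue else None
```

For instance, DropMaxQueue(100, early_threshold=0.8) starts evicting the largest-cwnd packet once more than 80 packets are queued.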
8

JDiet: Footprint Reduction for Memory-Constrained Systems

Huffman, Michael John 01 June 2009 (has links) (PDF)
Main memory remains a scarce computing resource. Even though main memory is becoming more abundant, software applications are inexorably engineered to consume as much memory as is available. For example, expert systems, scientific computing, data mining, and embedded systems commonly suffer from the lack of main memory availability. This thesis introduces JDiet, an innovative memory management system for Java applications. The goal of JDiet is to provide the developer with a highly configurable framework to reduce the memory footprint of a memory-constrained system, enabling it to operate on much larger working sets. Inspired by buffer management techniques common in modern database management systems, JDiet frees main memory by evicting non-essential data to a disk-based store. A buffer retains a fixed amount of managed objects in main memory. As non-resident objects are accessed, they are swapped from the store to the buffer using an extensible replacement policy. While the Java virtual machine naïvely delegates virtual memory management to the operating system, JDiet empowers the system designer to select both the managed data and replacement policy. Guided by compile-time configuration, JDiet performs aspect-oriented bytecode engineering, requiring no explicit coupling to the source or compiled code. The results of an experimental evaluation of the effectiveness of JDiet are reported. A JDiet-enabled XML DOM parser is capable of parsing and processing over 200% larger input documents by sacrificing less than an order of magnitude in performance.
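The buffer-and-store pattern JDiet implements (a fixed number of resident objects, eviction of victims to a disk-based store under a pluggable replacement policy, and faulting them back in on access) can be sketched conceptually as follows. This is a Python illustration of the pattern only, not JDiet's Java bytecode machinery; shelve as the disk store and LRU as the replacement policy are assumed choices.

```python
# Conceptual sketch of the buffer-management pattern described above: a
# fixed-size in-memory buffer of managed objects, a disk-backed store for
# evicted ones, and a replacement policy (LRU here) choosing the victims.
import shelve
from collections import OrderedDict

class ObjectBuffer:
    def __init__(self, capacity, store_path="evicted.db"):
        self.capacity = capacity
        self.resident = OrderedDict()           # key -> object, kept in LRU order
        self.store = shelve.open(store_path)    # disk store for evicted objects

    def get(self, key):
        if key in self.resident:
            self.resident.move_to_end(key)      # mark as most recently used
            return self.resident[key]
        obj = self.store.pop(key)               # fault the object back in
        self.put(key, obj)
        return obj

    def put(self, key, obj):
        self.resident[key] = obj
        self.resident.move_to_end(key)
        if len(self.resident) > self.capacity:  # buffer full -> evict per policy
            victim_key, victim = self.resident.popitem(last=False)  # LRU victim
            self.store[victim_key] = victim

    def close(self):
        self.store.close()
```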
9

An Enhanced Dynamic Algorithm For Packet Buffer

Rajan, Vinod 11 December 2004 (has links)
A packet buffer for the protocol processor is a large memory space that holds incoming data packets for applications. Data packets for each application are stored in the form of FIFO queues in the packet buffer, and packets are dropped when the buffer is full. An efficient buffer management algorithm is required to manage the buffer space among the different FIFO queues and to avoid heavy packet loss. This thesis develops a simulation model for the packet buffer and studies the performance of conventional buffer management algorithms when applied to the packet buffer. It then proposes a new buffer management algorithm, the Dynamic Algorithm with Different Thresholds (DADT), to improve the packet loss ratio. This algorithm takes advantage of the different packet sizes for each application and proportionally allocates buffer space to each queue. The performance of the DADT algorithm depends upon the packet size distribution in a network traffic load, so three different network traffic loads are considered in our simulations. For the average network traffic load, the DADT algorithm shows an improvement of 6.7% in packet loss ratio over the conventional dynamic buffer management algorithm. For the high and actual network traffic loads, the DADT algorithm shows improvements of 5.45% and 3.6% in packet loss ratio, respectively. Based on the simulation results, the DADT algorithm outperforms the conventional buffer management algorithms for various network traffic loads.
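To give a feel for the kind of simulation study described above, here is a toy packet-buffer simulation that applies the conventional dynamic-threshold admission rule (a queue may grow only up to alpha times the remaining free space) and reports the resulting packet loss ratio. The application mix, packet sizes, service model, and alpha are all illustrative assumptions.

```python
# Toy packet-buffer simulation: packets from several applications arrive at
# random, are queued under a dynamic-threshold admission rule, drained at a
# fixed service rate, and the packet loss ratio is measured.  All parameters
# are illustrative, not the thesis' traffic loads.
import random

def simulate(total_space=64_000, alpha=1.0, steps=100_000, seed=1):
    random.seed(seed)
    apps = {"voip": 200, "video": 1200, "bulk": 1500}   # app -> packet size (bytes)
    queues = {a: 0 for a in apps}
    arrived = dropped = 0
    for _ in range(steps):
        app = random.choice(list(apps))
        size = apps[app]
        arrived += 1
        free = total_space - sum(queues.values())
        if queues[app] + size <= alpha * free:           # dynamic-threshold test
            queues[app] += size
        else:
            dropped += 1                                  # threshold exceeded
        drain = random.choice(list(apps))                 # serve one queue per step
        queues[drain] = max(0, queues[drain] - apps[drain])
    return dropped / arrived

print(f"packet loss ratio: {simulate():.3f}")
```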
