11. TCP-Carson: A Loss-event Based Adaptive AIMD Algorithm for Long-lived Flows. Kannan, Hariharan. 13 May 2002.
The diversity of network applications over the Internet has propelled researchers to rethink the strategies in the transport layer protocols. Current applications either use UDP without end-to-end congestion control mechanisms or, more commonly, use TCP. TCP continuously probes for bandwidth even at network steady state and thereby causes variation in the transmission rate and losses. This thesis proposes TCP Carson, a modification of the window-scaling approach of TCP Reno to suit long-lived flows using loss-events as indicators of congestion. We analyzed and evaluated TCP Carson using NS-2 over a wide range of test conditions. We show that TCP Carson reduces loss, improves throughput and reduces window-size variance. We believe that this adaptive approach will improve both network and application performance.
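The loss-event-driven AIMD idea described above can be pictured with a short sketch. The update rules and constants below are illustrative assumptions, not the actual TCP Carson algorithm: the window grows additively while no loss event is seen, the probing step is damped to reduce window-size variance, and a loss event triggers a multiplicative back-off.

```python
# Illustrative sketch of loss-event-driven adaptive AIMD (not the actual
# TCP Carson implementation; constants and update rules are assumptions).

class AdaptiveAIMD:
    def __init__(self, cwnd=10.0, alpha=1.0, beta=0.5):
        self.cwnd = cwnd      # congestion window (segments)
        self.alpha = alpha    # additive-increase step per RTT
        self.beta = beta      # multiplicative-decrease factor

    def on_rtt_no_loss(self):
        # Probe less aggressively as the flow approaches steady state:
        # shrinking the increase step damps window-size variance.
        self.cwnd += self.alpha
        self.alpha = max(0.1, self.alpha * 0.9)

    def on_loss_event(self):
        # A loss event (one or more losses within an RTT) signals congestion:
        # back off multiplicatively and resume cautious probing.
        self.cwnd = max(1.0, self.cwnd * self.beta)
        self.alpha = 1.0
```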
12. Wireless Data Transmission System for Real-time 3D Geophysical Sensing. Viggiano, David Anthony. 01 January 2007.
A wireless data transmission system was developed and implemented for use in real-time 3D geophysical sensing. Server and client applications were designed to run on a stationary computer and a mobile computer attached to the geophysical sensor, respectively. Several methods for optimizing communication over wireless networks using commonly available hardware were tested and compared, and a scheme for varying the size of transmissions in accordance with the recent performance of the wireless network was chosen. The final system was integrated with a 3D Ground Penetrating Radar (GPR) system and tested in a field experiment that spanned two weeks and involved the acquisition of 16 data volumes. The system performed successfully throughout the experiment and provided the feedback necessary for the error-free acquisition of all data volumes. Future development is discussed that would allow the system to support an automated wireless geophysical sensor network with multiple client sensors moved around the survey area by automated robots.
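As a rough illustration of "varying the size of transmissions in accordance with the recent performance of the wireless network", the sketch below grows or shrinks the per-send chunk based on the measured rate of the previous send. The function name, thresholds, and scaling factors are hypothetical, not taken from the thesis.

```python
# Hedged sketch of adaptive transmission sizing: grow the chunk while
# recent sends are fast, shrink it when they slow down. All thresholds
# and factors are illustrative assumptions.
import socket
import time

def send_adaptive(sock: socket.socket, data: bytes,
                  chunk=4096, lo=1024, hi=65536):
    sent = 0
    while sent < len(data):
        part = data[sent:sent + chunk]
        start = time.monotonic()
        sock.sendall(part)
        sent += len(part)
        elapsed = max(time.monotonic() - start, 1e-6)
        rate = len(part) / elapsed        # bytes/s over the last transmission
        if rate > 1_000_000 and chunk < hi:
            chunk *= 2                    # network keeping up: send more per call
        elif rate < 100_000 and chunk > lo:
            chunk //= 2                   # link degraded: smaller transmissions
```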
13. STCP: A New Transport Protocol for High-Speed Networks. Shivarudraiah, Ranjitha. 17 November 2009.
Transmission Control Protocol (TCP) is the dominant transport protocol today and is likely to be adopted in future high-speed and optical networks. A number of studies have modified or tuned the Additive Increase Multiplicative Decrease (AIMD) principle in TCP to enhance network performance. In this work, to efficiently exploit the high bandwidth available from high-speed and optical infrastructures, we propose Stratified TCP (STCP), which employs parallel virtual transmission layers in high-speed networks. In this technique, the AIMD principle of TCP is modified to probe the available link bandwidth more aggressively and efficiently, which in turn improves performance. Simulation results show that STCP offers a considerable performance improvement over other TCP variants such as the conventional TCP protocol and Layered TCP (LTCP).
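A minimal sketch of the stratified idea, assuming the common parallel-flow interpretation: k virtual layers share one congestion window, so additive increase scales with k while a loss event backs off only one layer's share. The exact rules in STCP may differ; this mirrors the general aggressive-probing principle the abstract describes.

```python
# Illustrative sketch, not the thesis's actual update rules: k virtual
# transmission layers aggregated into one flow.

class StratifiedAIMD:
    def __init__(self, layers=4, cwnd=10.0):
        self.k = layers
        self.cwnd = cwnd

    def on_rtt_no_loss(self):
        # Each virtual layer probes independently: +1 segment per layer
        # per RTT, so the aggregate ramps up k times faster than Reno.
        self.cwnd += self.k

    def on_loss_event(self):
        # Back off as if only one of the k layers halved its share,
        # giving a gentler aggregate decrease on high bandwidth-delay paths.
        self.cwnd *= (1.0 - 0.5 / self.k)
```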
14. A Study on the Application of FEC and ARQ at the Wireless Layer for TCP Throughput. 内藤, 克浩; 岡田, 啓; 山里, 敬也; 片山, 正昭; 小川, 明. 01 January 2001.
No description available.
15. Effects of Packet Aggregation on TCP Performance. Lu, Jia-ying. 08 September 2006.
Due to advances in technology and the growth of Internet usage, the demand for ever-larger network capacity remains the major challenge for network operators. To meet this increasing demand, optical networking has become the key technology in the current and next-generation Internet. In terms of network architecture, optical packet switching (OPS) promises the same high efficiency as its electronic counterpart, but it currently remains out of reach because of the difficulty of building optical random access memory and the ultra-high cost of fast optical switches that can handle more than 10^9 packets per second. Optical burst switching (OBS), on the other hand, is a more achievable, economical alternative. In OBS networks, packets are aggregated into much larger bursts before entering the core network, so fast optical switches are not required, and by incorporating a one-way delayed reservation scheme, OBS avoids using optical RAM. There has been much research activity on OBS. However, for an Internet where 90% of traffic is TCP, the effect of the packet aggregation introduced by OBS on TCP performance is still not well understood. Detti and Listanti derived a model for it, and the model was verified in simulation [2]. However, we found that many of the assumptions in their study are unrealistic, making the obtained result questionable. In this thesis, we relax their assumptions and design two new models accordingly, in order to gain a deeper understanding of the effects of packet aggregation on TCP performance. From our simulation results, we identify three factors: burst assembly, assembly delay, and assembler buffer size. Burst assembly has a positive effect on TCP throughput, while the other two have negative effects.
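The three factors identified above all live in the edge burst assembler. Below is a minimal sketch of such an assembler with illustrative, not thesis-specified, thresholds: packets accumulate in the assembler buffer until either a burst-size threshold or the assembly-delay timer fires.

```python
# Hedged sketch of an OBS edge burst assembler. Thresholds are
# illustrative assumptions; a real assembler would also arm a standalone
# timer rather than checking the deadline only on packet arrival.
import time

class BurstAssembler:
    def __init__(self, max_burst=64_000, max_delay=0.005):
        self.buf = []                 # assembler buffer (packets awaiting burst)
        self.size = 0
        self.max_burst = max_burst    # burst size threshold (bytes)
        self.max_delay = max_delay    # assembly delay bound (seconds)
        self.first_arrival = None

    def add(self, packet: bytes):
        if self.first_arrival is None:
            self.first_arrival = time.monotonic()
        self.buf.append(packet)
        self.size += len(packet)
        if (self.size >= self.max_burst or
                time.monotonic() - self.first_arrival >= self.max_delay):
            return self.flush()       # burst is ready for the optical core
        return None

    def flush(self) -> bytes:
        burst = b"".join(self.buf)
        self.buf, self.size, self.first_arrival = [], 0, None
        return burst
```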
16. A Study on TCP Wireless Connections Using Base Station Diversity. 内藤, 克浩; 岡田, 啓; 山里, 敬也; 片山, 正昭. 01 April 2004.
No description available.
17. dspIP: A TCP/IP Implementation for a Digital Signal Processor. Tourish, John Patrick. 10 December 2013.
From the initial implementations for the DEC PDP-11 to those done today for commodity PICs, the TCP/IP code stack continues to work its way into a smaller and more omnipresent class of devices. One shortcoming of current devices on the leading edge of this trend is that they belong to the microcontroller category, which typically lacks any appreciable signal processing capability. Applications such as consumer electronics and wireless sensor networks could benefit greatly from single-chip network-capable devices based on a Digital Signal Processing (DSP) core rather than a microcontroller. This report details the design and implementation of a partial TCP/IP code stack intended for such a DSP.
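One routine every minimal TCP/IP stack must carry, whatever the target core, is the 16-bit one's-complement Internet checksum (RFC 1071) used by IP, TCP, and UDP. The sketch below is a generic rendering for illustration, not code from the report; a DSP port would typically exploit the core's accumulate hardware.

```python
# Generic sketch of the RFC 1071 Internet checksum (not code from the
# report): sum 16-bit big-endian words with end-around carry, then
# return the one's complement of the result.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF                        # one's complement

# Example: the checksum of empty data is 0xFFFF.
assert internet_checksum(b"") == 0xFFFF
```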
18. TCP Adaptation Framework in Data Centers. Ghobadi, Monia. 09 January 2014.
Congestion control has been extensively studied for many years. Today, the Transmission Control Protocol (TCP) is used in a wide range of networks (LAN, WAN, data center, campus network, enterprise network, etc.) as the de facto congestion control mechanism. Despite its common usage, TCP operates in these networks with little knowledge of the underlying network or traffic characteristics. As a result, it is doomed to continuously increase or decrease its congestion window size in order to handle changes in the network or traffic conditions. Thus, TCP frequently overshoots or undershoots the ideal rate, making it a "Jack of all trades, master of none" congestion control protocol.
In light of the emerging popularity of centrally controlled Software-Defined Networks (SDNs), we ask whether we can take advantage of the information available at the central controller to improve TCP. Specifically, in this thesis, we examine the design and implementation of OpenTCP, a dynamic and programmable TCP adaptation framework for SDN-enabled data centers. OpenTCP gathers global information about the status of the network and traffic conditions through the SDN controller, and uses this information to adapt TCP. OpenTCP periodically sends updates to end-hosts which, in turn, update their behaviour using a simple kernel module.
In this thesis, we first present two real-world TCP adaptation experiments in depth: (1) using TCP pacing in inter-data center communications with shallow buffers, and (2) using Trickle to rate limit TCP video streaming. We explain the design, implementation, limitations, and benefits of each TCP adaptation to highlight the potential power of having a TCP adaptation framework in today's networks. We then discuss the architectural design of OpenTCP, as well as its implementation and deployment at SciNet, Canada's largest supercomputer center. Furthermore, we study use-cases of OpenTCP using the ns-2 network simulator. We conclude that OpenTCP-based congestion control simplifies the process of adapting TCP to network conditions, leads to improvements in TCP performance, and is practical in real-world settings.
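A hedged sketch of the control loop the abstract implies: the controller turns global statistics gathered through the SDN controller into a tuning directive and pushes it to end hosts, where a kernel module applies it. The statistic names, thresholds, and message format below are assumptions for illustration, not OpenTCP's actual interface.

```python
# Illustrative sketch of an OpenTCP-style adaptation loop. All names,
# thresholds, and the UDP/JSON message format are assumptions.
import json
import socket

def compute_update(link_utilization: float, loss_rate: float) -> dict:
    # Lightly loaded network: probe faster; congested: pace and back off.
    if link_utilization < 0.5 and loss_rate < 0.001:
        return {"init_cwnd": 16, "pacing": False}
    return {"init_cwnd": 4, "pacing": True}

def push_to_hosts(hosts: list, update: dict, port: int = 9999):
    # Periodically send the directive; a kernel module on each end host
    # would apply it to its TCP stack.
    msg = json.dumps(update).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        for host in hosts:
            s.sendto(msg, (host, port))
```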
20. TCP Behavior in Quality of Service Networks. Athuraliya, Sanjeewa Aruna. January 2007.
Best effort networks fail to deliver the level of service that emerging Internet applications demand. As a result, many networks are being transformed into Quality of Service (QoS) networks, most of which are Differentiated Services (DiffServ) networks. While deploying such networks has been feasible, it is extremely difficult to overhaul transport layer protocols such as the Transmission Control Protocol (TCP) running on hundreds of millions of end nodes around the world. TCP, which was designed to run on a best effort network, performs poorly in a DiffServ network and fails to deliver the performance guarantees expected of DiffServ. In this thesis we investigate two aspects of TCP performance in a DiffServ network unaccounted for in previous studies. We develop a deterministic model of TCP that intrinsically captures flow aggregation, a key component of DiffServ. The other important aspect of TCP considered in this thesis is its transient behavior; using our deterministic model, we derive a classical control system model of TCP applicable in a DiffServ network.
Performance issues of TCP can potentially inhibit the adoption of DiffServ. A DiffServ network commonly uses token buckets, placed at the edge of the network, to mark packets according to their conformance to Service Level Agreements (SLAs). We propose two token bucket variants designed to mitigate the TCP issues present in a DiffServ network. The first incorporates a packet queue alongside the token bucket; the other introduces a feedback controller around the token bucket. We validate the performance of the proposed token buckets both analytically and experimentally. By confining our changes to the token bucket, we avoid any changes at end nodes, and the proposed token buckets can be deployed incrementally.
Most of the Internet still remains a best effort network. However, most nodes run various QoS functions locally. We look at one such important QoS function: the ability to survive flows that are unresponsive to congestion, the equivalent of a Denial of Service (DoS) attack. We analyze existing techniques and propose improvements.
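As a sketch of the first proposed variant (the exact design is in the thesis; the class name and parameters here are illustrative), a packet queue in front of the token bucket briefly holds bursts that exceed the instantaneous token count instead of immediately marking them out-of-profile, which suits TCP's bursty arrivals.

```python
# Hedged sketch of a token bucket with an auxiliary packet queue.
# Rates, depths, and queue length are illustrative assumptions.
from collections import deque
import time

class QueuedTokenBucket:
    def __init__(self, rate=125_000, depth=10_000, qlen=50):
        self.rate = rate                 # token fill rate (bytes/s, per SLA)
        self.depth = depth               # bucket depth (bytes)
        self.tokens = depth
        self.queue = deque(maxlen=qlen)  # absorbs TCP bursts before marking
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.depth,
                          self.tokens + self.rate * (now - self.last))
        self.last = now

    def mark(self, packet: bytes) -> str:
        self._refill()
        if self.tokens >= len(packet):
            self.tokens -= len(packet)
            return "in-profile"          # conformant: assured service
        if len(self.queue) < (self.queue.maxlen or 0):
            self.queue.append(packet)    # hold the burst briefly instead
            return "queued"              # of demoting it straight away
        return "out-of-profile"          # SLA exceeded: best effort

    def drain(self) -> list:
        # Release queued packets as tokens accumulate.
        self._refill()
        released = []
        while self.queue and self.tokens >= len(self.queue[0]):
            pkt = self.queue.popleft()
            self.tokens -= len(pkt)
            released.append(pkt)
        return released
```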