11 |
An evolutionary approach to improve end-to-end performance in TCP/IP networks /Prasad, Ravi S. 08 January 2008 (has links)
Despite the persistent change and growth that characterizes the Internet,
the Transmission Control Protocol (TCP) still dominates at
the transport layer, carrying more than 90% of the global traffic.
Despite its astonishing success, it has been observed that TCP
can cause poor end-to-end performance, especially for large transfers
and in network paths with high bandwidth-delay product.
In this thesis, we focus on mechanisms that can
address key problems in TCP performance, without
any modification in the protocol itself.
This evolutionary approach is important in practice, as the deployment
of clean-slate transport protocols in the Internet has been proved
to be extremely difficult.
Specifically, we identify a number of TCP-related problems
that can cause poor end-to-end performance.
These problems include poorly dimensioned socket buffer
sizes at the end-hosts, suboptimal buffer sizing at routers and switches,
and congestion unresponsive TCP traffic aggregates.
We propose solutions that can address these issues,
without any modification to TCP.
In network paths with significant available bandwidth, increasing
the TCP window until a loss occurs can result in
much lower throughput than the path's available bandwidth.
We show that changes in TCP are not required to utilize all the
available bandwidth, and propose the application-layer
SOcket Buffer Auto-Sizing (SOBAS) mechanism to achieve this goal.
SOBAS relies on run-time estimation
of the round trip time (RTT) and receive rate, and limits its socket buffer
size when the receive rate approaches the path's available bandwidth.
In a congested network, SOBAS does not limit its socket buffer size.
Our experimental results show that SOBAS improves TCP throughput in uncongested
networks without hurting TCP performance in congested networks.
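The receive-rate test at the heart of SOBAS can be sketched as follows. This is a minimal illustration; the function name, the sampling scheme, and the growth threshold are assumptions for exposition, not the thesis implementation:

```python
def sobas_buffer_limit(rate_samples, rtt, current_limit, growth_threshold=0.05):
    """Return a (possibly reduced) socket buffer limit in bytes.

    rate_samples: recent receive-rate measurements in bytes/s.
    rtt: measured round-trip time in seconds.
    """
    if len(rate_samples) < 2:
        return current_limit
    recent, previous = rate_samples[-1], rate_samples[-2]
    # If the receive rate has flattened, the transfer is near the path's
    # available bandwidth: cap the buffer at (rate x RTT) so the window
    # stops growing and router queues are not filled.
    if previous > 0 and (recent - previous) / previous < growth_threshold:
        return min(current_limit, int(recent * rtt))
    # Rate still growing (or path congested): leave the buffer unlimited,
    # matching SOBAS's behaviour of not limiting itself under congestion.
    return current_limit
```

A flat rate trace clamps the buffer to roughly the bandwidth-delay product, while a still-growing trace leaves the limit untouched.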
Improper router buffer sizing can also result in poor TCP throughput.
Previous research in router buffer sizing
focused on network performance metrics such as link utilization or loss rate.
Instead, we focus on the impact of buffer sizing on end-to-end TCP performance.
We find that the router buffer size that
optimizes TCP throughput is largely determined by
the link's output-to-input capacity ratio.
If that ratio is larger than one,
the loss rate drops exponentially with the buffer size
and the optimal buffer size is close to zero.
Otherwise, if the output-to-input capacity ratio is lower than one,
the loss rate follows a power-law reduction with the buffer size
and significant buffering is needed.
The amount of buffering required in this case depends on whether
most flows end in the slow-start phase or in the congestion avoidance phase.
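The capacity-ratio finding above can be distilled into a simple decision sketch. The near-zero choice and the BDP-sized fallback are illustrative assumptions; the actual requirement in the low-ratio case depends on the slow-start versus congestion-avoidance mix, as the text notes:

```python
def recommended_buffer_pkts(output_capacity, input_capacity, bdp_pkts):
    """Illustrative buffer-sizing rule based on the capacity ratio."""
    ratio = output_capacity / input_capacity
    if ratio > 1.0:
        # Loss rate drops exponentially with buffer size: a near-zero
        # buffer already achieves close-to-optimal TCP throughput.
        return 0
    # Loss rate follows a power law in the buffer size: significant
    # buffering is needed; a bandwidth-delay product's worth of packets
    # is used here as a stand-in for the flow-mix-dependent answer.
    return bdp_pkts
```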
TCP throughput also depends on whether the cross-traffic reduces its
send rate upon congestion.
We define this cross-traffic property as congestion responsiveness.
Since the majority of Internet traffic uses TCP, which reduces its send rate
upon congestion, an aggregate of many TCP flows is believed to be
congestion responsive. Here, we show that the congestion responsiveness of
aggregate traffic also depends on the flow arrival process. If the flow
arrival process follows an open-loop model, the aggregate traffic can be
unresponsive to congestion even if it consists exclusively of TCP transfers.
On the other hand, TCP flows that arrive in the network in a closed-loop
manner always form a congestion-responsive aggregate.
We also propose a scheme to estimate the fraction of traffic that
follows the closed-loop model in a given link, and give practical
guidelines to increase that fraction with simple application-layer
modifications.
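The open-loop versus closed-loop contrast can be shown with a toy load model. This is entirely an assumed illustration (a crude throughput penalty stands in for TCP's reaction to loss), not the estimation scheme proposed in the thesis:

```python
def aggregate_rate(model, n_users, loss_rate, base_rate=1.0):
    """Toy aggregate send rate under congestion for two arrival models."""
    per_flow = base_rate * (1.0 - loss_rate)  # crude congestion penalty
    if model == "closed":
        # Closed loop: each user starts a new transfer only after the
        # previous one finishes, so the aggregate rate tracks per-flow
        # throughput and falls when the network is congested.
        return n_users * per_flow
    # Open loop: new flows arrive regardless of congestion, so the
    # offered load stays at the uncongested level.
    return n_users * base_rate
```

Under 50% loss the closed-loop aggregate halves its offered load while the open-loop aggregate does not, which is the unresponsiveness the abstract describes.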
|
12 |
Using virtual memory to improve cache and TLB performance /Romer, Theodore H. January 1998 (has links)
Thesis (Ph. D.)--University of Washington, 1998. / Vita. Includes bibliographical references (p. [137]-143).
|
13 |
Virtual memory alternatives for transaction buffer management in a single-level store /McNamee, Dylan James, January 1996 (has links)
Thesis (Ph. D.)--University of Washington, 1996. / Vita. Includes bibliographical references (p. [111]-120).
|
14 |
D-Buffer : a new hidden-line algorithm in image-space /Dong, Xiaomin, January 1999 (has links)
Thesis (M.Sc.)--Memorial University of Newfoundland, 1999. / Bibliography: leaves 59-64.
|
15 |
Reducing internet latency for thin-stream applications over reliable transport with active queue management /Grigorescu, Eduard January 2018 (has links)
An increasing number of network applications use reliable transport protocols. Applications with constant data transmission recover from loss without major performance disruption; however, applications that send data sporadically, in small packets (also called thin streams), frequently experience high latencies due to 'Bufferbloat', which reduces application performance. Active Queue Management (AQM) mechanisms were proposed to dynamically manage router queues by dropping packets early, hence reducing latency. While their deployment on the Internet remains an open issue, the main focus of this work is a proper investigation into how their mechanisms affect latency, and research questions have been devised to investigate the AQM impact on latency. A range of AQM mechanisms has been evaluated, exploring their performance for latency-sensitive network applications, including the single-queue AQM mechanisms Controlled Delay (CoDel), Proportional Integral controller Enhanced (PIE), and Adaptive RED (ARED). The evaluation has shown great improvements in queuing latency when AQM is used over a range of network scenarios. Scheduling AQM algorithms such as FlowQueue-CoDel (FQ-CoDel) isolate traffic and minimise the impact of Bufferbloat on flows. The core components of FQ-CoDel, still widely misunderstood at the time of its inception, are explained in depth by this study, and their contribution to reducing latency is evaluated. The results show significant reductions in queuing latency for thin streams using FQ-CoDel. When TCP is used for thin streams, high application latencies can arise from retransmissions, for example after packets are dropped by an AQM mechanism. This delay is a result of TCP's loss-based congestion control, which reduces the sender's transmission rate following packet loss.
Explicit Congestion Notification (ECN), which marks packets instead of dropping them, reduces application-layer latency without disrupting overall network performance. The thesis evaluated the benefit of using ECN in a wide range of experiments. The findings show that FQ-CoDel with ECN provides a substantial reduction of application latency compared to a drop-based AQM. Moreover, this study recommends combining FQ-CoDel with other mechanisms to reduce application latency further; mechanisms such as Alternative Backoff with ECN (ABE) have been shown to increase aggregate throughput and reduce application latency for thin-stream applications.
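As background, the dropping rule at the core of CoDel, referenced above, can be sketched as follows. The constants follow the published CoDel description (5 ms target, 100 ms interval); the class structure is illustrative, not the fq_codel implementation:

```python
TARGET = 0.005    # target per-packet queue sojourn time: 5 ms
INTERVAL = 0.100  # observation window: 100 ms

class CoDelState:
    """Minimal CoDel-style drop decision based on packet sojourn time."""
    def __init__(self):
        self.first_above_time = None

    def should_drop(self, sojourn_time, now):
        # Below target: good queue; reset the observation window.
        if sojourn_time < TARGET:
            self.first_above_time = None
            return False
        # First time above target: arm a deadline one INTERVAL ahead.
        if self.first_above_time is None:
            self.first_above_time = now + INTERVAL
            return False
        # Drop only once delay has stayed above TARGET for a full INTERVAL,
        # which distinguishes standing queues from harmless bursts.
        return now >= self.first_above_time
```

Transient bursts above the target are tolerated; only a standing queue that persists for the whole interval triggers drops (or, with ECN, marks).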
|
16 |
Increasing the efficiency of network interface card /Uppal, Amit, January 2007 (has links)
Thesis (M.S.)--Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
|
17 |
On pipelined multistage interconnection networks /Thuppal, Rajagopalan, January 1998 (has links)
Thesis (M. Eng.), Memorial University of Newfoundland, 1998. / Bibliography: leaves 107-112.
|
18 |
Modeling and analysis of the performance of networks in finite-buffer regime /Torabkhani, Nima 22 May 2014 (has links)
In networks, using large buffers tends to increase end-to-end packet delay and its variations, conflicting with real-time applications such as online gaming, audio-video services, IPTV, and VoIP. Further, large buffers complicate the design of high-speed routers, leading to more power consumption and board space. According to Moore's law, switching speeds double every 18 months, while memory access speeds double only every 10 years. Hence, as memory requirements increasingly become a limiting aspect of router design, studying networks in the finite-buffer regime seems necessary for network engineers. This work focuses on both practical and theoretical aspects of finite-buffer networks. In Chapters 1-7, we investigate the effects of finite buffer sizes on throughput and packet delay in different networks. These performance measures are shown to be linked to the stationary distribution of an underlying irreducible Markov chain that exactly models the changes in the network. An iterative scheme is proposed to approximate the steady-state distribution of buffer occupancies by decoupling the exact chain into smaller chains. These approximate solutions are used to analytically characterize network throughput and packet delay, and are also applied to some network performance optimization problems. Further, simulations confirm that the proposed framework yields accurate estimates of the throughput and delay performance measures and captures the vital trends and tradeoffs in these networks. In Chapters 8-10, we address the problem of modeling and analyzing the performance of finite-memory random linear network coding in erasure networks. When using random linear network coding, the contents of buffers create dependencies which cannot be captured directly using classical queueing-theoretic models.
A careful derivation of the buffer occupancy states and their transition rules is presented, as well as decodability conditions when random linear network coding is performed on a stream of arriving packets.
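As a minimal illustration of a steady-state buffer-occupancy distribution of the kind the decoupled per-buffer chains yield, consider the classical M/M/1/K queue. This is textbook background, not the thesis's exact chain or iteration:

```python
def mm1k_occupancy(arrival_rate, service_rate, buffer_size):
    """Steady-state occupancy distribution of an M/M/1/K queue.

    Returns [pi_0, ..., pi_K], where pi_n is the probability that the
    finite buffer holds n packets in steady state.
    """
    rho = arrival_rate / service_rate
    # Birth-death chain balance gives pi_n proportional to rho**n,
    # truncated at the buffer size K and renormalised.
    weights = [rho ** n for n in range(buffer_size + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```

From this distribution, blocking probability is `pi[K]` and throughput is `arrival_rate * (1 - pi[K])`, mirroring how the abstract ties throughput and delay to a stationary distribution.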
|
19 |
Dynamically Reconfigurable Optical Buffer and Multicast-Enabled Switch Fabric for Optical Packet Switching /Yeo, Yong-Kee 30 November 2006 (has links)
Optical packet switching (OPS) is one of the more promising solutions for meeting the diverse needs of broadband networking applications of the future. By virtue of its fine data-traffic granularity as well as its nanosecond-scale switching speed, OPS can be used to provide connection-oriented or connectionless services for different groups of users with very different networking requirements. The optical buffer and the switch fabric are two of the most important components in an OPS router. In this research, novel designs for the optical buffer and switch fabric are proposed and experimentally demonstrated. In particular, an optical buffer that is based on a folded-path delay-line tree architecture will be discussed. This buffer is the most compact non-recirculating optical delay line buffer to date, and it uses an array of high-speed ON-OFF optical reflectors to dynamically reconfigure its delay within several nanoseconds. A major part of this research is devoted to the design and performance optimization of these high-speed reflectors. Simulations and measurements are used to compare different reflector designs as well as to determine their optimal operating conditions. Another important component in the OPS router is the switch fabric, which is used to perform space switching for the optical packets. Optical switch fabrics are used to overcome the limitations imposed by conventional electronic switch fabrics: high power consumption and dependency on the modulation format and bit-rate of the signals. Currently, only those fabrics that are based on the broadcast-and-select architecture can provide truly non-blocking multicast services to all input ports. However, a major drawback of these fabrics is that they are implemented using a large number of optical gates based on semiconductor optical amplifiers (SOA). This results in a large component count and high energy consumption.
In this research, a new multicast-capable switch fabric which does not require any SOA gates is proposed. This fabric relies on a passive all-optical gate that is based on the four-wave mixing (FWM) wavelength-conversion process in a highly nonlinear fiber. By using this new switch architecture, a significant reduction in component count can be expected.
|
20 |
Modeling future all-optical networks without buffering capabilities /De Vega Rodrigo, Miguel 27 October 2008 (has links)
In this thesis we provide a model for bufferless optical burst switching (OBS) and optical packet switching (OPS) networks. The thesis is divided into three parts.

In the first part we introduce the basic functionality and structure of OBS and OPS networks. We identify the blocking probability as the main performance parameter of interest.

In the second part we study the statistical properties of the traffic that will likely run through these networks. We use for this purpose a set of traffic traces obtained from the Universidad Politécnica de Catalunya. Our conclusion is that traffic entering the optical domain in future OBS/OPS networks will be long-range dependent (LRD).

In the third part we present the model for bufferless OBS/OPS networks. This model takes into account the results from the second part of the thesis concerning the LRD nature of traffic, as well as specific issues concerning the functionality of a typical bufferless packet-switching network. The resulting model presents scalability problems, so we propose an approximative method to compute the blocking probability from it. We empirically evaluate the accuracy of this method, as well as its scalability. / Doctorat en Sciences de l'ingénieur / info:eu-repo/semantics/nonPublished
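For background on the blocking-probability metric, the classical Erlang-B recursion for a bufferless loss system can be computed as follows. This is the textbook formula for Poisson traffic; the thesis model goes beyond it precisely because long-range-dependent traffic violates the Poisson assumption:

```python
def erlang_b(offered_load, servers):
    """Blocking probability of an M/M/c/c loss system (Erlang-B formula).

    offered_load: traffic intensity in Erlangs (arrival rate x holding time).
    servers: number of parallel channels (e.g. wavelengths on a link).
    """
    # Standard numerically stable recursion:
    # B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

For example, one Erlang offered to a single channel blocks half the arrivals, which is why bufferless optical nodes need either many wavelengths or low utilization to keep losses acceptable.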
|