1

Improving TCP performance over heterogeneous networks : the investigation and design of End to End techniques for improving TCP performance for transmission errors over heterogeneous data networks

Alnuem, M. A. January 2009 (has links)
Transmission Control Protocol (TCP) is considered one of the most important protocols in the Internet. An important mechanism in TCP is congestion control, which governs the TCP sending rate and makes TCP react to congestion signals. In today's heterogeneous networks, TCP may operate over links with a lossy nature (wireless links, for example). TCP treats all packet losses as if they were due to congestion. Consequently, when used in networks with lossy links, TCP aggressively reduces its sending rate in response to transmission (non-congestion) errors even when the network is uncongested. One solution to the problem is to discriminate between error types: deal with congestion errors by reducing the TCP sending rate, and take different actions for transmission errors. In this work we investigate the problem and propose a solution using an end-to-end error discriminator. The error discriminator improves the current congestion window mechanism in TCP by deciding when, and by how much, to cut the congestion window. We identify three areas where TCP interacts with drops: the congestion window update mechanism, the retransmission mechanism and the timeout mechanism, all of which are part of TCP's congestion control. We propose changes to each of these mechanisms to allow TCP to cope with transmission errors. First, we propose a new TCP congestion window action (CWA) for transmission errors that delays the window-cut decision until TCP has received all duplicate acknowledgments for a given window of data (the packets in flight), giving TCP a clear picture of the number of drops from that window. The congestion window is then reduced only by the number of dropped packets. We also propose a safety mechanism to prevent this algorithm from causing congestion in the network, using an extra congestion window threshold (tthresh) to preserve the safe region in which no drops of any kind occur. The second algorithm is a new retransmission action to deal with multiple drops from the same window. This multiple drops action (MDA) prevents TCP from falling into consecutive timeout events by resending all dropped packets from the same window. A third algorithm calculates a new back-off policy for the TCP retransmission timeout based on the network's available bandwidth. This retransmission timeout action (RTA) relates the length of the timeout to current network conditions, especially under heavy transmission error rates. The three algorithms are combined and incorporated into a delay-based error discriminator. The improvement of the new algorithm is measured along with its impact on the network in terms of congestion drop rate, end-to-end delay, average queue size and fairness in sharing the bottleneck bandwidth. The results show that the proposed error discriminator, with its new actions for transmission errors, increases TCP performance while reducing the load on the network compared to existing error discriminators. It also delivers excellent fairness in sharing the bottleneck bandwidth. Finally, improvements to the basic error discriminator are proposed by applying the multiple drops action (MDA) to both transmission and congestion errors; the results show performance improvements as well as lower congestion loss rates compared to a similar error discriminator.
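A minimal sketch of the CWA window-cut rule described above, with hypothetical names (cwa_window_cut, num_drops) and one plausible reading of the tthresh safety threshold; the thesis's actual algorithm may differ in detail:

    def cwa_window_cut(cwnd, num_drops, tthresh, congestion_loss):
        """New congestion window, applied only after all duplicate ACKs for
        the in-flight window have arrived, so num_drops is known exactly."""
        if congestion_loss:
            # Congestion losses keep the classic multiplicative decrease.
            return max(cwnd // 2, 1)
        # Transmission errors: reduce only by the number of dropped packets.
        new_cwnd = max(cwnd - num_drops, 1)
        # Assumed reading of the safety threshold: tthresh remembers the
        # largest window that saw no drops of any kind, so the cut is
        # floored there on the assumption that region is congestion-safe.
        return max(new_cwnd, min(tthresh, cwnd))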
2

PROTOCOL LAYERING

Grebe, David L. 10 2002 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / The advent of COTS-based network-centric data systems brings a whole new vocabulary into the realm of instrumentation. The communications and computer industries have developed networks to a high level, and they continue to evolve. One of the basic techniques that has proven itself useful with this technology is the use of a “layered architecture.” This paper is an attempt to discuss the basic ideas behind this concept and to give some understanding of the vocabulary that has grown up with it.
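A toy illustration of the layering idea: each layer wraps the payload from the layer above with its own header, and the receiving stack peels the headers off in reverse order. The layer names and byte-string headers are illustrative only, not the paper's design.

    def encapsulate(payload: bytes) -> bytes:
        # Transport, network, then link layer each prepend a header.
        for header in (b"TCP|", b"IP|", b"ETH|"):
            payload = header + payload
        return payload

    def decapsulate(frame: bytes) -> bytes:
        # The receiver strips headers in the reverse (bottom-up) order.
        for header in (b"ETH|", b"IP|", b"TCP|"):
            assert frame.startswith(header), "unexpected header"
            frame = frame[len(header):]
        return frame

    assert decapsulate(encapsulate(b"sensor data")) == b"sensor data"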
3

QoS Over Multihop Wireless Networks

Saxena, Tarun 04 1900 (has links)
The aim of this work is to understand the requirements behind Quality of Service (QoS) for multihop wireless networks and to evaluate the performance of different QoS strategies. The work starts by establishing the case for QoS and evaluates different approaches to providing it. Bandwidth is selected as the most important of the resources identified for ensuring QoS. The problem is modeled as an optimization problem that maximizes the amount of bandwidth available in the system while providing bounds on the bandwidth available over a route; other QoS parameters are bounded by hard limits and are not part of the optimization. The existence of spatial reuse rules is established through simulations of a TCP-based network, showing that the reuse rules are independent of the MAC and network layer protocols used. This idea is used in designing the simulations for strategies that use controlled spatial reuse and give known QoS bounds. The simulations take the network and a set of connections and generate the best possible schedule that guarantees bandwidth to individual connections while maximizing the total number of slots used in the entire system; the total number of slots used is a measure of the bandwidth in use. The set of graphs and connections is produced by a random graph and connection generator, and the data set is large enough to average the results. Two different approaches are used for scheduling the connections and are compared for their pros and cons. The first uses graph coloring to allocate a fixed number of slots to each link, making an equivalent of a wired network with fixed bandwidth over each link; this network is simpler to operate and analyze at the cost of a one-time expense of graph coloring, under the assumption that the network is static or has low mobility. The second approach uses on-demand slot allocation, which is more flexible in slot assignment and adapts to changing traffic patterns; its drawbacks are that connection establishment is more expensive in terms of required bandwidth and the scheme is more complicated and harder to analyze, while its advantages include a low initial network establishment cost and accommodation of mobility.
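A small sketch of the first scheduling approach: greedily color a link conflict graph so that interfering links never share a TDMA slot, while non-interfering links reuse slots spatially. The example graph and the greedy heuristic are illustrative assumptions, not the thesis's generator or optimizer.

    def greedy_slot_assignment(conflicts):
        """Assign each link the smallest slot unused by any conflicting link."""
        slots = {}
        # Color high-degree (most-constrained) links first.
        for link in sorted(conflicts, key=lambda l: -len(conflicts[l])):
            taken = {slots[n] for n in conflicts[link] if n in slots}
            slot = 0
            while slot in taken:
                slot += 1
            slots[link] = slot
        return slots

    # Links A and C both interfere with B but not with each other,
    # so they can reuse the same slot (spatial reuse).
    print(greedy_slot_assignment({"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}))
    # {'B': 0, 'A': 1, 'C': 1}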
4

PCM Backfill: Providing PCM to the Control Room Without Dropouts

Morgan, Jon, Jones, Charles H. 10 2014 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / One of the initial control room capabilities to be demonstrated by the iNET program is the ability to provide data displays in the control room that do not contain data dropouts. This concept is called PCM Backfill: PCM data is both transmitted via traditional SST and recorded onboard via an iNET-compatible recorder. When data dropouts occur, data requests are made over the telemetry network to the recorder for the missing portions of the PCM data stream. The retrieved data is sent over the telemetry network to the backfill application and ultimately delivered to a pristine data display. The integration of traditional SST with the PCM Backfill capability provides real-time safety-of-flight data side by side with pristine data suitable for advanced analysis.
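A simplified sketch of the backfill idea: track which PCM frame numbers arrived over the SST link, then request the gaps from the onboard recorder. The frame numbering and the request step are assumptions for illustration; the actual iNET interfaces are not shown.

    def find_gaps(received, first, last):
        """Return contiguous ranges of missing frame numbers."""
        gaps, start = [], None
        for seq in range(first, last + 1):
            if seq not in received and start is None:
                start = seq                      # a gap begins
            elif seq in received and start is not None:
                gaps.append(range(start, seq))   # a gap ends
                start = None
        if start is not None:
            gaps.append(range(start, last + 1))
        return gaps

    received = {1, 2, 3, 7, 8, 10}
    for gap in find_gaps(received, 1, 10):
        # In the real system these would become telemetry-network
        # requests to the recorder, then feed the backfill application.
        print(f"request frames {gap.start}..{gap.stop - 1} from recorder")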
5

Performance evaluation of DCCP and TCP protocol variants in representative scenarios

Doria, Priscila Lôbo Gonçalves 15 May 2012 (has links)
The Datagram Congestion Control Protocol (DCCP) is a prominent transport protocol that has attracted the attention of the scientific community for its rapid progress and good results. The main novelty of DCCP is its performance-first design, as in UDP, combined with congestion control capabilities, as in TCP. The literature on DCCP is still scarce and needs to be complemented to gather enough scientific elements to properly support new research. In this context, this work joins the efforts of the scientific community to analyze, measure, compare and characterize DCCP in representative scenarios that cover many real-world situations. Three open questions were preliminarily identified in the literature: how does DCCP behave (i) when competing for link bandwidth with other transport protocols; (ii) against highly relevant variants (e.g., Compound TCP, CUBIC); and (iii) when competing for link bandwidth with Compound TCP and CUBIC under multimedia applications (e.g., VoIP)? In this work, computational simulations are used to compare the performance of two DCCP variants (DCCP CCID2 and DCCP CCID3) with three highly representative TCP variants (Compound TCP, CUBIC and TCP SACK) in real-world scenarios, including concurrent use of the same link by the protocols, link errors, and assorted bandwidths, latencies and traffic patterns. The simulation results show that, under contention, DCCP CCID2 achieved higher throughput than Compound TCP or TCP SACK in most scenarios. Throughout the simulations DCCP CCID3 tended to have lower throughput than the other chosen protocols. However, the results also showed that DCCP CCID3 achieved significantly better throughput in the presence of link errors and at higher latencies and bandwidths, eventually outperforming Compound TCP and TCP SACK. Finally, CUBIC's throughput tended to predominate, which can be explained by its aggressive (i.e., non-linear) algorithm for returning the transmission window to its value before the discard event. However, CUBIC also presented the highest packet drop rate and the lowest delivery rate.
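For reference, the non-linear window growth the abstract attributes to CUBIC follows the standard CUBIC function (RFC 8312): after a loss the window is cut to beta * W_max and then climbs back along a cubic curve centered on W_max. The parameter values below are the RFC defaults; the example numbers are illustrative.

    def cubic_window(t, w_max, beta=0.7, c=0.4):
        """Congestion window (in packets) t seconds after the last loss."""
        # K is the time needed to grow back to w_max after the cut.
        k = ((w_max * (1.0 - beta)) / c) ** (1.0 / 3.0)
        return c * (t - k) ** 3 + w_max

    # Steep far from w_max, flat near it: CUBIC reclaims bandwidth quickly
    # after a cut yet probes gently around the previous maximum.
    for t in (0.0, 1.0, 2.0, 3.0):
        print(t, round(cubic_window(t, w_max=100.0), 1))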
6

Performance modeling of congestion control and resource allocation under heterogeneous network traffic : modeling and analysis of active queue management mechanism in the presence of Poisson and bursty traffic arrival processes

Wang, Lan January 2010 (has links)
Along with playing an ever-increasing role in the integration of other communication networks and expanding application diversity, the current Internet suffers from serious overuse and congestion bottlenecks. Efficient congestion control is fundamental to ensuring Internet reliability, satisfying specified Quality-of-Service (QoS) constraints and achieving desirable performance under varying application scenarios. Active Queue Management (AQM) is a promising scheme to support end-to-end Transmission Control Protocol (TCP) congestion control because it enables the sender to react appropriately to the real network situation. Analytical performance models are powerful tools for investigating the optimal setting of AQM parameters. Among the existing research efforts in this field, however, there is a lack of analytical models that can serve as a cost-effective performance evaluation tool for AQM in the presence of heterogeneous traffic generated by various network applications. This thesis aims to provide a generic and extensible analytical framework for analyzing AQM congestion control for various traffic types, such as non-bursty Poisson and bursty Markov-Modulated Poisson Process (MMPP) traffic. Specifically, Markov analytical models are developed for an AQM congestion control scheme coupled with queue thresholds and are then used to derive expressions for important QoS metrics. The main contributions of this thesis are as follows:
• Study the queueing systems for modeling the AQM scheme subject to single-class and multiple-class Poisson traffic, respectively, and analyze the effects of varying the threshold, mean traffic arrival rate, service rate and buffer capacity on the key performance metrics.
• Propose an analytical model for the AQM scheme with single-class bursty traffic and investigate how burstiness and correlations affect the performance metrics. The analytical results reveal that high burstiness and correlation can significantly degrade AQM performance, increasing queueing delay and packet loss probability while reducing throughput and utilization.
• Develop an analytical model for a single-server queueing system with AQM in the presence of heterogeneous traffic and evaluate the aggregate and marginal performance subject to different threshold values, burstiness degrees and correlations.
• Conduct stochastic analysis of a single-server system with a single queue and with multiple queues, respectively, for the AQM scheme in the presence of multiple priority traffic classes scheduled by the Priority Resume (PR) policy.
• Compare the performance of AQM under the PR and First-In First-Out (FIFO) schemes, and compare AQM with a single PR priority queue against multiple priority queues.
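A minimal sketch of the kind of model the thesis develops: an M/M/1/K queue with a single AQM threshold T, where arrivals are dropped with probability d once the queue length reaches T, solved by standard birth-death balance equations. The threshold semantics and parameter values are illustrative assumptions, not the thesis's exact scheme.

    def aqm_mm1k(lam, mu, K, T, d):
        """Loss probability and mean queue length for a threshold-AQM queue."""
        def arrival_rate(n):
            # AQM thins arrivals once the queue length reaches the threshold.
            return lam * (1.0 - d) if n >= T else lam
        # Detailed balance: p[n+1] * mu = p[n] * arrival_rate(n).
        p = [1.0]
        for n in range(K):
            p.append(p[-1] * arrival_rate(n) / mu)
        total = sum(p)
        p = [x / total for x in p]
        # By PASTA: AQM drops above the threshold, plus blocking when full.
        loss = sum(p[n] * d for n in range(T, K)) + p[K]
        mean_queue = sum(n * q for n, q in enumerate(p))
        return loss, mean_queue

    loss, mq = aqm_mm1k(lam=0.9, mu=1.0, K=20, T=10, d=0.2)
    print(f"loss probability {loss:.4f}, mean queue length {mq:.2f}")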
