41 |
VIRTUAL PRIVATE NETWORKS : An Analysis of the Performance in State-of-the-Art Virtual Private Network solutions in Unreliable Network Conditions. Habibovic, Sanel. January 2019.
This study aimed to identify the differences between state-of-the-art VPN solutions on different operating systems. It was done because a novel VPN protocol is in the early stages of release, and a comparison of it to other current VPN solutions is interesting: current VPN solutions are well established and have existed for a while, and the new protocol stirs the pot in the VPN field. A contemporary comparison between them could therefore aid system administrators when choosing which VPN to implement. Choosing the right VPN solution for the occasion could increase performance for users and save costs for organizations that wish to deploy VPNs. As the remote workforce grows, issues of network reliability also increase, due to wireless connections and networks beyond the control of companies. This demands an answer to the question: how do VPN solutions differ in performance on stable and unstable networks? This work attempted to answer that question. The study concerns VPN performance in general, but mainly how the specific solutions perform under unreliable network conditions. This was achieved by researching past comparisons of VPN solutions to identify which metrics to analyze and which VPN solutions have been recommended. A test bed was then created in a lab network to control the network during testing, so that the different VPN implementations and operating systems have the same premise. To establish baseline results, performance testing was done on the network without VPNs; the VPNs were then tested under reliable network conditions and then under unreliable network conditions. The results were compared and analyzed. They show differences in the performance of the different VPNs, differences depending on the operating system used, and further differences between the VPNs when the unreliability aspects are switched on. The novel VPN protocol looks promising as it has good overall results, but this is not conclusive, since the current VPN solutions can be configured depending on which operating system and settings are chosen. With this set-up, VPNs on Linux performed much better under unreliable network conditions compared to setups using other operating systems. The outcome of this work is that the novel VPN protocol may be performing better, and that certain combinations of VPN implementation and OS perform better than others when using the default configuration. This work also pointed out how to improve the testing and which aspects to consider when comparing VPN implementations.
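As an illustration of how such unreliable network conditions are commonly emulated in a lab test bed (a minimal sketch, not the set-up used in this study; the interface name and impairment values are assumptions), Linux tc/netem can inject delay, jitter, and packet loss on the link under test:

```python
# Hypothetical sketch: apply and clear netem impairments for a VPN throughput test.
# Requires root privileges and assumes the interface name "eth0".
import subprocess

def set_impairment(iface: str, delay_ms: int, jitter_ms: int, loss_pct: float) -> None:
    """Apply delay/jitter and random packet loss to an interface with netem."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "loss", f"{loss_pct}%"],
        check=True,
    )

def clear_impairment(iface: str) -> None:
    """Remove the netem qdisc, restoring the unimpaired baseline."""
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)

if __name__ == "__main__":
    set_impairment("eth0", delay_ms=50, jitter_ms=10, loss_pct=1.0)  # unreliable case
    # ... run the VPN throughput measurements here ...
    clear_impairment("eth0")
```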
|
42 |
Stochastic Modeling and Simulation of the TCP protocol. Olsén, Jörgen. January 2003.
The success of the current Internet relies to a large extent on a cooperation between the users and the network. The network signals its current state to the users by marking or dropping packets. The users then strive to maximize the sending rate without causing network congestion. To achieve this, the users implement a flow-control algorithm that controls the rate at which data packets are sent into the Internet. More specifically, the Transmission Control Protocol (TCP) is used by the users to adjust the sending rate in response to changing network conditions. TCP uses the observation of packet loss events and estimates of the round trip time (RTT) to adjust its sending rate. In this thesis we investigate and propose stochastic models for TCP. The models are used to estimate network performance measures such as throughput, link utilization, and packet loss rate. The first part of the thesis introduces the TCP protocol and contains an extensive TCP modeling survey that summarizes the most important TCP modeling work. Reviewed models are categorized as renewal theory models, fixed-point methods, fluid models, processor sharing models, or control theoretic models. The merits of each category are discussed and guidelines are given for which framework to use for future TCP modeling. The second part of the thesis contains six papers on TCP modeling. Within the renewal theory framework we propose single-source TCP-Tahoe and TCP-NewReno models. We investigate the performance of these protocols in both a DropTail and a RED queuing environment. The aspects of TCP performance that are inherently dependent on the actual implementation of the flow-control algorithm are singled out from what depends on the queuing environment. Using the fixed-point framework, we propose models that estimate packet loss rate and link utilization for a network with multiple TCP-Vegas, TCP-SACK and TCP-Reno on/off sources. The TCP-Vegas model is novel and is the first model capable of estimating the network's operating point for TCP-Vegas sources sending on/off traffic. All TCP and network models in the contributed research papers are validated via simulations with the network simulator ns-2. This thesis serves both as an introduction to TCP and as an extensive orientation about state-of-the-art stochastic TCP models.
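As a pointer to the flavor of result produced by the renewal-theory class of models surveyed here (this particular expression is the well-known square-root law, quoted for illustration rather than as a contribution of the thesis), steady-state TCP throughput can be approximated from the segment size, the round trip time, and the packet loss probability p:

```latex
B(p) \;\approx\; \frac{MSS}{RTT}\,\sqrt{\frac{3}{2p}}
```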
|
44 |
Determination Of Network Delay Distribution Over The Internet. Karakas, Mehmet. 01 December 2003.
The rapid growth of the Internet and the proliferation of its new applications pose a serious challenge in network performance management and monitoring. The current Internet has no mechanism for providing feedback on network congestion to the end-systems at the IP layer. For applications and their end hosts, end-to-end measurements may be the only way of measuring network performance.
Understanding the packet delay and loss behavior of the Internet is important for proper design of network algorithms such as routing and flow control algorithms, for the dimensioning of buffers and link capacity, and for choosing parameters in simulation and analytic studies.
In this thesis, the round trip time (RTT), one-way network delay, and packet loss in the Internet are measured at different times of the day, using a Voice over IP (VoIP) device. The effect of clock skew on one-way network delay measurements is eliminated by a Linear Programming algorithm, implemented in MATLAB. Distributions of one-way network delay and RTT in the Internet are determined. It is observed that the delay distribution has a gamma-like shape with a heavy tail. An attempt is made to model the delay distribution with gamma, lognormal, and Weibull distributions. It is observed that most of the packet losses in the Internet are single packet losses. The effect of a firewall on delay measurements is also observed.
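A minimal sketch of the linear-programming idea behind such clock-skew removal (the thesis used a MATLAB implementation; this Python version and its variable names are illustrative assumptions): fit the line that lies below every (send time, measured delay) sample while maximizing its total height, then subtract it so the remaining one-way delays are free of the linear skew.

```python
import numpy as np
from scipy.optimize import linprog

def remove_clock_skew(send_times, delays):
    """Subtract the linear clock-skew trend from one-way delay measurements."""
    t = np.asarray(send_times, dtype=float)
    d = np.asarray(delays, dtype=float)
    n = len(t)
    # Variables x = [alpha, beta]: maximize sum(alpha*t_i + beta)
    # subject to alpha*t_i + beta <= d_i (the line stays under every sample).
    c = np.array([-t.sum(), -float(n)])            # linprog minimizes, so negate
    A_ub = np.column_stack([t, np.ones(n)])
    res = linprog(c, A_ub=A_ub, b_ub=d, bounds=[(None, None), (None, None)])
    alpha, beta = res.x
    return d - (alpha * t + beta)                  # skew-corrected delays
```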
|
45 |
A cooperative MAC protocol to improve the performance of in-home broadband PLC systems. Oliveira, Roberto Massi de. 11 March 2015.
In this work, we discuss the use of cooperative medium access control (CMAC) protocols to reduce the packet loss rate and to improve the goodput of in-home broadband power line communication (PLC) systems. To support this discussion, we, for the first time, present a statistical packet error rate (PER) analysis of measured in-home PLC channels by adopting a single-relay model. Additionally, we outline a simple CMAC protocol that is capable of exploiting the diversity offered by in-home electric power grids. Using this protocol, we aim to show the impact of bandwidth variation, PER variation, and relative relay location on system performance. We show that the packet loss rate and goodput improve when the frequency bandwidth increases. Also, the results show that cooperation at the link layer does not offer advantages if the PER values of the direct and relayed links are very high or very low. Furthermore, we note that the improvements depend on the location of the relay node in relation to the source and destination nodes (i.e., the network improves if the relay is located near the source or midway between the source and the destination). Finally, a comparison between orthogonal frequency division multiple access - time division multiple access (OFDMA-TDMA) and time division multiple access - orthogonal frequency division multiplexing (TDMA-OFDM) schemes shows that the simple CMAC protocol is more effective when used together with the former scheme than with the latter. In summary, our contribution is twofold: first, we develop a simple cooperative MAC protocol that improves network performance compared with a system without cooperation; second, we carry out a systematic analysis of different scenarios, showing the benefits and limitations of cooperation at the link layer of PLC networks. / CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
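To make concrete why cooperation pays off only at intermediate error rates, a deliberately idealized sketch (an illustrative assumption, not the measured PLC model from this work) treats a packet as lost only when both the direct transmission and the two-hop relayed transmission fail:

```python
def effective_per(p_direct: float, p_sr: float, p_rd: float) -> float:
    """Idealized end-to-end packet error rate with one relay:
    the packet is lost only if the direct link fails AND the
    source->relay->destination path fails as well."""
    p_relayed = 1.0 - (1.0 - p_sr) * (1.0 - p_rd)
    return p_direct * p_relayed

# When the links are already very good or very bad, cooperation adds little:
print(effective_per(0.01, 0.01, 0.01))  # ~2e-4: the direct link was fine anyway
print(effective_per(0.9, 0.9, 0.9))     # ~0.89: the relay cannot rescue the path
print(effective_per(0.3, 0.1, 0.1))     # ~0.06: largest relative gain in between
```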
|
47 |
Evaluation of communication protocols between vehicle and server : Evaluation of data transmission overhead by communication protocols. Wickman, Tomas. January 2016.
This thesis project has studied a number of protocols that could be used to communicate between a vehicle and a remote server in the context of Scania's connected services. While there are many factors of interest to Scania (such as response time, transmission speed, and the amount of data overhead for each message), this thesis evaluates each protocol in terms of how much data overhead is introduced and how packet loss affects this overhead. The thesis begins by giving an overview of how a number of alternative protocols work and what they offer with regard to Scania's needs. Next, these protocols are compared based on previous studies and each protocol's specifications to determine which protocol would be the best choice for realizing Scania's connected services. Finally, a test framework was set up using a virtual environment to simulate different networking conditions. Each of the candidate protocols was deployed in this environment and set up to send sample data. The behaviour of each protocol during these tests served as the basis for the analysis of all of the protocols. The thesis draws the conclusion that, to reduce the data transmission overhead between vehicles and Scania's servers, the most suitable protocol is the UDP-based MQTT-SN.
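A small sketch of the overhead metric this kind of comparison rests on (the header sizes in the example are made-up placeholders, not measured values for any particular protocol): the share of bytes on the wire that is not application payload, where retransmitted copies triggered by packet loss also count as overhead.

```python
def overhead_ratio(payload_bytes: int, header_bytes: int, retransmissions: int = 0) -> float:
    """Fraction of transmitted bytes that is protocol overhead.
    Retransmitted copies (e.g. after packet loss) are pure overhead
    from the application's point of view."""
    total = (payload_bytes + header_bytes) * (1 + retransmissions)
    return 1.0 - payload_bytes / total

# Illustrative comparison for a 100-byte sample message with assumed header sizes:
print(overhead_ratio(100, header_bytes=10))                      # small-header UDP-based protocol
print(overhead_ratio(100, header_bytes=60, retransmissions=1))   # heavier TCP-based protocol after one loss
```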
|
48 |
5G user satisfaction enabled by FASP : Evaluating the performance of Aspera's FASP. Hagernäs, Patrik. January 2015.
With Ericsson's goal of optimal user experience at 5G's 2020 release, it is very important to optimize transport protocols and techniques to manage the increasing amount of data traffic. Additionally, it will be important to manage handovers between very high speed 5G networks and older networks. Today most of the traffic is video on demand, and this kind of traffic is expected to increase. Moreover, the current amount of data traffic will increase by an order of magnitude over the next few years. This thesis focuses on radio access networks and the difficulties they face in delivering high speed data traffic. It analyzes one of the most widely used TCP variants, CUBIC, as well as a new transport protocol developed by Aspera, called the Fast and Secure Protocol (FASP). Aspera's FASP is a new transport protocol that promises full link utilization. FASP is built upon UDP and uses advanced round trip time measurements and queuing delay to detect the available bandwidth between two communicating hosts. This thesis project also provides methods to realize experiments that assess the limitations of transport protocols. These experiments are conducted in an environment that resembles the upcoming 5G radio access network. The results show that both delay and packet loss affect TCP more than expected and that high packet loss is devastating. In contrast, Aspera's FASP is very resistant to both delay and packet loss. These results and analysis provide a foundation upon which others can build.
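A back-of-the-envelope evaluation of the square-root throughput bound quoted earlier in this listing helps explain why loss is so damaging to loss-based TCP (the segment size, RTT, and loss rates below are assumed illustrative values, not measurements from this thesis):

```python
from math import sqrt

def tcp_throughput_bound(mss_bytes: int, rtt_s: float, loss_prob: float) -> float:
    """Classic square-root upper bound on long-run TCP throughput, in bit/s."""
    return (mss_bytes * 8 / rtt_s) * sqrt(3.0 / (2.0 * loss_prob))

# Assumed values: 1460-byte segments over a 20 ms round trip time.
print(tcp_throughput_bound(1460, 0.020, 0.0001) / 1e6)  # ~72 Mbit/s at 0.01% loss
print(tcp_throughput_bound(1460, 0.020, 0.01) / 1e6)    # ~7 Mbit/s at 1% loss
```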
|
49 |
Exploring web protocols for use on cellular networks : QUIC on poor network links. Elo, Hans-Filip. January 2018.
New developments in web transport, such as HTTP/2 and first and foremost QUIC, promise fewer connections to track as well as shorter connection setup times. These protocols have proven themselves on modern reliable connections with a high bandwidth-delay product, but how do they perform over cellular connections in rural or crowded areas where the connections are much more unreliable? Many new users of the web in today's mobile-first usage scenarios are located on poor connections. A testbench was designed that allowed for web browsing over limited network links in a controlled environment. We compared the network load time of page loading over the protocols QUIC, HTTP/2, and HTTP/1.1 using a variety of different network conditions. We then used these measurements as a basis for suggesting which protocol to use under different conditions. The results show that newer is not always better. QUIC in general works reasonably well under all conditions, while HTTP/1.1 and HTTP/2 trade blows depending on connection conditions, with HTTP/1.1 sometimes outperforming both of the newer protocols.
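A minimal timing sketch of the page-fetch measurement idea (not the testbench used here; the URL is a placeholder): the httpx library can issue the same request over HTTP/1.1 and HTTP/2, while QUIC/HTTP3 would require a separate client such as aioquic.

```python
# Sketch: time a single page fetch over HTTP/1.1 and HTTP/2.
# HTTP/2 support in httpx needs the optional "h2" extra installed.
import time
import httpx

def timed_fetch(url: str, http2: bool) -> float:
    """Return the wall-clock time in seconds for one GET of the given URL."""
    with httpx.Client(http2=http2) as client:
        start = time.perf_counter()
        response = client.get(url)
        response.raise_for_status()
        return time.perf_counter() - start

url = "https://example.com/"  # placeholder URL
print("HTTP/1.1:", timed_fetch(url, http2=False))
print("HTTP/2  :", timed_fetch(url, http2=True))
```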
|
50 |
Improved performance high speed network intrusion detection systems (NIDS). A high speed NIDS architectures to address limitations of Packet Loss and Low Detection Rate by adoption of Dynamic Cluster Architecture and Traffic Anomaly Filtration (IADF). Akhlaq, Monis. January 2011.
Intrusion Detection Systems (IDS) are considered a vital component in network security architecture. The system allows the administrator to detect unauthorized use of, or attack upon, a computer, network, or telecommunication infrastructure. There is no second thought on the necessity of these systems; however, their performance remains a critical question.
This research has focussed on designing a high performance Network Intrusion Detection System (NIDS) model. The work begins with the evaluation of Snort, an open source NIDS considered a de facto IDS standard. The motive behind the evaluation strategy is to analyze the performance of Snort and ascertain the causes of its limited performance. The design and implementation of high performance techniques are the final objective of this research.
Snort has been evaluated on a highly sophisticated test bench by employing evasive and avoidance strategies to simulate real-life normal and attack-like traffic. The test methodology is based on the concept of stressing the system and degrading its performance in terms of its packet handling capacity. This has been achieved by normal traffic generation; fuzzing; traffic saturation; parallel dissimilar attacks; and manipulation of background traffic, e.g. fragmentation, packet sequence disturbance, and illegal packet insertion. The evaluation phase has led us to two high performance designs: first, a distributed hardware architecture using cluster-based adoption, and second, a cascaded scheme of anomaly-based filtration and signature-based detection.
The first high performance mechanism is based on Dynamic Cluster adoption using refined policy routing and Comparator Logic. The design is a two-tier mechanism where the front end of the cluster is the load balancer, which distributes traffic according to pre-defined policy routing, ensuring maximum utilization of cluster resources. The traffic load sharing mechanism reduces packet drops by exchanging state information between the load balancer and cluster nodes and by implementing switchovers between nodes when the traffic exceeds a pre-defined threshold. Finally, the recovery evaluation concept using Comparator Logic also enhances the overall efficiency by recovering data lost during switchovers; the retrieved data is then analyzed by the recovery NIDS to identify any leftover threats.
Intelligent Anomaly Detection Filtration (IADF), using a cascaded architecture of anomaly-based filtration and signature-based detection, is the second high performance design. The IADF design preserves NIDS resources by eliminating a large portion of the traffic based on well-defined logic. In addition, the filtration concept augments the detection process by eliminating the part of malicious traffic which can otherwise go undetected by most signature-based mechanisms. We have evaluated the mechanism's ability to detect Denial of Service (DoS) and Probe attempts by analyzing its performance on the Defence Advanced Research Projects Agency (DARPA) dataset. The concept is also supported by time-based normalized sampling mechanisms that incorporate normal traffic variations to reduce false alarms. Finally, we have observed that the IADF augments the overall detection process by reducing false alarms, increasing the detection rate, and incurring less data loss. / National University of Sciences & Technology (NUST), Pakistan
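A highly simplified sketch of the threshold-based switchover idea behind such a cluster front end (node names, capacities, and the selection rule are illustrative assumptions, not the policy-routing implementation described in this work):

```python
from dataclasses import dataclass

@dataclass
class SensorNode:
    name: str
    capacity_pps: int      # packets per second the node can inspect without drops
    current_pps: int = 0   # load currently assigned to the node

def pick_node(nodes: list[SensorNode], offered_pps: int) -> SensorNode:
    """Return the node that should receive the next slice of traffic."""
    # Prefer nodes with enough spare capacity; otherwise fall back to the least
    # loaded node so that drops (and work for the recovery NIDS) are minimised.
    candidates = [n for n in nodes if n.current_pps + offered_pps <= n.capacity_pps]
    pool = candidates if candidates else nodes
    return min(pool, key=lambda n: n.current_pps / n.capacity_pps)

cluster = [SensorNode("snort-1", 80_000), SensorNode("snort-2", 80_000)]
target = pick_node(cluster, offered_pps=30_000)
target.current_pps += 30_000
print("route traffic to", target.name)
```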
|