11 |
Compensating for Unreliable Communication Links in Networked Control Systems
Henriksson, Erik January 2009 (has links)
Control systems utilizing wireless sensor and actuator networks can be severely affected by the properties of the communication links. Radio fading and interference may cause communication losses and outages in situations when the radio environment is noisy and low transmission power is desirable. This thesis proposes a method to compensate for such unpredictable losses of data in the feedback control loop by introducing a predictive outage compensator (POC). The POC is a filter to be implemented at the receiver sides of networked control systems, where it generates artificial samples when data are lost. If the receiver node does not receive the data, the POC suggests a command based on the history of past data. It is shown how to design, tune and implement a POC. Theoretical bounds and simulation results show that a POC can improve the closed-loop control performance under communication losses considerably. We provide a deterministic and a stochastic method to synthesize POCs. Worst-case performance bounds are given that relate the closed-loop performance to the complexity of the compensator. We also show that it is possible to achieve good performance with a low-order implementation based on Hankel norm approximation. Tradeoffs between achievable performance, communication loss length, and POC order are discussed. The results are illustrated on a simulated example of a multiple-tank process. The thesis is concluded by an experimental validation of wireless control of a physical lab process. Here the controller and the physical system are separated geographically and interfaced through a wireless medium. For the remote control we use a hybrid model predictive controller. The results reflect the difficulties of wireless control and highlight the flexibility and possibilities one obtains by using a wireless instead of a wired communication medium.
Funding: VR, SSF, VINNOVA via Networked Embedded Control Systems; EU Sixth Framework Programme via HYCON and SOCRADES
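As a rough sketch of the idea (not the deterministic or stochastic synthesis methods derived in the thesis), a predictive outage compensator can be pictured as a small filter at the receiver that keeps a window of past samples and, when a packet is lost, emits a prediction from that history instead. The AR model order, window length and least-squares fit below are illustrative assumptions of this sketch.

```python
import numpy as np
from collections import deque

class PredictiveOutageCompensator:
    """Toy stand-in for a POC: holds a history of samples seen at the
    receiver and, when a packet is lost, emits a prediction instead."""

    def __init__(self, order=3, history=20):
        self.order = order                    # AR model order (assumed)
        self.buf = deque(maxlen=history)      # window of past samples

    def _predict(self):
        x = np.array(self.buf)
        if len(x) <= self.order:              # not enough history: hold last value
            return float(x[-1]) if len(x) else 0.0
        # Fit AR coefficients by least squares over the stored window.
        rows = [x[i:i + self.order] for i in range(len(x) - self.order)]
        coeffs, *_ = np.linalg.lstsq(np.array(rows), x[self.order:], rcond=None)
        return float(x[-self.order:] @ coeffs)

    def step(self, sample, lost):
        """Return the value forwarded to the plant/controller this period."""
        out = self._predict() if lost else sample
        self.buf.append(out)
        return out

# Usage: feed a sine-like command with a simulated burst of losses.
poc = PredictiveOutageCompensator()
for k in range(30):
    u = np.sin(0.2 * k)
    lost = 10 <= k < 14                       # outage during samples 10-13
    print(k, round(poc.step(u, lost), 3))
```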
|
12 |
Systém pro monitorování kvality připojení / A system for monitoring the quality of connectivity
Vyskočil, Vladimír January 2010 (has links)
The aim of the thesis is to design a methodology for centralized monitoring of data traffic in an ISP network that would detect individual connections with connectivity problems and evaluate these problems. The methodology should at least evaluate packet loss and latency between a centrally located monitoring point and endpoints at user premises. A further goal is to implement this methodology, test it, and evaluate the results.
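A minimal sketch of what such a central monitoring point might do, assuming plain ICMP probing via the system `ping` command (the thesis does not prescribe this tool, and the thresholds and Linux iputils output format parsed here are assumptions of the sketch): probe each endpoint, extract loss and average RTT, and flag degraded connections.

```python
import re
import subprocess

# Thresholds are illustrative assumptions, not values from the thesis.
LOSS_LIMIT_PCT = 2.0
RTT_LIMIT_MS = 100.0

def probe(host, count=10):
    """Ping `host` and return (loss_pct, avg_rtt_ms).
    Parsing assumes the Linux iputils ping output format."""
    out = subprocess.run(["ping", "-c", str(count), "-q", host],
                         capture_output=True, text=True).stdout
    loss_m = re.search(r"([\d.]+)% packet loss", out)
    rtt_m = re.search(r"= [\d.]+/([\d.]+)/", out)          # min/avg/max/mdev
    loss = float(loss_m.group(1)) if loss_m else 100.0
    avg_rtt = float(rtt_m.group(1)) if rtt_m else float("inf")
    return loss, avg_rtt

def monitor(endpoints):
    """Probe each endpoint once and flag connections with problems."""
    for host in endpoints:
        loss, rtt = probe(host)
        ok = loss <= LOSS_LIMIT_PCT and rtt <= RTT_LIMIT_MS
        print(f"{host:<20} loss={loss:5.1f}%  avg_rtt={rtt:8.2f} ms  "
              f"{'OK' if ok else 'DEGRADED'}")

if __name__ == "__main__":
    monitor(["192.0.2.10", "192.0.2.11"])     # placeholder endpoint addresses
```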
|
13 |
Evaluating LoRa and WiFi Jamming
Öst, Albert January 2018 (has links)
The Internet of Things is changing our world: the everyday things around us will be connected to the Internet. According to experts, within two years there will be up to 29 billion devices connected to the Internet. With all of the information that is produced, it is important to keep the communication secure; otherwise there can be serious problems in the future. The objective of this study has therefore been to investigate jamming attacks on wireless communication for the Internet of Things, more specifically on the LoRa and WiFi technologies. This was done through a literature study covering the Internet of Things, its industrial side, the two communication technologies and wireless jamming of them. In addition, a small-scale test bed was set up, consisting of two LoRa nodes (an Arduino and a LoRa gateway), two WiFi nodes (a laptop and a router) and a software-defined radio frequency jammer (a HackRF One). Jamming was performed on the system and evaluated from the perspective of a typical industrial Internet of Things scenario. The testing was done by measuring the received signal strength indicator, the round-trip time for a message, and packet losses. The study showed that the WiFi communication broke down completely while the LoRa communication withstood the jammer. The conclusion is that LoRa communication is secure, for a typical Internet of Things scenario, against this particular jamming device or a similar one.
|
14 |
End to end Multi-Objective Optimisation of H.264 and HEVC CODECs
Al Barwani, Maryam Mohsin Salim January 2018 (links)
All multimedia devices now incorporate video CODECs that comply with international video coding standards such as H.264/MPEG4-AVC and the new High Efficiency Video Coding standard (HEVC), otherwise known as H.265. Although the standard CODECs have been designed to include algorithms with optimal efficiency, a large number of coding parameters can be used to fine-tune their operation within known constraints such as available computational power, bandwidth and consumer QoS requirements. With so many parameters involved, determining which ones play a significant role in providing optimal quality of service within given constraints is a challenge in itself. How to select the values of the significant parameters so that the CODEC performs optimally under the given constraints is a further important question to be answered. This thesis proposes a framework that uses machine learning algorithms to model the performance of a video CODEC based on the significant coding parameters. Means of modelling both encoder and decoder performance are proposed. We define objective functions that can be used to model the performance-related properties of a CODEC, i.e., video quality, bit-rate and CPU time. We show that these objective functions can be practically utilised in video encoder/decoder designs, in particular in their performance optimisation within given operational and practical constraints. A multi-objective optimisation framework based on genetic algorithms is thus proposed to optimise the performance of a video CODEC. The framework is designed to jointly minimise the CPU time and bit-rate and to maximise the quality of the compressed video stream. The thesis presents the use of this framework in the performance modelling and multi-objective optimisation of the most widely used video coding standard in practice at present, H.264, and the latest video coding standard, H.265/HEVC. When a communication network is used to transmit video, performance-related parameters of the communication channel will impact the end-to-end performance of the video CODEC. Network delays and packet loss will impact the quality of the video that is received at the decoder via the communication channel; i.e., even if a video CODEC is optimally configured, network conditions may make the experience sub-optimal. Given the above, the thesis proposes the design, integration and testing of a novel approach to simulating a wired network, using the UDP protocol for the transmission of video data. This network is subsequently used to simulate the impact of packet loss and network delays on optimally coded video, based on the framework previously proposed for the modelling and optimisation of video CODECs. The quality of received video under different levels of packet loss and network delay is simulated, and the impact on transmitted videos is assessed in relation to their content and features.
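The thesis builds the optimisation on genetic algorithms; the sketch below, purely as an illustration under my own assumptions, replaces the GA with random sampling of a hypothetical parameter space plus Pareto-dominance filtering, and uses a made-up cost model in place of real H.264/HEVC encoder measurements. It only shows how jointly minimising bit-rate and CPU time while maximising quality yields a trade-off front rather than a single optimum.

```python
import random

def evaluate(cfg):
    """Toy stand-in for encoding a test sequence with parameters `cfg`
    and measuring (bitrate, cpu_time, -quality). Entirely hypothetical."""
    qp, refs, me_range = cfg
    bitrate = 5000 / (qp + 1) + 20 * refs
    cpu = 10 + 3 * refs + 0.5 * me_range
    quality = 50 - 0.6 * qp + 0.2 * refs          # higher is better
    return bitrate, cpu, -quality                 # all objectives minimised

def dominates(a, b):
    """True if objective vector `a` is no worse than `b` and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    return [(cfg, score) for cfg, score in population
            if not any(dominates(other, score) for _, other in population)]

random.seed(1)
pop = []
for _ in range(200):
    cfg = (random.randint(10, 50),    # quantisation parameter
           random.randint(1, 4),      # reference frames
           random.randint(8, 64))     # motion-estimation range
    pop.append((cfg, evaluate(cfg)))

for cfg, (br, cpu, nq) in pareto_front(pop)[:10]:
    print(f"QP={cfg[0]:2d} refs={cfg[1]} me={cfg[2]:2d} -> "
          f"bitrate={br:7.1f} cpu={cpu:5.1f} quality={-nq:4.1f}")
```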
|
15 |
Quantization for Low Delay and Packet Loss
Subasingha, Subasingha Shaminda 22 April 2010
Quantization of multimodal vector data in Realtime Interactive Communication Networks (RICNs) associated with application areas such as speech, video, audio, and haptic signals introduces a set of unique challenges. In particular, achieving the necessary distortion performance with minimum rate while maintaining low end-to-end delay and handling packet losses is of paramount importance. This dissertation presents vector quantization schemes which aim to satisfy these important requirements based on two source coding paradigms: 1) predictive coding and 2) distributed source coding. Gaussian Mixture Models (GMMs) can be used to model any probability density function (pdf) with an arbitrarily small error given a sufficient number of mixture components. Hence, Gaussian Mixture Models can be effectively used to model the underlying pdfs of a variety of data in RICN applications. In this dissertation, first we present Gaussian Mixture Model Kalman predictive coding, which uses transform domain predictive GMM quantization techniques with Kalman filtering principles. In particular, we show how suitable modeling of quantization noise leads to a signal-adaptive GMM Kalman predictive coder that provides improved coding performance. Moreover, we demonstrate how running a GMM Kalman predictive coder to convergence can be used to design a stationary GMM Kalman predictive coding system which provides improved coding of GMM vector data, now with only a modest increase in run-time complexity over the baseline. Next, we address the issue of packet loss in the network using GMM Kalman predictive coding principles. In particular, we show how an initial GMM Kalman predictive coder can be utilized to obtain a robust GMM predictive coder specifically designed to operate under packet loss. We demonstrate how one can define sets of encoding and decoding modes, and design special Kalman encoding and decoding gains for each mode. With this framework, GMM predictive coding design can be viewed as determining the special Kalman gains that minimize the expected mean squared error at the decoder in packet loss conditions. Finally, we present analytical techniques for modeling, analyzing and designing Wyner-Ziv (WZ) quantizers for distributed source coding of jointly Gaussian vector data with imperfect side information. In most DSC implementations, the side information is not explicitly available at the decoder. Thus, almost all practical implementations obtain the side information from the previously decoded frames. Due to model imperfections, packet losses, previous decoding errors, and quantization noise, the available side information is usually noisy. However, the design of Wyner-Ziv quantizers for imperfect side information has not been widely addressed in the DSC literature. The analytical techniques presented in this dissertation explicitly assume the existence of imperfect side information at the decoder. Furthermore, we demonstrate how the design problem for vector data can be decomposed into independent scalar design subproblems. Then, we present analytical techniques to compute the optimum step size and bit allocation for each scalar quantizer such that the decoder's expected vector Mean Squared Error (MSE) is minimized. The simulations verify that the MSE predicted by the presented analytical techniques closely follows the measured values.
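To make the predictive-coding setting concrete, the sketch below shows a plain first-order DPCM loop, not the GMM Kalman coders developed in the dissertation: the encoder transmits quantised prediction residuals, and when a packet is lost the decoder simply keeps running on its own prediction. The prediction coefficient and step size are arbitrary assumptions; the dissertation's contribution is, roughly, designing the gains of such a loop per encoding/decoding mode so that the decoder-side error under loss is minimised.

```python
import numpy as np

def quantise(x, step=0.25):
    """Uniform scalar quantiser (step size is an illustrative assumption)."""
    return step * np.round(x / step)

def predictive_code(signal, loss_mask, a=0.9):
    """Toy first-order predictive coder/decoder pair. The encoder sends
    quantised prediction residuals; on a lost packet the decoder conceals
    by using a zero residual, i.e. it runs on its own prediction."""
    enc_pred = dec_pred = 0.0
    reconstructed = []
    for x, lost in zip(signal, loss_mask):
        residual_q = quantise(x - enc_pred)        # what the encoder transmits
        enc_pred = a * (enc_pred + residual_q)     # encoder's local decoder state
        dec_est = dec_pred if lost else dec_pred + residual_q
        dec_pred = a * dec_est
        reconstructed.append(dec_est)
    return np.array(reconstructed)

t = np.arange(100)
x = np.sin(0.1 * t)
loss = (t >= 40) & (t < 46)                        # simulated packet-loss burst
x_hat = predictive_code(x, loss)
print("MSE with losses:", float(np.mean((x - x_hat) ** 2)))
```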
|
16 |
A Dynamic Queue Adjustment Based on Packet Loss Ratio in Wireless Networks
Chu, Tsuh-Feng 13 August 2003
Traditional TCP, when applied in wireless networks, may encounter two limitations. The first is the higher bit error rate (BER) due to noise, fading, and multipath interference. Because traditional TCP is designed for wired and reliable networks, it assumes that packet loss is mainly caused by network congestion; as a result, TCP may decrease its congestion window inappropriately upon detecting a packet loss. The second limitation concerns packet scheduling, which mostly does not take wireless characteristics into account.
In this Thesis, we propose a local retransmission mechanism to improve TCP throughput for wireless networks with higher BER. In addition, we measure the packet loss ratio (PLR) to adjust the queue weight such that the available bandwidth for each queue can be changed accordingly. In our mechanism, the queue length is used to determine whether there is congestion in the wireless network: when the queue length exceeds a threshold, the network is very likely congested. We not only propose the dynamic weight-adjustment mechanism, but also solve the packet out-of-sequence problem that results from a TCP flow moving to a new queue.
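One plausible reading of that weight adjustment is sketched below; the thresholds, the adjustment step and the choice to shift weight toward the lossy queue (for example, to leave room for local retransmissions) are all assumptions of this illustration, not parameters taken from the thesis.

```python
# Illustrative parameters only; the thesis does not specify these values.
PLR_TARGET = 0.02          # acceptable per-queue packet loss ratio
QLEN_THRESHOLD = 50        # queue length above which congestion is assumed
ADJUST_STEP = 0.1          # fraction of weight shifted per adjustment round

def adjust_weights(queues):
    """`queues` maps a queue id to {'weight', 'plr', 'qlen'}.
    Shift weight toward queues whose measured PLR exceeds the target,
    but only while their queue length indicates congestion."""
    lossy, healthy = [], []
    for q in queues.values():
        target = lossy if q['plr'] > PLR_TARGET and q['qlen'] > QLEN_THRESHOLD else healthy
        target.append(q)
    if not lossy or not healthy:
        return queues
    donated = sum(ADJUST_STEP * q['weight'] for q in healthy)
    for q in healthy:
        q['weight'] *= (1 - ADJUST_STEP)
    for q in lossy:
        q['weight'] += donated / len(lossy)
    return queues

queues = {
    'voice': {'weight': 0.3, 'plr': 0.05, 'qlen': 80},
    'video': {'weight': 0.3, 'plr': 0.01, 'qlen': 10},
    'data':  {'weight': 0.4, 'plr': 0.00, 'qlen': 5},
}
for name, q in adjust_weights(queues).items():
    print(name, round(q['weight'], 3))
```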
For the purpose of demonstration, we implement the proposed weight-adjustment mechanisms on the Linux platform. Through the measurements and discussions, we have shown that the proposed mechanisms can effectively improve the TCP throughput in wireless networks.
|
17 |
A Packet-Buffered Mobile IP with Fast Retransmission in Wireless LANs
Lyu, Sian-Bin 19 August 2003
Today's Mobile IP supports host mobility by dynamically changing IP addresses while the mobile host roams in the Internet. However, there still exist performance problems during handoffs, such as packet loss and throughput degradation.
In this Thesis, we propose a mechanism to reduce packet loss during handoff. The packet buffering mechanism at a home agent is initiated by mobile hosts when the signal-to-noise ratio of the wireless link falls below some predefined threshold. Once the handoff has completed, the home agent immediately delivers the first packet in the buffer to the mobile host. The home agent then clears the buffered packets already received by the mobile host through the returned ACK such that no further duplicate packets are sent out. In addition, we propose a route-selection policy to reduce end-to-end transmission delay by sending out probe packets along the paths.
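A toy model of that home-agent behaviour might look like the sketch below; the SNR threshold value and the sequence-number bookkeeping are illustrative assumptions, and the real mechanism in the thesis operates inside Mobile IP on Linux rather than as a standalone class.

```python
from collections import deque

SNR_THRESHOLD_DB = 15      # illustrative value; the thesis only says "predefined"

class HomeAgentBuffer:
    """Toy model of the described behaviour: start buffering when the mobile
    host reports low SNR, and after handoff drop buffered packets that the
    returned ACK shows were already received, then deliver the rest."""

    def __init__(self):
        self.buffering = False
        self.buf = deque()

    def report_snr(self, snr_db):
        self.buffering = snr_db < SNR_THRESHOLD_DB

    def forward(self, seq, payload, send):
        if self.buffering:
            self.buf.append((seq, payload))
        else:
            send(seq, payload)

    def handoff_complete(self, last_acked_seq, send):
        while self.buf and self.buf[0][0] <= last_acked_seq:
            self.buf.popleft()                 # already received: avoid duplicates
        while self.buf:
            send(*self.buf.popleft())          # deliver remaining buffered packets
        self.buffering = False

ha = HomeAgentBuffer()
deliver = lambda seq, data: print("delivered", seq)
ha.report_snr(10)                              # weak signal: start buffering
for seq in range(1, 6):
    ha.forward(seq, b"payload", deliver)
ha.handoff_complete(last_acked_seq=2, send=deliver)   # delivers 3, 4, 5
```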
For the purpose of demonstration, we implement the mechanism on the Linux platform. Through measurements from the experiments, we show that the proposed mechanism can improve throughput and solve the packet retransmission problem during handoffs.
|
18 |
Implementation of Measurement Module For Seamless Vertical Handover
Ickin, Selim January 2010 (has links)
Research on heterogeneous seamless handover has become popular since mobility was introduced to wireless networking systems. The typical current vertical handover mechanism consists of an architecture built at Layer 3 and implemented to serve different technologies. Even though it has advantages in terms of simplicity, there are still drawbacks, such as the need to adapt the network architecture to different network technologies and systems. Transparency in heterogeneous seamless handover can be provided by conducting the handover process at a higher layer. In that way, vertical handover decisions can take more constraints into account, leading to high performance, accessibility and low cost. Making this possible requires assessing Quality of Experience (QoE) and obtaining up-to-date throughput information through measurements in the network. Analyzing and interpreting the statistics collected through these measurements is vital for deciding whether and when to perform a vertical handover. This thesis covers the implementation of the measurement module in two different approaches (a Payload Dependent Approach and a Payload Independent Approach) that provide these statistics to the storage module in the PERIMETER project.
|
19 |
Investigation of packet delay jitter metrics in face of loss and reordering
Marfull, Héctor January 2009 (links)
Nowadays mobility is a field of great importance. Travelling or moving should not mean losing one's connection to the Internet. Moreover, the current objective is not only to be connected world-wide, but to be connected always through the best available connection (ABC). This implies the need to perform vertical handovers to switch between different networks while maintaining the same Internet connection, and all of this has to be done in a way that is transparent to the user. In order to provide the highest Quality of Experience, tools are needed to check the status and performance of the different available networks, measuring and collecting statistics, so as to take advantage of each one of them. This thesis presents the theoretical basis for a measurement module by describing and analysing different metrics, with special emphasis on delay jitter, collecting and comparing different methods, and discussing their main characteristics and suitability for this goal.
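One widely used jitter metric such an analysis can build on is the RFC 3550 interarrival jitter estimator, sketched below. How it should treat lost and reordered packets is exactly the kind of question the thesis examines; the choice here to simply skip reordered packets is an assumption of the sketch, not a recommendation from the thesis.

```python
def rfc3550_jitter(records):
    """`records` is a list of (seq, send_ts, recv_ts) tuples, in arrival
    order, for packets that were actually received; lost packets simply
    never appear. Returns the running RFC 3550 interarrival jitter J."""
    jitter = 0.0
    prev = None
    highest_seq = -1
    for seq, send_ts, recv_ts in records:
        if seq <= highest_seq:
            continue                      # assumption: ignore reordered packets
        highest_seq = seq
        if prev is not None:
            # D(i,j) = (R_j - R_i) - (S_j - S_i);  J += (|D| - J) / 16
            d = (recv_ts - prev[1]) - (send_ts - prev[0])
            jitter += (abs(d) - jitter) / 16.0
        prev = (send_ts, recv_ts)
    return jitter

# Packets sent every 20 ms; seq 3 is lost, seq 5 arrives out of order.
trace = [(1, 0.000, 0.050), (2, 0.020, 0.072), (4, 0.060, 0.115),
         (6, 0.100, 0.151), (5, 0.080, 0.152)]
print(f"jitter estimate: {rfc3550_jitter(trace) * 1000:.2f} ms")
```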
|
20 |
Evaluation of TCP Performance in 3G Mobile Networks
Jogi, Mutyalu, Vundavalli, Madhu January 2011 (links)
With the increase in mobile broadband services, operators are gaining profits by providing high-speed Internet access over the mobile network. On the other hand, they are also facing challenges in giving QoS guarantees to their customers. In this thesis we investigate the impact of data rate and payload size on One Way Delay (OWD) and packet loss over TCP in 3G networks. Our goal is to evaluate the OWD and packet loss characteristics with respect to payload size and data rate from collected network-level traces. To collect these traces, an experimental testbed is set up with Endace Data Acquisition and Generation (DAG) cards; for accurate measurements, the DAG cards are combined with Global Positioning System (GPS) synchronization. The experiments are conducted for three different Swedish mobile operator networks, and the statistics of OWD and packet loss for different data rates and payload sizes are evaluated. Our results indicate that the minimal OWD occurred at higher data rates, which also showed high delay variability. Packet loss has a larger impact at higher data rates and larger payload sizes, as packet loss increases with increasing data rate and payload size.
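As a sketch of how such statistics might be derived from two GPS-synchronised captures, packets are matched by sequence number between the sender-side and receiver-side traces; the data layout and field names below are assumptions of this illustration, not the DAG toolchain actually used in the thesis.

```python
import statistics

def owd_and_loss(sender_trace, receiver_trace):
    """Each trace maps packet sequence number -> capture timestamp (seconds)
    from a GPS-synchronised capture card. Returns OWD statistics and the
    loss ratio for packets that never appear at the receiver."""
    owds = [receiver_trace[seq] - ts
            for seq, ts in sender_trace.items() if seq in receiver_trace]
    lost = len(sender_trace) - len(owds)
    return {
        "min_owd_ms": min(owds) * 1000,
        "avg_owd_ms": statistics.mean(owds) * 1000,
        "max_owd_ms": max(owds) * 1000,
        "loss_ratio": lost / len(sender_trace),
    }

# Hypothetical traces: 5 packets sent, seq 3 never captured at the receiver.
tx = {1: 0.000, 2: 0.020, 3: 0.040, 4: 0.060, 5: 0.080}
rx = {1: 0.055, 2: 0.078, 4: 0.130, 5: 0.142}
print(owd_and_loss(tx, rx))
```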
|