21

Implementation and evaluation of packet loss concealment schemes with the JM reference software / Implementation och utvärdering av metoder för att dölja paketförluster med JM-referensmjukvaran

Cooke, Henrik January 2010 (has links)
Communication over today’s IP-based networks is to some extent subject to packet loss. Most real-time applications, such as video streaming, need methods to hide this effect, since resending lost packets may introduce unacceptable delays. For IP-based video streaming applications such a method is referred to as a packet loss concealment scheme. In this thesis a recently proposed mixture-model and least-squares-based packet loss concealment scheme is implemented and evaluated together with three better-known concealment methods. The JM reference software, a publicly available software codec for the H.264 video coding standard, is used as the basis for the implementation. The evaluation is carried out by comparing the schemes in terms of objective measurements, subjective observations and a study with human observers. The recently proposed packet loss concealment scheme shows good performance with respect to the objective measures, and careful observation indicates better concealment of scenes with fast motion and rapidly changing video content. The study with human observers confirms these results for the case when a more sophisticated packetization technique is used. A new packet loss concealment scheme, based on joint modeling of motion vectors and pixels, is also investigated in the last chapter as an additional contribution of the thesis.
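As a rough illustration of the concealment idea (a hypothetical minimal sketch, not the JM decoder's actual algorithm), the simplest temporal baseline copies co-located data from the previous decoded frame into the regions lost in the current frame:

```python
def conceal_frame_copy(prev_frame, cur_frame, lost_blocks):
    """Replace each lost block in the current frame with the
    co-located block from the previous decoded frame."""
    repaired = list(cur_frame)
    for i in lost_blocks:
        repaired[i] = prev_frame[i]
    return repaired

# None marks blocks lost in transit
prev = [10, 20, 30, 40]
cur = [11, None, 31, None]
print(conceal_frame_copy(prev, cur, [1, 3]))  # [11, 20, 31, 40]
```

More sophisticated schemes, such as the mixture-model and least-squares approach evaluated in the thesis, instead estimate the lost content from surrounding motion and pixel data rather than copying it verbatim.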
22

Analysis of RED packet loss performance in a simulated IP WAN

Engelbrecht, Nico 26 June 2013 (has links)
The Internet supports a diverse set of applications with differing service requirements. Next-generation networks provide high-speed connectivity between hosts, leaving the service provider to configure network devices appropriately in order to maximize network performance. Service provider settings are based on best-practice recommendations, which leaves room to optimize these parameters further. This dissertation focuses on a packet discarding algorithm, known as random early detection (RED), to determine parameters which will maximize utilization of a resource. The two dominant traffic protocols used across an IP backbone are UDP and TCP. UDP traffic flows transmit packets regardless of network conditions, tolerating dropped packets without changing their transmission rates. TCP traffic flows, however, respond to network conditions, reducing their packet transmission rate based on packet loss, which indicates that a network is congested. The sliding window, also known as the TCP congestion window, adjusts to the number of acknowledgements the source node receives from the destination node. This mechanism provides a means to share the available bandwidth of a network among sources. A well-known and widely used simulation environment, the network simulator 2 (NS2), was used to analyze the RED mechanism. The generated traffic was validated against theory: the characteristics of both protocol traffic types comply with theoretical and practical results, and their autocorrelation functions differ as expected, with UDP traffic showing short-range dependency and TCP traffic long-range dependency.
Simulation results show the effects of the RED algorithm on network traffic and equipment performance. It is shown that random packet discarding improves the stabilization of source transmission rates, as well as node utilization. If the packet drop probability is set high, TCP source transmission rates will be low; a low packet drop probability, however, gives high transmission rates to a few sources and low transmission rates to the majority of others. An ideal packet drop probability was therefore obtained to balance TCP source transmission rates and node utilization. Statistical distributions were fitted to data sampled from the simulations, and these also show that random packet discarding improves the network. The results obtained contribute to congestion control across wide area networks: even though a number of queue management implementations exist, RED remains the one most widely deployed by service providers. / Dissertation (MEng)--University of Pretoria, 2013. / Electrical, Electronic and Computer Engineering / unrestricted
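The RED mechanism analyzed in the dissertation follows a well-known drop curve; a minimal sketch of it (threshold and weight values here are illustrative defaults, not the dissertation's tuned parameters):

```python
def ewma(avg_q, q_inst, w=0.002):
    """RED tracks a smoothed queue length: avg <- (1-w)*avg + w*q."""
    return (1 - w) * avg_q + w * q_inst

def red_drop_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Classic RED drop curve: no drops below min_th, a linear ramp
    up to max_p between the thresholds, forced drop above max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

Tuning min_th, max_th and max_p against TCP throughput and node utilization is exactly the parameter search the dissertation performs in NS2.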
23

Performance Analysis of Secondary Link with Cross-Layer Design and Cooperative Relay in Cognitive Radio Networks

Ma, Hao 06 1900 (has links)
In this thesis, we investigate two different system infrastructures in an underlay cognitive radio network, considering two popular techniques: cross-layer design and cooperative communication. In particular, we introduce Aggressive Adaptive Modulation and Coding (A-AMC) into the cross-layer design and derive, in closed form, the optimal boundary points for choosing between the AMC and A-AMC transmission modes by taking into account the Channel State Information (CSI) from the secondary transmitter to both the primary receiver and the secondary receiver. Moreover, for the cooperative communication design, we consider three different relay selection schemes: Partial Relay Selection, Opportunistic Relay Selection and Threshold Relay Selection. The Probability Density Functions (PDFs) of the Signal-to-Noise Ratio (SNR) in each hop are provided for each selection scheme, and exact closed-form expressions are then derived for the end-to-end packet loss rate of the secondary link under cooperation of the Decode-and-Forward (DF) relay.
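Two of the selection rules named above can be sketched as simple decision functions (a simplified illustration with hypothetical names; the thesis works with the resulting SNR distributions analytically rather than via such simulation):

```python
def partial_relay(first_hop_snrs):
    """Partial relay selection: choose using first-hop CSI only,
    ignoring the relay-to-destination links."""
    return max(range(len(first_hop_snrs)), key=lambda i: first_hop_snrs[i])

def opportunistic_relay(snr_pairs):
    """Opportunistic selection for DF relaying: the end-to-end quality
    of relay i is limited by its weaker hop, so maximize the minimum
    of (source->relay, relay->destination) SNR."""
    return max(range(len(snr_pairs)), key=lambda i: min(snr_pairs[i]))
```

The contrast between the two rules is what drives the different end-to-end packet loss rates: partial selection is cheaper (one hop of CSI) but can pick a relay with a poor second hop.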
24

Bridging the Gap: Integration, Evaluation and Optimization of Network Coding-based Forward Error Correction

Schütz, Bertram 18 October 2021 (has links)
The formal definition of network coding by Ahlswede et al. in 2000 has led to several breakthroughs in information theory, for example solving the bottleneck problem in butterfly networks and achieving the min-cut max-flow bound for multicast communication. Especially promising is the use of network coding as a packet-level Forward Error Correction (FEC) scheme to increase the robustness of a data stream against packet loss, also known as intra-session coding. Yet, despite these benefits, network coding-based FEC is still rarely deployed in real-world networks. To bridge this gap between information theory and real-world usage, this cumulative thesis presents our contributions to the integration, evaluation, and optimization of network coding-based FEC. The first set of contributions introduces and evaluates efficient ways to integrate coding into UDP-based IoT protocols to speed up bulk data transfers in lossy scenarios. This includes a packet-level FEC extension for the Constrained Application Protocol (CoAP) [P1] and one for MQTT for Sensor Networks (MQTT-SN), which leverages the underlying publish-subscribe architecture [P2]. The second set of contributions addresses the development of novel evaluation tools and methods to better quantify possible coding gains. This includes link ’em, our award-winning link emulation bridge for reproducible networking research [P3], and SPQER, a word recognition-based metric to evaluate the impact of packet loss on the Quality of Experience of Voice over IP applications [P5]. Finally, we highlight the impact of padding overhead for applications with heterogeneous packet lengths [P6] and introduce a novel packet-preserving coding scheme that significantly reduces this problem [P4].
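Packet-level FEC can be illustrated in its most reduced form, a single XOR repair packet per generation (a hypothetical sketch; intra-session network coding generalizes this with random linear combinations over a finite field, so that any k of n coded packets suffice):

```python
def xor_parity(packets):
    """One XOR repair packet protecting a generation of equal-length
    source packets -- the simplest packet-level FEC."""
    parity = bytes(len(packets[0]))
    for pkt in packets:
        parity = bytes(a ^ b for a, b in zip(parity, pkt))
    return parity

def recover_single_loss(received, parity):
    """XOR-ing the parity with every packet that did arrive rebuilds
    the one packet that was lost."""
    missing = parity
    for pkt in received:
        missing = bytes(a ^ b for a, b in zip(missing, pkt))
    return missing

generation = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(generation)
# packet 1 is lost in transit; the parity packet repairs it
assert recover_single_loss([generation[0], generation[2]], parity) == b"efgh"
```

The padding issue highlighted in [P6] is visible even here: all packets in a generation must be brought to equal length before coding, which wastes capacity when packet sizes are heterogeneous.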
Because many of the shown contributions can be applied to other areas of network coding research as well, this thesis does not only make meaningful contributions to specific network coding challenges, but also paves the way for future work to further close the gap between information theory and real-world usage.
25

Compensating for Unreliable Communication Links in Networked Control Systems

Henriksson, Erik January 2009 (has links)
Control systems utilizing wireless sensor and actuator networks can be severely affected by the properties of the communication links. Radio fading and interference may cause communication losses and outages in situations when the radio environment is noisy and low transmission power is desirable. This thesis proposes a method to compensate for such unpredictable losses of data in the feedback control loop by introducing a predictive outage compensator (POC). The POC is a filter to be implemented at the receiver sides of networked control systems, where it generates artificial samples when data are lost. If the receiver node does not receive the data, the POC suggests a command based on the history of past data. It is shown how to design, tune and implement a POC. Theoretical bounds and simulation results show that a POC can considerably improve the closed-loop control performance under communication losses. We provide a deterministic and a stochastic method to synthesize POCs. Worst-case performance bounds are given that relate the closed-loop performance to the complexity of the compensator. We also show that it is possible to achieve good performance with a low-order implementation based on Hankel norm approximation. Tradeoffs between achievable performance, communication loss length, and POC order are discussed. The results are illustrated on a simulated example of a multiple-tank process. The thesis is concluded by an experimental validation of wireless control of a physical lab process, where the controller and the physical system are separated geographically and interfaced through a wireless medium. For the remote control we use a hybrid model predictive controller. The results reflect the difficulties of wireless control and highlight the flexibility and possibilities obtained by using a wireless instead of a wired communication medium. / VR, SSF, VINNOVA via Networked Embedded Control Systems, EU Sixth Framework Programme via HYCON and SOCRADES
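A minimal sketch of the receiver-side idea, assuming a simple linear extrapolator in place of the thesis's synthesized Hankel-norm-approximated filters (names hypothetical):

```python
def poc_step(sample, history):
    """One receiver-side step of a hold/extrapolate compensator: pass
    a received sample through; when the packet was lost (sample is
    None), extrapolate the trend of the last two accepted samples."""
    if sample is not None:
        history.append(sample)
        return sample
    if len(history) >= 2:
        estimate = 2 * history[-1] - history[-2]  # linear trend
    elif history:
        estimate = history[-1]                    # hold last sample
    else:
        estimate = 0.0                            # nothing to go on
    history.append(estimate)
    return estimate
```

The thesis's point is precisely that such low-order compensators, when synthesized properly, recover most of the closed-loop performance lost to outages; this sketch only conveys the structural role the filter plays at the receiver.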
26

An Enhanced Dynamic Algorithm For Packet Buffer

Rajan, Vinod 11 December 2004 (has links)
A packet buffer for a protocol processor is a large memory space that holds incoming data packets for applications. Data packets for each application are stored as FIFO queues in the packet buffer, and packets are dropped when the buffer is full. An efficient buffer management algorithm is therefore required to share the buffer space among the different FIFO queues and to avoid heavy packet loss. This thesis develops a simulation model for the packet buffer and studies the performance of conventional buffer management algorithms when applied to it. It then proposes a new buffer management algorithm, Dynamic Algorithm with Different Thresholds (DADT), to improve the packet loss ratio. The algorithm takes advantage of the different packet sizes of each application and allocates buffer space to each queue proportionally. The performance of the DADT algorithm depends on the packet size distribution of the network traffic load; three different loads are considered in the simulations. For the average network traffic load, the DADT algorithm shows an improvement of 6.7% in packet loss ratio over the conventional dynamic buffer management algorithm; for the high and actual network traffic loads, it shows improvements of 5.45% and 3.6% respectively. Based on the simulation results, the DADT algorithm outperforms the conventional buffer management algorithms for various network traffic loads.
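For contrast, the conventional dynamic-threshold admission rule that shared-buffer schemes like DADT build on can be sketched as follows (a textbook-style illustration, not the DADT algorithm itself):

```python
def admit_packet(queue_len, buffer_size, buffer_used, alpha=1.0):
    """Conventional dynamic-threshold admission: a packet may join a
    queue only while that queue is shorter than alpha times the
    currently unused buffer space, so thresholds shrink as the
    shared buffer fills up."""
    free_space = buffer_size - buffer_used
    return queue_len < alpha * free_space
```

DADT departs from this baseline by giving each application's queue a different threshold, scaled to its typical packet size, rather than a single alpha for all queues.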
27

Quality of Experience for the Operation of a Small Scale Ground Vehicle over Unreliable Wireless Links

Saadou Yaye, Abdoulaye 17 September 2015 (has links)
No description available.
28

Improved algorithms for TCP congestion control

Edwan, Talal A. January 2010 (has links)
Reliable and efficient data transfer on the Internet is an important issue. Since the late 70s the protocol responsible for this has been the de facto standard TCP, which has proven successful throughout the years; its self-managed congestion control algorithms have preserved the stability of the Internet for decades. However, a variety of new technologies, such as high-speed networks (e.g. fibre optics) in high-speed long-delay set-ups (e.g. cross-Atlantic links) and wireless technologies, have posed many challenges to TCP congestion control algorithms. The congestion control research community has proposed solutions to most of these challenges. This dissertation adds to the existing work in three ways. Firstly, tackling the high-speed long-delay problem of TCP, we propose enhancements to one of the existing TCP variants (part of the Linux kernel stack) and then propose our own variant: TCP-Gentle. Secondly, tackling the challenge of passively differentiating wireless loss from congestive loss, we propose a novel loss differentiation algorithm which quantifies the noise in packet inter-arrival times and uses this information, together with the span (ratio of maximum to minimum packet inter-arrival times), to adapt the multiplicative decrease factor according to a predefined logical formula. Finally, extending the well-known drift model of TCP to account for wireless loss and some hypothetical cases (e.g. variable multiplicative decrease), we undertake a stability analysis of the new version of the model.
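The adaptive multiplicative decrease idea can be sketched as follows (a hypothetical illustration: the thesis derives the decrease factor from inter-arrival noise and span via a logical formula, whereas the fixed 0.5/0.9 factors here are purely illustrative):

```python
def aimd_update(cwnd, event, beta_congestion=0.5, beta_wireless=0.9):
    """AIMD core with an adaptive multiplicative decrease: back off
    hard on a congestive loss, gently when the loss is judged
    non-congestive (e.g. a wireless transmission error)."""
    if event == "ack":
        return cwnd + 1.0  # additive increase, one segment per RTT
    if event == "congestion_loss":
        return max(1.0, cwnd * beta_congestion)
    if event == "wireless_loss":
        return max(1.0, cwnd * beta_wireless)
    return cwnd
```

The payoff of correct differentiation is visible in the factors: misclassifying a wireless loss as congestive halves the window needlessly, which is exactly the throughput penalty the thesis's passive loss differentiation algorithm aims to avoid.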
29

Performance of 3G data services over Mobile Networks in Sweden

Kommalapati, Ravichandra January 2010 (has links)
Emerging technologies in the field of telecommunications enable us to access high-speed data services through mobile handsets and portable modems over mobile networks. Recent statistics also show that the use of mobile broadband services is increasing and gaining popularity. In this thesis we investigate, through network-level measurements, the impact of payload size and data rate on one-way delay and packet loss in operational 3G mobile networks. An experimental testbed was developed to collect the network-level traces; for accurate measurement, Endace DAG cards with GPS synchronization were used. Results were gathered from three different commercial mobile operators in Sweden. From the results it is concluded that the combination of maximum payload size and maximum data rate yields the minimum one-way delay. It is also observed that within the large payload size category the percentage of packet loss is lower than for smaller payload sizes. Such findings will help application developers meet the challenges of UMTS network conditions.
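With GPS-synchronized capture at both ends, computing one-way delay and loss reduces to a per-sequence-number comparison of the two traces; a minimal sketch (trace format and names hypothetical):

```python
def one_way_stats(sent, received):
    """sent and received map sequence number -> capture timestamp.
    With GPS-synchronised capture cards the two clocks agree, so the
    one-way delay of a packet is simply rx - tx; sequence numbers
    never seen at the receiver count as lost."""
    delays = [received[s] - t for s, t in sorted(sent.items()) if s in received]
    loss_ratio = 1 - len(delays) / len(sent)
    return delays, loss_ratio
```

Without synchronized clocks, rx - tx would absorb the unknown clock offset, which is why the testbed's GPS discipline matters for one-way (as opposed to round-trip) measurements.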
30

Reliability in wireless sensor networks / Fiabilisation des transmissions dans les réseaux de capteurs sans fils

Maalel, Nourhene 30 June 2014 (has links)
Vu les perspectives qu'ils offrent, les réseaux de capteur sans fil (RCSF) ont perçu un grand engouement de la part de la communauté de recherche ces dernières années. Les RCSF couvrent une large gamme d'applications variant du contrôle d'environnement, le pistage de cible aux applications de santé. Les RCSFs sont souvent déployés aléatoirement. Ce dispersement des capteurs nécessite que les protocoles de transmission utilisés soient résistants aux conditions environnementales (fortes chaleurs ou pluies par exemple) et aux limitations de ressources des nœuds capteurs. En effet, la perte de plusieurs nœuds capteurs peut engendrer la perte de communication entre les différentes entités. Ces limitations peuvent causer la perte des paquets transmis ce qui entrave l'activité du réseau. Par conséquent, il est important d'assurer la fiabilité des transmissions de données dans les RCSF d'autant plus pour les applications critiques comme la détection d'incendies. Dans cette thèse, nous proposons une solution complète de transmission de données dans les RCSF répondant aux exigences et contraintes de ce type de réseau. Dans un premier temps, nous étudions les contraintes et les challenges liés à la fiabilisation des transmissions dans les RCSFs et nous examinons les travaux proposés dans la littérature. Suite à cette étude nous proposons COMN2, une approche distribuée et scalable permettant de faire face à la défaillance des nœuds. Ensuite, nous proposons un mécanisme de contrôle d'erreur minimisant la perte de paquets et proposant un routage adaptatif en fonction de la qualité du lien. Cette solution est basée sur des acquittements implicites (overhearing) pour la détection des pertes des paquets. Nous proposons ensuite ARRP une variante de AJIA combinant les avantages des retransmissions, de la collaboration des nœuds et des FEC. Enfin, nous simulons ces différentes solutions et vérifions leurs performances par rapport à leurs concurrents de l'état de l'art. 
/ Over the past decades, we have witnessed a proliferation of potential application domains for wireless sensor networks (WSN). A comprehensive number of new services such as environment monitoring, target tracking, military surveillance and healthcare applications have arisen. These networked sensors are usually deployed randomly and left unattended to perform their mission properly and efficiently. Meanwhile, sensors have to operate in a constrained environment with functional and operational challenges mainly related to resource limitations (energy supply, scarce computational abilities) and to the noisy real world of deployment. This harsh environment can cause packet loss or node failure which hamper the network activity. Thus, continuous delivery of data requires reliable data transmission and adaptability to the dynamic environment. Ensuring network reliability is consequently a key concern in WSNs, and it is even more important in emergency applications such as disaster management, where reliable data delivery is the key success factor. The main objective of this thesis is to design a reliable end-to-end solution for data transmission fulfilling the requirements of constrained WSNs. We tackle two design issues, namely recovery from node failure and from packet losses, and propose solutions to enhance network reliability. We start by studying WSN features, with a focus on the technical challenges and techniques of reliability, in order to identify the open issues. Based on this study, we propose a scalable and distributed approach for network recovery from node failures in WSNs called CoMN2. Then, we present a lightweight mechanism for packet loss recovery and route quality awareness in WSNs called AJIA. This protocol exploits the overhearing feature characterizing wireless channels as an implicit acknowledgment (ACK) mechanism. 
In addition, the protocol allows for an adaptive selection of the routing path by performing the required retransmissions on the most reliable link. We show that AJIA outperforms its competitor AODV in terms of delivery ratio under different channel conditions. Thereafter, we present ARRP, a variant of AJIA, combining the strengths of retransmissions, node collaboration and Forward Error Correction (FEC) in order to provide a reliable packet loss recovery scheme. We verify the efficiency of ARRP through extensive simulations, which demonstrate its high reliability in comparison to its competitor.
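The overhearing-as-implicit-ACK idea described for AJIA can be sketched as follows (a hypothetical minimal illustration with invented names, not the protocol's actual implementation):

```python
class ImplicitAckSender:
    """Overhearing-based loss detection: a packet counts as delivered
    once the next hop is overheard forwarding it, so no explicit ACK
    frame is needed; anything still pending after the timeout is
    scheduled for retransmission."""

    def __init__(self, timeout=0.2):
        self.pending = {}  # seq -> time the packet was sent
        self.timeout = timeout

    def send(self, seq, now):
        self.pending[seq] = now

    def overhear(self, seq):
        # Overhearing the next hop's forward acts as an implicit ACK.
        self.pending.pop(seq, None)

    def due_for_retransmission(self, now):
        return [s for s, t in self.pending.items() if now - t > self.timeout]
```

The appeal in a WSN setting is energy: the acknowledgment rides on a transmission the next hop must make anyway, saving the radio time an explicit ACK would cost.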
