  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Exploiting the implicit error correcting ability of networks that use random network coding / by Suné von Solms

Von Solms, Suné January 2009 (has links)
In this dissertation, we developed a method that uses the redundant information implicitly generated inside a random network coding network to apply error correction to the transmitted message. The results show that this implicit error correcting method can reduce the effect of errors in a random network coding network without the addition of redundant information at the source node, and it presents numerous advantages over documented concatenated error correction methods. We found that various error correction schemes can be implemented without adding redundancy at the source nodes. The decoding ability of the method depends on the network characteristics: large networks with a high level of interconnectivity yield more redundant information, allowing more advanced error correction schemes to be implemented. Network coding networks are prone to error propagation; we present results on the effect of link error probability on our scheme and show that it outperforms concatenated error correction schemes for low link error probability. / Thesis (M.Ing. (Computer Engineering))--North-West University, Potchefstroom Campus, 2010.
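The mechanism this entry relies on can be illustrated with a minimal random linear network coding sketch over GF(2). This is a simplification for illustration only, not the author's implementation: practical random network coding typically uses larger fields such as GF(2^8). Coded packets carry their coefficient vectors; any k linearly independent combinations recover the k source packets, and combinations received beyond rank k are the implicit redundancy the dissertation exploits for error correction.

```python
import random

def rlnc_encode(packets, n_coded, rng):
    """Each coded packet is a random GF(2) linear combination of the
    source packets; the coefficient vector rides along as a header."""
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[rng.randrange(k)] = 1  # avoid the all-zero combination
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gauss-Jordan elimination over GF(2). Any k linearly independent
    coded packets suffice; surplus packets are implicit redundancy."""
    rows = [(list(c), p) for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # not enough independent combinations received
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rows[i][1] for i in range(k)]

rng = random.Random(7)
source = [0x41, 0x42, 0x43]          # three one-byte source packets
coded = rlnc_encode(source, 6, rng)  # the network delivers 6 combinations
print(rlnc_decode(coded, 3))         # surplus rank beyond 3 is redundancy
```

With high probability the six random combinations have rank 3, so the receiver decodes from any three independent ones and can use the surplus for consistency checks.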
33

Centralized and distributed address correlated network coding protocols / Optimisation et application du codage réseau dans l'architecture des futurs réseaux sans fils

Abdul-Nabi, Samih 28 September 2015 (has links)
Network coding (NC) is a new technique in which transmitted data is encoded and decoded by the nodes of the network in order to enhance throughput and reduce delays. Using algebraic algorithms, encoding at nodes combines several packets into one message and decoding restores these packets. NC requires fewer transmissions to deliver all the data, but more processing at the nodes. NC can be applied at any of the ISO layers; in this work the focus is on the network layer. We introduce novelties to the NC paradigm with the intent of building easy-to-implement NC protocols that improve bandwidth usage, enhance QoS and reduce the impact of packet loss in lossy networks. Several challenges are addressed concerning the coding and decoding processes and the related mechanisms used to deliver packets between end nodes. Notably, questions such as the life cycle of packets in a coding environment, the cardinality of coded messages, the byte overhead of transmissions and the buffering time duration are inspected, analytically counted, supported by theorems and then verified through simulations. By studying the packet-loss problem, new theorems describing the behavior of the network under loss are proposed, along with novel mechanisms to overcome this loss and reduce the load. In the first part of the thesis, an overview of NC is given, starting from the seminal work of Ahlswede et al. NC techniques are then detailed, with a focus on linear and binary NC, and illustrated with examples drawn from different scenarios to help in understanding the advantages and disadvantages of each technique. In the second part, a new address correlated NC (ACNC) protocol is presented, and two approaches using it are introduced: the centralized approach, where decoding is conducted at end nodes, and the distributed decoding approach, where each node in the network participates in the decoding process. Centralized decoding is elaborated by first presenting its decision models and the detailed decoding procedure at end nodes. The cardinality of received coded messages and the buffering requirements at end nodes are investigated, and the concepts of aging and maturity are introduced. The distributed decoding approach is presented as a solution that reduces the overhead on end nodes by distributing the decoding process and the buffering requirements to intermediate nodes. Loss and recovery in NC are examined for both approaches. For centralized decoding, two mechanisms to limit the impact of loss are presented: the concepts of closures and covering sets are introduced, and covering-set discovery is conducted on undecodable messages to find the optimized set of packets to request from the sender in order to decode all received packets. For distributed decoding, a new hop-by-hop reliability mechanism is proposed that takes advantage of NC itself and detects loss without the need for an acknowledgement mechanism.
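In its simplest form, the binary NC that ACNC builds on reduces to the classic two-way relay exchange: the relay XORs one packet from each direction and broadcasts the result once, and each end node cancels its own contribution to recover the other packet. A minimal sketch (illustrative only, not the ACNC protocol itself):

```python
def xor_encode(pkt_a, pkt_b):
    """Relay combines one packet from each direction into a single broadcast."""
    return bytes(a ^ b for a, b in zip(pkt_a, pkt_b))

def xor_decode(coded, own_pkt):
    """An end node XORs out its own packet to recover the other one."""
    return bytes(c ^ p for c, p in zip(coded, own_pkt))

alice, bob = b"hello", b"world"
coded = xor_encode(alice, bob)     # one relay transmission instead of two
assert xor_decode(coded, alice) == bob
assert xor_decode(coded, bob) == alice
```

This is where the reduction in total transmissions mentioned above comes from: the relay sends one coded packet where a store-and-forward relay would send two.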
34

Priority-Based Data Transmission in Wireless Networks using Network Coding

Ostovari, Pouya January 2015 (has links)
With the rapid development of mobile device technology, these devices are becoming very popular and a part of our everyday lives. Equipped with wireless radios, such as cellular and WiFi, they affect almost every aspect of our lives: people use smartphones and tablets to access the Internet, watch videos, and chat with their friends. The wireless connections that these devices provide are more convenient than wired connections. However, there are two main challenges in wireless networks: error-prone wireless links and limited network resources. Network coding, a technique in which the original packets are mixed together using algebraic operations, is widely used to provide reliable data transmission and to use network resources efficiently. In this dissertation, we study the applications of network coding in making wireless transmissions robust against transmission errors and in efficient resource management. In many types of data, different parts have different importance. For instance, in numeric data the importance decreases from the most significant to the least significant bit; likewise, in multi-layer videos the packets in different layers are not equally important. We propose novel data transmission methods for wireless networks that consider the unequal importance of the different parts of the data. To provide robust data transmissions and use the limited resources efficiently, we use random linear network coding, a type of network coding. In the first part of this dissertation, we study the application of network coding in resource management. To use the limited storage of cache nodes efficiently, we propose triangular network coding for content distribution. We also design a scalable video-on-demand system, which uses helper nodes and network coding to provide users with their desired video quality. In the second part, we investigate the application of network coding in providing robust wireless transmissions. We propose symbol-level network coding, in which each packet is partitioned into symbols of different importance, and a method that uses network coding to make multi-layer videos robust against transmission errors. / Computer and Information Science
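The unequal-importance idea above can be made concrete with a back-of-the-envelope redundancy calculation. Assuming random linear network coding makes any k of n coded packets sufficient to decode a layer, and independent packet loss, each layer's transmission budget can be sized so that more important layers decode with higher probability. The layer sizes and targets below are hypothetical numbers for illustration, not taken from the dissertation.

```python
import math

def layer_budget(k, loss, target):
    """Smallest n such that P[at least k of n packets survive] >= target,
    assuming independent packet loss with probability `loss`."""
    n = k
    while True:
        p_ok = sum(math.comb(n, i) * (1 - loss) ** i * loss ** (n - i)
                   for i in range(k, n + 1))
        if p_ok >= target:
            return n
        n += 1

# Hypothetical 3-layer video: the base layer gets the strictest target.
layers = [(8, 0.999), (8, 0.99), (8, 0.9)]   # (source packets, decode target)
budgets = [layer_budget(k, 0.1, t) for k, t in layers]
print(budgets)  # more redundancy is allocated to more important layers
```

The "any k of n" assumption is exactly what random linear coding provides (with high probability), which is why redundancy can be budgeted per layer rather than per packet.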
35

Peer to peer multicast overlay for smart content delivery

Graça, Afonso da Rocha January 2012 (has links)
Integrated Master's thesis (Tese de Mestrado Integrado), Informatics and Computing Engineering, Faculdade de Engenharia, Universidade do Porto, 2012.
36

Network Coding in Multihop Wireless Networks: Throughput Analysis and Protocol Design

Yang, Zhenyu 29 April 2011 (has links)
Multi-hop wireless networks have been widely considered a promising approach to more convenient Internet access, thanks to their easy deployment, extended coverage, and low deployment cost. However, providing high-speed and reliable services in these networks is challenging due to unreliable wireless links, the broadcast nature of wireless transmissions, and frequent topology changes. Network coding (NC) is a technique that can significantly improve network throughput and transmission reliability by allowing intermediate nodes to combine received packets. The more recently proposed symbol-level network coding (SLNC), which combines packets at the finer symbol scale, is an even more powerful technique for mitigating the impact of lossy links and packet collisions in wireless networks. NC, and especially SLNC, is thus a particularly effective approach for providing higher data rates and better transmission reliability for applications such as mobile content distribution in multihop wireless networks. This dissertation focuses on exploiting NC in multihop wireless networks. We studied the unique features of NC and designed a suite of distributed and localized algorithms and protocols for content distribution networks using NC and SLNC. We also carried out a theoretical study of the network capacity and performance bounds achievable by SLNC in mobile wireless networks. We proposed CodeOn and CodePlay for popular content distribution and live multimedia streaming (LMS) in vehicular ad hoc networks (VANETs), respectively, taking into consideration many important practical factors, including vehicle distribution, mobility pattern, channel fading and packet collision. Specifically, CodeOn is a novel push-based popular content distribution scheme based on SLNC, where contents are actively broadcast to vehicles from roadside access points and further distributed among vehicles using a cooperative VANET. In order to fully realize the benefits of SLNC, we proposed a suite of techniques to maximize the downloading rate, including a prioritized and localized relay selection mechanism, where the selection criterion is based on the usefulness of contents possessed by vehicles, and a lightweight medium access protocol that naturally exploits the abundant concurrent transmission opportunities. CodePlay is designed for LMS applications in VANETs and takes full advantage of SLNC through a coordinated local push mechanism. Streaming contents are actively disseminated from dedicated sources to interested vehicles via local coordination of distributively selected relays, each of which ensures smooth playback for nearby vehicles. CodeOn pursues the single objective of maximizing the downloading rate, while CodePlay improves LMS service in terms of streaming rate, service delivery delay, and bandwidth efficiency simultaneously. CodeOn and CodePlay are among the first works that exploit the features of SLNC to simplify protocol design while achieving better performance. We also developed an analytical framework to compute the expected achievable throughput of mobile content distribution in VANETs using SLNC. We presented a general analytical model for the expected achievable throughput of SLNC in a static wireless network based on flow network theory and queuing theory, and then extended the model to derive the expected achievable accumulated throughput of a vehicle driving through the area of interest under a given mobility pattern. Our framework captures the effects of multiple practical factors, including vehicle distribution and mobility pattern, channel fading and packet collision, and we characterized the impact of those factors on the expected achievable throughput. The results of this research are not only of theoretical interest but also provide insights and guidelines for protocol design in SLNC-based networks.
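SLNC's advantage over packet-level coding can be seen in a toy example: when coding and decoding happen per symbol, a partially corrupted packet still yields useful symbols instead of being discarded whole. A minimal XOR-based sketch follows; it is illustrative only, since actual SLNC applies random linear coding at symbol granularity rather than plain XOR.

```python
def split_symbols(packet, sym_len):
    """Partition a packet into fixed-size symbols."""
    return [packet[i:i + sym_len] for i in range(0, len(packet), sym_len)]

def symbol_xor(p, q):
    """Code two packets symbol by symbol (binary NC at symbol scale)."""
    return [bytes(a ^ b for a, b in zip(s, t)) for s, t in zip(p, q)]

sa = split_symbols(b"ABCDEF", 2)   # symbols of packet A
sb = split_symbols(b"UVWXYZ", 2)   # symbols of packet B
coded = symbol_xor(sa, sb)

# Suppose symbol 1 of the coded packet arrives corrupted: the receiver can
# still recover the other two symbols of B from its clean copies of A,
# whereas packet-level coding would have to drop the whole packet.
recovered = [bytes(x ^ y for x, y in zip(coded[i], sa[i])) for i in (0, 2)]
assert recovered == [sb[0], sb[2]]
```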
37

Bandwidth-efficient video streaming with network coding on peer-to-peer networks

Huang, Shenglan January 2017 (has links)
Over the last decade, live video streaming applications have gained great popularity among users but put great pressure on video servers and the Internet. To satisfy the growing demand for live video streaming, Peer-to-Peer (P2P) architectures have been developed to relieve video servers of bandwidth bottlenecks and computational load. Furthermore, Network Coding (NC) has been proposed and proven a significant breakthrough in information theory and coding theory. According to previous research, NC not only brings substantial improvements in throughput and delay, but also provides innovative solutions to multiple resource-allocation issues, such as the coupon-collector problem and the allocation and scheduling procedure. However, the complexity of NC-driven P2P streaming networks poses substantial challenges to the packet scheduling algorithm. This thesis focuses on packet scheduling algorithms for video multicast in NC-driven P2P streaming networks; they determine how the upload bandwidth of peer nodes is allocated in different transmission scenarios to achieve a better Quality of Service (QoS). First, an optimized rate allocation algorithm is proposed for scalable video transmission (SVT) in NC-based lossy streaming networks. This algorithm achieves a tradeoff between average video distortion and average bandwidth redundancy in each generation: it determines how senders allocate their upload bandwidth to the different classes of scalable data so that the sum of the distortion and the weighted redundancy ratio is minimized. Second, in the NC-based non-scalable video transmission system, the bandwidth inefficiency caused by asynchronous communication among peers is reduced. A scalable compensation model and an adaptive push algorithm are proposed to reduce unrecoverable transmissions caused by network loss and insufficient bandwidth resources. Then a centralized packet scheduling algorithm is proposed to reduce uninformative transmissions caused by asynchronous communication among sender nodes. Subsequently, we propose a distributed packet scheduling algorithm, which adds a critical scalability property to the packet scheduling model. Third, bandwidth resource scheduling for SVT is studied further. A novel multiple-generation scheduling algorithm is proposed to determine the quality classes that a receiver node can subscribe to so that the overall perceived video quality is maximized. A single-generation scheduling algorithm for SVT is also proposed to provide a faster and easier solution to the video quality maximization function. Thorough theoretical analysis is conducted in the development of all proposed algorithms, and their performance is evaluated via comprehensive simulations. We demonstrate that, by adjusting the conventional transmission model and introducing new packet scheduling models, the overall QoS and bandwidth efficiency are dramatically improved. In the non-scalable video streaming system, the maximum video quality gain is around 5 dB compared with the random push method, and the overall uninformative transmission ratio is reduced to 1-2%. In the scalable video streaming system, the maximum video quality gain is around 7 dB, and the overall uninformative transmission ratio is reduced to 2-3%.
38

Using topological information in opportunistic network coding / by Magdalena Johanna (Leenta) Grobler

Grobler, Magdalena Johanna January 2008 (has links)
Thesis (M.Ing. (Computer and Electronical Engineering))--North-West University, Potchefstroom Campus, 2009.
39

Cooperative Strategies in Multi-Terminal Wireless Relay Networks

Du, Jinfeng January 2012 (has links)
Smart phones and tablet computers have greatly boosted the demand for services via wireless access points, keeping constant pressure on network providers to deliver vast amounts of data over the wireless infrastructure. To enlarge coverage and enhance throughput, relaying has been adopted in the new generation of wireless communication systems, such as the Long-Term Evolution Advanced standard, and will continue to play an important role in the next generation of wireless infrastructure. Depending on functionality, relaying can be characterized into three main categories: amplify-and-forward (AF), compress-and-forward (CF), and decode-and-forward (DF). In this thesis, we investigate different cooperative strategies in wireless networks when relaying is in use. We first investigate capacity outer and inner bounds for a wireless multicast relay network where two sources, connected by an error-free backhaul, multicast to two destinations with the help of a full-duplex relay node. For high-rate backhaul scenarios, we find the exact cut-set bound of the capacity region by extending the proof of the converse for the Gaussian relay channel. For low-rate backhaul scenarios, we present two genie-aided outer bounds by extending the previous proof and introducing two lemmas on conditional (co-)variance. Our inner bounds are derived from various cooperative strategies that combine DF/CF/AF relaying with network coding schemes. We also extend the noisy network coding scheme and the short-message noisy network coding approach to correlated sources. For low-rate backhaul, we propose a new coding scheme, partial-decode-and-forward based linear network coding. We derive the achievable rate regions for these schemes and measure their performance in terms of achievable rates over Gaussian channels. By numerical investigation we observe significant gains over benchmark schemes and demonstrate that the gap between the upper and lower bounds is in general not large. We also show that for high-rate backhaul, the cut-set bound can be achieved when the signal-to-noise ratios lie in the sphere defined by the source-relay and relay-destination channel gains. For wireless networks with independent noise, we propose a simple framework for obtaining capacity outer and inner bounds based on "one-shot" bounding models. We first extend the models for two-user broadcast channels to many-user scenarios and then establish the gap between the upper and lower bounding models. For networks with coupled links, we propose a channel decoupling method that decomposes the network into overlapping multiple-access and broadcast channels. We then apply the one-shot models and create an upper bounding network with only bit-pipe connections. When developing the lower bounding network, we propose a two-step update of these models for each coupled broadcast and multiple-access channel. We demonstrate by examples that the resulting upper bound is in general very good and that the gap between the upper and lower bounds is usually not large. For relay-aided downlink scenarios, we propose a cooperation scheme that cancels interference at the transmitter; it is in effect a symbol-by-symbol approach to one-dimensional dirty paper coding (DPC). For finite-alphabet signaling and interference, we derive the optimal (in terms of maximum mutual information) modulator under a given power constraint. A sub-optimal modulator is also proposed by formulating an optimization problem that maximizes the minimum distance of the signal constellation; this non-convex optimization problem is approximately solved by semi-definite relaxation. Bit-level simulation shows that the optimal and sub-optimal modulators achieve significant gains over the Tomlinson-Harashima precoder (THP) benchmark and over non-DPC reference schemes, especially when the power of the interference is larger than the power of the noise.
40

A Quest for High-performance Peer-to-peer Live Multimedia Streaming

Wang, Mea 01 August 2008 (has links)
Demand for multimedia content is continuously increasing at a phenomenal pace, as video features become commonly available on personal devices such as iPods, cell phones, laptops, PDAs, and BlackBerrys. Streaming services pose unique bandwidth and delay challenges to application designers. A typical video is usually orders of magnitude larger than any other type of content, resulting in high demands on the bandwidth contributed by content providers. Even more challenging, the content must be delivered to end hosts in real time to maintain smooth playback, i.e., it must be transmitted at a satisfactory rate. In this thesis, we present our research towards a high-quality peer-to-peer live streaming system that utilizes network coding, a novel technique that permits coding at every peer and has proven benefits in file dissemination applications. To ensure the practicality of our work, we have made it an imperative objective to conduct all experiments under realistic settings.
