91

Transmission Schemes, Caching Algorithms and P2P Content Distribution with Network Coding for Efficient Video Streaming Services

Kao, Yung-cheng 23 February 2010 (has links)
For more than a decade, streaming media services, including online conferences, distance education and movie broadcasting, have gained much popularity on the Internet. Due to the high bandwidth requirements and long-lived nature of video streaming, supporting these services requires a huge transmission cost. In addition, adapting rich multimedia content to satisfy various resource-constrained devices presents a challenge. The limited and time-varying network bandwidth complicates the content adaptation tasks, and differentiated content delivery may be required to meet diverse client profiles and user preferences. Therefore, in order to reduce the transmission cost of serving heterogeneous clients for efficient streaming, this dissertation proposes several novel schemes, including a transcoding-enabled proxy caching scheme, reactive transmission schemes, and a network coding P2P content distribution scheme, to support efficient multiple-version and layered video delivery in a proxy-attached network environment as well as to provide efficient interactive IPTV service in a peer-to-peer network. Firstly, for multiple-version caching in the transcoding-enabled proxy, we focus on reducing the required server bandwidth and startup delay by caching the optimal versions of each video. A generalized video object profit function is derived from the extended weighted transcoding graph to calculate the individual cache profit of a given version of a video object, and the aggregate profit from caching multiple versions of the same video object. The proposed function takes into account the popularity of each version of a video object, the transcoding delay among versions, and the average access duration of each version. Based on this profit function, cache replacement algorithms are proposed to reduce the startup delay and network traffic by efficiently caching the video objects with maximum profits. Next, a set of proxy-assisted transmission schemes is proposed to reduce the transmission cost of layered video streaming by integrating proxy caching with reactive transmission schemes, peer-to-peer mesh networks and multicast capability. These schemes allow multiple requests to be serviced by a single transmission and thus significantly reduce the total required transmission cost. The optimal proxy prefix cache allocation is also calculated for each transmission scheme to identify the cached layers and cached length of each video that minimize the aggregate transmission cost. The calculation accounts for the fact that the reduction in transmission cost obtained by caching X layers of a video comes not only from requests for X layers, but also from requests for fewer than X layers. Finally, we propose a network coding equivalent content distribution (NCECD) scheme to decrease server stress, startup delay and jumping latency in order to support the random access operations that are desirable for peer-to-peer on-demand video streaming. Random access operations are difficult to support efficiently because of the asynchronous interactive behavior of users and the dynamic nature of peers. In NCECD, videos are divided into segments, which are further divided into blocks; these blocks are encoded into independent coded blocks that are distributed to the local storage of different peers. With NCECD, a new client only needs to connect to a sufficient number of parent peers to view the whole video and rarely needs to find new parents when performing random access operations.
Whereas most existing methods must search for parent peers holding the segments of interest, NCECD uses the properties of network coding to cache equivalent content on most peers, so that such searches are rarely needed. An analysis of the system parameters is given to achieve reasonable block loss rates for peer-to-peer interactive video-on-demand streaming. Experimental results demonstrate that the proposed schemes lead to significant transmission cost savings, high delay and bandwidth saving ratios, low startup, jumping-search and new-parent connection delays, and lower server resource usage. Hence, these schemes can be integrated to build an efficient video streaming platform that provides high-performance, high-quality IPTV services to a diversity of clients.
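The block-level coding step that makes "equivalent content" possible can be illustrated with random linear network coding: a segment is split into blocks, peers store independently coded blocks (random linear combinations of the originals), and any set of coded blocks whose coefficient vectors have full rank suffices to recover the segment. The sketch below is a minimal illustration of this idea over GF(2), not the dissertation's NCECD implementation; the field choice, block sizes and peer counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def encode_segment(blocks: np.ndarray, n_coded: int):
    """Produce coded blocks as random GF(2) combinations of the original blocks.

    blocks: (k, block_len) array of bits (0/1), one row per original block.
    Returns (coeffs, coded): coefficient vectors and coded payloads over GF(2).
    """
    k = blocks.shape[0]
    coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
    coded = coeffs @ blocks % 2            # XOR-combinations of the original blocks
    return coeffs, coded

def decode_segment(coeffs: np.ndarray, coded: np.ndarray, k: int):
    """Recover the k original blocks by Gaussian elimination over GF(2).

    Returns None if the collected coefficient vectors are rank deficient.
    """
    A = np.concatenate([coeffs.astype(np.uint8), coded.astype(np.uint8)], axis=1)
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            return None                    # not enough independent coded blocks yet
        A[[row, pivot]] = A[[pivot, row]]  # move pivot row into place
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]             # eliminate this column in other rows
        row += 1
    return A[:k, k:]                       # reduced rows now hold the original blocks

# A toy "segment": 4 original blocks of 16 bits each.
original = rng.integers(0, 2, size=(4, 16), dtype=np.uint8)
# A new client gathers 6 coded blocks from arbitrary parent peers.
coeffs, coded = encode_segment(original, n_coded=6)
recovered = decode_segment(coeffs, coded, k=4)
if recovered is None:
    print("coefficient vectors were rank deficient; fetch more coded blocks")
else:
    print("segment recovered:", np.array_equal(recovered, original))
```

Because any full-rank collection of coded blocks is as good as any other, a client joining or jumping within the video can pull from whichever parents are available instead of searching for peers that hold specific segments.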
92

Cooperative Communication with Network Coding

Song, I-lin 21 January 2010 (has links)
To effectively combat MAI and MI in wireless networks, we exploit the complementary code technique in this thesis. Terminals in a cooperative communication system not only transmit or relay, but also employ a novel strategy, network coding, which has been investigated widely. In our work, we aim to combine network coding with the conventional cooperative communication system, but doing so raises certain problems. A cooperative system obtains diversity at the destination, but when a network coding operation is involved it theoretically violates the requirements for diversity, since the new signals transmitted by the relay are no longer the same as the signals from the sources. However, we present a method to solve this problem: using a multiplier at the relay nodes in place of the conventional network coding operation, XOR. After constructing the network-coding-based system, our goal is to achieve diversity in the cooperative communication system. For the performance analysis we use maximum ratio combining (MRC), which is the optimal combining strategy. The detailed mathematical derivations are presented in the following chapters.
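Maximum ratio combining weights each received copy by its conjugate channel gain (scaled by the branch noise power), which maximizes the post-combining SNR and yields the diversity gain discussed above. The snippet below is a generic numpy illustration of two-branch MRC for BPSK over independent Rayleigh fading; it is not the thesis's relay model, and the SNR, fading and noise parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sym, snr_db = 100_000, 6
noise_var = 10 ** (-snr_db / 10)          # noise variance per branch (unit symbol energy)

bits = rng.integers(0, 2, n_sym)
x = 2 * bits - 1                          # BPSK symbols +/-1

# Two independently Rayleigh-faded copies (e.g. a direct path and a relayed path).
h = (rng.normal(size=(2, n_sym)) + 1j * rng.normal(size=(2, n_sym))) / np.sqrt(2)
noise = np.sqrt(noise_var / 2) * (rng.normal(size=(2, n_sym)) + 1j * rng.normal(size=(2, n_sym)))
y = h * x + noise

# MRC: weight each branch by its conjugate channel gain and sum.
# (With equal noise variance per branch, the 1/noise_var scaling cancels.)
combined = np.sum(np.conj(h) * y, axis=0)
ber_mrc = np.mean((combined.real > 0).astype(int) != bits)

# Single-branch coherent detection for comparison (no diversity).
single = np.conj(h[0]) * y[0]
ber_single = np.mean((single.real > 0).astype(int) != bits)
print(f"BER single branch: {ber_single:.4f}, BER with two-branch MRC: {ber_mrc:.4f}")
```

The steeper BER slope of the combined output versus the single branch is the diversity effect the thesis aims to preserve when the relay forwards a network-coded signal.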
93

Efficient content distribution in IPTV environments

Galijasevic, Mirza, Liedgren, Carl January 2008 (has links)
Existing VoD solutions often rely on unicast to distribute content, which leads to a higher load on the VoD server as more nodes become interested in the content. In such cases, P2P is an alternative way of distributing content, since it makes better use of the available resources in the network. In this report, several P2P structures are evaluated from an operator's point of view. We believe BitTorrent is the most suitable protocol for a P2P solution in IPTV environments. Two BitTorrent clients have been implemented on an IP-STB as a proof of concept to find out whether P2P is suited to IPTV environments. Several tests were conducted to evaluate the performance of both clients and to see whether they could reach sufficient throughput on the IP-STB. Based on the tests and our overall impressions, we are convinced that this particular P2P protocol is well suited to IPTV environments. A client developed from scratch for the IP-STB would likely offer even better characteristics.

Further, we have studied how to share recorded content among IP-STBs. Such a design would probably have many similarities to BitTorrent, since a central node needs to keep track of content while the IP-STBs take care of the rest.

The report also discusses whether BitTorrent is suitable for streaming. We believe that the changes required to obtain such functionality would undermine the strengths of BitTorrent. Some alternative solutions are presented in which BitTorrent has been extended with additional modules, such as a server.
94

An information theoretic approach to structured high-dimensional problems

Das, Abhik Kumar 06 February 2014 (has links)
A majority of the data transmitted and processed today has an inherently structured, high-dimensional nature, either because of encoding with high-dimensional codebooks that provide a systematic structure, or because the data depends on a large number of agents or variables. As a result, many problem setups associated with the transmission and processing of data have a structured high-dimensional aspect to them. This dissertation looks at two such problems, namely, communication over networks using network coding, and learning the structure of graphical representations such as Markov networks from observed data, from an information-theoretic perspective. Such an approach yields intuition about good coding architectures as well as the limitations imposed by the high-dimensional framework. The dissertation studies the problem of network coding for networks having multiple transmission sessions, i.e., multiple users communicating with each other at the same time. The connection between such networks and the information-theoretic interference channel is examined, and the concept of interference alignment, derived from the interference channel literature, is coupled with linear network coding to develop novel coding schemes offering good guarantees on achievable throughput. In particular, two setups are analyzed: the first where each user requires data from only one user (multiple unicasts), and the second where each user requires data from potentially multiple users (multiple multicasts). It is demonstrated that one can achieve a rate equalling a significant fraction of the maximal rate for each transmission session, provided certain constraints on the network topology are satisfied. The dissertation also analyzes the problem of learning the structure of Markov networks from observed samples; the learning problem is interpreted as a channel coding problem and its achievability and converse aspects are examined. A rate-distortion theoretic approach is taken for the converse aspect, and information-theoretic lower bounds on the number of samples required for any algorithm to learn the Markov graph up to a pre-specified edit distance are derived for ensembles of discrete and Gaussian Markov networks based on degree-bounded graphs. The problem of accurately learning the structure of discrete Markov networks based on power-law graphs generated from the configuration model is also studied. The effect of the power-law exponent on the hardness of the learning problem is deduced from the converse aspect: it is shown that, for any learning algorithm, discrete Markov networks on power-law graphs with smaller exponent values require more samples to ensure accurate recovery of their underlying graphs. For the achievability aspect, an efficient learning algorithm is designed for accurately reconstructing the structure of Ising models based on power-law graphs from the configuration model; it is demonstrated that an optimal number of samples suffices for recovering the exact graph under certain constraints on the Ising model potential values. / text
95

Coding-Based System Primitives for Airborne Cloud Computing

Lin, Chit-Kwan January 2011 (has links)
The recent proliferation of sensors in inhospitable environments such as disaster or battle zones has not been matched by in situ data processing capabilities due to a lack of computing infrastructure in the field. We envision a solution based on small, low-altitude unmanned aerial vehicles (UAVs) that can deploy elastically-scalable computing infrastructure anywhere, at any time. This airborne compute cloud—essentially, micro-data centers hosted on UAVs—would communicate with terrestrial assets over a bandwidth-constrained wireless network with variable, unpredictable link qualities. Achieving high performance over this ground-to-air mobile radio channel thus requires making full and efficient use of every single transmission opportunity. To this end, this dissertation presents two system primitives that improve throughput and reduce network overhead by using recent distributed coding methods to exploit natural properties of the airborne environment (i.e., antenna beam diversity and anomaly sparsity). We first built and deployed a UAV wireless networking testbed and used it to characterize the ground-to-UAV wireless channel. Our flight experiments revealed that antenna beam diversity from using multiple SISO radios boosts reception range and aggregate throughput. This observation led us to develop our first primitive: ground-to-UAV bulk data transport. We designed and implemented FlowCode, a reliable link layer for uplink data transport that uses network coding to harness antenna beam diversity gains. Via flight experiments, we show that FlowCode can boost reception range and TCP throughput as much as 4.5-fold. Our second primitive permits low-overhead cloud status monitoring. We designed CloudSense, a network switch that compresses cloud status streams in-network via compressive sensing. CloudSense is particularly useful for anomaly detection tasks requiring global relative comparisons (e.g., MapReduce straggler detection) and can achieve up to 16.3-fold compression as well as early detection of the worst anomalies. Our efforts have also shed light on the close relationship between network coding and compressive sensing. Thus, we offer FlowCode and CloudSense not only as first steps toward the airborne compute cloud, but also as exemplars of two classes of applications—approximation intolerant and tolerant—to which network coding and compressive sensing should be judiciously and selectively applied. / Engineering and Applied Sciences
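The in-network compression idea behind CloudSense rests on compressive sensing: a sparse anomaly vector can be recovered from far fewer random linear measurements than its dimension, for example via LASSO. The snippet below is a generic compressive-sensing illustration using scikit-learn's Lasso, not the CloudSense switch itself; the dimensions, sparsity level and regularization weight are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5          # status entries, measurements, anomalous (nonzero) entries

# Sparse "cloud status" vector: only a few nodes report anomalously large values.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(5.0, 10.0, size=k)

# Random Gaussian measurement matrix: each measurement is a random linear
# combination of all status entries (the kind of aggregate a coding switch could
# compute in-network instead of forwarding every entry).
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# Recover the sparse vector from m << n measurements with an l1 penalty (LASSO).
lasso = Lasso(alpha=0.01, max_iter=50_000)
lasso.fit(A, y)
x_hat = lasso.coef_

top = np.argsort(-np.abs(x_hat))[:k]
print("true anomalous indices:      ", sorted(np.flatnonzero(x)))
print("largest recovered coefficients at:", sorted(top))
```

The worst anomalies carry the largest coefficients, which is why relative-comparison tasks such as straggler detection tolerate this approximate, heavily compressed view of the status stream.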
96

Efficient, provably secure code constructions

Agrawal, Shweta Prem 31 May 2011 (has links)
The importance of constructing reliable and efficient methods for securing digital information in the modern world cannot be overstated. The urgency of this need is reflected in the mainstream media: newspapers and websites are full of news about critical user information, be it credit card numbers, medical data, or social security information, being compromised and used illegitimately. According to news reports, hackers probe government computer networks millions of times a day, about 9 million Americans have their identities stolen each year, and cybercrime costs large American businesses 3.8 million dollars a year. More than a trillion dollars' worth of intellectual property has already been stolen from American businesses. It is this ever-growing problem of securing valuable information that this thesis attempts to address (in part). In this thesis, we study methods to secure information that are fast, convenient and reliable. Our overall contribution has four distinct threads. First, we construct efficient, "expressive" Public Key Encryption systems (specifically, Identity Based Encryption systems) based on the hardness of lattice problems. In Identity Based Encryption (IBE), any arbitrary string, such as the user's email address or name, can serve as her public key. IBE systems are powerful and address several problems faced by the deployment of Public Key Encryption. Our constructions are secure in the standard model. Next, we study secure communication over the two-user interference channel with an eavesdropper. We show that using lattice codes helps enhance the secrecy rate of this channel in the presence of an eavesdropper. Thirdly, we analyze the security requirements of network coding. Network coding is an elegant method of data transmission which not only helps achieve capacity in several networks, but also has a host of other benefits. However, network coding is vulnerable to "pollution attacks" when there are malicious users in the system. We design mechanisms to prevent pollution attacks. In this setting, we provide two constructions, a homomorphic Message Authentication Code (HMAC) and a Digital Signature, to secure information that is transmitted over such networks. Finally, we study the benefits of using Compressive Sensing for secure communication over the Wyner wiretap channel. Compressive Sensing has seen an explosion of interest in the last few years thanks to its elegant mathematics and plethora of applications. So far, however, Compressive Sensing had not found application in the domain of secrecy. Given its inherent asymmetry, we ask (and answer in the affirmative) the question of whether it can be deployed to enable secure communication. Our results allow linear encoding and efficient decoding (via LASSO) at the legitimate receiver, along with infeasibility of message recovery (via an information-theoretic analysis) at the eavesdropper, regardless of decoding strategy. / text
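The pollution-resistance idea behind a homomorphic MAC is that the tag is a linear function of the packet, so any valid linear combination of packets carries a predictable combination of tags, while a tampered packet fails verification with high probability. The toy sketch below illustrates only this linearity over a prime field; it is not the thesis's construction, and it omits the key management, per-generation randomization and unforgeability machinery a real scheme needs.

```python
import numpy as np

P = 65_537                        # prime modulus, chosen only for this illustration
rng = np.random.default_rng(2)

def tag(packet: np.ndarray, key: np.ndarray) -> int:
    """Toy homomorphic MAC: inner product of the packet with a secret key, mod P."""
    return int(np.dot(packet % P, key) % P)

packet_len = 8
key = rng.integers(0, P, size=packet_len, dtype=np.int64)   # known only to source/verifier

# Source packets and their tags.
p1 = rng.integers(0, P, size=packet_len, dtype=np.int64)
p2 = rng.integers(0, P, size=packet_len, dtype=np.int64)
t1, t2 = tag(p1, key), tag(p2, key)

# An intermediate node network-codes the packets with coefficients (a, b)
# and combines the tags the same way -- no secret key is needed at that node.
a, b = 17, 91
mixed_packet = (a * p1 + b * p2) % P
mixed_tag = (a * t1 + b * t2) % P

# A verifier holding the key checks the combined packet against the combined tag.
print("valid combination verifies:", tag(mixed_packet, key) == mixed_tag)

# A polluted packet (one symbol tampered with) fails the check with high probability.
polluted = mixed_packet.copy()
polluted[0] = (polluted[0] + 1) % P
print("polluted packet verifies:  ", tag(polluted, key) == mixed_tag)
```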
97

Combatting loss in wireless networks

Rozner, Eric John 27 January 2012 (has links)
The wireless medium is lossy for many reasons, such as signal attenuation, multi-path propagation, and collisions. Wireless losses degrade network throughput, reliability, and latency. The goal of this dissertation is to combat wireless losses by developing effective techniques and protocols across different network layers. First, a novel opportunistic routing protocol is developed to overcome wireless losses at the network layer. Opportunistic routing protocols exploit receiver diversity to route traffic in the face of loss. A distinctive feature of the protocol is that the performance derived from its optimization can be achieved in real IEEE 802.11 networks. At its heart lies a simple yet realistic model of the network that captures wireless interference, losses, traffic, and MAC-induced dependencies. A model-driven optimization algorithm is then designed to accurately optimize the end-to-end performance, and techniques are developed to map the resulting optimization solutions to practical routing configurations. Its effectiveness is demonstrated using simulation and testbed experiments. Second, an efficient retransmission scheme (ER) is developed at the link layer for wireless networks. Instead of retransmitting lost packets in their original forms, ER codes together packets lost at different destinations and uses a single retransmission to potentially recover multiple packet losses. A simple and practical protocol is developed to realize the idea, and it is evaluated using simulation and testbed experiments to demonstrate its effectiveness. Third, detailed measurement traces are collected to understand wireless losses in dynamic and mobile environments. Existing wireless drivers are modified to enable the logging and analysis of network activity under varying end-host configurations. The results indicate that mobile clients can suffer from consecutive packet losses, or burst errors. These burst errors are then analyzed in more detail to gain further insight into the problem. With these insights, recommendations for future research directions to mitigate loss in mobile environments are presented. / text
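The coded-retransmission idea behind ER can be seen with two receivers and two lost packets: if receiver A lost P1 but overheard P2, and receiver B lost P2 but overheard P1, the sender retransmits P1 XOR P2 once and each receiver XORs out the packet it already holds. The snippet below is a minimal illustration of this XOR trick only, not the ER protocol's batching or scheduling logic; the packet contents are assumptions.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two packets originally broadcast by the sender (padded to equal length).
p1 = b"packet-one-payload".ljust(20, b"\x00")
p2 = b"packet-two-payload".ljust(20, b"\x00")

# Loss pattern: receiver A missed p1 but holds p2; receiver B missed p2 but holds p1.
held_by_a, held_by_b = p2, p1

# Instead of retransmitting p1 and p2 separately, the sender sends one coded packet.
retransmission = xor_bytes(p1, p2)

# Each receiver cancels the packet it already has to recover the one it lost.
recovered_at_a = xor_bytes(retransmission, held_by_a)
recovered_at_b = xor_bytes(retransmission, held_by_b)

print("A recovers p1:", recovered_at_a == p1)   # True
print("B recovers p2:", recovered_at_b == p2)   # True
```

One retransmission thus repairs two independent losses, which is where the link-layer efficiency gain comes from.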
98

Coding Schemes for Physical Layer Network Coding Over a Two-Way Relay Channel

Hern, Brett Michael 16 December 2013 (has links)
We consider a two-way relay channel in which two transmitters want to exchange information through a central relay. The relay observes a superposition of the transmitted signals, from which a function of the transmitted messages is computed for broadcast. We consider the design of codebooks which permit the recovery of a function at the relay and derive information-theoretic bounds on the rates for reliable decoding at the relay. In the spirit of compute-and-forward, we present a multilevel coding scheme that permits reliable computation (or decoding) of a class of functions at the relay. The function to be decoded is chosen at the relay depending on the channel realization. We define such a class of reliably computable functions for the proposed coding scheme and derive rates that are universally achievable over a set of channel gains when this class of functions is used at the relay. We develop our framework with general modulation formats in mind, but numerical results are presented for the case where each node transmits using 4-ary and 8-ary modulation schemes. Numerical results demonstrate that the flexibility afforded by our proposed scheme permits substantially higher rates than those achievable by always using a fixed function or considering only linear functions over higher-order fields. Our numerical results indicate that it is favorable to allow the relay to attempt both compute-and-forward and decode-and-forward decoding. Indeed, either method considered separately is suboptimal for computation over general channels. However, we obtain a converse result when the transmitters are restricted to using identical binary linear codebooks generated uniformly at random. We show that it is impossible for this code ensemble to achieve any rate higher than the maximum of the rates achieved using compute-and-forward and decode-and-forward decoding. Finally, we turn our attention to the design of low-density parity-check (LDPC) ensembles which can practically achieve these information rates with joint compute-and-forward message passing decoding. To this end, we construct a class of two-way erasure multiple access channels for which we can exactly characterize the performance of joint compute-and-forward message passing decoding. We derive the processing rules and a density-evolution-like analysis for several classes of LDPC ensembles. Utilizing the universally optimal performance of spatially coupled LDPC ensembles with message passing decoding, we show that a single encoder and decoder with puncturing can achieve the optimal rate region for a range of channel parameters.
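The basic physical-layer network coding step — the relay decoding a function (here, the XOR) of the two messages directly from the superimposed signal — can be illustrated with BPSK over a Gaussian multiple-access phase. The snippet below is the classic denoise-and-forward mapping for this simplest case, not the thesis's multilevel compute-and-forward scheme; equal unit channel gains, the noise level and an error-free broadcast phase are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sym, noise_std = 50_000, 0.3

# Uplink (multiple-access) phase: both end nodes transmit BPSK simultaneously.
b1 = rng.integers(0, 2, n_sym)
b2 = rng.integers(0, 2, n_sym)
x1, x2 = 2 * b1 - 1, 2 * b2 - 1
y_relay = x1 + x2 + noise_std * rng.normal(size=n_sym)   # unit channel gains assumed

# The relay decodes only the XOR of the two bits: the superposition is near 0
# exactly when the bits differ, and near +/-2 when they agree.
xor_hat = (np.abs(y_relay) < 1.0).astype(int)

# Downlink (broadcast) phase omitted; assuming it is error-free, each end node
# recovers the other node's bit by XOR-ing the relayed function with its own bit.
b2_at_node1 = xor_hat ^ b1
b1_at_node2 = xor_hat ^ b2

print("XOR error rate at relay:  ", np.mean(xor_hat != (b1 ^ b2)))
print("bit error rate at node 1: ", np.mean(b2_at_node1 != b2))
print("bit error rate at node 2: ", np.mean(b1_at_node2 != b1))
```

The exchange completes in two channel uses instead of the four a routing-only relay would need, which is the throughput motivation for computing a function at the relay rather than the individual messages.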
99

Development of an energy and geographic aware opportunistic network coding scheme / Mario Johann Engelbrecht

Engelbrecht, Mario Johann January 2012 (has links)
The evolution of communication networks has led us to an era in which you can not only perform surgery halfway across the world, but do so from the comfort of your own home. By eliminating the need for wires, wireless networks revolutionised communication networks by enabling nodes to communicate while mobile, a concept that opened many doors to new applications and possibilities. Network Coding is a technique that optimises the throughput of a network by coding packets. Geo-Routing is a routing method that uses the geographical distances between nodes as the routing metric. Opportunistic Routing is a routing method that exploits the broadcast characteristics of wireless networks. In this thesis, we developed a routing scheme that incorporates Network Coding, Geo-Routing and energy-aware conditions, accomplishing this by using one of the key phases constituting Opportunistic Routing. The developed routing scheme was implemented in OMNeT++, and various simulation experiments pertaining to the implemented scheme were conducted. The results indicate a significant increase in performance metrics such as throughput and survivability. / Thesis (MIng (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2013
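A geographic, energy-aware forwarder choice can be sketched as scoring each neighbour by the progress it makes toward the destination, discounted when its residual energy is low — the flavour of metric such a scheme could use when selecting candidates in the opportunistic forwarding phase. The snippet below is a hypothetical illustration of that idea, not the scheme developed in the thesis or its OMNeT++ implementation; the node model, scoring formula and energy weighting are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    x: float
    y: float
    residual_energy: float     # fraction of battery remaining, 0..1

def distance(a: Node, b: Node) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

def pick_forwarder(current: Node, neighbours: list[Node], dest: Node,
                   energy_weight: float = 0.5):
    """Score neighbours by geographic progress toward dest, discounted by low energy.

    Returns the best-scoring neighbour, or None if no neighbour makes progress.
    """
    d_cur = distance(current, dest)
    best, best_score = None, 0.0
    for n in neighbours:
        progress = d_cur - distance(n, dest)          # positive = closer to destination
        if progress <= 0:
            continue                                  # ignore neighbours moving away
        score = progress * (1 - energy_weight + energy_weight * n.residual_energy)
        if score > best_score:
            best, best_score = n, score
    return best

src = Node("S", 0, 0, 1.0)
dst = Node("D", 100, 0, 1.0)
neighbours = [Node("A", 30, 5, 0.9), Node("B", 45, 0, 0.2), Node("C", 20, -10, 0.8)]
chosen = pick_forwarder(src, neighbours, dst)
print("chosen forwarder:", chosen.name if chosen else None)   # B makes more progress, but A wins on energy
```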
