11

Slepian-Wolf coded nested quantization (SWC-NQ) for Wyner-Ziv coding: high-rate performance analysis, code design, and application to cooperative networks

Liu, Zhixin 15 May 2009 (has links)
No description available.
12

Implementation Of A Distributed Video Codec

Isik, Cem Vedat 01 February 2008 (has links) (PDF)
Current interframe video compression standards such as MPEG-4 and H.264 require a high-complexity encoder for predictive coding to exploit the similarities among successive video frames. This requirement is acceptable for cases where the video sequence to be transmitted is encoded once and decoded many times. However, some emerging applications such as video-based sensor networks, power-aware surveillance and mobile video communication systems require computational complexity to be shifted from the encoder to the decoder. Distributed Video Coding (DVC) is a new coding paradigm, based on two information-theoretic results, the Slepian-Wolf and Wyner-Ziv theorems, which allows source statistics to be exploited at the decoder only. This architecture therefore enables very simple encoders to be used in video coding. Wyner-Ziv video coding is a particular case of DVC that deals with lossy source coding where side information is available only at the decoder. In this thesis, we implemented a DVC codec based on the DISCOVER (DIStributed COding for Video sERvices) project and carried out a detailed analysis of each block. Several algorithms have been implemented for each block, and the results are compared in terms of rate-distortion performance. The implemented architecture is intended to serve as a testbed for future studies.
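The coset (binning) idea that lets a DVC encoder stay simple can be pictured with a toy Wyner-Ziv scheme: the encoder transmits only a bin index per quantized sample, and the decoder resolves the remaining ambiguity with its side information. This is a minimal sketch for intuition, not the DISCOVER codec; the step size, bin count and sample values are assumptions.

```python
# Toy Wyner-Ziv binning: send only a coset index per sample, resolve it at the
# decoder with side information (a decoder-side estimate of the same frame).
import numpy as np

NUM_BINS = 4   # encoder sends log2(4) = 2 bits per sample (the coset index)
STEP = 8.0     # uniform quantizer step size

def wz_encode(x):
    """Quantize each sample and keep only its coset (bin) index."""
    q = np.round(x / STEP).astype(int)
    return np.mod(q, NUM_BINS)

def wz_decode(coset, side_info):
    """In each coset, pick the quantizer level closest to the side information."""
    t = side_info / STEP
    q_hat = coset + np.round((t - coset) / NUM_BINS) * NUM_BINS
    return q_hat * STEP

x = np.array([12.0, 40.0, 77.0, 130.0])    # "current frame" samples
y = x + np.array([3.0, -5.0, 4.0, -6.0])   # side information available at the decoder only
print(wz_decode(wz_encode(x), y))          # ≈ quantized x, recovered from 2 bits/sample
```

Decoding is correct as long as the side information stays within half a bin span of the source, which is the usual condition for this kind of binning scheme.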
13

Coding with side information

Cheng, Szeming 01 November 2005 (has links)
Source coding and channel coding are two important problems in communications. Although side information exists in everyday scenarios, its effect is not taken into account in the conventional setups. In this thesis, we focus on the practical design of two interesting coding problems with side information: Wyner-Ziv coding (WZC; source coding with side information at the decoder) and Gel'fand-Pinsker coding (GPC; channel coding with side information at the encoder). For WZC, we split the design problem into two cases: when the distortion of the reconstructed source is zero and when it is not. We review that the first case, commonly called Slepian-Wolf coding (SWC), can be implemented using conventional channel coding. Then, we detail the SWC design using low-density parity-check (LDPC) codes. To facilitate SWC design, we justify a necessary requirement that the SWC performance should be independent of the input source. We show that a sufficient condition for this requirement is that the hypothetical channel between the source and the side information satisfies a symmetry condition dubbed dual symmetry. Furthermore, under this dual symmetry condition, the SWC design problem can simply be treated as LDPC code design over the hypothetical channel. When the distortion of the reconstructed source is non-zero, we propose a practical WZC paradigm called Slepian-Wolf coded quantization (SWCQ) by combining SWC and nested lattice quantization. We point out an interesting analogy between SWCQ and entropy-coded quantization in classic source coding. Furthermore, a practical SWCQ scheme using 1-D nested lattice quantization and LDPC codes is implemented. For GPC, since the actual design procedure relies on the more precise setting of the problem, we choose to investigate GPC design in the form of a digital watermarking problem, as digital watermarking is the precise dual of WZC. We then introduce an enhanced version of the well-known spread-spectrum watermarking technique. Two applications related to digital watermarking are presented.
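The way SWC rides on conventional channel coding can be seen in a minimal syndrome-based sketch. Here a tiny (7,4) Hamming parity-check matrix stands in for the LDPC codes used in the thesis, so the decoder can correct at most one position where the side information disagrees with the source; the vectors below are made up for illustration.

```python
# Syndrome-based Slepian-Wolf coding: compress x to its syndrome, recover it at
# the decoder from the syndrome plus correlated side information y.
import numpy as np

# Columns of H are the binary expansions of 1..7 (Hamming (7,4) parity-check matrix).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def sw_encode(x):
    """Compress x (7 bits) to its 3-bit syndrome."""
    return H.dot(x) % 2

def sw_decode(s, y):
    """Recover x from the syndrome s and side information y (at most 1 bit differs)."""
    diff = (H.dot(y) + s) % 2                              # syndrome of the pattern x XOR y
    x_hat = y.copy()
    if diff.any():
        pos = int(diff[0] + 2 * diff[1] + 4 * diff[2]) - 1  # column index of H that matches
        x_hat[pos] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1                   # side information: x with one bit flipped
print(sw_decode(sw_encode(x), y))         # recovers x from only 3 transmitted bits
```

With LDPC codes the same idea scales to long blocks, and belief-propagation decoding over the hypothetical source-to-side-information channel replaces the table lookup.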
14

Distributed source coding: tools and applications to video compression

Toto-Zarasoa, Velotiaray 29 November 2010 (has links) (PDF)
Distributed source coding is a technique for compressing several correlated sources without any cooperation between the encoders, and without rate loss provided they are decoded jointly. Building on this principle, distributed video coding exploits the correlation between successive frames of a video, keeping the encoder as simple as possible and leaving the decoder to exploit the correlation. Among the contributions of this thesis, the first part addresses the asymmetric coding of binary sources with non-uniform distributions, and then the coding of sources with hidden Markov states. We first show that, for both types of sources, exploiting the distribution at the decoder increases the achievable compression rate. For the binary symmetric channel modeling the correlation between the sources, we propose a tool, based on the EM algorithm, to estimate its parameter. We show that this tool provides a fast estimate of the parameter while achieving an accuracy close to the Cramér-Rao bound. In a second part, we develop tools for successfully decoding the sources studied above. To this end, we use syndrome-based Turbo and LDPC codes, together with the EM algorithm. This part was also the occasion to develop new tools for reaching the bounds of asymmetric and non-asymmetric coding. We also show that, for non-uniform sources, the roles of the correlated sources are not symmetric. Finally, we show that the proposed source models accurately capture the distributions of video bit planes, and we present results demonstrating the efficiency of the developed tools. These tools noticeably improve the rate-distortion performance of a distributed video codec, albeit under certain additivity conditions on the correlation channel.
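As a rough illustration of the EM-based estimation, the sketch below estimates the crossover probability of a binary symmetric correlation channel Y = X xor Z when the non-uniform source X is hidden and only Y is observed. It is a deliberate simplification under assumed parameters; the thesis couples the estimator with the syndrome-based decoder rather than treating X as entirely unobserved.

```python
# EM estimation of the BSC crossover probability p in Y = X xor Z, Z ~ Bernoulli(p),
# with a hidden non-uniform source X (P(X=1) = q known) and only Y observed.
import numpy as np

rng = np.random.default_rng(0)
q, p_true, n = 0.2, 0.12, 200_000
x = (rng.random(n) < q).astype(int)
y = x ^ (rng.random(n) < p_true).astype(int)

p = 0.3                                                  # initial guess
for _ in range(50):
    # E-step: posterior probability that the correlation channel flipped each bit.
    pr_x1 = np.where(y == 1, q * (1 - p), q * p)         # P(X=1, Y=y)
    pr_x0 = np.where(y == 1, (1 - q) * p, (1 - q) * (1 - p))
    post_x1 = pr_x1 / (pr_x1 + pr_x0)
    w = np.where(y == 1, 1 - post_x1, post_x1)           # P(Z=1 | Y=y)
    # M-step: p is the expected fraction of flipped bits.
    p = w.mean()

print(p_true, round(p, 4))   # the EM estimate should land close to p_true
```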
15

Football on mobile phones : algorithms, architectures and quality of experience in streaming video

Sun, Jiong January 2006 (has links)
In this thesis we study algorithms and architectures that can provide a better Quality of Experience (QoE) for streaming video systems and services. With cases and examples taken from the application scenarios of football on mobile phones, we address the fundamental problems behind streaming video services. Thus, our research results can be applied and extended to other networks, to other sports and to other cultural activities. In algorithm development, we propose five different schemes. We suggest blind motion estimation and trellis-based motion estimation with dynamic programming algorithms for Wyner-Ziv coding. We develop a trans-media technology, vibrotactile coding of visual signals for mobile phones. We propose a new bandwidth prediction scheme for real-time video conferencing. We also provide an effective method based on dynamic programming to select optimal services and maximize QoE. In architecture design, we offer three architectures for real-time interactive video and two for streaming live football information. The former three are: a structure for motion estimation in Wyner-Ziv coding for real-time video; a variable-bit-rate Wyner-Ziv video coding structure based on a multi-view camera array; and a dynamic resource allocation structure based on 3-D object motion. The latter two are: a vibrotactile signal rendering system for live information; and a Universal Multimedia Access architecture for streaming live football video. In QoE exploration, we give a detailed discussion of QoE and its enabling techniques. We also develop a conceptual model for QoE. Moreover, we place streaming video services in a framework of QoE. The new general framework of streaming video services allows for interaction between the user, content and technology. We demonstrate that it is possible to develop algorithms and architectures that take the user's perspective into account. Quality of Experience in mobile video services is within our reach.
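One way to picture the dynamic-programming service selection mentioned above is a small knapsack-style sketch: choose, under a bandwidth budget, the subset of candidate services that maximizes a total QoE score. This is an illustrative assumption about the formulation, not the thesis's actual model, and the service names, bitrates and QoE scores below are invented.

```python
# 0/1 knapsack by dynamic programming: pick services under a bandwidth budget
# so that the summed QoE score is as large as possible.
def select_services(services, budget_kbps):
    """services: list of (name, bitrate_kbps, qoe_score); returns (best QoE, chosen names)."""
    best = {0: (0.0, [])}                           # used bandwidth -> (QoE, chosen names)
    for name, rate, qoe in services:
        for used, (total, chosen) in sorted(best.items(), reverse=True):
            new_used = used + rate
            if new_used <= budget_kbps:
                cand = (total + qoe, chosen + [name])
                if cand[0] > best.get(new_used, (-1.0, []))[0]:
                    best[new_used] = cand
    return max(best.values())                       # highest-QoE feasible selection

services = [("video_360p", 350, 3.0), ("video_720p", 900, 4.2),
            ("live_stats", 50, 1.0), ("vibrotactile_cues", 20, 0.8)]
print(select_services(services, budget_kbps=1000))
```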
17

Secret Key Generation in the Multiterminal Source Model : Communication and Other Aspects

Mukherjee, Manuj January 2017 (has links) (PDF)
This dissertation is primarily concerned with the communication required to achieve secret key (SK) capacity in a multiterminal source model. The multiterminal source model introduced by Csiszár and Narayan consists of a group of remotely located terminals with access to correlated sources and a noiseless public channel. The terminals wish to secure their communication by agreeing upon a group secret key. The key agreement protocol involves communicating over the public channel and agreeing upon an SK secured from eavesdroppers listening to the public communication. The SK capacity, i.e., the maximum rate of an SK that can be agreed upon by the terminals, has been characterized by Csiszár and Narayan. Their capacity-achieving key generation protocol involves the terminals communicating to attain omniscience, i.e., every terminal recovers the sources of all the other terminals. While this is a very general protocol, it often requires larger rates of public communication than are necessary to achieve SK capacity. The primary focus of this dissertation is to characterize the communication complexity, i.e., the minimum rate of public discussion needed to achieve SK capacity. A lower bound on communication complexity is derived for a general multiterminal source, although it turns out to be loose in general. While the minimum rate of communication for omniscience is always an upper bound on the communication complexity, we derive tighter upper bounds on communication complexity for a special class of multiterminal sources, namely, hypergraphical sources. These upper bounds yield a complete characterization of the hypergraphical sources for which communication for omniscience is a rate-optimal protocol for SK generation, i.e., for which the communication complexity equals the minimum rate of communication for omniscience. Another aspect of the public communication touched upon by this dissertation is the necessity of omnivocality, i.e., all terminals communicating, to achieve SK capacity. It is well known that for two-terminal sources, communication from only one terminal suffices to generate a maximum-rate secret key. However, we show that for three or more terminals, omnivocality is indeed required to achieve SK capacity if a certain condition is met. For the specific case of three terminals, we show that this condition is also necessary for omnivocality to be essential in generating an SK of maximal rate. However, this condition is no longer necessary when there are four or more terminals. A certain notion of common information, namely the Wyner common information, plays a central role in the communication complexity problem. This dissertation therefore includes a study of multiparty versions of the two widely used notions of common information, namely Wyner common information and Gács-Körner (GK) common information. While evaluating these quantities is difficult in general, we derive explicit expressions for both types of common information in the case of hypergraphical sources. We also study fault-tolerant SK capacity in this dissertation. The maximum rate of SK that can be generated even if an arbitrary subset of terminals drops out is called the fault-tolerant SK capacity. Now, suppose we have a fixed number of pairwise SKs. How should one distribute them among pairs of terminals to ensure good fault-tolerance behavior in generating a group SK? We show that distributing the pairwise keys according to a Harary graph provides a certain degree of fault tolerance, and we obtain bounds on the resulting fault-tolerant SK capacity.
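The Harary-graph key placement lends itself to a small sketch. For even k, the Harary graph H(k, n) is the circulant graph linking each terminal to its k/2 nearest neighbours on either side, and it is k-connected, so assigning one pairwise key per edge keeps the surviving terminals key-connected after any k-1 of them drop out. The sketch below is an assumption-level illustration of that connectivity property, not the thesis's capacity bounds.

```python
# Build H(k, n) for even k and check that removing any k-1 terminals never
# disconnects the remaining ones (Harary graphs have node connectivity k).
from itertools import combinations

def harary_edges(k, n):
    """Edges of H(k, n) for even k < n (each edge would carry one pairwise key)."""
    assert k % 2 == 0 and k < n
    return {frozenset((i, (i + d) % n)) for i in range(n) for d in range(1, k // 2 + 1)}

def connected_after_removal(n, edges, removed):
    """BFS connectivity check on the surviving terminals."""
    alive = set(range(n)) - set(removed)
    if not alive:
        return True
    start = next(iter(alive))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for e in edges:
            if u in e:
                (v,) = e - {u}
                if v in alive and v not in seen:
                    seen.add(v); stack.append(v)
    return seen == alive

k, n = 4, 8
edges = harary_edges(k, n)
print(all(connected_after_removal(n, edges, r) for r in combinations(range(n), k - 1)))
```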
18

Modern Error Control Codes and Applications to Distributed Source Coding

Sartipi, Mina 15 August 2006 (has links)
This dissertation first studies two-dimensional wavelet codes (TDWCs). TDWCs are introduced as a solution to the problem of designing a 2-D code that has low decoding complexity and the maximum erasure-correcting property for rectangular burst erasures. The half-rate TDWCs of dimensions N1 × N2 satisfy the Reiger bound with equality for burst erasures of dimensions N1 × N2/2 and N1/2 × N2, where GCD(N1, N2) = 2. Examples of TDWCs are provided that recover any rectangular burst erasure of area N1N2/2. These lattice-cyclic codes can recover burst erasures with simple and efficient ML decoding. This work then studies the problem of distributed source coding for two and three correlated signals using channel codes. We propose to model the distributed source coding problem with a set of parallel channels, which reduces distributed source coding to designing non-uniform channel codes. This design criterion improves the performance of the source coding considerably. LDPC codes are used for lossless and lossy distributed source coding, when the correlation parameter is known or unknown at the time of code design. We show that distributed source coding at the corner point using LDPC codes reduces to designing non-uniform LDPC codes and semi-random punctured LDPC codes for systems of two and three correlated sources, respectively. We also investigate distributed source coding at an arbitrary rate on the Slepian-Wolf rate region. This problem reduces to designing a rate-compatible LDPC code that has the unequal error protection property. This dissertation finally studies the distributed source coding problem for applications whose wireless channel is an erasure channel with unknown erasure probability. For these applications, rateless codes are better candidates than LDPC codes. Non-uniform rateless codes and an improved decoding algorithm are proposed for this purpose. We introduce a reliable, rate-optimal, and energy-efficient multicast algorithm that uses distributed source coding and rateless coding. The proposed multicast algorithm performs very close to network coding, while having lower complexity and higher adaptability.
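The appeal of rateless codes when the erasure probability is unknown can be seen in a generic LT-style toy: the encoder keeps emitting random XOR combinations of source packets, and the receiver simply collects symbols until a peeling decoder recovers everything. This is a plain illustration under an assumed degree distribution, not the non-uniform rateless design proposed in the thesis.

```python
# A minimal LT-style rateless encoder and peeling decoder over an erasure channel.
import random

K = 16                                        # number of source packets (toy: one byte each)
random.seed(1)
source = [random.getrandbits(8) for _ in range(K)]

def encode_symbol():
    """One LT output symbol: the XOR of a randomly chosen set of source packets."""
    degree = random.choice([1, 2, 2, 3, 3, 3, 4])   # assumed toy degree distribution
    idx = set(random.sample(range(K), degree))
    val = 0
    for i in idx:
        val ^= source[i]
    return idx, val

def peel(received):
    """Peeling decoder: repeatedly resolve degree-one symbols and substitute back."""
    decoded = {}
    progress = True
    while progress:
        progress = False
        for idx, val in received:
            remaining = idx - set(decoded)
            if len(remaining) == 1:
                (i,) = remaining
                acc = val
                for j in idx & set(decoded):
                    acc ^= decoded[j]
                decoded[i] = acc
                progress = True
    return decoded

# The receiver keeps collecting symbols until everything is recovered; no erasure
# probability has to be known in advance, which is the point of rateless coding.
received, decoded = [], {}
while len(decoded) < K:
    received.append(encode_symbol())
    decoded = peel(received)
print(decoded == dict(enumerate(source)), "after", len(received), "symbols")
```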
19

Optimized information processing in resource-constrained vision systems. From low-complexity coding to smart sensor networks

Morbee, Marleen 14 October 2011 (has links)
Vision systems have become ubiquitous. They are used for traffic monitoring, elderly care, video conferencing, virtual reality, surveillance, smart rooms, home automation, sports game analysis, industrial safety, medical care, etc. In most vision systems, the data coming from the visual sensor(s) is processed before transmission in order to save communication bandwidth or achieve higher frame rates. The type of data processing needs to be chosen carefully depending on the targeted application, taking into account the available memory, computational power, energy resources and bandwidth constraints. In this dissertation, we investigate how a vision system should be built under practical constraints. First, this system should be intelligent, such that the right data is extracted from the video source. Second, when processing video data, this intelligent vision system should know its own practical limitations and should try to achieve the best possible output result that lies within its capabilities. We study and improve a wide range of vision systems for a variety of applications, which come with different types of constraints. First, we present a modulo-PCM-based coding algorithm for applications that demand very low-complexity coding and need to preserve some of the advantageous properties of PCM coding (direct processing, random access, rate scalability). Our modulo-PCM coding scheme combines three well-known, simple source coding strategies: PCM, binning, and interpolative coding. The encoder first analyzes the signal statistics in a very simple way. Then, based on these signal statistics, the encoder simply discards a number of bits of each image sample. The modulo-PCM decoder recovers the removed bits of each sample by using its received bits and side information generated by interpolating previously decoded signals. Our algorithm is especially appropriate for image coding. / Morbee, M. (2011). Optimized information processing in resource-constrained vision systems. From low-complexity coding to smart sensor networks [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/12126
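The modulo-PCM idea just described admits a small sketch: the encoder keeps only the k least-significant bits of each sample, and the decoder snaps to the value congruent to those bits that lies closest to an interpolated side-information estimate. The sample values and the choice of k below are assumptions for illustration.

```python
# Toy modulo-PCM: transmit k LSBs per 8-bit sample, restore the discarded bits
# with side information (e.g. an interpolation of previously decoded samples).
def mpcm_encode(samples, k):
    return [s % (1 << k) for s in samples]           # transmit k bits per sample

def mpcm_decode(received, side_info, k):
    m = 1 << k
    out = []
    for r, y in zip(received, side_info):
        # value congruent to r (mod m) that is nearest to the side-information estimate y
        cand = r + ((y - r + m // 2) // m) * m
        out.append(min(max(cand, 0), 255))
    return out

frame = [118, 121, 125, 130, 136, 141]               # original 8-bit samples
coded = mpcm_encode(frame, k=4)                      # only 4 of 8 bits are sent
side = [116, 120, 124, 131, 134, 143]                # decoder-side interpolated estimate
print(mpcm_decode(coded, side, k=4))                 # recovers the original samples
```

Recovery is exact whenever the side-information error stays below half the modulus (here 8 levels), which mirrors the binning argument in the abstract.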
