1

Layered Wyner-Ziv video coding for noisy channels

Xu, Qian 01 November 2005 (has links)
The growing popularity of video sensor networks and video cellular phones has generated the need for low-complexity and power-efficient multimedia systems that can handle multiple video input and output streams. While standard video coding techniques fail to satisfy these requirements, distributed source coding is a promising technique for "uplink" applications. Wyner-Ziv coding refers to lossy source coding with side information at the decoder. Based on recent theoretical results on successive Wyner-Ziv coding, we propose in this thesis a practical layered Wyner-Ziv video codec using the DCT, nested scalar quantization (NSQ), and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information) for noiseless channels. The DCT is applied as an approximation to the conditional KLT, which makes the components of the transformed block conditionally independent given the side information. NSQ is a binning scheme that facilitates layered bit-plane coding of the bin indices while reducing the bit rate. LDPC code based Slepian-Wolf coding exploits the correlation between the quantized version of the source and the side information to achieve further compression. Different from previous works, an attractive feature of our proposed system is that video encoding is done only once, but decoding is allowed at many lower bit rates without quality loss. For Wyner-Ziv coding over discrete noisy channels, we present a Wyner-Ziv video codec using IRA codes for Slepian-Wolf coding based on the idea of two equivalent channels. For video streaming applications where the channel is packet based, we apply an unequal error protection scheme to the embedded Wyner-Ziv coded video stream to find the optimal source-channel coding trade-off for a target transmission rate over a packet erasure channel.
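The nested scalar quantization step described above can be illustrated with a short sketch. The following Python fragment is a simplified illustration, not the codec from the thesis; the step size, number of bins, and helper names are assumptions. It quantizes coefficients, transmits only the bin (coset) indices, and lets the decoder resolve each bin with the help of correlated side information; the bin indices could then be split into bit planes for layered Slepian-Wolf coding.

```python
import numpy as np

def nsq_encode(coeffs, step, num_bins):
    """Nested scalar quantization: fine uniform quantization followed by a
    modulo (coset) operation, so only the bin index is sent."""
    q = np.round(np.asarray(coeffs, dtype=float) / step).astype(int)
    return np.mod(q, num_bins)                     # bin indices in {0, ..., num_bins-1}

def nsq_decode(bins, side_info, step, num_bins):
    """Resolve each bin index to the coset member closest to the side information."""
    y = np.asarray(side_info, dtype=float)
    k = np.round((y - bins * step) / (num_bins * step))   # which coset member to pick
    return (bins + num_bins * k) * step

x = np.array([12.3, -4.1, 30.7])                   # e.g., DCT coefficients
y = x + np.random.default_rng(0).normal(0, 1.0, size=x.shape)  # correlated side info
b = nsq_encode(x, step=2.0, num_bins=4)
planes = [(b >> i) & 1 for i in range(2)]          # bit planes of the 2-bit bin indices
print(nsq_decode(b, y, step=2.0, num_bins=4))      # close to x when |x - y| is small
```

Decoding succeeds when the side information lies within roughly half a coset spacing of the source coefficient, which is the basic trade-off that NSQ exploits.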
2

Approximation et représentation des fonctions sur la sphère. Applications à la géodésie et à l'imagerie médicale.

Nicu, Ana-Maria 15 February 2012 (has links) (PDF)
This thesis is built around the approximation and representation of functions on the sphere, with applications to inverse problems arising in geodesy and medical imaging. The thesis is organized as follows. In the first chapter, we give the general framework of an inverse problem together with a description of the geophysics and M/EEG problems. The idea of an inverse problem is to recover a density inside a domain (the unit ball, modeling the Earth or the human brain) from measurements of a certain potential on the surface of the domain. We then give the main definitions and theorems used throughout the thesis. Moreover, solving the inverse problem amounts to solving two subproblems: data transmission and source localization inside the ball. In practice, the measured data are only available on parts of the sphere: spherical caps, the northern hemisphere of the head (M/EEG), or the continents (geodesy). To represent this type of data, we construct the Slepian basis, which has good properties on the regions under study. In Chapter 4 we consider the problem of estimating data on the whole sphere (their expansion in the spherical harmonic basis) from noisy partial measurements. Once this expansion is known, we apply the best rational approximation method on planar sections of the sphere (Chapter 5). This chapter treats three types of density, monopolar, dipolar, and inclusions, for modeling the problems, as well as properties of the density and of the associated potential, two quantities related by a certain operator. In Chapter 6 we revisit Chapters 3, 4 and 5 from a numerical point of view, presenting numerical tests for source localization in geodesy and M/EEG when only partial data are available on the sphere.
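The Slepian basis mentioned in this abstract is, in essence, the family of bandlimited functions whose energy is most concentrated on the region where data are available. A one-dimensional discrete analogue can make the construction concrete; the sketch below is not the spherical construction used in the thesis, and the DCT subspace, region, and sizes are illustrative assumptions. It diagonalizes the concentration operator of a bandlimited subspace restricted to the region of interest.

```python
import numpy as np

def discrete_slepian_basis(n, bandwidth, region, num_vectors):
    """Among length-n signals spanned by the first `bandwidth` DCT atoms, find
    the ones whose energy is most concentrated on the index set `region`
    (leading eigenvectors of the concentration operator F^T D F)."""
    t = np.arange(n)[:, None]
    k = np.arange(bandwidth)[None, :]
    F = np.sqrt(2.0 / n) * np.cos(np.pi * (t + 0.5) * k / n)   # orthonormal DCT atoms
    F[:, 0] /= np.sqrt(2.0)
    D = np.zeros(n)
    D[region] = 1.0                              # indicator of the region of interest
    C = F.T @ (D[:, None] * F)                   # concentration operator (bandwidth x bandwidth)
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1]                  # most concentrated vectors first
    return F @ V[:, order[:num_vectors]], w[order[:num_vectors]]

basis, conc = discrete_slepian_basis(n=128, bandwidth=16,
                                     region=np.arange(0, 40), num_vectors=4)
print(conc)   # eigenvalues near 1: these basis vectors live almost entirely on the region
```

On the sphere, the same idea is applied with spherical harmonics in place of the DCT atoms and a spherical cap or continent in place of the index set.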
3

Layered Wyner-Ziv video coding: a new approach to video compression and delivery

Xu, Qian 15 May 2009 (has links)
Following recent theoretical works on successive Wyner-Ziv coding, we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered Wyner-Ziv coding for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic Wyner-Ziv coding when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that Wyner-Ziv coding gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks. For scalable video transmission over the Internet and 3G wireless networks, we propose a system for receiver-driven layered multicast based on layered Wyner-Ziv video coding and digital fountain coding. Digital fountain codes are near-capacity erasure codes that are ideally suited for multicast applications because of their rateless property. By combining an error-resilient Wyner-Ziv video coder and rateless fountain codes, our system allows reliable multicast of high-quality video to an arbitrary number of heterogeneous receivers without the requirement of feedback channels. Extending this work on separate source-channel coding, we consider distributed joint source-channel coding by using a single channel code for both video compression (via Slepian-Wolf coding) and packet loss protection. We choose Raptor codes - the best approximation to a digital fountain - and address in detail both encoder and decoder designs. Simulation results show that, compared to one separate design using Slepian-Wolf compression plus erasure protection and another based on FGS coding plus erasure protection, the proposed joint design provides better video quality at the same number of transmitted packets.
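The rateless property that makes digital fountain codes attractive for multicast can be seen in a toy LT-style encoder and peeling decoder. This is a simplified sketch under assumed parameters, not the Raptor construction used in the thesis; a real design would use a robust soliton degree distribution and an LDPC precode.

```python
import random

def lt_encode(source_blocks, num_output, seed=0):
    """Rateless encoding: each output block is the XOR of a random subset of
    source blocks, and arbitrarily many output blocks can be generated."""
    rng = random.Random(seed)
    k = len(source_blocks)
    out = []
    for _ in range(num_output):
        d = rng.choice([1, 2, 3, 4])                 # toy degree distribution
        idx = rng.sample(range(k), min(d, k))
        val = 0
        for i in idx:
            val ^= source_blocks[i]
        out.append((set(idx), val))
    return out

def lt_peel_decode(received, k):
    """Peeling decoder: repeatedly resolve degree-1 blocks and subtract them."""
    recovered, pending = {}, [[set(idx), val] for idx, val in received]
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for item in pending:
            for i in [j for j in item[0] if j in recovered]:
                item[0].discard(i)                   # strip already-recovered symbols
                item[1] ^= recovered[i]
            if len(item[0]) == 1:
                i = next(iter(item[0]))
                if i not in recovered:
                    recovered[i] = item[1]
                    progress = True
        pending = [it for it in pending if it[0]]
    return recovered

blocks = [0x12, 0x34, 0x56, 0x78, 0x9A]              # five source blocks (as ints)
received = lt_encode(blocks, num_output=12, seed=3)
print(sorted(lt_peel_decode(received, k=len(blocks)).items()))
# typically recovers all five blocks; with too few output blocks, some stay missing
```

Because any sufficiently large collection of received output blocks suffices, each receiver can simply listen until it has enough, with no feedback channel.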
4

Low-Complexity Compression Techniques for High Frame Rate Video

Yang, Duo January 2017 (has links)
Recently, video has become one of the most important multimedia resources to be shared in our work and daily life. With the development of high frame rate video (HFV), the write speed from the high speed camera array sensor to the massive data storage device has been regarded as the main constraint on HFV applications. In this thesis, some low-complexity compression techniques are proposed for HFV acquisition and transmission. The core technique of our developed codec is the application of the Slepian-Wolf (SW) coding theorem to video compression. The light-duty encoder employs SW encoding, resulting in lower computational cost. The pixel values are transformed into bit sequences, and the bits on the same bit plane are assembled into 8 bit streams. For each bit plane, a statistical binary symmetric channel (BSC) is constructed to describe the dependency between the source image and the side information (SI) image. Furthermore, an improved coding scheme is applied to exploit the spatial correlation between two consecutive bit planes, which reduces the source coding rates. Different from the encoder, the collaborative heavy-duty decoder shoulders the burden of realizing high reconstruction fidelity. Motion estimation (ME) and motion compensation (MC) employ the block-matching algorithm to predict the SI image, and the received syndrome sequence can then be SW decoded with the SI. To realize different compression goals, compression is carried out either at the original resolution or at a downsampled resolution. Compression at the original resolution is complete after SW decoding, whereas for compression at reduced resolution the SW-decoded image must be upsampled by the state-of-the-art learning-based super-resolution technique A+. Since some important image details are lost during the resolution resizing, ME and MC are applied again to refine the upsampled image, improving the reconstruction PSNR. Experimental results show that the proposed low-complexity compression techniques are effective in improving reconstruction fidelity and compression ratio. / Thesis / Master of Applied Science (MASc)
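The per-bit-plane BSC model described here leads directly to the target Slepian-Wolf rates. The minimal sketch below is illustrative only; the function names and the uniform-plane assumption are mine, not the thesis code. It splits 8-bit frames into bit planes, measures the empirical crossover probability against the side-information image, and reports the ideal SW rate H(p) for each plane.

```python
import numpy as np

def bitplanes(img):
    """Split an 8-bit image into 8 binary bit planes (MSB first)."""
    img = np.asarray(img, dtype=np.uint8)
    return [((img >> b) & 1) for b in range(7, -1, -1)]

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def sw_plane_rates(source, side_info):
    """For each bit plane, estimate the BSC crossover probability between the
    source and the side information, and return the ideal Slepian-Wolf rate
    H(p) in bits per pixel (modeling each plane as a uniform input to a BSC)."""
    rates = []
    for x, y in zip(bitplanes(source), bitplanes(side_info)):
        p = float(np.mean(x != y))       # empirical crossover probability
        rates.append(binary_entropy(p))
    return rates
```

Planes whose crossover probability is close to zero (typically the most significant ones) cost almost nothing to transmit, which is where the compression gain comes from.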
5

Résolution de problèmes inverses en géodésie physique / On solving some inverse problems in physical geodesy

Abdelmoula, Amine 20 December 2013 (has links)
This work focuses on the study of two well-known problems in physical geodesy. The first problem concerns the determination of the geoid over a given area of the Earth. If the Earth were a homogeneous sphere, the gravity at a point would be entirely determined by its distance to the center of the Earth, or equivalently by its altitude. As the Earth is neither spherical nor homogeneous, gravity must be computed at every point. Starting from a reference ellipsoid, we seek the correction to a first approximation of the gravitational field in order to obtain a geoid, i.e., a surface on which the gravitational potential is constant. The method used is least squares collocation, which is used to solve large generalized least squares problems. In the second problem, we are interested in a geodetic inverse problem that consists in finding a distribution of point masses (characterized by their intensities and positions) such that the potential they generate best approximates a given potential field. Over the whole Earth, a potential function is usually expressed in terms of spherical harmonics, which are basis functions with global support. The identification of the sought potential is done by solving a least-squares problem. When only a limited area of the Earth is studied, the estimation of the point-mass parameters by means of spherical harmonics is prone to error, since these basis functions are no longer orthogonal over a partial domain of the sphere. The point-mass determination problem on a limited region is therefore treated by constructing a Slepian basis that is orthogonal over the specified limited domain of the sphere. We propose an iterative algorithm for the numerical solution of the local point-mass determination problem and give some results on the robustness of this reconstruction process. We also study the stability of this problem with respect to added noise. Some numerical tests are presented and commented.
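For the second problem, the innermost computation is a least-squares fit of point-mass intensities so that their summed Newtonian potentials match the observed surface potential. The sketch below is an illustration under simplifying assumptions: candidate positions are fixed and known and the gravitational constant is absorbed into the intensities, whereas the thesis also estimates the positions, which makes the problem nonlinear.

```python
import numpy as np

def fit_point_mass_intensities(obs_points, candidate_positions, potential_obs):
    """Least-squares fit of point-mass intensities at fixed candidate positions
    so that the potential sum_i m_i / |x - p_i| best matches the observed
    potential at the surface points."""
    X = np.asarray(obs_points, float)            # (N, 3) observation points
    P = np.asarray(candidate_positions, float)   # (M, 3) candidate mass positions
    dist = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)   # (N, M)
    A = 1.0 / dist                               # design matrix of 1/r kernels
    m, *_ = np.linalg.lstsq(A, potential_obs, rcond=None)
    return m

# toy example: two true masses inside the unit ball, observations on the sphere
rng = np.random.default_rng(0)
obs = rng.normal(size=(200, 3)); obs /= np.linalg.norm(obs, axis=1, keepdims=True)
true_pos = np.array([[0.3, 0.0, 0.2], [-0.2, 0.4, -0.1]])
true_m = np.array([1.0, 0.5])
v = (true_m / np.linalg.norm(obs[:, None, :] - true_pos[None], axis=2)).sum(axis=1)
print(fit_point_mass_intensities(obs, true_pos, v))   # close to [1.0, 0.5]
```

Replacing the 1/r columns with basis functions that are orthogonal only on the observed region is precisely where the Slepian construction enters.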
6

Multiterminal source coding: sum-rate loss, code designs, and applications to video sensor networks

Yang, Yang 15 May 2009 (has links)
Driven by a host of emerging applications (e.g., sensor networks and wireless video), distributed source coding (i.e., Slepian-Wolf coding, Wyner-Ziv coding and various other forms of multiterminal source coding) has recently become a very active research area. This dissertation focuses on the multiterminal (MT) source coding problem and consists of three parts. The first part studies the sum-rate loss of an important special case of quadratic Gaussian multiterminal source coding, where all sources are positively symmetric and all target distortions are equal. We first give the minimum sum-rate for joint encoding of Gaussian sources in the symmetric case, and then show that the supremum of the sum-rate loss due to distributed encoding in this case is (1/2)log2(5/4) = 0.161 b/s when L = 2 and increases on the order of (√L/2)log2(e) b/s as the number of terminals L goes to infinity. The supremum sum-rate loss of 0.161 b/s in the symmetric case equals that in general quadratic Gaussian two-terminal source coding without the symmetric assumption. It is conjectured that this equality holds for any number of terminals. In the second part, we present two practical MT coding schemes under the framework of Slepian-Wolf coded quantization (SWCQ) for both direct and indirect MT problems. The first, asymmetric SWCQ scheme relies on quantization and Wyner-Ziv coding, and it is implemented via source splitting to achieve any point on the sum-rate bound. In the second, conceptually simpler scheme, symmetric SWCQ, the two quantized sources are compressed using symmetric Slepian-Wolf coding via a channel code partitioning technique that is capable of achieving any point on the Slepian-Wolf sum-rate bound. Our practical designs employ trellis-coded quantization and turbo/LDPC codes for both asymmetric and symmetric Slepian-Wolf coding. Simulation results show a gap of only 0.139-0.194 bit per sample away from the sum-rate bound for both direct and indirect MT coding problems. The third part applies the above two MT coding schemes to a practical source, stereo video sequences, to save sum rate over independent coding of both sequences. Experiments with both schemes on stereo video sequences using H.264, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give slightly smaller sum rate than separate H.264 coding of both sequences at the same video quality.
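The quoted two-terminal constant is a quick arithmetic check away. The snippet below is just a numerical evaluation of the expressions as written above, not part of the dissertation.

```python
from math import log2, sqrt, e

# supremum sum-rate loss in the symmetric case with L = 2 terminals
print(round(0.5 * log2(5 / 4), 3))          # 0.161 b/s

# stated growth order of the supremum sum-rate loss for large L
def loss_order(L):
    return sqrt(L) / 2 * log2(e)

print(round(loss_order(100), 2))            # about 7.21 b/s across 100 terminals
```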
7

Performance of MIMO and non-orthogonal transmission in lossy forward relay networks

He, J. (Jiguang) 23 October 2018 (has links)
Abstract In the current LTE-Advanced system, decode-and-forward (DF) is leveraged for cooperative relaying, where erroneously decoded sequences are discarded at the relay, resulting in a waste of resources. The reason is that even an erroneously decoded sequence can provide a certain amount of useful information about the source at the destination. Therefore, we develop a new relaying scheme, called lossy DF (also known as lossy forward (LF)), where the relay always forwards the decoded sequence to the destination. Benefiting from the always-forward principle, LF relaying has been verified to outperform DF relaying in terms of outage probability, ε-outage achievable rate, frame error rate (FER), and communication coverage. Three exemplifying network scenarios are studied in this thesis: the one-way multiple-input multiple-output (MIMO) relay network, the multiple access relay channel (MARC), and the general multi-source multi-relay network. We derive the outage probability of the one-way MIMO relay network under the assumption that the orthogonal space-time block code (OSTBC) is implemented at the transmitter side for each individual transmission. Interestingly, we find that the diversity order of the OSTBC based one-way MIMO relay network can be interpreted and formulated by the well-known max-flow min-cut theorem, which is widely utilized to calculate network capacity. For the MARC, non-orthogonal transmission is introduced to further improve the network throughput compared to its orthogonal counterpart. The region for lossless recovery of both sources is formulated by the theorem of the multiple access channel (MAC) with a helper, which combines the Slepian-Wolf rate region and the MAC capacity region. Since the region for lossless recovery is obtained via a sufficient condition, the derived outage probability can be regarded as a theoretical upper bound. We also conduct performance evaluation by exploiting different accumulator (ACC) aided turbo codes at the transmitter side, exclusive or (XOR) based multi-user complete decoding at the relay, and iterative joint decoding (JD) at the destination. For the general multi-source multi-relay network, we focus on the investigation of the end-to-end outage probability. The performance improvement of LF over DF is verified through theoretical analyses and numerical results in terms of outage probability.
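Outage probability, the main metric throughout this abstract, is straightforward to estimate by Monte Carlo simulation. The sketch below is a deliberately simple baseline, a two-hop decode-and-forward link with Rayleigh fading, no direct source-destination path, and the half-duplex rate penalty ignored; it is not the lossy-forward analysis of the thesis and only illustrates how an outage event is counted.

```python
import numpy as np

def outage_two_hop_df(snr_db, rate_bps, trials=200_000, seed=1):
    """Monte Carlo outage estimate for a two-hop decode-and-forward link with
    unit-variance Rayleigh fading on both hops: outage occurs when either hop
    cannot support the target rate."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    g_sr = rng.exponential(1.0, trials)      # |h_SR|^2 channel gains
    g_rd = rng.exponential(1.0, trials)      # |h_RD|^2 channel gains
    c_sr = np.log2(1 + snr * g_sr)
    c_rd = np.log2(1 + snr * g_rd)
    return np.mean(np.minimum(c_sr, c_rd) < rate_bps)

print(outage_two_hop_df(snr_db=10, rate_bps=1.0))
```

In the LF scheme studied here, the relay's contribution is kept even when the source-relay hop fails, which is what changes the outage behaviour relative to this DF baseline.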
8

Decoding and lossy forwarding based multiple access relaying

Lu, P.-S. (Pen-Shun) 20 March 2015 (has links)
Abstract The goal of this thesis is to provide a unified concept of lossy forwarding, from theoretical analysis to practical scheme design, for the decode-and-forward based multiple access relay channel (MARC) system. To improve the performance of MARC when the relay is subject to resource and/or time constraints, the erroneous estimates output by simple detection schemes at the relay are forwarded and exploited. A correlation is then found between two sequences: the network-coded sequence sent from the relay, and the corresponding exclusive-OR-ed information sequence. Several joint network-channel coding (JNCC) techniques are provided in which this correlation is utilized to update the log-likelihood ratio sequences during the iterative decoding process at the destination. As a result, the bit error rate (BER) and frame error rate (FER) are improved compared with those of MARC with a selective DF strategy (SDF-MARC). The MARC proposed above is referred to as erroneous-estimates-exploiting MARC (e-MARC). To investigate the achieved FER performance of the e-MARC system, the outage probability for e-MARC with two source nodes is theoretically derived. We reformulate the e-MARC system and identify its admissible rate region according to the Slepian-Wolf theorem with a helper. The outage probability is then obtained by a set of integrals over the rate region with respect to the probability density functions of the instantaneous signal-to-noise power ratios of all the links. It is found through simulations that, when one of the source nodes is far away from both the relay and the destination, e-MARC is superior to SDF-MARC in terms of outage performance. Furthermore, a joint adaptive network-channel coding (JANCC) technique is proposed to support e-MARC with more source nodes. In JANCC, a vector is constructed at the destination to identify the indices of the incorrectly decoded source node(s) and is re-transmitted to the relay to request additional redundancy. Upon receiving the request, the relay performs network coding only over the estimates specified by the vector. Numerical results show that JANCC-aided e-MARC is superior to e-MARC in terms of FER and goodput efficiency. In addition, compared with SDF-MARC, where iterative decoding is performed at the relay, the use of differential detection with JANCC-aided e-MARC significantly reduces the computational complexity and latency with only a small loss in FER.
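The admissible-rate-region check that underlies the outage derivation can be made concrete with a small example. The snippet below is an illustration for two uniform binary sources with a bit-flipping correlation model; it uses the plain two-source Slepian-Wolf region rather than the helper-augmented region analyzed in the thesis.

```python
from math import log2

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def sw_admissible(r1, r2, p):
    """Slepian-Wolf region for uniform binary X1, X2 with X2 = X1 XOR N,
    N ~ Bernoulli(p): both conditional entropies equal h2(p)."""
    return (r1 >= h2(p) and
            r2 >= h2(p) and
            r1 + r2 >= 1.0 + h2(p))      # H(X1, X2) = H(X1) + H(X2 | X1)

print(sw_admissible(0.7, 0.7, p=0.05))   # True: 1.4 >= 1 + h2(0.05), about 1.29
print(sw_admissible(0.3, 0.9, p=0.05))   # False: the sum-rate constraint fails
```

An outage is then the event that the instantaneous link rates fall outside this region, and averaging that event over the fading distributions gives the integrals mentioned above.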
9

Implementation Of A Distributed Video Codec

Isik, Cem Vedat 01 February 2008 (has links) (PDF)
Current interframe video compression standards such as MPEG-4 and H.264 require a high-complexity encoder for predictive coding to exploit the similarities among successive video frames. This requirement is acceptable for cases where the video sequence to be transmitted is encoded once and decoded many times. However, some emerging applications such as video-based sensor networks, power-aware surveillance and mobile video communication systems require computational complexity to be shifted from the encoder to the decoder. Distributed Video Coding (DVC) is a new coding paradigm, based on two information-theoretic results, the Slepian-Wolf and Wyner-Ziv theorems, which allows source statistics to be exploited at the decoder only. This architecture therefore enables very simple encoders to be used in video coding. Wyner-Ziv video coding is a particular case of DVC which deals with lossy source coding where side information is available at the decoder only. In this thesis, we implemented a DVC codec based on the DISCOVER (DIStributed COding for Video sERvices) project and carried out a detailed analysis of each block. Several algorithms have been implemented for each block and the results are compared in terms of rate-distortion. The implemented architecture is aimed to be used as a testbed for future studies.
10

Coding with side information

Cheng, Szeming 01 November 2005 (has links)
Source coding and channel coding are two important problems in communications. Although side information exists in everyday scenarios, its effect is not taken into account in conventional setups. In this thesis, we focus on the practical designs of two interesting coding problems with side information: Wyner-Ziv coding (source coding with side information at the decoder) and Gel'fand-Pinsker coding (channel coding with side information at the encoder). For WZC, we split the design problem into the two cases when the distortion of the reconstructed source is zero and when it is not. We review that the first case, which is commonly called Slepian-Wolf coding (SWC), can be implemented using conventional channel coding. Then, we detail the SWC design using the low-density parity-check (LDPC) code. To facilitate SWC design, we justify a necessary requirement that the SWC performance should be independent of the input source. We show that a sufficient condition for this requirement is that the hypothetical channel between the source and the side information satisfies a symmetry condition dubbed dual symmetry. Furthermore, under that dual symmetry condition, the SWC design problem can simply be treated as LDPC code design over the hypothetical channel. When the distortion of the reconstructed source is non-zero, we propose a practical WZC paradigm called Slepian-Wolf coded quantization (SWCQ) by combining SWC and nested lattice quantization. We point out an interesting analogy between SWCQ and entropy-coded quantization in classic source coding. Furthermore, a practical scheme of SWCQ using 1-D nested lattice quantization and LDPC codes is implemented. For GPC, since the actual design procedure relies on a more precise setting of the problem, we choose to investigate the design of GPC in the form of a digital watermarking problem, as digital watermarking is the precise dual of WZC. We then introduce an enhanced version of the well-known spread spectrum watermarking technique. Two applications related to digital watermarking are presented.
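Implementing SWC with a channel code means the encoder sends only the syndrome of the source word and the decoder searches the corresponding coset for the sequence closest to its side information. The minimal brute-force sketch below uses a toy (7,4) Hamming parity-check matrix rather than the LDPC codes discussed above, and the helper names are assumptions.

```python
import numpy as np
from itertools import product

# (7,4) Hamming parity-check matrix: column j is the binary expansion of j+1
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def syndrome(x):
    return H.dot(x) % 2              # encoder output: 3 bits for 7 source bits

def decode(s, y):
    """Pick the sequence with syndrome s that is closest (in Hamming distance)
    to the side information y, by exhaustive search over all 7-bit words."""
    best, best_d = None, None
    for cand in product([0, 1], repeat=H.shape[1]):
        cand = np.array(cand, dtype=np.uint8)
        if np.array_equal(syndrome(cand), s):
            d = int(np.sum(cand ^ y))
            if best is None or d < best_d:
                best, best_d = cand, d
    return best

x = np.array([1, 0, 1, 1, 0, 0, 1], dtype=np.uint8)   # source word
y = x.copy(); y[4] ^= 1                               # side info: x with one bit flipped
print(decode(syndrome(x), y))                         # recovers x exactly
```

With 3 syndrome bits standing in for 7 source bits, the decoder recovers the source whenever it differs from the side information in at most one position, which is the compression-versus-correlation trade-off that LDPC-based SWC scales up.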
