31 |
Multiterminal Video Coding: From Theory to Application. Zhang, Yifu (August 2012)
Multiterminal (MT) video coding is a practical application of MT source coding theory. On the theory side, two problems associated with achievable rate regions are investigated: a new sufficient condition for Berger-Tung (BT) sum-rate tightness, and the sum-rate loss of quadratic Gaussian MT source coding. Practical code designs for ideal Gaussian sources under a quadratic distortion measure are also obtained for cases with more than two sources, with minor rate loss relative to the theoretical limits. When the theory is applied in practice, however, the performance of MT video coding has been unsatisfactory because of the difficulty of exploiting the correlation between different camera views. In this dissertation, we present an MT video coding scheme under the H.264/AVC framework. In this scheme, depth camera information can optionally be sent to the decoder separately as another source sequence. With the help of depth information at the decoder end, inter-view correlation can be exploited much more effectively, and the compression performance improves accordingly. The depth information also enables joint estimation from decoded frames and side information at the decoder, improving the quality of the reconstructed video frames. Experimental results show that, compared to separate encoding, up to 9.53% of the bit rate can be saved by the proposed MT scheme using decoder depth information, and up to 5.65% by the scheme without depth camera information. Comparisons to joint video coding schemes are also provided.
|
32 |
Universal Source Coding in the Non-Asymptotic Regime (January 2018)
Abstract: The fundamental limits of fixed-to-variable (F-V) and variable-to-fixed (V-F) length universal source coding at short blocklengths are characterized. For F-V length coding, the Type Size (TS) code has previously been shown to be optimal up to the third-order rate for universal compression of all memoryless sources over finite alphabets. The TS code assigns sequences, ordered by their type class sizes, to binary strings ordered lexicographically.
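To make the Type Size construction concrete, here is a minimal sketch that builds a TS codebook for short binary sequences; the blocklength, the lexicographic tie-breaking, and the printed examples are illustrative assumptions rather than details taken from the thesis.

    from itertools import product
    from math import comb

    def type_class_size(x):
        # For binary sequences the type is the empirical distribution,
        # determined by the number of ones; the class size is binomial.
        return comb(len(x), sum(x))

    def binary_strings():
        # All binary strings ordered by length, then lexicographically:
        # '', '0', '1', '00', '01', ...
        yield ""
        n = 1
        while True:
            for s in product("01", repeat=n):
                yield "".join(s)
            n += 1

    def type_size_codebook(n):
        # Sort all length-n sequences by type class size (smallest class
        # first, ties broken lexicographically) and pair them with binary
        # strings of increasing length.
        seqs = sorted(product((0, 1), repeat=n),
                      key=lambda x: (type_class_size(x), x))
        gen = binary_strings()
        return {x: next(gen) for x in seqs}

    book = type_size_codebook(4)
    print(book[(0, 0, 0, 0)])  # smallest type class -> shortest codeword
    print(book[(0, 1, 1, 0)])  # largest type class -> longest codeword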
The universal F-V coding problem for the class of first-order stationary, irreducible, and aperiodic Markov sources is considered first. The third-order coding rate of the TS code for this Markov class is derived, and a converse on the third-order coding rate for the general class of F-V codes shows the optimality of the TS code for such Markov sources.
This type-class approach is then generalized to the compression of parametric sources. A natural scheme is to define two sequences to be in the same type class if and only if they are equiprobable under every model in the parametric class. This natural approach, however, is shown to be suboptimal. A variation of the Type Size code is introduced in which type classes are defined based on neighborhoods of minimal sufficient statistics. The asymptotics of the overflow rate of this variation are derived, and a converse result establishes its optimality up to the third-order term. These results are derived for parametric families of i.i.d. sources as well as Markov sources.
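For intuition, in an i.i.d. Bernoulli family the "equiprobable under every model" relation reduces to sharing the minimal sufficient statistic; the display below is an illustrative special case, not the thesis's general definition:

    % k(x^n) denotes the number of ones in x^n
    P_\theta(x^n) = \theta^{k(x^n)} (1-\theta)^{n-k(x^n)},
    \qquad
    P_\theta(x^n) = P_\theta(\tilde{x}^n)\ \forall\,\theta\in(0,1)
    \iff k(x^n) = k(\tilde{x}^n).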
Finally, universal V-F length coding of the class of parametric sources is considered in the short-blocklength regime. The proposed dictionary, used to parse the source output stream, consists of sequences at the boundary of the transition from low to high quantized type complexity, hence the name Type Complexity (TC) code. For a large enough dictionary, the $\epsilon$-coding rate of the TC code is derived, and a converse result shows its optimality up to the third-order term. / Dissertation/Thesis / Doctoral Dissertation, Electrical Engineering, 2018
|
33 |
The analysis of enumerative source codes and their use in Burrows-Wheeler compression algorithms. McDonald, Andre Martin (10 September 2010)
In the late 20th century the reliable and efficient transmission, reception and storage of information proved to be central to the most successful economies all over the world. The Internet, once a classified project accessible to a select few, is now part of the everyday lives of a large part of the human population, and as such the efficient storage of information is an important part of the information economy. The improvement of the information storage density of optical and electronic media has been remarkable, but the elimination of redundancy in stored data and the reliable reconstruction of the original data remain desired goals. The field of source coding is concerned with the compression of redundant data and its reliable decompression. The arithmetic source code, which was independently proposed by J. J. Rissanen and R. Pasco in 1976, revolutionized the field of source coding. Compression algorithms that use an arithmetic code to encode redundant data are typically more effective and computationally more efficient than compression algorithms that use earlier source codes such as extended Huffman codes. The arithmetic source code is also more flexible than earlier source codes, and is frequently used in adaptive compression algorithms. The arithmetic code remains the source code of choice, despite having been introduced more than 30 years ago. The problem of effectively encoding data from sources with known statistics (i.e. where the probability distribution of the source data is known) was solved with the introduction of the arithmetic code. The probability distribution of practical data is seldom available to the source encoder, however. The source coding of data from sources with unknown statistics is a more challenging problem, and remains an active research topic. Enumerative source codes were introduced by T. J. Lynch and L. D. Davisson in the 1960s. These lossless source codes have the remarkable property that they may be used to effectively encode source sequences from certain sources without requiring any prior knowledge of the source statistics. One drawback of these source codes is the computationally complex nature of their implementations. Several years after the introduction of enumerative source codes, J. G. Cleary and I. H. Witten proved that approximate enumerative source codes may be realized by using an arithmetic code. Approximate enumerative source codes are significantly less complex than the original enumerative source codes, but are less effective than the original codes. Researchers have shown more interest in arithmetic source codes than in enumerative source codes since the publication of the work by Cleary and Witten. This thesis concerns the original enumerative source codes and their use in Burrows-Wheeler compression algorithms. A novel implementation of the original enumerative source code is proposed. This implementation has a significantly lower computational complexity than the direct implementation of the original enumerative source code. Several novel enumerative source codes are introduced in this thesis, including optimal fixed-to-fixed length source codes with manageable computational complexity. A generalization of the original enumerative source code, which accommodates more complex data sources, is proposed in this thesis. The generalized source code uses the Burrows-Wheeler transform, a low-complexity algorithm for converting the redundancy of sequences from complex data sources to a more accessible form.
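As a minimal illustration of enumerative coding in the Lynch-Davisson spirit, the sketch below computes the lexicographic rank of a fixed-weight binary sequence within its class and inverts the mapping; this is the textbook special case, not the lower-complexity implementation proposed in the thesis.

    from math import comb

    def enumerative_rank(x):
        # Lexicographic index of x among all sequences of the same
        # length and weight (classic enumerative encoding).
        n, ones_left, rank = len(x), sum(x), 0
        for i, bit in enumerate(x):
            if bit == 1:
                # Sequences with a 0 here and the same prefix come first.
                rank += comb(n - i - 1, ones_left)
                ones_left -= 1
        return rank

    def enumerative_unrank(n, k, rank):
        # Inverse mapping: rebuild the sequence from (length, weight, index).
        x, ones_left = [], k
        for i in range(n):
            c = comb(n - i - 1, ones_left)
            if rank >= c:
                x.append(1)
                rank -= c
                ones_left -= 1
            else:
                x.append(0)
        return x

    seq = [0, 1, 1, 0, 1, 0, 0, 1]
    r = enumerative_rank(seq)
    assert enumerative_unrank(len(seq), sum(seq), r) == seq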
The generalized source code effectively encodes the transformed sequences using the original enumerative source code. It is demonstrated and proved mathematically that this source code is universal (i.e. the code has an asymptotic normalized average redundancy of zero bits).
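The Burrows-Wheeler transform itself admits a compact, if inefficient, reference implementation; the sentinel-based sketch below is illustrative only (practical implementations use suffix arrays).

    def bwt(s, sentinel="$"):
        # Naive Burrows-Wheeler transform: sort all rotations of
        # s + sentinel and read off the last column. O(n^2 log n).
        s = s + sentinel
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(r[-1] for r in rotations)

    print(bwt("banana"))  # 'annb$aa' -- equal symbols cluster together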
Copyright / Dissertation (MEng)--University of Pretoria, 2010. / Electrical, Electronic and Computer Engineering / unrestricted
|
34 |
Integer-forcing in multiterminal coding: uplink-downlink duality and source-channel duality. He, Wenbo (05 November 2016)
Interference is considered to be a major obstacle to wireless communication. Popular approaches, such as the zero-forcing receiver in the MIMO (multiple-input multiple-output) multiple-access channel (MAC) and zero-forcing (ZF) beamforming in the MIMO broadcast channel (BC), eliminate the interference first and decode each codeword separately using a conventional single-user decoder. Recently, a transceiver architecture called integer-forcing (IF) has been proposed in the context of the MIMO Gaussian multiple-access channel to exploit integer-linear combinations of the codewords. Instead of treating other codewords as interference, the integer-forcing approach decodes linear combinations of the codewords from different users and then solves for the desired codewords. Integer-forcing can closely approach the performance of the optimal joint maximum-likelihood decoder, and an advanced version called successive integer-forcing can achieve the sum capacity of the MIMO MAC. Several extensions of integer-forcing have been developed in various scenarios, such as integer-forcing for the Gaussian MIMO broadcast channel, integer-forcing for Gaussian distributed source coding, and integer-forcing interference alignment for the Gaussian interference channel.
This dissertation demonstrates duality relationships for integer-forcing among three different channel models, exploring in detail two distinct duality types: uplink-downlink duality and source-channel duality. Uplink-downlink duality is established for integer-forcing between the Gaussian MIMO multiple-access channel and its dual Gaussian MIMO broadcast channel: we show that under a total power constraint, integer-forcing can achieve the same sum rate in both cases. We further develop a dirty-paper integer-forcing scheme for the Gaussian MIMO BC and show an uplink-downlink duality with successive integer-forcing for the Gaussian MIMO MAC. The source-channel duality is established for integer-forcing between the Gaussian MIMO multiple-access channel and its dual Gaussian distributed source coding problem. We extend previous results for integer-forcing source coding to allow for successive cancellation. For integer-forcing without successive cancellation in both channel coding and source coding, we show that the rates in the two scenarios lie within a constant gap of one another. We further show that there exists a successive cancellation scheme such that integer-forcing channel coding and integer-forcing source coding achieve the same rate tuple.
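The following sketch illustrates the integer-forcing rate computation for a small real-valued MIMO multiple-access channel. The brute-force coefficient search, unit-variance noise, symmetric-rate operation, and the example channel matrix are all assumptions made for illustration; they are not the schemes developed in the dissertation.

    import itertools
    import numpy as np

    def if_sum_rate(H, snr, coeff_range=2):
        # Integer-forcing sum rate (sketch): search small full-rank integer
        # matrices A; decoding the combination a^T x sees effective noise
        # variance a^T (H^T H + I/snr)^(-1) a, giving a per-stream rate of
        # 0.5 * log2(snr / sigma_eff^2).
        n = H.shape[1]
        G = np.linalg.inv(H.T @ H + np.eye(n) / snr)
        cands = [np.array(a) for a in
                 itertools.product(range(-coeff_range, coeff_range + 1), repeat=n)
                 if any(a)]
        best = 0.0
        for rows in itertools.permutations(cands, n):
            A = np.stack(rows)
            if abs(np.linalg.det(A)) < 0.5:     # skip singular integer matrices
                continue
            sigma2 = max(float(a @ G @ a) for a in rows)
            rate = max(0.0, 0.5 * np.log2(snr / sigma2))
            best = max(best, n * rate)          # n streams at the worst-stream rate
        return best

    H = np.array([[1.0, 0.7], [0.3, 1.2]])
    print(if_sum_rate(H, snr=100.0))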
|
35 |
[pt] CODIFICADORES UNIVERSAIS VIA RECORRÊNCIA DE PADRÕES PARA FONTES COM NÚMERO DE ESTADOS FINITO / [en] UNIVERSAL STRING MATCHING ENCODERS FOR SOURCES WITH A FINITE NUMBER OF STATES. Pinho, Marcelo da Silva (01 December 2005)
[en] String matching encoders emerged in the 1970s, when the lz77 and lz78 encoders were proposed. Because of their low computational complexity and good performance in data file compression, these encoders became extremely popular. Although they are universal, i.e., their compression rates converge to the source entropy, it was recently shown that the rates do not converge as fast as possible, not even for the class of memoryless sources. The redundancy of a universal encoder C, denoted R_C, measures how fast its compression rate converges to the entropy. For sources with a finite number of states, while the best string matching encoders achieve a redundancy of order 1/log n, there exist encoders that achieve a redundancy of order (log n)/n. Therefore, string matching encoders are not optimal. Although encoders that are optimal under the redundancy criterion are known, they have high computational complexity and are of little practical use. Among string matching encoders, the lz78 has one of the lowest redundancies for the class of finite-state sources; in fact, no other encoder in this class has a better redundancy than the lz78. Building on the lz78, this work proposes new techniques to accelerate the convergence of the compression rate (i.e., reduce the redundancy) of universal string matching encoders for finite-state sources. These techniques give rise to new versions of the lz78, whose redundancies are established for the class of finite-state sources. The new versions are applied to the compression of data files, and the results are compared with those of earlier versions.
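For reference, the incremental parsing at the heart of the lz78 encoder can be sketched in a few lines; this is the classical algorithm, not one of the modified versions proposed in this work.

    def lz78_parse(s):
        # Incremental (lz78) parsing: each new phrase extends a previously
        # seen phrase by one symbol; emit (phrase index, new symbol) pairs.
        dictionary = {"": 0}
        phrases, current = [], ""
        for ch in s:
            if current + ch in dictionary:
                current += ch
            else:
                phrases.append((dictionary[current], ch))
                dictionary[current + ch] = len(dictionary)
                current = ""
        if current:  # flush a final phrase that is already in the dictionary
            phrases.append((dictionary[current[:-1]], current[-1]))
        return phrases

    print(lz78_parse("abaababaabab"))
    # [(0, 'a'), (0, 'b'), (1, 'a'), (2, 'a'), (4, 'a'), (4, 'b')]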
|
36 |
Symmetric Generalized Gaussian Multiterminal Source Coding. Chang, Yameng Jr (January 2018)
Consider a generalized multiterminal source coding system, where (l choose m) encoders, each observing a distinct size-m subset of l (l ≥ 2) zero-mean unit-variance symmetrically correlated Gaussian sources with correlation coefficient ρ, compress their observations in such a way that a joint decoder can reconstruct the sources within a prescribed mean squared error distortion based on the compressed data. The optimal rate-distortion performance of this system was previously known only for the two extreme cases m = l (the centralized case) and m = 1 (the distributed case), and except when ρ = 0, the centralized system can achieve strictly lower compression rates than the distributed system under all non-trivial distortion constraints. Somewhat surprisingly, it is established in the present thesis that the optimal rate-distortion performance of the afore-described generalized multiterminal source coding system with m ≥ 2 coincides with that of the centralized system for all distortions when ρ ≤ 0 and for distortions below an explicit positive threshold (depending on m) when ρ > 0. Moreover, when ρ > 0, the minimum achievable rate of generalized multiterminal source coding subject to an arbitrary positive distortion constraint d is shown to be within a finite gap (depending on m and d) of its centralized counterpart in the large-l limit, except possibly at the critical distortion d = 1 − ρ. / Thesis / Master of Applied Science (MASc)
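The centralized benchmark invoked above follows from reverse water-filling over the spectrum of the symmetric covariance matrix; the display below sketches it assuming an average mean squared error constraint D across the l sources (the exact distortion convention is an assumption here).

    % Covariance of l unit-variance sources with common correlation rho:
    \Sigma = (1-\rho) I_l + \rho \mathbf{1}\mathbf{1}^T,
    \qquad
    \mathrm{eig}(\Sigma) = \{\, 1 + (l-1)\rho,\ \underbrace{1-\rho, \dots, 1-\rho}_{l-1} \,\}.
    % Reverse water-filling over the eigenvalues lambda_i:
    R(D) = \sum_{i=1}^{l} \frac{1}{2} \log \frac{\lambda_i}{D_i},
    \qquad D_i = \min(\lambda_i, \theta),
    \qquad \sum_{i=1}^{l} D_i = l D.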
|
37 |
Modern Error Control Codes and Applications to Distributed Source Coding. Sartipi, Mina (15 August 2006)
This dissertation first studies two-dimensional wavelet codes (TDWCs). TDWCs are introduced as a solution to the problem of designing a 2-D code that has low decoding complexity and the maximum erasure-correcting property for rectangular burst erasures. The half-rate TDWCs of dimensions N_1 × N_2 satisfy the Reiger bound with equality for burst erasures of dimensions N_1 × N_2/2 and N_1/2 × N_2, where GCD(N_1, N_2) = 2. Examples of TDWCs are provided that recover any rectangular burst erasure of area N_1 N_2/2. These lattice-cyclic codes can recover burst erasures with simple and efficient ML decoding.
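A quick arithmetic check of the claim, assuming the erasure form of the Reiger bound (a code that recovers every burst erasure of B symbols needs at least B redundant symbols):

    % Half-rate TDWC of area N_1 N_2: redundancy equals the burst area.
    n - k = \frac{N_1 N_2}{2} = N_1 \cdot \frac{N_2}{2} = \frac{N_1}{2} \cdot N_2,

so the redundancy exactly matches the area of each correctable burst, which is what meeting the bound with equality asserts.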
This work then studies the problem of distributed source coding for two and three correlated signals using channel codes. We propose to model the distributed source coding problem with a set of parallel channels, which simplifies distributed source coding to the design of non-uniform channel codes. This design criterion improves the performance of the source coding considerably. LDPC codes are used for lossless and lossy distributed source coding, when the correlation parameter is known or unknown at the time of code design. We show that distributed source coding at the corner point using LDPC codes simplifies to non-uniform LDPC codes and semi-random punctured LDPC codes for systems of two and three correlated sources, respectively. We also investigate distributed source coding at an arbitrary rate on the Slepian-Wolf rate region. This problem reduces to designing a rate-compatible LDPC code with the unequal error protection property. This dissertation finally studies the distributed source coding problem for applications whose wireless channel is an erasure channel with unknown erasure probability. For these applications, rateless codes are better candidates than LDPC codes. Non-uniform rateless codes and an improved decoding algorithm are proposed for this purpose. We introduce a reliable, rate-optimal, and energy-efficient multicast algorithm that uses distributed source coding and rateless coding. The proposed multicast algorithm performs very close to network coding, while having lower complexity and higher adaptability.
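As background for the rateless-coding part, the sketch below generates the robust soliton degree distribution of standard LT codes; the parameter values are illustrative, and this shows the standard construction rather than the non-uniform variants proposed in the dissertation.

    import numpy as np

    def robust_soliton(k, c=0.1, delta=0.05):
        # Robust soliton degree distribution for LT (rateless) codes over
        # k input symbols; c and delta are the usual tuning parameters.
        R = c * np.log(k / delta) * np.sqrt(k)
        spike = int(k / R)                      # location of the extra spike
        rho = np.zeros(k + 1)
        rho[1] = 1.0 / k
        for d in range(2, k + 1):
            rho[d] = 1.0 / (d * (d - 1))
        tau = np.zeros(k + 1)
        for d in range(1, spike):
            tau[d] = R / (d * k)
        tau[spike] = R * np.log(R / delta) / k
        mu = rho + tau
        return mu / mu.sum()                    # normalize to a distribution

    mu = robust_soliton(1000)
    print(mu[1:6])  # probabilities of the low encoding degrees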
|
38 |
Performance of MIMO and non-orthogonal transmission in lossy forward relay networks. He, J. (Jiguang) (23 October 2018)
Abstract
In the current LTE-Advanced system, decode-and-forward (DF) is leveraged for cooperative relaying: erroneously decoded sequences are discarded at the relay, which wastes resources, since even an erroneously decoded sequence can provide a certain amount of useful information about the source to the destination. We therefore develop a new relaying scheme, called lossy DF (also known as lossy forward (LF)), in which the relay always forwards the decoded sequence to the destination. Thanks to this always-forward principle, LF relaying has been verified to outperform DF relaying in terms of outage probability, ε-outage achievable rate, frame error rate (FER), and communication coverage.
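For readers unfamiliar with the outage metric used throughout, the sketch below estimates the outage probability of a single Rayleigh fading link by Monte Carlo and checks it against the closed form; this toy link is an illustration of the metric only, not of the LF or DF outage expressions derived in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def outage_rayleigh(snr_db, rate, n=200_000):
        # Outage: the fading realization cannot support the target rate,
        # i.e. log2(1 + snr * |h|^2) < rate with |h|^2 ~ Exp(1).
        snr = 10 ** (snr_db / 10)
        h2 = rng.exponential(size=n)
        return np.mean(np.log2(1 + snr * h2) < rate)

    snr_db, rate = 10.0, 1.0
    sim = outage_rayleigh(snr_db, rate)
    exact = 1 - np.exp(-(2 ** rate - 1) / 10 ** (snr_db / 10))
    print(sim, exact)  # the two estimates should agree closely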
Three exemplifying network scenarios are studied in this thesis: the one-way multiple-input multiple-output (MIMO) relay network, the multiple access relay channel (MARC), and the general multi-source multi-relay network.
We derive the outage probability of the one-way MIMO relay network under the assumption that an orthogonal space-time block code (OSTBC) is used at the transmitter side for each individual transmission. Interestingly, we find that the diversity order of the OSTBC-based one-way MIMO relay network can be interpreted and formulated via the well-known max-flow min-cut theorem, which is widely used to calculate network capacity. For the MARC, non-orthogonal transmission is introduced to further improve the network throughput compared to its orthogonal counterpart. The region for lossless recovery of both sources is formulated via the theorem for the multiple access channel (MAC) with a helper, which combines the Slepian-Wolf rate region and the MAC capacity region. Since the region for lossless recovery is obtained from a sufficient condition, the derived outage probability can be regarded as a theoretical upper bound.
We also conduct a performance evaluation exploiting different accumulator (ACC) aided turbo codes at the transmitter side, exclusive-or (XOR) based multi-user complete decoding at the relay, and iterative joint decoding (JD) at the destination. For the general multi-source multi-relay network, we focus on the end-to-end outage probability. The performance improvement of LF over DF is verified through theoretical analyses and numerical results in terms of outage probability.
|
39 |
Coding with side information. Cheng, Szeming (01 November 2005)
Source coding and channel coding are two important problems in communications. Although side information arises in everyday scenarios, its effect is not taken into account in the conventional setups. In this thesis, we focus on the practical design of two interesting coding problems with side information: Wyner-Ziv coding (WZC; source coding with side information at the decoder) and Gel'fand-Pinsker coding (GPC; channel coding with side information at the encoder).
For WZC, we split the design problem into two cases: when the distortion of the reconstructed source is zero, and when it is not. We review that the first case, commonly called Slepian-Wolf coding (SWC), can be implemented using conventional channel coding. We then detail SWC design using low-density parity-check (LDPC) codes. To facilitate SWC design, we justify a necessary requirement that the SWC performance be independent of the input source. We show that a sufficient condition for this requirement is that the hypothetical channel between the source and the side information satisfy a symmetry condition dubbed dual symmetry. Furthermore, under this dual symmetry condition, the SWC design problem can simply be treated as LDPC code design over the hypothetical channel.
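A toy version of syndrome-based Slepian-Wolf coding illustrates the channel-code connection: the encoder sends only the syndrome of the source word, and the decoder picks the coset member closest to the side information. The tiny parity-check matrix, the BSC(0.1) correlation model, and the brute-force decoder below are assumptions for illustration; practical designs use long LDPC codes with belief-propagation decoding.

    import numpy as np

    rng = np.random.default_rng(1)

    H = np.array([[1, 1, 0, 1, 0, 0],   # toy parity-check matrix,
                  [0, 1, 1, 0, 1, 0],   # not an optimized LDPC code
                  [1, 0, 1, 0, 0, 1]])
    n = H.shape[1]

    x = rng.integers(0, 2, n)                  # source word
    flips = (rng.random(n) < 0.1).astype(int)  # correlation modeled as a BSC(0.1)
    y = (x + flips) % 2                        # side information at the decoder
    s = H @ x % 2                              # encoder sends 3 syndrome bits, not 6 source bits

    # Decoder: brute-force search of the coset {v : Hv = s} for the word closest to y.
    coset = []
    for i in range(2 ** n):
        v = np.array([(i >> j) & 1 for j in range(n)])
        if np.array_equal(H @ v % 2, s):
            coset.append(v)
    x_hat = min(coset, key=lambda v: int(np.sum(v != y)))
    print(np.array_equal(x_hat, x))            # True unless too many bits flipped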
When the distortion of the reconstructed source is non-zero, we propose a practical WZC paradigm called Slepian-Wolf coded quantization (SWCQ), which combines SWC and nested lattice quantization. We point out an interesting analogy between SWCQ and entropy-coded quantization in classic source coding. Furthermore, a practical SWCQ scheme using 1-D nested lattice quantization and LDPC codes is implemented.
For GPC, since the actual design procedure relies on the precise setting of the problem, we investigate GPC design in the form of a digital watermarking problem, as digital watermarking is the precise dual of WZC. We then introduce an enhanced version of the well-known spread-spectrum watermarking technique. Two applications related to digital watermarking are presented.
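The spread-spectrum technique that the thesis enhances can be sketched in its basic additive form; the host model, embedding strength, and correlation detector below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)

    host = rng.normal(0, 10, 1024)            # host signal coefficients
    pattern = rng.choice([-1.0, 1.0], 1024)   # secret spreading pattern
    alpha = 0.8                               # embedding strength (illustrative)

    def embed(signal, bit):
        # Additive spread-spectrum embedding of one bit.
        return signal + alpha * (1.0 if bit else -1.0) * pattern

    def detect(received):
        # Correlation detector: sign of the correlation with the pattern.
        return float(received @ pattern) > 0

    marked = embed(host, True)
    print(detect(marked))                     # True with high probability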
|
40 |
Source-channel coding for wireless networks. Wernersson, Niklas (January 2006)
The aim of source coding is to represent information as accurately as possible using as few bits as possible, and in order to do so redundancy must be removed from the source. The aim of channel coding is in some sense the contrary, namely to introduce redundancy that can be exploited to protect the information when it is transmitted over a nonideal channel. Combining these two techniques leads to the area of joint source-channel coding, which in general makes it possible to achieve a better performance when designing a communication system than when source and channel codes are designed separately. In this thesis two particular areas in joint source-channel coding are studied: multiple description coding (MDC) and soft decoding. Two new MDC schemes are proposed and investigated. The first is based on sorting a frame of samples and transmitting, as side information/redundancy, an index that describes the resulting permutation. If some of the transmitted descriptors are lost during transmission, this side information (if received) can be used to estimate the lost descriptors based on the received ones. The second scheme uses permutation codes to produce different descriptions of a block of source data. These descriptions can be used jointly to estimate the original source data. Finally, the MDC method of multiple description coding using pairwise correlating transforms, as introduced by Wang et al., is also studied; a modification of the quantization in this method is proposed which yields a performance gain. A well-known result in joint source-channel coding is that the performance of a communication system can be improved by soft decoding of the channel output, at the cost of higher decoding complexity. An alternative is to quantize the soft information and store pre-calculated soft decision values in a lookup table. In this thesis we propose new methods for quantizing soft channel information, to be used in conjunction with soft-decision source decoding. The issue of how best to construct finite-bandwidth representations of soft information is also studied.
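A rough sketch of the first MDC idea: transmit, alongside the samples, the permutation that sorts the frame; a lost sample is then known to lie between its sorted neighbors and can be estimated. The frame size and the midpoint estimator below are assumptions for illustration; the thesis develops the actual scheme and its analysis.

    import numpy as np

    rng = np.random.default_rng(3)

    frame = rng.normal(0, 1, 8)               # one frame of source samples
    perm = np.argsort(frame)                  # side information: sorting permutation
    lost = 5                                  # index of a lost descriptor

    # The permutation tells the decoder the rank of the lost sample.
    rank = int(np.where(perm == lost)[0][0])
    sorted_known = np.sort(np.delete(frame, lost))
    lo = sorted_known[rank - 1] if rank > 0 else sorted_known[0]
    hi = sorted_known[rank] if rank < len(sorted_known) else sorted_known[-1]
    estimate = (lo + hi) / 2                  # midpoint of the feasible interval
    print(frame[lost], estimate)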
|