1

Layered Wyner-Ziv video coding for noisy channels

Xu, Qian 01 November 2005
The growing popularity of video sensor networks and video cellular phones has generated the need for low-complexity and power-efficient multimedia systems that can handle multiple video input and output streams. While standard video coding techniques fail to satisfy these requirements, distributed source coding is a promising technique for "uplink" applications. Wyner-Ziv coding refers to lossy source coding with side information at the decoder. Based on recent theoretical results on successive Wyner-Ziv coding, we propose in this thesis a practical layered Wyner-Ziv video codec using the DCT, a nested scalar quantizer (NSQ), and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information) for noiseless channels. The DCT is applied as an approximation to the conditional KLT, which makes the components of the transformed block conditionally independent given the side information. NSQ is a binning scheme that facilitates layered bit-plane coding of the bin indices while reducing the bit rate. LDPC code based Slepian-Wolf coding exploits the correlation between the quantized version of the source and the side information to achieve further compression. Unlike previous work, an attractive feature of our proposed system is that video encoding is done only once, but decoding is allowed at many lower bit rates without quality loss. For Wyner-Ziv coding over discrete noisy channels, we present a Wyner-Ziv video codec using IRA codes for Slepian-Wolf coding, based on the idea of two equivalent channels. For video streaming applications where the channel is packet based, we apply an unequal error protection scheme to the embedded Wyner-Ziv coded video stream to find the optimal source-channel coding trade-off for a target transmission rate over a packet erasure channel.
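The nested scalar quantization described above can be sketched in a few lines: the encoder keeps only the fine quantization index modulo a coset count, and the decoder picks, within the signalled coset, the value closest to the side information. The following is an illustrative sketch of that binning idea, not the thesis codec; the step size and coset count are arbitrary assumptions:

```python
import numpy as np

def nsq_encode(x, step=1.0, num_cosets=8):
    """Fine uniform quantization followed by binning: only the
    fine index modulo num_cosets (the coset index) is transmitted."""
    q = np.round(x / step).astype(int)
    return np.mod(q, num_cosets)

def nsq_decode(coset, y, step=1.0, num_cosets=8):
    """Within the signalled coset, pick the fine index closest to the
    side information y (minimum-distance decoding)."""
    centre = np.round(y / step).astype(int)
    # enumerate candidate fine indices around the side information
    cands = centre[:, None] + np.arange(-num_cosets, num_cosets + 1)[None, :]
    valid = np.mod(cands, num_cosets) == coset[:, None]
    dist = np.where(valid, np.abs(cands - (y / step)[:, None]), np.inf)
    q_hat = cands[np.arange(len(coset)), np.argmin(dist, axis=1)]
    return q_hat * step
```

Compression comes from sending log2(num_cosets) bits per sample instead of the full index; decoding fails only when source and side information drift apart by more than half the coset spacing.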
2

Layered Wyner-Ziv video coding: a new approach to video compression and delivery

Xu, Qian 15 May 2009
Following recent theoretical works on successive Wyner-Ziv coding, we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered Wyner-Ziv coding for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic Wyner-Ziv coding when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that Wyner-Ziv coding gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks. For scalable video transmission over the Internet and 3G wireless networks, we propose a system for receiver-driven layered multicast based on layered Wyner-Ziv video coding and digital fountain coding. Digital fountain codes are near-capacity erasure codes that are ideally suited for multicast applications because of their rateless property. By combining an error-resilient Wyner-Ziv video coder and rateless fountain codes, our system allows reliable multicast of high-quality video to an arbitrary number of heterogeneous receivers without the requirement of feedback channels.
Extending this work on separate source-channel coding, we consider distributed joint source-channel coding by using a single channel code for both video compression (via Slepian-Wolf coding) and packet loss protection. We choose Raptor codes, the best approximation to a digital fountain, and address in detail both encoder and decoder designs. Simulation results show that, compared to one separate design using Slepian-Wolf compression plus erasure protection and another based on FGS coding plus erasure protection, the proposed joint design provides better video quality for the same number of transmitted packets.
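The digital fountain codes relied on above can be illustrated with a toy LT-style encoder and peeling decoder. This is only a sketch under simplified assumptions; in particular, the uniform degree choice below stands in for the soliton degree distributions a real LT or Raptor code would use:

```python
import random

def lt_encode_symbol(blocks, rng):
    """One fountain symbol: XOR of a randomly chosen subset of the
    source blocks. A real LT code draws the degree from a (robust)
    soliton distribution; a uniform degree in {1, 2, 3} is a toy stand-in."""
    d = rng.randint(1, 3)
    idx = rng.sample(range(len(blocks)), d)
    val = 0
    for i in idx:
        val ^= blocks[i]
    return set(idx), val

def lt_decode(symbols, k):
    """Peeling decoder: repeatedly resolve degree-1 symbols and
    substitute each recovered block out of the remaining symbols."""
    pending = [[set(idx), val] for idx, val in symbols]
    known = {}
    changed = True
    while changed and len(known) < k:
        changed = False
        for sym in pending:
            idx, val = sym
            for i in [j for j in idx if j in known]:
                idx.discard(i)
                val ^= known[i]
            sym[1] = val
            if len(idx) == 1:
                i = idx.pop()
                if i not in known:
                    known[i] = val
                    changed = True
    return [known.get(i) for i in range(k)]
```

Ratelessness is the key property: the encoder can keep emitting symbols until enough arrive, with no feedback channel, which is exactly what the receiver-driven multicast scheme above relies on.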
3

Low-Complexity Compression Techniques for High Frame Rate Video

Yang, Duo January 2017
Recently, video has become one of the most important multimedia resources shared in our work and daily life. With the development of high frame rate video (HFV), the write speed from the high-speed camera array sensor to the mass data storage device has become the main constraint on HFV applications. In this thesis, low-complexity compression techniques are proposed for HFV acquisition and transmission. The core technique of the developed codec is the application of the Slepian-Wolf (SW) coding theorem to video compression. The light-duty encoder employs SW encoding, resulting in a low computational cost. Pixel values are transformed into bit sequences, and the bits on the same bit plane are assembled into 8 bit streams. For each bit plane, a statistical binary symmetric channel (BSC) is constructed to describe the dependency between the source image and the side-information (SI) image. Furthermore, an improved coding scheme exploits the spatial correlation between two consecutive bit planes, which reduces the source coding rates. In contrast to the encoder, the collaborative heavy-duty decoder shoulders the burden of achieving high reconstruction fidelity. Motion estimation and motion compensation employ the block-matching algorithm to predict the SI image, and the received syndrome sequence can then be SW decoded with the SI. To meet different compression goals, compression is separated into the original-resolution and downsampled cases. Compression at the original resolution is complete after SW decoding, whereas for compression at reduced resolution the SW decoded image must be upsampled by a state-of-the-art learning-based super-resolution (SR) technique, A+. Since some important image details are lost during resolution resizing, motion estimation and compensation are applied again to refine the upsampled image, improving the reconstruction PSNR.
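The bit-plane assembly described above, where pixels are split into 8 binary streams and a per-plane BSC models the dependency between source and side information, can be sketched as follows (an illustrative reconstruction, not the thesis code):

```python
import numpy as np

def to_bit_planes(img):
    """Split an 8-bit image into 8 binary bit-plane streams
    (first entry = most significant plane)."""
    img = np.asarray(img, dtype=np.uint8)
    return [(img >> b) & 1 for b in range(7, -1, -1)]

def from_bit_planes(planes):
    """Reassemble pixel values from the 8 bit planes."""
    img = np.zeros(planes[0].shape, dtype=int)
    for b, plane in zip(range(7, -1, -1), planes):
        img += plane.astype(int) << b
    return img.astype(np.uint8)

def bsc_crossover(src_plane, si_plane):
    """Empirical crossover probability of the per-plane BSC that models
    the dependency between source and side-information images."""
    return float(np.mean(src_plane != si_plane))
```

The estimated crossover probability per plane is what the Slepian-Wolf decoder would use to set the correlation model between syndrome-coded source bits and the predicted SI bits.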
Experimental results show that the proposed low-complexity compression techniques are effective in improving reconstruction fidelity and compression ratio. / Thesis / Master of Applied Science (MASc)
4

Multiterminal source coding: sum-rate loss, code designs, and applications to video sensor networks

Yang, Yang 15 May 2009
Driven by a host of emerging applications (e.g., sensor networks and wireless video), distributed source coding (i.e., Slepian-Wolf coding, Wyner-Ziv coding, and various other forms of multiterminal source coding) has recently become a very active research area. This dissertation focuses on the multiterminal (MT) source coding problem and consists of three parts. The first part studies the sum-rate loss of an important special case of quadratic Gaussian MT source coding, in which all sources are positively symmetric and all target distortions are equal. We first give the minimum sum-rate for joint encoding of Gaussian sources in the symmetric case, and then show that the supremum of the sum-rate loss due to distributed encoding in this case is (1/2) log2(5/4) ≈ 0.161 b/s when L = 2 and increases on the order of (L/2) log2 e b/s as the number of terminals L goes to infinity. The supremum sum-rate loss of 0.161 b/s in the symmetric case equals that in general quadratic Gaussian two-terminal source coding without the symmetry assumption. It is conjectured that this equality holds for any number of terminals. In the second part, we present two practical MT coding schemes under the framework of Slepian-Wolf coded quantization (SWCQ) for both direct and indirect MT problems. The first, asymmetric SWCQ, relies on quantization and Wyner-Ziv coding and is implemented via source splitting to achieve any point on the sum-rate bound. In the second, conceptually simpler scheme, symmetric SWCQ, the two quantized sources are compressed using symmetric Slepian-Wolf coding via a channel code partitioning technique that can achieve any point on the Slepian-Wolf sum-rate bound. Our practical designs employ trellis-coded quantization and turbo/LDPC codes for both asymmetric and symmetric Slepian-Wolf coding. Simulation results show a gap of only 0.139-0.194 bit per sample from the sum-rate bound for both direct and indirect MT coding problems.
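The two-terminal figure quoted above is easy to check numerically; the supremum sum-rate loss (1/2) log2(5/4) and the (L/2) log2 e growth term evaluate as:

```python
import math

# Supremum sum-rate loss for L = 2 terminals (bits per sample)
loss_two_terminal = 0.5 * math.log2(5 / 4)

# Growth term for large L: (L/2) * log2(e) b/s, shown here for L = 10
L = 10
growth_term = (L / 2) * math.log2(math.e)

print(f"{loss_two_terminal:.3f}")  # 0.161
print(f"{growth_term:.3f}")  # 7.213
```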
The third part applies the above two MT coding schemes to a practical source, namely stereo video sequences, to save sum rate relative to independent coding of the two sequences. Experiments with both schemes on stereo video sequences, using H.264, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients, give a slightly smaller sum rate than separate H.264 coding of both sequences at the same video quality.
5

Performance of MIMO and non-orthogonal transmission in lossy forward relay networks

He, J. (Jiguang) 23 October 2018
Abstract In the current LTE-Advanced system, decode-and-forward (DF) is leveraged for cooperative relaying, where erroneously decoded sequences are discarded at the relay, resulting in a waste of resources: even an erroneously decoded sequence can provide a certain amount of useful information about the source to the destination. We therefore develop a new relaying scheme, called lossy DF (also known as lossy forward, LF), in which the relay always forwards the decoded sequence to the destination. Thanks to this always-forward principle, LF relaying has been verified to outperform DF relaying in terms of outage probability, ε-outage achievable rate, frame error rate (FER), and communication coverage. Three representative network scenarios are studied in this thesis: the one-way multiple-input multiple-output (MIMO) relay network, the multiple access relay channel (MARC), and the general multi-source multi-relay network. We derive the outage probability of the one-way MIMO relay network under the assumption that an orthogonal space-time block code (OSTBC) is implemented at the transmitter side for each individual transmission. Interestingly, we find that the diversity order of the OSTBC based one-way MIMO relay network can be interpreted and formulated through the well-known max-flow min-cut theorem, which is widely utilized to calculate network capacity. For the MARC, non-orthogonal transmission is introduced to further improve the network throughput over its orthogonal counterpart. The region for lossless recovery of both sources is formulated by the theorem of the multiple access channel (MAC) with a helper, which combines the Slepian-Wolf rate region and the MAC capacity region. Since the region for lossless recovery is obtained via a sufficient condition, the derived outage probability can be regarded as a theoretical upper bound.
We also conduct a performance evaluation using different accumulator (ACC) aided turbo codes at the transmitter side, exclusive-or (XOR) based multi-user complete decoding at the relay, and iterative joint decoding (JD) at the destination. For the general multi-source multi-relay network, we focus on the end-to-end outage probability. The performance improvement of LF over DF is verified through theoretical analyses and numerical results in terms of outage probability.
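Outage probability, the central metric in this entry, is straightforward to estimate by Monte Carlo simulation. Below is a toy example for a two-hop DF link over Rayleigh fading; the channel model, rate, and SNR are illustrative assumptions, not the thesis setup, and note that LF, unlike this DF sketch, would not count a first-hop decoding failure as an outage:

```python
import numpy as np

def df_outage_mc(snr_db, rate, n=200_000, seed=1):
    """Monte Carlo outage probability of a two-hop decode-and-forward
    link: outage occurs when either hop's capacity falls below the rate.
    Unit-mean exponential power gains model Rayleigh fading."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    g1 = rng.exponential(1.0, n)            # source-relay power gain
    g2 = rng.exponential(1.0, n)            # relay-destination power gain
    c1 = np.log2(1 + g1 * snr)
    c2 = np.log2(1 + g2 * snr)
    return float(np.mean(np.minimum(c1, c2) < rate))

def df_outage_exact(snr_db, rate):
    """Closed form for the same model: P = 1 - exp(-2 (2^R - 1) / snr)."""
    snr = 10 ** (snr_db / 10)
    return 1 - np.exp(-2 * (2 ** rate - 1) / snr)
```

At 10 dB and R = 1 b/s/Hz the closed form gives about 0.18, and the simulation agrees to within Monte Carlo error.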
6

Decoding and lossy forwarding based multiple access relaying

Lu, P.-S. (Pen-Shun) 20 March 2015
Abstract The goal of this thesis is to provide a unified view of lossy forwarding, from theoretical analysis to practical scheme design, for decode-and-forward based multiple access relay channel (MARC) systems. To improve the performance of MARC with a relay subject to resource and/or time constraints, the erroneous estimates output by simple detection schemes at the relay are forwarded and exploited. A correlation is then found between two sequences: the network-coded sequence sent from the relay and the corresponding exclusive-OR-ed information sequence. Several joint network-channel coding (JNCC) techniques are provided in which this correlation is utilized to update the log-likelihood ratio sequences during the iterative decoding process at the destination. As a result, the bit error rate (BER) and frame error rate (FER) are improved compared with those of MARC with a selective DF strategy (SDF-MARC). The proposed MARC is referred to as erroneous-estimates-exploiting MARC (e-MARC). To investigate the FER performance of the e-MARC system, the outage probability for e-MARC with two source nodes is theoretically derived. We re-formulate the e-MARC system and identify its admissible rate region according to the Slepian-Wolf theorem with a helper. The outage probability is then obtained by integrating over the rate region with respect to the probability density functions of all the links' instantaneous signal-to-noise power ratios. It is found through simulations that, when one of the source nodes is far away from both the relay and the destination, e-MARC is superior to SDF-MARC in terms of outage performance. Furthermore, a joint adaptive network-channel coding (JANCC) technique is proposed to support e-MARC with more source nodes.
In JANCC, a vector is constructed at the destination to identify the indices of the incorrectly decoded source node(s) and is re-transmitted to the relay to request additional redundancy. Upon receiving the request, the relay performs network coding only over the estimates specified by the vector. Numerical results show that JANCC-aided e-MARC is superior to e-MARC in terms of FER and goodput efficiency. In addition, compared with iterative decoding performed at the relay in SDF-MARC, the use of differential detection with JANCC-aided e-MARC significantly reduces computational complexity and latency with only a small loss in FER.
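The XOR-based network coding at the heart of e-MARC can be sketched in a few lines: the relay forwards the bitwise XOR of its estimates of the two source sequences, and a destination that has already decoded one source recovers the other by XOR-ing again. When the relay's estimates contain errors (the "lossy" case), those errors carry through bit for bit, and that residual correlation is what the JNCC decoder exploits. A minimal illustration, not the thesis implementation:

```python
import numpy as np

def relay_network_code(s1_est, s2_est):
    """Relay: forward the bitwise XOR of the (possibly erroneous)
    estimates of the two source sequences."""
    return np.bitwise_xor(s1_est, s2_est)

def recover_other_source(xor_seq, s1_decoded):
    """Destination: with source 1 decoded, XOR-ing with the relayed
    sequence yields an estimate of source 2."""
    return np.bitwise_xor(xor_seq, s1_decoded)
```

If the relay's estimate of source 2 had e bit errors, the recovered sequence differs from source 2 in exactly those e positions, which is why the destination models the relayed sequence as the true XOR passed through a BSC.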
7

Implementation Of A Distributed Video Codec

Isik, Cem Vedat 01 February 2008
Current interframe video compression standards such as MPEG-4 and H.264 require a high-complexity encoder for predictive coding to exploit the similarities among successive video frames. This requirement is acceptable for cases where the video sequence to be transmitted is encoded once and decoded many times. However, some emerging applications such as video-based sensor networks, power-aware surveillance, and mobile video communication systems require computational complexity to be shifted from the encoder to the decoder. Distributed Video Coding (DVC) is a new coding paradigm, based on two information-theoretic results, the Slepian-Wolf and Wyner-Ziv theorems, which allows source statistics to be exploited at the decoder only. This architecture therefore enables very simple encoders to be used in video coding. Wyner-Ziv video coding is a particular case of DVC that deals with lossy source coding where side information is available at the decoder only. In this thesis, we implemented a DVC codec based on the DISCOVER (DIStributed COding for Video sERvices) project and carried out a detailed analysis of each block. Several algorithms have been implemented for each block, and the results are compared in terms of rate-distortion. The implemented architecture is intended to be used as a testbed for future studies.
8

Coding with side information

Cheng, Szeming 01 November 2005
Source coding and channel coding are two important problems in communications. Although side information exists in everyday scenarios, its effect is not taken into account in the conventional setups. In this thesis, we focus on the practical design of two interesting coding problems with side information: Wyner-Ziv coding (WZC, source coding with side information at the decoder) and Gel'fand-Pinsker coding (GPC, channel coding with side information at the encoder). For WZC, we split the design problem into two cases: when the distortion of the reconstructed source is zero and when it is not. We review that the first case, commonly called Slepian-Wolf coding (SWC), can be implemented using conventional channel coding. Then, we detail the SWC design using low-density parity-check (LDPC) codes. To facilitate SWC design, we justify a necessary requirement that the SWC performance should be independent of the input source. We show that a sufficient condition for this requirement is that the hypothetical channel between the source and the side information satisfies a symmetry condition dubbed dual symmetry. Furthermore, under this dual symmetry condition, the SWC design problem can simply be treated as an LDPC code design problem over the hypothetical channel. When the distortion of the reconstructed source is non-zero, we propose a practical WZC paradigm called Slepian-Wolf coded quantization (SWCQ), which combines SWC and nested lattice quantization. We point out an interesting analogy between SWCQ and entropy-coded quantization in classic source coding. Furthermore, a practical SWCQ scheme using 1-D nested lattice quantization and LDPC codes is implemented. For GPC, since the actual design procedure relies on the more precise setting of the problem, we choose to investigate GPC design in the form of a digital watermarking problem, as digital watermarking is the precise dual of WZC.
We then introduce an enhanced version of the well-known spread-spectrum watermarking technique. Two applications related to digital watermarking are presented.
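The observation that SWC "can be implemented using conventional channel coding" is usually realized with syndrome-based binning. Below is a minimal sketch with the (7,4) Hamming code standing in for the LDPC codes used in the thesis; the hypothetical channel here is assumed to flip at most one of seven bits:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def sw_encode(x):
    """Slepian-Wolf 'compression': send only the 3-bit syndrome of x."""
    return H.dot(x) % 2

def sw_decode(syndrome, y):
    """Find the word in the signalled coset closest to the side
    information y; works whenever x and y differ in at most one bit."""
    s_err = (H.dot(y) + syndrome) % 2        # syndrome of the difference x ^ y
    if not s_err.any():
        return y.copy()
    # columns of H enumerate the syndromes of single-bit errors
    pos = int(np.where((H == s_err[:, None]).all(axis=0))[0][0])
    x_hat = y.copy()
    x_hat[pos] ^= 1
    return x_hat
```

The encoder sends 3 syndrome bits instead of 7 source bits, and the decoder searches the signalled coset for the word closest to the side information. This is the same role the LDPC syndrome decoder plays at much larger block lengths.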
9

Codage de sources distribuées : Outils et Applications à la compression vidéo

Toto-Zarasoa, Velotiaray 29 November 2010
Distributed source coding is a technique for compressing several correlated sources without any cooperation between the encoders, and without rate loss provided the sources are decoded jointly. Building on this principle, distributed video coding exploits the correlation between successive frames of a video, simplifying the encoder as much as possible and leaving the decoder to exploit the correlation. Among the contributions of this thesis, the first part addresses asymmetric coding of binary sources whose distribution is non-uniform, and then coding of sources with hidden Markov states. We first show that, for both types of sources, exploiting the distribution at the decoder increases the compression rate. For the binary symmetric channel modeling the correlation between the sources, we propose a tool, based on the EM algorithm, to estimate its parameter. We show that this tool yields a fast estimate of the parameter while achieving accuracy close to the Cramer-Rao bound. In the second part, we develop tools for successfully decoding the sources studied above, using syndrome-based Turbo and LDPC codes together with the EM algorithm. This part was also the occasion to develop new tools for approaching the bounds of asymmetric and non-asymmetric coding. We also show that, for non-uniform sources, the roles of the correlated sources are not symmetric. Finally, we show that the proposed source models fit the bit-plane distributions of videos well, and we present results demonstrating the effectiveness of the developed tools. These tools noticeably improve the rate-distortion performance of a distributed video coder, although under certain additivity conditions on the correlation channel.
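The EM-based estimation of the binary symmetric channel parameter can be illustrated in a simplified setting. The sketch below estimates the crossover p (and the source bias q) from several noisy observations of each hidden bit; this is a toy stand-in for the thesis's estimator, which works inside the syndrome decoder rather than on repeated observations:

```python
import numpy as np

def em_bsc(Y, iters=50, p0=0.3, q0=0.5):
    """EM estimate of the BSC crossover p and source bias q = P(x = 1)
    from n noisy repeats of each hidden bit (one row of Y per bit)."""
    N, n = Y.shape
    k = Y.sum(axis=1)                        # ones observed per hidden bit
    p, q = p0, q0
    for _ in range(iters):
        # E-step: posterior probability that each hidden bit is 1
        l1 = q * (1 - p) ** k * p ** (n - k)
        l0 = (1 - q) * p ** k * (1 - p) ** (n - k)
        w = l1 / (l1 + l0)
        # M-step: expected bias and expected flip fraction
        q = w.mean()
        p = (w * (n - k) + (1 - w) * k).sum() / (N * n)
    return p, q
```

With enough samples the estimate lands close to the true parameter, consistent with the near-Cramer-Rao accuracy reported above.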
10

Joint Compression and Digital Watermarking: Information-Theoretic Study and Algorithms Development

Sun, Wei January 2006
In digital watermarking, a watermark is embedded into a covertext in such a way that the resulting watermarked signal is robust to certain distortions caused by either standard data processing in a friendly environment or malicious attacks in an unfriendly environment. The watermarked signal can then be used for different purposes ranging from copyright protection, data authentication, and fingerprinting to information hiding. In this thesis, digital watermarking is investigated from both an information-theoretic viewpoint and a numerical-computation viewpoint.

From the information-theoretic viewpoint, we first study a new digital watermarking scenario in which watermarks and covertexts are generated from a joint memoryless watermark and covertext source. The configuration of this scenario differs from that treated in existing digital watermarking works, where watermarks are assumed independent of covertexts. In the case of public watermarking, where the covertext is not accessible to the watermark decoder, a necessary and sufficient condition is determined under which the watermark can be fully recovered with high probability at the end of watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel. Moreover, by using similar techniques, a combined source coding and Gel'fand-Pinsker channel coding theorem is established, and an open problem proposed recently by Cox et al. is solved. Interestingly, from the necessary and sufficient condition we can show that, in light of the correlation between the watermark and covertext, watermarks can still be fully recovered with high probability even if the entropy of the watermark source is strictly above the standard public watermarking capacity.

We then extend the above watermarking scenario to a case of joint compression and watermarking, where the watermark and covertext are correlated and the watermarked signal has to be further compressed.
Given an additional constraint on the compression rate of the watermarked signals, a necessary and sufficient condition is again determined under which the watermark can be fully recovered with high probability at the end of public watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel.

The above two joint compression and watermarking models are further investigated under a less stringent environment where the reproduced watermark at the end of decoding is allowed to be within a certain distortion of the original watermark. Sufficient conditions are determined in both cases under which the original watermark can be reproduced with distortion less than a given distortion level after the watermarked signal is disturbed by a fixed memoryless attack channel and the covertext is not available to the watermark decoder.

Watermarking capacities and joint compression and watermarking rate regions are often characterized and/or presented as optimization problems in information-theoretic research. However, this does not mean that they can be calculated easily. In this thesis we first derive closed-form expressions for the watermarking capacities of private Laplacian watermarking systems with the magnitude-error distortion measure under a fixed additive Laplacian attack and a fixed arbitrary additive attack, respectively. Then, based on the idea of the Blahut-Arimoto algorithm for computing channel capacities and rate-distortion functions, two iterative algorithms are proposed for calculating private watermarking capacities and compression and watermarking rate regions of joint compression and private watermarking systems with finite alphabets. Finally, iterative algorithms are developed for calculating public watermarking capacities and compression and watermarking rate regions of joint compression and public watermarking systems with finite alphabets, based on the Blahut-Arimoto algorithm and Shannon's strategy.
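The Blahut-Arimoto algorithm that the iterative capacity computations above build on alternates between updating the input distribution and the induced output distribution. Below is a minimal version for a plain discrete memoryless channel; the watermarking variants in the thesis add constraints on top of this basic iteration:

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Blahut-Arimoto iteration for the capacity (in bits) of a discrete
    memoryless channel with transition matrix W[x, y] = P(y | x)."""
    m = W.shape[0]
    p = np.full(m, 1.0 / m)                     # current input distribution

    def divergences(p):
        q = p @ W                               # induced output distribution
        ratio = np.where(W > 0, W / np.where(q > 0, q, 1.0), 1.0)
        # D[x] = KL divergence between W[x, :] and q (in nats)
        return (W * np.log(ratio)).sum(axis=1)

    for _ in range(iters):
        D = divergences(p)
        p = p * np.exp(D)                       # multiplicative update
        p /= p.sum()
    return float((p * divergences(p)).sum() / np.log(2))
```

For a binary symmetric channel with crossover 0.1 this converges to 1 - H(0.1) ≈ 0.531 bits, matching the closed form, and for an erasure channel with erasure probability 0.25 it gives 0.75 bits.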
