1

Joint Compression and Digital Watermarking: Information-Theoretic Study and Algorithms Development

Sun, Wei. January 2006.
In digital watermarking, a watermark is embedded into a covertext in such a way that the resulting watermarked signal is robust to certain distortions caused either by standard data processing in a friendly environment or by malicious attacks in an unfriendly environment. The watermarked signal can then be used for purposes ranging from copyright protection, data authentication, and fingerprinting to information hiding. In this thesis, digital watermarking is investigated from both an information-theoretic viewpoint and a numerical-computation viewpoint.

From the information-theoretic viewpoint, we first study a new digital watermarking scenario in which watermarks and covertexts are generated from a joint memoryless watermark-and-covertext source. This configuration differs from that treated in existing digital watermarking work, where watermarks are assumed independent of covertexts. In the case of public watermarking, where the covertext is not accessible to the watermark decoder, a necessary and sufficient condition is determined under which the watermark can be fully recovered with high probability at the end of watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel. Moreover, using similar techniques, a combined source coding and Gel'fand-Pinsker channel coding theorem is established, and an open problem recently posed by Cox et al. is solved. Interestingly, the necessary and sufficient condition shows that, owing to the correlation between the watermark and covertext, the watermark can still be fully recovered with high probability even if the entropy of the watermark source is strictly above the standard public watermarking capacity.

We then extend the above watermarking scenario to joint compression and watermarking, where the watermark and covertext are correlated and the watermarked signal must be further compressed. Given an additional constraint on the compression rate of the watermarked signal, a necessary and sufficient condition is again determined under which the watermark can be fully recovered with high probability at the end of public watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel.

The above two joint compression and watermarking models are further investigated in a less stringent setting where the watermark reproduced at the end of decoding is allowed to lie within a certain distortion of the original watermark. Sufficient conditions are determined in both cases under which the original watermark can be reproduced with distortion below a given level after the watermarked signal is disturbed by a fixed memoryless attack channel and the covertext is not available to the watermark decoder.

Watermarking capacities and joint compression-and-watermarking rate regions are often characterized and/or presented as optimization problems in information-theoretic research; this does not mean they can be calculated easily. In this thesis we first derive closed forms for the watermarking capacities of private Laplacian watermarking systems with the magnitude-error distortion measure under a fixed additive Laplacian attack and under a fixed arbitrary additive attack, respectively. Then, based on the idea of the Blahut-Arimoto algorithm for computing channel capacities and rate-distortion functions, two iterative algorithms are proposed for calculating the private watermarking capacities and the compression-and-watermarking rate regions of joint compression and private watermarking systems with finite alphabets. Finally, iterative algorithms for calculating the public watermarking capacities and the compression-and-watermarking rate regions of joint compression and public watermarking systems with finite alphabets are developed based on the Blahut-Arimoto algorithm and Shannon's strategy.
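The iterative capacity algorithms described above build on the Blahut-Arimoto iteration. As a point of reference only, here is a minimal sketch of the classical Blahut-Arimoto algorithm for the capacity of a discrete memoryless channel — the plain channel version, not the thesis's watermarking variant; the function name and demo channel are illustrative:

```python
import numpy as np

def blahut_arimoto(W, tol=1e-9, max_iter=10000):
    """Capacity (bits per channel use) of a discrete memoryless channel.

    W[x, y] = P(y | x). Alternates between the input distribution p
    and the posterior q(x | y) until p converges.
    """
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)           # uniform initial input distribution
    for _ in range(max_iter):
        q = p[:, None] * W                  # joint P(x, y)
        q /= q.sum(axis=0, keepdims=True)   # posterior P(x | y)
        # update: p_new(x) proportional to exp( sum_y W(y|x) log q(x|y) )
        log_r = np.sum(W * np.log(q + 1e-300), axis=1)
        r = np.exp(log_r - log_r.max())
        p_new = r / r.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    # capacity = mutual information I(X; Y) at the optimizing p
    q = p[:, None] * W                      # joint P(x, y)
    py = q.sum(axis=0)                      # output marginal P(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(q > 0, q / (p[:, None] * py[None, :]), 1.0)
    return np.sum(q * np.log2(ratio))

# Binary symmetric channel, crossover 0.1: capacity = 1 - H2(0.1) ~ 0.531
eps = 0.1
W = np.array([[1 - eps, eps], [eps, 1 - eps]])
print(round(blahut_arimoto(W), 3))
```

The thesis's algorithms extend this alternating-maximization idea to the watermarking and joint compression-and-watermarking optimization problems; the sketch shows only the underlying channel-capacity iteration.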
2

Progressive compression and joint watermarking of surface meshes with color attributes

Lee, Ho. 21 June 2011.
The use of 3D models, represented as meshes, is growing steadily in many applications. For efficient transmission and for adaptation to the heterogeneity of client resources, progressive compression techniques are generally used; to protect copyright during transmission, watermarking techniques are also employed. In this thesis, we first propose two progressive compression methods, for meshes with and without color information, and then present a joint progressive compression and watermarking system.

In the first part, we propose a method for optimizing the rate-distortion trade-off for meshes without color attributes. During encoding, we adapt the quantization precision to the number of elements and the geometric complexity of each level of detail. This adaptation can be performed optimally, by measuring the distance to the original mesh, or quasi-optimally, by using a theoretical model for fast optimization. The results show that our method is competitive with state-of-the-art methods.

In the second part, we focus on optimizing the rate-distortion trade-off for meshes with color information attached to the vertices. After proposing two compression methods for this type of mesh, we present a rate-distortion optimization method based on adapting the quantization precision of both geometry and color for each intermediate mesh. This adaptation can be performed rapidly with a theoretical model that evaluates the number of quantization bits needed for each intermediate mesh. A metric is also proposed to preserve feature elements during simplification.

Finally, we propose a joint progressive compression and watermarking scheme. To protect all levels of detail, we insert the watermark at each step of the encoding process: at each simplification iteration, we separate the mesh vertices into two sets and compute a histogram of the vertex-norm distribution for each set; we then divide these histograms into several bins and embed one bit by shifting bins. This watermarking technique is reversible and restores the original mesh exactly by removing the distortion induced by watermark insertion. We also propose a new geometry-prediction method to reduce the overhead caused by the watermark. Experimental results show that our method is robust to various geometric attacks while maintaining a good compression ratio.
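The bin-shifting step described above belongs to the family of histogram-shifting reversible watermarking. The sketch below shows classical histogram shifting on integer data — not the thesis's exact vertex-norm scheme; the function names and demo values are invented for illustration:

```python
import numpy as np

def hs_embed(values, bits, peak=None):
    """Reversible histogram-shift embedding on non-negative integers.

    Shifts every histogram bin above `peak` up by one to open an empty
    bin at peak+1, then hides one bit in each peak-valued sample
    (0 -> stays at peak, 1 -> moves to peak+1). Capacity = count(peak).
    """
    values = np.asarray(values)
    if peak is None:
        peak = int(np.bincount(values).argmax())          # densest bin
    out = values + (values > peak).astype(values.dtype)   # open the gap
    carriers = np.flatnonzero(values == peak)
    assert len(bits) <= len(carriers), "payload exceeds capacity"
    out[carriers[: len(bits)]] += np.asarray(bits, dtype=values.dtype)
    return out, peak

def hs_extract(marked, peak, n_bits):
    """Recover the payload and restore the original values exactly."""
    marked = np.asarray(marked)
    carriers = np.flatnonzero((marked == peak) | (marked == peak + 1))
    bits = (marked[carriers[:n_bits]] == peak + 1).astype(int).tolist()
    restored = marked.copy()
    restored[marked == peak + 1] = peak                   # undo embedded 1-bits
    restored -= (restored > peak).astype(marked.dtype)    # undo the shift
    return bits, restored

norms = np.array([3, 5, 5, 2, 5, 7, 5, 1])   # e.g. quantized vertex norms
marked, peak = hs_embed(norms, [1, 0, 1, 1])
bits, restored = hs_extract(marked, peak, 4)
print(bits, np.array_equal(restored, norms))  # [1, 0, 1, 1] True
```

Reversibility comes from the empty bin: every shifted sample can be shifted back and every embedded 1-bit mapped back to the peak, so the cover data is recovered bit-exactly, which is the property the thesis relies on to restore the original mesh after extraction.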
