1

Compression and Classification of Imagery

Tabesh, Ali January 2006 (has links)
Problems at the intersection of compression and statistical inference recur frequently because signal and image compression and classification algorithms are used together in many applications. This dissertation addresses two such problems: statistical inference on compressed data, and rate allocation for joint compression and classification.

For the first problem, features of the JPEG2000 standard make it possible to develop computationally efficient inference algorithms for imagery compressed with this standard. We propose using the information content (IC) of wavelet subbands, defined as the number of bytes the JPEG2000 encoder spends to compress each subband, for content analysis. Applying statistical learning frameworks for detection and classification, we present experimental results for compressed-domain texture image classification and cut detection in video. Our results indicate that reasonable performance can be achieved while saving computational and bandwidth resources. IC features can also be used for preliminary analysis in the compressed domain to identify candidates for further analysis in the decompressed domain.

For the second problem, in many applications of image compression the compressed image is presented both to human observers and to statistical decision-making systems. In such applications, the fidelity criterion with respect to which the image is compressed must strike an appropriate compromise between the (possibly conflicting) quality criteria of the human and machine observers. We present tractable distortion measures, based on the Bhattacharyya distance (BD) and a new upper bound on the probability of error for quantized data, that admit closed-form expressions for rate allocation to image subbands, and we show their efficacy in maintaining the aforementioned balance between compression and classification. The new bound offers two advantages over the BD: it yields closed-form rate-allocation solutions for problems involving correlated sources and for problems with more than two classes.
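A minimal sketch of the IC idea, under stated assumptions: no JPEG2000 codec is invoked here, so a generic wavelet decomposition (PyWavelets) plus zlib stand in for the JPEG2000 quantizer and tier-1 coder, and the per-subband compressed byte count plays the role of the IC feature. The quantization step and the training arrays in the usage comment are hypothetical.

```python
import zlib
import numpy as np
import pywt
from sklearn.svm import SVC

def ic_features(image, wavelet="db4", levels=3):
    """Bytes spent compressing each wavelet subband of a 2-D grayscale image."""
    coeffs = pywt.wavedec2(image.astype(np.float32), wavelet, level=levels)
    subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    feats = []
    for sb in subbands:
        # coarse 8-bit quantization before entropy coding, standing in for the
        # JPEG2000 quantizer; the compressed byte count is the IC feature
        q = np.clip(np.round(sb / 8.0), -128, 127).astype(np.int8)
        feats.append(len(zlib.compress(q.tobytes())))
    return np.array(feats, dtype=float)

# Hypothetical usage for compressed-domain texture classification:
# X = np.stack([ic_features(img) for img in training_images])
# clf = SVC(kernel="rbf").fit(X, training_labels)
```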
2

Distributed Reception in the Presence of Gaussian Interference

January 2019 (has links)
An analysis is presented of a network of distributed receivers encumbered by strong in-band interference. The structure of the information present across such receivers, and how they might collaborate to recover a signal of interest, is studied. Unstructured (random coding) and structured (lattice coding) strategies are investigated for an adaptable system model, and asymptotic performance results for these strategies, along with algorithms to compute them, are developed. A jointly-compressed lattice code with a proper configuration performs best of all the strategies investigated. (Doctoral Dissertation, Electrical Engineering, 2019)
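As a heavily hedged illustration only, the toy sketch below shows the basic quantize-then-reduce-modulo-a-coarse-lattice step on the scaled integer lattice, which is the elementary operation behind compressed lattice-code descriptions; the dissertation's codes use more structured, jointly configured lattices, and `fine_step` and `coarse_step` here are made-up parameters.

```python
import numpy as np

def mod_lattice_quantize(y, fine_step=0.1, coarse_step=4.0):
    """Toy compressed-lattice description of an observation `y`: quantize to the
    fine scaled-integer lattice, then keep only the coset modulo a coarse lattice,
    so a receiver forwards a low-rate description rather than the raw observation."""
    q = fine_step * np.round(np.asarray(y) / fine_step)   # nearest fine-lattice point
    return np.mod(q, coarse_step)                          # coset index actually forwarded
```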
3

Joint Compression and Digital Watermarking: Information-Theoretic Study and Algorithms Development

Sun, Wei January 2006 (has links)
In digital watermarking, a watermark is embedded into a covertext in such a way that the resulting watermarked signal is robust to certain distortions caused either by standard data processing in a friendly environment or by malicious attacks in an unfriendly environment. The watermarked signal can then be used for purposes ranging from copyright protection, data authentication, and fingerprinting to information hiding. In this thesis, digital watermarking is investigated from both an information-theoretic viewpoint and a numerical-computation viewpoint.

From the information-theoretic viewpoint, we first study a new digital watermarking scenario in which watermarks and covertexts are generated from a joint memoryless watermark-and-covertext source. This configuration differs from that treated in existing digital watermarking work, where watermarks are assumed independent of covertexts. In the case of public watermarking, where the covertext is not accessible to the watermark decoder, a necessary and sufficient condition is determined under which the watermark can be fully recovered with high probability at the end of watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel. Moreover, using similar techniques, a combined source coding and Gel'fand-Pinsker channel coding theorem is established, and an open problem recently posed by Cox et al. is solved. Interestingly, the necessary and sufficient condition shows that, owing to the correlation between the watermark and covertext, the watermark can still be fully recovered with high probability even if the entropy of the watermark source is strictly above the standard public watermarking capacity.

We then extend the above watermarking scenario to joint compression and watermarking, where the watermark and covertext are correlated and the watermarked signal must be further compressed. Given an additional constraint on the compression rate of the watermarked signal, a necessary and sufficient condition is again determined under which the watermark can be fully recovered with high probability at the end of public watermark decoding after the watermarked signal is disturbed by a fixed memoryless attack channel.

The above two joint compression and watermarking models are further investigated under a less stringent requirement, in which the watermark reproduced at the end of decoding is allowed to lie within a certain distortion of the original watermark. Sufficient conditions are determined in both cases under which the original watermark can be reproduced with distortion below a given level after the watermarked signal is disturbed by a fixed memoryless attack channel and the covertext is not available to the watermark decoder.

Watermarking capacities and joint compression and watermarking rate regions are often characterized and/or presented as optimization problems in information-theoretic research. However, this does not mean they can be computed easily. In this thesis we first derive closed-form expressions for the watermarking capacities of private Laplacian watermarking systems with the magnitude-error distortion measure under a fixed additive Laplacian attack and a fixed arbitrary additive attack, respectively. Then, based on the Blahut-Arimoto algorithm for computing channel capacities and rate-distortion functions, two iterative algorithms are proposed for calculating the private watermarking capacities and the compression-and-watermarking rate regions of joint compression and private watermarking systems with finite alphabets. Finally, iterative algorithms are developed for calculating the public watermarking capacities and the compression-and-watermarking rate regions of joint compression and public watermarking systems with finite alphabets, based on the Blahut-Arimoto algorithm and Shannon's strategy.
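Since the thesis builds on the Blahut-Arimoto algorithm, a minimal sketch of the classical channel-capacity version (not the thesis's watermarking-specific extensions) may help fix ideas. It assumes NumPy and a finite channel given as a row-stochastic transition matrix.

```python
import numpy as np

def blahut_arimoto(P, tol=1e-10, max_iter=10_000):
    """Capacity (bits per channel use) of a discrete memoryless channel.

    P[x, y] = P(y | x); every row must sum to 1.
    Returns (capacity, capacity-achieving input distribution).
    """
    n_x, _ = P.shape
    r = np.full(n_x, 1.0 / n_x)                # input distribution, start uniform
    for _ in range(max_iter):
        q = r[:, None] * P                     # r(x) P(y|x)
        q /= q.sum(axis=0, keepdims=True)      # posterior q(x | y)
        # r(x) <- exp( sum_y P(y|x) log q(x|y) ), normalized; 0*log(0) treated as 0
        log_q = np.log(q, out=np.zeros_like(q), where=(P > 0) & (q > 0))
        r_new = np.exp((P * log_q).sum(axis=1))
        r_new /= r_new.sum()
        converged = np.abs(r_new - r).max() < tol
        r = r_new
        if converged:
            break
    # C = sum_{x,y} r(x) P(y|x) log2( q(x|y) / r(x) )
    q = r[:, None] * P
    q /= q.sum(axis=0, keepdims=True)
    rx = np.broadcast_to(r[:, None], P.shape)
    mask = (P > 0) & (rx > 0)
    capacity = float(np.sum(rx[mask] * P[mask] * np.log2(q[mask] / rx[mask])))
    return capacity, r
```

As a sanity check, `blahut_arimoto(np.array([[0.9, 0.1], [0.1, 0.9]]))` gives roughly 0.531 bits for a binary symmetric channel with crossover probability 0.1, matching 1 − H(0.1).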
4

Compression progressive et tatouage conjoint de maillages surfaciques avec attributs de couleur / Progressive compression and joint compression and watermarking of surface mesh with color attributes

Lee, Ho 21 June 2011 (has links)
The use of 3D models, represented as meshes, is constantly growing in many applications. For efficient transmission and for adaptation to the heterogeneous resources of client devices, progressive compression techniques are generally used. To protect the copyright of these models during transmission, watermarking techniques are also employed. In this thesis, we first propose two progressive compression methods, for meshes with and without color information, and we then present a joint progressive compression and watermarking system.

In the first part, we propose a method for optimizing the rate-distortion trade-off for meshes without color attributes. During encoding, we adapt the quantization precision to the number of elements and to the geometric complexity of each level of detail. This adaptation can be performed optimally, by measuring the distance to the original mesh, or near-optimally, by using a theoretical model for fast optimization. The results show that our method is competitive with state-of-the-art methods.

In the second part, we focus on optimizing the rate-distortion trade-off for meshes with color information attached to the vertices. After proposing two compression methods for this type of mesh, we present a rate-distortion optimization method based on adapting the quantization precision of both geometry and color for each intermediate mesh. This adaptation can be performed rapidly using a theoretical model that evaluates the number of quantization bits required for each intermediate mesh. A metric is also proposed to preserve feature elements during simplification.

Finally, we propose a joint progressive compression and watermarking scheme. To protect all levels of detail, we insert the watermark at each step of the encoding process. More precisely, at each simplification iteration, we separate the mesh vertices into two sets and compute a histogram of vertex norms for each set. We then divide these histograms into several bins and modify them by shifting bins to insert one bit. This watermarking technique is reversible: it restores the original mesh exactly by removing the distortion introduced by the watermark insertion. We also propose a new geometry prediction method to reduce the overhead caused by the watermark insertion. Experimental results show that our method is robust to various geometric attacks while maintaining a good compression ratio.
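As a loose illustration only (the thesis operates per simplification level, on two vertex sets, with its own bin layout and a prediction scheme not reproduced here), the sketch below shows a generic reversible histogram-shifting embedding on quantized vertex norms. The function names, the quantization `step`, and the peak-bin side information are made-up details, and reversibility holds for the quantized norms.

```python
import numpy as np

def embed(norms, bits, step=1e-4):
    """Embed a bit string into vertex norms by histogram shifting.

    `norms` are the distances of the mesh vertices to the mesh centroid. One bit is
    carried by each vertex whose quantized norm falls in the most populated (peak) bin.
    """
    q = np.round(np.asarray(norms) / step).astype(np.int64)
    peak = int(np.bincount(q - q.min()).argmax()) + int(q.min())  # most populated bin
    carriers = np.flatnonzero(q == peak)                          # one bit per carrier
    if len(bits) > len(carriers):
        raise ValueError("payload exceeds the capacity of the peak bin")
    w = q.copy()
    w[q > peak] += 1                      # shift upper bins by one to vacate bin peak+1
    for i, b in enumerate(bits):
        if b:
            w[carriers[i]] = peak + 1     # bit 1 -> move the carrier into the vacated bin
    return w * step, peak                 # watermarked norms + side info for extraction

def extract(w_norms, peak, n_bits, step=1e-4):
    """Recover the payload and the original (quantized) norms."""
    w = np.round(np.asarray(w_norms) / step).astype(np.int64)
    carriers = np.flatnonzero((w == peak) | (w == peak + 1))[:n_bits]
    bits = (w[carriers] == peak + 1).astype(int)
    q = w.copy()
    q[w == peak + 1] = peak               # undo the embedded 1-bits
    q[w > peak + 1] -= 1                  # undo the histogram shift
    return bits, q * step
```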
