121. Video Coding Based on the Kantorovich Distance / Video Kodning Baserat på Kantorovich Avstånd. Östman, Martin (January 2004)
In this Master's thesis, a model of a video coding system that uses the transportation plan obtained from the calculation of the Kantorovich distance is developed. The coder uses the transportation plan instead of the differential image and sends it through the usual blocks of transformation, quantization and coding.

The Kantorovich distance is a relatively little-known metric used in optimization theory but also applicable to images. It can be defined as the cheapest way to transport the mass of one image into another, where the cost is determined by the distance function chosen to measure distance between pixels. The transportation plan is a set of finitely many five-dimensional vectors that specify exactly how mass should be moved from a transmitting pixel to a receiving pixel in order to achieve the Kantorovich distance between the images. A vector in the transportation plan is called an arc.

The original transportation plan was transformed into a new set of four-dimensional vectors called the modified difference plan. This set replaces the transmitting pixel and the receiving pixel with the distance from the transmitting pixel of the previous arc and the relative distance between the receiving pixel and the transmitting pixel. Arcs whose receiving pixel coincides with their transmitting pixel are redundant and were removed. The coder completed an eleven-frame sequence of 128×128 pixels in eight to ten hours.
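The transportation-plan formulation described in this abstract can be solved directly as a linear program. The sketch below (a minimal illustration assuming SciPy, not the thesis's coder) computes the Kantorovich distance and the arc list for two tiny equal-mass frames; pixels are given as flattened indices rather than the five-dimensional (x1, y1, x2, y2, mass) arcs above, and this brute-force LP would not scale to 128×128 frames.

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich_distance(img_a, img_b):
    """Kantorovich (earth mover's) distance between two equal-mass
    grayscale images, solved as a linear transportation problem.
    Returns the optimal cost and the plan as arcs (src, dst, mass),
    with pixels given as flattened indices."""
    a = img_a.ravel().astype(float); a /= a.sum()
    b = img_b.ravel().astype(float); b /= b.sum()
    h, w = img_a.shape
    coords = np.array([(y, x) for y in range(h) for x in range(w)], float)
    # Ground cost: Euclidean distance between pixel positions.
    cost = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    n = h * w
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1   # mass leaving pixel i equals a[i]
        A_eq[n + i, i::n] = 1            # mass arriving at pixel i equals b[i]
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    plan = res.x.reshape(n, n)
    arcs = [(i, j, m) for (i, j), m in np.ndenumerate(plan) if m > 1e-12]
    return res.fun, arcs

# Toy 4x4 "frames": one bright pixel shifted one step to the right.
f0 = np.zeros((4, 4)); f0[1, 1] = 1.0
f1 = np.zeros((4, 4)); f1[1, 2] = 1.0
dist, arcs = kantorovich_distance(f0, f1)
print(dist, arcs)   # 1.0 [(5, 6, 1.0)] -- one unit of mass moved one pixel
```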
122. Adaptive Fractal and Wavelet Image Denoising. Ghazel, Mohsen (January 2004)
The need for image enhancement and restoration is encountered in many practical applications. For instance, distortion due to additive white Gaussian noise (AWGN) can be caused by poor-quality image acquisition, images observed in a noisy environment, or noise inherent in communication channels. This thesis investigates image denoising. After reviewing standard image denoising methods as applied in the spatial, frequency and wavelet domains of the noisy image, the thesis develops and experiments with new image denoising methods based on fractal and wavelet transforms. In particular, three new image denoising methods are proposed: context-based wavelet thresholding, predictive fractal image denoising, and fractal-wavelet image denoising. The proposed context-based thresholding strategy adopts localized hard and soft thresholding operators that take into consideration the content of the immediate neighborhood of a wavelet coefficient before thresholding it. The two fractal-based predictive schemes rest on a simple yet effective algorithm for estimating the fractal code of the original noise-free image from the noisy one; from this predicted code, one can then reconstruct a fractally denoised estimate of the original image. This fractal-based denoising algorithm can be applied in the pixel and wavelet domains of the noisy image using standard fractal and fractal-wavelet schemes, respectively. Furthermore, cycle spinning was implemented to enhance the quality of the fractally denoised estimates. Experimental results show that the proposed methods are competitive with, and sometimes compare favorably to, the existing image denoising techniques reviewed in the thesis. This work broadens the application scope of fractal transforms, which have mainly been used for image coding and compression.
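As an illustration of neighborhood-adaptive ("context-based") wavelet thresholding, the sketch below shrinks each detail coefficient with a BayesShrink-style threshold computed from the local energy of its 3×3 context. It assumes the PyWavelets package and is a generic sketch of the idea, not the thesis's exact thresholding rule; the function name and window size are illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets

def context_soft_threshold(noisy, wavelet="db4", level=3, sigma=None, win=3):
    """Neighborhood-adaptive soft thresholding sketch: each detail
    coefficient is shrunk with a threshold scaled by the local energy
    of its win x win context."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    if sigma is None:
        # Robust noise estimate from the finest diagonal subband.
        sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
    out = [coeffs[0]]  # keep the approximation subband untouched
    for (cH, cV, cD) in coeffs[1:]:
        shrunk = []
        for c in (cH, cV, cD):
            # Local mean energy over a win x win neighborhood.
            pad = win // 2
            cp = np.pad(c, pad, mode="reflect")
            energy = np.zeros_like(c)
            for dy in range(win):
                for dx in range(win):
                    energy += cp[dy:dy + c.shape[0], dx:dx + c.shape[1]] ** 2
            energy /= win * win
            # BayesShrink-style threshold per coefficient context.
            sig_x = np.sqrt(np.maximum(energy - sigma ** 2, 1e-12))
            t = sigma ** 2 / sig_x
            shrunk.append(np.sign(c) * np.maximum(np.abs(c) - t, 0.0))
        out.append(tuple(shrunk))
    rec = pywt.waverec2(out, wavelet)
    return rec[:noisy.shape[0], :noisy.shape[1]]

# Example: denoise a synthetic blocky image corrupted by AWGN.
img = np.kron(np.eye(8), np.ones((16, 16))) * 100.0
den = context_soft_threshold(img + np.random.normal(0, 10, img.shape))
```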
123. Progressive Lossless Image Compression Using Image Decomposition and Context Quantization. Zha, Hui (23 January 2007)
Lossless image compression has many applications, for example in medical imaging, space photography, and the film industry. In this thesis, we propose an efficient lossless image compression scheme for both binary and gray-scale images. The scheme first decomposes an image into a set of progressively refined binary sequences and then encodes these sequences with a context-based adaptive arithmetic coder. To deal with the context-dilution problem in arithmetic coding, we propose a Lloyd-like iterative algorithm to quantize contexts. Fixing the set of input contexts and the number of quantized contexts, the context quantization algorithm iteratively finds the context mapping that minimizes the compression rate. Experimental results show that by combining image decomposition and context quantization, the scheme achieves lossless compression performance competitive with the JBIG algorithm for binary images and the CALIC algorithm for gray-scale images. In contrast to CALIC, our scheme additionally allows progressive transmission of gray-scale images, which is very appealing in applications such as web browsing.
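The Lloyd-like iteration mentioned above can be sketched as alternating "centroid" and "reassignment" steps over empirical bit counts: each group gets the Bernoulli distribution of its members, and each context moves to the group whose distribution codes it most cheaply. The Python below is an illustration of that idea, not the author's implementation; the function name and the Laplace smoothing are assumptions.

```python
import numpy as np

def quantize_contexts(counts, k, iters=50, seed=0):
    """Lloyd-like context quantization sketch: merge binary contexts into
    k groups so that the total empirical code length (in bits) of coding
    each context's 0/1 counts with its group's distribution is minimized.
    `counts` is an (n_contexts, 2) array of (zeros, ones) per context."""
    rng = np.random.default_rng(seed)
    n = len(counts)
    assign = rng.integers(0, k, size=n)          # random initial mapping
    for _ in range(iters):
        # Centroid step: each group's Bernoulli parameter (Laplace-smoothed).
        totals = np.zeros((k, 2))
        for g in range(k):
            totals[g] = counts[assign == g].sum(axis=0) + 1
        p1 = totals[:, 1] / totals.sum(axis=1)   # P(bit = 1 | group)
        # Assignment step: move each context to its cheapest group.
        bits = -(counts[:, [0]] * np.log2(1 - p1) +
                 counts[:, [1]] * np.log2(p1))   # (n, k) code lengths
        new_assign = bits.argmin(axis=1)
        if np.array_equal(new_assign, assign):
            break
        assign = new_assign
    return assign, p1

# Example: 256 causal contexts with random bit counts, merged into 8 groups;
# the quantized index would then select the adaptive arithmetic coder's model.
counts = np.random.default_rng(1).integers(0, 200, (256, 2))
mapping, p1 = quantize_contexts(counts, k=8)
```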
127. Efficient Access Methods on the Hilbert Curve. Wu, Chen-Chang (18 June 2012)
Designing multi-dimensional access methods is harder than the one-dimensional case because no total ordering preserves spatial locality exactly; one approach is to look for a total order that preserves spatial proximity at least to some extent. A space-filling curve is a continuous path that passes through every point of a space exactly once, giving a one-to-one correspondence between point coordinates and the points' 1D sequence numbers along the curve. The Hilbert curve is a well-known space-filling curve with strong locality-preserving properties; it is the best space-filling curve in terms of minimizing the number of clusters, and it has therefore been used extensively to maintain spatial locality of multidimensional data in a wide variety of applications.

A window query is an important operation in spatial (image) databases. Given a Hilbert curve, a window query reports the corresponding Hilbert orders without decoding every point inside the window. Chung et al. proposed an algorithm for decomposing a window into the corresponding Hilbert orders; however, the Hilbert curve requires the region to be of size 2^k × 2^k, where k ∈ N. The intuitive method, such as Chung et al.'s algorithm, is to use Hilbert curves directly in the decomposed areas and then connect them; it must additionally generate a sequence of the scanned quadrants before encoding or decoding the Hilbert order of a pixel, and must scan this sequence once per pixel.

In this dissertation, for window queries on a Hilbert curve, we propose an efficient algorithm named Quad-Splitting for decomposing a window into the corresponding Hilbert orders without the individual sorting and merging steps required by Chung et al.'s algorithm; our experimental results show that Quad-Splitting outperforms it. For generating the Hilbert curve of an arbitrary-sized image, we propose an approximately-even-partition approach that produces a pseudo Hilbert curve; our experiments show that this pseudo Hilbert curve preserves a strong locality property similar to the Hilbert curve's. Finally, for coding the Hilbert curve of an arbitrary-sized image, we propose encoding and decoding algorithms that our experiments show outperform Chung et al.'s.
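For intuition about the encoding step on a 2^k × 2^k grid, the textbook bit-manipulation conversion from pixel coordinates to Hilbert order (the classical algorithm, not the dissertation's Quad-Splitting or pseudo-Hilbert methods) can be sketched as follows.

```python
def xy_to_hilbert(n, x, y):
    """Map (x, y) on an n x n grid (n a power of two) to its Hilbert
    order, using the classic quadrant rotate-and-flip recursion."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)    # which quadrant, in curve order
        # Rotate/flip the quadrant so the sub-curve is in canonical position.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Hilbert orders of the 16 cells of a 4x4 grid, printed row by row.
print([[xy_to_hilbert(4, x, y) for x in range(4)] for y in range(4)])
```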
128. Topics in genomic image processing. Hua, Jianping (12 April 2006)
Image processing methodologies, long and actively studied and developed, now play a very significant role in flourishing biotechnology research. This work studies, develops and implements several image processing techniques for M-FISH and cDNA microarray images. In particular, we focus on three areas: M-FISH image compression, microarray image processing, and expression-based classification. Two schemes, embedded M-FISH image coding (EMIC) and Microarray BASICA (Background Adjustment, Segmentation, Image Compression and Analysis), are introduced for M-FISH image compression and microarray image processing, respectively. In the expression-based classification area, we investigate the relationship between the optimal number of features and the sample size, either analytically or through simulation, for various classifiers.
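The feature-count/sample-size relationship mentioned above can be illustrated with a small Monte-Carlo simulation: for a fixed training-set size, a classifier's test error typically falls and then rises (the peaking phenomenon) as progressively weaker features are added. The sketch below, assuming scikit-learn's LDA and a synthetic two-class Gaussian model with decaying feature quality, is an illustration of the general effect, not the study's actual experimental design.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def error_vs_features(n_train=15, n_test=1000, d_max=30, trials=30, seed=0):
    """Monte-Carlo sketch of the peaking phenomenon: with a fixed training
    sample size per class, LDA test error typically falls and then rises
    as progressively weaker features are added."""
    rng = np.random.default_rng(seed)
    mu = 0.7 ** np.arange(d_max)        # feature quality decays with index
    err = np.zeros(d_max)
    for _ in range(trials):
        Xtr = np.vstack([rng.normal(0.0, 1.0, (n_train, d_max)),
                         rng.normal(mu, 1.0, (n_train, d_max))])
        ytr = np.repeat([0, 1], n_train)
        Xte = np.vstack([rng.normal(0.0, 1.0, (n_test, d_max)),
                         rng.normal(mu, 1.0, (n_test, d_max))])
        yte = np.repeat([0, 1], n_test)
        for d in range(1, d_max + 1):
            clf = LinearDiscriminantAnalysis().fit(Xtr[:, :d], ytr)
            err[d - 1] += (clf.predict(Xte[:, :d]) != yte).mean()
    return err / trials

curve = error_vs_features()
print(curve.argmin() + 1)   # optimal feature count, typically well below d_max
```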
129. Analysis and Design of Image and Video Encryption Algorithms. Yekkala, Anil Kumar (12 1900)
The rapid growth of multimedia-based Internet systems and applications such as video telephony, video on demand, network-based DVD recorders and IP television has created a substantial need for multimedia security. One important requirement is transmitting digital multimedia content securely, using encryption to protect it from eavesdropping. The simplest way to encrypt multimedia content is to treat the two-dimensional or three-dimensional image/video stream as a one-dimensional stream and encrypt the entire content with a standard block cipher such as AES, DES or IDEA, or with a stream cipher such as RC4. Encrypting the entire multimedia content in this way is considered a naive encryption approach. Even though the naive approach provides the desired security, it imposes a large overhead on multimedia codecs, both because of the size of the content and because of the real-time requirements of transmission and rendering. Hence, lightweight encryption schemes are gaining popularity for multimedia encryption. Lightweight encryption schemes follow the principle "encrypt minimally and induce maximum noise" and are designed to take the structure of the multimedia content into consideration.

In our work we analyze some of the existing lightweight encryption schemes for digital images and video in terms of security, scalability and effect on compression. We also study several existing schemes in detail by designing cryptanalysis schemes built on image noise-clearing algorithms and pixel-prediction techniques; these cryptanalysis schemes considerably reduce the amount of noise introduced by the corresponding lightweight encryption schemes. Based on this analysis, we propose a set of more robust lightweight encryption schemes for images and video that are secure and scalable and do not degrade the compression achieved. We also propose a few enhancements to JPEG image compression that achieve more compression without compromising quality; these enhancements extend the pixel-prediction techniques used in the proposed cryptanalysis schemes.
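As a concrete illustration of the "encrypt minimally" principle, one common family of lightweight schemes encrypts only the sign bits of nonzero DCT coefficients with a keystream, leaving magnitudes and zero runs (and hence entropy-coding statistics and compression) largely untouched. The sketch below is a generic example of this idea, not one of the specific schemes analyzed or proposed in the thesis; it assumes the `cryptography` package and blocks shaped (n, 8, 8).

```python
import numpy as np
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def flip_signs(dct_blocks, key, nonce):
    """Flip the signs of nonzero AC coefficients of 8x8 DCT blocks with an
    AES-CTR keystream. Magnitudes and zero runs are untouched, so entropy
    coding statistics -- and hence compression -- are largely preserved.
    The same call with the same key/nonce inverts itself."""
    flat = dct_blocks.reshape(-1, 64).copy()
    ac = flat[:, 1:].reshape(-1).copy()          # AC coefficients only
    idx = np.flatnonzero(ac)                     # positions to encrypt
    nbytes = (len(idx) + 7) // 8
    ks = Cipher(algorithms.AES(key), modes.CTR(nonce)) \
        .encryptor().update(b"\x00" * nbytes)    # AES-CTR keystream
    bits = np.unpackbits(np.frombuffer(ks, np.uint8))[:len(idx)]
    ac[idx] *= np.where(bits == 1, -1, 1)        # keyed sign flips
    flat[:, 1:] = ac.reshape(-1, 63)
    return flat.reshape(dct_blocks.shape)

key, nonce = b"\x00" * 16, b"\x01" * 16          # demo values only
blocks = np.random.randint(-64, 64, (10, 8, 8))  # stand-in DCT coefficients
enc = flip_signs(blocks, key, nonce)
assert np.array_equal(flip_signs(enc, key, nonce), blocks)
```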
130. Advanced wavelet image and video coding strategies for multimedia communications. Vass, Jozsef (January 2000)
Thesis (Ph.D.), University of Missouri-Columbia, 2000. Typescript. Vita. Includes bibliographical references (leaves 202-221). Also available on the Internet.