51 |
Image/video coding for network applications: functionality and adaptation / Lan, Junqiang, January 2004 (has links)
Thesis (Ph.D.)--University of Missouri-Columbia, 2004. / Typescript. Vita. Includes bibliographical references (leaves 109-115). Also available on the Internet.
|
52 |
New wavelet transforms and their applications to data compression / Singh, Inderpreet 15 March 2018 (has links)
With the evolution of multimedia systems, image and video compression is becoming the key enabling technology for delivering various image/video services over heterogeneous networks. The basic goal of image data compression is to reduce the bit rate for transmission and storage while either maintaining the original quality of the data or providing an acceptable quality.
This thesis proposes a new wavelet transform for lossless compression of images with application to medical images. The transform uses integer arithmetic and is very computationally efficient. Then a new color image transformation, which is reversible and uses integer arithmetic, is proposed. The transformation reduces the redundancy among the red, green, and blue color bands. It approximates the luminance and chrominance components of the YIQ coordinate system. This transformation involves no floating point/integer multiplications or divisions, and is, therefore, very suitable for real-time applications where the number of CPU cycles needs to be kept to a minimum.
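The abstract does not give the transform's coefficients, so as a hedged illustration of the same idea (a reversible luminance/chrominance approximation using only integer adds and shifts), here is the JPEG2000-style reversible color transform; the function names are ours and the thesis's exact YIQ-like transform may differ:

```python
def rct_forward(r, g, b):
    """Reversible color transform: integer luma/chroma approximation.
    Only integer additions and shifts are used, so it is exactly invertible."""
    y = (r + 2 * g + b) >> 2   # approximate luminance (floor of weighted mean)
    cb = b - g                 # chrominance differences
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    """Exact integer inverse of rct_forward."""
    g = y - ((cb + cr) >> 2)
    r = cr + g
    b = cb + g
    return r, g, b
```

The `>>` floor-shift loses no information because the same floored quantity is recomputed from the chrominance differences in the inverse, which is why the round trip is exact.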
A technique for lossy compression of an image data base is also proposed. The technique uses a wavelet transform and vector quantization for compression. The discrete cosine transform is applied to the coarsest scale wavelet coefficients to achieve even higher compression ratios without any significant increase in computational complexity. Wavelet denoising is used to reduce the image artifacts generated by quantizing the discrete cosine transform coefficients. This improves the subjective quality of the decompressed images for very low bit rate images (less than 0.5 bits per pixel).
The thesis also deals with the real-time implementation of the wavelet transform. The new wavelet transform has been applied to speech signals. Both lossless and lossy techniques for speech coding have been implemented. The lossless technique involves using the reversible integer-arithmetic wavelet transform and Huffman coding to obtain the compressed bitstream. The lossy technique, on the other hand, quantizes the wavelet coefficients to obtain higher compression ratio at the expense of some degradation in sound quality. The issues related to real-time wavelet compression are also discussed. Due to the limited size of memory on a DSP, a wavelet transform had to be applied to an input signal of finite length. The effects of varying the signal length on compression performance are also studied for different reversible wavelet transforms. The limitations of the proposed techniques are discussed and recommendations for future research are provided. / Graduate
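The listing does not reproduce the thesis's reversible integer-arithmetic wavelet, but the core idea behind lossless lifting transforms of this kind can be sketched with the S-transform (an integer Haar transform via lifting); this is an illustrative stand-in, not the thesis's filter:

```python
def s_transform(x):
    """One level of the reversible S-transform (integer Haar via lifting).
    Input length must be even.  Returns (approximation, detail) lists."""
    d = [x[2*i] - x[2*i + 1] for i in range(len(x) // 2)]       # detail = pair difference
    s = [x[2*i + 1] + (d[i] >> 1) for i in range(len(x) // 2)]  # approx = floor(pair mean)
    return s, d

def s_inverse(s, d):
    """Exact integer inverse: undoes the lifting steps in reverse order."""
    x = []
    for si, di in zip(s, d):
        b = si - (di >> 1)     # recover second sample of the pair
        x.extend([di + b, b])  # recover first sample from the difference
    return x
```

Because each lifting step only adds a floored function of data that is still available at decode time, reconstruction is bit-exact, which is the property the lossless coder relies on before Huffman coding the coefficients.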
|
53 |
Wavelet-based Image Compression Using Human Visual System Models / Beegan, Andrew Peter 22 May 2001 (has links)
Recent research in transform-based image compression has focused on the wavelet transform due to its superior performance over other transforms. Performance is often measured solely in terms of peak signal-to-noise ratio (PSNR) and compression algorithms are optimized for this quantitative metric. The performance in terms of subjective quality is typically not evaluated. Moreover, the sensitivities of the human visual system (HVS) are often not incorporated into compression schemes.
This paper develops new wavelet models of the HVS and illustrates their performance for various scalar wavelet and multiwavelet transforms. The performance is measured quantitatively (PSNR) and qualitatively using our new perceptual testing procedure.
Our new HVS model comprises two components: CSF masking and asymmetric compression. CSF masking weights the wavelet coefficients according to the contrast sensitivity function (CSF), a model of human sensitivity to spatial frequency. This mask gives the most perceptible information the highest priority in the quantizer. The second component, asymmetric compression, exploits the well-known fact that humans are more sensitive to luminance stimuli than to chrominance stimuli: it quantizes the chrominance components more severely than the luminance component.
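The thesis's exact mask is not given in this listing; a minimal sketch of CSF masking, assuming the common Mannos–Sakrison CSF model and an assumed display-dependent peak frequency `f_max`, might look like:

```python
import math

def csf_weight(f):
    """Mannos-Sakrison contrast sensitivity approximation, f in cycles/degree.
    One common CSF model, used here as an assumed stand-in."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-(0.114 * f) ** 1.1)

def mask_subbands(detail_coeffs, f_max=32.0):
    """Weight each wavelet level's detail coefficients by the CSF at that
    level's nominal center frequency.  Level 0 is the finest scale; each
    coarser level halves the spatial frequency."""
    out = []
    for level, coeffs in enumerate(detail_coeffs):
        w = csf_weight(f_max / (2 ** level))
        out.append([w * c for c in coeffs])
    return out
```

Since the CSF peaks at mid frequencies (roughly 8 cycles/degree) and falls off at both ends, the mask boosts the subbands the eye sees best, so a subsequent uniform quantizer spends its bit budget on the most perceptible coefficients.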
The results of extensive trials indicate that our HVS model improves both quantitative and qualitative performance. These trials included 14 observers, 4 grayscale images and 10 color images (both natural and synthetic). For grayscale images, although our HVS scheme lowers PSNR, it improves subjective quality. For color images, our HVS model improves both PSNR and subjective quality. A benchmark for our HVS method is the latest version of the international image compression standard---JPEG2000. In terms of subjective quality, our scheme is superior to JPEG2000 for all images; it also outperforms JPEG2000 by 1 to 3 dB in PSNR. / Master of Science
|
54 |
Efficient image compression system using a CMOS transform imager / Lee, Jungwon 12 November 2009 (has links)
This research focuses on the implementation of an efficient image compression system, one of the many potential applications of a transform imager system. The study includes implementing the image compression system using a transform imager, developing a novel image compression algorithm for the system, and improving the performance of the image compression system through efficient encoding and decoding algorithms for vector quantization.
A transform imaging system is implemented using a transform imager, and the baseline JPEG compression algorithm is implemented and tested to verify the functionality and performance of the transform imager system. The computational reduction in digital processing is investigated from two perspectives: algorithmic and implementation. Algorithmically, a novel wavelet-based embedded image compression algorithm using dynamic index reordering vector quantization (DIRVQ) is proposed for the system. DIRVQ enables the proposed algorithm to achieve superior performance over the embedded zero-tree wavelet (EZW) algorithm and the successive approximation vector quantization (SAVQ) algorithm. However, because DIRVQ is computationally intensive, additional focus is placed on its efficient implementation, which is achieved without compromising performance.
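The dynamic index reordering of DIRVQ is not described in this listing; as a hedged baseline, here is the plain full-search vector quantization step that such schemes build on (names are ours):

```python
def vq_encode(vectors, codebook):
    """Map each input vector to the index of its nearest codeword
    under squared Euclidean distance (plain full-search VQ)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist2(v, codebook[i]))
            for v in vectors]

def vq_decode(indices, codebook):
    """Reconstruction is a simple table lookup."""
    return [codebook[i] for i in indices]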
|
55 |
Effects of image compression on data interpretation for telepathologyWilliams, Saunya Michelle 25 August 2011 (has links)
When geographical distance poses a barrier, telepathology offers pathologists the opportunity to replicate their normal activities through an alternative means of practice. Rapid progress in technology has greatly increased the appeal of telepathology and its use in multiple domains. Telepathology systems help provide teleconsultation services for remote locations, improve workload distribution in clinical environments, support quality assurance, and enhance educational programs.
While telepathology is an attractive method to many potential users, the resource requirements for digitizing microscopic specimens have hindered widespread adoption. The use of image compression is extremely critical to help advance the pervasiveness of digital images in pathology. For this research, we characterize two different methods that we use to assess compression of pathology images. Our first method is characterized by the fact that image quality is human-based and completely subjective in terms of interpretation. Our second method is characterized by the fact that image analysis is introduced by using machine-based interpretation to provide objective results. Additionally, the objective outcomes from the image analysis may also be used to help confirm tumor classification. With these two methods in mind, the purpose of this dissertation is to quantify the effects of image compression on data interpretation as seen by human experts and a computerized algorithm for use in telepathology.
|
56 |
Compression of Cartoon Images / Taylor, Ty 03 May 2011 (has links)
No description available.
|
57 |
Image compression using the one-dimensional discrete pulse transform / Uys, Ernst Wilhelm 03 1900 (has links)
Thesis (MSc)--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: The nonlinear LULU smoothers excel at removing impulsive noise from sequences and possess a variety of theoretical properties that make it possible to perform a so-called Discrete Pulse Transform, a novel multiresolution analysis technique that decomposes a sequence into resolution levels with a large amount of structure, analogous to a Discrete Wavelet Transform.
We explore the use of a one-dimensional Discrete Pulse Transform as the central element in a digital image compressor. We depend crucially on the ability of space-filling scanning orders to map the two-dimensional image data to one dimension, sacrificing as little image structure as possible. Both lossless and lossy image compression are considered, leading to five new image compression schemes that give promising results when compared to state-of-the-art image compressors. / AFRIKAANSE OPSOMMING: The nonlinear LULU smoothers remove impulsive noise from sequences very well and possess several theoretical properties that make it possible to perform a so-called Discrete Pulse Transform, a new multiresolution analysis technique that decomposes a sequence into a collection of resolution levels containing a large amount of structure, analogous to a Discrete Wavelet Transform.
We investigate whether a one-dimensional Discrete Pulse Transform can be used as the central element in a digital image compressor. We rely on space-filling scanning orders to convert the two-dimensional image data to one dimension without losing too much image structure. Five new image compression schemes are discussed; these schemes deliver promising results when compared with the best present-day image compressors.
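The width-1 LULU smoothers that the Discrete Pulse Transform iterates can be sketched directly from their min/max definitions; this is a minimal illustration (boundary samples are left untouched), not the thesis's implementation:

```python
def L1(x):
    """Width-1 lower LULU smoother: removes single-sample upward pulses."""
    y = list(x)
    for i in range(1, len(x) - 1):
        y[i] = max(min(x[i-1], x[i]), min(x[i], x[i+1]))
    return y

def U1(x):
    """Width-1 upper LULU smoother: removes single-sample downward pulses."""
    y = list(x)
    for i in range(1, len(x) - 1):
        y[i] = min(max(x[i-1], x[i]), max(x[i], x[i+1]))
    return y
```

The finest resolution level of the Discrete Pulse Transform is the residual `x - U1(L1(x))`, i.e. the pulses of width 1; coarser levels repeat the idea with wider windows on the smoothed output.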
|
58 |
Lifting schemes for wavelet filters of trigonometric vanishing moments / Cheng, Ho-Yin., 鄭浩賢. January 2002 (has links)
published_or_final_version / Mathematics / Master / Master of Philosophy
|
59 |
An error-free image compression algorithm using classifying-sequencing techniques. / He, Duanfeng. January 1991 (has links)
Digital image compression is increasingly in demand as our society becomes more information oriented, with more digital images being acquired, transmitted, and stored every day. Error-free, or non-destructive, image compression is required in applications where the final image is to be analyzed digitally by computers. A new error-free digital image compression algorithm, the Classifying-Sequencing algorithm, is presented in this dissertation. Without any statistical information about the images being processed, this algorithm achieves an average bits-per-pixel close to the entropy of the neighboring pixel differences; in other words, its compression results are comparable to the best that a statistical code can achieve. Because the algorithm involves no statistical modeling, code-book generation, or long-integer/floating-point arithmetic, it is simpler and therefore faster than standard statistical codes such as Huffman coding or arithmetic coding.
In this dissertation the new algorithm is tested on seven images, together with several known algorithms, and three lower-order entropies of the image files are provided for comparison. Presenting compression results for an isolated algorithm is not objective enough for comparisons between algorithms, since discrepancies exist not only between different images but also between copies of the same image reproduced from prints. Comparing the results of different algorithms, and comparing them against the entropy of the neighboring pixel differences, on the same images is more objective: when the entropy of an image is high, the compression ratios of all algorithms are likely to be low, and vice versa. Because it is faster at decoding than at encoding, the most promising applications of the Classifying-Sequencing algorithm are in digital image transmission, distribution, and archiving, where images are likely to be encoded once but decoded many times. It can easily be realized on simple processors, or completely in hardware, due to its simplicity.
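The benchmark the dissertation compares against, the entropy of the neighboring pixel differences, is straightforward to compute; a short sketch (the function names are ours):

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy in bits per symbol of an iterable of symbols."""
    counts = Counter(symbols)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def neighbor_difference_entropy(pixels):
    """Entropy of first differences between neighboring pixels: the
    lower bound that difference-based lossless coders are judged against."""
    diffs = [b - a for a, b in zip(pixels, pixels[1:])]
    return entropy(diffs)
```

For example, a smooth intensity ramp has maximal raw entropy (every value distinct) but zero difference entropy, which is exactly why predictive/difference coding compresses smooth images so well.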
|
60 |
DESIGN AND IMPLEMENTATION OF LIFTING BASED DAUBECHIES WAVELET TRANSFORMS USING ALGEBRAIC INTEGERS / 2013 April 1900 (has links)
Over the past few decades, the demand for digital information has increased drastically. This enormous demand strains the storage capacity and transmission bandwidth of current technologies. One possible solution is to compress the information by discarding its redundancies. In multimedia technology, various lossy compression techniques are used to compress raw image data to facilitate storage and to fit the transmission bandwidth.
In this thesis, we propose a new approach using algebraic integers to reduce the complexity of the Daubechies-4 (D4) and Daubechies-6 (D6) lifting-based Discrete Wavelet Transforms. The resulting architecture is completely integer based and is therefore free of the round-off error introduced by floating-point calculations. The filter coefficients of the two Daubechies transforms are individually converted to integers by multiplying them by 2^x, where x is chosen at a point where the loss is negligible. The wavelet coefficients are then quantized using the proposed iterative individual-subband coding algorithm, which is adapted from the well-known Embedded Zerotree Wavelet (EZW) coding. Simulation results show that the proposed coding algorithm is much faster than its predecessor while producing good Peak Signal to Noise Ratio (PSNR) at very low bit rates.
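The coefficient integerization step described above can be sketched as follows; the exponent `x = 12` is an assumed value for illustration, and the thesis's chosen precision and rounding policy may differ:

```python
import math

# Daubechies-4 analysis lowpass filter coefficients
s3, s2 = math.sqrt(3), math.sqrt(2)
h = [(1 + s3) / (4 * s2), (3 + s3) / (4 * s2),
     (3 - s3) / (4 * s2), (1 - s3) / (4 * s2)]

def integerize(coeffs, x):
    """Scale filter coefficients by 2**x and round to the nearest integer.
    Larger x gives higher precision at the cost of wider datapaths."""
    return [round(c * 2 ** x) for c in coeffs]

x = 12                      # assumed precision exponent
h_int = integerize(h, x)
# rounding error per coefficient is at most half an integer step, i.e. 2**-(x+1)
max_err = max(abs(c - q / 2 ** x) for c, q in zip(h, h_int))
```

The approximation error shrinks geometrically with `x`, so the exponent can be raised until the reconstruction loss it causes falls below the quantization loss of the coder, at which point it is effectively negligible.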
Finally, the two proposed transform architectures are implemented on Virtex-E Field Programmable Gate Array (FPGA) to test the hardware cost (in terms of multipliers, adders and registers) and throughput rate. From the synthesis results, we see that the proposed algorithm has low hardware cost and a high throughput rate.
|