221 |
Convolutional Neural Networks for Enhanced Compression Techniques. Gnacek, Matthew, 18 May 2021 (has links)
No description available.
|
222 |
Wavelet Based SPIHT Compression for DICOM Images. Dhasarathan, Iyyappan; Rathinasamy, Vimal; Cui, Tang, January 2011 (has links)
Generally, image viewers do not include scalability for image compression or efficient encoding and decoding for easy transmission. They also do not consider the specific requirements of heterogeneous networks constituted by the General Packet Radio Service (GPRS), the Universal Mobile Telecommunication System (UMTS), Wireless Local Area Network (WLAN), and Digital Video Broadcasting (DVB-H). This work presents a medical application, with a viewer for Digital Imaging and Communications in Medicine (DICOM) images as its core content. The application covers scalable wavelet-based compression, retrieval, and decompression of DICOM images and is compatible with mobile phones operating in heterogeneous networks. The paper also discusses the performance issues observed when the application is used in prototype heterogeneous networks.
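As a rough illustration of the kind of scalable wavelet-based coding this abstract describes, the sketch below uses PyWavelets to build a multi-level 2-D decomposition and keeps only the largest coefficients; the thesis itself uses SPIHT, which orders and transmits coefficients by bit plane, so this is only a simplified stand-in with assumed parameter values.

```python
# Simplified stand-in for scalable wavelet coding (not the actual SPIHT codec).
# Assumes a 2-D grayscale image array; numpy and pywt are required.
import numpy as np
import pywt

def wavelet_compress(image, wavelet="bior4.4", levels=4, keep_ratio=0.05):
    """Decompose, keep only the largest `keep_ratio` of coefficients, reconstruct."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    arr, slices = pywt.coeffs_to_array(coeffs)           # flatten coefficient tree
    threshold = np.quantile(np.abs(arr), 1.0 - keep_ratio)
    arr_sparse = np.where(np.abs(arr) >= threshold, arr, 0.0)
    coeffs_sparse = pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs_sparse, wavelet)

# Example: reconstruct a random "image" from 5% of its wavelet coefficients.
img = np.random.rand(256, 256).astype(np.float64)
recon = wavelet_compress(img)[:256, :256]
print("MSE:", np.mean((img - recon) ** 2))
```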
|
223 |
Dimension Reduction for Hyperspectral Imagery. Ly, Nam H (Nam Hoai), 14 December 2013 (has links)
In this dissertation, the general problem of the dimensionality reduction of hyperspectral imagery is considered. Data dimension can be reduced through compression, in which an original image is encoded into a bitstream of greatly reduced size; through application of a transformation, in which a high-dimensional space is mapped into a low-dimensional space; and through a simple process of subsampling, wherein the number of pixels is reduced spatially during image acquisition. All three techniques are investigated in the course of the dissertation. For data compression, an approach to calculate an operational bitrate for JPEG2000 in conjunction with principal component analysis is proposed. It is shown that an optimal bitrate for such a lossy compression method can be estimated while maintaining both class separability and anomalous pixels in the original data. On the other hand, the transformation paradigm is studied for spectral dimensionality reduction; specifically, data-independent random spectral projections are considered, while the compressive projection principal component analysis algorithm is adopted for data reconstruction. It is shown that, by incorporating both spectral and spatial partitioning of the original data, reconstruction accuracy can be improved. Additionally, a new supervised spectral dimensionality reduction approach using a sparsity-preserving graph is developed. The resulting sparse graph-based discriminant analysis is seen to yield superior classification performance at low dimensionality. Finally, for spatial dimensionality reduction, a simple spatial subsampling scheme is considered for a multitemporal hyperspectral image sequence, such that the original image is reconstructed using a sparse dictionary learned from a prior image in the sequence.
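The data-independent random spectral projections mentioned in the abstract can be sketched in a few lines. The thesis reconstructs the data with compressive-projection principal component analysis (CPPCA); the sketch below substitutes a naive least-squares reconstruction, and all dimensions and names are illustrative.

```python
# Hypothetical sketch of data-independent random spectral projection
# for a hyperspectral cube of shape (rows, cols, bands).
import numpy as np

rng = np.random.default_rng(0)
rows, cols, bands, k = 64, 64, 200, 20          # k = reduced spectral dimension

cube = rng.random((rows, cols, bands))          # stand-in hyperspectral data
pixels = cube.reshape(-1, bands)                # one spectrum per row

# Random projection matrix with orthonormal columns (data-independent).
P, _ = np.linalg.qr(rng.standard_normal((bands, k)))
projected = pixels @ P                          # (rows*cols, k) reduced data

# Naive least-squares reconstruction (the dissertation uses CPPCA instead).
reconstructed = projected @ P.T
snr = 10 * np.log10(np.sum(pixels**2) / np.sum((pixels - reconstructed)**2))
print(f"Reconstruction SNR from {k}/{bands} random projections: {snr:.1f} dB")
```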
|
224 |
A Comparison of the Discrete Hermite Transform and Wavelets for Image Compression. Bellis, Christopher John, 14 May 2012 (has links)
No description available.
|
225 |
Comparison of Sparse Coding and JPEG Coding Schemes for Blurred Retinal Images. Chandrasekaran, Balaji, 01 January 2007 (has links)
Overcomplete representations are currently one of the most actively researched topics in signal processing because of their strong potential to generate sparse representations of signals. Sparse representation implies that a given signal can be represented with components that are only rarely significantly active. It has been strongly argued that the mammalian visual system relies on sparse and overcomplete representations: the primary visual cortex gives overcomplete responses to an input signal, which leads to sparse neuronal activity in further processing. This work investigates sparse coding with an overcomplete basis set, believed to be the strategy employed by the mammalian visual system for efficient coding of natural images. The first part analyzes the sparse code learning algorithm, in which a given image is represented as a linear superposition of sparse, statistically independent events on a set of overcomplete basis functions; the algorithm trains and adapts the basis functions so as to represent any given image in terms of sparse structures. The second part analyzes an inhibition-based sparse coding model in which Gabor-based overcomplete representations are used to represent the image; an iterative inhibition algorithm, based on competition between neighboring transform coefficients, then selects a subset of Gabor functions so as to represent the given image with a sparse set of coefficients. The developed models are applied to image compression, and the achievable compression levels are tested. Research in this area so far indicates that sparse coding algorithms are inefficient at representing sharp, high-frequency image features, so this work analyzes the performance of these algorithms only on natural images without such features and compares the compression results with current industry-standard coding schemes such as JPEG and JPEG 2000. It also models the characteristics of an image falling on the retina after the distortion effects of the eye, applies the developed algorithms to these images, and tests the compression results.
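As a generic illustration of sparse coding over an overcomplete dictionary, the sketch below uses scikit-learn's MiniBatchDictionaryLearning on image patches; it is not the thesis's sparse code learning algorithm or its Gabor-inhibition model, just an assumed stand-in for the same idea.

```python
# Generic sparse-coding sketch: learn an overcomplete patch dictionary and
# encode patches with a small number of active coefficients.
# This is an assumed illustration, not the thesis's exact algorithm.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
image = rng.random((128, 128))                       # stand-in natural image
patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)                   # remove patch DC component

# 128 atoms for 64-dimensional patches -> 2x overcomplete dictionary.
dico = MiniBatchDictionaryLearning(
    n_components=128,
    transform_algorithm="omp",                       # sparse coding step
    transform_n_nonzero_coefs=5,                     # only 5 active atoms per patch
    random_state=0,
)
codes = dico.fit(X).transform(X)
print("Average nonzero coefficients per patch:", (codes != 0).sum(1).mean())
```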
|
226 |
Construction and evaluation of a lossless image format, Carbonara / Konstruktion och evaluering av ett icke-förstörande bildformat, Carbonara. Rösler, Viktor, January 2023 (has links)
High-speed laser triangulation 3D cameras, such as the Ranger3 from SICK, transmit image data to a PC for processing. The camera's operational speed is constrained by the capabilities of the transmission link, so compressing the data reduces the camera's bandwidth requirements. This thesis presents the development of a lossless image compression format for this purpose. The proposed format features a single-pass encoder that combines run-length and delta encoding and is designed to be suitable for implementation on field-programmable gate arrays (FPGAs) within high-speed laser-scanning cameras. Furthermore, the format is configurable through parameters, enabling optimization for diverse types of image data to achieve more efficient compression. The compression ratio of the format was evaluated using a range of typical images captured by a Ranger3 camera; it was measured across different configurations of the format and subsequently compared with that of PNG. The compression ratio achieved by the proposed format is on par with that of PNG, despite a much simpler encoding process.
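A minimal software model of a single-pass encoder combining delta and run-length coding might look like the sketch below; the actual Carbonara bitstream layout, its configuration parameters, and the FPGA implementation are more elaborate, so the names and structure here are illustrative only.

```python
# Illustrative single-pass delta + run-length encoder for one image row.
# This models the general idea, not the actual Carbonara format.
def encode_row(row):
    """Return a list of (delta, run_length) pairs for a sequence of pixel values."""
    out = []
    prev = 0                      # predictor: previous pixel (0 before the row starts)
    run_delta, run_len = None, 0
    for pixel in row:
        delta = pixel - prev
        prev = pixel
        if delta == run_delta:
            run_len += 1          # extend the current run of identical deltas
        else:
            if run_len:
                out.append((run_delta, run_len))
            run_delta, run_len = delta, 1
    if run_len:
        out.append((run_delta, run_len))
    return out

def decode_row(pairs):
    row, prev = [], 0
    for delta, run_len in pairs:
        for _ in range(run_len):
            prev += delta
            row.append(prev)
    return row

row = [10, 10, 10, 12, 14, 16, 16, 16]
encoded = encode_row(row)
assert decode_row(encoded) == row
print(encoded)                    # [(10, 1), (0, 2), (2, 3), (0, 2)]
```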
|
227 |
On the Performance of JPEG2000 and Principal Component Analysis in Hyperspectral Image Compression. Zhu, Wei, 05 May 2007 (has links)
Because of the vast data volume of hyperspectral imagery, compression becomes a necessary process for hyperspectral data transmission, storage, and analysis. Three-dimensional discrete wavelet transform (DWT) based algorithms are of particular interest due to their excellent rate-distortion performance. This thesis investigates several issues surrounding efficient compression using JPEG2000. Firstly, the rate-distortion performance is studied when Principal Component Analysis (PCA) replaces the DWT for spectral decorrelation, with the focus on using a subset of principal components (PCs) rather than all the PCs. Secondly, the algorithms are evaluated in terms of data analysis performance, such as anomaly detection and linear unmixing, which is directly related to the useful information preserved. Thirdly, the performance of compressing radiance and reflectance data with or without bad-band removal is compared, and instructive suggestions are provided for practical applications. Finally, low-complexity PCA algorithms are presented to reduce the computational complexity and facilitate future hardware design.
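The idea of replacing the spectral wavelet transform with PCA and retaining only a subset of principal components can be sketched as follows; the JPEG2000 coding of the retained component planes is omitted, and the sizes and variable names are assumptions.

```python
# Sketch of PCA spectral decorrelation for a hyperspectral cube, keeping a
# subset of principal components before 2-D coding (JPEG2000 step omitted).
import numpy as np

rng = np.random.default_rng(1)
rows, cols, bands, kept_pcs = 64, 64, 220, 10

cube = rng.random((rows, cols, bands))
X = cube.reshape(-1, bands)
mean = X.mean(axis=0)
Xc = X - mean

# Eigendecomposition of the spectral covariance matrix.
cov = Xc.T @ Xc / (Xc.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # sort PCs by decreasing variance
V = eigvecs[:, order[:kept_pcs]]           # keep only the leading components

pcs = Xc @ V                               # spectrally decorrelated component planes
# ... each of the kept_pcs component planes would be coded with JPEG2000 here ...
reconstructed = pcs @ V.T + mean
snr = 10 * np.log10(np.sum(X**2) / np.sum((X - reconstructed)**2))
print(f"SNR with {kept_pcs}/{bands} principal components: {snr:.1f} dB")
```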
|
228 |
FPGA Implementation of the JPEG2000 MQ Decoder. Lucking, David Joseph, 05 May 2010 (has links)
No description available.
|
229 |
Three-level block truncation coding. Lee, Deborah Ann, 01 January 1988 (has links) (PDF)
Block Truncation Coding (BTC) techniques, to date, utilize a two-level image block code. This thesis presents and studies a new BTC method employing a three-level image coding technique. This new method is applied to actual image frames and compared with other well-known block truncation coding techniques.
The method separates an image into disjoint, rectangular regions of fixed size and finds the highest and lowest pixel values of each. Using these values, the image block pixel value range is calculated and divided into three equal sections. The individual image block pixels are then quantized according to the region into which their pixel value falls: to a 2 if they fall in the upper third, a 1 in the middle third, and a 0 in the lower third. Thus, each pixel now requires only two bits for transmission. This is one bit per pixel more than the other well-known BTC techniques, and thus the method has a smaller compression ratio.
When the BTC techniques were applied to actual images, the resulting 3LBTC reconstructed images had the smallest mean-squared-error of the techniques applied. It also produced favorable results in terms of the entropy of the reconstructions as compared to the entropy of the original images. The reconstructed images were also very good replicas of the originals and the 3LBTC process had the fastest processing speed. For applications where coding and reconstruction speed are crucial and bandwidth is not critical, the 3LBTC provides an image coding solution.
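The three-level quantization described in this entry is straightforward to prototype; a minimal sketch for a single block follows, where the reconstruction levels (block minimum, mid-range, and maximum) are an assumption rather than the thesis's exact rule.

```python
# Minimal sketch of three-level block truncation coding (3LBTC) for one block.
# Reconstruction levels below are an assumption (min, mid-range, max of the block).
import numpy as np

def encode_block(block):
    lo, hi = block.min(), block.max()
    third = (hi - lo) / 3.0
    labels = np.zeros(block.shape, dtype=np.uint8)       # 0 = lower third
    labels[block > lo + third] = 1                       # 1 = middle third
    labels[block > lo + 2 * third] = 2                   # 2 = upper third
    return lo, hi, labels                                # 2 bits per pixel plus lo/hi

def decode_block(lo, hi, labels):
    levels = np.array([lo, (lo + hi) / 2.0, hi])         # assumed reconstruction levels
    return levels[labels]

block = np.array([[ 12,  15,  90, 200],
                  [ 10, 100, 110, 210],
                  [ 14,  16, 120, 205],
                  [ 11,  95, 130, 199]], dtype=float)
lo, hi, labels = encode_block(block)
recon = decode_block(lo, hi, labels)
print("MSE:", np.mean((block - recon) ** 2))
```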
|
230 |
L2 Optimized Predictive Image Coding with L∞ Bound. Chuah, Sceuchin, 04 1900 (has links)
In many scientific, medical, and defense applications of image/video compression, an l∞ error bound is required. However, pure l∞-optimized image coding, colloquially known as near-lossless image coding, is prone to structured errors such as contours and speckles if the bit rate is not sufficiently high; moreover, previous l∞-based image coding methods suffer from poor rate control. In contrast, the l2 error metric aims for average fidelity and hence preserves the subtlety of smooth waveforms better than the l∞ error metric, and it offers fine granularity in rate control; but pure l2-based image coding methods (e.g., JPEG 2000) cannot bound individual errors as l∞-based methods can. This thesis presents a new compression approach to retain the benefits and circumvent the pitfalls of the two error metrics. / Master of Applied Science (MASc)
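A common way to enforce an l∞ bound in predictive coding is to quantize prediction residuals uniformly with step 2·τ+1, which guarantees that every reconstructed sample differs from the original by at most τ. The sketch below illustrates that mechanism on a 1-D integer signal; it is not the thesis's combined l2/l∞ design.

```python
# Sketch of near-lossless predictive coding with a hard l-infinity bound tau.
# Quantizing residuals with step (2*tau + 1) keeps every sample error <= tau.
import numpy as np

def near_lossless_encode(signal, tau):
    step = 2 * tau + 1
    recon, indices = [], []
    prev = 0                                   # simple previous-sample predictor
    for x in signal:
        residual = x - prev
        q = int(np.round(residual / step))     # quantized residual index
        rec = prev + q * step                  # decoder-side reconstruction
        indices.append(q)                      # these indices would be entropy coded
        recon.append(rec)
        prev = rec                             # predict from reconstructed values
    return indices, np.array(recon)

signal = np.array([100, 103, 107, 110, 180, 181, 60, 61])
indices, recon = near_lossless_encode(signal, tau=2)
print("max |error| =", np.max(np.abs(signal - recon)))   # guaranteed <= 2
```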
|