  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

An investigation into the requirements for an efficient image transmission system over an ATM network

Chia, Liang-Tien January 1994
This thesis looks into the problems arising in an image transmission system when transmitting over an ATM network. Two main areas were investigated: (i) an alternative coding technique to reduce the bit rate required; and (ii) concealment of errors due to cell loss, with emphasis on processing in the transform domain of DCT-based images.
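Transform-domain concealment of this kind lends itself to a small illustration. The sketch below is hypothetical, not the thesis's actual scheme: it estimates the DC coefficient of an 8x8 DCT block lost to cell loss as the mean of the DC coefficients of its available neighbouring blocks.

```python
def conceal_dc(dc_grid, r, c):
    """Estimate a lost block's DC coefficient as the mean of the
    DC values of its available 4-neighbours (None marks a lost block)."""
    neighbours = [
        dc_grid[i][j]
        for i, j in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
        if 0 <= i < len(dc_grid) and 0 <= j < len(dc_grid[0])
        and dc_grid[i][j] is not None
    ]
    return sum(neighbours) / len(neighbours)

# A 3x3 grid of per-block DC values; the centre block was lost.
grid = [[90, 100, 110],
        [120, None, 130],
        [80, 140, 95]]
print(conceal_dc(grid, 1, 1))  # mean of 100, 120, 130, 140 -> 122.5
```

Real concealment schemes also interpolate AC coefficients; averaging the DC term alone only recovers the block's mean intensity.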
42

The application of computer vision to very low bit-rate communications

Gibson, David Peter January 2000
No description available.
43

Lossless medical image compression using integer transforms and predictive coding technique

Neela, Divya January 1900
Master of Science / Department of Electrical and Computer Engineering / D. V. Satish Chandra / The future of healthcare delivery systems and telemedical applications will undergo a radical change due to developments in wearable technologies, medical sensors, mobile computing and communication techniques. E-health emerged from the integration of networks and telecommunications for collecting, sorting and transferring medical data from distant locations to support remote medical collaboration and diagnosis. Healthcare systems in recent years rely on images acquired in the two-dimensional (2D) domain in the case of still images, or the three-dimensional (3D) domain for volumetric images or video sequences. Images are acquired with many modalities, including X-ray, positron emission tomography (PET), magnetic resonance imaging (MRI), computed axial tomography (CAT) and ultrasound. Because medical information is in multidimensional or multi-resolution form, it creates an enormous amount of data, and efficient storage, retrieval, management and transmission of this voluminous data is extremely complex. One solution to this problem is to compress the medical data losslessly so that diagnostic capabilities are not compromised. This report proposes techniques that combine integer transforms and predictive coding to enhance the performance of lossless compression. The performance of the proposed techniques is evaluated using compression measures such as entropy and scaled entropy.
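The entropy measure used for evaluation can be made concrete with a short sketch (illustrative only, not the report's implementation): predictive coding replaces pixels with differences from their predecessor, which concentrates the symbol distribution and lowers the Shannon entropy, the bound on lossless bits per symbol.

```python
from collections import Counter
import math

def entropy(symbols):
    """Shannon entropy of a symbol sequence, in bits per symbol."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A smooth row of pixel values: eight distinct symbols, maximal entropy.
row = [10, 12, 15, 17, 20, 22, 25, 27]

# Predictive coding: keep the first pixel, then transmit differences.
residuals = [row[0]] + [b - a for a, b in zip(row, row[1:])]

print(entropy(row))        # 3.0 bits/symbol
print(entropy(residuals))  # about 1.41 bits/symbol
```

The residual sequence repeats a few small values, so an entropy coder can represent it in fewer than half the bits of the raw row.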
44

Multidimensional multirate filter banks : some theory and design

Tay, David B. H. January 1993
No description available.
45

Progressive Lossy-to-Lossless Compression of DNA Microarray Images

Hernandez-Cabronero, Miguel, Blanes, Ian, Pinho, Armando J., Marcellin, Michael W., Serra-Sagrista, Joan 05 1900
The analysis techniques applied to DNA microarray images are under active development. As new techniques become available, it will be useful to apply them to existing microarray images to obtain more accurate results. The compression of these images can be a useful tool to alleviate the costs associated with their storage and transmission. The recently proposed Relative Quantizer (RQ) coder provides the most competitive lossy compression ratios while introducing only acceptable changes in the images. However, images compressed with the RQ coder can only be reconstructed with a limited quality, determined before compression. In this work, a progressive lossy-to-lossless scheme is presented to solve this problem. First, the regular structure of the RQ intervals is exploited to define a lossy-to-lossless coding algorithm called the Progressive RQ (PRQ) coder. Second, an enhanced version that prioritizes a region of interest, called the PRQ-region of interest (PRQ-ROI) coder, is described. Experiments indicate that the PRQ coder offers progressivity with lossless and lossy coding performance almost identical to the best techniques in the literature, none of which is progressive. In turn, the PRQ-ROI coder exhibits very similar lossless coding results with better rate-distortion performance than both the RQ and PRQ coders.
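The RQ and PRQ coders are specific to this work, but the general mechanism of progressive lossy-to-lossless coding can be sketched with plain bitplane transmission (an illustration of the principle, not the PRQ algorithm): decoding any prefix of the planes yields a coarser reconstruction, and decoding all of them recovers the data losslessly.

```python
def bitplanes(values, nbits=8):
    """Split non-negative integers into bitplanes, most significant first."""
    return [[(v >> b) & 1 for v in values] for b in range(nbits - 1, -1, -1)]

def reconstruct(planes, nbits=8):
    """Rebuild values from however many leading bitplanes were received."""
    vals = [0] * len(planes[0])
    for i, plane in enumerate(planes):
        shift = nbits - 1 - i
        vals = [v | (bit << shift) for v, bit in zip(vals, plane)]
    return vals

pixels = [200, 13, 97]
planes = bitplanes(pixels)
print(reconstruct(planes[:4]))  # lossy prefix of 4 planes: [192, 0, 96]
print(reconstruct(planes))      # all 8 planes: [200, 13, 97] (lossless)
```

Truncating after k planes bounds the error by 2^(8-k), which is the quantize-then-refine behaviour a progressive coder exposes to the decoder.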
46

Compression techniques for image-based representations

Ng, King-to., 吳景濤. January 2003
Published or final version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
47

Context-based compression algorithms for text and image data.

January 1997
Wong Ling. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 80-85). / Contents:
1. Introduction: motivation; original contributions; thesis structure
2. Background: information theory; early compression (Huffman, Tunstall and arithmetic codes); modern techniques for compression (statistical modeling with context-based and state-based models; dictionary-based compression and LZ-compression; block sorting; context tree weighting)
3. Symbol Remapping: review of block sorting (forward and inverse transformation); ordering method; discussion
4. Content Prediction: prediction and ranking schemes (content predictor; ranking technique); review of context sorting; general framework of content prediction (a baseline version; context length merge); discussion
5. Bounded-Length Block Sorting: block sorting with bounded context length (forward and reverse transformation); locally adaptive entropy coding; discussion
6. Context Coding for Image Data: digital images and redundancy; model of a compression system (representation; quantization; lossless coding); embedded zerotree wavelet coding (simple zerotree-like implementation; analysis of zerotree coding: linkage between coefficients, design of a uniform threshold quantizer with dead zone); extensions on wavelet coding (coefficient scanning); discussion
7. Conclusions: future research
Appendices: lossless compression results; image compression standards; human visual system characteristics; lossy compression results. Compression gallery: context-based wavelet coding; RD-OPT-based JPEG compression; SPIHT wavelet compression.
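The block sorting (Burrows-Wheeler) transform reviewed in Chapter 3 and bounded in Chapter 5 can be sketched naively (an O(n² log n) illustration with an explicit sentinel; the thesis's bounded-length variant restricts the context used for sorting):

```python
def bwt(s):
    """Forward transform: last column of the sorted rotations of s + sentinel."""
    s += "\0"  # sentinel: unique and lexicographically smallest
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t):
    """Inverse transform: iteratively prepend-and-sort until the table holds
    every rotation, then take the row that ends with the sentinel."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(c + row for c, row in zip(t, table))
    return next(row for row in table if row.endswith("\0"))[:-1]

print(ibwt(bwt("banana")))  # round-trips to "banana"
```

The transform clusters characters that share a context, which is why a locally adaptive entropy coder (Section 5.2) compresses the output well; production implementations use suffix arrays rather than explicit rotations.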
48

Entropy coding and post-processing for image and video coding.

January 2010
Fong, Yiu Leung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 83-87). / Abstracts in English and Chinese. / Contents:
1. Introduction
2. Background and Motivation: context-based arithmetic coding; video post-processing
3. Context-Based Arithmetic Coding for JPEG: introduction (Huffman coding: concept and drawbacks; context-based arithmetic coding: concept); proposed method (redundancy in quantized DCT coefficients: zig-zag scanning position, magnitudes of previously coded coefficients; proposed scheme: preparation of coding, coding of non-zero coefficient flags and EOB decisions, coding of 'LEVEL', separate coding of color planes); experimental results (evaluation method; methods under evaluation; average file size reduction; file size reduction on individual images; performance of individual techniques); discussion
4. Video Post-processing for H.264: introduction; proposed method; experimental results (deblocking on compressed frames; deblocking on residue of compressed frames; performance investigation; investigation experiments 1-3); discussion
5. Conclusions
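The zig-zag scanning position used as a coding context in Section 3.2.2.1 follows the standard JPEG scan order, which can be generated as below (a sketch of the standard ordering, not the thesis's coder):

```python
def zigzag_order(n=8):
    """Index pairs of an n x n block in JPEG zig-zag order: walk the
    anti-diagonals from DC outward, alternating direction on each one."""
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

print(zigzag_order()[:6])  # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```

Because quantized high-frequency coefficients are usually zero, position along this scan is a strong predictor of whether a coefficient is non-zero, which is what makes it useful as an arithmetic-coding context.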
49

An Analysis of Approaches to Efficient Hardware Realization of Image Compression Algorithms

Iravani, Kamran 27 October 1994
In this thesis an attempt has been made to develop a fast algorithm to compress images. The Reed-Muller compression algorithm introduced by Reddy & Pai [3] is fast, but its compression factor is too low compared to other methods. This thesis first attempts to improve the method by generalizing the Reed-Muller transform to the fixed polarity Reed-Muller form, and shows that the fixed polarity Reed-Muller transform does not improve the compression factor enough to warrant its use as an image compression method. The paper by Reddy & Pai [3] on Reed-Muller image compression is then criticized: crucial errors in the paper make it impossible to evaluate the quality and compression factors of their approach. Finally, a simple and fast method for image compression is introduced that takes advantage of the high correlation between adjacent pixels of an image. If the matrix of pixel values of an image is divided into bit planes from the Most Significant Bit (MSB) plane to the Least Significant Bit (LSB) plane, most of the adjacent bits in the MSB planes (MSB, 2nd MSB, 3rd MSB and 4th MSB) are the same. Using this fact, a method has been developed by XORing the adjacent lines of the MSB planes bit by bit, and XORing the resulting planes bit by bit. It is shown that this method gives a much better compression factor and can be realized with much simpler hardware than the Reed-Muller image compression method.
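The first, row-wise stage of the XOR scheme described above can be sketched as follows (a minimal software illustration of the idea; the thesis targets a hardware realization):

```python
def xor_planes(image, planes=(7, 6, 5, 4)):
    """Extract the listed bitplanes (7 = MSB) and XOR each row with the
    row above it; strongly correlated rows leave mostly-zero planes,
    which compress well."""
    out = {}
    for b in planes:
        plane = [[(px >> b) & 1 for px in row] for row in image]
        out[b] = [plane[0]] + [
            [a ^ c for a, c in zip(prev, cur)]
            for prev, cur in zip(plane, plane[1:])
        ]
    return out

# Four identical rows: every XORed row after the first is all zeros.
image = [[200, 100, 50]] * 4
result = xor_planes(image)
print(result[7])  # [[1, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

The abstract's second stage, XORing the resulting planes against each other, is omitted here; the point of the sketch is that each XOR stage turns inter-pixel correlation into runs of zeros.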
50

Signal compression for digital television.

Truong, Huy S. January 1999
Still image and image sequence compression plays an important role in the development of digital television. Although various still image and image sequence compression algorithms have already been developed, it is very difficult for them to achieve both compression performance and coding efficiency simultaneously due to the complexity of the compression process itself. As a result, improvements in the form of hybrid coding, coding procedure refinement, new algorithms and even new coding concepts have been constantly tried, some offering very encouraging results.

In this thesis, Block Adaptive Classified Vector Quantisation (BACVQ) has been developed as an alternative algorithm for still image compression. It is found that BACVQ achieves good compression performance and coding efficiency by combining variable block-size coding and classified VQ. Its performance is further enhanced by adopting both spatial and transform domain criteria for the image block segmentation and classification process. Alternative algorithms have also been developed to accelerate the normal codebook searching operation and to determine the optimal sizes of classified VQ sub-codebooks.

For image sequence compression, an adaptive spatial/temporal compression algorithm has been developed which divides an image sequence into smaller groups of pictures (GOP) using adaptive scene segmentation before BACVQ and variable block-size motion compensated predictive coding are applied to the intraframe and interframe coding processes. It is found that the application of the proposed adaptive scene segmentation algorithm, an alternative motion estimation strategy and a new progressive motion estimation algorithm enables the performance and efficiency of the compression process to be improved even further.

Apart from improving still image and image sequence compression algorithms, the application of parallel processing to image sequence compression is also investigated. Parallel image compression offers a more effective approach than its sequential counterparts to accelerate the compression process and bring it closer to real-time operation. In this study, a small-scale parallel digital signal processing platform has been constructed to support parallel image sequence compression. It consists of a 486DX33 IBM/PC serving as a master processor and two DSP (PC-32) cards as parallel processors. Because of the independent processing and spatial arrangement nature of most image processing operations, an effective parallel image sequence compression algorithm has been developed on the proposed parallel processing platform to significantly reduce the processing time of the proposed parallel image compression algorithms.
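Classified, variable block-size VQ as in BACVQ involves many components, but the core quantisation step it builds on, mapping each image block to its nearest codeword, can be sketched simply (illustrative codebook and data, not the thesis's algorithm):

```python
def quantize_blocks(blocks, codebook):
    """Replace each block by the index of its nearest codeword
    under squared-error distortion."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: dist(block, codebook[k]))
            for block in blocks]

# A toy codebook of two 2-pixel codewords and three blocks to encode.
codebook = [(0, 0), (10, 10)]
blocks = [(1, 2), (9, 8), (0, 1)]
print(quantize_blocks(blocks, codebook))  # [0, 1, 0]
```

Only the codeword indices are transmitted, which is where VQ's compression comes from; classified VQ splits the codebook by block type (edge, texture, smooth) so each search runs over a smaller, better-matched sub-codebook.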
