21

Analysis and Design of Lossless Bi-level Image Coding Systems

Guo, Jianghong January 2000 (has links)
Lossless image coding deals with the problem of representing an image with a minimum number of binary bits from which the original image can be fully recovered without any loss of information. Most lossless image coding algorithms achieve efficient compression by exploiting the spatial correlations and statistical redundancy present in images; context-based algorithms are typical of this approach. One key problem in context-based lossless bi-level image coding is the design of context templates. By using carefully designed context templates, we can effectively employ the information provided by surrounding pixels in an image. In almost all image processing applications, image data is accessed in a raster-scanning manner and treated as a 1-D integer sequence rather than 2-D data. In this thesis, we present a quadrisection scanning method that improves on raster scanning by incorporating more adjacent surrounding pixels into context templates. Based on quadrisection scanning, we develop several context templates and propose several image coding schemes for both sequential and progressive lossless bi-level image compression. Our results show that our algorithms perform better than raster-scanning-based algorithms such as JBIG1, which is used as a reference in this thesis. The application of 1-D grammar-based codes in lossless image coding is also discussed. 1-D grammar-based codes outperform LZ77/LZ78-based compression utilities for general data compression, and they are also effective in lossless image coding. Several coding schemes for bi-level image compression via 1-D grammar codes are provided in this thesis, notably the parallel switching algorithm, which combines the power of 1-D grammar-based codes and context-based algorithms. Most of our results are comparable to or better than those afforded by JBIG1.
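The quadrisection idea in this abstract can be illustrated with a toy scan-order generator. This is a hedged sketch, not the thesis's actual algorithm: it simply visits an n×n image by recursive quadrant splitting (Morton/Z-order style), so pixels that are spatial neighbours tend to stay close in the visit sequence, unlike raster order.

```python
def quadrisection_order(n):
    """Visit order for an n x n image (n a power of 2), produced by
    recursively splitting the image into four quadrants. A simplified
    stand-in for the thesis's quadrisection scan."""
    if n == 1:
        return [(0, 0)]
    half = n // 2
    sub = quadrisection_order(half)
    order = []
    # Visit quadrants top-left, top-right, bottom-left, bottom-right.
    for dr, dc in [(0, 0), (0, half), (half, 0), (half, half)]:
        order.extend((r + dr, c + dc) for r, c in sub)
    return order

# Raster scan visits row by row; quadrisection keeps each quadrant's
# pixels contiguous at every level of recursion.
print(quadrisection_order(2))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

With such an ordering, the already-visited pixels available for a context template around the current pixel are more evenly distributed than the strictly "above and to the left" neighbourhood that raster scanning allows.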
22

Perception-based second generation image coding using variable resolution / Perceptionsbaserad andra generationens bildkodning med variabel upplösning

Rydell, Joakim January 2003 (has links)
In ordinary image coding, the same image quality is obtained in all parts of an image. If it is known that there is only one viewer, and where in the image that viewer is focusing, the quality can be degraded in other parts of the image without incurring any perceptible coding artefacts. This master's thesis presents a coding scheme in which an image is segmented into homogeneous regions that are then coded separately, and in which knowledge about the user's focus point is used to obtain further data reduction. It is concluded that the coding performance does not quite reach the levels attained when applying focus-based quality degradation to coding schemes not based on segmentation.
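The focus-based degradation described above can be sketched with a toy foveation rule: full quality at the known focus point, falling off with distance. The function name, constants, and the linear falloff are illustrative assumptions; the thesis's actual perceptual model is more elaborate.

```python
import math

def quality_factor(x, y, fx, fy, base_q=90, falloff=0.15):
    """Toy foveation rule: quality 'base_q' at the focus point (fx, fy),
    degrading linearly with distance, floored at 10. All names and
    constants here are illustrative, not the thesis's model."""
    dist = math.hypot(x - fx, y - fy)
    return max(10, round(base_q - falloff * dist))

# Quality at the focus point vs. 400 pixels away from it.
print(quality_factor(100, 100, 100, 100))  # 90
print(quality_factor(500, 100, 100, 100))  # 30
```

A region coder can then allocate bits per region according to the quality factor at that region's centroid, spending almost nothing on the periphery.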
23

Implementation of a Watermarking Algorithm for H.264 Video Sequences / Implementation av en vattenmärkningsalgoritm för H.264-videosekvenser

Bergkvist, David January 2004 (has links)
In today's video delivery and broadcast networks, issues of copyright protection have become more urgent than in analog times, since copying digital video does not incur the loss of quality that occurs when analog video is copied. One method of copyright protection is to embed a digital code, a "watermark", into the video sequence. The watermark can then unambiguously identify the copyright holder of the video sequence. Watermarks can also be used to identify the purchaser of a video sequence, which is called "fingerprinting". The objective of this master's thesis was to implement a program that inserts watermarks into video sequences and detects whether a given video sequence contains a given watermark. The video standard I chose to use was H.264 (also known as MPEG-4 AVC), as it offers a significant efficiency improvement over previous video compression standards. A couple of tests that can be considered representative of most image manipulations and attacks were performed. The program passed all tests, suggesting that the watermarking mechanism of this thesis can be expected to be rather robust, at least for the video sequence used. By looking at the watermarked video sequences and comparing them to the originals, or by measuring the signal-to-noise ratio, one can also see that the watermarks are unobtrusive. The execution times were also measured: compared to coding and decoding an H.264 video stream, inserting and extracting watermarks takes much less time, though calculating a detection threshold takes roughly twice as long as decoding the sequence.
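A common way to realise the embed/detect pair this abstract describes is additive spread-spectrum watermarking with correlation detection. The sketch below is a generic illustration under that assumption, not the thesis's H.264 implementation: it marks plain samples, whereas an H.264 watermarker would typically operate on transform coefficients of the coded stream.

```python
import random

def embed(signal, key, strength=2.0):
    """Add a key-derived pseudo-random +/-1 sequence to the host samples.
    Generic spread-spectrum sketch; H.264 embedding would target
    transform coefficients instead of raw samples."""
    rng = random.Random(key)
    wm = [rng.choice((-1.0, 1.0)) for _ in signal]
    return [s + strength * w for s, w in zip(signal, wm)], wm

def detect(signal, key, threshold=0.5):
    """Correlate the received signal with the key's sequence; a high
    normalised correlation indicates the watermark is present."""
    rng = random.Random(key)
    wm = [rng.choice((-1.0, 1.0)) for _ in signal]
    corr = sum(s * w for s, w in zip(signal, wm)) / len(signal)
    return corr > threshold

host = [0.0] * 1000
marked, _ = embed(host, key=42)
print(detect(marked, key=42))  # True
print(detect(host, key=42))    # False
```

Because the pseudo-random sequences for different keys are nearly uncorrelated, detection with the wrong key yields a correlation near zero, which is what makes the same machinery usable for per-purchaser fingerprinting.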
25

Fractal Image Coding Based on Classified Range Regions

USUI, Shin'ichi, TANIMOTO, Masayuki, FUJII, Toshiaki, KIMOTO, Tadahiko, OHYAMA, Hiroshi 20 December 1998 (has links)
No description available.
26

Combined source-channel coding for a power and bandwidth constrained noisy channel

Raja, Nouman Saeed 17 February 2005 (has links)
This thesis proposes a framework for combined source-channel coding over a power- and bandwidth-constrained noisy channel. The framework is then applied to progressive image transmission using constant-envelope M-ary phase-shift keying (MPSK) signaling over an additive white Gaussian noise (AWGN) channel. First, the framework is developed for uncoded MPSK signaling; it is then extended to include coded modulation using trellis-coded modulation (TCM) for MPSK signaling. Simulation results show that coded MPSK signaling performs 3.1 to 5.2 dB better than uncoded MPSK signaling, depending on the constellation size. Finally, an adaptive TCM system is presented for practical implementation of the proposed scheme; it outperforms the uncoded MPSK system over all signal-to-noise ratio (Es/No) ranges for various MPSK modulation formats. In the second part of this thesis, the performance of the scheme is investigated from the channel-capacity point of view. Using powerful channel codes such as turbo and low-density parity-check (LDPC) codes, the combined source-channel coding scheme is shown to be within 1 dB of the performance limit with MPSK channel signaling.
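The trade-off the abstract describes can be made concrete with the standard approximation for uncoded MPSK symbol error rate on an AWGN channel, P_s ≈ 2·Q(√(2·Es/N0)·sin(π/M)). This is textbook material offered as illustration; the thesis's coded (TCM) performance curves sit several dB to the left of these uncoded ones.

```python
import math

def mpsk_ser(es_n0_db, M):
    """Approximate symbol error rate of uncoded M-ary PSK on AWGN:
    P_s ~= 2*Q(sqrt(2*Es/N0) * sin(pi/M)), using erfc since
    2*Q(x) = erfc(x / sqrt(2))."""
    es_n0 = 10.0 ** (es_n0_db / 10.0)          # dB -> linear
    q_arg = math.sqrt(2.0 * es_n0) * math.sin(math.pi / M)
    return math.erfc(q_arg / math.sqrt(2.0))

# At the same Es/N0, 8-PSK is markedly less reliable than QPSK --
# the reason larger constellations need coding (or more power).
print(mpsk_ser(10.0, 4))  # QPSK
print(mpsk_ser(10.0, 8))  # 8-PSK
```

Sweeping `es_n0_db` for each M reproduces the familiar waterfall curves against which the coded-modulation gains (3.1 to 5.2 dB in the abstract) are measured.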
27

Region-based variable quantization for JPEG image compression

Golner, Mitchell Ari 01 January 1998 (has links)
No description available.
28

Exploiting spatial and temporal redundancies for vector quantization of speech and images

Meh Chu, Chu 07 January 2016 (has links)
The objective of the proposed research is to compress data such as speech, audio, and images using a new re-ordering vector quantization approach that exploits the transition probability between consecutive code vectors in a signal. Vector quantization is the process of encoding blocks of samples from a data sequence by replacing every input vector with a reproduction vector from a dictionary. Shannon's rate-distortion theory states that signals encoded as blocks of samples have better rate-distortion performance than when encoded on a sample-by-sample basis. As such, for any given signal, vector quantization achieves a lower coding rate than scalar quantization at a given distortion. Vector quantization, however, does not take advantage of the inter-vector correlation between successive input vectors in data sequences, and it has been demonstrated that real signals have significant inter-vector correlation. This correlation has led to vector quantization approaches that encode input vectors based on previously encoded vectors. Several methods have been proposed in the literature to exploit the dependence between successive code vectors: predictive vector quantization, dynamic codebook re-ordering, and finite-state vector quantization are examples of schemes that use inter-vector correlation. Predictive vector quantization and finite-state vector quantization predict the reproduction vector for a given input vector by using past input vectors. Dynamic codebook re-ordering vector quantization has the same reproduction vectors as standard vector quantization; its algorithm is based on re-ordering indices, whereby existing reproduction vectors are assigned new channel indices according to a structure that orders them by increasing dissimilarity.
Hence, an input vector encoded by the standard vector quantization method is transmitted through a channel with new indices such that index 0 is assigned to the reproduction vector closest to the previous reproduction vector, and larger index values are assigned to reproduction vectors at larger distances from it. Dynamic codebook re-ordering assumes that the reproduction vectors of two successive vectors of real signals are typically close to each other according to a distance metric, although two successively encoded vectors may occasionally be relatively far apart. Our likelihood codebook re-ordering vector quantization algorithm exploits the structure within a signal by exploiting the non-uniformity of the reproduction-vector transition probability in a data sequence. Input vectors that have a higher probability of transition from prior reproduction vectors are assigned smaller index values: the code vectors most likely to follow a given vector are assigned indices closer to 0, while the less likely are assigned indices of higher value. This re-ordering gives the reproduction dictionary a structure suitable for entropy coding such as Huffman and arithmetic coding. Since such transitions are common in real signals, it is expected that our proposed algorithm, when combined with entropy coding algorithms such as binary arithmetic and Huffman coding, will result in lower bit rates for the same distortion than a standard vector quantization algorithm. The re-ordering approach on quantized indices can be useful in speech, image, and audio transmission; by applying it to these data types, we expect to achieve lower coding rates for a given distortion or perceptual quality. This reduced coding rate makes our proposed algorithm useful for the transmission and storage of larger image and speech streams over their respective communication channels.
The use of truncation in the likelihood codebook re-ordering scheme results in much lower compression rates without significantly distorting the perceptual quality of the signals. Today, text and other multimedia signals may benefit from this additional layer of likelihood re-ordering compression.
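The likelihood re-ordering step described above can be sketched as follows. This is an illustrative reduction, not the thesis's algorithm: given the previously transmitted code vector, all codebook entries are ranked by how often they have historically followed it, so likely successors receive small indices that an entropy coder can represent cheaply. Training, tie-breaking, and truncation details are simplified.

```python
from collections import Counter

def reorder_indices(prev_index, transition_counts, codebook_size):
    """Rank codebook entries by observed transition frequency from
    'prev_index' (most frequent first, ties broken by original index),
    so likely successors get small channel indices. A simplified sketch
    of likelihood codebook re-ordering."""
    counts = transition_counts.get(prev_index, Counter())
    return sorted(range(codebook_size),
                  key=lambda j: (-counts[j], j))

# Toy transition statistics: after code vector 2, vector 0 has been
# observed 5 times and vector 3 twice.
stats = {2: Counter({0: 5, 3: 2})}
ranking = reorder_indices(2, stats, 4)
print(ranking)           # [0, 3, 1, 2]
print(ranking.index(0))  # 0 -- the most likely successor gets index 0
```

The transmitted symbol is then the position of the chosen reproduction vector in this ranking; a skewed transition distribution concentrates probability mass on small symbols, which is exactly what Huffman or arithmetic coding exploits.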
29

Automatic source camera identification by lens aberration and JPEG compression statistics

Choi, Kai-san., 蔡啟新. January 2006 (has links)
published_or_final_version / abstract / Electrical and Electronic Engineering / Master / Master of Philosophy
30

Complex Bases, Number Systems and Their Application to Fractal-Wavelet Image Coding

Piché, Daniel G. January 2002 (has links)
This thesis explores new approaches to the analysis of functions by combining tools from the fields of complex bases, number systems, iterated function systems (IFS) and wavelet multiresolution analyses (MRA). The foundation of this work is grounded in the identification of a link between two-dimensional non-separable Haar wavelets and complex bases. The theory of complex bases and this link are generalized to higher dimensional number systems. Tilings generated by number systems are typically fractal in nature. This often yields asymmetry in the wavelet trees of functions during wavelet decomposition. To acknowledge this situation, a class of extensions of functions is developed. These are shown to be consistent with the Mallat algorithm. A formal definition of local IFS on wavelet trees (LIFSW) is constructed for MRA associated with number systems, along with an application to the inverse problem. From these investigations, a series of algorithms emerge, namely the Mallat algorithm using addressing in number systems, an algorithm for extending functions and a method for constructing LIFSW operators in higher dimensions. Applications to image coding are given and ideas for further study are also proposed. Background material is included to assist readers less familiar with the varied topics considered. In addition, an appendix provides a more detailed exposition of the fundamentals of IFS theory.
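A small concrete instance of the complex-base machinery this abstract builds on: every Gaussian integer has a unique expansion in base β = −1+i with digits {0, 1}, and the set of "fractions" in this system tiles the plane as the fractal twin dragon, which is the link to two-dimensional non-separable Haar wavelets. The digit-extraction rule below is the standard one for this base, shown as an illustration rather than as the thesis's construction.

```python
def to_base_minus1_plus_i(z):
    """Digits (least significant first) of a Gaussian integer in base
    beta = -1+1j with digit set {0, 1}. The digit is forced: z - d must
    be divisible by beta, i.e. d = (Re z + Im z) mod 2."""
    beta = complex(-1, 1)
    digits = []
    while z != 0:
        d = int(z.real + z.imag) % 2
        digits.append(d)
        z = (z - d) / beta
        z = complex(round(z.real), round(z.imag))  # clear float residue
    return digits or [0]

def from_digits(digits):
    """Evaluate an expansion back to a Gaussian integer."""
    beta = complex(-1, 1)
    return sum(d * beta ** k for k, d in enumerate(digits))

print(to_base_minus1_plus_i(complex(2, 0)))  # [0, 0, 1, 1]
print(from_digits(to_base_minus1_plus_i(complex(-3, 4))))  # (-3+4j)
```

Note that no sign digit is needed: unlike base 2 on the integers, base −1+i represents every Gaussian integer, positive or negative, with the same two digits.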