About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Using semantic knowledge to improve compression on log files /

Otten, Frederick John. January 2008 (has links)
Thesis (M.Sc. (Computer Science))--Rhodes University, 2009.
52

A reference guide to JPEG compression /

Goodenow, Daniel P. January 1993 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 1993. / Typescript. Includes bibliographical references.
53

Efficient software implementation of the JBIG compression standard /

Smith, Craig M. January 1993 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 1993. / Typescript. Includes bibliographical references (leaves 72-74).
54

On-line stochastic processes in data compression /

Bunton, Suzanne. January 1996 (has links)
Thesis (Ph. D.)--University of Washington, 1996. / Vita. Includes bibliographical references (p. [150]-155).
55

Empirical analysis of BWT-based lossless image compression

Bhupathiraju, Kalyan Varma. January 2010 (has links)
Thesis (M.S.)--West Virginia University, 2010. / Title from document title page. Document formatted into pages; contains v, 61 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 54-56).
56

Adaptive edge-based prediction for lossless image compression

Parthe, Rahul G. January 1900 (has links)
Thesis (M.S.)--West Virginia University, 2005. / Title from document title page. Document formatted into pages; contains vii, 90 p. : ill. Includes abstract. Includes bibliographical references (p. 84-90).
57

Real-time loss-less data compression

Toufie, Moegamat Zahir January 2000 (has links)
Thesis (MTech (Information Technology))--Cape Technikon, Cape Town, 2000 / Data stored on disks generally contains significant redundancy. A mechanism or algorithm that recodes the data to lessen its size could possibly double or triple the effective data that could be stored on the media. One mechanism for doing this is data compression. Many compression algorithms currently exist, but each one has its own advantages as well as disadvantages. The objective of this study is to formulate a new compression algorithm that could be implemented in a real-time mode in any file system. The new compression algorithm should also execute as fast as possible, so as not to cause a lag in the file system's performance. This study focuses on binary data of any type, whereas previous articles such as (Huffman, 1952:1098), (Ziv & Lempel, 1977:337; 1978:530), (Storer & Szymanski, 1982:928) and (Welch, 1984:8) have placed particular emphasis on text compression in their discussions of compression algorithms for computer data. The compression algorithm formulated by this study is Lempel-Ziv-Toufie (LZT). LZT is basically an LZ77 (Ziv & Lempel, 1977:337) encoder with a buffer size equal to that of the data block of the file system in question. Unlike LZ77, however, LZT discards the sliding-buffer principle and uses each data block of the input stream as one big buffer on which compression is performed. LZT also handles the encoding of a match slightly differently from LZ77. An LZT match is encoded by two bit streams, the first specifying the position of the match and the other specifying the length of the match. This combination is commonly referred to as a <position, length> pair. To encode the position portion of the <position, length> pair, we make use of a sliding-scale method, which works as follows.
Let the position in the input buffer of the current character to be compressed be held by inpos, where inpos is initially set to 3. A match can then only occur at position 1 or 2. Hence the position of a match will never be greater than 2, and the position portion can be encoded using only 1 bit. As inpos is incremented while each character is encoded, the match position range increases, and therefore more bits are required to encode the match position. The reason why a decimal 2 can be encoded using only 1 bit can be explained as follows. When decimal values are converted to binary, decimal 0 becomes binary 0, decimal 1 becomes binary 1, decimal 2 becomes binary 10, and so on. Since a position of 0 is never used, it is possible to devise a coding scheme in which a decimal value of 1 is represented by binary 0 and a decimal value of 2 by binary 1. Only 1 bit is therefore needed to encode match positions 1 and 2. In general, any decimal value n can be represented by the binary equivalent of (n - 1), and the number of bits needed to encode (n - 1) is the number of bits needed to encode the match position. The length portion of the <position, length> pair is encoded using a variable-length coding (VLC) approach. The VLC method performs its encoding using binary blocks: the first block is 3 bits long, with binary values 000 through 110 representing decimal values 1 through 7.
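The sliding-scale position coding described in this abstract can be sketched in a few lines of Python. This is an illustrative reading of the scheme, not code from the thesis; `encode_position` is a hypothetical helper name, and the bit-width rule follows the (n - 1) convention stated above.

```python
def encode_position(pos: int, inpos: int) -> str:
    """Encode a match position on the sliding scale described above.

    Positions run from 1 to inpos - 1. Since position 0 is never used,
    we store (pos - 1) in just enough bits to hold the largest possible
    value, which is (inpos - 2).
    """
    assert 1 <= pos < inpos, "match position must lie before inpos"
    width = max(1, (inpos - 2).bit_length())  # bits needed for (n - 1)
    return format(pos - 1, f"0{width}b")
```

At inpos = 3 a position costs 1 bit; the cost then grows logarithmically as inpos advances through the data block.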
58

New wavelet transforms and their applications to data compression

Singh, Inderpreet 15 March 2018 (has links)
With the evolution of multimedia systems, image and video compression is becoming the key enabling technology for delivering various image/video services over heterogeneous networks. The basic goal of image data compression is to reduce the bit rate for transmission and storage while either maintaining the original quality of the data or providing an acceptable quality. This thesis proposes a new wavelet transform for lossless compression of images with application to medical images. The transform uses integer arithmetic and is very computationally efficient. Then a new color image transformation, which is reversible and uses integer arithmetic, is proposed. The transformation reduces the redundancy among the red, green, and blue color bands. It approximates the luminance and chrominance components of the YIQ coordinate system. This transformation involves no floating point/integer multiplications or divisions, and is, therefore, very suitable for real-time applications where the number of CPU cycles needs to be kept to a minimum. A technique for lossy compression of an image data base is also proposed. The technique uses a wavelet transform and vector quantization for compression. The discrete cosine transform is applied to the coarsest scale wavelet coefficients to achieve even higher compression ratios without any significant increase in computational complexity. Wavelet denoising is used to reduce the image artifacts generated by quantizing the discrete cosine transform coefficients. This improves the subjective quality of the decompressed images for very low bit rate images (less than 0.5 bits per pixel). The thesis also deals with the real-time implementation of the wavelet transform. The new wavelet transform has been applied to speech signals. Both lossless and lossy techniques for speech coding have been implemented. 
The lossless technique involves using the reversible integer-arithmetic wavelet transform and Huffman coding to obtain the compressed bitstream. The lossy technique, on the other hand, quantizes the wavelet coefficients to obtain a higher compression ratio at the expense of some degradation in sound quality. The issues related to real-time wavelet compression are also discussed. Due to the limited size of memory on a DSP, a wavelet transform had to be applied to an input signal of finite length. The effects of varying the signal length on compression performance are also studied for different reversible wavelet transforms. The limitations of the proposed techniques are discussed and recommendations for future research are provided. / Graduate
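The abstract's reversible integer color transformation is not specified in detail, so as an illustration of the same idea (integer-only arithmetic, exactly invertible, luminance- and chrominance-like outputs) here is the well-known reversible color transform from JPEG 2000. This is a comparable sketch, not Singh's proposed transformation.

```python
def rct_forward(r: int, g: int, b: int) -> tuple[int, int, int]:
    """Forward reversible color transform: integer shifts only, no float ops."""
    y = (r + 2 * g + b) >> 2  # approximate luminance
    u = b - g                 # blue chrominance difference
    v = r - g                 # red chrominance difference
    return y, u, v

def rct_inverse(y: int, u: int, v: int) -> tuple[int, int, int]:
    """Exact inverse: recovers the original integers bit for bit."""
    g = y - ((u + v) >> 2)
    return v + g, g, u + g    # (r, g, b)
```

Because every step is an integer add, subtract, or shift, the round trip is lossless despite the floor in the luminance average, which is the property a lossless coder needs from its color stage.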
59

Evaluation of ANSI compression in a bulk data file transfer system

Chaulklin, Douglas Gary 20 January 2010 (has links)
This report evaluates the use of a newly proposed American National Standards Institute (ANSI) standard for data compression in a bulk data transmission system. An overview of the transmission system, the current compression method, and the ANSI algorithm are presented. A dynamic systems model is used to analyze the benefits and impacts of various alternatives to addressing the needs of the system. A decision model is built to summarize the alternatives based on the perceived problem contexts. / Master of Engineering
60

Statistical data compression by optimal segmentation. Theory, algorithms and experimental results.

Steiner, Gottfried 09 1900 (has links) (PDF)
The work deals with statistical data compression or data reduction by a general class of classification methods. The data compression results in a representation of the data set by a partition or by some typical points (called prototypes). The optimization problems are related to minimum-variance partitions and principal point problems. A fixpoint method and an adaptive approach are applied for the solution of these problems. The work contains a presentation of the theoretical background of the optimization problems and lists some pseudo-code for the numerical solution of the data compression. The main part of this work concentrates on practical questions for carrying out a data compression: the determination of a suitable number of representing points, the choice of an objective function, the establishment of an adjacency structure and the improvement of the fixpoint algorithm belong to the practically relevant topics. The performance of the proposed methods and algorithms is compared and evaluated experimentally. Numerous examples deepen the understanding of the applied methods. (author's abstract)
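The fixpoint method for computing prototypes can be illustrated with a minimal one-dimensional Lloyd-style iteration: alternate between forming the minimum-variance partition and moving each prototype to the mean of its cell. The function name and its defaults are assumptions for illustration, not the author's algorithm.

```python
def fixpoint_prototypes(data, protos, iters=100):
    """Iterate assignment and mean updates until the prototypes stop moving."""
    for _ in range(iters):
        # Assignment step: partition the data into minimum-variance cells,
        # each point going to its nearest prototype.
        cells = [[] for _ in protos]
        for x in data:
            nearest = min(range(len(protos)), key=lambda j: (x - protos[j]) ** 2)
            cells[nearest].append(x)
        # Fixpoint step: replace each prototype by the mean of its cell
        # (keep a prototype unchanged if its cell is empty).
        new = [sum(c) / len(c) if c else protos[j] for j, c in enumerate(cells)]
        if new == protos:  # fixpoint reached
            return new
        protos = new
    return protos
```

Each prototype that the iteration converges to is a candidate principal point; in practice such fixpoint schemes are restarted from several initial configurations to avoid poor local optima.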
