41 |
Empirical analysis of BWT-based lossless image compression / Bhupathiraju, Kalyan Varma. January 2010
Thesis (M.S.)--West Virginia University, 2010. / Title from document title page. Document formatted into pages; contains v, 61 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 54-56).
|
42 |
Adaptive edge-based prediction for lossless image compression / Parthe, Rahul G. January 1900
Thesis (M.S.)--West Virginia University, 2005. / Title from document title page. Document formatted into pages; contains vii, 90 p. : ill. Includes abstract. Includes bibliographical references (p. 84-90).
|
43 |
Statistical data compression by optimal segmentation. Theory, algorithms and experimental results. / Steiner, Gottfried. 09 1900
The work deals with statistical data compression, or data reduction, by a general class of classification methods. The data compression results in a representation of the data set by a partition or by some typical points (called prototypes). The optimization problems are related to minimum variance partitions and principal point problems. A fixpoint method and an adaptive approach are applied to solve these problems. The work presents the theoretical background of the optimization problems and lists pseudo-code for the numerical solution of the data compression. The main part of the work concentrates on practical questions that arise when carrying out a data compression: the determination of a suitable number of representing points, the choice of an objective function, the establishment of an adjacency structure, and the improvement of the fixpoint algorithm. The performance of the proposed methods and algorithms is compared and evaluated experimentally, and numerous examples deepen the understanding of the applied methods. (author's abstract)
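For orientation, the fixpoint step for the minimum-variance partition / principal point problems is essentially a Lloyd-style iteration: partition the data around the current prototypes, then move each prototype to the mean of its cell. The sketch below illustrates only that generic iteration; it is not the author's code, and the function name, parameters, and data are illustrative.

```python
import numpy as np

def fixpoint_prototypes(data, k, iters=100, seed=0):
    """Lloyd-style fixpoint iteration for k representative points (prototypes).

    Alternates two steps until the prototypes stop moving:
      1. partition the data into minimum-variance cells around the prototypes,
      2. replace each prototype by the mean of its cell.
    """
    rng = np.random.default_rng(seed)
    protos = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Step 1: assign every point to its nearest prototype.
        dists = np.linalg.norm(data[:, None, :] - protos[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 2: move each prototype to the centroid of its cell.
        new_protos = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else protos[j]
            for j in range(k)
        ])
        if np.allclose(new_protos, protos):
            break  # fixpoint reached
        protos = new_protos
    return protos, labels

# Compress a 2-D data set into 5 prototypes (the "representation" of the data).
points = np.random.default_rng(1).normal(size=(500, 2))
prototypes, assignment = fixpoint_prototypes(points, k=5)
```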
|
44 |
A heuristic method for reducing message redundancy in a file transfer environment / Bodwell, William Robert. January 1976
Intercomputer communications involves the transfer of information between intelligent hosts. Since communication costs are almost proportional to the amount of data transferred, the processing capability of the respective hosts might advantageously be applied through pre-processing and post-processing of data to reduce redundancy. The major emphasis of this research is development of the Substitution Method which minimizes data transfer between hosts required to reconstruct user JCL files, Fortran source files, and data files.
The technique requires that a set of user files from each category be examined for the frequency distribution of symbols, fixed strings, and repeated symbol strings, which reveals both symbol and structural redundancy. Information gathered from these files, combined with the user-created Source Language Syntax Table, generates Encoding/Decoding Tables that are used to reduce both kinds of redundancy. The Encoding/Decoding Tables allow frequently encountered strings to be represented by only one or two symbols through the use of table shift symbols; a table shift symbol lets a less frequently encountered symbol of the original alphabet be represented as an entry in a Secondary Encoding/Decoding Table. A technique is also described that enables a programmer to easily modify a Fortran program to take advantage of the Substitution Method's ability to compress data files by removing both informational and structural redundancy.
Each user file requested to be transferred is pre-processed at cost C[prep] to remove data (both symbol and structural redundancy) that need not be transferred for faithful reproduction of the file. The file is transferred over a noiseless channel at cost C[ptran]; the channel consists of presently available or proposed services of the common carriers and specialized common carriers. The received file is post-processed to reconstruct the original source file at cost C[post]. The combined costs of pre-processing, transferring, and post-processing are compared with the cost C[otran] of transferring the entire file in its original form. / Ph. D.
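To make the table mechanism concrete, the following sketch shows the general idea of a two-level substitution coder: frequent strings collapse to a single code from a primary table, and a reserved table shift symbol escapes to a secondary table for rare symbols. The table contents, code values, and function are invented for illustration and are not taken from the dissertation.

```python
# Illustrative two-level substitution coder: frequent strings are replaced by
# single codes from a primary table, and a reserved "shift" code escapes to a
# secondary table for rarely used symbols. The tables below are invented; the
# dissertation derives them from measured symbol/string frequencies and a
# Source Language Syntax Table.
SHIFT = 0xFF  # reserved table-shift symbol

PRIMARY = {"      ": 0x80, "WRITE(6,": 0x81, "FORMAT(": 0x82}   # frequent strings
SECONDARY = {"@": 0x01, "~": 0x02}                              # rare symbols

def encode(text: str) -> bytes:
    out = bytearray()
    i = 0
    while i < len(text):
        # Longest-match lookup in the primary table.
        match = next((s for s in sorted(PRIMARY, key=len, reverse=True)
                      if text.startswith(s, i)), None)
        if match:
            out.append(PRIMARY[match])                   # one symbol for a frequent string
            i += len(match)
        elif text[i] in SECONDARY:
            out += bytes([SHIFT, SECONDARY[text[i]]])    # two symbols via the shift code
            i += 1
        else:
            out.append(ord(text[i]))                     # ordinary symbols pass through
            i += 1
    return bytes(out)

print(encode("      WRITE(6,100) X"))  # frequent Fortran strings collapse to single codes
```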
|
45 |
RANGE TELEMETRY IMPROVEMENT AND MODERNIZATION / Chalfant, Timothy A.; Irving, Charles E. 10 1900
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The system throughput capacities of modern data systems exceed the bit rate capacity of current range telemetry capabilities. Coupling this with the shrinking spectrum allocated for telemetry results in a serious problem for Test, Training, and Space telemetry users. Acknowledging this problem, the Department of Defense (DoD) has embarked on an aggressive improvement and modernization program that will benefit both government and commercial range providers and users. The ADVANCED RANGE TELEMETRY (ARTM) program was created and funded by the Central Test and Evaluation Investment Program (CTEIP) under the Office of the Secretary of Defense, Undersecretary for Acquisition and Technology, to address this problem. The ARTM program goals are to improve the efficiency of spectrum usage by changing historical methods of acquiring telemetry data and transmitting it from systems under test to range customers. The program is initiating advances in coding, compression, data channel assignment, and modulation. Due to the strong interactions among these four dimensions, the effort is integrated into a single focused program. This paper describes the ARTM program and how academic research, emerging technology, and commercial applications will lay the foundation for future development.
|
46 |
An On-Board Instrumentation System for High-Rate Medium Caliber Projectiles / Bukowski, Edward; Don, Michael; Grzybowski, David; Harkins, Thomas. 10 1900
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / The U.S. Army Research Laboratory developed an on-board telemetry instrumentation system to obtain measurements of the in-flight dynamics of medium caliber projectiles. The small size, high launch acceleration, and extremely high spin rates of these projectiles created many design challenges. Particularly challenging were the high spin rates, necessitating the development of a data compression scheme for solar sensors. Flight tests successfully captured data for spin rates exceeding 1000 Hertz (1 kHz).
|
47 |
Higher Compression from the Burrows-Wheeler Transform with New Algorithms for the List Update Problem / Chapin, Brenton. 08 1900
Burrows-Wheeler compression is a three-stage process in which the data is transformed with the Burrows-Wheeler Transform, then transformed with Move-To-Front, and finally encoded with an entropy coder. Move-To-Front, Transpose, and Frequency Count are some of the many algorithms used for the List Update problem. In 1985, competitive analysis first showed the superiority of Move-To-Front over Transpose and Frequency Count for the List Update problem with arbitrary data. Earlier studies due to Bitner assumed independent identically distributed data and showed that while Move-To-Front adapts to a distribution faster, incurring less overwork, the asymptotic costs of Frequency Count and Transpose are lower. The improvements to Burrows-Wheeler compression covered in this work increase the amount, not the speed, of compression. Best x of 2x-1 is a new family of algorithms created to improve on Move-To-Front's processing of the output of the Burrows-Wheeler Transform, which resembles piecewise independent identically distributed data. Other algorithms analyzed for both the middle stage of Burrows-Wheeler compression and the List Update problem, in terms of overwork, asymptotic cost, and competitive ratio, are several variations of Move One From Front and part of the randomized algorithm Timestamp. The Best x of 2x-1 family includes Move-To-Front, the part of Timestamp of interest, and Frequency Count. Lastly, a greedy choosing scheme, Snake, switches back and forth between two List Update algorithms as the amount of compression each achieves fluctuates, increasing overall compression. The Burrows-Wheeler Transform is based on the sorting of contexts. The other improvements are better sorting orders, such as “aeioubcdf...” instead of the standard alphabetical “abcdefghi...” on English text data, together with an algorithm for computing orders for any data, and Gray code sorting instead of standard sorting. Both techniques lessen the overwork incurred by whatever List Update algorithm is used by reducing the difference between adjacent sorted contexts.
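The middle stage discussed above is plain Move-To-Front, which the Best x of 2x-1 family generalizes. A minimal sketch of standard MTF coding (not of the new variants; names and data are illustrative) shows why run-heavy BWT output turns into mostly small integers that an entropy coder handles well.

```python
def move_to_front_encode(data: bytes) -> list[int]:
    """Move-To-Front: emit each symbol's current position in a recency list,
    then move that symbol to the front. After a Burrows-Wheeler Transform,
    runs of identical symbols become runs of small integers (mostly 0s)."""
    alphabet = list(range(256))          # recency list, initially 0..255
    output = []
    for byte in data:
        index = alphabet.index(byte)     # cost of the symbol under the current list
        output.append(index)
        alphabet.pop(index)              # move the symbol to the front
        alphabet.insert(0, byte)
    return output

def move_to_front_decode(indices: list[int]) -> bytes:
    alphabet = list(range(256))
    out = bytearray()
    for index in indices:
        byte = alphabet.pop(index)       # the index selects the symbol to output
        out.append(byte)
        alphabet.insert(0, byte)
    return bytes(out)

bwt_like = b"nnnbaaaaaaa"                # run-heavy data, as BWT output tends to be
codes = move_to_front_encode(bwt_like)   # small indices dominate the output
assert move_to_front_decode(codes) == bwt_like
```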
|
48 |
Associative neural networks: properties, learning, and applications. January 1994
by Chi-sing Leung. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 236-244). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background of Associative Neural Networks --- p.1 / Chapter 1.2 --- A Distributed Encoding Model: Bidirectional Associative Memory --- p.3 / Chapter 1.3 --- A Direct Encoding Model: Kohonen Map --- p.6 / Chapter 1.4 --- Scope and Organization --- p.9 / Chapter 1.5 --- Summary of Publications --- p.13 / Chapter I --- Bidirectional Associative Memory: Statistical Properties and Learning --- p.17 / Chapter 2 --- Introduction to Bidirectional Associative Memory --- p.18 / Chapter 2.1 --- Bidirectional Associative Memory and its Encoding Method --- p.18 / Chapter 2.2 --- Recall Process of BAM --- p.20 / Chapter 2.3 --- Stability of BAM --- p.22 / Chapter 2.4 --- Memory Capacity of BAM --- p.24 / Chapter 2.5 --- Error Correction Capability of BAM --- p.28 / Chapter 2.6 --- Chapter Summary --- p.29 / Chapter 3 --- Memory Capacity and Statistical Dynamics of First Order BAM --- p.31 / Chapter 3.1 --- Introduction --- p.31 / Chapter 3.2 --- Existence of Energy Barrier --- p.34 / Chapter 3.3 --- Memory Capacity from Energy Barrier --- p.44 / Chapter 3.4 --- Confidence Dynamics --- p.49 / Chapter 3.5 --- Numerical Results from the Dynamics --- p.63 / Chapter 3.6 --- Chapter Summary --- p.68 / Chapter 4 --- Stability and Statistical Dynamics of Second order BAM --- p.70 / Chapter 4.1 --- Introduction --- p.70 / Chapter 4.2 --- Second order BAM and its Stability --- p.71 / Chapter 4.3 --- Confidence Dynamics of Second Order BAM --- p.75 / Chapter 4.4 --- Numerical Results --- p.82 / Chapter 4.5 --- Extension to higher order BAM --- p.90 / Chapter 4.6 --- Verification of the conditions of Newman's Lemma --- p.94 / Chapter 4.7 --- Chapter Summary --- p.95 / Chapter 5 --- Enhancement of BAM --- p.97 / Chapter 5.1 --- Background --- p.97 / Chapter 5.2 --- Review on Modifications of BAM --- p.101 / Chapter 5.2.1 --- Change of the encoding method --- p.101 / Chapter 5.2.2 --- Change of the topology --- p.105 / Chapter 5.3 --- Householder Encoding Algorithm --- p.107 / Chapter 5.3.1 --- Construction from Householder Transforms --- p.107 / Chapter 5.3.2 --- Construction from iterative method --- p.109 / Chapter 5.3.3 --- Remarks on HCA --- p.111 / Chapter 5.4 --- Enhanced Householder Encoding Algorithm --- p.112 / Chapter 5.4.1 --- Construction of EHCA --- p.112 / Chapter 5.4.2 --- Remarks on EHCA --- p.114 / Chapter 5.5 --- Bidirectional Learning --- p.115 / Chapter 5.5.1 --- Construction of BL --- p.115 / Chapter 5.5.2 --- The Convergence of BL and the memory capacity of BL --- p.116 / Chapter 5.5.3 --- Remarks on BL --- p.120 / Chapter 5.6 --- Adaptive Ho-Kashyap Bidirectional Learning --- p.121 / Chapter 5.6.1 --- Construction of AHKBL --- p.121 / Chapter 5.6.2 --- Convergent Conditions for AHKBL --- p.124 / Chapter 5.6.3 --- Remarks on AHKBL --- p.125 / Chapter 5.7 --- Computer Simulations --- p.126 / Chapter 5.7.1 --- Memory Capacity --- p.126 / Chapter 5.7.2 --- Error Correction Capability --- p.130 / Chapter 5.7.3 --- Learning Speed --- p.157 / Chapter 5.8 --- Chapter Summary --- p.158 / Chapter 6 --- BAM under Forgetting Learning --- p.160 / Chapter 6.1 --- Introduction --- p.160 / Chapter 6.2 --- Properties of Forgetting Learning --- p.162 / Chapter 6.3 --- Computer Simulations --- p.168 / Chapter 6.4 --- Chapter Summary --- p.168 / Chapter II --- Kohonen Map: Applications in Data compression and Communications --- p.170 / Chapter 7 --- Introduction to Vector Quantization and Kohonen Map --- p.171 / Chapter 7.1 --- Background on Vector quantization --- p.171 / Chapter 7.2 --- Introduction to LBG algorithm --- p.173 / Chapter 7.3 --- Introduction to Kohonen Map --- p.174 / Chapter 7.4 --- Chapter Summary --- p.179 / Chapter 8 --- Applications of Kohonen Map in Data Compression and Communications --- p.181 / Chapter 8.1 --- Use Kohonen Map to design Trellis Coded Vector Quantizer --- p.182 / Chapter 8.1.1 --- Trellis Coded Vector Quantizer --- p.182 / Chapter 8.1.2 --- Trellis Coded Kohonen Map --- p.188 / Chapter 8.1.3 --- Computer Simulations --- p.191 / Chapter 8.2 --- Kohonen Map: Combined Vector Quantization and Modulation --- p.195 / Chapter 8.2.1 --- Impulsive Noise in the received data --- p.195 / Chapter 8.2.2 --- Combined Kohonen Map and Modulation --- p.198 / Chapter 8.2.3 --- Computer Simulations --- p.200 / Chapter 8.3 --- Error Control Scheme for the Transmission of Vector Quantized Data --- p.213 / Chapter 8.3.1 --- Motivation and Background --- p.214 / Chapter 8.3.2 --- Trellis Coded Modulation --- p.216 / Chapter 8.3.3 --- "Combined Vector Quantization, Error Control, and Modulation" --- p.220 / Chapter 8.3.4 --- Computer Simulations --- p.223 / Chapter 8.4 --- Chapter Summary --- p.226 / Chapter 9 --- Conclusion --- p.232 / Bibliography --- p.236
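Part I of this thesis analyzes and enhances the bidirectional associative memory. As background only, the sketch below shows the standard outer-product (correlation) encoding and bidirectional recall that such work starts from; it is not any of the enhanced algorithms listed above (HCA, EHCA, BL, AHKBL), and all pattern values are illustrative.

```python
import numpy as np

# Baseline bidirectional associative memory: outer-product encoding of bipolar
# pattern pairs and thresholded bidirectional recall.
def bam_encode(X, Y):
    """Correlation (outer-product) encoding of bipolar (+1/-1) pattern pairs."""
    return sum(np.outer(x, y) for x, y in zip(X, Y))

def bam_recall(W, x, steps=10):
    """Recall: alternate y = sign(x W) and x = sign(W y) until the state is stable."""
    for _ in range(steps):
        y = np.sign(x @ W);  y[y == 0] = 1
        x_new = np.sign(W @ y);  x_new[x_new == 0] = 1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y

# Two stored bipolar pattern pairs.
X = [np.array([1, -1, 1, -1, 1, -1]), np.array([1, 1, 1, -1, -1, -1])]
Y = [np.array([1, 1, -1, -1]),        np.array([-1, 1, -1, 1])]
W = bam_encode(X, Y)
# A probe equal to X[0] with one bit flipped converges back to the pair (X[0], Y[0]).
x_recalled, y_recalled = bam_recall(W, np.array([1, -1, 1, -1, 1, 1]))
```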
|
49 |
Analysis and Design of Lossless Bi-level Image Coding Systems / Guo, Jianghong. January 2000
Lossless image coding deals with the problem of representing an image with a minimum number of binary bits from which the original image can be fully recovered without any loss of information. Most lossless image coding algorithms reach the goal of efficient compression by taking care of the spatial correlations and statistical redundancy lying in images. Context based algorithms are the typical algorithms in lossless image coding. One key problem in context based lossless bi-level image coding algorithms is the design of context templates. By using carefully designed context templates, we can effectively employ the information provided by surrounding pixels in an image. In almost all image processing applications, image data is accessed in a raster scanning manner and is treated as a 1-D integer sequence rather than 2-D data. In this thesis, we present a quadrisection scanning method which is better than raster scanning in that more adjacent surrounding pixels are incorporated into context templates. Based on quadrisection scanning, we develop several context templates and propose several image coding schemes for both sequential and progressive lossless bi-level image compression. Our results show that our algorithms perform better than raster scanning based algorithms such as JBIG1, which is used in this thesis as a reference. Also, the application of 1-D grammar based codes in lossless image coding is discussed. 1-D grammar based codes outperform LZ77/LZ78 based compression utilities for general data compression, and they are also effective in lossless image coding. Several coding schemes for bi-level image compression via 1-D grammar codes are provided in this thesis, especially the parallel switching algorithm, which combines the power of 1-D grammar based codes and context based algorithms. Most of our results are comparable to or better than those afforded by JBIG1.
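The quadrisection idea can be pictured as a recursive quadrant-by-quadrant visit of the image rather than row-by-row raster order, so that a pixel's already-coded neighbors surround it on more sides. The sketch below shows only that generic recursive traversal; the thesis defines its own specific quadrisection order, which may differ, and all names here are illustrative.

```python
def quadrant_scan(rows: int, cols: int):
    """Recursively quadrisect an image and emit pixel coordinates quadrant by
    quadrant. This illustrates the general recursive idea behind quadrisection
    scanning; the thesis specifies its own visiting order, which may differ."""
    def scan(r0, r1, c0, c1):
        if r1 - r0 <= 1 and c1 - c0 <= 1:
            if r1 > r0 and c1 > c0:
                yield (r0, c0)            # a single remaining pixel
            return
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        # Visit the four sub-blocks in a fixed order, recursing into each.
        for block in ((r0, rm, c0, cm), (r0, rm, cm, c1),
                      (rm, r1, c0, cm), (rm, r1, cm, c1)):
            yield from scan(*block)
    yield from scan(0, rows, 0, cols)

# Compare against raster order on a 4x4 image.
order = list(quadrant_scan(4, 4))
print(order[:8])  # first two quadrants: (0,0), (0,1), (1,0), (1,1), (0,2), ...
```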
|