61 |
A heuristic method for reducing message redundancy in a file transfer environment / Bodwell, William Robert January 1976 (has links)
Intercomputer communications involves the transfer of information between intelligent hosts. Since communication costs are almost proportional to the amount of data transferred, the processing capability of the respective hosts might advantageously be applied through pre-processing and post-processing of data to reduce redundancy. The major emphasis of this research is the development of the Substitution Method, which minimizes the data that must be transferred between hosts to reconstruct user JCL files, Fortran source files, and data files.
The technique requires that a set of user files from each category be examined for the frequency distribution of symbols, fixed strings, and repeated symbol strings, thereby characterizing both symbol and structural redundancy. Information gathered during this examination, combined with the user-created Source Language Syntax Table, generates Encoding/Decoding Tables which are used to reduce both symbol and structural redundancy. The Encoding/Decoding Tables allow frequently encountered strings to be represented by only one or two symbols through the use of table shift symbols. The table shift symbols allow less frequently encountered symbols of the original alphabet to be represented as entries in a Secondary Encoding/Decoding Table. A technique is also described which enables a programmer to easily modify a Fortran program to take advantage of the Substitution Method's ability to compress data files by removing both informational and structural redundancy.
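For intuition only, here is a minimal sketch in Python of the two-level table idea; the string entries, code values, and shift symbol below are invented for illustration and are not the thesis's actual Encoding/Decoding Tables.

```python
# Hypothetical primary table: frequent Fortran/JCL strings map to single one-byte codes.
PRIMARY = {"      ": 0x01, "WRITE(6,": 0x02, "FORMAT(": 0x03, "DO ": 0x04}
SHIFT = 0x1B  # table shift symbol: escapes to the secondary table
# Hypothetical secondary table: rarer symbols cost two bytes (shift + index).
SECONDARY = {"@": 0x01, "#": 0x02, "~": 0x03}

def encode(text: str) -> list[int]:
    """Greedy longest-match substitution against the primary table."""
    out, i = [], 0
    while i < len(text):
        match = max((s for s in PRIMARY if text.startswith(s, i)),
                    key=len, default=None)
        if match:                          # frequent string -> one code
            out.append(PRIMARY[match])
            i += len(match)
        elif text[i] in SECONDARY:         # rare symbol -> shift + secondary index
            out.extend([SHIFT, SECONDARY[text[i]]])
            i += 1
        else:                              # ordinary symbol passes through
            out.append(ord(text[i]))       # (a real table remaps the whole
            i += 1                         #  alphabet so codes cannot collide)
    return out
```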
Each user file requested for transfer is pre-processed at cost C[prep] to remove data (both symbol and structural redundancy) which need not be transferred for faithful reproduction of the file. The file is transferred over a noiseless channel at cost C[ptran]. The channel consists of presently available or proposed services of the common carriers and specialized common carriers. The received file is post-processed to reconstruct the original source file at cost C[post]. The combined cost of pre-processing, transferring, and post-processing is compared with the cost C[otran] of transferring the entire file in its original form. / Ph. D.
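The break-even condition described here is simply C[prep] + C[ptran] + C[post] < C[otran]; a toy check with made-up cost figures:

```python
def substitution_cost(c_prep: float, c_ptran: float, c_post: float) -> float:
    """Total cost of the pre-process / transfer / post-process path."""
    return c_prep + c_ptran + c_post

# Illustrative numbers only: the method pays off when the combined cost
# stays below C[otran], the cost of sending the file in its original form.
c_otran = 10.00
print(substitution_cost(c_prep=0.40, c_ptran=6.50, c_post=0.35) < c_otran)  # True
```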
|
62 |
SOME MEASURED PERFORMANCE BOUNDS AND IMPLEMENTATION CONSIDERATIONS FOR THE LEMPEL-ZIV-WELCH DATA COMPACTION ALGORITHM / Jacobsen, H. D. 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California / The Lempel-Ziv-Welch (LZW) algorithm is a popular data compaction technique that has been
adopted by CCITT in its V.42bis recommendation and is often implemented in association
with the V.32 standard for 9600 bps modems. It has also been implemented as Microcom
Networking Protocol (MNP) Level 7, where it goes by the name of Enhanced Data
Compression. LZW compacts data by encoding frequently occurring input strings with a
single output symbol. The algorithm automatically generates a string dictionary for each
symbol at each end of the transmission path. The amount of compaction that can be
derived with the LZW algorithm varies with the type of data being transmitted and the
efficiency by which table entries can be indexed. Table indexing is usually implemented by
use of a hashing table. Although some manufacturers advertise a 4-to-1 gain in throughput,
this seems to be an extreme case. This paper documents an implementation of the exact
LZW algorithm. The compaction ratios measured in this paper are significantly lower, typically on the
order of 1-to-2 for ASCII text, with substantially less compaction for pre-compacted files
or files containing random bit patterns.
The efficiency of the LZW algorithm on ASCII text is shown to be a function of dictionary
size and block size. Although fewer transmitted symbols are required for larger dictionary
tables, the additional bits required for the symbol index marginally outweigh the efficiency
that is gained. The net effect is that dictionary sizes beyond 2K are
increasingly less efficient for input data block sizes of 10K or more. The author concludes
that the algorithm could be implemented as a direct table look-up rather than through a
hashing algorithm. This would allow the LZW to be implemented with very simple
firmware and with a maximum of hardware efficiency.
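To make the string-dictionary idea concrete, the following is an illustrative LZW encoder sketch in Python with a bounded dictionary; the paper's implementation and the V.42bis variant differ in details such as dictionary reset, index width, and string-table management.

```python
def lzw_compress(data: bytes, max_codes: int = 2048) -> list[int]:
    """Greedy LZW encoding: emit one code per longest already-known string."""
    dictionary = {bytes([i]): i for i in range(256)}  # seed with all single bytes
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in dictionary:
            w = wc                              # keep extending the current string
            continue
        out.append(dictionary[w])               # emit code for longest known prefix
        if len(dictionary) < max_codes:         # grow the table until it is full
            dictionary[wc] = len(dictionary)
        w = bytes([b])
    if w:
        out.append(dictionary[w])
    return out

# Rough gain estimate for a hypothetical input file and a 2K (11-bit) dictionary.
text = open("sample.txt", "rb").read()          # hypothetical file name
codes = lzw_compress(text)
print(len(text) * 8 / (len(codes) * 11))        # input bits / output bits
```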
|
63 |
RANGE TELEMETRY IMPROVEMENT AND MODERNIZATION / Chalfant, Timothy A., Irving, Charles E. 10 1900 (has links)
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The system throughput capacities of modern data systems exceed the bit rate capacity of
current range telemetry systems. Coupling this with the shrinking spectrum allocated
for telemetry results in a serious problem for the Test, Training, and Space telemetry users.
Acknowledging this problem, the Department of Defense (DoD) has embarked on an
aggressive improvement and modernization program that will benefit both the government
and commercial range providers and users. The ADVANCED RANGE TELEMETRY
(ARTM) program was created and funded by the Central Test and Evaluation Investment
Program (CTEIP) under the Office of the Secretary of Defense, Undersecretary for
Acquisition and Technology to address this problem. The ARTM program goals are to
improve the efficiency of spectrum usage by changing historical methods of acquiring
telemetry data and transmitting it from systems under test to range customers. The program
is initiating advances in coding, compression, data channel assignment, and modulation.
Due to the strong interactions of these four dimensions, the effort is integrated into a single
focused program. This paper describes the ARTM program and how academic research,
emerging technology, and commercial applications will lay the foundation for future
development.
|
64 |
An On-Board Instrumentation System for High-Rate Medium Caliber Projectiles / Bukowski, Edward, Don, Michael, Grzybowski, David, Harkins, Thomas 10 1900 (has links)
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / The U.S. Army Research Laboratory developed an on-board telemetry instrumentation system to obtain measurements of the in-flight dynamics of medium caliber projectiles. The small size, high launch acceleration, and extremely high spin rates of these projectiles created many design challenges. Particularly challenging were the high spin rates, necessitating the development of a data compression scheme for solar sensors. Flight tests successfully captured data for spin rates exceeding 1000 Hertz (1 kHz).
|
65 |
DATA COMPRESSION SYSTEM FOR VIDEO IMAGES / RAJYALAKSHMI, P.S., RAJANGAM, R.K. 10 1900 (has links)
International Telemetering Conference Proceedings / October 13-16, 1986 / Riviera Hotel, Las Vegas, Nevada / In most transmission channels bandwidth is at a premium, and an important attribute of any good digital signalling scheme is to utilise the bandwidth optimally for transmitting the information. A data compression system therefore plays a significant role in the transmission of picture data from any remote sensing satellite by exploiting the statistical properties of the imagery. The data rate required for transmission to ground can be reduced by using a suitable compression technique. A data compression algorithm has been developed for processing the images of the Indian Remote Sensing Satellite. Sample LANDSAT imagery and a reference photo are used for evaluating the performance of the system. The reconstructed images are obtained after compression to 1.5 bits per pixel and 2 bits per pixel, as against the original 7 bits per pixel. The technique used is the one-dimensional Hadamard transform. Histograms are computed for the various pictures used as samples. This paper describes the development of the hardware and software system and also indicates how the hardware can be adapted for a two-dimensional Hadamard transform technique.
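As a rough illustration of the approach (not the authors' hardware design), the sketch below applies a one-dimensional Hadamard transform to 8-pixel blocks and coarsely quantizes the coefficients; per-coefficient bit allocation and any entropy coding, which the real system would need to reach 1.5-2 bits per pixel, are omitted.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def compress_rows(image: np.ndarray, block: int = 8, bits: int = 2) -> np.ndarray:
    """1-D Hadamard transform along rows plus crude uniform quantization."""
    h = hadamard(block)
    out = np.zeros(image.shape, dtype=np.int32)
    levels = 2 ** bits
    for r in range(image.shape[0]):
        for c in range(0, image.shape[1] - block + 1, block):
            seg = image[r, c:c + block].astype(float)
            coeff = h @ seg / block                       # normalized forward transform
            step = max(np.abs(coeff).max(), 1e-9) / (levels / 2)
            out[r, c:c + block] = np.round(coeff / step)  # quantized coefficient indices
    return out
```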
|
66 |
Associative neural networks: properties, learning, and applications. January 1994 (has links)
by Chi-sing Leung. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 236-244). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background of Associative Neural Networks --- p.1 / Chapter 1.2 --- A Distributed Encoding Model: Bidirectional Associative Memory --- p.3 / Chapter 1.3 --- A Direct Encoding Model: Kohonen Map --- p.6 / Chapter 1.4 --- Scope and Organization --- p.9 / Chapter 1.5 --- Summary of Publications --- p.13 / Chapter I --- Bidirectional Associative Memory: Statistical Properties and Learning --- p.17 / Chapter 2 --- Introduction to Bidirectional Associative Memory --- p.18 / Chapter 2.1 --- Bidirectional Associative Memory and its Encoding Method --- p.18 / Chapter 2.2 --- Recall Process of BAM --- p.20 / Chapter 2.3 --- Stability of BAM --- p.22 / Chapter 2.4 --- Memory Capacity of BAM --- p.24 / Chapter 2.5 --- Error Correction Capability of BAM --- p.28 / Chapter 2.6 --- Chapter Summary --- p.29 / Chapter 3 --- Memory Capacity and Statistical Dynamics of First Order BAM --- p.31 / Chapter 3.1 --- Introduction --- p.31 / Chapter 3.2 --- Existence of Energy Barrier --- p.34 / Chapter 3.3 --- Memory Capacity from Energy Barrier --- p.44 / Chapter 3.4 --- Confidence Dynamics --- p.49 / Chapter 3.5 --- Numerical Results from the Dynamics --- p.63 / Chapter 3.6 --- Chapter Summary --- p.68 / Chapter 4 --- Stability and Statistical Dynamics of Second order BAM --- p.70 / Chapter 4.1 --- Introduction --- p.70 / Chapter 4.2 --- Second order BAM and its Stability --- p.71 / Chapter 4.3 --- Confidence Dynamics of Second Order BAM --- p.75 / Chapter 4.4 --- Numerical Results --- p.82 / Chapter 4.5 --- Extension to higher order BAM --- p.90 / Chapter 4.6 --- Verification of the conditions of Newman's Lemma --- p.94 / Chapter 4.7 --- Chapter Summary --- p.95 / Chapter 5 --- Enhancement of BAM --- p.97 / Chapter 5.1 --- Background --- p.97 / Chapter 5.2 --- Review on Modifications of BAM --- p.101 / Chapter 5.2.1 --- Change of the encoding method --- p.101 / Chapter 5.2.2 --- Change of the topology --- p.105 / Chapter 5.3 --- Householder Encoding Algorithm --- p.107 / Chapter 5.3.1 --- Construction from Householder Transforms --- p.107 / Chapter 5.3.2 --- Construction from iterative method --- p.109 / Chapter 5.3.3 --- Remarks on HCA --- p.111 / Chapter 5.4 --- Enhanced Householder Encoding Algorithm --- p.112 / Chapter 5.4.1 --- Construction of EHCA --- p.112 / Chapter 5.4.2 --- Remarks on EHCA --- p.114 / Chapter 5.5 --- Bidirectional Learning --- p.115 / Chapter 5.5.1 --- Construction of BL --- p.115 / Chapter 5.5.2 --- The Convergence of BL and the memory capacity of BL --- p.116 / Chapter 5.5.3 --- Remarks on BL --- p.120 / Chapter 5.6 --- Adaptive Ho-Kashyap Bidirectional Learning --- p.121 / Chapter 5.6.1 --- Construction of AHKBL --- p.121 / Chapter 5.6.2 --- Convergent Conditions for AHKBL --- p.124 / Chapter 5.6.3 --- Remarks on AHKBL --- p.125 / Chapter 5.7 --- Computer Simulations --- p.126 / Chapter 5.7.1 --- Memory Capacity --- p.126 / Chapter 5.7.2 --- Error Correction Capability --- p.130 / Chapter 5.7.3 --- Learning Speed --- p.157 / Chapter 5.8 --- Chapter Summary --- p.158 / Chapter 6 --- BAM under Forgetting Learning --- p.160 / Chapter 6.1 --- Introduction --- p.160 / Chapter 6.2 --- Properties of Forgetting Learning --- p.162 / Chapter 6.3 --- Computer Simulations --- p.168 / Chapter 6.4 --- Chapter Summary --- p.168 / Chapter II --- Kohonen Map: Applications in Data compression and Communications ---
p.170 / Chapter 7 --- Introduction to Vector Quantization and Kohonen Map --- p.171 / Chapter 7.1 --- Background on Vector quantization --- p.171 / Chapter 7.2 --- Introduction to LBG algorithm --- p.173 / Chapter 7.3 --- Introduction to Kohonen Map --- p.174 / Chapter 7.4 --- Chapter Summary --- p.179 / Chapter 8 --- Applications of Kohonen Map in Data Compression and Communications --- p.181 / Chapter 8.1 --- Use Kohonen Map to design Trellis Coded Vector Quantizer --- p.182 / Chapter 8.1.1 --- Trellis Coded Vector Quantizer --- p.182 / Chapter 8.1.2 --- Trellis Coded Kohonen Map --- p.188 / Chapter 8.1.3 --- Computer Simulations --- p.191 / Chapter 8.2 --- Kohonen Map: Combined Vector Quantization and Modulation --- p.195 / Chapter 8.2.1 --- Impulsive Noise in the received data --- p.195 / Chapter 8.2.2 --- Combined Kohonen Map and Modulation --- p.198 / Chapter 8.2.3 --- Computer Simulations --- p.200 / Chapter 8.3 --- Error Control Scheme for the Transmission of Vector Quantized Data --- p.213 / Chapter 8.3.1 --- Motivation and Background --- p.214 / Chapter 8.3.2 --- Trellis Coded Modulation --- p.216 / Chapter 8.3.3 --- "Combined Vector Quantization, Error Control, and Modulation" --- p.220 / Chapter 8.3.4 --- Computer Simulations --- p.223 / Chapter 8.4 --- Chapter Summary --- p.226 / Chapter 9 --- Conclusion --- p.232 / Bibliography --- p.236
|
67 |
Live deduplication storage of virtual machine images in an open-source cloud. January 2012 (has links)
Deduplication is a technique that eliminates the storage of redundant data blocks. In particular, it has been shown to effectively reduce the disk space for storing multi-gigabyte virtual machine (VM) images. However, there remain challenging deployment issues of enabling deduplication in a cloud platform, where VM images are regularly inserted and retrieved. We propose a kernel-space deduplication file system called LiveDFS, which can serve as a VM image storage backend in an open-source cloud platform that is built on low-cost commodity hardware configurations. LiveDFS is built on several novel design features. Specifically, the main feature of LiveDFS is to exploit spatial locality of placing deduplication metadata on disk with respect to the underlying file system layout. LiveDFS is POSIX-compliant and is implemented as a Linux kernel-space file system. We conduct testbed experiments of the read/write performance of LiveDFS using a dataset of 42 VM images of different Linux distributions. Our work justifies the feasibility of deploying LiveDFS in a cloud platform under commodity settings. / Detailed summary in vernacular field only. / Ng, Chun Ho. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 39-42). / Abstracts also in Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- LiveDFS Design --- p.5 / Chapter 2.1 --- File System Layout --- p.5 / Chapter 2.2 --- Deduplication Primitives --- p.6 / Chapter 2.3 --- Deduplication Process --- p.8 / Chapter 2.3.1 --- Fingerprint Store --- p.9 / Chapter 2.3.2 --- Fingerprint Filter --- p.11 / Chapter 2.4 --- Prefetching of Fingerprint Stores --- p.14 / Chapter 2.5 --- Journaling --- p.15 / Chapter 2.6 --- Ext4 File System --- p.17 / Chapter 3 --- Implementation Details --- p.18 / Chapter 3.1 --- Choice of Hash Function --- p.18 / Chapter 3.2 --- OpenStack Deployment --- p.19 / Chapter 4 --- Experiments --- p.21 / Chapter 4.1 --- I/O Throughput --- p.21 / Chapter 4.2 --- OpenStack Deployment --- p.26 / Chapter 5 --- Related Work --- p.34 / Chapter 6 --- Conclusions and Future Work --- p.37 / Bibliography --- p.39
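The core deduplication idea, fingerprinting fixed-size blocks and storing each unique block only once, can be sketched as follows; this is an in-memory toy model, not LiveDFS's on-disk layout, fingerprint filter, or prefetching logic, and the 4 KB block size and SHA-1 fingerprint are assumptions.

```python
import hashlib
from collections import defaultdict

BLOCK_SIZE = 4096  # assumed fixed-size blocks

class DedupStore:
    """Toy in-memory model of fingerprint-based block deduplication."""

    def __init__(self):
        self.fingerprints = {}            # fingerprint -> physical block id
        self.blocks = []                  # physical block storage
        self.refcount = defaultdict(int)  # reference counts for reclamation

    def write(self, data: bytes) -> list[int]:
        """Split data into blocks; store only blocks whose fingerprint is new."""
        block_ids = []
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            fp = hashlib.sha1(block).digest()
            if fp in self.fingerprints:
                bid = self.fingerprints[fp]       # duplicate: reuse existing block
            else:
                bid = len(self.blocks)            # new block: store it once
                self.blocks.append(block)
                self.fingerprints[fp] = bid
            self.refcount[bid] += 1
            block_ids.append(bid)
        return block_ids
```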
|
68 |
Kernel-space inline deduplication file systems for virtual machine image storage. January 2013 (has links)
We explore the use of deduplication for eliminating the storage of redundant data in RAID from a file-system design perspective. We propose ScaleDFS, a deduplication file system that seeks to achieve scalable read/write throughput in RAID. ScaleDFS is built on three novel design features. First, we improve the write throughput by exploiting multiple CPU cores to parallelize the processing of the cryptographic fingerprints that are used to identify redundant data. Second, we improve the read throughput by specifically caching in memory the recently read blocks that have been deduplicated. Third, we reduce the memory usage by enhancing the data structures that are used for fingerprint lookups. ScaleDFS is implemented as a POSIX-compliant, kernel-space driver module that can be deployed in commodity hardware configurations. We conduct microbenchmark experiments using synthetic workloads, and macrobenchmark experiments using a dataset of 42 VM images of different Linux distributions. We show that ScaleDFS achieves higher read/write throughput than existing open-source deduplication file systems in RAID. / Detailed summary in vernacular field only. / Ma, Mingcao. / "October 2012." / Thesis (M.Phil.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 39-42). / Abstracts also in Chinese. / Chapter 1 --- Introduction --- p.2 / Chapter 2 --- Literature Review --- p.5 / Chapter 2.1 --- Backup systems --- p.5 / Chapter 2.2 --- Use of special hardware --- p.6 / Chapter 2.3 --- Scalable storage --- p.6 / Chapter 2.4 --- Inline DFSs --- p.6 / Chapter 2.5 --- VM image storage with deduplication --- p.7 / Chapter 3 --- ScaleDFS Background --- p.8 / Chapter 3.1 --- Spatial Locality of Fingerprint Placement --- p.9 / Chapter 3.2 --- Prefetching of Fingerprint Stores --- p.12 / Chapter 3.3 --- Journaling --- p.13 / Chapter 4 --- ScaleDFS Design --- p.15 / Chapter 4.1 --- Parallelizing Deduplication --- p.15 / Chapter 4.2 --- Caching Read Blocks --- p.17 / Chapter 4.3 --- Reducing Memory Usage --- p.17 / Chapter 5 --- Implementation --- p.20 / Chapter 5.1 --- Choice of Hash Function --- p.20 / Chapter 5.2 --- OpenStack Deployment --- p.21 / Chapter 6 --- Experiments --- p.23 / Chapter 6.1 --- Microbenchmarks --- p.23 / Chapter 6.2 --- OpenStack Deployment --- p.28 / Chapter 6.3 --- VM Image Operations in a RAID Setup --- p.33 / Chapter 7 --- Conclusions and Future Work --- p.38 / Bibliography --- p.39
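The first design point, spreading fingerprint computation across CPU cores, can be illustrated with a user-space sketch; ScaleDFS does this inside the kernel, and the block size, worker count, and SHA-1 fingerprint here are assumptions.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

BLOCK_SIZE = 4096  # assumed fixed block size

def fingerprint(block: bytes) -> bytes:
    """Cryptographic fingerprint of one block (SHA-1 chosen for illustration)."""
    return hashlib.sha1(block).digest()

def fingerprint_blocks(data: bytes, workers: int = 4) -> list[bytes]:
    """Compute block fingerprints in parallel across worker processes."""
    blocks = [data[off:off + BLOCK_SIZE] for off in range(0, len(data), BLOCK_SIZE)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fingerprint, blocks, chunksize=64))

if __name__ == "__main__":                    # guard required for process spawning
    sample = b"x" * (BLOCK_SIZE * 1000)       # synthetic data, illustration only
    fps = fingerprint_blocks(sample)
    print(len(fps), len(set(fps)))            # 1000 blocks, 1 unique fingerprint
```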
|
69 |
Turbo codes for data compression and joint source-channel coding / Zhao, Ying. January 2007 (has links)
Thesis (Ph.D.)--University of Delaware, 2006. / Principal faculty advisor: Javier Garcia-Frias, Dept. of Electrical and Computer Engineering. Includes bibliographical references.
|
70 |
Low density generator matrix codes for source and channel coding / Zhong, Wei. January 2006 (has links)
Thesis (Ph.D.)--University of Delaware, 2006. / Principal faculty advisor: Javier Garcia-Frias, Dept. of Electrical and Computer Engineering. Includes bibliographical references.
|