141 |
Time-varying linear predictive coding of speech signals. Hall, Mark Gilbert (January 1977)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1977. Microfiche copy available in Archives and Engineering. Includes bibliographical references.
|
142 |
Joint coding and modulation designs for bandlimited satellite channels. Hui, Joseph Y. N. (January 1981)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1981. Microfiche copy available in Archives and Engineering. Includes bibliographical references.
|
143 |
An empirical study on Chinese text compression: from character-based to word-based approach. Cheng, Kwok-Shing (January 1997)
Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 114-120).
Contents:
  Abstract (p.i)
  Acknowledgement (p.iii)
  Chapter 1  Introduction (p.1)
    1.1  Importance of Text Compression (p.1)
    1.2  Motivation of this Research (p.2)
    1.3  Characteristics of Chinese (p.2)
      1.3.1  Huge size of character set (p.3)
      1.3.2  Lack of word segmentation (p.3)
      1.3.3  Rich semantics (p.3)
    1.4  Different Coding Schemes for Chinese (p.4)
      1.4.1  Big5 Code (p.4)
      1.4.2  GB (Guo Biao) Code (p.4)
      1.4.3  HZ (Hanzi) Code (p.5)
      1.4.4  Unicode Code (p.5)
    1.5  Modeling and Coding for Chinese Text (p.6)
    1.6  Static and Adaptive Modeling (p.6)
    1.7  One-Pass and Two-Pass Modeling (p.8)
    1.8  Ordering of models (p.9)
    1.9  Two Sets of Benchmark Files and the Platform (p.9)
    1.10  Outline of the Thesis (p.11)
  Chapter 2  A Survey of Chinese Text Compression (p.13)
    2.1  Entropy for Chinese Text (p.14)
    2.2  Weakness of Traditional Compression Algorithms on Chinese Text (p.15)
    2.3  Statistical Class Algorithms for Compressing Chinese (p.16)
      2.3.1  Huffman coding scheme (p.17)
      2.3.2  Arithmetic Coding Scheme (p.22)
      2.3.3  Restricted Variable Length Coding Scheme (p.26)
    2.4  Dictionary-based Class Algorithms for Compressing Chinese (p.27)
    2.5  Experiments and Results (p.32)
    2.6  Chapter Summary (p.35)
  Chapter 3  Indicator Dependent Huffman Coding Scheme (p.37)
    3.1  Chinese Character Identification Routine (p.37)
    3.2  Reduction of Header Size (p.39)
    3.3  Semi-adaptive IDC for Chinese Text (p.44)
      3.3.1  Theoretical Analysis of Partition Technique for Compression (p.48)
      3.3.2  Experiments and Results of the Semi-adaptive IDC (p.50)
    3.4  Adaptive IDC for Chinese Text (p.54)
      3.4.1  Experiments and Results of the Adaptive IDC (p.57)
    3.5  Chapter Summary (p.58)
  Chapter 4  Cascading LZ Algorithms with Huffman Coding Schemes (p.59)
    4.1  Variations of Huffman Coding Scheme (p.60)
      4.1.1  Analysis of EPDC and PDC (p.60)
      4.1.2  Analysis of PDC, 16Huff and IDC (p.65)
      4.1.3  Time and Memory Consumption (p.71)
    4.2  Cascading LZSS with PDC, 16Huff and IDC (p.73)
      4.2.1  Experimental Results (p.76)
    4.3  Cascading LZW with PDC, 16Huff and IDC (p.79)
      4.3.1  Experimental Results (p.82)
    4.4  Chapter Summary (p.84)
  Chapter 5  Applying Compression Algorithms to Word-segmented Chinese Text (p.85)
    5.1  Background of word-based compression algorithms (p.86)
    5.2  Terminology and Benchmark Files for Word Segmentation Model (p.88)
    5.3  Word Segmentation Model (p.88)
    5.4  Chinese Entropy from Byte to Word (p.91)
    5.5  The Generalized Compression and Decompression Model for Word-segmented Chinese Text (p.92)
    5.6  Applying Huffman Coding Scheme to Word-segmented Chinese Text (p.94)
    5.7  Applying WLZSSHUF to Word-segmented Chinese Text (p.97)
    5.8  Applying WLZWHUF to Word-segmented Chinese Text (p.102)
    5.9  Match Ratio and Compression Ratio (p.105)
    5.10  Chapter Summary (p.108)
  Chapter 6  Concluding Remarks (p.110)
    6.1  Conclusions (p.110)
    6.2  Contributions (p.111)
    6.3  Future Directions (p.112)
      6.3.1  Integrate Decremental Coding Scheme with IDC (p.112)
      6.3.2  Re-order the Character Sequences in the Sliding Window of LZSS (p.113)
      6.3.3  Multiple Huffman Trees for Word-based Compression (p.113)
  Bibliography (p.114)
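As a toy illustration of the character-based versus word-based modeling question this thesis studies (this is not code from the thesis), the sketch below compares zero-order entropy at byte and at word granularity; the file name is a placeholder, and whitespace splitting stands in for a real Chinese word segmenter:

```python
import math
from collections import Counter

def zero_order_entropy(symbols):
    """Zero-order entropy, in bits per symbol, of a symbol sequence."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

data = open("corpus.txt", "rb").read()      # placeholder benchmark file
byte_h = zero_order_entropy(data)           # character/byte-level model
word_h = zero_order_entropy(data.split())   # crude word-level model

# A word model has many more distinct symbols, but each symbol spans several
# bytes, so the bits needed per original byte can fall -- the effect measured
# in Chapter 5 ("Chinese Entropy from Byte to Word") for word-segmented text.
print(f"byte-level: {byte_h:.2f} bits/symbol")
print(f"word-level: {word_h:.2f} bits/symbol")
```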
|
144 |
Dictionary-based Compression Algorithms in Mobile Packet Core. Tikkireddy, Lakshmi Venkata Sai Sri (January 2019)
With the rapid growth in technology, the amount of data to be transmitted and stored is increasing. The efficiency of information storage and retrieval has become a major bottleneck, which is where data compression comes into the picture. Data compression is a technique that effectively reduces the size of data, saving storage and speeding up the transmission of data from one place to another. Compression schemes come in many forms and are mainly categorized into lossy compression and lossless compression, where lossless compression is typically used to compress data that must be recovered exactly. In Ericsson's SGSN-MME, each user's data is compressed independently with one such technique, Deflate. Given the trade-off between its compression ratio and its compression and decompression speeds, Deflate is not optimal for the SGSN-MME's use case. To mitigate this problem, Deflate has to be replaced with a better-suited compression algorithm.
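The ratio/speed trade-off described above can be reproduced with zlib's Deflate implementation, whose level parameter trades speed against compression ratio; a minimal sketch, with a made-up payload standing in for SGSN-MME user data:

```python
import time
import zlib

payload = b"subscriber-context:" * 5000   # hypothetical stand-in for user data

for level in (1, 6, 9):                   # fastest, zlib default, best ratio
    t0 = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed = time.perf_counter() - t0
    assert zlib.decompress(compressed) == payload   # lossless round trip
    print(f"level {level}: {len(payload)} -> {len(compressed)} bytes "
          f"in {elapsed * 1000:.2f} ms")
```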
|
145 |
Distributed indexing and scalable query processing for interactive big data explorations. Guzun, Gheorghi (01 August 2016)
The past few years have brought a major surge in the volumes of collected data. More and more enterprises and research institutions find tremendous value in data analysis and exploration. Big Data analytics is used for improving customer experience, performing complex weather data integration and model prediction, and delivering personalized medicine, among many other services.
Advances in technology, along with high interest in big data, can only increase the demand on data collection and mining in the years to come.
As a result, and in order to keep up with the data volumes, data processing has become increasingly distributed. However, most of the distributed processing for large data is done by batch processing and interactive exploration is hardly an option. To efficiently support queries over large amounts of data, appropriate indexing mechanisms must be in place.
This dissertation proposes an indexing and query processing framework that can run on top of a distributed computing engine, to support fast, interactive data explorations in data warehouses. Our data processing layer is built around bit-vector based indices. This type of indexing features fast bit-wise operations and scales up well for high dimensional data. Additionally, compression can be applied to reduce the index size, and thus utilize less memory and network communication.
Our work can be divided into two areas: index compression and query processing.
Two compression schemes are proposed for sparse and dense bit-vectors. The design of these encoding methods is hardware-driven, and the query processing is optimized for the available computing hardware. Query algorithms are proposed for selection, aggregation, and other specialized queries. The query processing is supported on single machines, as well as computer clusters.
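As a toy illustration of why bit-vector indices make selections fast (this is not the dissertation's actual encoding or compression scheme), a conjunctive selection query reduces to a bitwise AND of per-value bitmaps:

```python
# Toy bitmap index: one Python int per attribute value, bit i set when row i
# matches. Real systems pack bits into machine words and compress the vectors;
# this sketch only shows why selections become bitwise operations.
def build_bitmaps(column):
    bitmaps = {}
    for row, value in enumerate(column):
        bitmaps[value] = bitmaps.get(value, 0) | (1 << row)
    return bitmaps

city = build_bitmaps(["NYC", "LA", "NYC", "SF", "LA", "NYC"])
tier = build_bitmaps(["gold", "gold", "silver", "gold", "silver", "gold"])

hits = city["NYC"] & tier["gold"]   # rows where city == NYC AND tier == gold
rows = [i for i in range(6) if hits >> i & 1]
print(rows)   # [0, 5]
```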
|
146 |
Error resilience in JPEG2000. Natu, Ambarish Shrikrishna (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW, January 2003)
The rapid growth of wireless communication and widespread access to information has resulted in a strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss in such a way as to maximize the received image quality. This thesis addresses this problem and provides an investigation of a forward error correction (FEC) technique, evaluated in the context of the emerging JPEG2000 standard. Little effort has been devoted in the JPEG2000 project to error resilience. The only standardized techniques are based on the insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps to localize errors and prevent them from propagating through the entire code-stream. Once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy to the bit-stream in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance concerning the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes, together with the use of resynchronization markers and some additional tools, can significantly improve image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure, and experimental results clearly demonstrate the viability of our approach.
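A rough sketch of the unequal protection idea, using the third-party reedsolo package (pip install reedsolo) for Reed-Solomon coding; the layer payloads and parity amounts below are invented for illustration and are not the combinations evaluated in the thesis:

```python
from reedsolo import RSCodec

# Heavier parity for the base quality layer, lighter for enhancement layers.
strong = RSCodec(32)   # corrects up to 16 byte errors per codeword
weak = RSCodec(8)      # corrects up to 4 byte errors per codeword

base_layer = b"\x10" * 100   # hypothetical: main header + first quality layer
enh_layer = b"\x20" * 100    # hypothetical: remaining quality layers

stream = strong.encode(base_layer) + weak.encode(enh_layer)

corrupted = bytearray(strong.encode(base_layer))
corrupted[3] ^= 0xFF                       # simulate channel bit errors
decoded, _, _ = strong.decode(corrupted)   # reedsolo >= 1.0 returns a 3-tuple
assert bytes(decoded) == base_layer        # the critical layer survives
```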
|
147 |
On Data Compression for TDOA Localization. Arbring, Joel; Hedström, Patrik (January 2010)
This master thesis investigates different approaches to data compression of common signal types in the context of localization by estimating time difference of arrival (TDOA). The thesis includes an evaluation of the compression schemes using recorded data, collected as part of the thesis work. This evaluation shows that compression is possible while preserving localization accuracy.

The recorded data is backed up with more extensive simulations using a free-space propagation model without attenuation. The signals investigated are flat-spectrum signals, signals using phase-shift keying, and single-sideband speech signals. Signals with low bandwidth are given precedence over high-bandwidth signals, since they require more data in order to get an accurate localization estimate.

The compression methods used are transform-based schemes. The transforms utilized are the Karhunen-Loève transform and the discrete Fourier transform. Different approaches for quantization of the transform components are examined, one of them being zonal sampling.

Localization is performed in the Fourier domain by calculating the steered response power from the cross-spectral density matrix. The simulations are performed in Matlab using three recording nodes in a symmetrical geometry.

Localization accuracy is compared with the Cramér-Rao bound for flat-spectrum signals, using the standard deviation of the localization error obtained from the compressed signals.
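For intuition, here is a minimal time-domain sketch of TDOA estimation by cross-correlation of two receiver channels (the thesis itself works in the Fourier domain via steered response power); the sample rate, signal, and delay are invented:

```python
import numpy as np

fs = 8000                                  # sample rate in Hz (invented)
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)              # flat-spectrum source signal

true_delay = 25                            # samples between the two receivers
x1 = s
x2 = np.roll(s, true_delay)                # ideal free-space propagation, no noise

corr = np.correlate(x2, x1, mode="full")   # cross-correlation of the channels
lag = np.argmax(corr) - (len(x1) - 1)      # peak location gives the delay
print(lag, lag / fs)                       # 25 samples -> TDOA in seconds
```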
|
148 |
Winpaz: A GUI for a new compression algorithm. Svensson, Andreas; Olsson, Björn (January 2007)
This is a ten-week, full-time bachelor's project carried out at Karlstad University during the spring term. The goal of our project was to design and implement a GUI for a new data compression algorithm called PAZ. At present, a vast array of compression/extraction GUIs is available, but none of them provides the possibility to incorporate a user-developed algorithm. Thus, we had to create our own, with support not only for PAZ, but also for ZIP, RAR, and other well-known archiving algorithms. And so, we have created a GUI that is well suited for an implementation of the new PAZ algorithm.

The PAZ algorithm has been developed by Martin Larsson. We chose, in collaboration with Martin, to name the GUI application Winpaz. We began by implementing two separate prototypes, which we handed out to a closed group of beta testers at the university. The reason for this test was to investigate how to design the various parts of the application to be user friendly. Using the results from the testers, we then developed our final version of the GUI.

Our goals were to implement a user-friendly GUI that supported PAZ as well as the most widespread algorithms already in use. We achieved the goal of designing a user-friendly GUI, and we implemented support for both ZIP and TAR, but had to abandon our efforts to implement RAR and PAZ support due to lack of time. The interface is, however, designed with the future incorporation of these algorithms in mind.

We are fairly pleased with our work, but we also recognize the need for added functionality in order to make the GUI a commercial-grade product. During this project we have, apart from broadening our knowledge and skill in C++ programming, also learned to use the IDE wxDevCpp, a powerful open-source tool for developing GUI applications based on the wxWidgets framework.
|
149 |
Comparison of DPCM and Subband Codec performance in the presence of burst errors. Bhutani, Meeta (31 August 1998)
This thesis is a preliminary study of the relative performance of two major speech compression techniques, Differential Pulse Code Modulation (DPCM) and Subband Coding (SBC), in the presence of transmission distortion. The combined effect of the channel distortions and the channel codec, including error correction, is represented by bursts of bit errors. While compression is critical since bandwidth is scarce in a wireless channel, channel distortions are greater and less predictable. Little to no work has addressed the impact of channel errors on the perceptual quality of speech, owing to the complexity of the problem. At the transmitter, the input signal is compressed to 24 kbps using either DPCM or SBC, quantized, binary encoded, and transmitted over the burst error channel. The reverse process is carried out at the receiver. DPCM achieves compression by removing redundant information in successive time-domain samples, while SBC uses lower-resolution quantizers to encode frequency bands of lower perceptual importance. The performance of these codecs is evaluated for BERs of 0.001 and 0.05, with burst lengths varying between 4 and 64 bits. Two different speech segments, one voiced and one unvoiced, are used in testing. Performance measures include two objective tests, signal-to-noise ratio (SNR) and segmental SNR, and a subjective test of perceptual quality, the Mean Opinion Score (MOS). The results obtained show that with a fixed BER and increasing burst length in bits, the total errors in the decoded speech decrease, thereby improving its perceptual quality for both DPCM and SBC. Informal subjective tests also demonstrate this trend, and indicate that distortion in DPCM seemed to be less perceptually degrading than in SBC. / Graduation date: 1999
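For intuition about the DPCM half of the comparison, a first-order DPCM codec sketch follows; the predictor coefficient, quantizer step, and test signal are illustrative, not the 24 kbps configuration used in the thesis:

```python
import numpy as np

def dpcm_encode(x, step=0.05, a=0.95):
    """First-order DPCM: quantize the prediction residual, not the sample."""
    pred, codes = 0.0, []
    for sample in x:
        residual = sample - a * pred
        code = int(round(residual / step))   # uniform residual quantizer
        codes.append(code)
        pred = a * pred + code * step        # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=0.05, a=0.95):
    pred, out = 0.0, []
    for code in codes:
        pred = a * pred + code * step        # mirror the encoder's predictor
        out.append(pred)
    return np.array(out)

t = np.linspace(0, 1, 800)
speech = 0.5 * np.sin(2 * np.pi * 5 * t)     # stand-in for a speech segment
rec = dpcm_decode(dpcm_encode(speech))
snr = 10 * np.log10(np.sum(speech**2) / np.sum((speech - rec)**2))
print(f"SNR: {snr:.1f} dB")
```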
|
150 |
Event compression using recursive least squares signal processing. Dove, Webster Pope (January 1980)
Originally published as thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1980. Bibliography: leaf 150. Supported by National Science Foundation Grants ENG76-24117 and ECS79-15226.
|