1. Image and video compression using the wavelet transform. Lewis, A. S. January 1995.
No description available.

2. Prototype waveform interpolation based low bit rate speech coding. Yaghmaie, Khashayar. January 1997.
Advances in digital technology in the last decade have motivated the development of very efficient, high-quality speech compression algorithms. While the main target of early low bit rate coding systems was the production of intelligible speech at low information rates, the expansion of new applications such as mobile satellite systems increased the demand for high-quality speech at the lowest possible bit rates. This led to efficient parametric models of the speech production system, which became the basis of powerful speech compression algorithms such as CELP and multiband excitation. CELP is very efficient at medium bit rates and achieves almost toll quality at 8 kb/s; however, its performance degrades rapidly at bit rates below 4.8 kb/s. Sinusoidal coding algorithms, and in particular the multiband excitation technique, have proved able to produce high-quality speech at bit rates below 5 kb/s. In recent years another efficient speech compression algorithm, prototype waveform interpolation (PWI), has emerged. PWI introduced a novel model that proved very efficient at removing redundant information from speech. While early PWI systems produced high-quality speech at bit rates around 3.5 kb/s, later versions produce even higher quality at bit rates as low as 2.4 kb/s. The key to the success of PWI is the way it reduces the distortion associated with low bit rate coding; the price is a very high computational demand, which has been the main hurdle to real-time applications. The aim of the research in this thesis is the development of low-complexity PWI systems that do not sacrifice quality. While most PWI systems target efficient coding of the excitation signal in the LP model of speech, this research focuses on exploiting PWI to encode the original speech directly. In the first part of the thesis, basic techniques in low bit rate speech coding are described and the tools needed by a PWI-based coding system are developed. In the second part, the original PWI algorithm operating in the LP residual domain is briefly explained, and application of PWI in the speech domain is introduced as a way to cope with problems associated with the original PWI. To demonstrate the abilities of this approach, various coding schemes operating in the range of 1.85 to 2.95 kb/s are developed. In the final stage, a new technique that combines two powerful low bit rate coding techniques, multiband excitation and PWI, is developed to produce high-quality synthetic speech at 2.6 kb/s.
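To illustrate the core idea behind PWI, the following toy sketch (not taken from the thesis) linearly cross-fades between two pitch-cycle prototypes of equal length to reconstruct the segment between them; real codecs additionally align the prototypes in phase, track the pitch period, and operate either on the LP residual or, as in this thesis, directly on the speech signal.

    import numpy as np

    def interpolate_prototypes(p0, p1, n_cycles):
        # Generate intermediate pitch cycles by linearly cross-fading
        # between two prototype waveforms of equal length (alignment,
        # pitch tracking and energy normalisation are omitted).
        cycles = []
        for k in range(1, n_cycles + 1):
            a = k / (n_cycles + 1.0)            # interpolation weight in (0, 1)
            cycles.append((1.0 - a) * p0 + a * p1)
        return np.concatenate(cycles)

    # toy usage: two sinusoidal "prototypes", each one pitch period long
    t = np.linspace(0, 2 * np.pi, 80, endpoint=False)
    p0 = np.sin(t)
    p1 = 0.6 * np.sin(t) + 0.2 * np.sin(2 * t)
    segment = interpolate_prototypes(p0, p1, n_cycles=8)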

3. Image compression using locally sensitive hashing. Chucri, Samer Gerges. 18 December 2013.
The problem of archiving photos is becoming increasingly important as image databases grow more popular and larger in size. Take any social networking website, where users share hundreds of photos each, resulting in billions of images to be stored. Ideally, one would like to archive these images with minimal storage by exploiting the redundancy they share, without sacrificing quality. We suggest a compression algorithm that compresses across images rather than compressing images individually, an approach that to our knowledge has not been adopted before. This report presents the design of a new image database compression tool. In addition, we implement a complete system in C++ and show the significant gains achieved in some cases, where the initial data is reduced by 90%. One of the main tools we use is Locally Sensitive Hashing (LSH), a relatively new technique mainly used for similarity search in high dimensions.
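As a rough illustration of how LSH can group similar images before cross-image compression, the sketch below (an assumption about the general technique, not the thesis's exact pipeline) uses random-hyperplane hashing: each image's feature vector is mapped to a bit signature, and images sharing a signature become candidates for joint compression.

    import numpy as np

    def lsh_signature(features, planes):
        # One bit per hyperplane, set when the feature vector lies on
        # its positive side; similar vectors tend to share signatures.
        bits = (features @ planes.T) > 0
        return tuple(bits.astype(int))

    rng = np.random.default_rng(0)
    planes = rng.standard_normal((16, 64))      # 16 hyperplanes, 64-dim features

    buckets = {}
    for img_id, feat in enumerate(rng.standard_normal((1000, 64))):  # stand-in feature vectors
        buckets.setdefault(lsh_signature(feat, planes), []).append(img_id)
    # images in the same bucket are candidates for compression across images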

4. Efficient image compression system using a CMOS transform imager. Lee, Jungwon. January 2009.
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010. / Committee Chair: Anderson, David; Committee Members: Dorsey, John; Hasler, Paul; Kang, Sung Ha; Romberg, Justin. Part of the SMARTech Electronic Thesis and Dissertation Collection.

5. Lifting schemes for wavelet filters of trigonometric vanishing moments. Cheng, Ho-Yin. January 2002.
Thesis (M.Phil.)--University of Hong Kong, 2003. / Includes bibliographical references (leaves 80). Also available in print.

6. En studie i komprimeringsalgoritmer / A study in compression algorithms. Sjöstrand, Mattias Håkansson. January 2005.
Compression algorithms can be used everywhere. For example, when you watch a DVD movie, a lossy algorithm is used for both picture and sound; when you back up your data, you might use a lossless algorithm. This thesis explains how many of the more common lossless compression algorithms work. During the work on this thesis I also developed a new lossless compression algorithm and compared it to the more common algorithms by testing them on five different types of files. The result was that the new algorithm was comparable to the other algorithms in compression ratio, and in some cases it performed better than the others.

7. Evaluation of Spectrum Data Compression Algorithms for Edge-Applications in Industrial Tools. Ring, Johanna. January 2024.
Data volume grows every day as more and more is digitalized, which puts data management to the test. The smart tools developed by Atlas Copco save and transmit data to the cloud as a service, so that their customers can review errors in tightenings. A problem is the amount of data lost in this process. A tightening cycle usually contains thousands of data points, and the storage space it requires is too great for the tool's hardware. Today many of the data points are deleted, and only a small portion of scattered data from the cycle is saved and transmitted. To avoid overfilling the storage space, the data needs to be minimized. This study focuses on comparing data compression algorithms that could solve this problem. An initial literature study identified numerous data compression algorithms with their advantages and disadvantages. Two types of compression algorithms are distinguished: lossy compression, where data is compressed by losing data points or precision, and lossless compression, where no data is lost. Two lossy and two lossless algorithms are selected and evaluated with respect to their compression ratio, speed and error tolerance: Poor Man's Compression - Midrange (PMC-MR) and SWING-filter are the lossy algorithms, while Gorilla and Fixed-Point Compression (FPC) are the lossless ones. The achieved compression ratios range from 39% to 99%. As combinations of a lossy and a lossless algorithm yield the best compression ratios with low error tolerance, PMC-MR with Gorilla is suggested as best suited for Atlas Copco's needs.
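As an illustration of the lossy side, below is a minimal sketch of a PMC-MR-style compressor (based on the commonly described form of the algorithm; the thesis's exact variant, parameters and data format may differ): a segment grows while all its values stay within a 2*eps window, and each closed segment is stored as a single midrange value.

    def pmc_mr(values, eps):
        # Poor Man's Compression - Midrange, one pass over the series.
        # Output: list of (last_index_of_segment, midrange_value); every
        # original value differs from its segment's midrange by at most eps.
        segments = []
        lo = hi = values[0]
        for i, v in enumerate(values[1:], start=1):
            if max(hi, v) - min(lo, v) > 2 * eps:    # adding v would break the bound
                segments.append((i - 1, (lo + hi) / 2.0))
                lo = hi = v
            else:
                lo, hi = min(lo, v), max(hi, v)
        segments.append((len(values) - 1, (lo + hi) / 2.0))
        return segments

    # toy usage on a short, made-up trace: two segments with midranges ~1.05 and ~3.2
    print(pmc_mr([1.0, 1.1, 1.05, 3.2, 3.3, 3.25, 3.1], eps=0.2))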

8. Akcelerace kompresního algoritmu LZ4 v FPGA / Acceleration of LZ4 Compression Algorithm in FPGA. Marton, Dominik. January 2017.
This project describes the implementation of the LZ4 compression algorithm in a C/C++-like language that can be used to generate VHDL programs for FPGA integrated circuits embedded in accelerated network interface controllers (NICs). Based on the algorithm specification, software versions of the LZ4 compressor and decompressor are implemented and then transformed into a synthesizable form, from which fully functional VHDL code for both components is generated. The execution time and compression ratio of all implementations are then compared. The project also demonstrates the usability and impact of high-level synthesis and of a high-level approach, known from common programming languages, to the design and implementation of hardware applications.
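For orientation, the sketch below shows the greedy, hash-table-based match finding that LZ4-style compressors use (a simplified software illustration that assumes a single candidate position per hash; it is not byte-compatible with LZ4's block format and is unrelated to the project's HLS code).

    def lz4_style_matches(data, min_match=4):
        # Greedy LZ77-style matching: a hash table keyed by 4-byte
        # sequences proposes one earlier position; emitted tokens are
        # (preceding_literals, match_offset, match_length).
        table = {}
        tokens, i, lit_start = [], 0, 0
        while i + min_match <= len(data):
            key = bytes(data[i:i + min_match])
            cand = table.get(key)
            table[key] = i
            if cand is not None and data[cand:cand + min_match] == data[i:i + min_match]:
                length = min_match
                while i + length < len(data) and data[cand + length] == data[i + length]:
                    length += 1
                tokens.append((bytes(data[lit_start:i]), i - cand, length))
                i += length
                lit_start = i
            else:
                i += 1
        tokens.append((bytes(data[lit_start:]), 0, 0))   # trailing literals, no match
        return tokens

    print(lz4_style_matches(b"abcabcabcabcxyz"))   # [(b'abc', 3, 9), (b'xyz', 0, 0)]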

9. Compression Selection for Columnar Data using Machine-Learning and Feature Engineering. Persson, Douglas; Juelsson Larsen, Ludvig. January 2023.
There is a continuously growing demand for improved solutions that provide both efficient storage and efficient retrieval of big data for analytical purposes. This thesis researches the use of machine learning together with feature engineering to recommend the most cost-effective compression algorithm and encoding combination for columns in a columnar database management system (DBMS). The framework consists of a cost function calculated from compression time, decompression time, and compression ratio. An XGBoost machine-learning model is trained on labels provided by the cost function to recommend the most cost-effective combination for columnar data within a column- or vector-oriented DBMS. While the methods are applied to ClickHouse, one of the most popular open-source column-oriented DBMSs on the market, the results are broadly applicable to column-oriented data that share data types and characteristics with IoT telemetry data. Using billions of available rows of numeric real business data obtained at Axis Communications in Lund, Sweden, a set of features is engineered to accurately describe the characteristics of a given column. The proposed framework allows the business interests (compression time, decompression time, and compression ratio) to be weighted to determine the individually optimal cost-effective solution. The model reaches an accuracy of 99% on the test dataset and 90.1% on unseen data by leveraging data features that are predictive of the performance of compression algorithms and encodings. Following ClickHouse strategies and the most suitable practices in the field, combinations of general-purpose compression algorithms and data encodings are analysed that together yield the best results in efficiently compressing the data of certain columns. Applying the unweighted recommended combinations to all columns, the framework increased the average compression speed by 95.46%, reducing the time to compress the columns from 31.17 seconds to 13.17 seconds. Decompression speed increased by 59.87%, reducing the time to decompress the columns from 2.63 seconds to 2.02 seconds, at the cost of a 66.05% lower compression ratio, which increased the storage requirements by 94.9 MB. In column and vector databases, chunks of data belonging to a certain column are often stored together on disk, so choosing the right compression algorithm can lower the storage requirements and boost database throughput.
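A minimal sketch of how such a cost-driven recommender can be wired up is shown below (the feature set, weights and normalisation are illustrative assumptions, not the thesis's exact formulation; only the XGBoost classifier mirrors the described setup).

    import numpy as np
    import xgboost as xgb

    def cost(comp_time, decomp_time, ratio, w=(1.0, 1.0, 1.0)):
        # Weighted cost of one (algorithm, encoding) combination on one
        # column; lower is better. The weights trade off compression time,
        # decompression time and compression ratio against each other.
        return w[0] * comp_time + w[1] * decomp_time + w[2] * (1.0 / ratio)

    # hypothetical data: engineered per-column features (e.g. cardinality,
    # value range, sortedness) and measured costs for 5 candidate combinations
    rng = np.random.default_rng(0)
    X = rng.random((500, 8))
    costs = rng.random((500, 5))
    y = costs.argmin(axis=1)                 # label = cheapest combination per column

    model = xgb.XGBClassifier(n_estimators=200, max_depth=6)
    model.fit(X, y)
    recommended = model.predict(X[:3])       # recommended combination for new columns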

10. Überblick und Klassifikation leichtgewichtiger Kompressionsverfahren im Kontext hauptspeicherbasierter Datenbanksysteme / Overview and classification of lightweight compression techniques in the context of main-memory database systems. Hildebrandt, Juliana. 22 July 2015.
In the context of in-memory database systems, lightweight compression algorithms play a decisive role in realizing efficient storage and processing of large volumes of data in main memory. Compared with classical compression techniques such as Huffman coding, lightweight compression algorithms achieve comparable compression rates by incorporating context knowledge, and they allow faster compression and decompression. The variety of lightweight compression algorithms has grown in recent years, since incorporating context knowledge offers large optimization potential. To cope with this variety, we have worked on the modularization of lightweight compression algorithms and developed a general compression scheme. By exchanging individual modules, or merely their input parameters, different algorithms can easily be realized.
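As a toy illustration of the modular idea (an illustrative example, not the scheme from the paper), two lightweight modules, delta encoding and run-length encoding, can be composed into one concrete algorithm; swapping a module (e.g. bit packing instead of run-length encoding) yields a different algorithm within the same scheme.

    def delta_encode(values):
        # Delta module: store the first value and successive differences.
        return [values[0]] + [b - a for a, b in zip(values, values[1:])]

    def rle_encode(values):
        # Run-length module: collapse runs of equal values into [value, count].
        out = []
        for v in values:
            if out and out[-1][0] == v:
                out[-1][1] += 1
            else:
                out.append([v, 1])
        return out

    # composing the modules: small, repeating deltas compress well
    column = [100, 101, 102, 103, 103, 103, 110, 111]
    print(rle_encode(delta_encode(column)))   # [[100, 1], [1, 3], [0, 2], [7, 1], [1, 1]]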