The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Image and video compression using the wavelet transform

Lewis, A. S. January 1995 (has links)
No description available.
2

Prototype waveform interpolation based low bit rate speech coding

Yaghmaie, Khashayar January 1997 (has links)
Advances in digital technology in the last decade have motivated the development of very efficient and high quality speech compression algorithms. While the main target of early low bit rate coding systems was the production of intelligible speech at low information rates, the expansion of new applications such as mobile satellite systems increased the demand for high quality speech at the lowest possible bit rates. This resulted in the development of efficient parametric models of the speech production system. These models were the basis of powerful speech compression algorithms such as CELP and multiband excitation. CELP is a very efficient algorithm at medium bit rates and has achieved almost toll quality at 8 kb/s. However, the performance of CELP degrades rapidly at bit rates below 4.8 kb/s. Sinusoidal coding algorithms, and in particular the multiband excitation technique, have proved able to produce high quality speech at bit rates below 5 kb/s. In recent years, another efficient speech compression algorithm called prototype waveform interpolation (PWI) has emerged. PWI introduced a novel model which proved to be very efficient in removing redundant information from speech. While early PWI systems produced high quality speech at bit rates around 3.5 kb/s, the latest versions produce even higher quality at bit rates as low as 2.4 kb/s. The key to the success of PWI is the approach it takes to reducing the distortion associated with low bit rate coding algorithms. However, the price for this achievement is a very high computational demand, which has been the main hurdle to its real-time application. The aim of the research in this thesis is the development of low complexity PWI systems without sacrificing quality. While most PWI systems target efficient coding of the excitation signal in the LP model of speech, this research focuses on exploiting PWI to directly encode the original speech. In the first part of the thesis, basic techniques in low bit rate speech coding are described and suitable tools are developed for use in a PWI based coding system. In the second part, the original PWI algorithm operating in the LP residual domain is briefly explained, and application of PWI in the speech domain is introduced as a method to cope with the problems associated with the original PWI. To demonstrate the abilities of this approach, various coding schemes operating in the range of 1.85 to 2.95 kb/s are developed. In the final stage, a new technique which combines two powerful low bit rate coding techniques, i.e. multiband excitation and PWI, is developed to produce high quality synthetic speech at 2.6 kb/s.
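A minimal sketch of the prototype waveform idea described above: one pitch-cycle prototype is kept per frame, and the decoder rebuilds speech by interpolating between consecutive prototypes. The fixed pitch period, the linear interpolation, and the function names are illustrative assumptions, not the coder developed in the thesis.

```python
# Sketch of the PWI idea: keep one pitch cycle ("prototype") per frame and
# reconstruct speech by gradually morphing from one prototype to the next.
import numpy as np

def extract_prototype(frame, pitch_period):
    """Take the first full pitch cycle of the frame as the prototype."""
    return frame[:pitch_period].copy()

def synthesize(prev_proto, next_proto, n_samples):
    """Reconstruct n_samples by linearly interpolating between two prototypes."""
    period = len(prev_proto)
    out = np.empty(n_samples)
    for n in range(n_samples):
        alpha = n / n_samples          # evolve from the old to the new prototype
        phase = n % period             # cycle through the prototype waveform
        out[n] = (1 - alpha) * prev_proto[phase] + alpha * next_proto[phase]
    return out

# Toy usage: a slowly decaying 100 Hz "voiced" signal sampled at 8 kHz.
fs, period = 8000, 80
t = np.arange(4 * period) / fs
speech = np.sin(2 * np.pi * 100 * t) * np.linspace(1.0, 0.5, t.size)
p0 = extract_prototype(speech[:period], period)
p1 = extract_prototype(speech[-period:], period)
reconstruction = synthesize(p0, p1, speech.size)
```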
3

Image compression using locally sensitive hashing

Chucri, Samer Gerges 18 December 2013 (has links)
The problem of archiving photos is becoming increasingly important as image databases are growing more popular and larger in size. One could take the example of any social networking website, where users share hundreds of photos, resulting in billions of total images to be stored. Ideally, one would like to use minimal storage to archive these images by making use of the redundancy they share, while not sacrificing quality. We suggest a compression algorithm that aims at compressing across images, rather than compressing images individually. This is a novel approach that, to our knowledge, has not been adopted before. This report presents the design of a new image database compression tool. In addition, we implement a complete system in C++ and show the significant gains we achieve in some cases, where we compress 90% of the initial data. One of the main tools we use is Locally Sensitive Hashing (LSH), a relatively new technique mainly used for similarity search in high dimensions.
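A hedged sketch of the cross-image idea, under assumptions not stated in the abstract: fixed-size blocks are hashed with random-hyperplane LSH so that similar blocks fall into the same bucket, and only one representative plus small residuals is kept per bucket. The block size, the number of hyperplanes, and the function names are hypothetical.

```python
# Random-hyperplane LSH (SimHash) over image blocks: similar blocks get the same
# key, so one representative per bucket plus residuals can replace raw storage.
import numpy as np

rng = np.random.default_rng(0)
BLOCK = 8                                            # 8x8 pixel blocks (assumed)
planes = rng.standard_normal((16, BLOCK * BLOCK))    # 16 random hyperplanes -> 16-bit keys

def lsh_key(block):
    """Sign pattern of the block's projections onto the random hyperplanes."""
    bits = planes @ block.ravel() > 0
    return int(np.packbits(bits).view(np.uint16)[0])

def compress_blocks(blocks):
    """Group blocks by LSH key; keep one representative and per-block residuals."""
    buckets = {}
    for b in blocks:
        bucket = buckets.setdefault(lsh_key(b), {"rep": b, "residuals": []})
        bucket["residuals"].append(b - bucket["rep"])   # small values when blocks are similar
    return buckets

# Toy usage: near-duplicate blocks collapse onto a single representative.
base = rng.integers(0, 255, (BLOCK, BLOCK)).astype(float)
blocks = [base + rng.normal(0, 1, (BLOCK, BLOCK)) for _ in range(100)]
buckets = compress_blocks(blocks)
```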
4

Efficient image compression system using a CMOS transform imager

Lee, Jungwon. January 2009 (has links)
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010. / Committee Chair: Anderson, David; Committee Member: Dorsey, John; Committee Member: Hasler, Paul; Committee Member: Kang, Sung Ha; Committee Member: Romberg, Justin. Part of the SMARTech Electronic Thesis and Dissertation Collection.
5

Lifting schemes for wavelet filters of trigonometric vanishing moments

Cheng, Ho-Yin. January 2002 (has links)
Thesis (M.Phil.)--University of Hong Kong, 2003. / Includes bibliographical references (leaves 80). Also available in print.
6

En studie i komprimeringsalgoritmer / A study in compression algorithms

Sjöstrand, Mattias Håkansson January 2005 (has links)
Compression algorithms can be used everywhere. For example, when you watch a DVD movie, a lossy algorithm is used for both picture and sound. If you want to back up your data, you might use a lossless algorithm. This thesis explains how many of the more common lossless compression algorithms work. During the work on this thesis I also developed a new lossless compression algorithm, and compared it to the more common algorithms by testing all of them on five different types of files. The result was that the new algorithm was comparable to the other algorithms in compression ratio, and in some cases it performed better than the others.
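A minimal sketch of the kind of comparison described above, using only lossless compressors from the Python standard library; the thesis's own algorithm is not available here, and the file name is a placeholder.

```python
# Compress the same file with several standard lossless algorithms and report the
# compression ratio (original size / compressed size).
import bz2, lzma, zlib
from pathlib import Path

def compare(path):
    data = Path(path).read_bytes()
    codecs = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}
    for name, compress in codecs.items():
        compressed = compress(data)
        ratio = len(data) / len(compressed)      # >1 means the file got smaller
        print(f"{name:5s} ratio = {ratio:.2f}")

compare("sample.txt")   # placeholder file name
```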
7

Akcelerace kompresního algoritmu LZ4 v FPGA / Acceleration of LZ4 Compression Algorithm in FPGA

Marton, Dominik January 2017 (has links)
This project describes the implementation of the LZ4 compression algorithm in a C/C++-like language that can be used to generate VHDL programs for FPGA integrated circuits embedded in accelerated network interface controllers (NICs). Based on the algorithm specification, software versions of the LZ4 compressor and decompressor are implemented and then transformed into a synthesizable form, from which fully functional VHDL code for both components is generated. The execution time and compression ratio of all implementations are then compared. The project also demonstrates the usability and impact of high-level synthesis, bringing a high-level design approach known from common programming languages to the implementation of hardware applications.
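For intuition, a toy sketch of the match/literal structure at the core of LZ4-style compression: a hash table over 4-byte prefixes finds earlier occurrences, and the output alternates literal runs with (offset, length) back-references. This simplified sketch does not produce the real LZ4 block format used by the implementations in the thesis.

```python
# Toy LZ77/LZ4-style greedy matcher: repeated substrings become back-references.
def toy_lz_compress(data: bytes):
    table = {}                        # 4-byte prefix -> last position seen
    out, i, literal_start = [], 0, 0
    while i + 4 <= len(data):
        key = data[i:i + 4]
        j = table.get(key)
        table[key] = i
        if j is not None and data[j:j + 4] == key:
            # extend the match as far as the bytes keep agreeing
            length = 4
            while i + length < len(data) and data[j + length] == data[i + length]:
                length += 1
            out.append(("literals", data[literal_start:i]))
            out.append(("match", i - j, length))      # (offset, length) back-reference
            i += length
            literal_start = i
        else:
            i += 1
    out.append(("literals", data[literal_start:]))
    return out

tokens = toy_lz_compress(b"abcabcabcabcXYZabcabc")
# repeated substrings come out as ("match", offset, length) entries between literal runs
```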
8

Compression Selection for Columnar Data using Machine-Learning and Feature Engineering

Persson, Douglas, Juelsson Larsen, Ludvig January 2023 (has links)
There is a continuously growing demand for improved solutions that provide both efficient storage and efficient retrieval of big data for analytical purposes. This thesis researches the use of machine learning together with feature engineering to recommend the most cost-effective compression algorithm and encoding combination for columns in a columnar database management system (DBMS). The framework consists of a cost function calculated from compression time, decompression time, and compression ratio. An XGBoost machine learning model is trained on labels provided by the cost function to recommend the most cost-effective combination for columnar data within a column- or vector-oriented DBMS. While the methods are applied to ClickHouse, one of the most popular open-source column-oriented DBMSs on the market, the results are broadly applicable to column-oriented data that shares data types and characteristics with IoT telemetry data. Using billions of available rows of numeric real business data obtained at Axis Communications in Lund, Sweden, a set of features is engineered to accurately describe the characteristics of a given column. The proposed framework allows the business interests (compression time, decompression time, and compression ratio) to be weighted to determine the individually optimal cost-effective solution. The model reaches an accuracy of 99% on the test dataset and an accuracy of 90.1% on unseen data by leveraging data features that are predictive of the performance of compression algorithms and encodings. Following ClickHouse strategies and the most suitable practices in the field, combinations of general-purpose compression algorithms and data encodings are analysed that together yield the best results in efficiently compressing the data of certain columns. Applying the unweighted recommended combinations to all columns increased the average compression speed by 95.46%, reducing the time to compress the columns from 31.17 seconds to 13.17 seconds. Additionally, the decompression speed increased by 59.87%, reducing the time to decompress the columns from 2.63 seconds to 2.02 seconds, at the cost of a 66.05% lower compression ratio, which increased the storage requirements by 94.9 MB. In column and vector databases, chunks of data belonging to a certain column are often stored together on disk. Therefore, choosing the right compression algorithm can lower the storage requirements and boost database throughput.
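A hedged sketch of how such a selection framework can be wired together: a weighted cost over compression time, decompression time, and compression ratio labels each training column with its cheapest combination, and an XGBoost classifier learns to predict that label from engineered column features. The weights, the feature set, the candidate combinations, and the use of the xgboost Python package are illustrative assumptions, not the thesis's actual setup.

```python
# Label columns with the cheapest codec/encoding combination under a weighted cost,
# then train a classifier to predict that label from column features.
import numpy as np
from xgboost import XGBClassifier

def cost(comp_time, decomp_time, ratio, w=(1.0, 1.0, 1.0)):
    """Lower is better; the ratio enters inverted so a higher ratio lowers the cost."""
    w_c, w_d, w_r = w
    return w_c * comp_time + w_d * decomp_time + w_r / ratio

def label_column(measurements):
    """measurements: {combination_name: (comp_time, decomp_time, ratio)}."""
    return min(measurements, key=lambda name: cost(*measurements[name]))

# Toy training data: X holds per-column features (e.g. cardinality, run lengths,
# value range), y holds the index of the cheapest combination for that column.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = rng.integers(0, 4, 500)          # 4 hypothetical codec/encoding combinations
model = XGBClassifier(n_estimators=50, max_depth=4)
model.fit(X, y)
recommended = model.predict(X[:5])   # recommended combination per column
```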
9

Überblick und Klassifikation leichtgewichtiger Kompressionsverfahren im Kontext hauptspeicherbasierter Datenbanksysteme / Overview and Classification of Lightweight Compression Techniques in the Context of Main-Memory Database Systems

Hildebrandt, Juliana 22 July 2015 (has links) (PDF)
In the context of in-memory database systems, lightweight compression algorithms play a decisive role in realizing efficient storage and processing of large volumes of data in main memory. Compared with classical compression techniques such as Huffman coding, lightweight compression algorithms achieve comparable compression rates by incorporating context knowledge, while allowing faster compression and decompression. The variety of lightweight compression algorithms has grown in recent years, since incorporating context knowledge offers great optimization potential. To cope with this variety, we have worked on the modularization of lightweight compression algorithms and developed a general compression scheme. Different algorithms can easily be realized by exchanging individual modules or merely their input parameters.
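A minimal sketch of two lightweight patterns of the kind covered by this classification, frame-of-reference (FOR) and delta encoding: both replace values with small offsets that pack into few bits, which is what makes them much cheaper than general-purpose coders such as Huffman. The block size and function names are illustrative assumptions.

```python
# FOR stores each block's minimum plus small offsets; DELTA stores the first value
# plus differences between neighbours (effective on sorted columns).
import numpy as np

def for_encode(values, block=128):
    """Frame of reference: per block, keep the minimum and offsets from it."""
    blocks = []
    for start in range(0, len(values), block):
        chunk = np.asarray(values[start:start + block])
        ref = int(chunk.min())
        blocks.append((ref, chunk - ref))     # offsets need far fewer bits than raw values
    return blocks

def delta_encode(values):
    values = np.asarray(values)
    return values[0], np.diff(values)

def delta_decode(first, diffs):
    return np.concatenate(([first], first + np.cumsum(diffs)))

# Toy usage on sorted surrogate keys, a typical in-memory database column.
column = np.arange(1_000_000, 1_000_256)
for_blocks = for_encode(column)
first, diffs = delta_encode(column)
assert np.array_equal(delta_decode(first, diffs), column)
```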
10

Überblick und Klassifikation leichtgewichtiger Kompressionsverfahren im Kontext hauptspeicherbasierter Datenbanksysteme / Overview and Classification of Lightweight Compression Techniques in the Context of Main-Memory Database Systems

Hildebrandt, Juliana January 2015 (has links)
In the context of in-memory database systems, lightweight compression algorithms play a decisive role in realizing efficient storage and processing of large volumes of data in main memory. Compared with classical compression techniques such as Huffman coding, lightweight compression algorithms achieve comparable compression rates by incorporating context knowledge, while allowing faster compression and decompression. The variety of lightweight compression algorithms has grown in recent years, since incorporating context knowledge offers great optimization potential. To cope with this variety, we have worked on the modularization of lightweight compression algorithms and developed a general compression scheme. Different algorithms can easily be realized by exchanging individual modules or merely their input parameters.

Contents:
1 Introduction
2 Modularization of compression methods
  2.1 State of the literature
  2.2 A simple compression scheme
  2.3 Further considerations
    2.3.1 Split module and word generator with multiple outputs
    2.3.2 Hierarchical data organization
    2.3.3 Repeated application of the scheme
  2.4 Assessment and rationale of the modularization
  2.5 Summary
3 Modularization for different compression patterns
  3.1 Frame of Reference (FOR)
  3.2 Delta encoding (DELTA)
  3.3 Symbol suppression
  3.4 Run-length encoding (RLE)
  3.5 Dictionary compression (DICT)
  3.6 Bit vectors (BV)
  3.7 Comparison of different patterns and techniques
  3.8 Summary
4 Concrete algorithms
  4.1 Binary Packing
  4.2 FOR with Binary Packing
  4.3 Adaptive FOR and VSEncoding
  4.4 PFOR algorithms
    4.4.1 PFOR and PFOR2008
    4.4.2 NewPFD and OptPFD
    4.4.3 SimplePFOR and FastPFOR
    4.4.4 Notes on delta-encoded data
  4.5 Simple algorithms
    4.5.1 Simple-9
    4.5.2 Simple-16
    4.5.3 Relative-10 and Carryover-12
  4.6 Byte-oriented encodings
    4.6.1 Varint-SU and Varint-PU
    4.6.2 Varint-GU
    4.6.3 Varint-PB
    4.6.4 Varint-GB
    4.6.5 Comparison of the modules of the Varint algorithms
    4.6.6 RLE VByte
  4.7 Dictionary algorithms
    4.7.1 ZIL
    4.7.2 Sigma-encoded inverted files
  4.8 Summary
5 Properties of compression methods
  5.1 Adaptability
  5.2 Number of passes
  5.3 Information used
  5.4 Type of data and types of redundancy
  5.5 Summary
6 Summary and outlook
