The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Scan test data compression using alternate Huffman coding

Baltaji, Najad Borhan 13 August 2012 (has links)
Huffman coding is a good method for statistically compressing test data with high compression rates. Unfortunately, the on-chip decoder needed to decompress the encoded test data after it is loaded onto the chip may be too complex. With limited die area, decoder complexity becomes a drawback, making standard Huffman coding less than ideal for scan data compression. Selectively encoding test data with Huffman coding can provide similarly high compression rates while reducing the complexity of the decoder. A smaller, less complex decoder makes alternate Huffman coding a viable option for compressing and decompressing scan test data.
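The selective-encoding idea is simple enough to sketch. The Python fragment below is a minimal illustration, not the thesis's implementation: only the k most frequent fixed-width scan slices receive Huffman codewords, and every other slice is emitted verbatim behind a single escape codeword, so the on-chip decoder only ever stores a tiny code table. The slice width, the value of k, and the ESC convention are assumptions for illustration.

```python
import heapq
from collections import Counter

def huffman_codes(freqs):
    """Textbook Huffman construction: {symbol: weight} -> {symbol: bitstring}.
    Assumes at least two symbols."""
    heap = [[w, [sym, ""]] for sym, w in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # left subtree gets a 0 prefix
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # right subtree gets a 1 prefix
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

def selective_encode(slices, k=4):
    """Huffman-code only the k most frequent slices; anything else is sent
    as the ESC codeword followed by the raw slice bits."""
    freq = Counter(slices)
    kept = dict(freq.most_common(k))
    kept["ESC"] = max(1, sum(freq.values()) - sum(kept.values()))
    codes = huffman_codes(kept)
    bits = "".join(codes.get(s, codes["ESC"] + s) for s in slices)
    return bits, codes

slices = ["0000"] * 9 + ["1111"] * 5 + ["1010", "0110", "0011"]
bits, codes = selective_encode(slices, k=2)
print(codes, len(bits))  # tiny code table -> simple on-chip decoder
```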
2

Study and implementation of compression algorithms

Γρίβας, Απόστολος 19 May 2011 (has links)
In this thesis we study and implement several data compression algorithms. First, the basic principles of coding are reviewed and the mathematical foundations of information theory are presented, along with various families of codes. Huffman coding and arithmetic coding are then analyzed in detail. Finally, both coders are implemented in the C programming language and used to compress text files. The resulting files are compared with files compressed by commercial programs, the causes of the differences in efficiency are analyzed, and useful conclusions are drawn.
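As a companion to the thesis's C implementations, the following toy Python version shows the core of arithmetic coding: the message progressively narrows a subinterval of [0, 1), and any number inside the final interval identifies the message. Exact fractions keep the sketch correct for short inputs; a production coder like the ones described here uses fixed-width integers with renormalization instead.

```python
from collections import Counter
from fractions import Fraction

def intervals(freqs):
    """Assign each symbol a subinterval of [0, 1) proportional to its frequency."""
    total, lo, table = sum(freqs.values()), Fraction(0), {}
    for sym in sorted(freqs):
        hi = lo + Fraction(freqs[sym], total)
        table[sym] = (lo, hi)
        lo = hi
    return table

def encode(msg, table):
    lo, hi = Fraction(0), Fraction(1)
    for sym in msg:                      # each symbol narrows the interval
        a, b = table[sym]
        lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
    return (lo + hi) / 2                 # any point in [lo, hi) would do

def decode(x, table, length):
    out, lo, hi = [], Fraction(0), Fraction(1)
    for _ in range(length):
        for sym, (a, b) in table.items():
            s_lo, s_hi = lo + (hi - lo) * a, lo + (hi - lo) * b
            if s_lo <= x < s_hi:         # x falls in this symbol's slice
                out.append(sym)
                lo, hi = s_lo, s_hi
                break
    return "".join(out)

msg = "abracadabra"
table = intervals(Counter(msg))
assert decode(encode(msg, table), table, len(msg)) == msg
```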
3

Indexed search of compressed texts

Machado, Lennon de Almeida 07 May 2010 (has links)
Pattern matching over a large document collection is a very common problem nowadays, as the widespread use of search engines shows. For searches to run in time independent of the collection size, the collection must be indexed once in advance; the index size is typically linear in the size of the collection. Data compression is another powerful tool for managing the ever-growing size of document collections. The objective of this work is to combine indexed search with data compression, examining alternatives to existing solutions and seeking improvements in search time and memory usage. Analysis of the index structures and compression algorithms indicates that block inverted files combined with word-based Huffman compression are an excellent option for memory-constrained systems, since they provide random access and searching directly in the compressed text. This work also proposes new prefix-free codes that improve the compression ratio and yield self-synchronized codes, i.e., codes for which random access is truly viable. The advantage of these new codes is that the proposed mappings eliminate the need to build a Huffman code tree, which translates into memory savings, more compact encoding, and shorter processing time. The results show reductions of 7% and 9% in compressed file size, with better compression and decompression times and lower memory consumption.
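The abstract does not spell out the proposed mappings, but a well-known code in the same family, the End-Tagged Dense Code (ETDC) of Brisaboa et al., illustrates the idea: each word's frequency rank is mapped arithmetically to a byte sequence with no Huffman tree at all, and the tag bit on the final byte makes codeword boundaries self-synchronizing, which is what enables random access into the compressed text. A hedged Python sketch:

```python
def etdc_encode(rank):
    """Map a word's frequency rank (0 = most frequent word) to its codeword.
    Continuation bytes are < 128; the final byte has its top bit set."""
    out = bytearray([128 + rank % 128])
    rank = rank // 128 - 1
    while rank >= 0:
        out.insert(0, rank % 128)
        rank = rank // 128 - 1
    return bytes(out)

def etdc_decode(data):
    """Recover the rank sequence; a decoder can resynchronize from any
    position by skipping to the first byte >= 128."""
    ranks, rank = [], 0
    for b in data:
        if b < 128:
            rank = (rank + b + 1) * 128   # accumulate continuation bytes
        else:
            ranks.append(rank + b - 128)  # tagged byte ends the codeword
            rank = 0
    return ranks

stream = b"".join(etdc_encode(r) for r in [0, 3, 200, 0])
assert etdc_decode(stream) == [0, 3, 200, 0]
```

Because the rank-to-codeword mapping is purely arithmetic, no code tree has to be stored or traversed, which matches the memory and speed savings the abstract reports.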
4

Autonomous USB datalogger

Románek, Karel January 2010 (has links)
This thesis presents the design of an autonomous USB datalogger for temperature, relative humidity, and pressure. It explains the datalogger's operation, the hardware design with respect to power consumption, and the design of the chassis. It further describes the communication protocol used by a PC to control the device and read out its data, as well as the firmware drivers for selected components and the modules for USB communication, the real-time clock (RTC), and data compression. Finally, it describes the software used to configure the datalogger and read out the recorded data.
5

Comparison of voice and audio codecs

Lúdik, Michal January 2012 (has links)
This thesis covers human hearing, audio and speech codecs, objective quality measures, and a practical comparison of codecs. The chapter on audio codecs describes the lossless codec FLAC and the lossy codecs MP3 and Ogg Vorbis. The chapter on speech codecs covers linear predictive coding and the G.729 and Opus codecs. Quality evaluation is based on the segmental signal-to-noise ratio and on perceptual measures, WSS and PESQ. The last chapter describes the practical part of the thesis: a comparison of the memory and time consumption of the audio codecs and a perceptual evaluation of the speech codecs' quality.
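PESQ and WSS are elaborate standardized perceptual models, but the segmental SNR is simple: the signal is cut into short frames, per-frame SNR is computed in dB, and the frames are averaged. A minimal NumPy sketch follows; the 256-sample frame and the [-10, 35] dB clamp are common conventions, not values taken from the thesis.

```python
import numpy as np

def segmental_snr(clean, degraded, frame=256):
    """Average per-frame SNR in dB over non-overlapping frames."""
    n = (min(len(clean), len(degraded)) // frame) * frame
    c = np.asarray(clean[:n], dtype=float).reshape(-1, frame)
    d = np.asarray(degraded[:n], dtype=float).reshape(-1, frame)
    eps = np.finfo(float).eps                   # avoid division by zero / log(0)
    snr = 10 * np.log10(np.sum(c**2, axis=1) / (np.sum((c - d)**2, axis=1) + eps) + eps)
    return float(np.mean(np.clip(snr, -10.0, 35.0)))  # clamp silent/very noisy frames
```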
6

DNA Sequence Compression

Friedrich, Tomáš January 2010 (has links)
The increasing volume of biological data requires new ways of storing these data in genetic banks. The goal of this work is the design and implementation of a novel algorithm for compressing DNA sequences. The algorithm is based on aligning DNA sequences against a reference sequence and storing only the differences between each sequence and the reference model. The work covers the basics of molecular biology needed to understand the algorithm, then describes alignment algorithms and common compression schemes suitable for storing differences against a reference sequence. It continues with a description of the implementation, followed by a derivation of its time and space complexity and a comparison with common compression algorithms. Directions for future work are discussed in the conclusion.
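The core idea, keeping only the places where a sequence departs from the reference, fits in a few lines. The sketch below assumes a substitution-only, same-length alignment for clarity; the algorithm described here must also handle insertions and deletions via the alignment step.

```python
def diff_encode(seq, ref):
    """Record only (position, base) pairs where seq differs from the reference."""
    return [(i, b) for i, (a, b) in enumerate(zip(ref, seq)) if a != b]

def diff_decode(diffs, ref):
    """Rebuild the sequence by patching the reference."""
    out = list(ref)
    for i, b in diffs:
        out[i] = b
    return "".join(out)

ref = "ACGTACGTAC"
seq = "ACGTTCGTAA"
diffs = diff_encode(seq, ref)        # [(4, 'T'), (9, 'A')] -- 2 edits, not 10 bases
assert diff_decode(diffs, ref) == seq
```

A generic compressor applied to the (position, base) stream then exploits how few and how clustered real variants are.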
7

Compression of ECG signals recorded using a mobile ECG device

Had, Filip January 2017 (has links)
Signal compression is a necessary part of ECG monitoring because of the relatively large amount of data that must be transmitted, primarily wirelessly, for analysis; minimizing the transmitted volume calls for lossless or lossy compression algorithms. This work describes the SPIHT algorithm and a newly created experimental method based on PNG, together with their testing. The thesis also provides a bank of ECG signals with accelerometer data recorded in parallel. In the last part, a modification of the SPIHT algorithm that exploits the accelerometer data is described and realized.
8

ECG Signal Compression Based on Wavelet Transform

Ondra, Josef January 2008 (has links)
Signal compression is an everyday tool for reducing storage requirements and speeding up data transmission, and methods based on the wavelet transform are currently among the most effective. One available technique decomposes the signal with a suitable filter bank and then quantizes the coefficients. After the quantized coefficients are packed into a single sequence, run-length coding combined with Huffman coding is applied. This thesis examines compression effectiveness under different wavelet transform and quantization settings.
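The described pipeline (filter-bank decomposition, coefficient quantization, run-length plus Huffman coding) is easy to prototype. The sketch below assumes the PyWavelets package; the bior4.4 wavelet, the 4 decomposition levels, and the quantization step are placeholder settings, since exploring such settings is exactly what the thesis does, and the final Huffman pass over the run-length pairs is omitted.

```python
import numpy as np
import pywt  # PyWavelets

def compress_ecg(signal, wavelet="bior4.4", level=4, step=0.05):
    # 1) decompose the signal with the chosen wavelet filter bank
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    lengths = [len(c) for c in coeffs]            # needed to invert later
    # 2) uniform quantization of all coefficients -- the lossy step
    q = np.round(np.concatenate(coeffs) / step).astype(int)
    # 3) run-length encode; quantization yields long zero runs, and a
    #    Huffman coder over these (value, run) pairs would follow
    runs, i = [], 0
    while i < len(q):
        j = i
        while j < len(q) and q[j] == q[i]:
            j += 1
        runs.append((int(q[i]), j - i))
        i = j
    return runs, lengths
```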
9

Data compression

Krejčí, Michal January 2009 (has links)
This thesis deals with lossless and lossy methods of data compression and their possible applications in measurement engineering. The first part is a theoretical overview covering basic terminology, the motivation for data compression, its use in standard practice, and a classification of compression algorithms. The practical part deals with the implementation of compression algorithms in Matlab and LabWindows/CVI.
