11

Modelling and implementation of an MPEG-2 video decoder using a GALS design path.

Rosengren, Kaj January 2006 (has links)
As integrated circuits become smaller and faster and accommodate more functionality, problems with wire delays and cross-talk become more pronounced, especially when global clock signals are distributed over a large chip area. This thesis briefly discusses a solution to this problem using the Globally Asynchronous Locally Synchronous (GALS) design path. The goal of this thesis was to test the solution by modelling and partially implementing an MPEG-2 video decoder connected as a GALS system, using synchronous design tools. This includes designing the system in Simulink, implementing selected parts in VHDL and finally testing the connected parts on an FPGA. Presented in this thesis are the design and implementation of the system, as well as theory on the MPEG-2 video decoding standard and a short analysis of the results.
12

Hierarchical Transmission of Huffman Code Using Multi-Code/Multi-Rate DS/SS Modulation with Appropriate Power Control

Makido, Satoshi, Yamazato, Takaya, Katayama, Masaaki, Ogawa, Akira 12 1900 (has links)
No description available.
13

Τεχνικές ενσωματωμένου αυτοελέγχου για ορθή λειτουργία ψηφιακών ολοκληρωμένων συστημάτων στο πεδίο της εφαρμογής / Built-in self-test techniques for correct operation of digital integrated systems in the field of application

Κουτσουπιά, Μαργαρίτα 04 December 2012 (has links)
Various forms of Huffman coding have been proposed for compressing the test data used to verify the correct operation of integrated systems after manufacture. Among them, optimal selective Huffman coding offers several advantages. Adopting a different technique for testing the correct operation of an integrated system after manufacture and in the field of application increases the cost of the system. For this reason, in this work we investigate the possibility of using optimal selective Huffman coding both after the manufacture of the integrated system and in the field. The systems of interest are real-time embedded systems, so we first study the characteristics that an in-field testing technique must have, depending on the requirements of the embedded system. In each case we study the implementation cost of the testing technique, both in hardware, by designing the circuits in Verilog, and in terms of the test-data compression ratio, by running simulations in C. / One-time factory testing of VLSI components after fabrication is insufficient in the deep submicron era; products must also be tested periodically in the field of application. Due to the complexity of Systems on a Chip (SoCs), huge amounts of test data are required, yet in many embedded systems the capacity of the available memory is a limited resource. In Automatic Test Equipment (ATE) based factory testing, various test-set compression techniques have been proposed in order to reduce the memory requirements of the ATE and the time required to transfer the test data from the ATE into the chip (and hence the test application time). In this work we investigate the enhancements required so that an Optimal Selective Huffman Coding based test-set compression technique can also be used for periodic testing in the field. The requirements of various types of periodic testing are examined, depending on the criticality of the application running on the embedded system.
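Selective Huffman coding of test data, as commonly described in the test-compression literature, assigns Huffman codewords only to the most frequent fixed-length blocks of the test set and leaves the remaining blocks unencoded behind a flag bit. The following Python sketch illustrates that basic idea only; the block size, the number of coded blocks and the sample bit string are illustrative assumptions, not the parameters used in the thesis.

```python
import heapq
import itertools
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code (symbol -> bit string) from a frequency map."""
    if len(freqs) == 1:                      # degenerate case: a single symbol
        return {next(iter(freqs)): "0"}
    tie = itertools.count()                  # unique tiebreaker for the heap
    heap = [(f, next(tie), [s]) for s, f in freqs.items()]
    heapq.heapify(heap)
    code = {s: "" for s in freqs}
    while len(heap) > 1:
        f1, _, grp1 = heapq.heappop(heap)
        f2, _, grp2 = heapq.heappop(heap)
        for s in grp1:
            code[s] = "0" + code[s]
        for s in grp2:
            code[s] = "1" + code[s]
        heapq.heappush(heap, (f1 + f2, next(tie), grp1 + grp2))
    return code

def selective_huffman_encode(bits, block=4, top_k=2):
    """Huffman-encode only the top_k most frequent blocks; flag the rest as raw."""
    blocks = [bits[i:i + block] for i in range(0, len(bits), block)]
    counts = Counter(blocks)
    coded = dict(counts.most_common(top_k))  # blocks that receive codewords
    code = huffman_code(coded)
    out = []
    for b in blocks:
        if b in code:
            out.append("1" + code[b])        # flag '1': Huffman-coded block
        else:
            out.append("0" + b)              # flag '0': raw, unencoded block
    return "".join(out), code

encoded, code = selective_huffman_encode("0000000011110000101000001111")
print(code)
print(encoded)
```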
14

Busca indexada de padrões em textos comprimidos / Indexed search of compressed texts

Machado, Lennon de Almeida 07 May 2010 (has links)
Searching for words in a large document collection is a very common problem nowadays, as the widespread use of search engines shows. For searches to run in time independent of the collection size, the collection must be indexed once. The size of these indexes is typically linear in the size of the document collection. Data compression is another widely used resource for coping with the ever-growing size of document collections. The aim of this study is to combine the indexing used in searches with data compression, examining alternatives to existing solutions and seeking improvements in search response time and in the memory consumed by the indexes. The analysis of index structures together with compression algorithms shows that block inverted files combined with word-based Huffman compression are a very good option for memory-constrained systems, since they provide random access and compressed search. New prefix-free codes are also proposed in order to improve the compression obtained and to generate self-synchronized codes, that is, codes for which random access is truly viable. The advantage of these new codes is that, through the proposed mappings, they eliminate the need to build the Huffman code tree, which translates into memory savings, more compact encoding and shorter processing time. The results show reductions of 7% and 9% in the size of the compressed files, with better compression and decompression times and lower memory consumption.
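A block inverted file, as mentioned in this abstract, trades index size for a little sequential scanning: instead of recording every occurrence of a word, the index records only the blocks in which the word occurs, and a query scans just those blocks (which, in the thesis, are stored compressed). Below is a minimal Python sketch of the idea on uncompressed text, with an arbitrary block size, purely for illustration.

```python
from collections import defaultdict

def build_block_index(words, block_size=4):
    """Block inverted file: map each word to the set of blocks containing it."""
    index = defaultdict(set)
    for pos, w in enumerate(words):
        index[w].add(pos // block_size)
    return index

def search(word, words, index, block_size=4):
    """Return positions of `word`, scanning only the blocks listed in the index."""
    hits = []
    for blk in sorted(index.get(word, ())):
        start = blk * block_size
        for pos in range(start, min(start + block_size, len(words))):
            if words[pos] == word:
                hits.append(pos)
    return hits

text = "to be or not to be that is the question".split()
idx = build_block_index(text)
print(search("be", text, idx))     # -> [1, 5]
```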
15

Compressão de dados sísmicos com perda controlada / Seismic data compression with loss control

Pedro Henrique Ribeiro da Silva 14 February 2014 (has links)
This work presents a new compression method with controlled loss of data, which has the advantage of achieving a significant compression ratio without introducing into the data any loss larger than a parameter chosen by the user. It is a mixed approach, since it uses both lossy and lossless data compression techniques. This means we obtain a method that combines the advantages of high compression without introducing undesirable distortions into the data. We show how the mass of data used in our studies is obtained and its importance in the exploration of hydrocarbon deposits. A literature review of compression techniques applied to seismic data and typically used in commercial applications is presented. Finally, we present the results of applying the method to real seismic data sets. For a 1% error, the compressed seismic data files are reduced to roughly 25% of their original size, which corresponds to a compression factor of approximately 4.
18

Nonattribution Properties of JPEG Quantization Tables

Tuladhar, Punnya 17 December 2010 (has links)
In digital forensics, source camera identification for digital images has drawn attention in recent years. An image does contain information about its camera and/or editing software somewhere within it, but the interest of this research is to identify the manufacturer (henceforth called the make and model) of the camera using only header information from the JPEG encoding, such as the quantization table and the Huffman table. Having examined around 110,000 images, we are able to state that, for all practical purposes, using quantization and Huffman tables alone to predict a camera make and model is not a viable approach. We found no correlation between the quantization and Huffman tables of images and the camera makes. Rather, the quantization or Huffman table is determined by quality factors of the image such as resolution, RGB values and intensity, and by the standard settings of the camera.
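The quantization tables referred to here live in the JPEG header as DQT (0xFFDB) segments, separate from the entropy-coded image data, which is why they can be read without decoding the picture. A minimal sketch of extracting them in Python follows; it assumes baseline JPEGs, does no error handling, and leaves 16-bit tables as raw bytes.

```python
import struct

def read_quantization_tables(path):
    """Collect the quantization tables from a JPEG file's DQT segments."""
    tables = {}
    with open(path, "rb") as f:
        data = f.read()
    i = 2                                       # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                               # lost sync; stop scanning
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):
            break                               # EOI or start of scan: header is over
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        if marker == 0xDB:                      # DQT segment
            seg = data[i + 4:i + 2 + length]
            j = 0
            while j < len(seg):
                precision, table_id = seg[j] >> 4, seg[j] & 0x0F
                size = 64 * (2 if precision else 1)
                tables[table_id] = list(seg[j + 1:j + 1 + size])
                j += 1 + size
        i += 2 + length                         # advance to the next marker segment
    return tables

# Usage (hypothetical file name):
# print(read_quantization_tables("photo.jpg"))
```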
19

Codificación de Imagenes Satelitales Utilizando Técnicas de Compresión con perdidas y sin perdidas / Satellite image coding using lossy and lossless compression techniques

Flores Goycochea, Carlos Alberto January 2010 (has links)
This thesis presents the development of a coding technique for compressing satellite images, easing their transmission, since less bandwidth and less transmission time are needed, as well as their storage, since devices of smaller capacity in bytes suffice. The project is characterized by the implementation of computational algorithms in Matlab, where two compression techniques were developed with the aim of reaching a high degree of compression without altering the information contained in the image too much. Two techniques are combined in this thesis: lossy coding and lossless coding. The lossy coding is based on the two-dimensional discrete cosine transform (2D DCT), as used by the JPEG standard, and the lossless coding is based on Huffman coding, which assigns the smallest number of bits to the coded symbols without losing any information. This combination of techniques yields valuable results, especially for satellite images, which are obtained at very poor resolutions compared with conventional photographs.
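The lossy stage described here follows the JPEG pattern: the image is split into 8x8 blocks, each block is transformed with the 2D DCT, and the coefficients are quantized (the lossy step) before entropy coding. The sketch below shows only that stage, assuming NumPy and SciPy are available; the quantization step size is an arbitrary illustrative value and the Huffman stage is omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_quantize(image, step=16.0, block=8):
    """Lossy stage: 2D DCT on 8x8 blocks followed by uniform quantization."""
    h, w = image.shape
    coeffs = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y + block, x:x + block].astype(float)
            c = dctn(tile, norm="ortho")                 # 2D DCT of one block
            coeffs[y:y + block, x:x + block] = np.round(c / step)  # quantize
    return coeffs

def block_dequantize_idct(coeffs, step=16.0, block=8):
    """Approximate reconstruction: dequantize and apply the inverse 2D DCT."""
    h, w = coeffs.shape
    image = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = coeffs[y:y + block, x:x + block] * step
            image[y:y + block, x:x + block] = idctn(tile, norm="ortho")
    return image

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16)).astype(float)  # stand-in for a satellite tile
q = block_dct_quantize(img)
rec = block_dequantize_idct(q)
print("max reconstruction error:", np.abs(img - rec).max())
```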
20

Parallel JPEG Processing with a Hardware Accelerated DSP Processor / Parallell JPEG-behandling med en hårdvaruaccelerarad DSP processor

Andersson, Mikael, Karlström, Per January 2004 (has links)
This thesis describes the design of fast JPEG processing accelerators for a DSP processor. Certain computation tasks are moved from the DSP processor to hardware accelerators. The accelerators are slave co-processing machines and are controlled via a new instruction set. Clock cycle count and power consumption are reduced by utilizing the custom-built hardware, which can perform the tasks in fewer clock cycles and run several tasks in parallel, reducing the total number of clock cycles needed. First, a decoder and an encoder were implemented in DSP assembler. The cycle consumption of the parts was measured, and from this the hardware/software partitioning was done. Behavioral models of the accelerators were then written in C++ and the assembly code was modified to work with the new hardware. Finally, the accelerators were implemented in Verilog. The accelerator instructions were extended following a custom design flow.
