211

The use of Hadamard Transform as a data compression technique in the development of a 3-dimensional fluorescence spectral library for qualitative analysis

Ishihara, Fumiko January 1989 (has links)
In recent years, chemical instrumentation has become much more sophisticated. Most analytical equipment now incorporates a microprocessor or is interfaced to a microcomputer. As a result, chemists can collect an immense amount of data on a single sample in a short period of time. While gathering so much information has advantages, it also creates problems: analysts are now commonly faced with the dual tasks of storing and analyzing the resulting flood of data. The goal of this research has been to address the problems of data storage and data analysis. Specifically, data compression techniques and spectral search-and-match algorithms have been developed. The data compression techniques utilize the Hadamard Transform and the modified zero-crossing clipping algorithm. The spectral search technique exploits the unique format of the compressed and clipped data to greatly accelerate spectrum identification. To demonstrate the feasibility of this technique, three-dimensional fluorescence spectra of polynuclear aromatic compounds have been used. The results indicate that both the data compression techniques and their application to a library search system for three-dimensional fluorescence spectroscopy were successful. / Ph. D.
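The transform-domain compression step described above can be pictured with a minimal sketch, assuming a spectrum whose length is a power of two: take the Hadamard transform of the signal, keep only the largest-magnitude coefficients, and reconstruct from those. The function names and retained fraction below are illustrative, not taken from the thesis, and the zero-crossing clipping step is omitted.

```python
# Sketch of Hadamard-transform compression: keep only the k largest coefficients.
import numpy as np
from scipy.linalg import hadamard

def compress_spectrum(signal, keep_fraction=0.25):
    n = len(signal)                          # must be a power of two
    H = hadamard(n)
    coeffs = H @ signal / n                  # forward (normalized) Hadamard transform
    k = max(1, int(keep_fraction * n))
    keep = np.argsort(np.abs(coeffs))[-k:]   # indices of the k largest coefficients
    compressed = np.zeros(n)
    compressed[keep] = coeffs[keep]          # discard the small coefficients
    return compressed

def reconstruct(compressed):
    H = hadamard(len(compressed))
    return H @ compressed                    # H @ H == n * I, so this inverts the transform

spectrum = np.random.rand(64)
approx = reconstruct(compress_spectrum(spectrum))
```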
212

A hybrid scheme for low-bit rate stereo image compression

Jiang, Jianmin, Edirisinghe, E.A. 29 May 2009 (has links)
We propose a hybrid scheme to implement an object-driven, block-based algorithm for low bit-rate compression of stereo image pairs. The algorithm combines the simplicity and adaptability of existing block-based stereo image compression techniques with an edge/contour-based object extraction technique to determine an appropriate compression strategy for different areas of the right image. Unlike existing object-based coding such as MPEG-4, developed in the video compression community, the proposed scheme does not require any additional shape coding. Instead, an arbitrary shape is reconstructed from the matching object inside the left frame, which has been encoded by the standard JPEG algorithm and is therefore available at the decoding end for the corresponding shapes in the right frame. The shape reconstruction for right objects incurs no distortion, owing to the unique correlation between the left and right frames of a stereo pair and the nature of the proposed hybrid scheme. Extensive experiments show that the proposed algorithm achieves improvements of up to 20% in compression ratio over the existing block-based technique, while the reconstructed image quality remains competitive in terms of both PSNR values and visual inspection.
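The block-based half of such schemes amounts to disparity-compensated prediction: each block of the right image is predicted from a horizontally shifted block of the already-decoded left image, so only a disparity value and a residual need to be coded. The sketch below is a generic illustration of that step, not the authors' object-driven algorithm; the block and search-window sizes are assumptions.

```python
# Generic disparity-compensated block prediction for the right image of a
# stereo pair (residuals would then be transform-coded, e.g. by DCT).
import numpy as np

def encode_right_image(left, right, block=8, search=16):
    h, w = right.shape
    disparities, residuals = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            target = right[y:y+block, x:x+block].astype(int)
            best_d, best_err = 0, np.inf
            for d in range(search + 1):              # candidate horizontal shifts
                if x + d + block > w:
                    break
                cand = left[y:y+block, x+d:x+d+block].astype(int)
                err = np.abs(target - cand).sum()    # sum of absolute differences
                if err < best_err:
                    best_d, best_err = d, err
            pred = left[y:y+block, x+best_d:x+best_d+block].astype(int)
            disparities.append(best_d)
            residuals.append(target - pred)
    return disparities, residuals
```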
213

Hardware Accelerator Design for Scientific Computing and Machine Learning Workloads

Huang, Xuanyuanliang (Paul) January 2025 (has links)
Scientific computing and machine learning are two major areas of modern computing demand. The former has applications in physics simulation and mathematical modeling, while the latter has become the mainstream approach for tasks such as image classification and natural language processing. Despite the seemingly disparate application domains, the two types of workloads are essentially similar in that their dominant computing kernels are both sparse/dense matrix-vector multiplications. Moreover, conventional computing platforms for these workloads (multicore CPUs/GPUs and microprocessors) consume a large amount of energy and/or, in many cases, still lack performance for either cloud or edge applications. It is therefore essential to develop custom hardware accelerators that improve energy efficiency and performance and alleviate the performance bottlenecks of CPUs/GPUs. Toward this goal, the thesis presents four hardware accelerator/architecture designs demonstrating significant improvements in key metrics over prior art. The first design is a novel solver chip for partial differential equations (PDEs), a critical mathematical model in scientific computing. Prototyped in 65 nm, the chip features a programmable floating-point processor array architecture. It dramatically improves the range of mappable problems and the solution precision compared to prior art while being 40x faster under the same energy-delay product. The second design, part of a system on a chip (SoC) manufactured in 28 nm, is an accelerator for the inference process of convolutional neural networks (CNNs), a popular type of neural network model in machine learning. It incorporates digital in-memory computing modules to efficiently execute the matrix-vector multiplication operations. The resulting SoC achieves an 88x improvement in energy-delay product over prior art. The third design aims to improve the energy efficiency of sparse matrix-vector multiplication (SpMV) by reducing off-chip data movement. Using the gzip compression algorithm, the design compresses matrix data offline in off-chip memory and decompresses it on the fly at computation runtime using custom on-chip decompression hardware. Prototyped in 28 nm, the chip achieves a 2.32x system-level energy efficiency improvement over prior art. The fourth design applies on-the-fly gzip decompression to commercial GPU platforms, aiming to expand the effective off-chip memory bandwidth. The design proposes a compressed block cache prefetching scheme to address the critical challenges of using gzip for memory decompression in GPUs. Evaluated using open-source simulators, the design achieves 5.3-20.3% performance improvements over the baseline model.
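A software analogue of the third design's dataflow, sketched under assumed types and sizes: the CSR arrays of a sparse matrix are stored gzip-compressed (standing in for compressed off-chip memory) and decompressed just before the SpMV kernel runs. This only illustrates the idea, not the thesis's on-chip decompression hardware.

```python
# Sketch: gzip-compress the CSR arrays "offline", decompress on the fly for SpMV.
import zlib
import numpy as np
from scipy.sparse import csr_matrix, random as sparse_random

def compress_csr(A):
    A = A.tocsr()
    arrays = {"data": A.data, "indices": A.indices, "indptr": A.indptr}
    blobs = {name: (zlib.compress(arr.tobytes()), arr.dtype) for name, arr in arrays.items()}
    return blobs, A.shape

def spmv_from_compressed(blobs, shape, x):
    arrays = {name: np.frombuffer(zlib.decompress(blob), dtype=dtype)
              for name, (blob, dtype) in blobs.items()}
    A = csr_matrix((arrays["data"], arrays["indices"], arrays["indptr"]), shape=shape)
    return A @ x                                   # the SpMV kernel itself

A = sparse_random(1000, 1000, density=0.01, format="csr")
blobs, shape = compress_csr(A)
y = spmv_from_compressed(blobs, shape, np.ones(shape[1]))
```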
214

Universal homophonic coding

Stevens, Charles Cater 11 1900 (has links)
Redundancy in plaintext is a fertile source of attack in any encryption system. Compression before encryption reduces the redundancy in the plaintext, but this does not make a cipher more secure: the ciphertext is still susceptible to known-plaintext and chosen-plaintext attacks. The aim of homophonic coding is to convert a plaintext source into a random sequence by randomly mapping each source symbol into one of a set of homophones. Each homophone is then encoded by a source coder, after which it can be encrypted with a cryptographic system. Homophonic coding falls into the class of unconditionally secure ciphers. The main advantage of homophonic coding over pure source coding is that it provides security against both known-plaintext and chosen-plaintext attacks, whereas source coding merely protects against a ciphertext-only attack. The aim of this dissertation is to investigate the implementation of an adaptive homophonic coder based on an arithmetic coder. This type of homophonic coding is termed universal, as it does not depend on the source statistics. / Computer Science / M.Sc. (Computer Science)
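As a toy illustration of the homophone-substitution step (a sketch only, not the dissertation's adaptive, arithmetic-coder-based scheme), each source symbol below is mapped at random to one of several homophones, with the number of homophones per symbol proportional to its probability so that every homophone appears with roughly the same probability. The alphabet and probabilities are assumed for the example.

```python
import random

# assumed toy source statistics (dyadic, so homophone counts come out integral)
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# allocate 8 homophones so each one occurs with probability ~1/8
homophones, pool = {}, iter(range(8))
for sym, p in probs.items():
    homophones[sym] = [next(pool) for _ in range(int(p * 8))]

def homophonic_encode(text):
    return [random.choice(homophones[ch]) for ch in text]

def homophonic_decode(codes):
    reverse = {h: sym for sym, hs in homophones.items() for h in hs}
    return "".join(reverse[c] for c in codes)

encoded = homophonic_encode("abacada")        # near-uniform sequence of homophones
assert homophonic_decode(encoded) == "abacada"
```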
216

Distributed Coding For Wireless Sensor Networks

Varshneya, Virendra K 11 1900 (has links) (PDF)
No description available.
217

Segmented approximation and analysis of stochastic processes.

Akant, Adnan. January 1977 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1977 / Vita. / Includes bibliographical references. / Ph. D.
218

Storage-Centric System Architectures for Networked, Resource-Constrained Devices

Tsiftes, Nicolas January 2016 (has links)
The emergence of the Internet of Things (IoT) has increased the demand for networked, resource-constrained devices tremendously. Many of the devices used for IoT applications are designed to be resource-constrained, as they typically must be small, inexpensive, and powered by batteries. In this dissertation, we consider a number of challenges pertaining to these constraints: system support for energy efficiency; flash-based storage systems; programming, testing, and debugging; and safe and secure application execution. The contributions of this dissertation are made through five research papers addressing these challenges. Firstly, to enhance the system support for energy-efficient storage in resource-constrained devices, we present the design, implementation, and evaluation of the Coffee file system and the Antelope DBMS. Coffee provides a sequential write throughput that is over 92% of the attainable flash driver throughput, and has a constant memory footprint for open files. Antelope is the first full-fledged relational DBMS for sensor networks, and it provides two novel indexing algorithms to enable fast and energy-efficient database queries. Secondly, we contribute a framework that extends the functionality and increases the performance of sensornet checkpointing, a debugging and testing technique. Furthermore, we evaluate how different data compression algorithms can be used to decrease the energy consumption and data dissemination time when reprogramming sensor networks. Lastly, we present Velox, a virtual machine for IoT applications. Velox can enforce application-specific resource policies. Through its policy framework and its support for high-level programming languages, Velox helps to secure IoT applications. Our experiments show that Velox monitors applications' resource usage and enforces policies with an energy overhead below 3%. The experimental systems research conducted in this dissertation has had a substantial impact both in the academic community and the open-source software community. Several of the produced software systems and components are included in Contiki, one of the premier open-source operating systems for the IoT and sensor networks, and they are being used both in research projects and commercial products.
219

Test data compression for integrated systems architecture based on bus or network and test cost reduction

Dalmasso, Julien 01 October 2010 (has links)
As integrated circuits become increasingly complex, testing them requires considerable effort, which drives up the development and production cost of these components. Much work has therefore focused on reducing test cost, in particular through test data compression techniques. However, these techniques address only digital cores whose full structural description is known to the designer, and so in practice they cover only the testing of sub-blocks of a complete system. In this thesis, we first propose a new test data compression technique for integrated circuits that is compatible with the system-on-chip (SoC) design paradigm, in which systems are built from pre-synthesized functions (IPs or cores). Two system testing methods using this compression are then proposed. The first targets SoCs that use the IEEE 1500 test architecture (with a bus-based test access mechanism); the second targets systems whose internal communication relies on a network-on-chip (NoC). Both methods combine test scheduling of the system's cores with a horizontal compression technique to increase the parallelism of core testing at constant hardware cost. Experimental results on benchmark systems-on-chip show reductions of about 50% in the test time of the complete system.
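The scheduling half of this approach can be pictured with a simple greedy heuristic, sketched below: given per-core test lengths (after compression) and a fixed number of test access channels, each core is assigned to the currently least-loaded channel so that cores are tested in parallel and the longest channel, i.e. the overall test time, stays short. The core names and test lengths are hypothetical, and the thesis's actual method couples this scheduling with the compression itself.

```python
import heapq

def schedule_tests(core_test_cycles, num_channels):
    # min-heap of (cycles already assigned, channel id)
    channels = [(0, ch) for ch in range(num_channels)]
    heapq.heapify(channels)
    assignment = {}
    # longest tests first, each placed on the currently least-loaded channel
    for core, cycles in sorted(core_test_cycles.items(), key=lambda kv: -kv[1]):
        load, ch = heapq.heappop(channels)
        assignment[core] = ch
        heapq.heappush(channels, (load + cycles, ch))
    total_test_time = max(load for load, _ in channels)
    return assignment, total_test_time

# hypothetical per-core test lengths (clock cycles) after compression
cores = {"cpu": 12000, "gpu": 9000, "dsp": 8000, "mem_bist": 5000, "usb": 3000}
print(schedule_tests(cores, num_channels=2))
```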
220

Indexed search of compressed texts

Machado, Lennon de Almeida 07 May 2010 (has links)
Pattern matching over a large document collection is a very common problem nowadays, as the widespread use of search engines reveals. In order to carry out searches in time independent of the collection size, the collection must be indexed once in advance. The index size is typically linear in the size of the document collection. Data compression is another powerful resource for managing the ever-growing size of the document collection. The objective of this work is to combine indexed search with data compression, examining alternatives to current solutions and seeking improvements in search time and in the memory consumed by the indexes. The analysis of index structures and compression algorithms indicates that combining block inverted files with word-based Huffman compression is a very good option for systems with memory constraints, since it provides random access and compressed search. This work also proposes new prefix-free codes that improve the compression obtained and generate self-synchronized codes, i.e., codes for which random access is truly viable. The advantage of these new codes is that, through the proposed mappings, they eliminate the need to build the Huffman code tree, which translates into memory savings, more compact encoding, and shorter processing time. The results show reductions of 7% and 9% in compressed file size, with better compression and decompression times and lower memory consumption.
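A hedged sketch of the combination the dissertation analyses: a word-based Huffman coder for the text together with a block inverted index mapping each word to the block identifiers where it occurs, so a query only decodes the blocks that can contain the pattern. The function names, block size, and example text are illustrative, not taken from the dissertation.

```python
import heapq
from collections import Counter, defaultdict

def word_huffman_codes(words):
    # build word-based Huffman codes; each heap entry carries a partial code table
    freq = Counter(words)
    heap = [(f, i, {w: ""}) for i, (w, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, table1 = heapq.heappop(heap)
        f2, _, table2 = heapq.heappop(heap)
        merged = {w: "0" + code for w, code in table1.items()}
        merged.update({w: "1" + code for w, code in table2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]                     # word -> bit string

def build_block_index(words, block_size=1000):
    # block inverted file: word -> set of block ids to decode when answering a query
    index = defaultdict(set)
    for pos, w in enumerate(words):
        index[w].add(pos // block_size)
    return index

text = "to be or not to be that is the question".split()
codes = word_huffman_codes(text)
index = build_block_index(text, block_size=4)
```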
