71 |
A heuristic method for reducing message redundancy in a file transfer environment. Bodwell, William Robert. January 1976.
Intercomputer communication involves the transfer of information between intelligent hosts. Since communication costs are almost proportional to the amount of data transferred, the processing capability of the respective hosts might advantageously be applied, through pre-processing and post-processing of the data, to reduce redundancy. The major emphasis of this research is the development of the Substitution Method, which minimizes the data transfer between hosts required to reconstruct user JCL files, Fortran source files, and data files.
The technique requires that a set of user files from each file category be examined to determine the frequency distribution of symbols, fixed strings, and repeated symbol strings, thereby characterizing both symbol and structural redundancy. Information gathered during the examination of these files, combined with the user-created Source Language Syntax Table, generates Encoding/Decoding Tables that are used to reduce both kinds of redundancy. The Encoding/Decoding Tables allow frequently encountered strings to be represented by only one or two symbols through the use of table shift symbols. The table shift symbols allow less frequently encountered symbols of the original alphabet to be represented as entries in a Secondary Encoding/Decoding Table. A technique is also described that enables a programmer to easily modify his Fortran program so that he can take advantage of the Substitution Method's ability to compress data files by removing both informational and structural redundancy.
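The thesis gives no code, but the table-driven substitution it describes can be sketched as follows; the table contents, the shift code value, and the literal-escape fallback here are invented for illustration and are not the thesis's actual Encoding/Decoding Tables.

```python
# Illustrative sketch of table-driven substitution coding, not the thesis's
# actual tables: frequent strings collapse to single primary-table codes, and
# rare symbols are escaped into a secondary table via a table shift symbol.

SHIFT = 0xFF  # hypothetical shift code announcing a secondary-table entry

# Hypothetical Encoding Tables built from a survey of one file category.
PRIMARY = {"      WRITE(6,": 0x01, "      DO ": 0x02, "//JOB ": 0x03, " ": 0x04}
SECONDARY = {"@": 0x00, "#": 0x01, "~": 0x02}  # rarely encountered symbols

def encode(text: str) -> bytes:
    out = bytearray()
    i = 0
    strings = sorted(PRIMARY, key=len, reverse=True)  # longest match first
    while i < len(text):
        match = next((s for s in strings if text.startswith(s, i)), None)
        if match is not None:
            out.append(PRIMARY[match])          # one symbol for a whole string
            i += len(match)
        elif text[i] in SECONDARY:
            out += bytes([SHIFT, SECONDARY[text[i]]])  # two-symbol escape
            i += 1
        else:
            out += bytes([SHIFT, 0x80 | (ord(text[i]) & 0x7F)])  # literal fallback
            i += 1
    return bytes(out)

print(encode("      DO 10 I = 1, N"))  # the common Fortran prefix becomes one code
```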
Each user file requested for transfer is pre-processed at cost C[prep] to remove data (both symbol and structural redundancy) that need not be transferred for faithful reproduction of the file. The file is transferred over a noiseless channel at cost C[ptran]; the channel consists of presently available or proposed services of the common carriers and specialized common carriers. The received file is post-processed to reconstruct the original source file at cost C[post]. The costs associated with pre-processing, transferring, and post-processing are compared with the cost C[otran] of transferring the entire file in its original form. / Ph. D.
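Though the abstract only states that the costs are compared, the implied break-even criterion is that the Substitution Method pays off whenever C[prep] + C[ptran] + C[post] < C[otran], i.e. whenever the savings in transfer cost exceed the combined processing overhead at the two hosts.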
|
72 |
Lossless reversible text transforms. Awan, Fauzia Salim. 01 July 2001.
No description available.
|
73 |
The use of Hadamard Transform as a data compression technique in the development of a 3-dimensional fluorescence spectral library for qualitative analysis. Ishihara, Fumiko. January 1989.
In recent years, chemical instrumentation has become much more sophisticated. Most analytical equipment now incorporates a microprocessor or is interfaced to a microcomputer. As a result, chemists can collect an immense amount of data on a single sample in a short period of time. While gathering so much information has clear advantages, it also creates problems of its own. Today, analysts are commonly faced with the dual problems of storing and analyzing the resulting flood of information.
The goal of this research has been to address the problems of data storage and data analysis. Specifically, data compression techniques and spectral search and match algorithms have been developed. The data compression techniques developed utilize the Hadamard Transform and the modified zero-crossing clipping algorithm. The spectral search technique utilizes the unique format of the compressed and clipped data to greatly accelerate spectrum identification.
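The abstract does not spell out the algorithms, so the sketch below is only one plausible reading: spectra are compressed with a fast Walsh-Hadamard transform, truncated to a fixed number of leading coefficients, and clipped to sign bits so that library search reduces to cheap Hamming-distance comparisons. The truncation length and the sign-bit interpretation of the zero-crossing clipping are assumptions, not details taken from the thesis.

```python
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    y = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(y):
        for i in range(0, len(y), 2 * h):
            a = y[i:i + h].copy()
            b = y[i + h:i + 2 * h].copy()
            y[i:i + h], y[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return y

def compress_and_clip(spectrum, keep=64):
    """Keep the first `keep` Hadamard coefficients, then clip each to its sign bit."""
    coeffs = fwht(spectrum)[:keep]
    return (coeffs >= 0).astype(np.uint8)   # 1 bit per retained coefficient

def library_search(query_bits, library_bits):
    """Index of the library spectrum with the smallest Hamming distance."""
    distances = np.count_nonzero(library_bits != query_bits, axis=1)
    return int(np.argmin(distances))

# Toy library of two 256-point "spectra" and a slightly noisy query of the second.
rng = np.random.default_rng(0)
lib = rng.random((2, 256))
library_bits = np.array([compress_and_clip(s) for s in lib])
query = lib[1] + 0.01 * rng.standard_normal(256)
print(library_search(compress_and_clip(query), library_bits))   # expect 1
```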
To demonstrate the feasibility of this technique, three-dimensional fluorescence spectra of polynuclear aromatic compounds have been used.
The results indicate that both the data compression techniques and their application to a library search system for three-dimensional fluorescence spectroscopy were successful. / Ph. D.
|
74 |
Hardware Accelerator Design for Scientific Computing and Machine Learning Workloads. Huang, Xuanyuanliang (Paul). January 2025.
Scientific computing and machine learning are two major drivers of modern computing demand. The former has applications in physics simulation and mathematical modeling, while the latter has become the mainstream approach for tasks such as image classification and natural language processing. Despite the seemingly disparate application domains, the two types of workloads are essentially similar in that their dominant computing kernels are both sparse/dense matrix-vector multiplications. Moreover, conventional computing platforms for these workloads (multicore CPUs/GPUs and microprocessors) consume a large amount of energy and/or, in many cases, still lack performance for either cloud or edge applications. It is therefore essential to develop custom hardware accelerators that improve energy efficiency and performance and alleviate the performance bottlenecks of CPUs/GPUs.
Toward this goal, the thesis presents four hardware accelerator/architecture designs demonstrating significant improvements in key metrics over prior art. The first design is a novel solver chip for partial differential equations (PDEs), a critical mathematical model in scientific computing. Prototyped in 65 nm, the chip features a programmable floating-point processor array architecture. It dramatically expands the range of mappable problems and improves the solution precision compared to prior art while being 40x faster under the same energy-delay product.
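For context on the kind of kernel such a solver chip targets (this is not the chip's actual architecture or numerical method), a plain Jacobi iteration for the 2D Poisson equation on a unit square might look like the following sketch.

```python
import numpy as np

def jacobi_poisson_2d(f, h, iters=2000):
    """Jacobi iterations for -laplace(u) = f on a square grid with spacing h
    and zero Dirichlet boundary values (illustrative only)."""
    u = np.zeros_like(f)
    for _ in range(iters):
        u_new = u.copy()
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:] +
                                    h * h * f[1:-1, 1:-1])
        u = u_new
    return u

n = 33                        # 33 x 33 grid on the unit square
h = 1.0 / (n - 1)
f = np.ones((n, n))           # constant source term
u = jacobi_poisson_2d(f, h)
print(float(u.max()))         # peak value, roughly 0.074 at the center
```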
The second design, part of a system on a chip (SoC) manufactured in 28 nm, is an accelerator for the inference process of convolutional neural networks (CNNs), a popular type of neural network model in machine learning. It incorporates digital in-memory computing modules to execute the matrix-vector multiplication operations efficiently. The resulting SoC achieves an 88x improvement in energy-delay product over prior art.
The third design aims to improve the energy efficiency of sparse matrix-vector multiplication (SpMV) by reducing off-chip data movement. Using the gzip compression algorithm, the design compresses matrix data offline in off-chip memory and decompresses it on the fly at runtime using custom on-chip decompression hardware. Prototyped in 28 nm, the chip achieves a 2.32x system-level energy efficiency improvement over prior art.
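A software analogue of the idea (not the chip's datapath) is sketched below: the CSR arrays of a sparse matrix are gzip-compressed ahead of time, and the multiply decompresses them again just before use, trading decompression work for less stored and transferred data. The toy matrix and the use of Python's zlib as a stand-in for the on-chip gzip decompressor are illustrative assumptions.

```python
import zlib
import numpy as np

# Toy CSR matrix (4x4):
# [[5 0 0 1],
#  [0 2 0 0],
#  [0 0 3 0],
#  [4 0 0 6]]
values  = np.array([5.0, 1.0, 2.0, 3.0, 4.0, 6.0])
cols    = np.array([0, 3, 1, 2, 0, 3], dtype=np.int32)
row_ptr = np.array([0, 2, 3, 4, 6], dtype=np.int32)

# "Offline" step: store the CSR arrays in compressed form (stand-in for
# compressed matrix data sitting in off-chip DRAM).
packed = [zlib.compress(a.tobytes()) for a in (values, cols, row_ptr)]

def spmv_from_compressed(packed, x):
    """Decompress the CSR arrays on the fly, then compute y = A @ x."""
    vals = np.frombuffer(zlib.decompress(packed[0]), dtype=np.float64)
    cidx = np.frombuffer(zlib.decompress(packed[1]), dtype=np.int32)
    rptr = np.frombuffer(zlib.decompress(packed[2]), dtype=np.int32)
    y = np.zeros(len(rptr) - 1)
    for r in range(len(y)):
        start, end = rptr[r], rptr[r + 1]
        y[r] = np.dot(vals[start:end], x[cidx[start:end]])
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
print(spmv_from_compressed(packed, x))   # expected: [ 9.  4.  9. 28.]
```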
The fourth design applies on-the-fly gzip decompression to commercial GPU platforms, aiming to expand the effective off-chip memory bandwidth. The design proposes a compressed block cache prefetching scheme to address the critical challenges of using gzip for memory decompression in GPUs. Evaluated using open-source simulators, the design achieves 5.3-20.3% performance improvements over the baseline model.
|
75 |
Segmented approximation and analysis of stochastic processes. Akant, Adnan. January 1977.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1977. / Vita. / Includes bibliographical references.
|
76 |
Strategien für die Instruktionscodekompression in cachebasierten, eingebetteten Systemen [Strategies for instruction-code compression in cache-based embedded systems]. Jachalsky, Jörn. January 1900.
Thesis--Technische Universität Hannover. / Includes bibliographical references.
|
77 |
Vector wavelet transforms for the coding of static and time-varying vector fields. Hua, Li. January 2003.
Thesis (Ph. D.)--Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
|
78 |
Object-based unequal error protection. Marka, Madhavi. January 2002.
Thesis (M.S.)--Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
|
79 |
Compressing scientific data with control and minimization of the L-infinity metric under the JPEG 2000 framework. Lucero, Aldo. January 2007.
Thesis (Ph. D.)--University of Texas at El Paso, 2007. / Title from title screen. Vita. CD-ROM. Includes bibliographical references. Also available online.
|
80 |
SPARC16 = uma nova visão de compressão para processadores SPARC / SPARC16: a new compression approach for SPARC processors. Ecco, Leonardo Luiz. 17 August 2018.
Advisors: Rodolfo Jardim de Azevedo, Paulo César Centoducatte / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Previous issue date: 2010 / Abstract: RISC processors can be used to face the ever-increasing demand for performance required by embedded systems. Nevertheless, these architectures have as a drawback a poor code density. Alternate encodings for instruction sets, such as MIPS16 and Thumb, represent an effective approach to deal with this problem. This work proposes an alternate encoding for the SPARCv8 architecture. The new encoding, called SPARC16, was designed with the aid of an integer linear programming model. The new instructions are 16 bits wide and are easily translated to their 32-bit counterparts at execution time, making it possible to place a decompressor engine before the decode stage of a SPARC processor and use the rest of the pipeline transparently. The decompressor engine was designed and integrated into the Leon 3 processor (SPARCv8), causing a 24% increase in area and no timing overhead. Only an assembler was implemented for the SPARC16 extension; the decompressor engine was validated using programs, written directly in assembly language, that cover all the SPARC16 instructions. The compression ratios for the programs belonging to the Mediabench and Mibench benchmarks were obtained by inferring how SPARCv8 code would be represented with SPARC16 instructions. Through this method, compression ratios as low as 58% were achieved (for the cjpeg program), with an average of 61.27% for the Mediabench programs and 60.77% for the Mibench programs. Using the same approach, an evaluation of how SPARC16 changes instruction cache access patterns showed reductions in the number of misses even greater than 50%. / Master's degree / Computer Science / Master in Computer Science
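The abstract does not define the 16-bit instruction formats, so the sketch below only illustrates the decompressor concept: it invents a toy 16-bit register-register ADD format and expands it into the 32-bit SPARCv8 format-3 ADD encoding. The 16-bit field layout is entirely hypothetical; only the 32-bit target layout follows the SPARCv8 specification.

```python
# Toy 16-bit format (hypothetical): [4-bit opcode | 4-bit rd | 4-bit rs1 | 4-bit rs2]
# Expansion target: SPARCv8 format-3 ADD: op=10, rd, op3=000000, rs1, i=0, rs2.

ADD16 = 0x1  # hypothetical 16-bit opcode for "add rd, rs1, rs2"

def expand(insn16: int) -> int:
    """Expand one hypothetical 16-bit instruction to its 32-bit SPARCv8 form."""
    opcode = (insn16 >> 12) & 0xF
    rd     = (insn16 >> 8) & 0xF      # the toy format reaches only registers r0-r15
    rs1    = (insn16 >> 4) & 0xF
    rs2    = insn16 & 0xF
    if opcode == ADD16:
        return (0b10 << 30) | (rd << 25) | (0b000000 << 19) | (rs1 << 14) | rs2
    raise ValueError("unhandled toy opcode")

# "add %g1, %g2, %g3" written in the toy 16-bit format:
insn16 = (ADD16 << 12) | (3 << 8) | (1 << 4) | 2
print(hex(expand(insn16)))   # 0x86004002, the 32-bit SPARCv8 encoding of the add
```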
|