131

SIGNAL PROCESSING ABOUT A DISTRIBUTED DATA ACQUISITION SYSTEM

Kolb, John
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / Because modern data acquisition systems use digital backplanes, it is logical for more and more data processing to be done in each Data Acquisition Unit (DAU), or even in each module. The processing related to an analog acquisition module typically takes the form of digital signal conditioning for range adjustment, linearization, and filtering. Some of the advantages of this are discussed in this paper. The next stage is powerful processing boards within DAUs for data reduction and third-party algorithm development. Once data is being written to and read from powerful processing modules, an obvious next step is networking and decom-less access to data. This paper discusses some of the issues related to these types of processing.
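The per-channel conditioning chain described above (range adjust, linearization, filtering) can be sketched in a few lines. This is an illustrative toy, not the paper's DAU firmware; the gain, offset, and polynomial coefficients are made-up placeholders.

```python
# Sketch of digital signal conditioning inside a DAU module:
# range adjust, then polynomial linearization, then low-pass filtering.
# All coefficients below are hypothetical.

def range_adjust(sample, gain, offset):
    """Map a raw ADC count into engineering units."""
    return gain * sample + offset

def linearize(value, coeffs):
    """Apply a polynomial correction, e.g. for a sensor's transfer curve."""
    return sum(c * value**i for i, c in enumerate(coeffs))

def moving_average(samples, window=4):
    """Simple FIR low-pass filter over a sample stream."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i - lo + 1))
    return out

def condition(raw, gain=0.01, offset=-1.0, coeffs=(0.0, 1.0, 0.001)):
    """Full conditioning pipeline for one analog channel."""
    adjusted = [range_adjust(s, gain, offset) for s in raw]
    linear = [linearize(v, coeffs) for v in adjusted]
    return moving_average(linear)
```

In a real DAU these stages would run per-sample in fixed-point hardware or DSP code, but the structure is the same pipeline: scale, correct, smooth.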
132

A REAL-TIME HIGH PERFORMANCE DATA COMPRESSION TECHNIQUE FOR SPACE APPLICATIONS

Yeh, Pen-Shu, Miller, Warner H.
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / A high-performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient, in that error propagation is contained within a few scan lines. The algorithm is based on a block transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desired compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2000.
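The embedded property of bit-plane encoding (truncate the bit string anywhere and get a lower-rate approximation) can be sketched as follows. This is a generic illustration for non-negative transform coefficients, not the authors' coder, which adds the block transform, sign handling, and entropy coding.

```python
def bitplanes(coeffs, nplanes):
    """Emit bits plane by plane, most significant first: truncating the
    resulting string at any point yields an embedded approximation."""
    bits = []
    for p in range(nplanes - 1, -1, -1):
        for c in coeffs:
            bits.append((c >> p) & 1)
    return bits

def reconstruct(bits, n, nplanes):
    """Rebuild n coefficients from a possibly truncated bit string."""
    vals = [0] * n
    for i, b in enumerate(bits):
        p = nplanes - 1 - i // n   # which plane this bit belongs to
        vals[i % n] |= b << p
    return vals
```

Sending only the first plane of `[5, 3, 0, 7]` already locates the large coefficients; each further plane refines the values, which is what makes the rate exactly controllable.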
133

HYBRID OPTICAL/DIGITAL PROCESSING APPROACH FOR INTERFRAME IMAGE DATA COMPRESSION

ITO, HIROYASU NICOLAS. January 1982
Image data compression is an active topic of research in image processing. Traditionally, most image data compression schemes have been dominated by digital processing, because digital systems are inherently flexible and reliable. However, it has been demonstrated that optical processing can be used for spatial image data compression, using a method called interpolated differential pulse code modulation (IDPCM). This compression scheme functions analogously to conventional digital DPCM compression, except that the specific compression steps are implemented by incoherent optical processing. The main objective of this research is to extend IDPCM to interframe compression, design such systems, and evaluate, by digital simulation, their compression performance limits in the absence of channel errors at subjectively acceptable image quality. We start with a review of digital spatial and interframe compression techniques and their implications for optical implementation. Then the technological background of the electro-optical devices that have made hybrid optical/digital image data compression possible is briefly discussed. A detailed description of IDPCM coding is also given, along with ways that IDPCM can be extended to interframe compression. Finally, two hybrid optical/digital interframe compression architectures are proposed, simulated, and evaluated to assess the potential performance of optically implemented interframe compression systems. Excellent reconstructed image quality is obtained with the proposed adaptive hybrid (O/D) IDPCM/frame-replenishment technique at an overall transmission rate of 3 Mbits/sec, an average bit rate of 1.5 bits/pixel, and an average compression ratio of 5.2:1.
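The conventional DPCM that IDPCM mirrors optically can be sketched digitally in a few lines: predict each sample from the previous reconstruction and quantize only the prediction error. This is a minimal first-order scalar sketch, not the dissertation's optical implementation or its interpolated variant.

```python
def dpcm_encode(samples, step=4):
    """Quantize the difference between each sample and the prediction
    (the previous reconstructed sample), as in basic first-order DPCM."""
    pred, codes = 0, []
    for s in samples:
        q = round((s - pred) / step)  # quantized prediction error
        codes.append(q)
        pred = pred + q * step        # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=4):
    """Accumulate dequantized prediction errors to rebuild the signal."""
    pred, out = 0, []
    for q in codes:
        pred = pred + q * step
        out.append(pred)
    return out
```

Because the encoder predicts from the *reconstructed* value rather than the original, quantization error cannot accumulate: each decoded sample stays within half a quantizer step of its input.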
134

Optimizing bandwidth of tactical communications systems

Cox, Criston W.
Current tactical networks are oversaturated, often slowing systems to unusable speeds. Using data collected from major exercises and Operation Iraqi Freedom II (OIF II), a typical existing tactical network is modeled and analyzed with NETWARS, a DISA-sponsored communication systems modeling and simulation program. Optimization technologies are then introduced, such as network compression, caching, Quality of Service (QoS), and the Space Communication Protocol Standards Transport Protocol (SCPS-TP). The model is then altered to reflect an optimized system, and simulations are run for comparison. Data for the optimized model was obtained by testing commercial optimization products known as Protocol Enhancement Proxies (PEPs) at the Marine Corps Tactical Systems Support Activity (MCTSSA) testing laboratory.
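Of the optimization technologies listed, caching is the simplest to illustrate: repeated requests are served locally instead of re-crossing the tactical WAN. The accounting sketch below uses invented request names and sizes, not OIF II measurements.

```python
def bytes_transferred(requests, sizes, cache_enabled):
    """Total bytes crossing the WAN for a request sequence; with the
    cache enabled, repeated requests are served locally at zero cost."""
    seen, total = set(), 0
    for r in requests:
        if cache_enabled and r in seen:
            continue  # cache hit: no WAN traffic
        seen.add(r)
        total += sizes[r]
    return total
```

With a workload of mostly repeated fetches, the savings scale with the repeat rate, which is why PEP-style proxies target exactly this traffic pattern.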
135

Regression Wavelet Analysis for Lossless Coding of Remote-Sensing Data

Marcellin, Michael W., Amrani, Naoufal, Serra-Sagristà, Joan, Laparra, Valero, Malo, Jesus. 08 May 2016
A novel wavelet-based scheme to increase coefficient independence in hyperspectral images is introduced for lossless coding. The proposed regression wavelet analysis (RWA) uses multivariate regression to exploit the relationships among wavelet-transformed components. It builds on our previous nonlinear schemes that estimate each coefficient from neighboring coefficients. Specifically, RWA performs a pyramidal estimation in the wavelet domain, thus reducing the statistical relations in the residuals and the energy of the representation compared to existing wavelet-based schemes. We propose three regression models to address estimation accuracy, component scalability, and computational complexity; other suitable regression models could be devised for other goals. RWA is invertible, allows a reversible integer implementation, and does not expand the dynamic range. Experimental results over a wide range of sensors, such as AVIRIS, Hyperion, and the Infrared Atmospheric Sounding Interferometer, suggest that RWA outperforms not only principal component analysis and wavelets but also CCSDS-123, the best and most recent coding standard in remote sensing.
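The core idea of RWA, regressing one transformed component on others and coding only the residuals, can be sketched with ordinary least squares. This is a simplified linear sketch: the actual RWA estimates pyramidally in the wavelet domain and uses a reversible integer mapping, neither of which is shown here.

```python
import numpy as np

def regression_residuals(X, y):
    """Predict one component (y) from others (columns of X) by least
    squares; only the residuals would then need to be coded."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef
```

When the components are strongly correlated, as adjacent hyperspectral bands are, the residual energy is far below the original component energy, which is precisely the compression gain the abstract describes.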
136

Permutation-based data compression

Unknown Date
The use of permutations in data compression is worthy of further exploration. Prior work on permutation-based video compression was primarily oriented toward lossless algorithms. The study of previous algorithms has led to a new algorithm that can be either lossless or lossy, and for which the amount of compression and the quality of the output can be controlled. The lossless version of our algorithm performs close to lossy versions of H.264 and improves on them for the majority of the videos we analyzed. Our algorithm could be used where lossless compression is needed and the video sequences form a single scene, e.g., medical videos, where loss of information could be risky or expensive. Some results on permutations, which may be of independent interest, arose in developing this algorithm; we report on these as well. / by Amalya Mihnea. / Thesis (Ph.D.)--Florida Atlantic University, 2011. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2011. Mode of access: World Wide Web.
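As one illustration of how a permutation can aid lossless coding (not the dissertation's algorithm, which the abstract does not detail), a sequence can be stored as its sorting permutation plus the deltas of the sorted values; the monotone delta stream is friendlier to a downstream entropy coder.

```python
def perm_encode(values):
    """Store the sorting permutation and deltas of the sorted values."""
    order = sorted(range(len(values)), key=values.__getitem__)
    s = [values[i] for i in order]
    deltas = [s[0]] + [s[i] - s[i - 1] for i in range(1, len(s))]
    return order, deltas

def perm_decode(order, deltas):
    """Invert: rebuild the sorted values, then undo the permutation."""
    s, acc = [], 0
    for d in deltas:
        acc += d
        s.append(acc)
    out = [0] * len(order)
    for pos, i in enumerate(order):
        out[i] = s[pos]
    return out
```

The round trip is exact, so the representation is lossless; a lossy variant could coarsen the deltas while keeping the permutation.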
137

Information-theoretics based technoeconomic growth models: simulation and computation of forecasting in telecommunication services

Unknown Date
This research is concerned with algorithmic representation of technoeconomic growth in modern and next-generation telecommunications, including Internet service. The goal of this study is to establish the associated forecasting; the envisioned tasks include: (i) reviewing the technoeconomic considerations prevailing in the telecommunication (telco) service industry and their implications; (ii) studying relevant aspects of underlying complex-system evolution (akin to biological systems); (iii) modeling the co-evolution of competitive business structures using dichotomous (flip-flop) states as seen in predator-prey evolution; (iv) conceiving a novel algorithm based on information-theoretic principles for technoeconomic forecasting, built on a modified Fisher-Kaysen model consistent with the proportional-fairness concept of consumers' willingness-to-pay; and (v) evaluating forecast needs on inter-office-facility-based congestion-sensitive traffic. Commensurate with these topics, the necessary algorithms, analytical derivations, and compatible models are proposed. Relevant computational exercises are performed with MatLab[TM] using data gathered from the open literature on the service profiles of telecommunication companies (telcos), and ad hoc model verifications are performed on the results. Lastly, discussions and inferences are made, with open questions identified for further research. / by Raef Rashad Yassin. / Thesis (Ph.D.)--Florida Atlantic University, 2012. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
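Technoeconomic growth forecasts of the kind described typically rest on S-shaped diffusion curves. The sketch below is a generic logistic model with illustrative parameters, not the dissertation's modified Fisher-Kaysen model.

```python
from math import exp

def logistic(t, K, r, t0):
    """Generic S-curve used in technology-diffusion forecasting:
    K is the saturation level, r the growth rate, t0 the midpoint.
    Parameters are illustrative, not fitted to any telco data."""
    return K / (1.0 + exp(-r * (t - t0)))

# Hypothetical subscriber forecast: saturation at 100 (arbitrary units),
# midpoint at year 5.
forecast = [logistic(t, 100.0, 1.0, 5.0) for t in range(11)]
```

Fitting K, r, and t0 to observed service-uptake data and extrapolating is the simplest version of the forecasting exercise the abstract describes.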
138

[en] CRYPTO-COMPRESSION PREFIX CODING / [pt] CODIFICAÇÃO LIVRE DE PREFIXO PARA CRIPTO-COMPRESSÃO

CLAUDIO GOMES DE MELLO. 16 May 2007
[en] Data compression and encryption are essential when digital data is stored or transmitted over insecure channels. Usually, two sequential operations are applied: first data compression, to save disk space and reduce transmission costs, and second data encryption, to provide confidentiality. This solution works well for most applications, but it requires two expensive operations, and to access the data one must first decipher and then decompress the entire ciphertext. In this work we propose algorithms that produce data that is both compressed and encrypted. The first contribution of this thesis is the algorithm ADDNULLS (Selective Addition of Nulls), which uses a steganographic technique to hide the real symbols of the encoded text among fake ones, based on the selective insertion of a variable number of null symbols after the real ones. It is shown that the loss in coding and decoding rates is small; the disadvantage is ciphertext expansion. The second contribution is the algorithm HHC (Homophonic-Canonical Huffman), which creates a new homophonic tree from the original canonical Huffman tree for the input text. Experimental results show that adding security does not significantly decrease performance. The third contribution is the algorithm RHUFF (Randomized Huffman), a variant of Huffman coding that defines a crypto-compression procedure with randomized output. The goal is to generate random ciphertexts that obscure the redundancies of the plaintext (confusion). The algorithm uses homophonic substitution, canonical Huffman codes, and a secret key for ciphering; the key is based on an initial permutation function that dissipates the redundancy of the plaintext over the ciphertext (diffusion). The fourth contribution is the algorithm HSPC2 (Homophonic Substitution Prefix Codes with 2 homophones), a provably secure scheme combining a homophonic substitution algorithm with a key. In the encoding process, HSPC2 appends a one-bit suffix to some codes; a secret key and a homophonic rate parameter control this insertion. It is shown that breaking HSPC2 is an NP-complete problem.
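The ADDNULLS idea, keyed insertion of a variable number of null symbols after each real one, can be sketched with a seeded generator shared by sender and receiver. This is a toy with a visible filler symbol; the real scheme operates on prefix-code bitstreams where nulls are indistinguishable from codewords, and `random.Random` is not a cryptographic generator.

```python
import random

def addnulls_encode(symbols, key, max_nulls=2, filler='*'):
    """After each real symbol, insert a keyed-random run of nulls."""
    rng = random.Random(key)
    out = []
    for s in symbols:
        out.append(s)
        out.extend([filler] * rng.randrange(max_nulls + 1))
    return out

def addnulls_decode(stream, key, max_nulls=2):
    """Replay the keyed generator to know how many nulls to skip."""
    rng = random.Random(key)
    out, i = [], 0
    while i < len(stream):
        out.append(stream[i])
        i += 1 + rng.randrange(max_nulls + 1)
    return out
```

Only a holder of the key can separate real symbols from nulls, at the cost of the ciphertext expansion the abstract notes.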
139

Image coding with a lapped orthogonal transform.

January 1993
by Patrick Chi-man Fung. / Thesis (M.Sc.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 57-58). / Contents: Lists of Figures, Images, and Tables; Notations; 1. Introduction (p.1); 2. Theory (p.3): Matrix Representation of LOT, Feasibility of LOT, Properties of Good Feasible LOT, An Optimal LOT, Approximation of an Optimal LOT, Representation of an Approximately Optimal LOT; 3. Implementation (p.17): Mathematical Background, Analysis of LOT Flowgraph (The Fundamental LOT Building Block, +1/-1 Butterflies), Conclusion; 4. Results (p.27): Objective of Energy Packing, Nature of Target Images, Methodology of LOT Coefficient Selection, dB RMS Error in Pixel Values, Negative Pixel Values in Reverse LOT, LOT Coefficient Energy Distribution, Experimental Data; 5. Discussion and Conclusions (p.46): RMS Error (dB) and LOT Coefficient Drop Ratio (Numeric Experimental Results, Human Visual Response, Conclusion), Number of Negative Pixel Values in RLOT, LOT Coefficient Energy Distribution, Effect of Changing the Block Size; References (p.57); Appendix: Tables of Experimental Data (p.59).
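The thesis's coefficient-drop experiments measure dB RMS error against the fraction of transform coefficients discarded. The sketch below reproduces that style of measurement with an orthonormal DCT-II as a stand-in transform; the LOT itself differs in that its basis functions overlap adjacent blocks, which this sketch does not model.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis, used here as a stand-in transform."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def rms_error_db(block, drop_ratio):
    """Transform, zero the smallest coefficients, invert, and report
    the RMS reconstruction error in dB."""
    n = len(block)
    M = dct_matrix(n)
    c = M @ block
    idx = np.argsort(np.abs(c))[:int(drop_ratio * n)]
    c[idx] = 0.0                      # drop the weakest coefficients
    err = block - M.T @ c             # orthonormal, so inverse is M.T
    rms = np.sqrt(np.mean(err**2))
    return 20 * np.log10(rms) if rms > 0 else -np.inf
```

Sweeping `drop_ratio` and plotting the returned dB values against it gives exactly the kind of error-versus-drop-ratio curve the thesis tabulates.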
140

Non-uniform time-scale modification of speech

Holtzman Dantus, Samuel. January 1980
Thesis (Elec.E)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1980. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Bibliography: leaves 173-175. / by Samuel Holtzman Dantus. / Elec.E
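Uniform time-scale modification, the baseline that non-uniform TSM generalizes, can be sketched with naive overlap-add: analysis frames are taken at one hop and synthesis frames laid down at another. This is an illustrative sketch only; practical TSM (e.g. SOLA/WSOLA) aligns frames to avoid phase artifacts, and the non-uniform case of the thesis would vary the rate over time.

```python
import numpy as np

def ola_stretch(x, rate, frame=256, hop_out=128):
    """Naive overlap-add TSM: analysis hop = rate * synthesis hop,
    so rate > 1 shortens the signal and rate < 1 lengthens it."""
    hop_in = int(round(rate * hop_out))
    win = np.hanning(frame)
    n_frames = max(1, (len(x) - frame) // hop_in + 1)
    out = np.zeros(hop_out * (n_frames - 1) + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        a, s = i * hop_in, i * hop_out
        out[s:s + frame] += win * x[a:a + frame]
        norm[s:s + frame] += win
    norm[norm < 1e-8] = 1.0   # avoid dividing by zero at the edges
    return out / norm
```

Making `rate` a function of time rather than a constant, e.g. slowing only transient regions, is the non-uniform modification the thesis studies.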
