121

EFFICIENT AND SECURE IMAGE AND VIDEO PROCESSING AND TRANSMISSION IN WIRELESS SENSOR NETWORKS

Assegie, Samuel January 2010 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Sensor nodes forming a network and communicating wirelessly are highly useful in a variety of applications, including battlefield (military) surveillance, building security, medical and health services, environmental monitoring in harsh conditions, and scientific investigations on other planets. But these wireless sensors are resource constrained: power supply, communication bandwidth, processing speed, and memory space are all limited. One way to make the most of these constrained resources is to apply signal processing and compress the sensor readings. Processing data usually consumes much less power than transmitting it over a wireless medium, so trading computation for communication by compressing data before transmission is an effective way to reduce a sensor node's total power consumption. However, existing state-of-the-art compression algorithms are not suitable for wireless sensor nodes because of these limited resources. There is therefore a need to design signal processing (compression) algorithms with the resource constraints of wireless sensors in mind. In this work, we designed a lightweight codec system with surveillance as the target application. In designing the codec system, we proposed new design ideas and also tweaked existing encoding algorithms to fit the target application. In addition, data transmitted among sensors and between sensors and the base station has to be secured. We addressed some of these security issues by assessing the security of wavelet tree shuffling as the sole security mechanism.
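As a rough illustration of the computation-for-communication trade-off described in this abstract, the sketch below compares transmitting raw sensor readings against compressing them first; the per-bit energy constants and the compression ratio are assumed example values, not figures from the thesis.

```python
# Illustrative energy model for compress-then-transmit on a sensor node.
# All constants below are assumed example values, not measurements from the thesis.

E_TX_PER_BIT = 1.0e-6    # energy to transmit one bit over the radio (J), assumed
E_CPU_PER_BIT = 5.0e-9   # energy to run compression over one input bit (J), assumed
COMPRESSION_RATIO = 4.0  # ratio assumed for a lightweight codec

def energy_raw(bits: int) -> float:
    """Energy to transmit the readings uncompressed."""
    return bits * E_TX_PER_BIT

def energy_compressed(bits: int) -> float:
    """Energy to compress the readings, then transmit the smaller payload."""
    return bits * E_CPU_PER_BIT + (bits / COMPRESSION_RATIO) * E_TX_PER_BIT

if __name__ == "__main__":
    frame_bits = 8 * 64 * 64  # one 64x64 8-bit surveillance frame
    raw, comp = energy_raw(frame_bits), energy_compressed(frame_bits)
    print(f"raw transmit:        {raw:.4f} J")
    print(f"compress + transmit: {comp:.4f} J ({100 * (1 - comp / raw):.1f}% saved)")
```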
122

A Comparative Study of Data Transformations for Efficient XML and JSON Data Compression. An In-Depth Analysis of Data Transformation Techniques, including Tag and Capital Conversions, Character and Word N-Gram Transformations, and Domain-Specific Data Transforms using SMILES Data as a Case Study

Scanlon, Shagufta A. January 2015 (has links)
XML is a widely used data exchange format. Its verbose nature creates a need to store and process this type of data efficiently using compression. Various general-purpose transforms and compression techniques exist that can be used to transform and compress XML data, and more compact alternatives to XML, namely JSON, have been developed because of XML's verbosity. Similarly, there is a need to efficiently store and process the SMILES data used in chemoinformatics. General-purpose transforms and compressors can compress this type of data to a certain extent; however, these techniques are not specific to SMILES data. The primary contribution of this research is to provide developers who use XML, JSON or SMILES data with key knowledge of the best transformation techniques to use with certain types of data, and of which compression techniques would give the best compressed output size and processing times, depending on their requirements. The main study in this thesis investigates the extent to which applying data transforms prior to data compression can further improve the compression of XML and JSON data. It provides a comparative analysis of applying a variety of data transforms and their variations to a number of different types of XML and JSON-equivalent datasets of various sizes, and of applying different general-purpose compression techniques to the transformed data. A case study is also conducted to investigate whether data transforms prior to compression can improve the compression of data within a specific domain. / The files of software accompanying this thesis cannot be presented online with the thesis.
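To illustrate the general idea of a tag transform applied before general-purpose compression, here is a minimal sketch; the sample document, the token scheme, and the use of zlib are illustrative assumptions, not the specific transforms or compressors evaluated in the thesis.

```python
# Minimal sketch: replace verbose XML tag names with short tokens before
# applying a general-purpose compressor, and compare against compressing
# the raw document. Sample data and the token scheme are assumptions.
import re
import zlib

xml = (
    "<molecules>"
    + "".join(f"<molecule><id>{i}</id><name>mol{i}</name></molecule>" for i in range(200))
    + "</molecules>"
)

def tag_transform(doc: str) -> str:
    """Map each distinct element name to a short token (T0, T1, ...)."""
    tags = sorted(set(re.findall(r"</?([A-Za-z_][\w.-]*)", doc)))
    # Naive sequential replacement; assumes tokens do not collide with tag names.
    for i, tag in enumerate(tags):
        doc = re.sub(rf"(</?){tag}\b", rf"\g<1>T{i}", doc)
    return doc

raw_size = len(zlib.compress(xml.encode()))
transformed_size = len(zlib.compress(tag_transform(xml).encode()))
print(f"compressed raw XML:         {raw_size} bytes")
print(f"compressed transformed XML: {transformed_size} bytes")
```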
123

Image compression using a double differential pulse code modulation technique (DPCM/DPCM)

Ma, Kuang-Hua January 1996 (has links)
No description available.
124

Vector quantization in residual-encoded linear prediction of speech

Abramson, Mark January 1983 (has links)
No description available.
125

Wavelet thresholding algorithms for terrain data compression

Hai, Fan 01 April 2000 (has links)
No description available.
126

Evaluation of Spectrum Data Compression Algorithms for Edge-Applications in Industrial Tools

Ring, Johanna January 2024 (has links)
Data volume grows every day as more and more is digitalized, which puts data management to the test. The smart tools developed by Atlas Copco save and transmit data to the cloud as a service, so that errors in tightenings can be found and reviewed by their customers. A problem is the amount of data that is lost in this process. A tightening cycle usually contains thousands of data points, and storing them all exceeds the capacity of the tool's hardware. Today, many of the data points are deleted and only a small portion of scattered data from the cycle is saved and transmitted. To avoid overfilling the storage space, the data need to be minimized. This study focuses on comparing data compression algorithms that could solve this problem. An initial literature study identified numerous data compression algorithms along with their advantages and disadvantages. Two types of compression algorithms are distinguished: lossy compression, where data is compressed by losing data points or precision, and lossless compression, where no data is lost during compression. Two lossy and two lossless algorithms were selected and evaluated with respect to compression ratio, speed and error tolerance. Poor Man's Compression - Midrange (PMC-MR) and SWING-filter are the lossy algorithms, while Gorilla and Fixed-Point Compression (FPC) are the lossless ones. The compression ratios achieved range from 39% to 99%. Since combining a lossy and a lossless algorithm yields the best compression ratios at low error tolerance, PMC-MR combined with Gorilla is suggested as the best fit for Atlas Copco's needs.
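The PMC-MR algorithm named above can be sketched in a few lines: readings are buffered while their spread stays within twice an error bound, and each finished segment is replaced by a single midrange value, so every reconstructed point deviates from the original by at most that bound. The error bound and the synthetic signal below are assumptions for illustration.

```python
# Minimal sketch of Poor Man's Compression - Midrange (PMC-MR), a lossy
# time-series compressor: a segment grows while (max - min) <= 2 * epsilon,
# then it is emitted as a single midrange value, so every point is within
# epsilon of its reconstruction. Error bound and signal are assumed examples.
from typing import List, Tuple

def pmc_mr(values: List[float], epsilon: float) -> List[Tuple[int, float]]:
    """Return (segment_length, midrange) pairs approximating `values`."""
    if not values:
        return []
    segments = []
    seg_min = seg_max = values[0]
    seg_len = 1
    for v in values[1:]:
        lo, hi = min(seg_min, v), max(seg_max, v)
        if hi - lo <= 2 * epsilon:
            seg_min, seg_max, seg_len = lo, hi, seg_len + 1
        else:
            segments.append((seg_len, (seg_min + seg_max) / 2))
            seg_min = seg_max = v
            seg_len = 1
    segments.append((seg_len, (seg_min + seg_max) / 2))
    return segments

if __name__ == "__main__":
    signal = [10.0, 10.2, 10.1, 10.3, 14.0, 14.1, 13.9, 9.0, 9.2]
    compressed = pmc_mr(signal, epsilon=0.5)
    print(compressed)  # [(4, 10.15), (3, 14.0), (2, 9.1)]
    print(f"compression ratio: {len(signal) / len(compressed):.1f}x")
```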
127

Ultra High Compression For Weather Radar Reflectivity Data

Makkapati, Vishnu Vardhan 17 November 2006 (has links)
Honeywell Technology Solutions Lab, India / Weather is a major contributing factor in aviation accidents, incidents and delays. Doppler weather radar has emerged as a potent tool to observe weather. Aircraft carry onboard radars, but their range and angular resolution are limited. Networks of ground-based weather radars provide extensive coverage of weather over large geographic regions, and it would be helpful if these data could be transmitted to the pilot. However, these data are highly voluminous, and the bandwidth of ground-air communication links is limited and expensive. Hence, the data have to be compressed to an extent where they are suitable for transmission over low-bandwidth links. Several methods have been developed to compress pictorial data, but general-purpose schemes do not take the nature of the data into account and hence do not yield high compression ratios. This thesis develops a scheme for extreme compression of weather radar data that does not significantly degrade the meteorological information contained in these data. The method is based on contour encoding. It approximates a contour by a set of systematically chosen ‘control points’ that preserve its fine structure up to a certain level. The contours may be obtained using a thresholding process based on NWS or custom reflectivity levels. This process may result in region and hole contours, enclosing ‘high’ or ‘low’ areas, which may be nested. A tag bit is used to label region and hole contours. The control point extraction method first obtains a smoothed reference contour by averaging the original contour. The points on the original contour with maximum deviation from the smoothed contour between the crossings of these contours are then identified and designated as control points. Additional control points are added midway between a control point and the crossing points on either side of it if the length of the segment between the crossing points exceeds a certain length. The control points, referenced with respect to the top-left corner of each contour for compact quantification, are transmitted to the receiving end. The contour is retrieved from the control points at the receiving end using spline interpolation. The region and hole contours are identified using the tag bit. The pixels between the region and hole contours at a given threshold level are filled using the corresponding color. This method is repeated until all the contours for a given threshold level are exhausted, and the process is carried out for all other thresholds, resulting in a composite picture of the reconstructed field. Extensive studies have been conducted using metrics such as compression ratio, fidelity of reconstruction and visual perception. In particular, the effects of the smoothing factor, the degree of spline interpolation and the choice of thresholds are studied. It is shown that a smoothing percentage of about 10% is optimal for most data, and that degree-2 spline interpolation is best suited for smooth contour reconstruction. Augmenting NWS thresholds improves visual perception, but at the expense of a decrease in the compression ratio. Two enhancements to the basic method are proposed: adjustments to the control points to achieve better reconstruction, and bit manipulations on the control points to obtain higher compression. The spline interpolation inherently tends to move the reconstructed contour away from the control points; this is somewhat compensated by stretching the control points away from the smoothed reference contour. The amount and direction of stretch are optimized with respect to actual data fields to yield better reconstruction. In the bit manipulation study, the effects of discarding the least significant bits of the control point addresses are analyzed in detail. Simple bit truncation introduces a bias in the contour description and reconstruction, which is removed to a great extent by employing a bias compensation mechanism. The results obtained are compared with other methods devised for encoding weather radar contours.
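A rough sketch of the control-point selection step described in this abstract: the contour is smoothed by a moving average, the crossings of the original and smoothed curves are located, and the point of maximum deviation between consecutive crossings becomes a control point. Working on a single 1-D coordinate sequence with a fixed window is a simplifying assumption; the thesis operates on 2-D radar contours and adds midway points and bias compensation beyond this step.

```python
# Rough sketch of control-point selection for contour compression: smooth the
# contour with a moving average, find where the original and smoothed curves
# cross, and keep the point of maximum deviation between consecutive crossings.
# The 1-D coordinate sequence, window size and sample data are assumptions.
from typing import List

def moving_average(values: List[float], window: int) -> List[float]:
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def control_points(contour: List[float], window: int = 5) -> List[int]:
    """Indices of the maximum-deviation point in each span between crossings
    of the original contour and its smoothed reference."""
    smooth = moving_average(contour, window)
    diff = [c - s for c, s in zip(contour, smooth)]
    points, start = [], 0
    for i in range(1, len(diff)):
        if diff[i - 1] * diff[i] < 0:  # sign change marks a crossing
            points.append(max(range(start, i), key=lambda k: abs(diff[k])))
            start = i
    points.append(max(range(start, len(diff)), key=lambda k: abs(diff[k])))
    return points

if __name__ == "__main__":
    contour = [0.0, 1.0, 3.0, 2.0, 0.0, -2.0, -3.0, -1.0, 0.0, 2.0, 4.0, 3.0, 1.0]
    print(control_points(contour))
```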
128

Energy-efficient digital design of reliable, low-throughput wireless biomedical systems

Tolbert, Jeremy Reynard 24 August 2012 (has links)
The main objective of this research is to improve the energy efficiency of low-throughput wireless biomedical systems by employing digital design techniques. The power consumed in conventional wireless EEG (biomedical) systems is dominated by the digital microcontroller and the radio frequency (RF) transceiver. To reduce the power associated with the digital processor, data compression can be used to reduce the volume of data transmitted. An adaptive data compression algorithm is proposed to ensure accurate representations of critical epileptic signals while also conserving overall power. Further power reductions are achieved by designing a custom baseband processor for data compression. A functional system has been hardware-verified and ASIC-optimized, reducing power by over 9X compared to existing methods. The optimized processor can operate at 32 MHz with a near-threshold supply of 0.5 V in a conventional 45 nm technology. When attempting to reach high frequencies in the near-threshold regime, the probability of timing violations increases, reducing the robustness of the system. To further optimize the implementation, a low-voltage clock tree design is investigated to improve the reliability of the digital processor. By implementing the proposed clock tree design methodology, the digital processor improves its robustness (by reducing the probability of timing violations) while reducing overall power by more than 5 percent. Future work suggests examining new architectures for low-throughput processing and investigating the proposed system's potential for a multi-channel EEG implementation.
129

Novas abordagens para compressão de documentos XML / New approaches for compression of XML documents

Teixeira, Márlon Amaro Coelho 19 August 2018 (has links)
Advisor: Leonardo de Souza Mendes / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Made available in DSpace on 2018-08-19T13:11:21Z (GMT). Previous issue date: 2011 / Abstract: Currently, some of the factors that determine the success or failure of corporations are tied to the speed and efficiency of their decision-making. For these requirements to be met, integrating legacy computational systems with new ones is of fundamental importance, creating the need for old and new technologies to interoperate. XML emerges as a solution to this problem: a self-descriptive language, independent of technology and platform, which is becoming a standard for communication between heterogeneous systems. Because it is self-descriptive, XML is redundant, which generates more information to be transferred and stored and demands more resources from computational systems. This work presents new compression approaches specific to XML, with the goal of reducing document size and thereby lessening the impact on network, storage and processing resources. Two new approaches are presented, together with the test cases that evaluate them, considering compression ratio, compression time and the methods' tolerance to low memory availability. The results are compared with the XML compression methods that stand out in the literature and show that XML document compressors can considerably reduce the performance impact created by the language. / Master's / Telecommunications and Telematics / Master in Electrical Engineering
130

Video Compression Through Spatial Frequency Based Motion Estimation And Compensation

Menezes, Vinod 02 1900 (has links) (PDF)
No description available.
