
[en] PERMUTATION CODES FOR DATA COMPRESSION AND MODULATION / [pt] CÓDIGOS DE PERMUTAÇÃO PARA COMPRESSÃO DE DADOS E MODULAÇÃO

DANILO SILVA 01 April 2005 (has links)
[en] Permutation codes are an interesting mathematical tool which can be used to devise both lossy compression schemes and modulation schemes for digital transmission systems. Vector permutation codes, a more powerful extension of scalar permutation codes, were recently introduced for the purpose of source compression. This work presents new contributions to this theory and also introduces vector permutation codes for the purpose of modulation. For source compression, it is proved that vector permutation codes (VPCs) have asymptotic performance equal to that of an entropy-constrained vector quantizer (ECVQ). Based on this development, an efficient method is proposed for VPC design. Experimental results for Gaussian and uniform sources show that the codes designed by this method indeed perform well: VPCs are exhibited whose performance is similar to that of ECVQ and superior to that of their scalar counterparts. In the context of digital transmission, it is verified that vector permutation modulation (VPM) also outperforms scalar permutation modulation. Expressions are developed for the optimal design of VPM, and a method is presented for maximum-likelihood detection of VPM in AWGN and fading channels.
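The rank-ordering step at the heart of scalar permutation coding can be sketched as follows: the nearest permutation of a fixed initial codeword to a source vector is the one whose components follow the source's rank order. This is a minimal illustration of the principle, not the thesis's VPC design method; the codeword `mu` and all names are assumptions for the example.

```python
def permutation_encode(x, mu):
    """Reproduce x with the nearest permutation of the initial codeword mu.

    x  : source vector (list of floats)
    mu : initial codeword sorted in nondecreasing order, len(mu) == len(x)
    """
    n = len(x)
    # order[r] = index of the component of x with rank r (0 = smallest)
    order = sorted(range(n), key=lambda i: x[i])
    y = [0.0] * n
    for rank, i in enumerate(order):
        y[i] = mu[rank]  # smallest component of x receives the smallest mu value
    return y

print(permutation_encode([0.9, -1.2, 0.1, 2.3], [-1.5, -0.5, 0.5, 1.5]))
# -> [0.5, -1.5, -0.5, 1.5]
```

Since every codeword is a permutation of `mu`, only the rank order (one of n! patterns) needs to be transmitted.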

Power System Data Compression For Archiving

Das, Sarasij 11 1900 (has links)
Advances in electronics, computer and information technology are fueling major changes in power system instrumentation. More and more microprocessor-based digital instruments are replacing older types of meters. Extensive deployment of digital instruments is generating vast quantities of data, which is creating information pressure in utilities. Legacy SCADA-based data management systems do not support the management of such huge volumes of data; as a result, utilities either have to delete the metered information or store it on compact discs and tape drives, which are unreliable. At the same time, the traditionally integrated power industry is going through a deregulation process. Market principles are forcing competition between power utilities, which in turn demands a higher focus on profit and competitive edge. To optimize system operation and planning, utilities need better decision-making processes, which depend on the availability of reliable system information. It is becoming clear to utilities that information is a vital asset, so they are now keen to store and use as much information as they can. Existing SCADA-based systems, however, do not allow storing more than a few months of data.

This dissertation therefore assesses the effectiveness of compression algorithms in compressing real-time operational data. Both lossy and lossless compression schemes are considered. For lossless compression, two schemes are proposed: Scheme 1 is based on arithmetic coding and Scheme 2 on run-length coding. Both schemes have two stages, the first of which is common to both: consecutive data elements are decorrelated using linear predictors. The output of the linear predictor, called the residual sequence, is coded by arithmetic coding in Scheme 1 and by run-length coding in Scheme 2. Three types of arithmetic coding are considered in this study: static, decrement and adaptive. Static and decrement coding are two-pass methods, where the first pass collects symbol statistics and the second codes the symbols; adaptive coding uses only one pass. With the arithmetic-coding-based schemes, the average compression ratio achieved is around 30 for voltage data, 9 for frequency data, 14 for VAr generation data, 11 for MW generation data and 14 for line flow data. In Scheme 2, Golomb-Rice coding is used to compress the run lengths; the average compression ratio achieved is around 25 for voltage data, 7 for frequency data, 10 for VAr generation data, 8 for MW generation data and 9 for line flow data. The arithmetic-coding-based method mainly aims at a high compression ratio; the Golomb-Rice-based method does not compress as well as arithmetic coding, but it is computationally much simpler.

For lossy compression, a principal component analysis (PCA) based method is used: from the data set, a few uncorrelated variables are derived and stored. The compression ratio of the PCA-based scheme is around 105-115 for voltage data, 55-58 for VAr generation data, 21-23 for MW generation data and 27-29 for line flow data, which shows that voltage is more amenable to compression than the other parameters. Data for five system parameters - voltage, line flow, frequency, MW generation and MVAr generation - of the Southern regional grid of India have been considered for the study. One aim of this thesis is to argue that collected power system data can be put to other uses as well; in particular, it is shown that even mining the small amount of practical data collected from SRLDC reveals some interesting system behavior patterns. A noteworthy feature of the thesis is that all studies have been carried out on data from practical systems. It is believed that the thesis opens up new questions for further investigation.
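The two-stage lossless pipeline described above (linear prediction to decorrelate consecutive samples, then entropy coding of the residual sequence) can be sketched with a first-order predictor and Golomb-Rice codes. This is an illustrative reconstruction under assumed parameters, not the thesis's exact Scheme 2; all names and the choice of Rice parameter `k` are assumptions.

```python
def residuals(samples):
    """First-order linear predictor: residual[i] = x[i] - x[i-1]."""
    return [b - a for a, b in zip(samples, samples[1:])]

def zigzag(n):
    """Map signed residuals to nonnegative integers: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * n if n >= 0 else -2 * n - 1

def rice_encode(n, k):
    """Golomb-Rice codeword: unary-coded quotient, '0' stop bit, k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "b").zfill(k)

# Encode a slowly varying measurement series (first sample sent verbatim);
# small residuals yield short codewords.
samples = [100, 101, 101, 103, 102]
bits = "".join(rice_encode(zigzag(r), k=1) for r in residuals(samples))
print(bits)  # -> 10000110001
```

Decorrelation is what makes the entropy stage effective: the raw samples are large and similar, while the residuals cluster near zero, exactly the distribution Golomb-Rice codes compress well.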

Komprese a hodnocení kvality signálů EKG / Compression and Quality Assessment of ECG Signals

Němcová, Andrea January 2021 (has links)
Lossy compression of ECG signals is a useful and still-developing field, with new compression algorithms appearing continually. The field, however, lacks standards for assessing signal quality after compression. There are thus many different compression algorithms that either cannot be objectively compared at all, or only roughly. Moreover, the compression literature does not describe whether, or how, pathologies affect the performance of compression algorithms. This dissertation provides an overview of all the methods found for assessing the quality of ECG signals after compression. In addition, 10 new methods were created. All of these methods were analyzed, and based on the results, 12 methods suitable for assessing ECG signal quality after compression are recommended. A new compression algorithm, Single-Cycle Fractal-Based (SCyF), is also introduced. The SCyF algorithm is inspired by a fractal-based method and uses a single cycle of the ECG signal as the domain. SCyF was tested on four different databases, and the quality of the compressed signals was evaluated with the 12 recommended methods. The results were compared with the very popular wavelet-based compression algorithm using Set Partitioning in Hierarchical Trees (SPIHT). The testing procedure also serves as an example of what a standard for evaluating the performance of compression algorithms should look like. Furthermore, it was statistically proven that there is a difference between compressing physiological and pathological signals: pathological signals were compressed with lower efficiency and quality than physiological ones.
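One widely used objective measure of ECG fidelity after lossy compression is the percentage root-mean-square difference (PRD); the specific 12 methods recommended by the thesis are not listed here, so this is shown only as a representative example of such a quality metric.

```python
import math

def prd(original, reconstructed):
    """Percentage root-mean-square difference between an original ECG signal
    and its reconstruction after lossy compression (lower = better fidelity)."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

print(prd([0.0, 2.0], [0.0, 1.0]))  # -> 50.0
```

In practice PRD is reported together with the compression ratio, since a lossy coder can always trade one for the other.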

Komprese obrazu pomocí vlnkové transformace / Image Compression Using the Wavelet Transform

Kaše, David January 2015 (has links)
This thesis deals with image compression using the wavelet, contourlet and shearlet transforms. It starts with a quick look at the image compression problem and quality measurement. Next, the basic concepts of wavelets, multiresolution analysis and the scaling function are presented, followed by a detailed look at each transform. The representative coefficient-coding algorithms are EZW, SPIHT and, marginally, EBCOT. The second part describes the design and implementation of the constructed library. The last part compares the results of the transforms with the JPEG 2000 format. The comparison determined the types of images for which the implemented contourlet and shearlet transforms were more effective than the wavelet transform; the JPEG 2000 format was not surpassed.
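As a sketch of the multiresolution decomposition underlying such coders, one level of the 2D Haar transform splits an image into four subbands; the LL band is a half-resolution approximation that coders such as EZW and SPIHT decompose recursively, while the detail bands of natural images are mostly near-zero and compress well. This is a minimal illustration with the simplest possible filter; JPEG 2000 and the thesis's library use longer biorthogonal wavelet filters.

```python
def haar2d_level(img):
    """One level of the 2D Haar wavelet transform (averages/differences,
    scaled by 1/2). Returns (LL, LH, HL, HH) subbands for an even-sized
    grayscale image given as a list of rows."""
    def rows_pass(m):
        lo, hi = [], []
        for row in m:
            lo.append([(row[2*i] + row[2*i+1]) / 2 for i in range(len(row) // 2)])
            hi.append([(row[2*i] - row[2*i+1]) / 2 for i in range(len(row) // 2)])
        return lo, hi

    def transpose(m):
        return [list(c) for c in zip(*m)]

    L, H = rows_pass(img)                     # horizontal pass
    LL, LH = (transpose(s) for s in rows_pass(transpose(L)))  # vertical pass
    HL, HH = (transpose(s) for s in rows_pass(transpose(H)))
    return LL, LH, HL, HH

img = [[8] * 4 for _ in range(4)]  # a flat image: all detail bands vanish
LL, LH, HL, HH = haar2d_level(img)
print(LL)  # -> [[8.0, 8.0], [8.0, 8.0]]
```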

Komprese signálů EKG nasnímaných pomocí mobilního zařízení / Compression of ECG signals recorded using mobile ECG device

Had, Filip January 2017 (has links)
Signal compression is a necessary part of ECG monitoring because of the relatively large amount of data, which must be transmitted, primarily wirelessly, for analysis. Because of the wireless transmission, it is necessary to minimize the amount of data as much as possible, using lossless or lossy compression algorithms. This work describes the SPIHT algorithm and a newly created experimental method based on PNG, together with their testing. This master's thesis also includes a bank of ECG signals with accelerometer data sensed in parallel. In the last part, a modification of the SPIHT algorithm that uses the accelerometer data is described and realized.

PCA and JPEG2000-based Lossy Compression for Hyperspectral Imagery

Zhu, Wei 30 April 2011 (has links)
This dissertation develops several new algorithms to solve existing problems in practical application of the previously developed PCA+JPEG2000, which has shown superior rate-distortion performance in hyperspectral image compression. In addition, a new scheme is proposed to facilitate multi-temporal hyperspectral image compression. Specifically, the uniqueness of each algorithm is as follows.
1. An empirical piecewise linear equation is proposed to estimate the optimal number of major principal components (PCs) used in SubPCA+JPEG2000 for AVIRIS data. Sensor-specific equations are presented with excellent fitting performance for AVIRIS, HYDICE, and HyMap data. As a conclusion, a general guideline is provided for finding sensor-specific piecewise linear equations.
2. An anomaly-removal-based hyperspectral image compression algorithm is proposed. It preserves anomalous pixels in a lossless manner, and yields the same or even improved rate-distortion performance. It is particularly useful to SubPCA+JPEG2000 when compressing data with anomalies that may reside in minor PCs.
3. A segmented PCA-based PCA+JPEG2000 compression algorithm is developed, which spectrally partitions an image based on its spectral correlation coefficients. This compression scheme greatly improves the rate-distortion performance of PCA+JPEG2000 when the spatial size of the data is relatively smaller than its spectral size, especially at low bitrates. A sensor-specific partition method is also developed for fast processing with suboptimal performance.
4. A joint multi-temporal image compression scheme is proposed. The algorithm preserves change information in a lossless fashion during the compression. It can yield perfect change detection with slightly degraded rate-distortion performance.
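The spectral decorrelation at the core of PCA+JPEG2000 can be sketched as follows: each pixel is treated as a spectral vector, and projecting onto the leading eigenvectors of the band covariance matrix concentrates most of the energy into a few principal-component images, which are then coded with JPEG 2000. Below is a minimal pure-Python sketch that extracts only the first PC via power iteration; real implementations use a full eigendecomposition over hundreds of bands, and all names here are illustrative.

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def leading_pc(cov, iters=200):
    """Dominant eigenvector of a covariance matrix via power iteration."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = mat_vec(cov, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def spectral_pc1(pixels):
    """Project each pixel's spectral vector onto the first principal component."""
    n, bands = len(pixels), len(pixels[0])
    mean = [sum(p[b] for p in pixels) / n for b in range(bands)]
    centered = [[p[b] - mean[b] for b in range(bands)] for p in pixels]
    cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(bands)]
           for i in range(bands)]
    v = leading_pc(cov)
    return [sum(c[b] * v[b] for b in range(bands)) for c in centered]

pixels = [[0.0, 1.0], [2.0, 1.0], [4.0, 1.0]]  # all variance in band 0
scores = spectral_pc1(pixels)
print(scores)  # first-PC image: carries all of the spectral variance
```

The rate-distortion gain comes from spending bits only on the handful of PC images that carry the variance, which is also why choosing the number of retained PCs (point 1 above) matters.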

Ultra High Compression For Weather Radar Reflectivity Data

Makkapati, Vishnu Vardhan 17 November 2006 (has links)
Honeywell Technology Solutions Lab, India / Weather is a major contributing factor in aviation accidents, incidents and delays. Doppler weather radar has emerged as a potent tool to observe weather. Aircraft carry onboard radars, but their range and angular resolution are limited. Networks of ground-based weather radars provide extensive coverage of weather over large geographic regions. It would be helpful if these data could be transmitted to the pilot. However, these data are highly voluminous, and the bandwidth of the ground-air communication links is limited and expensive. Hence, these data have to be compressed to an extent where they are suitable for transmission over low-bandwidth links. Several methods have been developed to compress pictorial data. General-purpose schemes do not take into account the nature of the data and hence do not yield high compression ratios. A scheme for extreme compression of weather radar data is developed in this thesis that does not significantly degrade the meteorological information contained in these data. The method is based on contour encoding. It approximates a contour by a set of systematically chosen 'control points' that preserve its fine structure up to a certain level. The contours may be obtained using a thresholding process based on NWS or custom reflectivity levels. This process may result in region and hole contours, enclosing 'high' or 'low' areas, which may be nested. A tag bit is used to label region and hole contours. The control point extraction method first obtains a smoothed reference contour by averaging the original contour. Then the points on the original contour with maximum deviation from the smoothed contour between the crossings of these contours are identified and designated as control points. Additional control points are added midway between a control point and the crossing points on either side of it if the length of the segment between the crossing points exceeds a certain length.
The control points, referenced with respect to the top-left corner of each contour for compact quantification, are transmitted to the receiving end. The contour is retrieved from the control points at the receiving end using spline interpolation. The region and hole contours are identified using the tag bit. The pixels between the region and hole contours at a given threshold level are filled using the color corresponding to it. This method is repeated until all the contours for a given threshold level are exhausted, and the process is carried out for all other thresholds, thereby resulting in a composite picture of the reconstructed field. Extensive studies have been conducted using metrics such as compression ratio, fidelity of reconstruction and visual perception. In particular, the effects of the smoothing factor, the degree of spline interpolation and the choice of thresholds are studied. It has been shown that a smoothing percentage of about 10% is optimal for most data. A spline interpolation of degree 2 is found to be best suited for smooth contour reconstruction. Augmenting NWS thresholds has resulted in improved visual perception, but at the expense of a decrease in the compression ratio. Two enhancements to the basic method are proposed: adjustments to the control points to achieve better reconstruction, and bit manipulations on the control points to obtain higher compression. The spline interpolation inherently tends to move the reconstructed contour away from the control points. This has been somewhat compensated by stretching the control points away from the smoothed reference contour. The amount and direction of stretch are optimized with respect to actual data fields to yield better reconstruction. In the bit manipulation study, the effects of discarding the least significant bits of the control point addresses are analyzed in detail.
Simple bit truncation introduces a bias in the contour description and reconstruction, which is removed to a great extent by employing a bias compensation mechanism. The results obtained are compared with other methods devised for encoding weather radar contours.
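The control-point idea can be sketched in simplified form: smooth the closed contour with a circular moving average to obtain the reference contour, measure each point's deviation from its smoothed counterpart, and keep the points of locally maximal deviation. This sketch picks local maxima of pointwise deviation rather than maxima between contour crossings, and omits the midpoint-insertion rule, so it illustrates the principle rather than reproducing the thesis's algorithm; all names and the threshold `min_dev` are assumptions.

```python
def smooth(contour, w):
    """Circular moving average of a closed contour given as (x, y) pairs."""
    n = len(contour)
    out = []
    for i in range(n):
        window = [contour[(i + d) % n] for d in range(-w, w + 1)]
        out.append((sum(p[0] for p in window) / len(window),
                    sum(p[1] for p in window) / len(window)))
    return out

def control_points(contour, w=1, min_dev=0.0):
    """Indices of points whose deviation from the smoothed reference contour
    is a local maximum exceeding min_dev."""
    ref = smooth(contour, w)
    dev = [((p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2) ** 0.5
           for p, s in zip(contour, ref)]
    n = len(contour)
    return [i for i in range(n)
            if dev[i] > min_dev
            and dev[i] >= dev[(i - 1) % n]
            and dev[i] >= dev[(i + 1) % n]]

# A mostly rectangular contour with one spike at index 5: only the spike,
# where the original departs most from the smoothed reference, is kept.
contour = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (2, 3), (1, 1), (0, 1)]
print(control_points(contour, w=1, min_dev=1.0))  # -> [5]
```

Only the selected indices (and their coordinates) would be transmitted; the receiver reconstructs the contour from them by spline interpolation.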
