111 |
Progressive and Random Accessible Mesh Compression / Compression Progressive et avec Accès Aléatoire de Maillages
Maglo, Adrien, Enam, 10 July 2013
Previous work on progressive mesh compression has focused on triangle meshes, but meshes containing other types of faces are also in common use. We therefore propose a new progressive mesh compression method that can efficiently compress meshes with arbitrary face degrees; its compression performance is competitive with approaches dedicated to progressive triangle mesh compression. Progressive mesh compression is linked to mesh decimation, because both applications generate levels of detail. Consequently, we propose a new, simple volume metric to drive polygon mesh decimation, and we apply this metric to both the progressive compression and the simplification of polygon meshes. We then show that the features offered by progressive mesh compression algorithms can be exploited for 3D adaptation by proposing a new framework for remote scientific visualization. Progressive random-accessible mesh compression schemes can better adapt 3D mesh data to various constraints by taking regions of interest into account. We therefore propose two new progressive random-accessible algorithms. The first is based on an initial segmentation of the input model; each generated cluster is compressed independently with a progressive algorithm. The second is based on a hierarchical grouping of vertices obtained by decimation. The advantage of this second method is that it offers fine-grained random accessibility and generates one-piece decompressed meshes with smooth transitions between parts decompressed at low and high levels of detail. Experimental results demonstrate the compression and adaptation efficiency of both approaches.
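The abstract does not spell out the volume metric itself; purely as an illustration of volume-driven decimation, the following Python/NumPy sketch (all names hypothetical, not the thesis's actual metric) prices the collapse of a vertex by the volume swept by its incident triangles:

import numpy as np

def tet_volume(a, b, c, d):
    # Unsigned volume of the tetrahedron spanned by points a, b, c, d.
    return abs(np.dot(np.cross(b - a, c - a), d - a)) / 6.0

def collapse_cost(vertices, incident_faces, v, target):
    # Moving vertex v to `target` sweeps, for each incident triangle
    # (v, p, q), the tetrahedron (v, p, q, target); the total swept
    # volume serves as a simple geometric-fidelity cost. Polygonal
    # faces would first be triangulated (not shown).
    cost = 0.0
    for face in incident_faces:
        p, q = [vertices[i] for i in face if i != v]
        cost += tet_volume(vertices[v], p, q, target)
    return cost

Collapses performed in increasing order of such a cost yield the levels of detail that progressive encoders transmit.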
|
112 |
Compressão de eletrocardiogramas usando wavelets / Compression of electrocardiograms using wavelets
Agulhari, Cristiano Marcos, 02 November 2009
Advisor: Ivanil Sebastião Bonatti / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: The main contribution of this thesis is the proposition of two electrocardiogram (ECG) compression methods. The first method, called Run Length Encoding Adaptativo (RLEA), is based on wavelet transforms: a wavelet function fitted to the signal to be compressed is obtained by solving an optimization problem. The optimization problem becomes unconstrained through the parametrization of the scaling-filter coefficients, which uniquely define a wavelet function. After the optimization problem is solved, the wavelet decomposition is applied to the signal and the most significant representation coefficients are retained, with the number of retained coefficients determined so as to satisfy a pre-specified distortion measure. The retained coefficients are then quantized and compressed, as is the bitmap that records their positions. The quantization is adaptive, using different numbers of bits for the different decomposition subspaces considered. Both the values of the retained coefficients and the bitmap are encoded using a modified version of the Run Length Encoding technique.
The second method proposed in this thesis, called Zero Padding Singular Values Decomposition (ZPSVD), consists of detecting the beat pulses of the ECG, equalizing their lengths by inserting zeros (zero padding), and finally applying the SVD to obtain both a basis and the representation coefficients of the beat pulses. Some components of the basis are retained and then compressed using the same procedures applied to the ECG decomposition coefficients in the RLEA method, while the projection coefficients of the beat pulses onto this basis are quantized with an adaptive quantization procedure. Both proposed compression methods are compared with other techniques from the literature by means of numerical experiments.
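As a rough sketch of the retain-the-most-significant-coefficients stage (using PyWavelets; the db4 wavelet, the PRD distortion measure, and the threshold search below are illustrative assumptions, not the thesis's optimized wavelet or exact criterion):

import numpy as np
import pywt

def retain_coefficients(signal, wavelet="db4", level=5, target_prd=2.0):
    # signal: 1-D NumPy float array. Decompose, then keep the largest
    # coefficients until the percent RMS difference (PRD) of the
    # reconstruction meets target_prd.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)
    mags = np.sort(np.abs(flat))                  # ascending magnitudes
    # For an orthogonal wavelet, the energy of the discarded coefficients
    # equals the reconstruction error energy (Parseval's relation).
    prd = 100.0 * np.sqrt(np.cumsum(mags ** 2) / np.sum(signal ** 2))
    n_drop = np.searchsorted(prd, target_prd, side="right")
    threshold = mags[n_drop - 1] if n_drop else -1.0
    bitmap = np.abs(flat) > threshold
    # flat[bitmap] and bitmap would then be adaptively quantized and
    # run-length encoded, as the abstract describes.
    return flat[bitmap], bitmap, slices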
|
113 |
Pseudo-random access compressed archive for security log data
Radley, Johannes Jurgens, January 2015
We are surrounded by an increasing number of devices and applications that produce huge quantities of machine-generated data. Almost all of this machine data contains some element of security information that can be used to discover, monitor, and investigate security events. This work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer usable by indexing methods. The research also evaluates the compression performance penalties incurred by this storage system, including a decreased compression ratio as well as increased compression and decompression times.
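A minimal sketch of the storage idea (illustrative only: zlib blocks holding a fixed number of entries, with the entry identifier encoding the block and the position inside it; the actual system's format and parameters may differ):

import zlib

class BlockedLogArchive:
    # Entries are buffered and compressed in fixed-size blocks; a random
    # access decompresses a single block rather than the whole archive.
    # Assumes individual entries contain no newline characters.
    def __init__(self, entries_per_block=1000):
        self.entries_per_block = entries_per_block
        self.blocks, self.pending = [], []

    def append(self, entry):
        entry_id = len(self.blocks) * self.entries_per_block + len(self.pending)
        self.pending.append(entry)
        if len(self.pending) == self.entries_per_block:
            blob = "\n".join(self.pending).encode("utf-8")
            self.blocks.append(zlib.compress(blob))
            self.pending = []
        return entry_id          # the pointer used by external indexes

    def get(self, entry_id):
        block_no, offset = divmod(entry_id, self.entries_per_block)
        if block_no == len(self.blocks):      # still in the write buffer
            return self.pending[offset]
        text = zlib.decompress(self.blocks[block_no]).decode("utf-8")
        return text.split("\n")[offset]

Smaller blocks give faster lookups but a worse compression ratio, which is exactly the penalty the work quantifies.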
|
114 |
Fractal application in data compression / Uplatnění fraktálů v kompresi dat
Dušák, Petr, January 2015
The mission of the Technology Transfer Programme Office is to increase the impact of technologies developed by the European Space Agency by transferring them to society. Method and Apparatus for Compressing Time Series is a patented compression algorithm designed for efficiency, as it is intended to run on deep-space probes and satellites. The algorithm is inspired by a method for fractal terrain generation, namely the midpoint displacement algorithm. This work introduces fractals and their applications, and modifies the patented algorithm's displacement mechanism in order to achieve greater compression. The modified algorithm is capable of reducing data by up to 25% compared to the patented algorithm, although the modification makes the algorithm less efficient. In a large-scale test performed on Rosetta spacecraft telemetry, the modified algorithm achieved around 5% higher compression.
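A toy sketch of the underlying idea, midpoint-displacement residual coding of a time series (the dead-band threshold and recursion are illustrative; this is neither the patented algorithm nor the thesis's modification):

def encode(series, threshold=0.5):
    # Store the segment endpoints plus, recursively, each midpoint's
    # deviation from the linear interpolation of its segment ends;
    # small deviations are zeroed (dead-band), which is where the
    # data reduction comes from.
    residuals = {}

    def recurse(lo, hi):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        predicted = (series[lo] + series[hi]) / 2.0
        r = series[mid] - predicted
        residuals[mid] = 0.0 if abs(r) < threshold else r
        recurse(lo, mid)
        recurse(mid, hi)

    recurse(0, len(series) - 1)
    # A real codec would predict from reconstructed values so pruning
    # errors do not propagate, then entropy-code the sparse residuals.
    return series[0], series[-1], residuals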
|
115 |
Adaptive compression coding
Nasiopoulos, Panagiotis, January 1988
An adaptive image compression coding technique, ACC, is presented. This algorithm is shown to preserve edges and to give better-quality decompressed pictures and better compression ratios than Absolute Moment Block Truncation Coding (AMBTC). Lookup tables are used to achieve better compression rates without affecting the visual quality of the reconstructed image. Regions of approximately uniform intensity are detected using the intensity range and approximated by their average, which further reduces the compressed data rate. A method for preserving edges is introduced; as more detail is preserved around edges, the pictorial results improve dramatically. The ragged appearance of edges in AMBTC is reduced or eliminated, yielding images far superior to those of AMBTC.
For most images, ACC yields a smaller root mean square error than AMBTC. Decompression time is comparable to that of AMBTC for low threshold values and becomes significantly lower as the compression rate decreases. An adaptive filter is introduced that helps recover lost texture at very low compression rates (0.8 to 0.6 bits per pixel, depending on the degree of texture in the image). The algorithm is easy to implement, since no special hardware is needed.
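For reference, a compact sketch of plain AMBTC on one block, together with the range test for uniform regions described above (the threshold value is an illustrative assumption):

import numpy as np

def ambtc_block(block, uniform_range=4):
    # Absolute Moment Block Truncation Coding of one small pixel block.
    block = block.astype(float)
    if block.max() - block.min() <= uniform_range:
        return {"uniform": True, "mean": block.mean()}  # average only
    mean = block.mean()
    mask = block >= mean                  # one bit per pixel
    return {"uniform": False, "mask": mask,
            "hi": block[mask].mean(),     # mean of pixels above the mean
            "lo": block[~mask].mean()}    # mean of pixels below the mean

def decode_block(code, shape=(4, 4)):
    if code["uniform"]:
        return np.full(shape, code["mean"])
    return np.where(code["mask"], code["hi"], code["lo"])

ACC's edge preservation and lookup tables refine this baseline; the sketch shows only the AMBTC quantizer it improves upon.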
|
116 |
Studies on probabilistic tensor subspace learning
Zhou, Yang, 04 January 2019
Most real-world data, such as images and videos, are naturally organized as tensors and often have high dimensionality. Tensor subspace learning is a fundamental problem that aims at finding low-dimensional representations of tensors while preserving their intrinsic characteristics. By dealing with tensors in the learned subspace, subsequent tasks such as clustering, classification, visualization, and interpretation can be greatly facilitated. This thesis studies the tensor subspace learning problem from a generative perspective and proposes four probabilistic methods that generalize the ideas of classical subspace learning techniques for tensor analysis.
Probabilistic Rank-One Tensor Analysis (PROTA) generalizes probabilistic principal component analysis. It is flexible in capturing data characteristics and avoids rotational ambiguity. For robustness against overfitting, concurrent regularizations are further proposed to concurrently and coherently penalize the whole subspace, so that unnecessary scale restrictions can be relaxed in regularizing PROTA.
Probabilistic Rank-One Discriminant Analysis (PRODA) is a bilinear generalization of probabilistic linear discriminant analysis. It learns a discriminative subspace by representing each observation as a linear combination of collective and individual rank-one matrices. This provides PRODA with both the expressiveness to capture discriminative features and non-discriminative noise, and the capability to exploit the (2D) tensor structures.
Bilinear Probabilistic Canonical Correlation Analysis (BPCCA) generalizes probabilistic canonical correlation analysis to learning correlations between two sets of matrices. It is built on a hybrid Tucker model in which the two-view matrices are combined in two stages, via matrix-based and vector-based concatenations respectively. This enables BPCCA to capture two-view correlations without breaking the matrix structures.
Bayesian Low-Tubal-Rank Tensor Factorization (BTRTF) is a fully Bayesian treatment of robust principal component analysis for recovering tensors corrupted with gross outliers. It is based on the recently proposed tensor-SVD model and has more expressive modeling power in characterizing tensors with a preferred orientation, such as images and videos. A novel sparsity-inducing prior is also proposed to provide BTRTF with automatic determination of the tensor rank (subspace dimensionality).
Comprehensive validations and evaluations are carried out on both synthetic and real-world datasets, together with empirical studies on parameter sensitivities and convergence properties. Experimental results show that the proposed methods achieve the best overall performance in applications such as face recognition, photograph-sketch matching, and background modeling.
Keywords: tensor subspace learning, probabilistic models, Bayesian inference, tensor decomposition.
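For background, here is the closed-form maximum-likelihood solution of classical probabilistic PCA (Tipping and Bishop) that PROTA generalizes to rank-one tensor factors; this is textbook material, not the thesis's algorithm:

import numpy as np

def ppca_ml(X, q):
    # X: (n_samples, d) data matrix; q: latent dimensionality.
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / X.shape[0]              # sample covariance
    eigval, eigvec = np.linalg.eigh(S)      # ascending eigenvalues
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
    sigma2 = eigval[q:].mean()              # ML noise variance
    W = eigvec[:, :q] * np.sqrt(eigval[:q] - sigma2)
    return W, sigma2                        # model: cov(x) = W W^T + sigma2 I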
|
117 |
Přenos komprimovaného EKG signálu po síti Ethernet / ECG Signal Transmission via Ethernet
Bayerová, Zuzana, January 2012
This thesis describes ECG signal compression methods designed to prepare the data for transmission over communication channels. It contains an introduction to Ethernet and an explanation of communication in the network; the transport protocols TCP and UDP are discussed in more detail. In the practical part of the thesis, two separate applications were created. The first application, on the sender's computer, opens a text file containing the ECG signal. The loaded ECG signal is filtered by a cascade of filters to eliminate interference, and the resulting signal is displayed. The application also detects R waves and calculates RR interval lengths and heart rate, and it can compress the ECG signal. The ECG signal is sent over the Ethernet network via the UDP protocol as individual samples. The application on the recipient's computer receives the signal samples from the network and reconstructs the received compressed data. The resulting ECG signal is displayed; R waves are again detected, and the RR interval lengths and sampling frequency are calculated.
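A minimal sketch of per-sample UDP transmission (the port, packing format, and sequence numbering are illustrative assumptions; the thesis's applications were not necessarily written in Python):

import socket
import struct

HOST, PORT = "127.0.0.1", 5005

def send_samples(samples):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, value in enumerate(samples):
        # A sequence number lets the receiver detect lost or reordered
        # datagrams, since UDP gives no delivery guarantees.
        sock.sendto(struct.pack("!If", seq, value), (HOST, PORT))

def receive_samples(n):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    received = {}
    while len(received) < n:
        data, _ = sock.recvfrom(8)           # 4-byte seq + 4-byte float
        seq, value = struct.unpack("!If", data)
        received[seq] = value
    return [received[i] for i in sorted(received)]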
|
118 |
Data Compression Using a Multi-residue System (MRS)
Melaedavattil Jaganathan, Jyothy, 08 1900
This work presents a novel technique for data compression based on multi-residue number systems. The basic theorem is that an under-determined system of congruences can be solved to accomplish data compression for a signal whose information content varies continuously and whose peak-to-peak amplitude is bounded by the product of relatively prime moduli. This thesis investigates this property and presents quantitative results along with MATLAB code. Chapter 1 is introductory, and Chapter 2 treats the basic theorem in more detail. Chapter 3 states the assumptions made, and Chapter 4 shows alternative solutions to the Chinese remainder theorem. Chapter 5 explains the experiments in detail, whose results are presented in Chapter 6. Chapter 7 concludes with a summary and suggestions for future work.
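The recoverability half of the theorem is the Chinese remainder theorem itself, sketched below with illustrative moduli (the compression in the thesis comes from dropping residues and exploiting signal continuity, which this sketch does not implement):

from math import prod

MODULI = [7, 11, 13, 15]          # pairwise coprime; product = 15015

def to_residues(x):
    return [x % m for m in MODULI]

def from_residues(residues):
    # Standard CRT reconstruction: unique for 0 <= x < prod(MODULI).
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return x % M

assert from_residues(to_residues(1234)) == 1234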
|
119 |
The use of context in text compression
Reich, Edwina Helen, January 1984
No description available.
|
120 |
Tree encoding of speech signals at low bit rates
Chu, Chung Cheung, January 1986
No description available.
|