
Compression Algorithm in Mobile Packet Core

Poranki, Lakshmi Nishita January 2020 (has links)
Context: Data compression is used to speed up transmission and to reduce the storage size of transmitted data. It is a ubiquitous technology that almost every communications company relies on, and it falls into two broad categories, lossy and lossless. Ericsson, a telecommunications company that handles millions of users' data, compresses all of this data with the Deflate algorithm. Because of its compression ratio and compression speed, Deflate is not optimal for Ericsson's present use case (compress twice, decompress once). This research seeks the alternative algorithm best suited to that use case, so that it can replace Deflate. Objectives: The objective is to replace Deflate with an algorithm that compresses Serving GPRS Support Node-Mobility Management Entity (SGSN-MME) user data effectively. The main steps toward this goal are: investigating algorithms that fit the SGSN-MME compression patterns, selecting a few alternatives to Deflate, running an experiment with all selected algorithms on an SGSN-MME dataset, comparing the results on compression factors, and, based on that performance, choosing the algorithm to replace Deflate. Methods: A literature review was performed to identify alternatives to Deflate. An experiment was then conducted on data provided by Ericsson AB, Gothenburg, and the performance of each algorithm was evaluated on compression factors such as compression ratio and compression speed.
Results: Analysis of the experimental results shows that Zstandard performs best, with optimal compressed sizes, compression ratio, and compression speed. Conclusions: This research identifies an alternative algorithm that can replace the Deflate algorithm and is well suited to the present use case.
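The compression-ratio and compression-speed factors used to compare the algorithms can be sketched with a small benchmark. This is a minimal illustration, not the thesis's experiment: the payload and function names are invented, Python's zlib stands in for Deflate, and a real comparison would add Zstandard via the third-party `zstandard` package alongside it.

```python
import time
import zlib

def benchmark_deflate(data: bytes, level: int) -> tuple[float, float]:
    """Return (compression ratio, throughput in MB/s) for one zlib/Deflate level."""
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    speed = len(data) / max(elapsed, 1e-9) / 1e6
    assert zlib.decompress(compressed) == data  # lossless round trip
    return ratio, speed

# Highly redundant sample payload standing in for SGSN-MME user data (invented).
sample = b"subscriber-record:" * 4096
for lvl in (1, 6, 9):
    ratio, speed = benchmark_deflate(sample, lvl)
    print(f"level {lvl}: ratio {ratio:.1f}, {speed:.0f} MB/s")
```

The same harness, pointed at `zstandard.ZstdCompressor(level=...).compress`, yields the head-to-head numbers the thesis bases its conclusion on.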

Analyse de Flux de Trames AFDX en Réception et Méthode d’Optimisation Mémoire / AFDX Frame Flow Analysis in Reception and Memory Optimization Method

Baga, Yohan 03 May 2018 (has links)
L’essor des réseaux AFDX comme infrastructure de communication entre les équipements de bord des aéronefs civils motive de nombreux travaux de recherche pour réduire les délais de communication tout en garantissant un haut niveau de déterminisme et de qualité de service. Cette thèse traite de l’effet des accolements de trames sur l’End System de réception, notamment sur le buffer interne afin de garantir une non perte de trames et un dimensionnement mémoire optimal. Une modélisation pire cas du flux de trames est réalisée selon une première méthode pessimiste, basée sur un flux de trames périodiques ; puis une seconde, plus optimiste, basée sur des intervalles de réception et un placement de trames itératif. Une étude probabiliste met en œuvre des distributions gaussiennes pour évaluer les probabilités d’occurrences des pires cas d’accolements et apporte un éclairage qui ouvre une discussion sur la pertinence de ne considérer que la modélisation pire cas pour dimensionner le buffer de réception. Un gain mémoire supplémentaire peut être obtenu par la mise en œuvre de la compression sans perte LZW. / The rise of AFDX networks as a communication infrastructure between on-board equipment of civil aircraft motivates much research into reducing communication delays while guaranteeing a high level of determinism and quality of service. This thesis deals with the effect of back-to-back frame reception on the receiving End System, in particular on its internal buffer, in order to guarantee no frame loss and optimal memory dimensioning. A worst-case model of the frame flow is built first with a pessimistic method based on a periodic frame flow, then with a more optimistic method based on reception intervals and iterative frame placement.
A probabilistic study uses Gaussian distributions to evaluate the occurrence probabilities of the worst back-to-back frame sequences; its results open a discussion on the relevance of sizing the reception buffer on worst-case modeling alone. Additional memory gain can be achieved by applying LZW lossless compression.
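The LZW compression proposed as the final memory-gain step is a standard dictionary coder. A textbook sketch (not the thesis's implementation) looks like this:

```python
def lzw_compress(data: bytes) -> list[int]:
    """Textbook LZW: grow a dictionary of byte sequences, emit integer codes."""
    dictionary = {bytes([i]): i for i in range(256)}
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)  # register the new sequence
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Inverse transform; rebuilds the same dictionary on the fly."""
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:  # the one special case: code was just created by the encoder
            entry = w + w[:1]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)
```

Repetitive AFDX frame payloads are exactly the input on which such a dictionary coder shortens well, which is where the extra buffer-memory gain comes from.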

Perceptually Lossless Coding of Medical Images - From Abstraction to Reality

Wu, David, dwu8@optusnet.com.au January 2007 (has links)
This work explores a novel vision-model-based coding approach to encode medical images at a perceptually lossless quality, within the framework of the JPEG 2000 coding engine. Perceptually lossless encoding offers the best of both worlds, delivering images free of visual distortions while providing significantly greater compression ratio gains over its information-lossless counterparts. This is achieved through a visual pruning function, embedded with an advanced model of the human visual system, that accurately identifies and efficiently removes visually irrelevant or insignificant information. The approach maintains bit-stream compliance with the JPEG 2000 coding framework and is consequently compliant with the Digital Imaging and Communications in Medicine (DICOM) standard. The pruning function is equally applicable to other Discrete Wavelet Transform based image coders, e.g., the Set Partitioning in Hierarchical Trees (SPIHT) coder. Further significant coding gains are exploited through an artificial edge segmentation algorithm and a novel arithmetic pruning algorithm. The coding effectiveness and qualitative consistency of the algorithm are evaluated through a double-blind subjective assessment with 31 medical experts, performed using a novel two-stage forced-choice protocol devised for medical experts, which offers greater robustness and accuracy in measuring subjective responses. The assessment showed that no differences of statistical significance were perceivable between the original images and the images encoded by the proposed coder.
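Threshold-based pruning of transform coefficients, the core idea behind the visual pruning function, can be illustrated with a toy one-dimensional Haar transform. This sketch uses a fixed scalar threshold in place of the thesis's human-visual-system model and involves none of the actual JPEG 2000 machinery:

```python
def haar_forward(signal: list[float]) -> tuple[list[float], list[float]]:
    """One level of the 1D Haar DWT: pairwise averages (approximation) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def prune(detail: list[float], threshold: float) -> list[float]:
    """Zero out detail coefficients below a (here fixed, in the thesis HVS-derived) visibility threshold."""
    return [d if abs(d) >= threshold else 0.0 for d in detail]

def haar_inverse(approx: list[float], detail: list[float]) -> list[float]:
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```

Pruned coefficients compress to almost nothing, while the reconstruction deviates only by sub-threshold amounts — the 1D analogue of removing visually insignificant wavelet coefficients.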

Power System Data Compression For Archiving

Das, Sarasij 11 1900 (has links)
Advances in electronics, computer, and information technology are fueling major changes in power system instrumentation. Microprocessor-based digital instruments are steadily replacing older meters, and their extensive deployment generates vast quantities of data, creating information pressure in utilities. Legacy SCADA-based data management systems cannot manage such volumes, so utilities either delete the metered information or store it on compact discs and tape drives, which are unreliable. At the same time, the traditionally integrated power industry is going through deregulation. Market principles force competition between utilities, which in turn demands a sharper focus on profit and competitive edge. To optimize system operation and planning, utilities need better decision-making processes, which depend on the availability of reliable system information. It is becoming clear to utilities that information is a vital asset, and they are keen to store and use as much of it as they can, yet existing SCADA-based systems cannot store more than a few months of data. This dissertation therefore assesses the effectiveness of compression algorithms on real-time operational data, considering both lossless and lossy schemes. Two lossless schemes are proposed: Scheme 1, based on arithmetic coding, and Scheme 2, based on run-length coding. Both schemes have two stages, the first of which is common: consecutive data elements are decorrelated using linear predictors. The predictor output, called the residual sequence, is coded with arithmetic coding in Scheme 1 and with run-length coding in Scheme 2. Three types of arithmetic coding are considered: static, decrement, and adaptive.
Static and decrement coding are two-pass methods, with the first pass collecting symbol statistics and the second coding the symbols; adaptive coding uses a single pass. The arithmetic-coding-based schemes achieve average compression ratios of about 30 for voltage data, 9 for frequency data, 14 for VAr generation data, 11 for MW generation data, and 14 for line flow data. In Scheme 2, Golomb-Rice coding compresses the run lengths, giving average compression ratios of about 25 for voltage, 7 for frequency, 10 for VAr generation, 8 for MW generation, and 9 for line flow data. The arithmetic-coding-based method aims at a high compression ratio; the Golomb-Rice-based method does not compress as well, but is computationally much simpler. For the lossy method, a principal component analysis (PCA) based compression scheme is used: a few uncorrelated variables are derived from the data set and stored. Its compression ratios are around 105-115 for voltage data, 55-58 for VAr generation, 21-23 for MW generation, and 27-29 for line flow data, showing that voltage is more amenable to compression than the other parameters. Data for five system parameters - voltage, line flow, frequency, MW generation, and MVAr generation - of the Southern regional grid of India are considered for the study. One aim of this thesis is to argue that collected power system data can be put to other uses as well; in particular, mining even the small amount of practical data collected from SRLDC reveals some interesting system behavior patterns.
A noteworthy feature of the thesis is that all studies were carried out on data from practical systems. It is believed that the thesis opens up new questions for further investigation.
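The common first stage of both lossless schemes — linear prediction followed by entropy coding of the residual sequence — and Scheme 2's Golomb-Rice step can be sketched as follows. This is a generic illustration, not the thesis's code: the first-order predictor, the zig-zag sign mapping, and the parameter k are assumptions.

```python
def residuals(samples: list[int]) -> list[int]:
    """First-order linear predictor: predict each sample as the previous one."""
    return [samples[0]] + [samples[i] - samples[i - 1] for i in range(1, len(samples))]

def zigzag(r: int) -> int:
    """Map signed residuals to non-negative integers: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return 2 * r if r >= 0 else -2 * r - 1

def golomb_rice_encode(value: int, k: int) -> str:
    """Golomb-Rice codeword: unary quotient, terminating '1', then k-bit remainder."""
    q, rem = value >> k, value & ((1 << k) - 1)
    return "0" * q + "1" + format(rem, f"0{k}b")
```

Slowly varying quantities such as bus voltage produce residuals near zero, which zig-zag mapping turns into small integers that Golomb-Rice encodes in only a few bits each — the mechanism behind the high voltage-data ratios reported above.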

Χρήση του προτύπου MPEG-4 ALS και διακαναλλική πρόβλεψη για κωδικοποίηση πολυκαναλλικού ηλεκτροκαρδιογραφήματος

Κωνσταντίνου, Ιωάννης 03 July 2009 (has links)
Είναι γεγονός ότι το ηλεκτροκαρδιογράφημα είναι ένα πολύ καλά μελετημένο σήμα. Ειδικά τα τελευταία χρόνια, έχει προταθεί ένας μεγάλος αριθμός αλγορίθμων επεξεργασίας, συμπίεσης, αυτόματης διάγνωσης, φιλτραρίσματος, αποθορυβοποίησης και κωδικοποίησης. Σ’ αυτή τη διπλωματική εργασία, προτείνουμε ένα αποδοτικό αλγόριθμο κωδικοποίησης χωρίς απώλειες για δεδομένα από δωδεκακάναλλο ηλεκτροκαρδιογράφημα. Ο κωδικοποιητής υλοποιεί ένα πολυγραμμικό μοντέλο υψηλής απόδοσης, το οποίο είναι «ειδικευμένο στους ασθενείς», ενδοκαναλικής πρόβλεψη και εφαρμόζει το πρότυπο κωδικοποίησης MPEG-4 ALS για διακαναλική πρόβλεψη και κωδικοποίηση. Τα αποτελέσματα του αλγορίθμου συγκρίθηκαν με τεχνικές κωδικοποίησης εντροπίας χωρίς απώλειες και δείχνουν αύξηση της απόδοσης κωδικοποίησης. / The electrocardiogram (ECG) is one of the most thoroughly studied medical signals, and a large number of ECG processing algorithms have been proposed over the years, covering noise filtering, automated diagnostic interpretation, and coding. In this master's thesis, we propose a robust multi-channel ECG encoder architecture that operates on 12-channel ECG data. The encoder uses highly efficient, patient-specific multilinear models for intra-channel prediction and the MPEG-4 Audio Lossless Coding (ALS) standard for inter-channel prediction and coding. The results show improved coding performance over standard lossless entropy-coding techniques.
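The inter-channel prediction idea — exploiting the correlation between ECG leads so that only small residuals need entropy coding — can be illustrated with a single-coefficient least-squares predictor. This toy sketch is far simpler than the thesis's patient-specific multilinear models or MPEG-4 ALS:

```python
def predict_channel(ref: list[float], target: list[float]) -> tuple[float, list[float]]:
    """Least-squares scalar predictor: model target[i] ~ a * ref[i]; return (a, residual)."""
    num = sum(r * t for r, t in zip(ref, target))
    den = sum(r * r for r in ref) or 1.0  # guard against an all-zero reference lead
    a = num / den
    residual = [t - a * r for r, t in zip(ref, target)]
    return a, residual
```

When two leads are strongly correlated, the residual carries far less energy than the target lead itself, so a lossless entropy coder spends far fewer bits on it.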

Sistema de alto desempenho para compressão sem perdas de imagens mamográficas

Marques, José Raphael Teixeira 30 April 2010 (has links)
The usage of mammographic image databases in digital form and the practice of telemedicine require storing and transmitting large amounts of data. Digitizing the images of a single mammographic exam at an appropriate resolution can take up to 120 MB of disk space, which becomes even more critical given the large number of exams performed daily at a clinic. Efficient data compression techniques are therefore needed to reduce storage and transmission costs. This document describes the development of a high-performance lossless compressor for mammographic images based on the Prediction by Partial Matching (PPM) algorithm, with modules for segmentation, mapping, Gray coding, bit-plane decomposition, and the move-to-front transform. The compressor developed is efficient in both compression ratio and processing time, compressing 27 MB images in about 13 seconds with an average compression ratio of 5.39. / A utilização de bancos de dados de imagens mamográficas em formato digital e as práticas de telemedicina exigem que se armazene e transmita grandes quantidades de dados. A digitalização das quatro imagens de um único exame mamográfico com resolução adequada pode ocupar até 120MB de espaço em disco. Esta quantidade de dados leva a uma situação ainda mais crítica ao considerar-se o grande número de exames diários efetuados rotineiramente em uma clínica. Assim, técnicas eficientes de compressão de dados são necessárias para reduzir os custos relativos ao armazenamento e à transmissão destas imagens.
O presente trabalho descreve o desenvolvimento de um sistema de alto desempenho para compressão sem perdas de imagens mamográficas baseado no algoritmo Prediction by Partial Matching (PPM), em conjunto com módulos para segmentação, mapeamento, codificação com Código Gray, decomposição em planos de bits e transformada move-to-front (MTF). O sistema desenvolvido mostrou-se eficiente tanto no que tange à razão de compressão quanto ao tempo de processamento, comprimindo imagens de 27MB em aproximadamente 13 segundos com razão de compressão média de 5,39.
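Of the listed modules, the move-to-front transform is the easiest to sketch: it converts locally repetitive byte streams into streams of small indices that a PPM or entropy coder handles well. A textbook version, not the thesis's module:

```python
def mtf_encode(data: bytes) -> list[int]:
    """Move-to-front: recently seen byte values get small indices."""
    table = list(range(256))
    out = []
    for b in data:
        idx = table.index(b)
        out.append(idx)
        table.pop(idx)
        table.insert(0, b)  # promote the byte to the front
    return out

def mtf_decode(indices: list[int]) -> bytes:
    """Inverse transform: maintain the same table, indexed instead of searched."""
    table = list(range(256))
    out = []
    for idx in indices:
        b = table.pop(idx)
        out.append(b)
        table.insert(0, b)
    return bytes(out)
```

Runs of identical bytes, common in bit-plane-decomposed mammographic backgrounds, become runs of zeros, which the downstream coder compresses very tightly.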

Akcelerace kompresního algoritmu LZ4 v FPGA / Acceleration of LZ4 Compression Algorithm in FPGA

Marton, Dominik January 2017 (has links)
This project describes the implementation of the LZ4 compression algorithm in a C/C++-like language that can be used to generate VHDL programs for FPGAs embedded in accelerated network interface controllers (NICs). Based on the algorithm specification, software versions of the LZ4 compressor and decompressor are implemented and then transformed into a synthesizable language, from which fully functional VHDL code for both components is generated. The execution time and compression ratio of all implementations are then compared. The project also demonstrates the usability and influence of high-level synthesis, applying the high-level approach to design familiar from common programming languages to hardware applications.
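LZ4's core loop is greedy hash-based matching: look up the last position that began with the same four bytes, extend the match as far as it goes, and emit literals plus a back-reference. The sketch below captures that spirit in software but emits a simplified token stream, not the real LZ4 block format:

```python
def compress(data: bytes):
    """Greedy prefix-table matching in the spirit of LZ4 (toy tokens, not the LZ4 format)."""
    MIN_MATCH = 4
    table = {}   # 4-byte prefix -> last position seen (real LZ4 hashes the prefix)
    tokens = []  # (pending literals, match offset, match length)
    i, lit_start = 0, 0
    while i + MIN_MATCH <= len(data):
        key = data[i:i + MIN_MATCH]
        cand = table.get(key)
        table[key] = i
        if cand is not None:
            length = MIN_MATCH
            while i + length < len(data) and data[cand + length] == data[i + length]:
                length += 1  # extend the match greedily
            tokens.append((data[lit_start:i], i - cand, length))
            i += length
            lit_start = i
        else:
            i += 1
    tokens.append((data[lit_start:], 0, 0))  # trailing literals
    return tokens

def decompress(tokens) -> bytes:
    out = bytearray()
    for literals, offset, length in tokens:
        out += literals
        for _ in range(length):      # byte-by-byte copy handles overlapping matches
            out.append(out[-offset])
    return bytes(out)
```

The fixed hash-table lookup and bounded greedy extension are what make this loop amenable to pipelining in HLS-generated VHDL.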

Porovnání hlasových a audio kodeků / Comparison of voice and audio codecs

Lúdik, Michal January 2012 (has links)
This thesis covers human hearing, audio and speech codecs, objective quality measures, and a practical comparison of codecs. The chapter on audio codecs describes the lossless codec FLAC and the lossy codecs MP3 and Ogg Vorbis; the chapter on speech codecs describes linear predictive coding and the G.729 and Opus codecs. Quality evaluation covers the segmental signal-to-noise ratio and the perceptual quality measures WSS and PESQ. The last chapter describes the practical part of the thesis: a comparison of the memory and time consumption of the audio codecs and a perceptual evaluation of the speech codecs' quality.
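The segmental signal-to-noise ratio mentioned above averages per-frame SNRs rather than computing one global ratio, which tracks perceived speech quality more closely. A minimal sketch, with the frame length and the handling of silent frames as assumptions:

```python
import math

def segmental_snr(clean: list[float], degraded: list[float], frame: int = 160) -> float:
    """Average per-frame SNR in dB; frames with zero signal or zero error are skipped."""
    snrs = []
    for start in range(0, len(clean) - frame + 1, frame):
        sig = sum(x * x for x in clean[start:start + frame])
        err = sum((x - y) ** 2 for x, y in zip(clean[start:start + frame],
                                               degraded[start:start + frame]))
        if sig > 0 and err > 0:
            snrs.append(10 * math.log10(sig / err))
    return sum(snrs) / len(snrs) if snrs else float("inf")
```

Because every frame contributes equally, a codec that mangles a few quiet frames scores worse here than under a global SNR, matching the listener's impression.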

Bezeztrátová komprese obrazu / Lossless Image Compression

Vondrášek, Petr January 2011 (has links)
The aim of this master's thesis was to design, develop, and test a method for lossless image compression. The theoretical part describes selected existing methods such as RLE, MTF, adaptive arithmetic coding, the color models used in LOCO-I and JPEG 2000, the MED and GAP predictors, and the Laplacian pyramid. The conclusion compares various combinations of the chosen approaches and their overall efficiency against PNG and JPEG-LS.
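Of the predictors listed, MED (the Median Edge Detector from LOCO-I, the basis of JPEG-LS) is compact enough to state directly: it falls back to the left or above neighbour when it detects a vertical or horizontal edge, and to a planar estimate otherwise.

```python
def med_predict(left: int, above: int, upper_left: int) -> int:
    """Median Edge Detector predictor from LOCO-I / JPEG-LS."""
    if upper_left >= max(left, above):
        return min(left, above)       # edge detected above or to the left
    if upper_left <= min(left, above):
        return max(left, above)
    return left + above - upper_left  # smooth region: planar estimate
```

Subtracting this prediction from each pixel leaves small residuals in smooth regions without smearing across edges, which is why MED outperforms a plain linear predictor on natural images.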

Komprese signálů EKG nasnímaných pomocí mobilního zařízení / Compression of ECG signals recorded using mobile ECG device

Had, Filip January 2017 (has links)
Signal compression is a necessary part of ECG scanning because of the relatively large amount of data, which must be transmitted, primarily wirelessly, for analysis. To make wireless transmission feasible, the amount of data must be minimized as much as possible using lossless or lossy compression algorithms. This work describes the SPIHT algorithm and a newly created experimental method based on PNG, along with their testing. The thesis also presents a bank of ECG signals with accelerometer data sensed in parallel. In the last part, a modification of the SPIHT algorithm that uses the accelerometer data is described and implemented.
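The experimental method is described only as "based on PNG"; one building block such a method would plausibly share is PNG's Paeth filter, which predicts each sample from its left, above, and upper-left neighbours. A sketch of that filter (an assumption for illustration, not taken from the thesis):

```python
def paeth(left: int, above: int, upper_left: int) -> int:
    """PNG's Paeth predictor: pick the neighbour closest to left + above - upper_left."""
    p = left + above - upper_left
    pa, pb, pc = abs(p - left), abs(p - above), abs(p - upper_left)
    if pa <= pb and pa <= pc:
        return left
    if pb <= pc:
        return above
    return upper_left
```

Applied to ECG samples arranged as rows, the filter leaves small residuals that PNG's Deflate stage then compresses losslessly.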
