1 |
Enabling Approximate Storage through Lossy Media Data Compression
Worek, Brian David, 08 February 2019
Memory capacity, bandwidth, and energy all continue to present hurdles in the quest for efficient, high-speed computing. Recognition, mining, and synthesis (RMS) applications in particular are limited by the efficiency of the memory subsystem due to their large datasets and need to frequently access memory. RMS applications, such as those in machine learning, deliver intelligent analysis and decision making through their ability to learn, identify, and create complex data models. To meet growing demand for RMS application deployment in battery-constrained devices, such as mobile and Internet-of-Things devices, designers will need novel techniques to improve system energy consumption and performance. Fortunately, many RMS applications demonstrate inherent error resilience, a property that allows them to produce acceptable outputs even when the data used in computation contain errors. Approximate storage techniques across circuits, architectures, and algorithms exploit this property to improve the energy consumption and performance of the memory subsystem through quality-energy scaling. This thesis reviews state-of-the-art techniques in approximate storage and presents our own contribution, which uses lossy compression to reduce the storage cost of media data. / MS / Computer memory systems present challenges in the quest for more powerful overall computing systems. Computer applications that learn from large sets of data are particularly limited because they need to frequently access the memory system. These applications are capable of intelligent analysis and decision making due to their ability to learn, identify, and create complex data models. To meet growing demand for intelligent applications in smartphones and other Internet-connected devices, designers will need novel techniques to improve energy consumption and performance. Fortunately, many intelligent applications are naturally resistant to errors, which means they can produce acceptable outputs even when there are errors in inputs or computation. Approximate storage techniques across computer hardware and software exploit this error resistance to improve the energy consumption and performance of computer memory by purposefully reducing data precision. This thesis reviews state-of-the-art techniques in approximate storage and presents our own contribution, which uses lossy compression to reduce the storage cost of media data.
|
2 |
Perceived audio quality of compressed audio in game dialogue
Ahlberg, Anton, January 2016
A game can have thousands of sound assets; to fit all of those files in a manageable storage space, the files often have to be compressed. One type of sound that tends to take up a lot of disc space, simply because there is so much of it, is dialogue. In the popular game engine Unreal Engine 4 (UE4), audio is compressed to Ogg Vorbis with a default bit rate of 104 kbit/s. The goal of this paper is to see whether untrained listeners find dialogue compressed in Ogg Vorbis at 104 kbit/s good enough, or whether they prefer higher bit rates. A game was made in UE4 to act as a listening test. Dialogue audio was recorded with a male and a female voice actor and was compressed in UE4 at six different bit rates. Twenty-four untrained subjects were asked to play the game and identify the two out of six robots whose dialogue audio they thought sounded best. The results show that the subjects preferred the higher bit rates that were tested. The results were analyzed with a chi-squared test, which showed that the null hypothesis can be rejected. Only 21% of the answers favored UE4's default bit rate of 104 kbit/s or lower. The results suggest that the subjects prefer dialogue at higher bit rates and that UE4 should raise the default bit rate.
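The preference analysis described above can be reproduced with a simple goodness-of-fit test. The sketch below is a minimal illustration, not the thesis's actual analysis; the vote counts are hypothetical placeholders, and the null hypothesis is that the 48 selections (24 subjects picking 2 robots each) are spread uniformly across the six bit rates.

```python
from scipy.stats import chisquare

# Hypothetical selection counts for the six bit rates (lowest to highest);
# 24 subjects x 2 picks = 48 selections in total.
observed = [2, 3, 5, 8, 14, 16]
expected = [sum(observed) / len(observed)] * len(observed)  # uniform under H0

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: preferences are not uniform across bit rates.")
```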
|
3 |
Low-complexity methods for image and video watermarking
Coria Mendoza, Lino Evgueni, 05 1900
For digital media, the risk of piracy is aggravated by the ease with which content can be copied and distributed. Watermarking has become the technology of choice for discouraging people from creating illegal copies of digital content. Watermarking is the practice of imperceptibly altering media content by embedding a message, which can be used to identify the owner of that content. A watermark message can also be a set of instructions for the display equipment, providing information about the content's usage restrictions. Several applications are considered and three watermarking solutions are provided.
First, applications such as owner identification, proof of ownership, and digital fingerprinting are considered, and a fast content-dependent image watermarking method is proposed. The scheme offers a high degree of robustness against distortions, mainly additive noise, scaling, low-pass filtering, and lossy compression, while requiring only a small amount of computation. The method generates a set of evenly distributed codewords that are constructed via an iterative algorithm. Every message bit is represented by one of these codewords and is then embedded in one of the image's 8 × 8 pixel blocks. The information in that particular block is used in the embedding so as to ensure robustness and image fidelity.
Two watermarking schemes designed to prevent theatre camcorder piracy are also presented. In these methods, the video is watermarked so that its display is not permitted if a compliant video player detects the watermark. A watermark that is robust to geometric distortions (rotation, scaling, cropping) and lossy compression is required in order to block access to media content that has been recorded with a camera inside a movie theatre. The proposed algorithms take advantage of the properties of the dual-tree complex wavelet transform (DT CWT). This transform offers the advantages of both the regular and the complex wavelets (perfect reconstruction, approximate shift invariance, and good directional selectivity). Our methods use these characteristics to create watermarks that are robust to geometric distortions and lossy compression. The proposed schemes are simple to implement and outperform comparable methods when tested against geometric distortions.
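As a rough illustration of block-based additive watermarking in this spirit (not the author's exact content-dependent scheme), the sketch below embeds one codeword per 8 × 8 block by adding a scaled pseudo-random pattern and detects it by correlation; the embedding strength alpha and the codeword generation are hypothetical choices.

```python
import numpy as np

def embed_bit(block, codeword, alpha=2.0):
    # Additively embed a +/-1 codeword pattern into an 8x8 pixel block.
    return np.clip(block + alpha * codeword, 0, 255)

def detect_bit(block, codeword):
    # Correlate the zero-mean block with the codeword pattern;
    # a large positive correlation suggests the codeword is present.
    return float(np.sum((block - block.mean()) * codeword))

rng = np.random.default_rng(seed=7)
codeword = rng.choice([-1.0, 1.0], size=(8, 8))          # hypothetical codeword
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # one image block

watermarked = embed_bit(block, codeword)
print("correlation:", detect_bit(watermarked, codeword))
```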
|
4 |
Implementation of Low-Bit Rate Audio Codec, Codec2, in Verilog on Modern FPGAs
Sampath Kumar, Santhiya, 30 April 2020
No description available.
|
5 |
Comparison of lossy and lossless compression algorithms for time series data in the Internet of Vehicles
Hughes, Joseph, January 2023
As automotive development advances, connectivity features are continually added to vehicles that, in conjunction, form an Internet of Vehicles. For numerous reasons, it is vital for vehicle manufacturers to collect telemetry from their fleets. However, the volume of the generated data is too immense to feasibly be transmitted to a server, due to the CPU and memory limitations of embedded hardware and the monetary cost of cellular network usage. The purpose of this thesis is thus to investigate how these issues can be alleviated by real-time compression of time series data before off-board transmission. A hybrid approach is proposed that performs quickly and effectively on a variety of time series exhibiting varying numerical data features, all while limiting the maximum reconstruction error to a user-specified absolute value. We first perform a literature review to identify state-of-the-art time series compression algorithms that run online and provide max-error guarantees. We then choose a subset of lossless and lossy algorithms that are implemented and benchmarked with regard to their compression ratio, resource usage, and reconstruction error when used on time series that exhibit a variety of data features. Finally, we ask whether running a lossy and a lossless algorithm in succession can further increase the compression ratio. The literature review identifies a diverse range of compression algorithms. Out of these, Poor Man's Compression - MidRange (PMC-MR) and Swing filter are selected as lossy algorithms, and Run-Length Binary Encoding (RLBE) and Gorilla are selected as lossless algorithms. The experiments yield positive results for the lossy algorithms, which excel on different data sets. These are able to achieve compression ratios between 22.0% and 99.5%, depending on the data set, while limiting the max-error to 1%. In contrast, Gorilla achieves compression ratios between 66.6% and 83.7%, outperforming RLBE in nearly all aspects. Moreover, we conclude that there is a strictly positive improvement to the compression ratio when the output of a lossy algorithm is compressed losslessly. When combining either PMC-MR or Swing filter with Gorilla, we achieve compression ratios between 83.1% and 99.6% across a variety of time series, with a maximum error for any given data point of 1%.
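For readers unfamiliar with PMC-MidRange, the following is a minimal sketch of the general idea: buffer incoming samples while the running (max - min) range stays within twice the error bound, then emit a single midrange value for the whole segment. It is an illustrative reconstruction of the algorithm's core loop, not the thesis's benchmarked implementation; the function names and segment format are my own.

```python
def pmc_midrange(values, max_error):
    """Compress a sequence of floats into (count, representative) segments.

    Every reconstructed value stays within max_error of the original,
    because a segment is closed as soon as its range exceeds 2 * max_error.
    """
    segments = []
    lo = hi = values[0]
    count = 1
    for v in values[1:]:
        new_lo, new_hi = min(lo, v), max(hi, v)
        if new_hi - new_lo > 2 * max_error:
            segments.append((count, (lo + hi) / 2.0))  # close current segment
            lo = hi = v
            count = 1
        else:
            lo, hi = new_lo, new_hi
            count += 1
    segments.append((count, (lo + hi) / 2.0))
    return segments

def pmc_decompress(segments):
    return [rep for count, rep in segments for _ in range(count)]

signal = [20.0, 20.3, 20.4, 25.0, 25.2, 25.1, 19.9]
print(pmc_midrange(signal, max_error=0.5))  # [(3, 20.2), (3, 25.1), (1, 19.9)]
```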
|
6 |
Contributions on approximate computing techniques and how to measure them
Rodriguez Cancio, Marcelino, 19 December 2017
Approximate Computing is based on the idea that significant improvements in CPU, energy, and memory usage can be achieved when small levels of inaccuracy can be tolerated. This is an attractive concept, since the lack of resources is a constant problem in almost all computer science domains. From large supercomputers processing today's social media big data to small, energy-constrained embedded systems, there is always the need to optimize the consumption of some scarce resource. Approximate Computing proposes an alternative to this scarcity, introducing accuracy as yet another resource that can in turn be traded for performance, energy consumption, or storage space.
The first part of this thesis proposes two contributions to the field of Approximate Computing. Approximate Loop Unrolling: a compiler optimization that exploits the approximate nature of signal and time series data to decrease the execution time and energy consumption of the loops processing it. Our experiments showed that the optimization considerably increases the performance and energy efficiency of the optimized loops (150%-200%) while preserving accuracy at acceptable levels. Primer: the first lossy compression algorithm for assembler instructions, which profits from programs' forgiving zones to obtain a compression ratio that outperforms the current state of the art by up to 10%. The main goal of Approximate Computing is to improve the usage of resources such as performance or energy. Therefore, a fair deal of effort is dedicated to observing the actual benefit obtained by exploiting a given technique under study. One of the resources that has historically been challenging to measure accurately is execution time. Hence, the second part of this thesis proposes the following tool. AutoJMH: a tool to automatically create performance microbenchmarks in Java. Microbenchmarks provide the finest-grained performance assessment. Yet, requiring a great deal of expertise, they remain the craft of a few performance engineers. The tool enables (thanks to automation) the adoption of microbenchmarks by non-experts. Our results show that the generated microbenchmarks match the quality of those handwritten by performance experts and outperform those written by professional Java developers without experience in microbenchmarking.
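To make the idea behind Approximate Loop Unrolling concrete, here is a rough source-level sketch (my own illustration, not the compiler transformation described in the thesis): because neighbouring samples of a signal are similar, an expensive per-sample function can be evaluated only every few iterations and the skipped results filled in by linear interpolation.

```python
import math

def approx_map(signal, f, stride=2):
    """Apply f to every `stride`-th sample and interpolate the rest.

    Trades a bounded amount of accuracy for roughly a 1/stride reduction in
    calls to f, which is the spirit of approximating loops over smooth data.
    """
    n = len(signal)
    exact = list(range(0, n, stride))
    if exact[-1] != n - 1:
        exact.append(n - 1)

    out = [0.0] * n
    for i in exact:                      # exact evaluations
        out[i] = f(signal[i])
    for a, b in zip(exact, exact[1:]):   # interpolate the skipped samples
        for j in range(a + 1, b):
            t = (j - a) / (b - a)
            out[j] = (1.0 - t) * out[a] + t * out[b]
    return out

samples = [0.1 * k for k in range(16)]
approx = approx_map(samples, math.sin, stride=4)
exact = [math.sin(x) for x in samples]
print("max abs error:", max(abs(a - e) for a, e in zip(approx, exact)))
```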
|
7 |
Permutation Codes for Data Compression and Modulation
Silva, Danilo, 01 April 2005
Permutation codes are an interesting mathematical tool which can be used to devise both lossy compression schemes and modulation schemes for digital transmission systems. Vector permutation codes, a more powerful extension of scalar permutation codes, were recently introduced for the purpose of source compression. This work presents new contributions to this theory and also introduces vector permutation codes for the purpose of modulation.
For source compression, it is proved that vector permutation codes (VPCs) have an asymptotic performance equal to that of an entropy-constrained vector quantizer (ECVQ). Based on this development, an efficient method is proposed for VPC design. Experimental results for Gaussian and uniform sources show that the codes designed by this method indeed perform well: VPCs are exhibited whose performance is similar to that of ECVQ and superior to that of their scalar counterparts. In the context of digital transmission, it is verified that vector permutation modulation (VPM) is also superior in performance to scalar permutation modulation. Expressions are developed for the optimal design of VPM, and a method is presented for maximum-likelihood detection of VPM in AWGN and fading channels.
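As background on the scalar case that vector permutation codes extend, here is a minimal sketch of permutation source coding (a generic illustration under my own naming, not the VPC design method of the thesis): the codebook consists of all permutations of a fixed sorted vector, so a source block is encoded simply by the rank order of its components, and decoding places the fixed values back in that order.

```python
import numpy as np

def permutation_encode(x):
    # The rank of each component identifies the nearest permutation codeword:
    # under squared error, sorted codebook values should follow the ranks of x.
    return np.argsort(np.argsort(x))

def permutation_decode(ranks, codebook_values):
    mu = np.sort(np.asarray(codebook_values, dtype=float))
    return mu[ranks]

x = np.array([0.9, -1.4, 0.1, 2.3])
mu = [-1.5, -0.5, 0.5, 1.5]          # hypothetical initial vector

ranks = permutation_encode(x)        # array([2, 0, 1, 3])
x_hat = permutation_decode(ranks, mu)
print(ranks, x_hat)                  # reconstruction preserves the ordering of x
```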
|
8 |
Power System Data Compression For Archiving
Das, Sarasij, 11 1900
Advances in electronics, computer, and information technology are fueling major changes in the area of power system instrumentation. More and more microprocessor-based digital instruments are replacing older types of meters. Extensive deployment of digital instruments is generating vast quantities of data, which is creating information pressure in utilities. Legacy SCADA-based data management systems do not support management of such huge volumes of data. As a result, utilities either have to delete the metered information or store it on compact discs or tape drives, which are unreliable.
At the same time, the traditionally integrated power industry is going through a deregulation process. The market principle is forcing competition between power utilities, which in turn demands a higher focus on profit and competitive edge. To optimize system operation and planning, utilities need better decision-making processes, which depend on the availability of reliable system information. For utilities it is becoming clear that information is a vital asset, so they are now keen to store and use as much information as they can.
Existing SCADA-based systems do not allow more than a few months of data to be stored. This dissertation therefore assesses the effectiveness of compression algorithms in compressing real-time operational data. Both lossy and lossless compression schemes are considered. In the lossless approach, two schemes are proposed: Scheme 1 is based on arithmetic coding and Scheme 2 on run-length coding. Both schemes have two stages, the first of which is common to both: consecutive data elements are decorrelated using linear predictors. The output of the linear predictor, called the residual sequence, is coded by arithmetic coding in Scheme 1 and by run-length coding in Scheme 2. Three types of arithmetic coding are considered in this study: static, decrement, and adaptive. Static and decrement coding are two-pass methods, where the first pass collects symbol statistics and the second codes the symbols; the adaptive method uses only one pass.
In the arithmetic coding based schemes, the average compression ratio achieved is around 30 for voltage data, around 9 for frequency data, around 14 for VAr generation data, around 11 for MW generation data, and around 14 for line flow data. In Scheme 2, Golomb-Rice coding is used to compress the run lengths, and the average compression ratio achieved is around 25 for voltage data, around 7 for frequency data, around 10 for VAr generation data, around 8 for MW generation data, and around 9 for line flow data. The arithmetic coding based method mainly aims at achieving a high compression ratio, whereas the Golomb-Rice based method does not compress as well but is computationally much simpler.
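The sketch below illustrates the flavour of the lossless pipeline: a first-order linear predictor decorrelates the samples, and the residuals are Golomb-Rice coded. It is a simplified stand-in rather than the thesis's Scheme 2 (which run-length encodes the residual sequence and Rice-codes the run lengths); the quantization step, predictor order, and Rice parameter are hypothetical choices.

```python
def rice_encode(residuals, k):
    """Golomb-Rice code a list of signed integer residuals with parameter k."""
    bits = []
    for r in residuals:
        u = 2 * r if r >= 0 else -2 * r - 1      # zig-zag map to non-negative
        q, rem = divmod(u, 1 << k)
        bits.append("1" * q + "0" + format(rem, f"0{k}b"))  # unary quotient + k-bit remainder
    return "".join(bits)

def predict_and_encode(samples, scale=100, k=3):
    # Quantize, then code first-order prediction residuals x[i] - x[i-1].
    q = [round(s * scale) for s in samples]
    residuals = [q[0]] + [b - a for a, b in zip(q, q[1:])]
    return rice_encode(residuals, k)

voltages = [1.012, 1.013, 1.013, 1.011, 1.010, 1.012]   # hypothetical per-unit data
code = predict_and_encode(voltages)
print(len(code), "bits instead of", 32 * len(voltages))
```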
In the lossy method, a principal component analysis (PCA) based compression scheme is used: a few uncorrelated variables are derived from the data set and stored. The compression ratio of the PCA-based scheme is around 105-115 for voltage data, around 55-58 for VAr generation data, around 21-23 for MW generation data, and around 27-29 for line flow data. This shows that the voltage parameter is amenable to better compression than the other parameters.
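A minimal sketch of PCA-based archiving along these lines (a generic illustration, not the thesis's implementation) keeps only the leading principal components of a samples-by-signals matrix and reconstructs approximate measurements from them; the number of retained components k is a hypothetical tuning knob.

```python
import numpy as np

def pca_compress(X, k):
    """X: (n_samples, n_signals) measurement matrix; keep k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                  # (k, n_signals) loading vectors
    scores = Xc @ components.T           # (n_samples, k) compressed representation
    return mean, components, scores

def pca_reconstruct(mean, components, scores):
    return scores @ components + mean

rng = np.random.default_rng(0)
X = rng.normal(1.0, 0.01, size=(1000, 20))           # hypothetical bus voltage archive
mean, comps, scores = pca_compress(X, k=3)
X_hat = pca_reconstruct(mean, comps, scores)
print("max reconstruction error:", np.abs(X - X_hat).max())
```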
Data for five system parameters - voltage, line flow, frequency, MW generation, and MVAr generation - of the Southern regional grid of India have been considered for the study. One of the aims of this thesis is to argue that collected power system data can be put to other uses as well. In particular, we show that even mining the small amount of practical data collected from SRLDC reveals some interesting system behaviour patterns. A noteworthy feature of the thesis is that all the studies have been carried out using data from practical systems. It is believed that the thesis opens up new questions for further investigation.
|