1 |
Enabling Approximate Storage through Lossy Media Data Compression / Worek, Brian David, 08 February 2019
Memory capacity, bandwidth, and energy all continue to present hurdles in the quest for efficient, high-speed computing. Recognition, mining, and synthesis (RMS) applications in particular are limited by the efficiency of the memory subsystem due to their large datasets and their need to frequently access memory. RMS applications, such as those in machine learning, deliver intelligent analysis and decision making through their ability to learn, identify, and create complex data models. To meet growing demand for RMS application deployment in battery-constrained devices, such as mobile and Internet-of-Things devices, designers will need novel techniques to improve system energy consumption and performance. Fortunately, many RMS applications demonstrate inherent error resilience, a property that allows them to produce acceptable outputs even when data used in computation contain errors. Approximate storage techniques across circuits, architectures, and algorithms exploit this property to improve the energy consumption and performance of the memory subsystem through quality-energy scaling. This thesis reviews state-of-the-art techniques in approximate storage and presents our own contribution that uses lossy compression to reduce the storage cost of media data. / MS / Computer memory systems present challenges in the quest for more powerful overall computing systems. Computer applications with the ability to learn from large sets of data in particular are limited because they need to frequently access the memory system. These applications are capable of intelligent analysis and decision making due to their ability to learn, identify, and create complex data models. To meet growing demand for intelligent applications in smartphones and other Internet-connected devices, designers will need novel techniques to improve energy consumption and performance. Fortunately, many intelligent applications are naturally resistant to errors, which means they can produce acceptable outputs even when there are errors in inputs or computation. Approximate storage techniques across computer hardware and software exploit this error resistance to improve the energy consumption and performance of computer memory by purposefully reducing data precision. This thesis reviews state-of-the-art techniques in approximate storage and presents our own contribution that uses lossy compression to reduce the storage cost of media data.
|
2 |
Perceived audio quality of compressed audio in game dialogue / Ahlberg, Anton, January 2016
A game can have thousands of sound assets; to fit all of those files into a manageable amount of storage space, they have to be compressed. One type of sound that often takes up a lot of disc space (because there is so much of it) is dialogue. In the popular game engine Unreal Engine 4 (UE4), audio is compressed to Ogg Vorbis, and by default the bit rate is set to 104 kbit/s. The goal of this paper is to see whether untrained listeners find dialogue compressed in Ogg Vorbis at 104 kbit/s good enough, or whether they prefer higher bit rates. A game was made in UE4 that would act as a listening test. Dialogue audio was recorded with a male and a female voice actor and was compressed in UE4 at six different bit rates. 24 untrained subjects were asked to play the game and identify the two out of six robots with the dialogue audio they thought sounded the best. The results show that the subjects preferred the higher bit rates that were tested. The results were analyzed with a chi-squared test, which showed that the null hypothesis can be rejected. Only 21% of the answers were for UE4's default bit rate of 104 kbit/s or lower. The results suggest that the subjects prefer dialogue at higher bit rates and that UE4 should raise the default bit rate.
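For readers unfamiliar with the analysis mentioned above, the kind of chi-squared goodness-of-fit test described (listener choices tested against a uniform "no bit rate is preferred" null hypothesis) can be sketched in Python. The bit rates and answer counts below are invented for illustration and are not the thesis's data.

    from scipy.stats import chisquare

    # Hypothetical per-bit-rate counts of "sounds best" picks (48 answers = 24 subjects x 2
    # picks); the tested bit rates and the counts are assumptions, not the thesis's data.
    bit_rates_kbps   = [64, 80, 104, 128, 160, 256]
    observed_choices = [ 2,  3,   5,   8,  14,  16]
    expected_uniform = [sum(observed_choices) / len(observed_choices)] * len(observed_choices)

    statistic, p_value = chisquare(f_obs=observed_choices, f_exp=expected_uniform)
    for rate, count in zip(bit_rates_kbps, observed_choices):
        print(f"{rate:>3} kbit/s: {count} picks")
    print(f"chi2 = {statistic:.2f}, p = {p_value:.4f}")   # small p: reject the uniform null

With these invented counts the test rejects the uniform-preference null hypothesis at the 5% level, which mirrors the form of the conclusion reported in the abstract.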
|
3 |
Low-complexity methods for image and video watermarking / Coria Mendoza, Lino Evgueni, 05 1900
For digital media, the risk of piracy is aggravated by the ease with which content can be copied and distributed. Watermarking has become the technology of choice for discouraging people from creating illegal copies of digital content. Watermarking is the practice of imperceptibly altering the media content by embedding a message, which can be used to identify the owner of that content. A watermark message can also be a set of instructions for the display equipment, providing information about the content’s usage restrictions. Several applications are considered, and three watermarking solutions are provided.
First, applications such as owner identification, proof of ownership, and digital fingerprinting are considered, and a fast content-dependent image watermarking method is proposed. The scheme offers a high degree of robustness against distortions, mainly additive noise, scaling, low-pass filtering, and lossy compression. This method also requires only a small amount of computation. The method generates a set of evenly distributed codewords that are constructed via an iterative algorithm. Every message bit is represented by one of these codewords and is then embedded in one of the image’s 8 × 8 pixel blocks. The information in that particular block is used in the embedding so as to ensure robustness and image fidelity.
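As a rough illustration of the block-based codeword idea described above, the following Python sketch adds one of two codewords to an 8 × 8 block and recovers the embedded bit by correlation. The random ±1 codewords and the non-blind detector are simplifications for the sketch; they stand in for, and are not, the author's iteratively constructed codewords and actual detection scheme.

    import numpy as np

    rng = np.random.default_rng(seed=7)
    codewords = rng.choice([-1.0, 1.0], size=(2, 8, 8))  # one stand-in codeword per bit value
    alpha = 2.0                                          # embedding strength (fidelity vs. robustness)

    def embed_bit(block, bit):
        """Add the codeword for `bit` to an 8x8 pixel block."""
        return block + alpha * codewords[bit]

    def detect_bit(received, original):
        """Non-blind detection: correlate the received-minus-original difference
        with each codeword and pick the best match."""
        diff = received - original
        scores = [float(np.sum(diff * c)) for c in codewords]
        return int(np.argmax(scores))

    original = rng.uniform(0, 255, size=(8, 8))
    watermarked = embed_bit(original, bit=1)
    attacked = watermarked + rng.normal(0, 1.0, size=(8, 8))  # mild additive noise
    print("recovered bit:", detect_bit(attacked, original))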
Two watermarking schemes designed to prevent theatre camcorder piracy are also presented. In these methods, the video is watermarked so that its display is not permitted if a compliant video player detects the watermark. A watermark that is robust to geometric distortions (rotation, scaling, cropping) and lossy compression is required in order to block access to media content that has been recorded with a camera inside a movie theatre. The proposed algorithms take advantage of the properties of the dual-tree complex wavelet transform (DT CWT). This transform offers the advantages of both the regular and the complex wavelets (perfect reconstruction, approximate shift invariance and good directional selectivity). Our methods use these characteristics to create watermarks that are robust to geometric distortions and lossy compression. The proposed schemes are simple to implement and outperform comparable methods when tested against geometric distortions.
|
6 |
On the Rate-Distortion-Perception Tradeoff for Lossy Compression / Qian, Jingjing, January 2023
Deep generative models, when utilized in lossy image compression tasks, can reconstruct realistic-looking outputs even at extremely low bit rates, while traditional compression methods often exhibit noticeable artifacts under similar conditions. As a result, there has been a substantial surge of interest in both the information-theoretic aspects and the practical architectures of deep-learning-based image compression. This thesis makes contributions to the emerging framework of rate-distortion-perception theory. The main results are summarized as follows:
1. We investigate the tradeoff among rate, distortion, and perception for binary sources. The distortion considered here is the Hamming distortion and the perception quality is measured by the total variation distance. We first derive a closed-form expression for the rate-distortion-perception tradeoff in the one-shot setting. This is followed by a complete characterization of the achievable distortion-perception region for a general representation. We then consider the universal setting in which the encoder is one-size-fits-all, and derive upper and lower bounds on the minimum rate penalty. Finally, we study successive refinement for both point-wise and set-wise versions of perception-constrained lossy compression. A necessary and sufficient condition for point-wise successive refinement and a sufficient condition for the successive refinability of universal representations are provided.
2. Next, we characterize the expression for the rate-distortion-perception function of vector Gaussian sources, which extends the result in the scalar counterpart, and show that in the high-perceptual-quality regime, each component of the reconstruction (including high-frequency components) is strictly correlated with that of the source, which is in contrast to the traditional water-filling solution. This result is obtained by optimizing over all possible encoder-decoder pairs subject to the distortion and perception constraints. We then consider the notion of universal representation where the encoder is fixed and the decoder is adapted to achieve different distortion-perception pairs. We characterize the achievable distortion-perception region for a fixed representation and demonstrate that the corresponding distortion-perception tradeoff is approximately optimal.
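For context, both parts above study variants of the rate-distortion-perception function. One standard informational form from the literature is written below in LaTeX; here the distortion measure Δ and the divergence d are placeholders (part 1 uses Hamming distortion with total variation distance, and the Gaussian case uses squared error with a suitable divergence), and this is not claimed to be the thesis's exact formulation.

    % Requires amsmath/amssymb. Standard rate-distortion-perception function, where the
    % minimization is over conditional distributions of the reconstruction given the source.
    \begin{equation*}
      R(D, P) \;=\; \min_{p_{\hat{X} \mid X}} \; I(X; \hat{X})
      \quad \text{subject to} \quad
      \mathbb{E}\!\left[ \Delta(X, \hat{X}) \right] \le D ,
      \qquad d\!\left( p_X , p_{\hat{X}} \right) \le P .
    \end{equation*}

Here I(X; X̂) is the mutual information between the source and its reconstruction, D bounds the expected distortion, and P bounds the divergence between the source and reconstruction distributions (the perception constraint).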
Our findings significantly enrich the nascent rate-distortion-perception theory, establishing a solid foundation for the field of learned image compression. / None / Doctor of Philosophy (PhD)
|
7 |
Implementation of Low-Bit Rate Audio Codec, Codec2, in Verilog on Modern FPGAs / Sampath Kumar, Santhiya, 30 April 2020
No description available.
|
8 |
Comparison of lossy and lossless compression algorithms for time series data in the Internet of Vehicles / Jämförelse av destruktiva och icke-förstörande komprimeringsalgorithmer för tidsseriedata inom fordonens internet / Hughes, Joseph, January 2023
As automotive development advances, connectivity features are continually added to vehicles that, in conjunction, form an Internet of Vehicles. For numerous reasons, it is vital for vehicle manufacturers to collect telemetry from their fleets. However, the volume of the generated data is too immense to feasibly be transmitted to a server due to CPU and memory limitations of embedded hardware and the monetary cost of cellular network usage. The purpose of this thesis is thus to investigate how these issues can be alleviated by the use of real-time compression of time series data before off-board transmission. A hybrid approach is proposed that results in fast and effective performance on a variety of time series exhibiting varying numerical data features, all while limiting the maximum reconstruction error to a user-specified absolute value. We first perform a literature review to identify state of the art compression algorithms for time series compression that run online and provide max-error guarantees. We then choose a subset of lossless and lossy algorithms that are implemented and benchmarked with regards to their compression ratio, resource usage, and reconstruction error when used on time series that exhibit a variety of data features. Finally, we ask whether we are able to run a lossy and lossless algorithm in succession in order to further increase the compression ratio. The literature review identifies a diverse range of compression algorithms. Out of these, the algorithms Poor Man's Compression - MidRange (PMC-MR) and Swing filter are selected as lossy algorithms, and Run-length Binary Encoding (RLBE) and Gorilla are selected as lossless algorithms. The experiments yield positive results for the lossy algorithms, which excel on different data sets. These are able to achieve compression ratios between 22.0% and 99.5%, depending on the data set, while limiting the max-error to 1%. In contrast, Gorilla achieves compression ratios between 66.6% and 83.7%, outperforming RLBE in nearly all aspects. Moreover, we conclude that there is a strictly positive improvement to the compression ratio when losslessly compressing the result of lossily compressed data. When combining either PMC-MR or Swing filter with Gorilla, we achieve compression ratios between 83.1% and 99.6% across a variety of time series with a maximum error for any given data point of 1%.
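As an illustration of one of the lossy algorithms named above, the following Python sketch implements the Poor Man's Compression - MidRange idea as it is commonly described in the time-series literature: consecutive samples are grouped into segments as long as the running maximum minus the running minimum stays within twice the error bound, and each segment is replaced by a single midrange value, so every reconstructed sample is within epsilon of the original. This is a plain interpretation of PMC-MR, not necessarily the exact variant benchmarked in the thesis.

    def pmc_midrange(values, epsilon):
        """Compress a sequence into (segment_length, midrange_value) pairs."""
        segments = []
        seg_min = seg_max = None
        seg_len = 0
        for v in values:
            lo = v if seg_min is None else min(seg_min, v)
            hi = v if seg_max is None else max(seg_max, v)
            if seg_len > 0 and hi - lo > 2 * epsilon:
                segments.append((seg_len, (seg_min + seg_max) / 2))  # close current segment
                seg_min, seg_max, seg_len = v, v, 1
            else:
                seg_min, seg_max, seg_len = lo, hi, seg_len + 1
        if seg_len > 0:
            segments.append((seg_len, (seg_min + seg_max) / 2))
        return segments

    def decompress(segments):
        return [mid for length, mid in segments for _ in range(length)]

    signal = [10.0, 10.2, 10.1, 10.4, 15.0, 15.3, 15.1, 9.8]
    compressed = pmc_midrange(signal, epsilon=0.5)
    restored = decompress(compressed)
    assert all(abs(a - b) <= 0.5 for a, b in zip(signal, restored))  # max-error guarantee
    print(compressed)  # e.g. [(4, 10.2), (3, 15.15), (1, 9.8)]

The assertion makes the max-error guarantee discussed in the abstract explicit: every restored sample differs from the original by at most epsilon, while the stored data shrinks to one pair per segment.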
|
9 |
Contributions on approximate computing techniques and how to measure them / Contributions sur les techniques de computation approximée et comment les mesurer / Rodriguez Cancio, Marcelino, 19 December 2017
La Computation Approximée est basée sur l'idée que des améliorations significatives de l'utilisation du processeur, de l'énergie et de la mémoire peuvent être réalisées lorsque de faibles niveaux d'imprécision peuvent être tolérés. C'est un concept intéressant, car le manque de ressources est un problème constant dans presque tous les domaines de l'informatique. Des grands superordinateurs qui traitent les big data d'aujourd'hui sur les réseaux sociaux, aux petits systèmes embarqués à contrainte énergétique, il y a toujours le besoin d'optimiser la consommation de ressources. La Computation Approximée propose une alternative à cette rareté, introduisant la précision comme une autre ressource qui peut à son tour être échangée contre la performance, la consommation d'énergie ou l'espace de stockage. La première partie de cette thèse propose deux contributions au domaine de l'informatique approximative : Approximate Loop Unrolling : optimisation du compilateur qui exploite la nature approximative des données de séries chronologiques et de signaux pour réduire les temps d'exécution et la consommation d'énergie des boucles qui les traitent. Nos expériences ont montré que l'optimisation augmente considérablement les performances et l'efficacité énergétique des boucles optimisées (150% - 200%) tout en préservant la précision à des niveaux acceptables. Primer : le premier algorithme de compression avec perte pour les instructions de l'assembleur, qui profite des zones de pardon des programmes pour obtenir un taux de compression qui surpasse les techniques utilisées actuellement jusqu'à 10%. L'objectif principal de la Computation Approximée est d'améliorer l'utilisation de ressources, telles que la performance ou l'énergie. Par conséquent, beaucoup d'efforts sont consacrés à l'observation du bénéfice réel obtenu en exploitant une technique donnée à l'étude. L'une des ressources qui a toujours été difficile à mesurer avec précision est le temps d'exécution. Ainsi, la deuxième partie de cette thèse propose l'outil suivant : AutoJMH : un outil pour créer automatiquement des microbenchmarks de performance en Java. Les microbenchmarks fournissent l'évaluation la plus précise de la performance. Cependant, nécessitant beaucoup d'expertise, ils restent le métier de quelques ingénieurs de performance. L'outil permet (grâce à l'automatisation) l'adoption des microbenchmarks par des non-experts. Nos résultats montrent que les microbenchmarks générés correspondent à la qualité de ceux écrits à la main par des experts en performance. Ils surpassent aussi ceux écrits par des développeurs professionnels en Java sans expérience en microbenchmarking. / Approximate Computing is based on the idea that significant improvements in CPU, energy and memory usage can be achieved when small levels of inaccuracy can be tolerated. This is an attractive concept, since the lack of resources is a constant problem in almost all computer science domains. From large supercomputers processing today’s social media big data, to small, energy-constrained embedded systems, there is always the need to optimize the consumption of some scarce resource. Approximate Computing proposes an alternative to this scarcity, introducing accuracy as yet another resource that can in turn be traded for performance, energy consumption or storage space.
The first part of this thesis proposes the following two contributions to the field of Approximate Computing: Approximate Loop Unrolling: a compiler optimization that exploits the approximate nature of signal and time-series data to decrease the execution time and energy consumption of the loops that process it. Our experiments showed that the optimization considerably increases the performance and energy efficiency of the optimized loops (150% - 200%) while preserving accuracy at acceptable levels. Primer: the first lossy compression algorithm for assembler instructions, which profits from programs’ forgiving zones to obtain a compression ratio that outperforms the current state of the art by up to 10%. The main goal of Approximate Computing is to improve the usage of resources such as performance or energy. Therefore, a great deal of effort is dedicated to observing the actual benefit obtained by exploiting a given technique under study. One of the resources that has historically been challenging to measure accurately is execution time. Hence, the second part of this thesis proposes the following tool: AutoJMH, a tool to automatically create performance microbenchmarks in Java. Microbenchmarks provide the finest-grained performance assessment. Yet, requiring a great deal of expertise, they remain the craft of a few performance engineers. The tool enables (thanks to automation) the adoption of microbenchmarks by non-experts. Our results show that the generated microbenchmarks match the quality of payloads handwritten by performance experts and outperform those written by professional Java developers without experience in microbenchmarking.
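The following Python sketch is only a toy, source-level illustration of the Approximate Loop Unrolling idea described above: when a loop maps an expensive function over smooth signal or time-series data, some unrolled iterations can skip the computation and interpolate between neighbouring results instead. The stand-in function and the loop are invented for the sketch; the actual contribution is a compiler-level transformation, not library code, and no speedup figures are implied here.

    import math

    def expensive(x):
        return math.sin(x) * math.exp(-0.1 * x)     # stand-in for a costly per-sample computation

    def exact_loop(samples):
        return [expensive(x) for x in samples]

    def approximate_unrolled_loop(samples):
        out = [None] * len(samples)
        for i in range(0, len(samples), 2):         # exact computation on every other sample
            out[i] = expensive(samples[i])
        for i in range(1, len(samples), 2):         # approximate the skipped samples
            if i + 1 < len(samples):
                out[i] = 0.5 * (out[i - 1] + out[i + 1])   # interpolation instead of computation
            else:
                out[i] = expensive(samples[i])      # no right neighbour: fall back to exact
        return out

    samples = [0.01 * k for k in range(1000)]
    exact = exact_loop(samples)
    approx = approximate_unrolled_loop(samples)
    max_err = max(abs(a - b) for a, b in zip(exact, approx))
    print(f"expensive() calls roughly halved, max absolute error = {max_err:.2e}")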
|
10 |
Využití pokročilých objektivních kritérií hodnocení při kompresi obrazu / Advanced objective measurement criteria applied to image compression / Šimek, Josef, January 2010
This diploma thesis deals with the problem of using objective quality assessment methods in image data compression. Lossy compression always introduces some kind of distortion into the processed data, causing degradation in the quality of the image. The intensity of this distortion can be measured using subjective or objective methods. To be able to optimize compression algorithms, objective criteria are used. In this work, the SSIM index is presented as a useful tool for describing the quality of compressed images. The lossy compression scheme is realized using the wavelet transform and the SPIHT algorithm. A modification of this algorithm was implemented that partitions the wavelet coefficients into separate tree-preserving blocks that are coded independently, which is especially suitable for parallel processing. For a given compression ratio, the traditional problem is solved: how to allocate the available bits among the spatial blocks to achieve the highest possible image quality. Possible approaches to this problem were discussed. As a result, some methods for bit allocation based on the MSSIM index were proposed. To test the effectiveness of these methods, the MATLAB environment was used.
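Since the abstract builds on the SSIM/MSSIM index, its standard definition is reproduced below in LaTeX for reference. This is the general formulation from the original SSIM literature, not anything specific to this thesis's implementation.

    % Requires amsmath. Standard SSIM between image windows x and y, and the mean SSIM
    % (MSSIM) obtained by averaging over the M windows of an image.
    \begin{align*}
      \mathrm{SSIM}(x, y) &=
        \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}
             {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},
        \qquad C_1 = (K_1 L)^2, \; C_2 = (K_2 L)^2, \\
      \mathrm{MSSIM}(X, Y) &= \frac{1}{M} \sum_{j=1}^{M} \mathrm{SSIM}(x_j, y_j).
    \end{align*}

Here \mu_x, \mu_y are the local window means, \sigma_x^2, \sigma_y^2 the variances, \sigma_{xy} the covariance, L the dynamic range of the pixel values, and K_1 = 0.01, K_2 = 0.03 the small stabilizing constants commonly used in the original SSIM formulation.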
|