231 |
An entropy based adaptive image encoding technique
Murphy, Gregory Paul 01 January 1990 (has links)
Many image encoders exist that reduce the amount of information that needs to be transmitted or stored on disk. Reducing the information lowers the transmission rate but compromises image quality. The encoders with the best compression ratios often lose image quality by distorting the high-frequency portions of the image. Other encoders have slow algorithms that will not work in real time. Encoders that use quantizers often exhibit a gray-scale contouring effect due to insufficient quantizer levels. This paper presents a fast encoding algorithm that reduces the number of quantizer levels without introducing an error large enough to cause gray-scale contouring. The new algorithm uses entropy to determine the most advantageous difference mapping technique and the number of bits per pixel used to encode the image. The double-difference values are reduced in magnitude so that an eight-level power-series quantizer can be used without introducing an error large enough to cause gray-scale contouring. Applied one-dimensionally, the algorithm yields 3.0 bits per pixel with an RMS error of 4.2 gray-scale values; applied two-dimensionally, it reduces the image to 1.5 bits per pixel with an RMS error of 6.7 gray-scale values.
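The selection step described above, computing the entropy of candidate difference mappings and keeping the cheapest, can be sketched as follows (a minimal sketch: the function names and the horizontal-only differencing are illustrative assumptions, not the thesis's exact formulation):

```python
import numpy as np

def entropy_bits(values):
    """Shannon entropy, in bits per symbol, of an integer-valued array."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def choose_difference_mapping(image):
    """Compare the entropy of raw pixels, first differences, and
    double (second) differences; the lowest entropy suggests the
    cheapest mapping to encode (hypothetical selection rule)."""
    img = image.astype(np.int32)
    raw = img.ravel()
    diff = np.diff(img, axis=1).ravel()        # horizontal first difference
    ddiff = np.diff(img, n=2, axis=1).ravel()  # horizontal double difference
    candidates = {"raw": raw, "difference": diff, "double-difference": ddiff}
    return min(candidates, key=lambda k: entropy_bits(candidates[k]))
```

On a smooth gradient, the difference mappings collapse most pixels onto a few small values, so their entropy (and hence the bits per pixel needed) drops well below that of the raw image.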
|
232 |
The evaluation of chest images compressed with JPEG and wavelet techniques
Wen, Cathlyn Y. 22 August 2008 (has links)
Image compression reduces the amount of space necessary to store digital images and allows quick transmission of images to other hospitals, departments, or clinics. However, the degradation of image quality due to compression may not be acceptable to radiologists or it may affect diagnostic results. A preliminary study was conducted using several chest images with common lung diseases and compressed with JPEG and wavelet techniques at various ratios. Twelve board-certified radiologists were recruited to perform two types of experiments.
In the first part of the experiment, presence of lung disease, confidence of presence of lung disease, severity of lung disease, confidence of severity of lung disease, and difficulty of making a diagnosis were rated by radiologists. The six images presented were either uncompressed or compressed at 32:1 or 48:1 compression ratios.
In the second part of the experiment, radiologists were asked to make subjective ratings by comparing the image quality of the uncompressed version of an image with the compressed version of the same image, and judging the acceptability of the compressed image for diagnosis. The second part examined a finer range of compression ratios (8:1, 16:1, 24:1, 32:1, 44:1, and 48:1).
In all cases, radiologists were able to judge the presence of lung disease and experienced little difficulty diagnosing the images. Image degradation perceptibility increased as the compression ratio increased; however, among the levels of compression ratio tested, the quality of compressed images was judged to be only slightly worse than the original image. At higher compression ratios, JPEG images were judged to be less acceptable than wavelet-based images but radiologists believed that all the images were still acceptable for diagnosis.
These results should be interpreted carefully because there were only six original images tested, but results indicate that compression ratios of up to 48:1 are acceptable using the two medically optimized compression methods, JPEG and wavelet techniques. / Master of Science
|
233 |
Advancing Learned Lossy Image Compression through Knowledge Distillation and Contextual Clustering
Yichi Zhang (19960344) 29 October 2024 (has links)
In recent decades, the rapid growth of internet traffic, driven in particular by high-definition images and videos, has highlighted the critical need for effective image compression to reduce bit rates and enable efficient data transmission. Learned lossy image compression (LIC), which uses end-to-end deep neural networks, has emerged as a highly promising approach, even outperforming traditional methods such as the intra coding of the versatile video coding (VVC) standard. This thesis contributes to the field of LIC in two ways. First, we present a theoretical-bound-guided knowledge distillation technique, which utilizes estimated information rate-distortion (R-D) bound functions to guide the training of LIC models. Implemented with a modified hierarchical variational autoencoder (VAE), this method demonstrates superior rate-distortion performance at reduced computational complexity. Next, we introduce a token-mixer neural architecture, referred to as contextual clustering, which serves as an alternative to conventional convolutional layers or the self-attention mechanisms of transformer architectures. Contextual clustering groups pixels based on their cosine similarity and uses linear layers to aggregate features within each cluster. Integrated into current LIC methods, it both improves coding performance and reduces computational load.
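The clustering-and-aggregation idea can be illustrated with a minimal NumPy sketch (a simplification, not the thesis's trained architecture: the mean pooling, residual connection, and single shared linear layer are assumptions made for brevity):

```python
import numpy as np

def contextual_cluster(features, centers, weight, bias):
    """Simplified contextual-clustering token mixer: assign each
    feature vector to the center with the highest cosine similarity,
    then aggregate features within each cluster through a shared
    linear layer and add the result back residually.

    features: (N, D) per-pixel feature vectors
    centers:  (K, D) cluster centers
    weight:   (D, D) linear-layer weight; bias: (D,) linear-layer bias
    Returns the (N, D) mixed features.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    assign = (f @ c.T).argmax(axis=1)           # cosine-similarity assignment
    out = np.empty_like(features, dtype=float)
    for k in range(centers.shape[0]):
        mask = assign == k
        if not mask.any():
            continue
        pooled = features[mask].mean(axis=0)    # aggregate within the cluster
        out[mask] = features[mask] + (pooled @ weight + bias)  # residual mix
    return out
```

Unlike self-attention, the mixing cost here grows with the number of clusters rather than quadratically with the number of pixels, which is the source of the computational savings the abstract mentions.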
|
234 |
Reducing Energy Consumption Through Image Compression / Reducera energiförbrukning genom bildkompression
Ferdeen, Mats January 2016 (links)
The energy consumption of off-chip memory writes and reads is a known problem. In the image-processing field of structure from motion, simpler compression techniques can be used to save energy. The balance between detected features, such as corners and edges, and the degree of compression then becomes the central question. This thesis studies that balance in depth. A number of more advanced compression algorithms for still images, such as JPEG, are compared with a selected number of simpler compression algorithms. The simpler algorithms fall into two categories: individual block-wise compression of each image, and compression with respect to all pixels in each image. The image sequences in this study are in grayscale and were provided by an earlier study on rolling shutters. Synthetic data sets from a further study on optical flow are also included to gauge how reliable the other data sets are.
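As one hypothetical member of the "individual block-wise" category (the abstract does not specify the thesis's actual simple algorithms, so this scheme and its parameters are illustrative only), each block can be quantized independently against its own range:

```python
import numpy as np

def blockwise_quantize(image, block=8, levels=16):
    """Hypothetical simple block-wise compressor: quantize each block
    independently to `levels` levels between its own min and max (the
    block min/max would be stored as side information). Returns the
    reconstructed grayscale image."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = image[y:y+block, x:x+block].astype(float)
            lo, hi = b.min(), b.max()
            if hi == lo:                      # flat block: nothing to quantize
                out[y:y+block, x:x+block] = b
                continue
            q = np.round((b - lo) / (hi - lo) * (levels - 1))   # quantize
            out[y:y+block, x:x+block] = q / (levels - 1) * (hi - lo) + lo
    return out
```

The per-block reconstruction error is bounded by half a quantization step, so the tradeoff studied in the thesis can be explored by sweeping `levels` and checking which corners and edges survive feature detection.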
|
235 |
Applying the MDCT to image compression
Muller, Rikus 03 1900 (has links)
Thesis (DSc (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2009. / The replacement of the standard discrete cosine transform (DCT) of JPEG with the windowed modified DCT (MDCT) is investigated to determine whether improvements in numerical quality can be achieved. To this end, we employ an existing algorithm for optimal quantisation, for which we also propose improvements. This involves the modelling and prediction of quantisation tables to initialise the algorithm, a strategy that is also thoroughly tested. Furthermore, the effects of various window functions on the coding results are investigated, and we find that improved quality can indeed be achieved by modifying JPEG in this fashion.
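The windowed MDCT with 50% overlap can be sketched in one dimension as follows (a minimal sketch under stated assumptions: the sine window, block size, and zero-padding scheme are illustrative choices, and JPEG's quantisation and entropy-coding stages are omitted):

```python
import numpy as np

def mdct(frame):
    """MDCT of one length-2N frame -> N coefficients."""
    N = len(frame) // 2
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    return frame @ basis

def imdct(coeffs):
    """Inverse MDCT: N coefficients -> length-2N time-aliased frame."""
    N = len(coeffs)
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    return (2.0 / N) * (basis @ coeffs)

def mdct_analysis_synthesis(signal, N=8):
    """Windowed MDCT analysis followed by synthesis with 50% overlap-add.
    The sine window satisfies the Princen-Bradley condition, so the
    overlap-add cancels the time-domain aliasing exactly."""
    win = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
    padded = np.concatenate([np.zeros(N), signal, np.zeros(N)])
    out = np.zeros_like(padded)
    for start in range(0, len(padded) - N, N):
        frame = padded[start:start + 2 * N]
        if len(frame) < 2 * N:
            break
        out[start:start + 2 * N] += win * imdct(mdct(win * frame))
    return out[N:N + len(signal)]
```

Because adjacent windows overlap, blocking artifacts at block boundaries (a familiar weakness of the plain DCT in JPEG) are smoothed, which is the motivation for the substitution studied here.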
|
236 |
Compression temps réel de séquences d'images médicales sur les systèmes embarqués / Real time medical image compression in embedded systems
Bai, Yuhui 18 November 2014 (has links)
In the field of healthcare, developments in medical imaging are progressing very fast, and new technologies are widely used to support patient diagnosis and treatment. Mobile healthcare is an emerging trend that provides remote healthcare and diagnostics: using telecommunication networks and information technology, medical records, including medical images and patient information, can be easily and rapidly shared between hospitals and healthcare services. Because of the large storage size and limited transmission bandwidth, an efficient compression technique is necessary. As a medically certified image compression technique, WAAVES provides high compression ratios while ensuring outstanding image quality for medical diagnosis. The challenge is to transmit the medical image remotely from the mobile device to the healthcare center over a low-bandwidth network.
Our goal is to propose a high-speed embedded image compression solution that provides a compression speed of 10 MB/s while maintaining the same compression quality as the software version. We first analyzed the WAAVES encoding algorithm and evaluated its software complexity. A precise software profiling revealed that the complexity of WAAVES makes it very difficult to optimize under hard constraints on area, timing and power consumption; in particular, the Adaptive Scanning and Hierarchical Enumerative Coding blocks take more than 90% of the total execution time. We therefore exploited several possible optimizations of the WAAVES algorithm to simplify its hardware implementation. We proposed implementation methodologies for WAAVES, starting with an evaluation of a software implementation on DSP platforms, and then carried out a hardware implementation. Since FPGAs are widely used for prototyping and for actual SoC implementations of signal-processing applications, their massive parallelism and abundant on-chip memory allow efficient implementations that often rival CPUs and DSPs. We designed a WAAVES encoder SoC based on an Altera Stratix IV FPGA, in which the two major time-consuming blocks, Adaptive Scanning and Hierarchical Enumerative Coding, are designed as IP accelerators. We realized the accelerators with two different optimization levels and integrated them into the encoder SoC. The hardware implementation, running at 100 MHz, provides significant speedups over the software implementations (ARM Cortex-A9, DSP and CPU) and achieves a coding speed of 10 MB/s, which fulfills the goals of this thesis.
|
237 |
Efficient Lower Layer Techniques for Electromagnetic Nanocommunication Networks / Techniques de couche basse efficaces pour les réseaux de nanocommunications électromagnétiques
Zainuddin, Muhammad Agus 17 March 2017 (has links)
We propose a simple block nanocode (SBN) to ensure the reliability of nanocommunications, together with a simple, energy-efficient image compression method (SEIC). We study the performance of the proposed methods in terms of energy efficiency, bit error rate and robustness against transmission errors. For image compression in nanocommunications, we compare SEIC with standard image compression methods such as JPEG, JPEG 2000, GIF and PNG; the results show that SEIC outperforms the standard methods in most metrics. For error correction in nanocommunications, we compare SBN with existing error correction codes for nanocommunication, such as Minimum Energy Channel (MEC) and Low Weight Channel (LWC) codes; the results show that SBN outperforms MEC and LWC in terms of reliability and hardware complexity.
|
238 |
Novel scalable and real-time embedded transceiver system
Mohammed, Rand Basil January 2017 (links)
Our society increasingly relies on the transmission and reception of vast amounts of data over serial connections with ever-increasing bit rates. In imaging systems, for example, the achievable frame rate is often limited by the serial link between camera and host, even when modern serial buses with the highest bit rates are used. This thesis documents a scalable embedded transceiver system whose bandwidth and interface standard can be adapted to suit a particular application. This new approach to a real-time scalable embedded transceiver system is referred to as a Novel Reference Model (NRM), which connects two or more applications through a transceiver network in order to provide real-time data to a host system. The transceiver interfaces for which the NRM has been tested include LVDS, GIGE, PMA-direct, Rapid-IO and XAUI, each supporting a specific range of transceiver speeds suited to a particular type of physical medium. The scalable serial link approach has been extended with lossless data compression, with the aim of further increasing dataflow at a given bit rate. Two lossless compression methods were implemented, based on Huffman coding and a novel method called the Reduced Lossless Compression Method (RLCM). Both methods are integrated into the scalable transceivers, providing a comprehensive solution for optimal data transmission over a variety of different interfaces. The NRM is implemented on a field-programmable gate array (FPGA) using a system architecture that consists of three layers: application, transport and physical. A Terasic DE4 board was used as the main platform for implementing and testing the embedded system, while Quartus II software and tools were used to design and debug the embedded hardware systems.
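Of the two lossless methods, RLCM is novel to the thesis and not specified in the abstract; the Huffman half can be sketched as follows (a generic textbook construction, not the thesis's FPGA implementation):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code for the symbols in `data`; returns a dict
    mapping each symbol to its bit string. Rarer symbols get longer
    codes, so the total encoded length approaches the entropy bound."""
    freq = Counter(data)
    if len(freq) == 1:                    # degenerate one-symbol input
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, unique tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

The resulting code is prefix-free, so the bit stream can be decoded unambiguously on the receiving side of the serial link without any symbol delimiters.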
|
239 |
Vektorkvantisering för kodning och brusreducering / Vector quantization for coding and noise reduction
Cronvall, Per January 2004 (links)
This thesis explores the possibility of avoiding the issues generally associated with compression of noisy imagery through the use of vector quantization. By utilizing the learning aspects of vector quantization, image processing operations such as noise reduction can be implemented in a straightforward way. Several techniques are presented and evaluated. A direct comparison shows that for noisy imagery, vector quantization, in spite of its simplicity, has clear advantages over MPEG-4 encoding.
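A minimal k-means sketch of the vector-quantization idea follows (a generic baseline, not one of the thesis's specific techniques): the "learning aspect" is that a codebook trained on clean image blocks will snap noisy input blocks to clean codewords, giving coding and denoising in one step.

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Train a VQ codebook with plain k-means.
    vectors: (N, D) training blocks; returns a (k, D) codebook."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # squared distance from every vector to every codeword
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                codebook[j] = vectors[assign == j].mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Replace each vector by its nearest codeword; returns the indices
    (the transmitted code) and the reconstruction."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]
```

Only the indices need to be transmitted, so the bit rate is log2(k) bits per block regardless of the noise in the input.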
|
240 |
A Scalable, Secure, and Energy-Efficient Image Representation for Wireless Systems
Woo, Tim January 2004 (links)
The recent growth in wireless communications presents a new challenge to multimedia communications. Digital image transmission is a very common form of multimedia communication. Due to the limited bandwidth and the broadcast nature of the wireless medium, it is necessary to compress and encrypt images before they are sent. At the same time, it is important to use the limited energy in wireless devices efficiently. In a wireless device, the two major sources of energy consumption are computation and transmission. Computation energy can be reduced by minimizing the time spent on compression and encryption. Transmission energy can be reduced by sending a smaller image file, obtained by compressing the original highest-quality image. Since image quality is often sacrificed in the compression process, users should have the flexibility to control image quality and decide whether such a tradeoff is acceptable. It is also desirable for users to have control over image quality in different areas of the image, so that less important areas can be compressed more while the details in important areas are retained. To reduce the computation required for encryption, a partial encryption scheme can encrypt only the critical parts of an image file without sacrificing security. This thesis proposes a scalable and secure image representation scheme that allows users to select different image quality and security levels. The binary space partitioning (BSP) tree representation is selected because it allows convenient compression and scalable encryption. The Advanced Encryption Standard (AES) is chosen as the encryption algorithm because it is fast and secure. Our experimental results show that our new tree construction method and pruning formula reduce execution time, and hence computation energy, by about 90%. Our image quality prediction model predicts image quality to within 2-3 dB of the actual image PSNR.
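The PSNR figure quoted above is the standard peak signal-to-noise ratio; for 8-bit grayscale images it can be computed as follows (the generic definition, not code from the thesis):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original image and
    its compressed reconstruction; `peak` is the maximum pixel value
    (255 for 8-bit images). Higher is better; identical images give
    infinity."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

Predicting this value to within 2-3 dB before compressing lets the device pick quality and security levels without actually running the full compression pipeline.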
|