11

Visually Lossless Compression Based on JPEG2000 for Efficient Transmission of High Resolution Color Aerial Images

Oh, Han 10 2010 (has links)
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / Aerial image collections have experienced exponential growth in size in recent years. These high resolution images are often viewed at a variety of scales. When an image is displayed at reduced scale, maximum quantization step sizes for visually lossless quality become larger. However, previous visually lossless coding algorithms quantize the image with a single set of quantization step sizes, optimized for display at the full resolution level. This implies that if the image is rendered at reduced resolution, there are significant amounts of extraneous information in the codestream. Thus, in this paper, we propose a method which effectively incorporates multiple quantization step sizes, for various display resolutions, into the JPEG2000 framework. If images are browsed from a remote location, this method can significantly reduce bandwidth usage by only transmitting the portion of the codestream required for visually lossless reconstruction at the desired resolution. Experimental results for high resolution color aerial images are presented.
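As an illustration of the resolution-dependent quantization idea (not the authors' measured step sizes; the doubling model and names below are assumptions for the sketch), a step-size table might grow as the display resolution shrinks:

```python
def step_sizes_for_display(num_levels, base_step, display_reduction):
    """Illustrative visually lossless step size per DWT decomposition level.

    display_reduction = 0 means full-resolution display; each increment
    halves the displayed size. Detail subbands rendered smaller tolerate
    coarser quantization, so their steps are scaled up. The doubling
    model is a stand-in, not the paper's psychophysical measurements.
    """
    steps = {}
    for level in range(1, num_levels + 1):  # 1 = finest detail subbands
        steps[level] = base_step * 2.0 ** max(0, display_reduction - (level - 1))
    return steps

# A client viewing at 1/4 size (display_reduction=2) needs only the part of
# the codestream quantized for that resolution, saving bandwidth.
print(step_sizes_for_display(num_levels=5, base_step=0.5, display_reduction=2))
```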
12

Visually Lossless Coding for Color Aerial Images Using JPEG2000

Oh, Han, Kim, Yookyung 10 2009 (has links)
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / This paper describes a psychophysical experiment to measure visibility thresholds (VTs) for quantization distortion in JPEG2000 and an associated quantization algorithm for visually lossless coding of color aerial images. The visibility thresholds are obtained from a quantization distortion model based on the statistical characteristics of wavelet coefficients and the deadzone quantizer of JPEG2000, and the resulting thresholds are presented for the luminance component (Y) and two chrominance components (Cb and Cr). Using these thresholds, we have achieved visually lossless coding of 24-bit color aerial images at an average bitrate of 4.17 bits/pixel, approximately 30% of the bitrate required for numerically lossless coding.
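The deadzone quantizer referred to here is the standard JPEG2000 Part 1 scalar quantizer; a minimal NumPy sketch of quantization and reconstruction (the measured visibility threshold would set the largest step per subband whose error remains imperceptible):

```python
import numpy as np

def deadzone_quantize(coeffs, step):
    """JPEG2000-style deadzone quantizer: q = sign(c) * floor(|c| / step)."""
    c = np.asarray(coeffs, dtype=np.float64)
    return np.sign(c) * np.floor(np.abs(c) / step)

def deadzone_reconstruct(q, step, r=0.5):
    """Reconstruct at fraction r inside the bin (r = 0.5 is the midpoint);
    coefficients quantized to zero stay zero."""
    return np.sign(q) * (np.abs(q) + r) * step * (q != 0)

c = np.array([-3.7, -0.4, 0.2, 1.9, 8.25])
q = deadzone_quantize(c, step=1.0)
print(q, deadzone_reconstruct(q, step=1.0))
```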
13

Scalable Perceptual Image Coding for Remote Sensing Systems

Oh, Han, Lalgudi, Hariharan G. 10 2008 (has links)
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California / In this work, a scalable perceptual JPEG2000 encoder that exploits properties of the human visual system (HVS) is presented. The algorithm modifies the final three stages of a conventional JPEG2000 encoder. In the first stage, the quantization step size for each subband is chosen to be the inverse of the contrast sensitivity function (CSF). In bit-plane coding, two masking effects are considered during distortion calculation. In the final bitstream formation step, quality layers are formed corresponding to desired perceptual distortion thresholds. This modified encoder exhibits superior visual performance for remote sensing images compared to conventional JPEG2000 encoders. Additionally, it is completely JPEG2000 Part-1 compliant, and therefore can be decoded by any JPEG2000 decoder.
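As a concrete illustration of CSF-driven step sizes (using the classic Mannos-Sakrison CSF model as a stand-in; the paper's exact CSF and viewing parameters may differ):

```python
import math

def csf(f):
    """Mannos-Sakrison contrast sensitivity at spatial frequency f
    (cycles/degree), a common analytic CSF model."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-((0.114 * f) ** 1.1))

def csf_step_size(f, base_step=1.0):
    """Step size inversely proportional to sensitivity: frequencies the
    eye is less sensitive to tolerate coarser quantization."""
    return base_step / csf(f)

# Illustrative nominal centre frequencies for successive subbands.
for f in (1.5, 3.0, 6.0, 12.0, 24.0):
    print(f, round(csf_step_size(f), 3))
```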
14

High efficiency coarse-grained customised dynamically reconfigurable architecture for digital image processing and compression technologies

Zhao, Xin January 2012 (has links)
Digital image processing and compression technologies have significant market potential, especially the JPEG2000 standard, which offers outstanding codestream flexibility and a high compression ratio. Strong demand for high-performance digital image processing and compression system solutions is forcing designers to seek architectures that offer competitive advantages in all performance metrics, such as speed and power. Traditional architectures such as ASICs, FPGAs and DSPs are limited by either low flexibility or high power consumption. On the other hand, by providing a degree of flexibility similar to that of a DSP with performance and power consumption approaching those of an ASIC, coarse-grained dynamically reconfigurable architectures are proving to be strong candidates for future high-performance digital image processing and compression systems. This thesis investigates dynamically reconfigurable architectures, and especially the newly emerging RICA paradigm. Case studies such as a Reed-Solomon decoder and a WiMAX OFDM timing synchronisation engine are implemented in order to explore the potential of RICA-based architectures and possible optimisation approaches such as eliminating conditional branches, reducing memory accesses and constructing kernels. Based on the investigations in this thesis, a novel customised dynamically reconfigurable architecture targeting digital image processing and compression applications is devised, which can be tailored to different applications. A demosaicing engine based on the Freeman algorithm is designed and implemented on the proposed architecture as the pre-processing module in a digital imaging system. An efficient data-buffer rotating scheme is designed with the aim of reducing memory accesses, and an investigation into mapping the demosaicing engine onto a dual-core RICA platform is performed. After optimisation, the performance of the proposed engine is carefully evaluated and compared in terms of throughput and consumed computational resources. When targeting the JPEG2000 standard, the core tasks, 2-D Discrete Wavelet Transform (DWT) and Embedded Block Coding with Optimal Truncation (EBCOT), are implemented and optimised on the proposed architecture. A novel 2-D DWT architecture based on the vector operations of the RICA paradigm is developed, and the complete DWT application is highly optimised for both throughput and area. For the EBCOT implementation, a novel Partial Parallel Architecture (PPA) for the most computationally intensive module in EBCOT, termed Context Modeling (CM), is devised. Based on the algorithm evaluation, an ARM core is integrated into the proposed architecture for performance enhancement. A ping-pong memory switching mode with a carefully designed communication scheme between the RICA-based architecture and the ARM is proposed. Simulation results demonstrate that the proposed architecture for JPEG2000 offers a significant advantage in throughput.
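The 2-D DWT at the core of such an engine is built from separable 1-D lifting steps; a minimal sketch of the reversible 5/3 lifting used in JPEG2000 Part 1 (wrap-around boundaries here for brevity, where the standard uses symmetric extension):

```python
import numpy as np

def dwt53_level(x):
    """One level of the reversible 5/3 lifting DWT on an even-length signal."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: high-pass = odd sample minus the mean of its even neighbours.
    odd -= (even + np.roll(even, -1)) // 2
    # Update: low-pass = even sample plus a quarter of the neighbouring high-pass.
    even += (np.roll(odd, 1) + odd + 2) // 4
    return even, odd  # (low-pass, high-pass)

# A separable 2-D level applies dwt53_level to every row, then every column.
low, high = dwt53_level(np.array([10, 12, 14, 13, 11, 9, 8, 8]))
print(low, high)
```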
15

Comparative Study of Image Compression Algorithms for Transmission over Computer Networks

Majory de Sá Rodrigues, Charlana January 2005 (has links)
Recently, iterative compression algorithms aimed at the transmission of still images over networks have been developed, such as progressive JPEG, progressive JPEG2000, interlaced PNG and interlaced GIF. These algorithms decompose the image and transmit it non-sequentially. The purpose of this dissertation is to carry out a comparative study of these algorithms. The adopted methodology consists of analysing the partial images obtained for each format. At each stage, the image is inspected visually and the PSNR (Peak Signal-to-Noise Ratio) with respect to the final image, an objective image quality measure, is computed. Parameters such as partial file size, image type and visual inspection are also studied. A detailed analysis of the partial images obtained then allows us to determine which algorithm is most appropriate at each stage of the transmission, according to the type of image analysed.
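The objective measure used in the study, PSNR, can be computed as follows (a minimal sketch assuming 8-bit images):

```python
import numpy as np

def psnr(reference, partial, peak=255.0):
    """Peak Signal-to-Noise Ratio, in dB, of a partially transmitted image
    against the fully decoded reference; higher means closer."""
    ref = reference.astype(np.float64)
    mse = np.mean((ref - partial.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```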
16

Hardware Implementation Techniques for JPEG2000

Dyer, Michael Ian, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2007 (has links)
JPEG2000 is a recently standardized image compression system that provides substantial improvements over the existing JPEG compression scheme. This improvement in performance comes with an associated cost in increased implementation complexity, such that a purely software implementation is inefficient. This work identifies the arithmetic coder as a bottleneck in efficient hardware implementations, and explores various design options to improve arithmetic coder speed and size. The designs produced improve the critical path of existing arithmetic coder designs, and then extend the coder throughput to 2 or more symbols per clock cycle. Subsequent work examines system-level implementation issues: it addresses the communication between hardware blocks and utilizes certain modes of operation to add flexibility to buffering solutions. It becomes possible to significantly reduce the amount of intermediate buffering between blocks whilst maintaining loose synchronization. Full hardware implementations of the standard are necessarily limited in the number of features that they can offer, in order to constrain complexity and cost. To circumvent this, a hardware/software codesign is produced using the Altera NIOS II softcore processor. By keeping the majority of the standard implemented in software and using hardware to accelerate the time-consuming functions, generality of implementation is retained whilst implementation speed is improved. In addition, there is the opportunity to exploit parallelism by providing multiple identical hardware blocks to code multiple data units simultaneously.
17

Model-Based JPEG2000 Rate Control Methods

Aulí Llinàs, Francesc 05 December 2006 (has links)
This work is focused on the quality scalability of the JPEG2000 image compression standard. Quality scalability is an important feature that allows the truncation of the code-stream at different bit-rates without penalizing the coding performance. It is also fundamental in interactive image transmission, allowing the delivery of Windows of Interest (WOI) at increasing qualities. JPEG2000 achieves quality scalability through the rate control method used in the encoding process, which embeds quality layers into the code-stream. In some scenarios, this architecture raises two drawbacks: on the one hand, once the coding process finishes, the number and bit-rates of the quality layers are fixed, causing a lack of quality scalability in code-streams encoded with a single or few quality layers. On the other hand, the rate control method constructs quality layers considering the rate-distortion optimization of the complete image, which may not allocate the quality layers adequately for the delivery of a WOI at increasing qualities.
This thesis introduces three rate control methods that supply quality scalability for WOIs, or for the complete image, even if the code-stream contains a single or few quality layers. The first method is based on a simple Coding Passes Interleaving (CPI) that models the rate-distortion through a classical approach. A careful analysis of CPI motivates the second method, which introduces simple modifications to CPI based on a Reverse subband scanning Order and coding passes Concatenation (ROC). The third method benefits from the rate-distortion models of CPI and ROC, developing an approach based on a novel Characterization of the Rate-Distortion slope (CoRD) that estimates the rate-distortion of the code-blocks within a subband. Experimental results suggest that CPI and ROC are able to supply quality scalability to code-streams, even when they contain a single or few quality layers, achieving a coding performance almost equivalent to that obtained with the use of quality layers. However, the results of CPI are unbalanced across bit-rates, and ROC presents an irregular coding performance on some image corpora. CoRD outperforms CPI and ROC, achieving well-balanced and regular results, and in addition obtains a slightly better coding performance than that achieved with the use of quality layers. The computational complexity of CPI, ROC and CoRD is negligible in practice, making them suitable for controlling interactive image transmissions.
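All three methods revolve around JPEG2000's rate-distortion slopes. The sketch below is a simplified version of the slope-threshold test that decides how many coding passes of a code-block enter a quality layer (a real encoder first restricts the (rate, distortion) points to their convex hull):

```python
def passes_for_layer(passes, slope_threshold):
    """passes: (delta_bytes, delta_distortion) per coding pass, in coding
    order for one code-block. Keep passes while the distortion reduction
    per byte stays at or above the layer's slope threshold."""
    kept = 0
    for d_bytes, d_dist in passes:
        slope = d_dist / d_bytes if d_bytes else float("inf")
        if slope < slope_threshold:
            break
        kept += 1
    return kept

# Lowering the threshold admits more passes, i.e. a higher-quality layer.
example = [(120, 9000.0), (80, 2400.0), (60, 600.0), (55, 90.0)]
print(passes_for_layer(example, slope_threshold=10.0))  # -> 3
```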
18

Validation for Visually Lossless Compression of Stereo Images

Feng, Hsin-Chang 10 2013 (has links)
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / This paper describes the details of subjective validation for visually lossless compression of stereoscopic 3-dimensional (3D) images. The subjective testing method employed in this work is adapted from methods used previously for visually lossless compression of 2-dimensional (2D) images. Confidence intervals on the correct response rate obtained from the subjective validation of compressed stereo pairs provide reliable evidence that the compressed stereo pairs are visually lossless.
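The confidence intervals on the correct-response rate can be obtained with a standard binomial interval; a sketch using the Wilson score interval (the paper's exact interval construction may differ):

```python
import math

def wilson_interval(correct, trials, z=1.96):
    """Wilson score interval for a binomial proportion, e.g. the rate at
    which observers correctly spot the compressed image (z = 1.96 ~ 95%)."""
    p = correct / trials
    denom = 1.0 + z * z / trials
    centre = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
    return centre - half, centre + half

# An interval covering the chance rate (0.5 in a two-alternative test) is
# evidence that the compressed stereo pair is visually indistinguishable.
print(wilson_interval(correct=52, trials=100))
```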
19

Measurement of Visibility Thresholds for Compression of Stereo Images

Feng, Hsin-Chang 10 2012 (has links)
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California / This paper proposes a method of measuring visibility thresholds for quantization distortion in JPEG2000 for compression of stereoscopic 3D images. The crosstalk effect is carefully considered to ensure that quantization errors in each channel of stereoscopic images are imperceptible to both eyes. A model for visibility thresholds is developed to reduce the daunting number of measurements required for subjective experiments.
20

Compression and Classification of Imagery

Tabesh, Ali January 2006 (has links)
Problems at the intersection of compression and statistical inference recur frequently due to the concurrent use of signal and image compression and classification algorithms in many applications. This dissertation addresses two such problems: statistical inference on compressed data, and rate allocation for joint compression and classification.
Features of the JPEG2000 standard make possible the development of computationally efficient algorithms for inference on imagery compressed with this standard. We propose the use of the information content (IC) of wavelet subbands, defined as the number of bytes that the JPEG2000 encoder spends to compress the subbands, for content analysis. Applying statistical learning frameworks for detection and classification, we present experimental results for compressed-domain texture image classification and cut detection in video. Our results indicate that reasonable performance can be achieved while saving computational and bandwidth resources. IC features can also be used for preliminary analysis in the compressed domain to identify candidates for further analysis in the decompressed domain.
In many applications of image compression, the compressed image is to be presented to both human observers and statistical decision-making systems. In such applications, the fidelity criterion with respect to which the image is compressed must be selected to strike an appropriate compromise between the (possibly conflicting) image quality criteria for the human and machine observers. We present tractable distortion measures based on the Bhattacharyya distance (BD) and a new upper bound on the quantized probability of error that make possible closed-form expressions for rate allocation to image subbands, and we show their efficacy in maintaining the aforementioned balance between compression and classification. The new bound offers two advantages over the BD in that it yields closed-form solutions for rate allocation in problems involving correlated sources and more than two classes.
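For reference, the Bhattacharyya distance the dissertation builds on has a closed form for Gaussian class models; a sketch for the univariate case (the dissertation's subband statistics and bound differ in detail):

```python
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians:
    (mu1 - mu2)^2 / (4 (var1 + var2)) + 0.5 ln((var1 + var2) / (2 sqrt(var1 var2)))."""
    v = 0.5 * (var1 + var2)
    return (mu1 - mu2) ** 2 / (8.0 * v) + 0.5 * math.log(v / math.sqrt(var1 * var2))

# Larger distance -> tighter bound on classification error; rate allocation
# trades this off against reconstruction quality.
print(bhattacharyya_gaussian(0.0, 1.0, 1.0, 2.0))
```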
