About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Design of High-performance, Low-power and Memory-efficient EBCOT and MQ Coder for JPEG2000

Chang, Tso-Hsuan 01 September 2003
JPEG2000 is an emerging state-of-the-art standard for still image compression. Compared to JPEG, the standard not only offers superior rate-distortion performance but also provides a wide range of features and functionality. However, these advantages come at the expense of computational complexity and memory requirements in bit-plane coding, so low-cost ASIC design for JPEG2000 remains a challenge and a dedicated hardware implementation of the EBCOT block coder is necessary. In this thesis a high-throughput EBCOT block coder is proposed. The EBCOT block coder has two main parts: context modeling and the MQ-coder. For context modeling, a novel pass-parallel module based on the vertically causal mode is proposed. Pass-parallel modeling, which reduces the cycles needed to check each sample to be coded, processes the three originally sequential passes in a single pass and generates one or two context labels every cycle. It is fast and saves 8 Kbits of internal memory. Since context modeling generates one or two context labels per cycle, a multi-bit MQ-coder is needed to keep the buffer between the context modeler and the MQ-coder from overflowing. For the MQ-coder, three approaches that process one or two context labels per cycle are proposed. Furthermore, we modified the MQ-coder architecture and propose two low-power implementation concepts: reducing memory accesses and disabling unused blocks.
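To illustrate the idea behind pass-parallel context modeling, the sketch below (a simplified illustration in C, not the architecture proposed in the thesis) classifies a sample into one of EBCOT's three coding passes using only the significance states of the sample and its eight neighbours; because the classification needs nothing else, all three passes can be walked in a single scan of the code-block. The significance array and helper names are assumptions for this example, and the bookkeeping needed for samples that become significant within the current bit-plane is omitted.

```c
/* Simplified sketch: deciding, in one scan, which EBCOT coding pass a sample
 * belongs to in the current bit-plane.  Not the thesis design. */
#include <stdint.h>

enum pass_id { PASS_SIG_PROP = 1, PASS_MAG_REF = 2, PASS_CLEANUP = 3 };

/* sig[y*w + x] != 0 means the sample became significant in an earlier bit-plane. */
static int has_significant_neighbour(const uint8_t *sig, int w, int h, int x, int y)
{
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++) {
            if (dx == 0 && dy == 0) continue;
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < w && ny >= 0 && ny < h && sig[ny * w + nx])
                return 1;
        }
    return 0;
}

enum pass_id classify_sample(const uint8_t *sig, int w, int h, int x, int y)
{
    if (sig[y * w + x])
        return PASS_MAG_REF;                 /* already significant             */
    if (has_significant_neighbour(sig, w, h, x, y))
        return PASS_SIG_PROP;                /* insignificant, but in the
                                                preferred neighbourhood         */
    return PASS_CLEANUP;                     /* everything else                 */
}
```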
2

VLSI Design and Implementation of EBCOT CODEC

Wang, Sung-Yang 26 July 2001
This thesis proposes several hardware implementation approaches for the EBCOT (Embedded Block Coding with Optimized Truncation) algorithm, one of the key operations in the emerging JPEG 2000 standard. We also modify the EBCOT algorithm to reduce its memory requirement and improve its speed. The modified EBCOT encoder saves 40% of the memory area and runs at three times the speed of the original design.
3

FPGA IMPLEMENTATION OF A PARALLEL EBCOT TIER-1 ENCODER THAT PRESERVES ENCODING EFFICIENCY

Damecharla, Hima Bindu 05 October 2006
No description available.
4

FPGA Implementation of the JPEG2000 MQ Decoder

Lucking, David Joseph 05 May 2010
No description available.
5

Fast Split Arithmetic Encoder Architectures and Perceptual Coding Methods for Enhanced JPEG2000 Performance

Varma, Krishnaraj M. 11 April 2006
JPEG2000 is a wavelet-transform-based image compression and coding standard. It provides superior rate-distortion performance compared to the previous JPEG standard and offers four dimensions of scalability: distortion, resolution, spatial, and color. These features make JPEG2000 well suited to power- and bandwidth-limited mobile applications such as urban search and rescue. Such applications require a fast, low-power JPEG2000 encoder embedded on the mobile agent, and this embedded encoder must also deliver superior subjective quality at low bit rates. This research addresses these two aspects of enhancing the performance of JPEG2000 encoders.

The JPEG2000 standard includes a perceptual weighting method based on the contrast sensitivity function (CSF). Recent literature shows that perceptual methods based on subband standard deviation are also effective in image compression. This research presents two new perceptual weighting methods that combine information from both the human contrast sensitivity function and the standard deviation within a subband or code-block. These two new sets of perceptual weights are compared to the JPEG2000 CSF weights. The results indicate that the new weights perform better than the JPEG2000 CSF weights for high-frequency images, while weights based solely on subband standard deviation perform worse than the JPEG2000 CSF weights for all images at all compression ratios.

Embedded block coding, EBCOT tier-1, is the most computationally intensive part of the JPEG2000 image coding standard. Past research on fast EBCOT tier-1 hardware implementations has concentrated on cycle-efficient context formation. These pass-parallel architectures require that JPEG2000's three mode switches be turned on. While turning on the mode switches allows the arithmetic encoding of each coding pass to run independently (and thus in parallel), it also disrupts the probability estimation engine of the arithmetic encoder, sacrificing coding efficiency for improved throughput. This research presents a new fast EBCOT tier-1 design, the Split Arithmetic Encoder (SAE) process, which exploits concurrency to improve throughput while preserving coding efficiency. The SAE process is evaluated using three methods: clock-cycle estimation, a multithreaded software implementation, and a field-programmable gate array (FPGA) hardware implementation. All three methods achieve throughput improvement; the hardware implementation exhibits the largest speedup, as expected.

A high-speed, task-parallel, multithreaded software architecture for EBCOT tier-1 based on the SAE process is proposed. SAE was implemented in software on two shared-memory architectures: a PC using hyperthreading and a multi-processor non-uniform memory access (NUMA) machine. The implementation adopts synchronization mechanisms that preserve the algorithm's causality constraints. Tests show that the new architecture improves throughput by as much as 50% on the NUMA machine and by as much as 19% on a PC with two virtual processing units. A high-speed, multirate FPGA implementation of the SAE process is also proposed. The mismatch between the rate at which the context formation (CF) module produces data and the rate at which the arithmetic encoder (AE) module consumes it is studied in detail. Appropriate FIFO sizes and FIFO write and read capabilities are chosen based on statistics obtained from test runs of the algorithm. Using a fast CF module, this implementation achieves as much as 120% improvement in throughput.
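The CF-to-AE rate mismatch described above is essentially a producer-consumer problem. The sketch below (a minimal illustration in C, not the thesis's SAE hardware) shows a fixed-depth ring-buffer FIFO of (context, decision) pairs sitting between a context-formation producer and an arithmetic-encoder consumer; the FIFO depth here is an arbitrary placeholder rather than a value derived from the test-run statistics mentioned in the abstract.

```c
/* Illustrative ring-buffer FIFO decoupling a context-formation (CF) producer
 * from an arithmetic-encoder (AE) consumer.  Not the thesis's design. */
#include <stdbool.h>
#include <stdint.h>

#define CXD_FIFO_DEPTH 64u           /* placeholder depth; must be a power of two */

typedef struct {
    uint8_t cx;                      /* context label (0..18 in JPEG 2000)        */
    uint8_t d;                       /* binary decision to be coded               */
} cxd_pair;

typedef struct {
    cxd_pair buf[CXD_FIFO_DEPTH];
    unsigned head, tail;             /* free-running write and read counters      */
} cxd_fifo;

static bool fifo_push(cxd_fifo *f, cxd_pair p)      /* false when full  */
{
    if (f->head - f->tail == CXD_FIFO_DEPTH)
        return false;                /* CF must stall until AE catches up         */
    f->buf[f->head++ % CXD_FIFO_DEPTH] = p;
    return true;
}

static bool fifo_pop(cxd_fifo *f, cxd_pair *out)    /* false when empty */
{
    if (f->head == f->tail)
        return false;                /* AE idles this cycle                       */
    *out = f->buf[f->tail++ % CXD_FIFO_DEPTH];
    return true;
}
```

In hardware terms, a failed push corresponds to stalling the CF pipeline and a failed pop to an idle AE cycle; the statistics gathered from test runs determine how deep the FIFO must be to keep both events rare.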
6

Komprese obrazu pomocí vlnkové transformace / Image Compression Using the Wavelet Transform

Bradáč, Václav January 2017
This work deals with image compression using the wavelet transform. The beginning provides theoretical background on the best-known image compression techniques, along with a thorough description of the wavelet transform and the EBCOT algorithm. A significant part of the work is devoted to the implementation of the author's own library. A further chapter of the diploma thesis compares and evaluates the results achieved by this library against the JPEG2000 format.
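For context, the reversible wavelet transform most commonly paired with EBCOT in JPEG 2000 lossless coding is the CDF 5/3 filter implemented with lifting. The sketch below is a minimal one-dimensional, single-level illustration in C (not code from the thesis or its library); boundary samples are handled with a simple symmetric mirror, and the tiling and two-dimensional separable application of a full codec are omitted.

```c
#include <stdio.h>

/* Floor division for possibly negative numerators (divisor assumed positive). */
static int floordiv(int a, int b)
{
    int q = a / b;
    return (a % b != 0 && a < 0) ? q - 1 : q;
}

/* Whole-sample symmetric extension: x[-1] -> x[1], x[n] -> x[n-2]. */
static int mirror(int i, int n)
{
    if (i < 0)  i = -i;
    if (i >= n) i = 2 * (n - 1) - i;
    return i;
}

/* One level of the reversible CDF 5/3 lifting transform, in place and
 * interleaved: odd indices end up holding high-pass (detail) coefficients,
 * even indices low-pass (approximation) coefficients. */
void dwt53_forward_1d(int *x, int n)
{
    for (int i = 1; i < n; i += 2)               /* predict step */
        x[i] -= floordiv(x[mirror(i - 1, n)] + x[mirror(i + 1, n)], 2);

    for (int i = 0; i < n; i += 2)               /* update step */
        x[i] += floordiv(x[mirror(i - 1, n)] + x[mirror(i + 1, n)] + 2, 4);
}

int main(void)
{
    int signal[8] = { 10, 12, 14, 13, 9, 7, 8, 11 };
    dwt53_forward_1d(signal, 8);
    for (int i = 0; i < 8; i++)
        printf("%d ", signal[i]);                /* interleaved L/H coefficients */
    printf("\n");
    return 0;
}
```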
7

Scalable video compression with optimized visual performance and random accessibility

Leung, Raymond, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2006
This thesis is concerned with maximizing the coding efficiency, random accessibility and visual performance of scalable compressed video. The unifying theme behind this work is the use of finely embedded localized coding structures, which govern the extent to which these goals may be jointly achieved.

The first part focuses on scalable volumetric image compression. We investigate 3D transform and coding techniques which exploit inter-slice statistical redundancies without compromising slice accessibility. Our study shows that the motion-compensated temporal discrete wavelet transform (MC-TDWT) practically achieves an upper bound to the compression efficiency of slice transforms. From a video coding perspective, we find that most of the coding gain is attributed to offsetting the learning penalty in adaptive arithmetic coding through 3D code-block extension, rather than inter-frame context modelling.

The second aspect of this thesis examines random accessibility. Accessibility refers to the ease with which a region of interest is accessed (subband samples needed for reconstruction are retrieved) from a compressed video bitstream, subject to spatiotemporal code-block constraints. We investigate the fundamental implications of motion compensation for random access efficiency and the compression performance of scalable interactive video. We demonstrate that inclusion of motion compensation operators within the lifting steps of a temporal subband transform incurs a random access penalty which depends on the characteristics of the motion field.

The final aspect of this thesis aims to minimize the perceptual impact of visible distortion in scalable reconstructed video. We present a visual optimization strategy based on distortion scaling which raises the distortion-length slope of perceptually significant samples. This alters the codestream embedding order during post-compression rate-distortion optimization, thus allowing visually sensitive sites to be encoded with higher fidelity at a given bit-rate. For visual sensitivity analysis, we propose a contrast perception model that incorporates an adaptive masking slope. This versatile feature provides a context which models perceptual significance. It enables scene structures that otherwise suffer significant degradation to be preserved at lower bit-rates. The novelty in our approach derives from a set of "perceptual mappings" which account for quantization noise shaping effects induced by motion-compensated temporal synthesis. The proposed technique reduces wavelet compression artefacts and improves the perceptual quality of video.
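The distortion-scaling idea can be made concrete with a small sketch. The C fragment below (an illustration, not the thesis's method) shows how multiplying a code-block's distortion reductions by a perceptual weight raises its distortion-length slopes, so that a PCRD-style rate controller working to a global slope threshold keeps more coding passes from visually sensitive code-blocks. The `weight` and `lambda` parameters are hypothetical inputs, and the convex-hull restriction of real post-compression rate-distortion optimization is omitted.

```c
#include <stddef.h>

/* Cumulative rate (bytes) and cumulative distortion reduction after each
 * coding pass of one code-block. */
typedef struct {
    double rate;
    double dist_reduction;
} pass_point;

/* Return the number of coding passes to keep: the last pass whose weighted
 * incremental slope (weight * dDist / dRate) still exceeds the global
 * threshold lambda set by the rate controller. */
size_t truncation_point(const pass_point *p, size_t n_passes,
                        double weight, double lambda)
{
    size_t keep = 0;
    double prev_rate = 0.0, prev_dist = 0.0;

    for (size_t i = 0; i < n_passes; i++) {
        double d_rate = p[i].rate - prev_rate;
        double d_dist = weight * (p[i].dist_reduction - prev_dist);
        if (d_rate > 0.0 && d_dist / d_rate >= lambda)
            keep = i + 1;            /* this pass still "pays for itself" */
        prev_rate = p[i].rate;
        prev_dist = p[i].dist_reduction;
    }
    return keep;
}
```

Raising `weight` for a perceptually significant code-block moves its truncation point later in the embedding order, which is the effect the abstract describes: visually sensitive sites are encoded with higher fidelity at a given bit-rate.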
8

Komprese obrazu pomocí vlnkové transformace / Image Compression Using the Wavelet Transform

Urbánek, Pavel January 2013
This thesis focuses on image compression using the wavelet transform. The first part provides the reader with background on image compression, presents well-known contemporary algorithms, and examines wavelet compression and the encoding schemes that follow it in detail. Both the JPEG and JPEG 2000 standards are introduced. The second part analyzes and describes the implementation of an image compression tool, including innovations and optimizations. The third part is dedicated to the comparison and evaluation of the achieved results.
