31. Subjective Evaluation of an Edge-based Depth Image Compression Scheme. Li, Yun; Sjöström, Mårten; Jennehag, Ulf; Olsson, Roger; Tourancheau, Sylvain. January 2013.
Multi-view three-dimensional television requires many views, which may be synthesized from two-dimensional images with accompanying pixel-wise depth information. This depth image, which typically consists of smooth areas and sharp transitions at object borders, must be consistent with the acquired scene in order for synthesized views to be of good quality. We have previously proposed a depth image coding scheme that preserves significant edges and encodes the smooth areas between them. An objective evaluation considering the structural similarity (SSIM) index for synthesized views demonstrated an advantage of the proposed scheme over the High Efficiency Video Coding (HEVC) intra mode in certain cases. However, there were some discrepancies between the outcomes of the objective evaluation and of our visual inspection, which motivated this study of subjective tests. The test was conducted according to the ITU-R BT.500-13 recommendation with stimulus-comparison methods. The results of the subjective test showed that the proposed scheme performs slightly better than HEVC, with statistical significance at the majority of the tested bit rates for the given contents.
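For reference, the SSIM index used in the objective evaluation above can be computed on a synthesized-versus-reference view pair along the following lines. This is an illustrative sketch using scikit-image, not the authors' evaluation code; the file names are placeholders.

```python
from skimage.metrics import structural_similarity
from skimage.io import imread
from skimage.color import rgb2gray

# Placeholder file names -- substitute the actual reference and synthesized views.
reference = rgb2gray(imread("reference_view.png"))
synthesized = rgb2gray(imread("synthesized_view.png"))

# SSIM compares local luminance, contrast, and structure; 1.0 means identical.
# rgb2gray yields floats in [0, 1], hence data_range=1.0.
score = structural_similarity(reference, synthesized, data_range=1.0)
print(f"SSIM of synthesized view: {score:.4f}")
```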
32. An improved search algorithm for fractal image compression based on intra-block variance distribution. Chen, Shin-Si. 13 September 2000.
Fractal image compression is based on the representation of an image by contractive transforms whose fixed points are close to the original image. In the encoding stage of fractal image compression, most of the time is spent on finding a close match between a range block and a large pool of domain blocks. In this thesis, we use the intra-block variance distributions of domain blocks to reduce the search space. To find a close match, we need only search the domain blocks whose maximal intra-block variance quadrant is the same as that of the range block. The experimental results show that our algorithm greatly reduces encoding time with only a slight loss of quality.
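The classification step described above admits a compact implementation. The following sketch is illustrative only; the function names and NumPy framing are ours, not the thesis code. It labels each block by its maximal-variance quadrant and keeps only the domain blocks whose label matches the range block's.

```python
import numpy as np

def max_variance_quadrant(block: np.ndarray) -> int:
    """Index (0-3) of the quadrant with the largest pixel variance."""
    h, w = block.shape[0] // 2, block.shape[1] // 2
    quadrants = [block[:h, :w], block[:h, w:], block[h:, :w], block[h:, w:]]
    return int(np.argmax([q.var() for q in quadrants]))

def candidate_domains(range_block: np.ndarray, domain_blocks: list) -> list:
    """Reduce the search space: keep only domain blocks whose
    maximal-variance quadrant matches that of the range block."""
    key = max_variance_quadrant(range_block)
    return [d for d in domain_blocks if max_variance_quadrant(d) == key]
```

Assuming the four quadrant labels are roughly equally likely, this prunes about three quarters of the domain pool before the expensive block-matching step, which is the source of the reported encoding-time reduction.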
33. JPEG2000 image compression and error resilience for transmission over wireless channels. Kamaras, Konstantinos. January 2002.
Thesis (M.S.)--Naval Postgraduate School, 2002. / Thesis advisor(s): Murali Tummala, Robert Ives. Includes bibliographical references (p. 95-97). Also available online.
34. Video compression and rate control methods based on the wavelet transform. Balster, Eric J. January 2004.
Thesis (Ph. D.)--Ohio State University, 2003. / Title from first page of PDF file. Document formatted into pages; contains xxv, 142 p.; also includes graphics. Includes abstract and vita. Advisor: Yuan F. Zheng, Dept. of Electrical and Computer Engineering. Includes bibliographical references (p. 135-142).
35. Real-time video postprocessing algorithms and metrics. Gao, Wenfeng. January 2003.
Thesis (Ph. D.)--University of Washington, 2003. / Vita. Includes bibliographical references (leaves 94-103).
36. Fast rate control for JPEG2000 image coding. Yeung, Yick Ming. January 2003.
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003. / Includes bibliographical references (leaves 63-65). Also available in electronic version. Access restricted to campus users.
37. Efektyvaus vaizdų suspaudimo algoritmo sudarymas ir tyrimas / Analysis and Development of an Efficient Image Compression Algorithm. Dusevičius, Vytautas. 25 May 2004.
Uncompressed multimedia data requires considerable storage capacity and transmission bandwidth. Despite rapid progress in mass-storage density, processor speeds, and digital communication system performance, demand for data-storage capacity and data-transmission bandwidth continues to outstrip the capabilities of available technologies. The recent growth of data-intensive multimedia-based web applications has further sustained the need for more efficient ways to encode such data.

There are two types of image compression schemes: lossless and lossy. In lossless compression schemes, the reconstructed image is numerically identical to the original image; however, lossless compression can achieve only a modest amount of compression. An image reconstructed following lossy compression contains degradation relative to the original, often because the compression scheme completely discards redundant information. In exchange, lossy schemes are capable of achieving much higher compression.
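As a concrete contrast between the two families, the following sketch round-trips an image losslessly with zlib and lossily with uniform quantization. It is our own minimal illustration, assuming an 8-bit grayscale array, and is not taken from the thesis.

```python
import zlib
import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in grayscale image

# Lossless: a zlib round-trip reconstructs every pixel exactly.
packed = zlib.compress(img.tobytes())
restored = np.frombuffer(zlib.decompress(packed), dtype=np.uint8).reshape(img.shape)
assert np.array_equal(img, restored)

# Lossy: uniform quantization to 16 levels shrinks the data but degrades the image.
step = 16
quantized = (img // step).astype(np.uint8)   # what would be stored
approx = quantized * step + step // 2        # reconstruction no longer equals img
print("max reconstruction error:", int(np.abs(img.astype(int) - approx.astype(int)).max()))
```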
The aim of this research is to create an efficient lossy image compression algorithm using heuristic data clusterization methods; to perform experiments with the new algorithm, measure its performance, and analyze the advantages and disadvantages of the proposed method; and to propose possible improvements and compare it with other popular algorithms.

This paper presents a new algorithm for image compression that uses a database of popular image fragments. The proposed algorithm is... [to full text]
38. High ratio wavelet video compression through real-time rate-distortion estimation. Jackson, Edmund Stephen. January 2003.
The success of the wavelet transform in the compression of still images has prompted an expanding effort to apply this transform to the compression of video. Most existing video compression methods incorporate techniques from still image compression, such techniques being abundant, well defined, and successful. This dissertation commences with a thorough review and comparison of wavelet still image compression techniques. Thereafter an examination of wavelet video compression techniques is presented. Currently, the most effective video compression systems are built on the DCT-based framework, so a comparison between these and the wavelet techniques is also given.
Based on this review, this dissertation then presents a new, low-complexity wavelet video compression scheme. Noting from a complexity study that the generation of temporally decorrelated residual frames represents a significant computational burden, this scheme uses the simplest such technique: difference frames. In the case of local motion, these difference frames exhibit strong spatial clustering of significant coefficients. A simple spatial syntax is created by splitting the difference frame into tiles. Advantage of the spatial clustering may then be taken by adaptive bit allocation between the tiles. This is the central idea of the method.
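A minimal sketch of that central idea follows. The tile size, the variance-proportional allocation rule, and all names are our assumptions for illustration, not the dissertation's code.

```python
import numpy as np

def difference_frame(current: np.ndarray, previous: np.ndarray) -> np.ndarray:
    """Simplest temporal decorrelation: subtract the previous frame."""
    return current.astype(np.int16) - previous.astype(np.int16)

def allocate_bits(diff: np.ndarray, tile: int = 32, budget: int = 20000) -> np.ndarray:
    """Split the difference frame into tiles and divide the bit budget in
    proportion to each tile's variance, so that tiles covering local
    motion receive most of the bits."""
    h, w = diff.shape
    rows, cols = h // tile, w // tile
    energy = np.array([[diff[r*tile:(r+1)*tile, c*tile:(c+1)*tile].var()
                        for c in range(cols)] for r in range(rows)])
    weights = energy / max(energy.sum(), 1e-9)
    return np.round(weights * budget).astype(int)  # bits per tile
```

Each tile would then be independently wavelet transformed and coded (for example with SPIHT) down to its allocated size.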
In order to minimize the total distortion of the frame, the scheme uses the new ρ-domain rate-distortion estimation scheme with global numerical optimization to predict the optimal distribution of bits between tiles. Thereafter each tile is independently wavelet transformed and compressed using the SPIHT technique.
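For context, the ρ-domain model relates the bit rate linearly to the fraction ρ of transform coefficients that quantize to zero, which is what makes fast per-tile rate prediction feasible. In its standard form (our addition for background; the dissertation's exact formulation may differ):

```latex
% rho-domain linear rate model:
% rho   -- fraction of coefficients quantized to zero
% theta -- slope, estimated from previously coded data
\[
  R(\rho) = \theta \,(1 - \rho), \qquad \rho \in [0, 1]
\]
```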
Throughout the design process computational efficiency was the design imperative, leading to a real-time, software-only video compression scheme. The scheme is finally compared to both the current video compression standards and the leading wavelet schemes from the literature in terms of computational complexity and visual quality. It is found that for local-motion scenes the proposed algorithm executes approximately an order of magnitude faster than these methods, and produces output of similar quality. This algorithm is found to be suitable for implementation in mobile and embedded devices due to its moderate memory and computational requirements.

Thesis (M.Sc.Eng.)--University of Natal, Durban, 2003.
39. Multiresolution fractal coding of still images. Cesbron, Frédérique Chantal. 05 1900.
No description available.
40. Robust and flexible hardware implementation of ITU-G4. Mulder, Aart. January 2014.
This project was carried out as thesis work during the last semester of my Master's studies in Electronics Design at Mid Sweden University. Firstly, it presents a robust and flexible implementation of ITU-G4 in hardware based on earlier work; secondly, it covers a review of related work and an investigation into the weaknesses of two published designs. More specifically, it investigates the robustness of the previously developed VHDL implementation of the ITU-G4 algorithm. This includes the design of a debug interface to track the compression process inside the FPGA. The final result, compared with earlier work and other published designs, is that the ITU-G4 compression performs without any glitches or crashes on certain patterns. The maximum frame rate the design can run at is 60 fps at a frame size of 752x480 and a clock rate of 33.3 MHz. The design is tested with three sets of images (easy, medium, and complex), all of which are successfully compressed. This includes imperfect images of barcodes and QR codes without the need for morphological preprocessing, in contrast to the published design, which needs preprocessing for medium and complex images to remove unexpected transitions.
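ITU-G4 (ITU-T T.6) codes each scan line of a bi-level image relative to the previous line using the positions of color transitions, the "changing elements". As a hedged illustration of that primitive only (in Python, not the thesis's VHDL):

```python
import numpy as np

def changing_elements(line: np.ndarray) -> np.ndarray:
    """Positions where a bi-level scan line changes color (0 <-> 1).
    G4 picks a coding mode (vertical/horizontal/pass) by comparing
    these positions against those of the reference line above."""
    line = np.asarray(line, dtype=np.uint8)
    return np.flatnonzero(np.diff(line)) + 1

coding_line = np.array([0, 0, 1, 1, 1, 0, 0, 1], dtype=np.uint8)
reference_line = np.array([0, 1, 1, 1, 0, 0, 0, 1], dtype=np.uint8)
a = changing_elements(coding_line)      # [2, 5, 7]
b = changing_elements(reference_line)   # [1, 4, 7]
# Vertical mode applies when a transition lies within 3 pixels of the
# corresponding reference transition; here every |a[i] - b[i]| <= 3.
print(a, b)
```

An "unexpected transition" in an imperfect bar-code image corresponds to an extra changing element, which is what the cited design's morphological preprocessing removes.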