1. Cue combination for depth, brightness and lightness in 3-D scenes. Wishart, Keith A. January 1996.
No description available.
2. A study of central and eccentric visual perception: ocular dominance and contrast matching. Leat, Susan Jennifer. January 1986.
No description available.
3. Aspects of chromatic and temporal processing in normal and impaired human vision. Snelgar, Rosemary S. January 1987.
No description available.
4. Low-Complexity Perceptual JPEG2000 Encoder for Aerial Images. Oh, Han; Kim, Yookyung. October 2011.
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada

A highly compressed image inevitably has visible compression artifacts. To minimize these artifacts, many compression algorithms exploit the varying sensitivity of the human visual system (HVS) to different frequencies. However, this sensitivity has typically been measured at the near-threshold level, where distortion is just noticeable, so it is unclear whether the same sensitivity applies at the supra-threshold level, where distortion is highly visible. In this paper, we measure the sensitivity of the HVS at several supra-threshold distortion levels based on our JPEG2000 distortion model. We then describe a low-complexity JPEG2000 encoder that uses the measured sensitivity. For aerial images, the proposed encoder significantly reduces encoding time while maintaining superior visual quality compared with a conventional JPEG2000 encoder.
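The core idea of folding frequency-dependent HVS sensitivity into the encoder can be sketched as scaling each wavelet subband's quantization step size by a visual weight. The weights, subband names, and function below are illustrative placeholders, not the measured supra-threshold sensitivities from the paper:

```python
# Sketch: derive per-subband quantization step sizes from a base step
# and per-subband visual sensitivity weights. A more sensitive subband
# (larger weight) receives a finer step size.

def quantization_steps(base_step, visual_weights):
    """Return one quantization step size per subband: larger visual
    weight means more sensitive, hence finer quantization."""
    return {band: base_step / w for band, w in visual_weights.items()}

# Illustrative (made-up) sensitivity weights for three wavelet subbands:
weights = {"LL": 4.0, "HL": 2.0, "HH": 1.0}
steps = quantization_steps(0.5, weights)
print(steps)  # LL gets the finest step: 0.125
```

Measuring the weights at supra-threshold distortion levels, as the paper does, would change the numbers but not this basic scaling structure.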
5. Visually Lossless Compression Based on JPEG2000 for Efficient Transmission of High Resolution Color Aerial Images. Oh, Han. October 2010.
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California

Aerial image collections have grown exponentially in size in recent years. These high-resolution images are often viewed at a variety of scales. When an image is displayed at reduced scale, the maximum quantization step sizes for visually lossless quality become larger. However, previous visually lossless coding algorithms quantize the image with a single set of quantization step sizes, optimized for display at full resolution. This implies that if the image is rendered at reduced resolution, the codestream contains significant amounts of extraneous information. In this paper, we therefore propose a method that effectively incorporates multiple quantization step sizes, for various display resolutions, into the JPEG2000 framework. If images are browsed from a remote location, this method can significantly reduce bandwidth usage by transmitting only the portion of the codestream required for visually lossless reconstruction at the desired resolution. Experimental results for high-resolution color aerial images are presented.
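The multiple-step-size idea can be sketched as follows. The assumption that the visually lossless step size roughly doubles with each halving of display resolution is a hypothetical stand-in for the paper's measured thresholds:

```python
# Sketch: each display resolution level gets its own visually lossless
# quantization step size; a client requests only the codestream portion
# quantized finely enough for its display scale.

def step_for_display(full_res_step, downscale_levels, growth=2.0):
    """Step size usable at a display downscaled by 2**downscale_levels.

    Assumes (hypothetically) that the visually lossless threshold grows
    by 'growth' per halving of display resolution.
    """
    return full_res_step * growth ** downscale_levels

for d in range(3):
    print(d, step_for_display(0.25, d))
```

Under this assumption, a viewer browsing at quarter resolution tolerates a step size four times larger than the full-resolution one, so the finer-quantized refinement data never needs to be transmitted.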
6. Simulating Perception: Perception-based colours in virtual environments. Forsmark, Rebecca. January 2016.
This research explores the differences between how game-engine cameras and the human visual system (HVS) render colour. The study is motivated by a two-part research question: will HVS colours or game-camera colours be preferred when experiencing a virtual environment from a first-person perspective, and how does light intensity relate to preference? While previous research defines perceptual processes that influence the interpretation of colour information, this study advances the understanding of how these theories may be applied to 3D colour grading.

Evaluating the two colour modes with a combination of quantitative data and qualitative reflections established a correlation between preference and light intensity: HVS colours were preferred under high illumination and camera colours under low illumination. The findings imply that, to be well received, the colours of a virtual environment need to be adjusted according to illumination.
7. Color Halftoning Based on Neugebauer Primary Area Coverage and a Novel Color Halftoning Algorithm for Ink Savings. Jiang, Wanling. 11 June 2019.
A halftoning method based on Neugebauer Primary Area Coverage direct binary search (NPAC-DBS) is developed. With an optimized human visual system (HVS) model, we are able to obtain homogeneous, smooth color halftone images. The method separates the color image, represented in Neugebauer Primaries, into three channels based on the HVS; swap-only DBS then rearranges the dots to minimize the error metric, yielding the optimized halftone image. Separating the chrominance HVS filters into red-green and blue-yellow channels allows the HVS to be represented more accurately. Color halftone images generated with this method are compared against images produced by traditional screening methods.
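The swap-only DBS step can be sketched in simplified grayscale form. The small Gaussian kernel below is only a stand-in for the optimized HVS filters, and the error is recomputed by brute force for clarity rather than speed:

```python
import numpy as np

def perceived_error(cont, half, kernel):
    """Squared error between the lowpass-filtered halftone and the
    continuous-tone image (the Gaussian kernel stands in for an HVS
    lowpass model)."""
    diff = half - cont
    k = kernel.shape[0] // 2
    pad = np.pad(diff, k, mode="edge")
    filt = np.empty_like(diff)
    for i in range(diff.shape[0]):
        for j in range(diff.shape[1]):
            filt[i, j] = np.sum(pad[i:i + 2*k + 1, j:j + 2*k + 1] * kernel)
    return float(np.sum(filt ** 2))

def swap_only_dbs(cont, half, kernel, passes=5):
    """Greedy swap-only refinement: exchange neighbouring halftone
    pixels whenever the swap lowers the perceived error. Unlike toggle
    moves, swaps preserve the total dot count."""
    half = half.copy()
    h, w = half.shape
    for _ in range(passes):
        improved = False
        for i in range(h):
            for j in range(w):
                for di, dj in ((0, 1), (1, 0)):
                    ni, nj = i + di, j + dj
                    if ni >= h or nj >= w or half[i, j] == half[ni, nj]:
                        continue
                    before = perceived_error(cont, half, kernel)
                    half[i, j], half[ni, nj] = half[ni, nj], half[i, j]
                    if perceived_error(cont, half, kernel) >= before:
                        # Swap did not help: revert it.
                        half[i, j], half[ni, nj] = half[ni, nj], half[i, j]
                    else:
                        improved = True
        if not improved:
            break
    return half

# Demo: a flat 50% gray patch with all dots initially clumped on top.
cont = np.full((8, 8), 0.5)
init = np.zeros((8, 8))
init[:4, :] = 1.0
hvs = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
out = swap_only_dbs(cont, init, hvs)
```

A practical implementation evaluates each candidate swap's error change locally from precomputed filter correlation terms instead of re-filtering the whole image, which is what makes DBS tractable on real image sizes.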
To speed up the halftoning process while retaining quality similar to NPAC-DBS, we developed PARAWACS screens for color halftoning. A PARAWACS screen is designed level by level using DBS. With a PARAWACS screen, halftones can be created by a simple pixel-by-pixel comparison while retaining the merits of DBS. We further optimized the screen to achieve the best quality.
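The pixel-by-pixel screening step can be sketched as a simple threshold comparison. The 4x4 Bayer-style screen below is only an illustration; an actual PARAWACS screen is instead built level by level with DBS:

```python
import numpy as np

def halftone_with_screen(image, screen):
    """Pixel-by-pixel comparison of the image against a dither screen:
    place a dot wherever the pixel value exceeds the screen threshold."""
    return (image > screen).astype(np.uint8)

# Illustrative 4x4 Bayer-style screen with thresholds in (0, 1):
screen = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) / 16.0

flat = np.full((4, 4), 0.5)
print(halftone_with_screen(flat, screen))  # 8 of 16 pixels on: 50% coverage
```

Because the comparison is independent per pixel, screening needs only a single pass over the image, which is the source of the speed advantage over iterative DBS.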
Next, a novel halftoning method that we call Ink-Saving, Single-Frequency, Single-Angle, Multi-Drop (IS-SF-SA-MD) halftoning is introduced. The target application for our algorithm is high-volume production ink-jet printing, where users value a reduction in ink usage. Unlike commercial offset printing, in which four-colorant printing is achieved by rotating a single screen to four different angles, our method uses a single-frequency screen at a single angle and depends on accurate registration between colorant planes to minimize dot overlap, especially between the black (K) colorant and the other colorants (C, M, and Y). To increase the number of gray levels for each colorant, we exploit the multi-drop capabilities of the target writing system. We also use a hybrid screening method to yield improved halftone texture in the highlights and shadows. The proposed method can reduce ink usage significantly.
8. Adaptive Image Restoration: Perception-Based Neural Network Models and Algorithms. Perry, Stuart William. January 1999.
This thesis describes research into the field of image restoration: the process by which an image suffering some form of distortion or degradation is recovered to its original form. Two primary concepts within this field have been investigated.

The first concept is the use of a Hopfield neural network to implement the constrained least square error method of image restoration. The author reviews previous neural network restoration algorithms in the literature and builds on them to develop a new, faster version of the Hopfield neural network algorithm for image restoration. The versatility of the neural network approach is then extended to handle spatially variant distortion and adaptive regularisation. Using the Hopfield-based approach, an image suffering spatially variant degradation can be accurately restored without a substantial penalty in restoration time. In addition, the adaptive regularisation technique presented in this thesis produces superior results compared with non-adaptive techniques and is particularly effective when applied to the difficult, yet important, problem of semi-blind deconvolution.

The second concept investigated in this thesis is the difficult problem of incorporating concepts from human visual perception into image restoration techniques. The author develops a novel image error measure that compares two images based on differences between local regional statistics rather than pixel-level differences; this measure corresponds more closely to the way humans perceive the differences between two images. Two restoration algorithms based on versions of this error measure are developed. Algorithms utilising this measure show improved performance and produce visually more pleasing results for colour and grayscale images under high-noise conditions. Most importantly, the perception-based algorithms are extremely tolerant of faults in the restoration algorithm and hence very robust. A number of experiments demonstrate the performance of the various algorithms presented.
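A regional-statistics error measure in this spirit can be sketched as follows; the block size and the choice of mean and standard deviation as the compared statistics are illustrative, not the thesis's exact formulation:

```python
import numpy as np

def regional_stats_error(a, b, block=8):
    """Compare two images by differences between local block means and
    standard deviations rather than pixel-by-pixel differences."""
    err, n = 0.0, 0
    for i in range(0, a.shape[0] - block + 1, block):
        for j in range(0, a.shape[1] - block + 1, block):
            pa, pb = a[i:i+block, j:j+block], b[i:i+block, j:j+block]
            err += (pa.mean() - pb.mean()) ** 2 + (pa.std() - pb.std()) ** 2
            n += 1
    return err / n

rng = np.random.default_rng(1)
img = rng.random((32, 32))

# Shuffling pixels inside each block leaves local statistics intact, so
# the regional measure stays near zero even though pixel-wise MSE is large.
shuffled = img.copy()
for i in range(0, 32, 8):
    for j in range(0, 32, 8):
        blk = shuffled[i:i+8, j:j+8].ravel()  # non-contiguous slice: a copy
        rng.shuffle(blk)
        shuffled[i:i+8, j:j+8] = blk.reshape(8, 8)

print(regional_stats_error(img, shuffled), np.mean((img - shuffled) ** 2))
```

The demo illustrates why such a measure is more tolerant of restoration faults: rearrangements that preserve local texture statistics barely register, whereas pixel-level MSE penalises them heavily.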
9. Validation for Visually Lossless Compression of Stereo Images. Feng, Hsin-Chang. October 2013.
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV

This paper describes the details of subjective validation for visually lossless compression of stereoscopic three-dimensional (3D) images. The subjective testing method employed in this work is adapted from methods used previously for visually lossless compression of two-dimensional (2D) images. Confidence intervals on the correct response rate obtained from the subjective validation of compressed stereo pairs provide reliable evidence that the compressed stereo pairs are visually lossless.
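The confidence-interval step can be illustrated with a Wilson score interval on the correct-response rate of a two-alternative forced-choice trial; whether the paper uses this exact interval form is an assumption:

```python
import math

def wilson_interval(correct, trials, z=1.96):
    """95% Wilson score interval for a correct-response rate. If the
    interval contains the chance rate (0.5 for a two-alternative task),
    observers could not reliably tell original from compressed."""
    p = correct / trials
    denom = 1 + z ** 2 / trials
    centre = (p + z ** 2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials
                                   + z ** 2 / (4 * trials ** 2))
    return centre - half, centre + half

# 52 correct out of 100 trials: the interval straddles the 0.5 chance rate.
lo, hi = wilson_interval(52, 100)
print(round(lo, 3), round(hi, 3))
```

An interval that straddles chance is the "reliable evidence" pattern described above: performance indistinguishable from guessing supports the visually lossless claim.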
10. Measurement of Visibility Thresholds for Compression of Stereo Images. Feng, Hsin-Chang. October 2012.
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California

This paper proposes a method for measuring visibility thresholds for quantization distortion in JPEG2000 compression of stereoscopic 3D images. The crosstalk effect is carefully considered to ensure that quantization errors in each channel of the stereoscopic images are imperceptible to both eyes. A model for visibility thresholds is developed to reduce the daunting number of measurements required for the subjective experiments.
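Visibility thresholds of this kind are often measured with an adaptive staircase; the 2-down-1-up procedure below, with a hypothetical deterministic observer, is a generic illustration rather than the paper's actual protocol:

```python
def staircase(perceives, start_step, n_trials=60, factor=2.0):
    """2-down-1-up adaptive staircase: shrink the quantization step
    after two consecutive 'visible' responses, grow it after a single
    'invisible' response; converges near the ~71% detection point."""
    step = start_step
    downs = 0
    for _ in range(n_trials):
        if perceives(step):
            downs += 1
            if downs == 2:
                step /= factor
                downs = 0
        else:
            step *= factor
            downs = 0
    return step

# Hypothetical observer who sees distortion whenever the step exceeds 1.0:
final = staircase(lambda step: step > 1.0, start_step=8.0)
print(final)  # settles just around the true threshold of 1.0
```

The model mentioned in the abstract serves the same purpose as the staircase's adaptivity: extrapolating thresholds so that not every combination of subband and viewing condition has to be measured directly.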