  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Cross-layer perceptual optimization for wireless video transmission

Abdel Khalek, Amin Nazih 21 January 2014 (has links)
Bandwidth-intensive video streaming applications occupy an overwhelming fraction of bandwidth-limited wireless network traffic. Compressed video data are highly structured and the psycho-visual perception of distortions and losses closely depends on that structure. This dissertation exploits the inherent video data structure to develop perceptually-optimized transmission paradigms at different protocol layers that improve video quality of experience, introduce error resilience, and support more video users. First, we consider the problem of network-wide perceptual quality optimization whereby different video users with (possibly different) real-time delay constraints share wireless channel resources. Due to the inherently stochastic nature of wireless fading channels, we provide statistical delay guarantees using the theory of effective capacity. We derive the resource allocation policy that maximizes the sum video quality and show that the optimal operating point per user is such that the rate-distortion slope is the inverse of the supported video source rate per unit bandwidth, termed source spectral efficiency. We further propose a scheduling policy that maximizes the number of scheduled users that meet their QoS requirement. Next, we develop user-level perceptual quality optimization techniques for non-scalable video streams. For non-scalable videos, we estimate packet loss visibility through a generalized linear model and use it for prioritized packet delivery. We solve the problem of mapping video packets to MIMO subchannels and adapting per-stream rates to maximize the total perceptual value of successfully delivered packets per unit time. We show that the solution jointly yields improved video quality and lower latency.
Optimized packet-stream mapping enables transmission of more relevant packets over more reliable streams while unequal modulation opportunistically increases the transmission rate on the stronger streams to enable low latency delivery of high priority packets. Finally, we develop user-level perceptual quality optimization techniques for scalable video streams. We propose online learning of the mapping between packet losses and quality degradation using nonparametric regression. This quality-loss mapping is subsequently used to provide unequal error protection for different video layers with perceptual quality guarantees. Channel-aware scalable codec adaptation and buffer management policies simultaneously ensure continuous high-quality playback. Across the various contributions, analytic results as well as video transmission simulations demonstrate the value of perceptual optimization in improving video quality and capacity.
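The prioritized packet-to-stream mapping described above can be illustrated with a minimal greedy sketch (the function name, the scalar "perceptual value" per packet, and the scalar reliability per stream are assumptions for illustration, not the dissertation's actual joint optimization):

```python
def map_packets_to_streams(packet_values, stream_reliabilities):
    """Greedy sketch: send the most perceptually valuable packets
    over the most reliable MIMO subchannels (streams)."""
    # Rank packets by estimated perceptual value (e.g., loss visibility).
    pkt_order = sorted(range(len(packet_values)),
                       key=lambda i: packet_values[i], reverse=True)
    # Rank streams by reliability (e.g., post-detection SNR).
    str_order = sorted(range(len(stream_reliabilities)),
                       key=lambda j: stream_reliabilities[j], reverse=True)
    # Pair them off: highest-value packet onto the most reliable stream.
    return dict(zip(pkt_order, str_order))

# Packet 1 is most valuable and stream 0 most reliable, so 1 -> 0.
mapping = map_packets_to_streams([0.2, 0.9, 0.5], [0.99, 0.80, 0.95])
```

In the dissertation this assignment is solved jointly with per-stream rate adaptation; the sketch only captures the ordering intuition.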
2

Perceptual Image Compression using JPEG2000

Oh, Han January 2011 (has links)
Image sizes have increased exponentially in recent years. The resulting high-resolution images are typically encoded in a lossy fashion to achieve high compression ratios. Lossy compression can be categorized into visually lossless and visually lossy compression depending on the visibility of compression artifacts. This dissertation proposes visually lossless coding methods as well as a visually lossy coding method with perceptual quality control. All resulting codestreams are JPEG2000 Part-I compliant. Visually lossless coding is increasingly considered as an alternative to numerically lossless coding. In order to hide compression artifacts caused by quantization, visibility thresholds (VTs) are measured and used for quantization of subbands in JPEG2000. In this work, VTs are experimentally determined from statistically modeled quantization distortion, which is based on the distribution of wavelet coefficients and the dead-zone quantizer of JPEG2000. The resulting VTs are adjusted for locally changing background through a visual masking model, and then used to determine the minimum number of coding passes to be included in a codestream for visually lossless quality under desired viewing conditions. The proposed coding scheme successfully yields visually lossless images at competitive bitrates compared to those of numerically lossless coding and visually lossless algorithms in the literature. This dissertation also investigates changes in VTs as a function of display resolution and proposes a method which effectively incorporates multiple VTs for various display resolutions into the JPEG2000 framework. The proposed coding method allows for visually lossless decoding at resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream.
When images are browsed remotely, this method can significantly reduce bandwidth usage. In contrast to images encoded in a visually lossless manner, highly compressed images inevitably have visible compression artifacts. To minimize these artifacts, many compression algorithms exploit the varying sensitivity of the human visual system (HVS) to different frequencies, which is typically measured at the near-threshold level where distortion is just noticeable. However, it is unclear whether the same frequency sensitivity applies at the supra-threshold level where distortion is highly visible. In this dissertation, the sensitivity of the HVS at several supra-threshold distortion levels is measured based on the JPEG2000 quantization distortion model. Then, a low-complexity JPEG2000 encoder using the measured sensitivity is described. The proposed visually lossy encoder significantly reduces encoding time while maintaining superior visual quality compared with conventional JPEG2000 encoders.
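The dead-zone quantizer that underlies the distortion model above can be sketched in a few lines (the step size and the reconstruction offset `r` are illustrative choices; JPEG2000 leaves the reconstruction offset to the decoder):

```python
def deadzone_quantize(x, delta):
    """Dead-zone scalar quantizer in the style of JPEG2000: the zero
    bin is twice as wide as the others, suppressing small wavelet
    coefficients that contribute little to perceived quality."""
    sign = -1 if x < 0 else 1
    return sign * int(abs(x) // delta)

def deadzone_dequantize(q, delta, r=0.5):
    """Reconstruction; r = 0.5 places outputs at bin midpoints."""
    if q == 0:
        return 0.0
    sign = -1 if q < 0 else 1
    return sign * (abs(q) + r) * delta

# A coefficient of 3.7 with step 1.0 maps to index 3, reconstructed as 3.5;
# a coefficient of -0.9 falls in the dead zone and quantizes to 0.
```

In the visually lossless scheme described above, the step size per subband would be driven by the measured VTs rather than chosen freely.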
3

Proposta de metodologia para avaliação de métodos de iluminação global em síntese de imagens / Proposal of a methodology for evaluation of global illumination methods in image synthesis.

Meneghel, Giovani Balen 01 July 2015 (has links)
Produzir imagens de alta qualidade por computador, no menor tempo possível, que sejam convincentes ao público alvo, utilizando-se de maneira ótima todos os recursos computacionais à disposição, é uma tarefa que envolve uma cadeia de processos específicos, sendo um grande desafio ainda nos dias de hoje. O presente trabalho apresenta um estudo sobre toda esta cadeia de processos, com foco na avaliação de métodos de Iluminação Global empregados na Síntese de Imagens fotorrealistas para as áreas de Animação e Efeitos Visuais. Com o objetivo de auxiliar o usuário na tarefa de produzir imagens fotorrealistas de alta qualidade, foram realizados experimentos envolvendo diversas cenas de teste e seis métodos de Iluminação Global do Estado da Arte: Path Tracing, Light Tracing, Bidirectional Path Tracing, Metropolis Light Transport, Progressive Photon Mapping e Vertex Connection and Merging. O sintetizador escolhido para execução do experimento foi o Mitsuba Renderer. Para avaliação da qualidade dos resultados, duas métricas perceptuais foram adotadas: o Índice de Similaridade Estrutural SSIM e o Previsor de Diferenças Visuais HDR-VDP-2. A partir da avaliação dos resultados, foi construído um Guia de Recomendações para o usuário, indicando, com base nas características de uma cena arbitrária, o método de Iluminação Global mais adequado para realizar a síntese das imagens. Por fim, foram apontados caminhos de pesquisa para trabalhos futuros, sugerindo o emprego de classificadores, métodos de redução de parâmetros e Inteligência Artificial a fim de automatizar o processo de produção de imagens fotorrealistas e de alta qualidade. / The task of generating high quality computer images in the shortest time possible, believable to the target audience's perception, while making optimal use of all available computational resources, is still a challenging procedure composed of a chain of specific processes.
This work presents a study of this chain, focusing on the evaluation of Global Illumination methods used on the Synthesis of Photorealistic Images, in the areas of Animation and Visual Effects. To achieve the goal of helping users to produce high-quality photorealistic images, two experiments were proposed containing several test scenes and six State-of-the-Art Global Illumination methods: Path Tracing, Light Tracing, Bidirectional Path Tracing, Metropolis Light Transport, Progressive Photon Mapping and Vertex Connection and Merging. In order to execute the tests, the open source renderer Mitsuba was used. The quality of the produced images was analyzed using two different perceptual metrics: Structural Similarity Index SSIM and Visual Difference Predictor HDR-VDP-2. By analyzing the results, a Recommendation Guide was created, providing suggestions, based on an arbitrary scene's characteristics, of the most suitable Global Illumination method to be used in order to synthesize images from the given scene. Finally, future research directions are presented, proposing the use of classifiers, parameter reduction methods and Artificial Intelligence, in order to build an automatic procedure to generate high quality photorealistic images.
5

Objective Perceptual Quality Assessment of JPEG2000 Image Coding Format Over Wireless Channel

Chintala, Bala Venkata Sai Sundeep January 2019 (has links)
A dominant source of Internet traffic today is compressed images. Image compression plays an important role in modern multimedia communications. The image compression standards set by the Joint Photographic Experts Group (JPEG) include JPEG and JPEG2000. The expert group developed the JPEG image compression standard so that still pictures could be compressed to be sent over e-mail, displayed on a webpage, and make high-resolution digital photography possible. This standard was originally based on the Discrete Cosine Transform (DCT), a mathematical method used to convert a sequence of data to the frequency domain. In 2000, however, the expert group proposed a new standard, which came to be known as JPEG2000. The difference between the two is that the latter provides better compression efficiency, but at the cost of higher computation to achieve it. JPEG is a lossy compression standard which can discard less important information without causing noticeable perceptual differences, whereas the primary purpose of lossless compression is to reduce the number of bits required to represent the original image samples without any loss of information. Application areas of the JPEG image compression standard include the Internet, digital cameras, and printing and scanning peripherals. In this thesis, a simulation framework is set up to conduct the objective quality assessment. An image is given as input to the wireless communication system, its compressed data size is varied (e.g., 5%, 10%, 15%), and a Signal-to-Noise Ratio (SNR) value is given as input for JPEG2000 compression. This compressed image is then passed through a JPEG encoder and transmitted over a Rayleigh fading channel.
The image obtained after applying these operations is decoded at the receiver, and the inverse discrete wavelet transform (IDWT) is applied to invert the JPEG2000 compression. The coefficients are scalar-quantized to reduce the number of bits required to represent them without visible loss of image quality. The final image is then displayed on the screen. After decoding, the original input image is compared with the received images of varying data size at each SNR value. In particular, objective perceptual quality assessment through the Structural Similarity (SSIM) index is provided using MATLAB.
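The Rayleigh fading link in this setup can be mimicked with a small Monte-Carlo sketch (BPSK signalling, flat fading, and perfect channel knowledge at the receiver are simplifying assumptions for illustration, not details taken from the thesis):

```python
import math
import random

def rayleigh_bpsk_ber(num_bits, snr_db, seed=1):
    """Estimate the bit-error rate of BPSK over a flat Rayleigh
    fading channel at the given average SNR in dB."""
    random.seed(seed)
    snr = 10 ** (snr_db / 10)
    noise_std = math.sqrt(1 / (2 * snr))
    errors = 0
    for _ in range(num_bits):
        bit = random.getrandbits(1)
        symbol = 1.0 if bit else -1.0
        # Rayleigh gain = magnitude of a unit-power complex Gaussian.
        h = math.hypot(random.gauss(0, math.sqrt(0.5)),
                       random.gauss(0, math.sqrt(0.5)))
        received = h * symbol + random.gauss(0, noise_std)
        # Coherent detection with perfect channel state information.
        errors += (received / h >= 0) != bool(bit)
    return errors / num_bits
```

Fading makes the error rate fall only polynomially with SNR, which is why the thesis examines image quality across a range of SNR values rather than at a single operating point.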
6

Analyse subjective et évaluation objective de la qualité perceptuelle des maillages 3D / Perceptual quality assessment : subjective and objective studies

Torkhani, Fakhri 01 December 2014 (has links)
Les maillages 3D polygonaux sont largement utilisés dans diverses applications telles que le divertissement numérique, la conception assistée par ordinateur et l'imagerie médicale. Un maillage peut être soumis à différents types d'opérations comme la compression, le tatouage ou la simplification qui introduisent des distorsions géométriques (modifications) à la version originale. Il est important de quantifier ces modifications introduites au maillage d'origine et d'évaluer la qualité perceptuelle des maillages dégradés. Dans ce cadre, on s'intéresse dans cette thèse à l'évaluation de la qualité perceptuelle des maillages 3D statiques et dynamiques. On présente des études expérimentales pour l'évaluation subjective de la qualité des maillages 3D dynamiques. On présente également de nouvelles métriques objectives, de type avec-référence complète ou de type avec référence-réduite, qui sont efficaces pour l'estimation de la qualité perçue des maillages statiques et dynamiques. / 3D mesh animations have been increasingly used in various applications, e.g., in digital entertainment, computer-aided design and medical imaging. It is possible that a mesh model undergoes some lossy operations, e.g., compression, watermarking or simplification, which can impair the original mesh surface and introduce geometric distortions. An important task is to quantify such distortions and assess the perceptual quality of impaired meshes. In this manuscript, we focus on the perceptual quality assessment of 3D static and dynamic meshes. We present psychometric experiments that we conducted to measure the subjective perceptual quality of dynamic meshes. We also present new full-reference and reduced-reference objective metrics capable of faithfully evaluating the perceptual quality of 3D static and dynamic meshes.
7

SSIM-Inspired Quality Assessment, Compression, and Processing for Visual Communications

Rehman, Abdul January 2013 (has links)
Objective Image and Video Quality Assessment (I/VQA) measures predict image/video quality as perceived by human beings - the ultimate consumers of visual data. Existing research in the area is mainly limited to benchmarking and monitoring of visual data. The use of I/VQA measures in the design and optimization of image/video processing algorithms and systems is more desirable, challenging and fruitful but has not been well explored. Among the recently proposed objective I/VQA approaches, the structural similarity (SSIM) index and its variants have emerged as promising measures that show superior performance as compared to the widely used mean squared error (MSE) and are computationally simple compared with other state-of-the-art perceptual quality measures. In addition, SSIM has a number of desirable mathematical properties for optimization tasks. The goal of this research is to break the tradition of using MSE as the optimization criterion for image and video processing algorithms. We tackle several important problems in visual communication applications by exploiting SSIM-inspired design and optimization to achieve significantly better performance. Firstly, the original SSIM is a Full-Reference IQA (FR-IQA) measure that requires access to the original reference image, making it impractical in many visual communication applications. We propose a general purpose Reduced-Reference IQA (RR-IQA) method that can estimate SSIM with high accuracy with the help of a small number of RR features extracted from the original image. Furthermore, we introduce and demonstrate the novel idea of partially repairing an image using RR features. Secondly, image processing algorithms such as image de-noising and image super-resolution are required at various stages of visual communication systems, starting from image acquisition to image display at the receiver. 
We incorporate SSIM into the framework of sparse signal representation and non-local means methods and demonstrate improved performance in image de-noising and super-resolution. Thirdly, we incorporate SSIM into the framework of perceptual video compression. We propose an SSIM-based rate-distortion optimization scheme and an SSIM-inspired divisive optimization method that transforms the DCT domain frame residuals to a perceptually uniform space. Both approaches demonstrate the potential to largely improve the rate-distortion performance of state-of-the-art video codecs. Finally, in real-world visual communications, it is a common experience that end-users receive video with significantly time-varying quality due to the variations in video content/complexity, codec configuration, and network conditions. How human visual quality of experience (QoE) changes with such time-varying video quality is not yet well-understood. We propose a quality adaptation model that is asymmetrically tuned to increasing and decreasing quality. The model improves upon the direct SSIM approach in predicting subjective perceptual experience of time-varying video quality.
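The SSIM index central to this work compares luminance, contrast, and structure between a reference and a distorted signal; a single-window (global) version can be sketched as follows (practical implementations, including the original SSIM, apply this within sliding local windows with Gaussian weighting):

```python
def global_ssim(x, y, dynamic_range=255, k1=0.01, k2=0.03):
    """Single-window SSIM between two equal-length pixel sequences.
    Returns 1.0 for identical inputs, lower values under distortion."""
    n = len(x)
    # Stabilizing constants from the standard SSIM formulation.
    c1 = (k1 * dynamic_range) ** 2
    c2 = (k2 * dynamic_range) ** 2
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    var_x = sum((a - mean_x) ** 2 for a in x) / (n - 1)
    var_y = sum((b - mean_y) ** 2 for b in y) / (n - 1)
    cov = sum((a - mean_x) * (b - mean_y)
              for a, b in zip(x, y)) / (n - 1)
    # Combined luminance * contrast * structure comparison.
    return (((2 * mean_x * mean_y + c1) * (2 * cov + c2)) /
            ((mean_x ** 2 + mean_y ** 2 + c1) * (var_x + var_y + c2)))
```

Unlike MSE, this expression is sensitive to structural correlation (via the covariance term), which is what makes SSIM attractive as an optimization criterion throughout the thesis.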
9

Scalable video compression with optimized visual performance and random accessibility

Leung, Raymond, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2006 (has links)
This thesis is concerned with maximizing the coding efficiency, random accessibility and visual performance of scalable compressed video. The unifying theme behind this work is the use of finely embedded localized coding structures, which govern the extent to which these goals may be jointly achieved. The first part focuses on scalable volumetric image compression. We investigate 3D transform and coding techniques which exploit inter-slice statistical redundancies without compromising slice accessibility. Our study shows that the motion-compensated temporal discrete wavelet transform (MC-TDWT) practically achieves an upper bound to the compression efficiency of slice transforms. From a video coding perspective, we find that most of the coding gain is attributed to offsetting the learning penalty in adaptive arithmetic coding through 3D code-block extension, rather than inter-frame context modelling. The second aspect of this thesis examines random accessibility. Accessibility refers to the ease with which a region of interest is accessed (subband samples needed for reconstruction are retrieved) from a compressed video bitstream, subject to spatiotemporal code-block constraints. We investigate the fundamental implications of motion compensation for random access efficiency and the compression performance of scalable interactive video. We demonstrate that inclusion of motion compensation operators within the lifting steps of a temporal subband transform incurs a random access penalty which depends on the characteristics of the motion field. The final aspect of this thesis aims to minimize the perceptual impact of visible distortion in scalable reconstructed video. We present a visual optimization strategy based on distortion scaling which raises the distortion-length slope of perceptually significant samples. 
This alters the codestream embedding order during post-compression rate-distortion optimization, thus allowing visually sensitive sites to be encoded with higher fidelity at a given bit-rate. For visual sensitivity analysis, we propose a contrast perception model that incorporates an adaptive masking slope. This versatile feature provides a context which models perceptual significance. It enables scene structures that otherwise suffer significant degradation to be preserved at lower bit-rates. The novelty in our approach derives from a set of "perceptual mappings" which account for quantization noise shaping effects induced by motion-compensated temporal synthesis. The proposed technique reduces wavelet compression artefacts and improves the perceptual quality of video.
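The distortion-scaling idea can be sketched as reordering embedded coding passes by a perceptually weighted distortion-length slope (the pass statistics and weights below are invented for illustration; real post-compression rate-distortion optimization also enforces slope convexity and per-code-block ordering constraints):

```python
def perceptual_embedding_order(passes, weights):
    """Order coding passes so that passes with the largest weighted
    distortion reduction per coded bit are embedded first.

    passes  : list of (delta_rate_bits, delta_distortion) per pass
    weights : perceptual significance of the site each pass covers
    """
    def weighted_slope(i):
        d_rate, d_dist = passes[i]
        # Scaling distortion by perceptual weight raises the slope of
        # visually sensitive sites, moving them earlier in the stream.
        return weights[i] * d_dist / d_rate
    return sorted(range(len(passes)), key=weighted_slope, reverse=True)

# Pass 2 covers a perceptually significant site (weight 2.0), so it
# jumps ahead of pass 1 despite a worse unweighted slope.
order = perceptual_embedding_order(
    [(100, 50.0), (100, 80.0), (200, 90.0)],
    [1.0, 1.0, 2.0])
```

Truncating the reordered stream at a given bit budget then retains the perceptually weighted passes first, which is the effect the distortion-scaling strategy aims for.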
