11

Spatial pooling strategies for image quality assessment

Moorthy, Anush Krishna 03 September 2009
Recent image quality assessment (IQA) metrics achieve high correlation with human perception of image quality. Naturally, it is of interest to produce even better results. One promising method is to weight image quality measurements by visual importance. To this end, we describe three strategies: visual fixation-based weighting, quality-based weighting, and weighting based on the distribution of local quality scores about the mean. In contrast with some prior studies, we find that these strategies can significantly improve correlation with subjective judgment. We demonstrate improvements on the SSIM index in both its single-scale and multi-scale versions, using the LIVE database as a test bed.
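A minimal sketch of what two such pooling strategies can look like in practice, assuming a local quality map (e.g., an SSIM map) with values in [0, 1]; the function names, the exponent p, and the percentile value are illustrative choices, not the thesis's exact formulation:

```python
import numpy as np

def quality_weighted_pool(quality_map, p=2.0):
    """Quality-based weighting: local scores are averaged with weights
    that grow as local quality drops, so poor regions dominate."""
    q = np.asarray(quality_map, dtype=np.float64)
    w = (1.0 - q) ** p              # distance from perfect quality
    if w.sum() == 0:                # pristine image: all weights are zero
        return float(q.mean())
    return float((w * q).sum() / w.sum())

def percentile_pool(quality_map, pct=6.0):
    """Percentile variant: average only the worst pct% of local scores."""
    q = np.sort(np.ravel(quality_map))
    k = max(1, int(len(q) * pct / 100.0))
    return float(q[:k].mean())
```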
12

Natural scene statistics based blind image quality assessment in spatial domain

Mittal, Anish 05 August 2011
We propose a natural scene statistics based quality assessment model, the Referenceless Image Spatial QUality Evaluator (RISQUE), which extracts marginal statistics of locally normalized luminance signals and measures the 'un-naturalness' of a distorted image from the deviation of those statistics. We also model the distribution of pairwise products of adjacent normalized luminance signals, which provides information about the orientation of distortions. Although multi-scale, the model is defined in the spatial domain, avoiding costly frequency or wavelet transforms. The framework is simple, fast, and grounded in human perception, and it is shown to perform statistically better than other proposed no-reference algorithms as well as the full-reference Structural SIMilarity (SSIM) index.
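The two feature families the abstract describes can be sketched as follows: mean-subtracted, contrast-normalized (MSCN) luminance coefficients, and their pairwise products along four orientations. The Gaussian window width and the stabilizing constant c follow common practice for this family of models; they are assumptions, not necessarily the thesis's exact values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(img, sigma=7/6, c=1.0):
    """Mean-subtracted, contrast-normalized (MSCN) luminance signals."""
    img = np.asarray(img, dtype=np.float64)
    mu = gaussian_filter(img, sigma)                    # local mean
    var = gaussian_filter(img * img, sigma) - mu * mu   # local variance
    sd = np.sqrt(np.abs(var))                           # local std-dev
    return (img - mu) / (sd + c)

def pairwise_products(mscn):
    """Products of adjacent MSCN coefficients along four orientations;
    their distributions carry the orientation information of distortions."""
    return {
        "horizontal": mscn[:, :-1] * mscn[:, 1:],
        "vertical":   mscn[:-1, :] * mscn[1:, :],
        "main_diag":  mscn[:-1, :-1] * mscn[1:, 1:],
        "anti_diag":  mscn[:-1, 1:] * mscn[1:, :-1],
    }
```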
13

Perceived Image Quality Assessment for Stereoscopic Vision

Akhter, Roushain 07 April 2011
This thesis describes an automatic evaluation approach for estimating the quality of stereo displays and vision systems using image features. The method is inspired by the human visual system. Display of stereo images is widely used to enhance the viewing experience of three-dimensional (3D) visual displays and communication systems. Applications are numerous and range from entertainment to more specialized areas such as 3D visualization and broadcasting, robot tele-operation, object recognition, body exploration, 3D teleconferencing, and therapeutic purposes. Consequently, perceived image quality is important for assessing the performance of 3D imaging applications. Subjective testing (i.e., asking human viewers to rank the quality of stereo images) is undoubtedly the most accurate method of quality evaluation, since it reflects true human perception. However, such assessments are time-consuming and expensive, and they cannot be done in real time. Therefore, the goal of this research is to develop objective quality evaluation methods (computational models that can automatically predict perceived image quality) that correlate well with subjective judgments. I believe that the perceived distortion and disparity of any stereoscopic display are strongly dependent on local features, such as edge (non-uniform) and non-edge (uniform) areas. Therefore, in this research, I propose a No-Reference (NR) objective quality assessment method for coded stereoscopic images based on segmented local features of artifacts and disparity. The method evaluates local feature information such as edge and non-edge area based relative disparity estimation, as well as the blockiness, blur, and zero-crossings within image blocks. A block-based edge dissimilarity approach is used for disparity estimation. I use the Toyama stereo image database to evaluate the performance and to compare it with other approaches, both qualitatively and quantitatively.
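Two of the blockwise artifact features named above, blockiness and the zero-crossing rate, can be computed along the horizontal direction as in this sketch. The 8-pixel block size assumes standard block-coded (e.g., JPEG) images; the thesis combines such features with blur measures and edge-based disparity estimation.

```python
import numpy as np

def blockiness_and_zero_crossings(img, block=8):
    """Horizontal blockiness and zero-crossing rate of a coded image."""
    img = np.asarray(img, dtype=np.float64)
    d = np.diff(img, axis=1)                    # horizontal differences
    # Blockiness: mean absolute difference across block boundaries.
    boundaries = np.arange(block - 1, d.shape[1], block)
    blockiness = np.abs(d[:, boundaries]).mean()
    # Zero-crossing rate of the difference signal (activity measure).
    zc_rate = (d[:, :-1] * d[:, 1:] < 0).mean()
    return blockiness, zc_rate
```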
15

A Study of the Structural Similarity Image Quality Measure with Applications to Image Processing

Brunet, Dominique 02 August 2012
Since its introduction in 2004, the Structural Similarity (SSIM) index has gained widespread popularity as an image quality assessment measure. SSIM is currently recognized to be one of the most powerful methods of assessing the visual closeness of images. That being said, the Mean Squared Error (MSE), which performs very poorly from a perceptual point of view, still remains the most common optimization criterion in image processing applications because of its relative simplicity along with a number of other properties that are deemed important. In this thesis, some necessary tools to assist in the design of SSIM-optimal algorithms are developed. This work combines theoretical developments with experimental research and practical algorithms. The description of the mathematical properties of the SSIM index represents the principal theoretical achievement in this thesis. Indeed, it is demonstrated how the SSIM index can be transformed into a distance metric. Local convexity, quasi-convexity, symmetries and invariance properties are also proved. The study of the SSIM index is also generalized to a family of metrics called normalized (or M-relative) metrics. Various analytical techniques for different kinds of SSIM-based optimization are then devised. For example, the best approximation according to the SSIM is described for orthogonal and redundant basis sets. SSIM-geodesic paths with arclength parameterization are also traced between images. Finally, formulas for SSIM-optimal point estimators are obtained. On the experimental side of the research, the structural self-similarity of images is studied. This leads to the confirmation of the hypothesis that the main source of self-similarity of images lies in their regions of low variance. On the practical side, an implementation of local statistical tests on the image residual is proposed for the assessment of denoised images. Also, heuristic estimations of the SSIM index and the MSE are developed. The research performed in this thesis should lead to the development of state-of-the-art image denoising algorithms. A better comprehension of the mathematical properties of the SSIM index represents another step toward the replacement of the MSE with SSIM in image processing applications.
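The distance-metric transformation mentioned in the abstract can be illustrated with a small sketch: a scalar SSIM computed from whole-patch statistics (the full index uses local windows), and the square-root transform sqrt(1 - SSIM) under which SSIM-like similarities behave as a distance. The constants assume images scaled to [0, 1]; this is an illustration of the idea, not the thesis's full construction.

```python
import numpy as np

def ssim_scalar(x, y, c1=1e-4, c2=9e-4):
    """SSIM between two patches from global statistics (no windowing)."""
    x = np.ravel(np.asarray(x, dtype=np.float64))
    y = np.ravel(np.asarray(y, dtype=np.float64))
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()          # covariance
    return ((2*mx*my + c1) * (2*cxy + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def ssim_distance(x, y):
    """sqrt(1 - SSIM): the transform under which the normalized SSIM
    components satisfy the axioms of a distance metric."""
    return float(np.sqrt(max(0.0, 1.0 - ssim_scalar(x, y))))
```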
16

GPGPU based implementation of BLIINDS-II NR-IQA

January 2016
The technological advances of the past few decades have made possible the creation and consumption of digital visual content at an explosive rate. Consequently, there is a need for efficient quality monitoring systems to ensure minimal degradation of images and videos during processing operations such as compression, transmission, and storage. Objective Image Quality Assessment (IQA) algorithms have been developed that predict quality scores matching well with human subjective quality assessment. However, much research remains before IQA algorithms can be deployed in real-world systems; the long runtime required to process a single image frame is a major hurdle. Graphics Processing Units (GPUs), equipped with massive numbers of computational cores, provide an opportunity to accelerate IQA algorithms by performing computations in parallel. Indeed, General Purpose GPU (GPGPU) techniques have been applied to a few algorithms that fall under the Full Reference IQA paradigm. We present a GPGPU implementation of the Blind Image Integrity Notator using DCT Statistics (BLIINDS-II), which falls under the No Reference IQA paradigm. We achieve a speedup of over 30x relative to the previous CPU version of the algorithm. We test our implementation using various distorted images from the CSIQ database and present the observed performance trends. We achieve a very consistent runtime of around 9 milliseconds per distorted image, enabling the processing of over 100 images per second. / Masters Thesis, Computer Science, 2016
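BLIINDS-II builds its features from per-block DCT statistics, and since blocks are mutually independent the computation parallelizes naturally on a GPU. Below is a rough CPU-side sketch of one such feature, the coefficient of frequency variation of AC coefficient magnitudes; the block size and the single feature shown are illustrative, not the full BLIINDS-II feature set or the thesis's GPU kernel.

```python
import numpy as np
from scipy.fft import dctn

def coeff_of_frequency_variation(img, bsize=5):
    """Per-block ratio of std-dev to mean of |AC| DCT coefficients.
    Each block is independent, which is what a GPGPU port exploits
    by assigning blocks to parallel threads."""
    img = np.asarray(img, dtype=np.float64)
    h = img.shape[0] - img.shape[0] % bsize
    w = img.shape[1] - img.shape[1] % bsize
    feats = []
    for i in range(0, h, bsize):
        for j in range(0, w, bsize):
            coefs = dctn(img[i:i+bsize, j:j+bsize], norm="ortho")
            ac = np.abs(coefs.ravel()[1:])      # discard the DC term
            feats.append(ac.std() / (ac.mean() + 1e-12))
    return np.array(feats)
```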
17

Hardware Acceleration of Most Apparent Distortion Image Quality Assessment Algorithm on FPGA Using OpenCL

January 2017
The information era has brought about many technological advancements in the past few decades, leading to an exponential increase in the creation of digital images and videos. Nearly all digital images pass through some image processing algorithm, for reasons such as compression, transmission, or storage. Data loss during this processing leaves us with degraded images; hence, to ensure minimal degradation, quality assessment has become a requirement. Image Quality Assessment (IQA) has been researched and developed over the last several decades to predict quality scores in a manner that agrees with human judgments of quality. Modern IQA algorithms are quite effective in prediction accuracy, but their development has not focused on computational performance: existing serial implementations require relatively large runtimes, on the order of seconds for a single frame. Hardware acceleration using Field Programmable Gate Arrays (FPGAs) provides a reconfigurable computing fabric that can be tailored to a broad range of applications. Programming FPGAs has usually required expertise in hardware description languages (HDLs) or a high-level synthesis (HLS) tool. OpenCL is an open standard for cross-platform, parallel programming of heterogeneous systems; together with the Altera OpenCL SDK, it enables developers to exploit an FPGA's potential without extensive hardware knowledge. Hence, this thesis focuses on accelerating the computationally intensive part of the Most Apparent Distortion (MAD) algorithm on an FPGA using OpenCL. The results are compared with a CPU implementation to evaluate performance and efficiency gains. / Masters Thesis, Electrical Engineering, 2017
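For context, MAD scores an image by adaptively blending a detection-based measure (dominant for near-threshold distortions) with an appearance-based measure (dominant for clearly visible ones). A sketch of that combination follows, using the weighting constants from the published MAD paper; these are assumptions about the reference algorithm, not values taken from this thesis's build.

```python
def mad_combine(d_detect, d_appear, b1=0.467, b2=0.130):
    """Adaptive geometric blend of MAD's two stages: alpha slides toward
    the detection term for high-quality images and toward the appearance
    term as visible distortion grows."""
    alpha = 1.0 / (1.0 + b1 * d_detect ** b2)
    return (d_detect ** alpha) * (d_appear ** (1.0 - alpha))
```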
18

Texture Structure Analysis

January 2014
Texture analysis plays an important role in applications such as automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging, and document processing, to name a few. Texture structure analysis is the process of studying the structure present in textures, which can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as one of the important pre-attentive cues in low-level image understanding. Like the HVS, image processing and computer vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity of an arbitrary texture is introduced and addressed. One key contribution of this work is an objective no-reference perceptual texture regularity metric based on visual saliency. Other key contributions include an adaptive texture synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures. In order to select the best-performing visual attention model for textures, the most popular visual attention models are evaluated on their ability to predict visual saliency on textures. Since there is no publicly available database with ground-truth saliency maps for images with exclusively texture content, a new eye-tracking database is systematically built. Using the Visual Saliency Map (VSM) generated by the best visual attention model, the proposed texture regularity metric is computed. The metric is based on the observation that VSM characteristics differ between textures of differing regularity, and it combines two texture regularity scores, namely a textural similarity score and a spatial distribution score. To evaluate the performance of the proposed regularity metric, a texture regularity database called RegTEX is built as part of this work. It is shown through subjective testing that the proposed metric has a strong correlation with the Mean Opinion Score (MOS) for the perceived regularity of textures. The proposed method is also shown to be robust to geometric and photometric transformations, and it outperforms some of the popular texture regularity metrics in predicting perceived regularity. The potential of the proposed metric to improve the performance of many image-processing applications is also presented. The influence of perceived texture regularity on the perceptual quality of synthesized textures is demonstrated through a synthesized-textures database named SynTEX. It is shown through subjective testing that textures with different degrees of perceived regularity exhibit different degrees of vulnerability to artifacts resulting from different texture synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture synthesis method based on the perceived regularity of the original texture. A reduced-reference texture quality metric for texture synthesis is also proposed; it is based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures, where perceived granularity is quantified through a new granularity metric proposed in this work. It is shown through subjective testing that the proposed quality metric, using just two parameters, has a strong correlation with the MOS for the fidelity of synthesized textures and outperforms state-of-the-art full-reference quality metrics on three different texture databases. Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established. / Ph.D. Dissertation, Electrical Engineering, 2014
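The abstract does not give the exact two-parameter combination, so the sketch below assumes one plausible form: a weighted sum of the absolute changes in regularity and granularity between the original and synthesized textures, mapped to a fidelity score. The function name, the weight w, and the linear form are all assumptions for illustration.

```python
def rr_texture_quality(reg_ref, reg_syn, gran_ref, gran_syn, w=0.5):
    """Reduced-reference fidelity from two scalar side-channel features:
    perceived regularity and perceived granularity of each texture.
    Scores near 1.0 mean the synthesis preserved both properties."""
    d_reg = abs(reg_ref - reg_syn)      # change in perceived regularity
    d_gran = abs(gran_ref - gran_syn)   # change in perceived granularity
    return 1.0 - (w * d_reg + (1.0 - w) * d_gran)
```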
19

Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks

Kong, Lingchao 01 October 2019
No description available.
20

Program for Evaluating Image Quality Using a Neural Network

Šimíček, Pavel January 2008
This thesis studies the assessment of picture quality using an artificial neural network. In the first part, the two main approaches to evaluating picture quality are described: subjective assessment, in which a group of viewers watches a picture and rates its quality, and objective assessment, which is based on mathematical relations. The calculation of the Structural SIMilarity (SSIM) index is analyzed in detail. In the second part, the basics of neural networks are described. A neural network designed to simulate subjective assessment scores from the SSIM index was created in Matlab.
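A present-day analogue of that Matlab network, sketched in Python with scikit-learn: a small regressor mapping SSIM values to subjective scores. The training data here is synthetic (an invented SSIM-to-MOS curve plus noise); a real experiment would fit to viewer-rated images, as the thesis does.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
ssim = rng.uniform(0.2, 1.0, size=(200, 1))                     # SSIM inputs
mos = 1.0 + 4.0 * ssim[:, 0] ** 2 + rng.normal(0.0, 0.1, 200)   # fake MOS ~ [1, 5]

# One small hidden layer suffices for a smooth 1-D mapping.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(ssim, mos)
print(model.predict([[0.95]]))   # predicted subjective score for SSIM = 0.95
```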
