1

Blind Full Reference Quality Assessment of Poisson Image Denoising

Zhang, Chen 05 June 2014 (has links)
No description available.
2

Analysis and Performance Optimization of a GPGPU Implementation of Image Quality Assessment (IQA) Algorithm VSNR

January 2017 (has links)
abstract: Image processing has changed the way we store, view and share images. One important component of sharing images over networks is image compression. Lossy image compression techniques compromise the quality of images to reduce their size. To ensure that the distortion introduced by compression is not highly detectable by humans, the perceived quality of an image needs to be maintained above a certain threshold. Determining this threshold is best done with human subjects, but that is impractical in real-world scenarios. As a solution, image quality assessment (IQA) algorithms are used to automatically compute a fidelity score for an image. However, IQA algorithms can run slowly because of the complex statistical computations involved. General Purpose Graphics Processing Unit (GPGPU) programming is one of the solutions proposed to improve their runtime performance. This thesis presents a Compute Unified Device Architecture (CUDA) based optimized implementation of the full-reference IQA algorithm Visual Signal-to-Noise Ratio (VSNR), which uses an M-level 2D Discrete Wavelet Transform (DWT) with 9/7 biorthogonal filters among other statistical computations. The implementation is tested on four image quality databases containing images with multiple distortions and sizes ranging from 512 x 512 to 1600 x 1280. The CUDA implementation of VSNR shows a speedup of over 32x for 1600 x 1280 images, and the speedup scales with image size. The results show that the implementation is fast enough to apply VSNR to high-definition video at a frame rate of 60 fps. This work presents the optimizations obtained from the use of the GPU's constant memory and from reuse of allocated GPU memory, and shows the performance improvement achieved through profiler-driven GPGPU development in CUDA. The presented implementation can be deployed in production alongside existing applications. / Dissertation/Thesis / Masters Thesis Computer Science 2017
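As context for the DWT stage this abstract mentions, the following is a minimal CPU-side sketch (not the thesis's CUDA code) of an M-level 2D wavelet decomposition with the CDF 9/7 biorthogonal filters, using PyWavelets, where the 9/7 pair is exposed as 'bior4.4'; the choice of five levels and the energy statistic are illustrative assumptions.

```python
# Sketch only: the M-level 2D DWT (9/7 biorthogonal filters) that VSNR builds on,
# computed on the CPU with PyWavelets rather than CUDA.
import numpy as np
import pywt

def dwt_subbands(image: np.ndarray, levels: int = 5):
    """Return the 9/7 wavelet decomposition of a grayscale image."""
    # Returned as [cA_M, (cH_M, cV_M, cD_M), ..., (cH_1, cV_1, cD_1)]
    return pywt.wavedec2(image.astype(np.float64), wavelet="bior4.4", level=levels)

def subband_energies(coeffs):
    """Per-subband energies (coarsest to finest), the kind of statistic VSNR aggregates."""
    energies = []
    for ch, cv, cd in coeffs[1:]:
        energies.append((float(np.sum(ch ** 2)),
                         float(np.sum(cv ** 2)),
                         float(np.sum(cd ** 2))))
    return energies

if __name__ == "__main__":
    img = np.random.rand(512, 512)  # stand-in for a test image
    print(subband_energies(dwt_subbands(img))[:2])
```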
3

Multi-modality quality assessment for unconstrained biometric samples / Évaluation de la qualité multimodale pour des échantillons biométriques non soumis à des contraintes

Liu, Xinwei 22 June 2018 (has links)
The aim of this research is to investigate multi-modality biometric image quality assessment methods for unconstrained samples. Studies in biometrics have noted the significance of sample quality for a recognition system or a comparison algorithm, because the performance of a biometric system depends mainly on the quality of the sample images. The need to assess the quality of multi-modality biometric samples has grown with the requirement for high-accuracy multi-modality biometric systems. Following an introduction and background in biometrics and biometric sample quality, we introduce the concept of biometric sample quality assessment for multiple modalities. Recently established ISO/IEC quality standards for fingerprint, iris, and face are presented. In addition, sample quality assessment approaches designed specifically for contact-based and contactless fingerprint, near-infrared and visible-wavelength iris, and face are surveyed. Following the survey, approaches for the performance evaluation of biometric sample quality assessment methods are also investigated. Based on the knowledge gathered from these challenges, we propose a common framework for the assessment of multi-modality biometric image quality. We review the previous classification of image-based quality attributes for a single biometric modality and investigate which image-based attributes are common across modalities. We then select and re-define the most important image-based quality attributes for the common framework. In order to link these quality attributes to real biometric samples, we develop a new multi-modality biometric image quality database that contains both high-quality sample images and degraded images for the contactless fingerprint, visible-wavelength iris, and face modalities; the degradation types are based on the selected common image-based quality attributes. Another important aspect of the proposed framework is image quality metrics and their applications in biometrics. We first introduce and classify existing image quality metrics and then conduct a brief survey of no-reference image quality metrics, which can be applied to biometric sample quality assessment. In addition, we investigate how no-reference image quality metrics have been used for the quality assessment of the fingerprint, iris, and face biometric modalities. Experiments for the performance evaluation of no-reference image quality metrics for visible-wavelength face and iris modalities are conducted. The experimental results indicate that several no-reference image quality metrics can assess the quality of both iris and face biometric samples. Lastly, we optimize the best-performing metric by re-training it; the re-trained image quality metric provides better recognition performance than the original. Through the work carried out in this thesis we have shown the applicability of no-reference image quality metrics for the assessment of unconstrained multi-modality biometric samples.
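As an illustration of the kind of image-based quality attributes such a framework might share across fingerprint, iris and face samples, here is a minimal sketch using NumPy/SciPy; the attribute set (sharpness, contrast, brightness) and the measures chosen for them are assumptions, not the thesis's definitions.

```python
# Illustrative sketch: three simple image-based quality attributes of the kind
# a multi-modality quality framework could compute for any biometric sample.
import numpy as np
from scipy import ndimage

def quality_attributes(gray: np.ndarray) -> dict:
    gray = gray.astype(np.float64)
    laplacian = ndimage.laplace(gray)            # responds strongly to sharp edges
    return {
        "sharpness": float(laplacian.var()),     # blur lowers this value
        "contrast": float(gray.std()),           # RMS contrast
        "brightness": float(gray.mean()),        # exposure proxy
    }

if __name__ == "__main__":
    sample = np.random.randint(0, 256, (480, 640)).astype(np.float64)
    print(quality_attributes(sample))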
4

Methods for the analysis of ordinal response data in medical image quality assessment

Keeble, C., Baxter, P.D., Gislason-Lee, Amber J., Treadgold, L.A., Davies, A.G. 12 April 2016 (has links)
Yes / The assessment of image quality in medical imaging often requires observers to rate images for some metric or detectability task. These subjective results are used in optimization, radiation dose reduction or system comparison studies and may be compared to objective measures from a computer vision algorithm performing the same task. One popular scoring approach is to use a Likert scale, then assign consecutive numbers to the categories. The mean of these response values is then taken and used for comparison with the objective or second subjective response. Agreement is often assessed using correlation coefficients. We highlight a number of weaknesses in this common approach, including inappropriate analyses of ordinal data and the inability to properly account for correlations caused by repeated images or observers. We suggest alternative data collection and analysis techniques such as amendments to the scale and multilevel proportional odds models. We detail the suitability of each approach depending upon the data structure and demonstrate each method using a medical imaging example. Whilst others have raised some of these issues, we evaluated the entire study from data collection to analysis, suggested sources for software and further reading, and provided a checklist plus flowchart for use with any ordinal data. We hope that raised awareness of the limitations of the current approaches will encourage greater method consideration and the utilization of a more appropriate analysis. More accurate comparisons between measures in medical imaging will lead to a more robust contribution to the imaging literature and ultimately improved patient care. / EU-funded PANORAMA project, funded by grants from Belgium, Italy, France, Netherlands, UK and the ENIAC Joint Undertaking.
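The proportional-odds alternative to averaging Likert scores that this abstract recommends can be sketched with statsmodels' OrderedModel; the column names ('rating', 'dose') and the simulated data are hypothetical, and a full multilevel model with random observer or image effects would need a mixed-effects ordinal package (e.g. R's ordinal::clmm) rather than this plain fit.

```python
# Sketch of an ordered-logit (proportional odds) fit to Likert-scale ratings,
# in place of taking the mean of the category numbers.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_proportional_odds(df: pd.DataFrame):
    # 'rating' is an ordered categorical response (e.g. 1..5), 'dose' a
    # continuous predictor such as a radiation dose level.
    df = df.copy()
    df["rating"] = pd.Categorical(df["rating"], ordered=True)
    model = OrderedModel(df["rating"], df[["dose"]], distr="logit")
    return model.fit(method="bfgs", disp=False)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dose = rng.uniform(0.5, 2.0, 200)
    rating = np.clip(np.round(2 + dose + rng.normal(0, 0.7, 200)), 1, 5)
    res = fit_proportional_odds(pd.DataFrame({"rating": rating, "dose": dose}))
    print(res.summary())
```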
5

Perceptual Image Quality Prediction Using Region of Interest Based Reduced Reference Metrics Over Wireless Channel

R V Krishnam Raju, Kunadha Raju January 2016 (has links)
As there is rapid growth in the field of wireless communications, the demand for various multimedia services is also increasing. The data being transmitted suffers from distortions through source encoding and transmission over error-prone channels, and these errors degrade the quality of the content. Service providers need to deliver a certain Quality of Experience (QoE) to the end user, and network providers are developing several methods to improve QoE. Human attention focuses mainly on distortions in the Region of Interest (ROI), which are perceived to be more annoying than those in the Background (BG). With this as a basis, the main aim of this thesis is to obtain an accurate quality metric that measures the quality of an image over the ROI and the BG independently. Reduced Reference Image Quality Assessment (RRIQA), in which only partial information about the reference image is available to assess the quality, is chosen for this purpose. The quality metric is computed independently over the ROI and the BG, and the two estimates are then pooled together to obtain an ROI-aware metric that predicts the Mean Opinion Score (MOS) of the image. In this thesis, the ROI-aware quality metric is used to measure the quality of distorted images generated over a wireless channel. The MOS values of the distorted images are obtained and validated against the MOS from a database [1]. It is observed that the proposed image quality assessment method provides better results than the traditional approach and performs well over a wide variety of distortions. The obtained results show that impairments in the ROI are perceived to be more annoying than those in the BG.
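The ROI/BG pooling step can be sketched as follows; the per-pixel distortion map (a simple absolute difference) and the ROI weight w are placeholders for illustration only, since the thesis uses a reduced-reference (RRIQA) comparison and its own pooling rather than this full-reference stand-in.

```python
# Sketch of ROI-aware pooling: estimate distortion separately over the ROI and
# the background, then combine with a weight that favours the ROI.
import numpy as np

def distortion_map(ref: np.ndarray, dist: np.ndarray) -> np.ndarray:
    """Placeholder per-pixel distortion map (absolute error)."""
    return np.abs(ref.astype(np.float64) - dist.astype(np.float64))

def roi_aware_score(ref: np.ndarray, dist: np.ndarray,
                    roi_mask: np.ndarray, w: float = 0.7) -> float:
    """Pool distortion over ROI and BG; w is an assumed ROI weight in [0, 1]."""
    d = distortion_map(ref, dist)
    pooled = w * d[roi_mask].mean() + (1.0 - w) * d[~roi_mask].mean()
    return float(-pooled)  # higher value = better predicted quality

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.uniform(0, 255, (240, 320))
    dist = ref + rng.normal(0, 12, ref.shape)   # hypothetical channel noise
    roi = np.zeros(ref.shape, dtype=bool)
    roi[60:180, 80:240] = True                  # hypothetical ROI mask
    print(roi_aware_score(ref, dist, roi))
```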
6

IMAGE AND VIDEO QUALITY ASSESSMENT WITH APPLICATIONS IN FIRST-PERSON VIDEOS

Chen Bai (6760616) 12 August 2019 (has links)
First-person videos (FPVs) captured by wearable cameras provide a huge amount of visual data. FPVs have different characteristics compared to broadcast videos and mobile videos. The video quality of FPVs is influenced by motion blur, tilt, rolling shutter and exposure distortions. In this work, we design image and video assessment methods applicable to FPVs.

Our video quality assessment mainly focuses on three quality problems. The first problem is video frame artifacts, including motion blur, tilt and rolling shutter, caused by the heavy and unstructured motion in FPVs. The second problem is exposure distortions. Videos suffer from exposure distortions when the camera sensor is not exposed to the proper amount of light, which is often caused by bad environmental lighting or capture angles. The third problem is the increased blurriness after video stabilization. The stabilized video is perceptually more blurry than its original because the masking effect of motion is no longer present.

To evaluate video frame artifacts, we introduce a new strategy for image quality estimation, called mutual reference (MR), which uses the information provided by overlapping content to estimate the image quality. The MR strategy is applied to FPVs by partitioning temporally nearby frames with similar content into sets, and estimating their visual quality using their mutual information. We propose one MR quality estimator, Local Visual Information (LVI), that estimates the relative quality between two images which overlap.

To alleviate exposure distortions, we propose a controllable illumination enhancement method that adjusts the amount of enhancement with a single knob. The knob can be controlled by our proposed over-enhancement measure, Lightness Order Measure (LOM). Since visual quality is an inverted U-shaped function of the amount of enhancement, our design controls the amount of enhancement so that the image is enhanced to the peak visual quality.

To estimate the increased blurriness after stabilization, we propose a visibility-inspired temporal pooling (VTP) mechanism. VTP models the motion masking effect on perceived video blurriness as the influence of the visibility of a frame on the temporal pooling weight of that frame's quality score. The measure for visibility is estimated as the proportion of spatial details that is visible to human observers.
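The visibility-weighted pooling idea can be sketched in a few lines; the per-frame quality scores, the visibility values and the simple weighted average below are illustrative assumptions, not the specific measures developed in the thesis.

```python
# Sketch of visibility-weighted temporal pooling in the spirit of VTP: frames
# whose spatial detail is less visible (e.g. during fast motion) contribute
# less to the pooled video score.
import numpy as np

def pooled_video_score(frame_scores: np.ndarray, frame_visibility: np.ndarray) -> float:
    """frame_scores: per-frame quality; frame_visibility: values in [0, 1]."""
    w = np.clip(frame_visibility, 0.0, 1.0)
    if w.sum() == 0:
        return float(frame_scores.mean())        # degenerate case: plain average
    return float(np.sum(w * frame_scores) / np.sum(w))

if __name__ == "__main__":
    scores = np.array([0.8, 0.6, 0.7, 0.3])      # hypothetical per-frame quality
    visibility = np.array([0.9, 0.2, 0.8, 0.1])  # low during fast motion
    print(pooled_video_score(scores, visibility))
```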
7

INFORMATION THEORETIC CRITERIA FOR IMAGE QUALITY ASSESSMENT BASED ON NATURAL SCENE STATISTICS

Zhang, Di January 2009 (has links)
Measurement of visual quality is crucial for various image and video processing applications. It is widely applied in image acquisition, media transmission, video compression, image/video restoration, etc. The goal of image quality assessment (QA) is to develop a computable quality metric that can properly evaluate image quality. The primary criterion is consistency with human judgment; computational complexity and resource limitations are also concerns in a successful QA design. Many methods have been proposed to date. Initially, quality measurements were taken directly from simple distance measurements, which refer to mathematical signal fidelity, such as the mean squared error or the Minkowski distance. Later, QA was extended to color spaces and the Fourier domain, in which images are better represented. Some existing methods also consider the adaptive ability of human vision. Unfortunately, the Video Quality Experts Group indicated that none of the more sophisticated metrics showed any great advantage over other existing metrics. This thesis proposes a general approach to the QA problem by evaluating image information entropy. An information theoretic model for the human visual system is proposed and an information theoretic solution is presented to derive the proper settings. The quality metric is validated on five subjective databases from different research labs, and the key points for a successful quality metric are investigated. During testing, our quality metric exhibits excellent consistency with human judgments and compatibility across the different databases. In addition to the full-reference quality assessment metric, blind quality assessment metrics are also proposed. In order to predict quality without a reference image, two concepts are introduced which quantitatively describe the inter-scale dependency under a multi-resolution framework. Based on the success of the full-reference quality metric, several blind quality metrics are proposed for five different types of distortions in the subjective databases. Our blind metrics outperform all existing blind metrics and are also able to handle some distortions which have not previously been investigated.
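The elementary quantity behind this information-theoretic approach, the Shannon entropy of an image's intensity distribution, can be computed as in the sketch below; the human-visual-system model the thesis builds on top of such measurements is not reproduced here.

```python
# Minimal sketch: Shannon entropy (in bits) of a grayscale image's histogram.
import numpy as np

def image_entropy(gray: np.ndarray, bins: int = 256) -> float:
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]                                  # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

if __name__ == "__main__":
    flat = np.full((64, 64), 128)                    # carries no information
    noisy = np.random.randint(0, 256, (64, 64))      # close to maximal entropy
    print(image_entropy(flat), image_entropy(noisy))
```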
8

A Study of the Structural Similarity Image Quality Measure with Applications to Image Processing

Brunet, Dominique 02 August 2012 (has links)
Since its introduction in 2004, the Structural Similarity (SSIM) index has gained widespread popularity as an image quality assessment measure. SSIM is currently recognized to be one of the most powerful methods of assessing the visual closeness of images. That being said, the Mean Squared Error (MSE), which performs very poorly from a perceptual point of view, still remains the most common optimization criterion in image processing applications because of its relative simplicity along with a number of other properties that are deemed important. In this thesis, some necessary tools to assist in the design of SSIM-optimal algorithms are developed. This work combines theoretical developments with experimental research and practical algorithms. The description of the mathematical properties of the SSIM index represents the principal theoretical achievement in this thesis. Indeed, it is demonstrated how the SSIM index can be transformed into a distance metric. Local convexity, quasi-convexity, symmetries and invariance properties are also proved. The study of the SSIM index is also generalized to a family of metrics called normalized (or M-relative) metrics. Various analytical techniques for different kinds of SSIM-based optimization are then devised. For example, the best approximation according to the SSIM is described for orthogonal and redundant basis sets. SSIM-geodesic paths with arclength parameterization are also traced between images. Finally, formulas for SSIM-optimal point estimators are obtained. On the experimental side of the research, the structural self-similarity of images is studied. This leads to the confirmation of the hypothesis that the main source of self-similarity of images lies in their regions of low variance. On the practical side, an implementation of local statistical tests on the image residual is proposed for the assessment of denoised images. Also, heuristic estimations of the SSIM index and the MSE are developed. The research performed in this thesis should lead to the development of state-of-the-art image denoising algorithms. A better comprehension of the mathematical properties of the SSIM index represents another step toward the replacement of the MSE with SSIM in image processing applications.
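The MSE-versus-SSIM contrast this abstract discusses can be illustrated with scikit-image; the test image and the two distortions (additive noise and Gaussian blur) are placeholders, and the point is only that the two criteria can rank distortions differently, not to reproduce any result from the thesis.

```python
# Sketch: compare MSE and SSIM for two different distortions of the same image.
import numpy as np
from scipy import ndimage
from skimage import data, img_as_float
from skimage.metrics import mean_squared_error, structural_similarity

ref = img_as_float(data.camera())
noisy = np.clip(ref + np.random.normal(0, 0.08, ref.shape), 0, 1)
blurred = ndimage.gaussian_filter(ref, sigma=2.0)

for name, img in [("noisy", noisy), ("blurred", blurred)]:
    mse = mean_squared_error(ref, img)
    ssim = structural_similarity(ref, img, data_range=1.0)
    print(f"{name:8s} MSE={mse:.5f}  SSIM={ssim:.3f}")
```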
9

Blind image and video quality assessment using natural scene and motion models

Saad, Michele Antoine 05 November 2013 (has links)
We tackle the problems of no-reference/blind image and video quality evaluation. The approach we take is that of modeling the statistical characteristics of natural images and videos, and utilizing deviations from those natural statistics as indicators of perceived quality. We propose a probabilistic model of natural scenes and a probabilistic model of natural videos to drive our image and video quality assessment (I/VQA) algorithms, respectively. The VQA problem is considerably different from the IQA problem since it imposes a number of challenges on top of those faced in IQA, namely the challenges arising from the temporal dimension of video, which plays an important role in influencing human perception of quality. We compare our IQA approach to the state of the art in blind, reduced-reference and full-reference methods, and show that it is top-performing. We compare our VQA approach to the state of the art in reduced- and full-reference methods (no blind VQA methods that perform reliably well exist), and show that our algorithm performs as well as the top-performing full- and reduced-reference algorithms in predicting human judgments of quality.
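The natural-scene-statistics idea described here can be sketched as follows: compute locally mean-subtracted, contrast-normalized (MSCN) coefficients and summarize them with a generalized-Gaussian shape parameter, since distorted images tend to deviate from the near-Gaussian shape of pristine ones. This is the generic NSS recipe, not the specific spatio-temporal models developed in the thesis, and the stand-in images are synthetic.

```python
# Sketch: MSCN coefficients plus a moment-matched generalized-Gaussian shape
# parameter, a basic NSS feature whose deviation signals distortion.
import numpy as np
from scipy import ndimage, special

def mscn(gray: np.ndarray, sigma: float = 7.0 / 6.0) -> np.ndarray:
    gray = gray.astype(np.float64)
    mu = ndimage.gaussian_filter(gray, sigma)
    var = ndimage.gaussian_filter(gray * gray, sigma) - mu * mu
    return (gray - mu) / (np.sqrt(np.maximum(var, 0)) + 1.0)

def ggd_shape(x: np.ndarray) -> float:
    """Moment-matching estimate of the generalized Gaussian shape parameter."""
    x = x.ravel()
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    gammas = np.arange(0.2, 10.0, 0.001)
    r = special.gamma(2.0 / gammas) ** 2 / (
        special.gamma(1.0 / gammas) * special.gamma(3.0 / gammas))
    return float(gammas[np.argmin(np.abs(r - rho))])

if __name__ == "__main__":
    pristine = np.random.normal(128, 30, (256, 256))   # stand-in image
    blurred = ndimage.gaussian_filter(pristine, 3.0)   # a simple distortion
    print(ggd_shape(mscn(pristine)), ggd_shape(mscn(blurred)))
```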
