1

Blind Full Reference Quality Assessment of Poisson Image Denoising

Zhang, Chen 05 June 2014 (has links)
No description available.
2

Analysis and Performance Optimization of a GPGPU Implementation of Image Quality Assessment (IQA) Algorithm VSNR

January 2017 (has links)
abstract: Image processing has changed the way we store, view and share images. One important component of sharing images over networks is image compression. Lossy image compression techniques compromise the quality of images to reduce their size. To ensure that the distortion introduced by compression is not readily detectable by humans, the perceived quality of an image needs to be maintained above a certain threshold. Determining this threshold is best done using human subjects, but that is impractical in real-world scenarios. As a solution, image quality assessment (IQA) algorithms are used to compute a fidelity score for an image automatically. However, IQA algorithms tend to run slowly because of the complex statistical computations involved, and General Purpose Graphics Processing Unit (GPGPU) programming is one of the solutions proposed to optimize their performance. This thesis presents a Compute Unified Device Architecture (CUDA) based optimized implementation of the full-reference IQA algorithm Visual Signal-to-Noise Ratio (VSNR), which uses an M-level 2D Discrete Wavelet Transform (DWT) with 9/7 biorthogonal filters among other statistical computations. The implementation is tested on four image quality databases containing images with multiple distortions and sizes ranging from 512 x 512 to 1600 x 1280. The CUDA implementation of VSNR shows a speedup of over 32x for 1600 x 1280 images, and the speedup scales with image size. The results show that the implementation is fast enough to apply VSNR to high-definition video at 60 fps. This work presents the optimizations gained from using the GPU's constant memory and from reusing allocated GPU memory, and shows the performance improvement obtained through profiler-driven GPGPU development in CUDA. The presented implementation can be deployed in production alongside existing applications. / Dissertation/Thesis / Masters Thesis Computer Science 2017
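The computational core named in this abstract is the separable 9/7 wavelet decomposition. As a minimal illustration (in NumPy rather than the thesis's CUDA, and using the standard published CDF 9/7 lifting constants, whose band-normalization convention varies between sources), one analysis pass over a 1-D signal can be sketched as:

```python
import numpy as np

# Standard CDF 9/7 lifting constants (analysis side, as used in JPEG 2000).
ALPHA = -1.586134342
BETA  = -0.052980118
GAMMA =  0.882911076
DELTA =  0.443506852
ZETA  =  1.149604398  # band scaling; the normalization convention varies

def dwt97_1d(x):
    """One level of the lifting-based CDF 9/7 analysis transform.

    x: 1-D float array of even length. Returns (lowpass, highpass).
    Edge samples are handled by symmetric reflection.
    """
    x = np.asarray(x, dtype=np.float64).copy()
    even, odd = x[0::2], x[1::2]

    def lift(target, source, coeff, shift):
        # target[i] += coeff * (source[i] + source[i + shift]), reflected at edges
        neighbor = np.empty_like(source)
        if shift > 0:
            neighbor[:-1] = source[1:]
            neighbor[-1] = source[-1]
        else:
            neighbor[1:] = source[:-1]
            neighbor[0] = source[0]
        target += coeff * (source + neighbor)

    lift(odd, even, ALPHA, +1)   # predict 1
    lift(even, odd, BETA, -1)    # update 1
    lift(odd, even, GAMMA, +1)   # predict 2
    lift(even, odd, DELTA, -1)   # update 2
    return even * ZETA, odd / ZETA
```

A 2-D level applies this pass along rows and then columns, and an M-level transform recurses on the lowpass band. In a CUDA version, the five constants are natural candidates for `__constant__` memory, and reusing the same device buffers across levels avoids repeated allocation, which is consistent with the optimizations the abstract describes.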
3

Multi-modality quality assessment for unconstrained biometric samples

Liu, Xinwei 22 June 2018 (has links)
The aim of this research is to investigate multi-modality biometric image quality assessment methods for unconstrained samples. Studies in biometrics have noted the significance of sample quality for a recognition system or a comparison algorithm, because the performance of a biometric system depends mainly on the quality of the sample images. The need to assess the quality of multi-modality biometric samples has grown with the demand for high-accuracy multi-modality biometric systems.

Following an introduction and background in biometrics and biometric sample quality, we introduce the concept of biometric sample quality assessment for multiple modalities. Recently established ISO/IEC quality standards for fingerprint, iris, and face are presented. In addition, sample quality assessment approaches designed specifically for contact-based and contactless fingerprint, near-infrared and visible-wavelength iris, as well as face are surveyed. Approaches for the performance evaluation of biometric sample quality assessment methods are also investigated.

Based on the knowledge gathered from these biometric sample quality assessment challenges, we propose a common framework for the assessment of multi-modality biometric image quality. We review the previous classification of image-based quality attributes for a single biometric modality, investigate which image-based attributes are common across modalities, and then select and re-define the most important ones for the common framework. To link these quality attributes to real biometric samples, we develop a new multi-modality biometric image quality database containing both high-quality sample images and degraded images for the contactless fingerprint, visible-wavelength iris, and face modalities. The degradation types are based on the selected common image-based quality attributes. Another important aspect of the proposed common framework is image quality metrics and their applications in biometrics. We first introduce and classify the existing image quality metrics and then conduct a brief survey of no-reference image quality metrics, which can be applied to biometric sample quality assessment. We also investigate how no-reference image quality metrics have been used for quality assessment of the fingerprint, iris, and face biometric modalities.

Experiments for the performance evaluation of no-reference image quality metrics on visible-wavelength face and iris samples are conducted. The experimental results indicate that several no-reference image quality metrics can assess the quality of both iris and face biometric samples with a strong correlation coefficient. Lastly, we optimize the best-performing metric by re-training it; the re-trained image quality metric provides better recognition performance than the original. Through the work carried out in this thesis we have shown the applicability of no-reference image quality metrics for the assessment of unconstrained multi-modality biometric samples.
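As a concrete (and deliberately simplified) illustration of the performance-evaluation step described above, the usefulness of a quality metric can be checked by correlating its per-sample scores against the recognition scores those samples achieve; the arrays below are hypothetical:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical data: one quality score per biometric sample, and the
# genuine comparison score each sample achieved in a recognition trial.
quality = np.array([0.91, 0.85, 0.40, 0.77, 0.22, 0.63])
genuine_score = np.array([0.88, 0.80, 0.35, 0.70, 0.30, 0.61])

# A useful quality metric ranks samples the way the recognizer does:
# low-quality samples should yield low genuine comparison scores.
rho, _ = spearmanr(quality, genuine_score)
r, _ = pearsonr(quality, genuine_score)
print(f"Spearman rho = {rho:.3f}, Pearson r = {r:.3f}")
```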
4

Perceptual Image Quality Prediction Using Region of Interest Based Reduced Reference Metrics Over Wireless Channel

R V Krishnam Raju, Kunadha Raju January 2016 (has links)
As the field of wireless communications grows rapidly, the demand for various multimedia services is also increasing. Transmitted data suffers distortions through source encoding and transmission over error-prone channels, and these errors degrade the quality of the content. Service providers therefore need to deliver a certain Quality of Experience (QoE) to the end user, and network providers are developing several methods for better QoE.

Human attention focuses mainly on distortions in the Region of Interest (ROI), which are perceived to be more annoying than those in the Background (BG). With this as a basis, the main aim of this thesis is to obtain an accurate quality metric that measures image quality over the ROI and the BG independently. Reduced Reference Image Quality Assessment (RRIQA) is chosen for this purpose; in this method, only partial information about the reference image is available when assessing quality. The metric is measured independently over the ROI and the BG, and the two estimates are then pooled into an ROI-aware metric that predicts the Mean Opinion Score (MOS) of the image.

In this thesis, the ROI-aware quality metric is used to measure the quality of distorted images generated over a wireless channel. The MOS values of the distorted images are obtained and validated against the MOS from a database [1]. The proposed image quality assessment method provides better results than the traditional approach and performs well over a wide variety of distortions. The obtained results show that impairments in the ROI are perceived to be more annoying than those in the BG.
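The pooling step described in this abstract can be sketched as a weighted combination of the two regional scores; the linear rule and the weight below are illustrative assumptions, not the model fitted in the thesis:

```python
def roi_aware_score(q_roi, q_bg, w_roi=0.7):
    """Pool independently computed ROI and background quality scores.

    w_roi > 0.5 encodes the finding that ROI impairments are perceived
    as more annoying than background ones; 0.7 is a hypothetical value.
    """
    return w_roi * q_roi + (1.0 - w_roi) * q_bg

# Example: a strong ROI distortion dominates a clean background
# (scores on a 1..5 MOS-like scale).
print(roi_aware_score(q_roi=2.1, q_bg=4.3))  # -> 2.76
```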
5

IMAGE AND VIDEO QUALITY ASSESSMENT WITH APPLICATIONS IN FIRST-PERSON VIDEOS

Chen Bai (6760616) 12 August 2019 (has links)
First-person videos (FPVs) captured by wearable cameras provide a huge amount of visual data. FPVs have different characteristics from broadcast and mobile videos: their quality is influenced by motion blur, tilt, rolling shutter and exposure distortions. In this work, we design image and video quality assessment methods applicable to FPVs.

Our video quality assessment focuses on three quality problems. The first is video frame artifacts, including motion blur, tilt and rolling shutter, caused by the heavy and unstructured motion in FPVs. The second is exposure distortions: videos suffer exposure distortions when the camera sensor is not exposed to the proper amount of light, which is often caused by bad environmental lighting or capture angles. The third is the increased blurriness after video stabilization; the stabilized video is perceptually more blurry than its original because the masking effect of motion is no longer present.

To evaluate video frame artifacts, we introduce a new strategy for image quality estimation, called mutual reference (MR), which uses the information provided by overlapping content to estimate image quality. The MR strategy is applied to FPVs by partitioning temporally nearby frames with similar content into sets and estimating their visual quality using their mutual information. We propose one MR quality estimator, Local Visual Information (LVI), that estimates the relative quality between two overlapping images.

To alleviate exposure distortions, we propose a controllable illumination enhancement method that adjusts the amount of enhancement with a single knob. The knob can be controlled by our proposed over-enhancement measure, the Lightness Order Measure (LOM). Since visual quality is an inverted-U-shaped function of the amount of enhancement, the design controls the amount of enhancement so that the image is enhanced to its peak visual quality.

To estimate the increased blurriness after stabilization, we propose a visibility-inspired temporal pooling (VTP) mechanism. VTP models the motion masking effect on perceived video blurriness as the influence of a frame's visibility on the temporal pooling weight of that frame's quality score. Visibility is estimated as the proportion of spatial details that is visible to human observers.
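The VTP idea lends itself to a compact sketch: frame-level quality scores are averaged with weights given by each frame's visibility, so that blur masked by motion counts less. The weighting below is a minimal assumption; the thesis's actual visibility estimator is more elaborate:

```python
import numpy as np

def vtp_pool(frame_quality, frame_visibility):
    """Visibility-weighted temporal pooling (minimal VTP sketch)."""
    q = np.asarray(frame_quality, dtype=float)
    v = np.asarray(frame_visibility, dtype=float)  # visible-detail fraction in [0, 1]
    return float(np.sum(v * q) / np.sum(v))

# A fast pan masks blur in frames 3-4, so their low scores count less.
print(vtp_pool([4.0, 3.8, 1.9, 2.1, 4.1], [0.9, 0.8, 0.2, 0.3, 0.9]))
```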
6

A Study of the Structural Similarity Image Quality Measure with Applications to Image Processing

Brunet, Dominique 02 August 2012 (has links)
Since its introduction in 2004, the Structural Similarity (SSIM) index has gained widespread popularity as an image quality assessment measure. SSIM is currently recognized to be one of the most powerful methods of assessing the visual closeness of images. That being said, the Mean Squared Error (MSE), which performs very poorly from a perceptual point of view, still remains the most common optimization criterion in image processing applications because of its relative simplicity along with a number of other properties that are deemed important. In this thesis, some necessary tools to assist in the design of SSIM-optimal algorithms are developed. This work combines theoretical developments with experimental research and practical algorithms. The description of the mathematical properties of the SSIM index represents the principal theoretical achievement in this thesis. Indeed, it is demonstrated how the SSIM index can be transformed into a distance metric. Local convexity, quasi-convexity, symmetries and invariance properties are also proved. The study of the SSIM index is also generalized to a family of metrics called normalized (or M-relative) metrics. Various analytical techniques for different kinds of SSIM-based optimization are then devised. For example, the best approximation according to the SSIM is described for orthogonal and redundant basis sets. SSIM-geodesic paths with arclength parameterization are also traced between images. Finally, formulas for SSIM-optimal point estimators are obtained. On the experimental side of the research, the structural self-similarity of images is studied. This leads to the confirmation of the hypothesis that the main source of self-similarity of images lies in their regions of low variance. On the practical side, an implementation of local statistical tests on the image residual is proposed for the assessment of denoised images. Also, heuristic estimations of the SSIM index and the MSE are developed. The research performed in this thesis should lead to the development of state-of-the-art image denoising algorithms. A better comprehension of the mathematical properties of the SSIM index represents another step toward the replacement of the MSE with SSIM in image processing applications.
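For reference, the single-window form of the SSIM index can be sketched as below, together with the square-root transformation under which (suitably normalized) SSIM-type measures behave as a distance, the property studied in this thesis. The constants are the standard K1 = 0.01, K2 = 0.03 choices for 8-bit images:

```python
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM between two equally sized patches.

    A full implementation slides a Gaussian window over the images
    and averages the local scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

def ssim_distance(x, y):
    """sqrt(1 - SSIM); SSIM <= 1, so this is well defined up to
    floating-point error (hence the max)."""
    return float(np.sqrt(max(0.0, 1.0 - ssim(x, y))))
```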
7

GPGPU based implementation of BLIINDS-II NR-IQA

January 2016 (has links)
abstract: The technological advances of the past few decades have made possible the creation and consumption of digital visual content at an explosive rate. Consequently, there is a need for efficient quality monitoring systems to ensure minimal degradation of images and videos during processing operations like compression, transmission and storage. Objective Image Quality Assessment (IQA) algorithms have been developed that predict quality scores matching well with human subjective quality assessment. However, a lot of research still remains before IQA algorithms can be deployed in real-world systems; long runtimes for a single image frame are a major hurdle. Graphics Processing Units (GPUs), equipped with massive numbers of computational cores, provide an opportunity to accelerate IQA algorithms by performing computations in parallel. Indeed, General Purpose Graphics Processing Unit (GPGPU) techniques have been applied to a few IQA algorithms that fall under the Full Reference paradigm. We present a GPGPU implementation of the Blind Image Integrity Notator using DCT Statistics (BLIINDS-II), which falls under the No Reference IQA paradigm. We achieve a speedup of over 30x over the previous CPU version of this algorithm. We test our implementation using various distorted images from the CSIQ database and present the observed performance trends. We achieve a very consistent runtime of around 9 milliseconds per distorted image, which makes possible the processing of over 100 images per second (100 fps). / Dissertation/Thesis / Masters Thesis Computer Science 2016
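The DCT-statistics stage that dominates BLIINDS-II's runtime processes every block independently, and that independence is what makes the GPGPU mapping natural. Below is a simplified NumPy/SciPy sketch of the per-block feature extraction; the published algorithm fits a generalized Gaussian to the non-DC coefficients over several scales, so the single summary statistic here is a stand-in:

```python
import numpy as np
from scipy.fftpack import dct

def block_dct_features(img, n=5):
    """For each n x n block, take the 2-D DCT and summarize the non-DC
    coefficients with the coefficient of frequency variation
    (std/mean of magnitudes). Simplified relative to BLIINDS-II."""
    h, w = img.shape
    feats = []
    for i in range(0, h - n + 1, n):
        for j in range(0, w - n + 1, n):
            block = img[i:i + n, j:j + n].astype(np.float64)
            coeffs = dct(dct(block, axis=0, norm="ortho"),
                         axis=1, norm="ortho")
            ac = np.abs(coeffs.ravel()[1:])  # drop the DC coefficient
            feats.append(ac.std() / (ac.mean() + 1e-12))
    return np.asarray(feats)
```

One natural GPU mapping assigns each image block to a thread block and computes the pooled statistics with parallel reductions, which is how per-image runtimes on the order of milliseconds become feasible.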
8

Hardware Acceleration of Most Apparent Distortion Image Quality Assessment Algorithm on FPGA Using OpenCL

January 2017 (has links)
abstract: The information era has brought about many technological advancements in the past few decades, leading to an exponential increase in the creation of digital images and videos. Digital images constantly pass through image processing algorithms for various reasons like compression, transmission and storage, and the data loss during these processes leaves us with degraded images. Hence, to ensure minimal degradation of images, quality assessment has become a requirement. Image Quality Assessment (IQA) has been researched and developed over the last several decades to predict quality scores in a manner that agrees with human judgments of quality. Modern IQA algorithms are quite effective at prediction accuracy, but their development has not focused on computational performance; the existing serial implementation requires a relatively large runtime, on the order of seconds for a single frame. Hardware acceleration using Field Programmable Gate Arrays (FPGAs) provides a reconfigurable computing fabric that can be tailored to a broad range of applications. Programming FPGAs has usually required expertise in hardware description languages (HDLs) or a high-level synthesis (HLS) tool. OpenCL is an open standard for cross-platform, parallel programming of heterogeneous systems; together with the Altera OpenCL SDK, it enables developers to exploit an FPGA's potential without extensive hardware knowledge. Hence, this thesis focuses on accelerating the computationally intensive part of the Most Apparent Distortion (MAD) algorithm on an FPGA using OpenCL. The results are compared with a CPU implementation to evaluate performance and efficiency gains. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2017
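For orientation, MAD itself is an adaptive blend of two quality estimates, and the expensive part is computing those estimates, which is what the thesis offloads. Here is a sketch of the published combination rule, with the blending constants quoted from the MAD paper and best treated as approximate:

```python
def mad_index(d_detect, d_appear, beta1=0.467, beta2=0.130):
    """Adaptive MAD blend: near-threshold distortions are judged mainly
    by the detection-based estimate d_detect, clearly visible ones by
    the appearance-based estimate d_appear."""
    alpha = 1.0 / (1.0 + beta1 * d_detect ** beta2)
    return (d_detect ** alpha) * (d_appear ** (1.0 - alpha))
```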
9

Texture Structure Analysis

January 2014 (has links)
abstract: Texture analysis plays an important role in applications like automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging and document processing, to name a few. Texture structure analysis is the process of studying the structure present in textures, which can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as one of the important pre-attentive cues in low-level image understanding. Like the HVS, image processing and computer vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity when looking at an arbitrary texture is introduced and addressed. One key contribution of this work is an objective no-reference perceptual texture regularity metric based on visual saliency. Other key contributions include an adaptive texture synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures. In order to use the best-performing visual attention model on textures, the most popular visual attention models are evaluated for their ability to predict visual saliency on textures. Since there is no publicly available database with ground-truth saliency maps on images with exclusively texture content, a new eye-tracking database is systematically built. Using the Visual Saliency Map (VSM) generated by the best visual attention model, the proposed texture regularity metric is computed. The metric is based on the observation that VSM characteristics differ between textures of differing regularity, and it combines two texture regularity scores: a textural similarity score and a spatial distribution score. In order to evaluate the performance of the proposed regularity metric, a texture regularity database called RegTEX is built as part of this work. It is shown through subjective testing that the proposed metric has a strong correlation with the Mean Opinion Score (MOS) for the perceived regularity of textures, is robust to geometric and photometric transformations, and outperforms some of the popular texture regularity metrics in predicting perceived regularity. The impact of the proposed metric on the performance of many image-processing applications is also presented. The influence of perceived texture regularity on the perceptual quality of synthesized textures is demonstrated through a synthesized-textures database named SynTEX. Subjective testing shows that textures with different degrees of perceived regularity exhibit different degrees of vulnerability to artifacts resulting from different texture synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture synthesis method based on the perceived regularity of the original texture. A reduced-reference texture quality metric for texture synthesis is proposed as well, based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures; the perceived granularity is quantified through a new granularity metric proposed in this work. It is shown through subjective testing that the proposed quality metric, using just two parameters, has a strong correlation with the MOS for the fidelity of synthesized textures and outperforms state-of-the-art full-reference quality metrics on three different texture databases. Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established. / Dissertation/Thesis / Ph.D. Electrical Engineering 2014
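A heavily hypothetical sketch of how such a two-quantity reduced-reference score could be assembled; the combination rule and the weights below are illustrative assumptions, not the thesis's fitted two-parameter model:

```python
def rr_texture_quality(reg_ref, reg_syn, gran_ref, gran_syn,
                       a=1.0, b=1.0):
    """Reduced-reference texture fidelity from the change in perceived
    regularity and granularity between the original and synthesized
    textures. a and b are illustrative weights."""
    d_reg = abs(reg_ref - reg_syn)
    d_gran = abs(gran_ref - gran_syn)
    return 1.0 / (1.0 + a * d_reg + b * d_gran)  # 1.0 = perfect fidelity

print(rr_texture_quality(0.80, 0.55, 0.30, 0.45))  # -> 0.714...
```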
