181

Subjektive Bildqualität digitaler Panoramaschichtaufnahmen in Relation zur Exposition / Subjective image quality of digital panoramic radiographs in relation to exposure

Hadjizadeh-Ziabari, Seyed Madjid 15 July 2002 (has links)
Die Zielsetzung dieser Studie war es, für digitale Panoramaschichtaufnahmen die Relation der subjektiven Bildqualität zur Exposition zu bestimmen. An Hand von Humanpräparataufnahmen konnten bei Standardwerten sowie experimentellen Variationen der Expositionsdaten weiterhin Effekte der intentionellen Unterexposition sowie der Strahlenaufhärtung auf die subjektive Bildqualität quantifiziert werden. Die Herstellung der Aufnahmen erfolgte auf einem Sirona Orthophos DS. Dabei wurden 37 Aufnahmen mit Expositionswerten von 60 kV/9 mA - 84 kV/13 mA aus dem Herstellerprogramm erzeugt und mit Hilfe einer modifizierten Steuerungssoftware zusätzliche PSA mit experimentellen Einstellungen von 60 kV/3 mA - 90 kV/11 mA hergestellt. Für die Beurteilung und Auswertung der subjektiven Bildqualität wurde eine individuelle Software (Eldoredo V2.2) entwickelt. 39 Zahnärzte und 5 MTRA beurteilten die Aufnahmen damit an einem Monitor unter standardisierten Bedingungen. Ein iterativer Beurteilungsprozess erlaubte, eine Serie von 1369 (37 x 37) PSA-Abbildungspaaren darzustellen. Für jedes Paar entschieden die Untersucher an Hand definierter Kriterien, ob eine PSA hinsichtlich der subjektiv beurteilten Bildqualität vorzuziehen oder Äquivalenz gegeben sei. Nach statistischer Aufarbeitung der Einzelentscheidungen ließ sich damit für jede Expositionsstufe ein Index der Bildqualität berechnen. Bei Expositionswerten in einem Bereich von 60 kV/9 mA - 69 kV/15 mA der Herstellersoftware und 60 kV/5 - 15 mA sowie 70 kV/5 - 15 mA der experimentellen Software fanden sich dabei keine signifikanten Verteilungsunterschiede der Bildqualität. Eine intentionelle Unterexposition bei digitalen PSA-Geräten, etwa bei Kindern oder häufigen Wiederholungsaufnahmen, kann nach den vorliegenden Ergebnissen vertreten werden, ohne dass es dabei zu einer signifikanten Verschlechterung der Bildqualität kommt. Damit ist im Gegensatz dazu bei einer Strahlenaufhärtung in dem untersuchten digitalen System stets zu rechnen. Insgesamt zeigen die Ergebnisse, dass auch digitale PSA-Systeme beachtliche Reserven hinsichtlich der Dosisminimierung aufweisen können. / The aim of this study was to describe the relation of the subjective image quality of digital panoramic radiographs to exposure. In addition, variations of exposure were compared to standard settings, thus evaluating the effects of intentional underexposure on the achievable image quality. A Sirona Orthophos DS unit was used to produce 37 digital panoramic images of a human skull. Exposure values ranged from 60 kV/9 mA to 84 kV/13 mA in the conventional and 60 kV/3 mA to 90 kV/11 mA in the experimental setting. Assessment and evaluation of the subjective image quality were performed with an HTML-based protocol. For each of 1,369 (37 x 37) image pairs, 39 dentists and 5 radiographic assistants indicated whether one image was preferable or both were of equivalent quality. The decisions were computed to a quality index for each exposure setting. Statistical analysis demonstrated no significant differences in image quality between 60 kV/9 mA - 69 kV/15 mA in the conventional and 60 kV/5 to 15 mA as well as 70 kV/5 to 15 mA in the experimental setting. Following these results, a considerable dose reduction by means of intentional underexposure can be achieved without any loss of image quality. Increasing the tube voltage to 80 kV and above likewise reduces the absorbed dose; however, these images showed a highly significant loss of quality.
In summary, the results demonstrate equivalent image quality of digital panoramic images over a very wide range of exposure values. The feasible dose reduction might be of interest not only for individuals (minors, repeated exposures), but also for defining general principles of panoramic imaging.
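The statistical step described above, turning pairwise preference/equivalence decisions into a per-exposure quality index, can be illustrated with a minimal sketch. This is not the thesis' Eldoredo software or its exact statistics; it is a simple win-fraction index with equivalences counted as half a preference, and the exposure labels and decisions below are invented.

```python
# Illustrative sketch: derive a simple quality index per exposure setting
# from pairwise preference decisions (image_a, image_b, outcome),
# where outcome is "a", "b", or "tie" (equivalence).
from collections import defaultdict

def quality_index(decisions):
    wins = defaultdict(float)   # preference score per exposure setting
    counts = defaultdict(int)   # number of comparisons each setting appeared in
    for a, b, outcome in decisions:
        counts[a] += 1
        counts[b] += 1
        if outcome == "a":
            wins[a] += 1.0
        elif outcome == "b":
            wins[b] += 1.0
        else:                   # equivalence: half a "win" for each image
            wins[a] += 0.5
            wins[b] += 0.5
    # index = fraction of comparisons in which a setting was preferred (or equal)
    return {img: wins[img] / counts[img] for img in counts}

# Example: three observers comparing two exposure settings
decisions = [("60kV/9mA", "84kV/13mA", "a"),
             ("60kV/9mA", "84kV/13mA", "tie"),
             ("60kV/9mA", "84kV/13mA", "b")]
print(quality_index(decisions))   # {'60kV/9mA': 0.5, '84kV/13mA': 0.5}
```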
182

Reconstrução espectral de tubos de radiação odontológicos usando a transformada inversa de Laplace da curva de atenuação / Spectral Reconstruction of Dental X-Ray Tubes Using the Inverse Laplace Transform of the Attenuation Curve.

Malezan, Alex 27 June 2013 (has links)
No estudo de imagens radiográficas, os parâmetros relacionados ao contraste objeto, SC, razão sinal ruído, SNR, e dose, estão vinculados à forma do espectro de raios X utilizado e seu conhecimento permite predizer e otimizar a qualidade da imagem. Neste trabalho foi desenvolvida uma metodologia que permite obter o espectro de tubos de raios X odontológicos de uso clínico de forma indireta. Esta metodologia é baseada na aplicação de um modelo matemático que utiliza a transformada inversa de Laplace da curva de transmissão do feixe para gerar dados sobre a distribuição espectral do mesmo. Com o auxílio de uma câmara de ionização e filtros de alumínio de alta pureza, foram levantadas as curvas de transmissão de 8 tubos de raios X disponíveis comercialmente. Para a validação do método foi realizada a espectrometria direta com detector de telureto de cádmio (CdTe), cuja resposta foi determinada por simulação Monte Carlo (MC). A partir da reconstrução espectral obtida, foram realizados estudos sobre os parâmetros de qualidade de imagem SNR, contraste objeto, SC, e KERMA na entrada da pele. O desempenho dos tubos foi avaliado com base na relação entre SNR e KERMA na entrada da pele. Os resultados mostram que é possível determinar a distribuição espectral de tubos de raios X odontológicos com base no método proposto. A relação proposta entre SNR e KERMA na entrada da pele sugere que tubos com fótons de baixa energia possuem baixo rendimento. / In the study of radiographic images, the parameters related to subject contrast (SC), signal-to-noise ratio (SNR), and dose are linked to the shape of the X-ray spectrum used, and knowledge of the spectrum makes it possible to predict and optimize image quality. In this work we developed a methodology to obtain the spectrum of dental X-ray tubes in clinical use in an indirect way. This methodology is based on the application of a mathematical model that uses the inverse Laplace transform of the attenuation curve to generate data on the spectral distribution of the beam. With the aid of an ionization chamber and high-purity aluminum filters, the transmission curves of 8 commercially available X-ray tubes were measured. The method was validated by direct spectrometry with a cadmium telluride (CdTe) detector, whose response was determined by Monte Carlo (MC) simulation. From the reconstructed spectra, studies were carried out on the image quality parameters SNR and subject contrast (SC), and on the entrance skin KERMA. The performance of the tubes was evaluated based on the relationship between SNR and entrance skin KERMA. The results show that it is possible to determine the spectral distribution of dental X-ray tubes using the proposed method. The proposed relationship between SNR and entrance skin KERMA suggests that tubes with low-energy photons have low performance.
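For readers unfamiliar with the underlying relation, the following display is a schematic statement of why the attenuation (transmission) curve acts as a Laplace transform of the spectrum; the symbols are generic and this is not necessarily the thesis' exact formulation. The transmitted signal through an absorber of thickness x is an exponentially weighted integral over the spectrum, and rewriting it in terms of the attenuation coefficient turns it into a Laplace integral.

```latex
% Transmission through a filter of thickness x for a spectrum \Phi(E):
T(x) = \frac{\int_0^{E_{\max}} \Phi(E)\, e^{-\mu(E)\,x}\, \mathrm{d}E}
            {\int_0^{E_{\max}} \Phi(E)\, \mathrm{d}E}
% Changing variables from E to \mu (monotonic for aluminum over the
% diagnostic energy range) gives a Laplace integral, so the spectrum
% follows from an inverse Laplace transform of the measured curve:
T(x) = \int_0^{\infty} \tilde{\Phi}(\mu)\, e^{-\mu x}\, \mathrm{d}\mu
     = \mathcal{L}\{\tilde{\Phi}\}(x)
\quad\Longrightarrow\quad
\tilde{\Phi}(\mu) = \mathcal{L}^{-1}\{T\}(\mu).
```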
183

Microcomputed tomography dosimetry and image quality in preclinical image-guided radiation therapy

Johnstone, Christopher Daniel 29 April 2019 (has links)
Motivated by the need to standardize preclinical imaging for image-guided radiation therapy (IGRT), we examine the parameters that influence microcomputed tomography (microCT) scans in the realm of image quality and absorbed dose to tissue, including therapy beam measurements of small fields. Preclinical radiation research aims to understand radiation-induced effects in living tissues to improve quality of life. Small targets and low kilovoltage x-rays create challenges that do not arise in clinical radiation therapy. Evidence based on our multi-institutional study reveals considerable variation in microCT image quality from one institution to the next. We propose the adoption of recommended tolerance levels to provide a baseline for producing satisfactory and reproducible microCT scans for accurate dose delivery in preclinical IGRT. Absorbed dose imparted by these microCT images may produce deterministic effects that can negatively influence a radiobiological study. Through Monte Carlo (MC) methods we establish absorbed microCT imaging dose to a variety of tissues and murine sizes for a comprehensive combination of imaging parameters. Radiation beam quality in the small confines of a preclinical irradiator is also established to quantify the effects of beam scatter on half-value layer measurements. MicroCT scans of varying imaging protocols are also compared for murine subjects. Absorbed imaging doses to tissues are established and presented alongside their respective microCT images, providing a visual bridge to systematically link image quality and imaging dose. We then characterize a novel small plastic scintillating dosimeter to experimentally measure microCT imaging and therapy beams in real-time. The presented scintillating dosimeter is specifically characterized for the low energies and small fields found in preclinical research. Beam output is measured for small fields previously only achievable using film. Finally, quality assurance tests are recommended for a preclinical IGRT unit. Within this dissertation, a narrative is presented for guiding preclinical radiotherapy towards producing high-quality microCT images with an understanding of the absorbed imaging dose deposited to tissues, including providing a tool to measure small radiation fields. / Graduate
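One of the beam-quality measurements mentioned above is the half-value layer (HVL). As a hedged illustration of how an HVL can be extracted from a measured transmission curve (this helper is hypothetical and not taken from the dissertation), log-linear interpolation between the two filter thicknesses bracketing 50% transmission is a common approach:

```python
# Hypothetical helper: estimate the first half-value layer (HVL) from a
# measured transmission curve by log-linear interpolation between the two
# filter thicknesses that bracket 50% transmission.
import math

def first_hvl(thickness_mm, transmission):
    """thickness_mm: increasing filter thicknesses; transmission: relative signal (1.0 at 0 mm)."""
    for i in range(1, len(thickness_mm)):
        if transmission[i] <= 0.5 <= transmission[i - 1]:
            t0, t1 = thickness_mm[i - 1], thickness_mm[i]
            a0, a1 = math.log(transmission[i - 1]), math.log(transmission[i])
            # interpolate ln(T) linearly in thickness and solve ln(T) = ln(0.5)
            return t0 + (math.log(0.5) - a0) * (t1 - t0) / (a1 - a0)
    raise ValueError("transmission never drops below 50%")

print(round(first_hvl([0.0, 1.0, 2.0, 3.0], [1.0, 0.71, 0.50, 0.36]), 3))  # ~2.0 mm Al
```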
184

Développement d’un outil d’optimisation de la dose aux organes en fonction de la qualité image pour l’imagerie scanographique / Tool development for organ dose optimization taking into account the image quality in Computed Tomography

Adrien, Camille 30 September 2015 (has links)
Ces dernières années, la multiplication du nombre d’actes d’imagerie scanographique a eu pour conséquence l’augmentation de la dose collective due aux examens d’imagerie médicale. La dose au patient en imagerie scanographique est donc devenue un enjeu de santé publique majeur impliquant l’optimisation des protocoles d’examen, ces derniers devant tenir compte de la qualité image, indispensable aux radiologues pour poser leur diagnostic. En pratique clinique, l’optimisation est réalisée à partir d’indicateurs empiriques ne donnant pas accès à la dose aux organes et la qualité image est mesurée sur des fantômes spécifiques, tel que le fantôme CATPHAN®. Sans aucune information sur la dose aux organes et aucun outil pour prendre en compte l’avis du praticien, il est difficile d’optimiser correctement les protocoles. Le but de ce travail de thèse est de développer un outil qui permettra l’optimisation de la dose au patient tout en préservant la qualité image nécessaire au diagnostic. Ce travail est scindé en deux parties : (i) le développement d’un simulateur de dose Monte Carlo (MC) à partir du code PENELOPE, et (ii) l’estimation d’un critère de qualité image objectif. Dans ce but, le scanner GE VCT Lightspeed 64 a été modélisé à partir des informations fournies dans la note technique du constructeur et en adaptant la méthode proposée par Turner et al (Med. Phys. 36:2154-2164). Les mouvements axial et hélicoïdal du tube ont été implémentés dans l’outil MC. Pour améliorer l’efficacité de la simulation, les techniques de réduction de variance dites de splitting circulaire et longitudinal ont été utilisées. Ces deux réductions de variances permettent de reproduire le mouvement uniforme du tube le long de l’axe du scanner de manière discrète. La validation expérimentale de l’outil MC a été réalisée dans un premier temps en conditions homogènes avec un fantôme fabriqué au laboratoire et le fantôme CTDI, habituellement utilisé en routine clinique pour les contrôles qualité. Puis, la distribution de la dose absorbée dans le fantôme anthropomorphe CIRS ATOM, a été mesurée avec des détecteurs OSL et des films Gafchromic® XR-QA2. Ensuite, la dose aux organes a été simulée pour différentes acquisitions dans le fantôme femme de la CIPR 110 afin de créer une base de données utilisable en clinique. En parallèle, la qualité image a été étudiée en utilisant le fantôme CATPHAN® 600. A partir du module CTP 404, le rapport signal sur bruit (SNR pour signal to noise ratio) a été calculé en utilisant le modèle développé par Rose (J. Opt. Soc. Am. A 16:633-645). Un grand nombre d’images, correspondant à différents paramètres d’acquisition et de reconstruction, ont été analysées afin d’étudier les variations du SNR. Les acquisitions avec un SNR proche du critère de Rose ont été sélectionnées pour permettre des nouvelles acquisitions avec un fantôme préclinique contenant des petites structures suspectes en PMMA de différents diamètres. Ces images ont été analysées par deux radiologues expérimentés. Sur chaque image, ils devaient déterminer si une anomalie était présente ou non et indiquer leur niveau de confiance sur leur choix. Deux courbes ROC ont ainsi été obtenues : une pour les anomalies dites « détectables » par le critère de Rose (SNR > 5), et une pour les anomalies dites « non-détectables ». 
L’analyse des courbes montre que les deux radiologues détectent plus facilement les lésions suspectes lorsque que le critère de Rose est satisfait, démontrant le potentiel du modèle de Rose dans l’évaluation de la qualité image pour les tâches cliniques de détection. En conclusion, à partir des paramètres d’acquisition, la dose aux organes a été corrélée à des valeurs de SNR. Les premiers résultats prouvent qu’il est possible d’optimiser les protocoles en utilisant la dose aux organes et le critère de Rose, avec une réduction de la dose pouvant aller jusqu’à un facteur 6. / Due to the significant rise of computed tomography (CT) exams in the past few years and the increase of the collective dose due to medical exams, dose estimation in CT imaging has become a major public health issue. However dose optimization cannot be considered without taking into account the image quality which has to be good enough for radiologists. In clinical practice, optimization is obtained through empirical index and image quality using measurements performed on specific phantoms like the CATPHAN®. Based on this kind of information, it is thus difficult to correctly optimize protocols regarding organ doses and radiologist criteria. Therefore our goal is to develop a tool allowing the optimization of the patient dose while preserving the image quality needed for diagnosis. The work is divided into two main parts: (i) the development of a Monte Carlo dose simulator based on the PENELOPE code, and (ii) the assessment of an objective image quality criterion. For that purpose, the GE Lightspeed VCT 64 CT tube was modelled with information provided by the manufacturer technical note and by adapting the method proposed by Turner et al (Med. Phys. 36: 2154-2164). The axial and helical movements of the X-ray tube were then implemented into the MC tool. To improve the efficiency of the simulation, two variance reduction techniques were used: a circular and a translational splitting. The splitting algorithms allow a uniform particle distribution along the gantry path to simulate the continuous gantry motion in a discrete way. Validations were performed in homogeneous conditions using a home-made phantom and the well-known CTDI phantoms. Then, dose values were measured in CIRS ATOM anthropomorphic phantom using both optically stimulated luminescence dosimeters for point doses and XR-QA Gafchromic® films for relative dose maps. Comparisons between measured and simulated values enabled us to validate the MC tool used for dosimetric purposes. Finally, organ doses for several acquisition parameters into the ICRP 110 numerical female phantoms were simulated in order to build a dosimetric data base which could be used in clinical practice. In parallel to this work, image quality was first studied using the CATPHAN® 600. From the CTP 404 inserts, the signal-to-noise ratio (SNR) was then computed by using the classical Rose model (J. Opt. Soc. Am. A 16:633-645). An extensive number of images, linked to several acquisitions setups, were analyzed and SNR variations studied. Acquisitions with a SNR closed to the Rose criterion were selected. New acquisitions, based on those selected, were performed with a pre-clinical phantom containing suspect structures in PMMA. These images were presented to two senior radiologists. Both of them reviewed all images and indicated if they were able to locate the structures or not using a 5 confidence levels scale. 
Two ROC curves were plotted to compare detection ability for structures deemed detectable by the Rose criterion (SNR > 5) and for those deemed non-detectable. Results revealed a significant difference between the two types of image and thus demonstrated the potential of the Rose criterion for image quality quantification in CT. Ultimately, organ dose estimations were linked to SNR values through acquisition parameters. Preliminary results proved that an optimization can be performed using the Rose criterion and organ dose estimation, leading to a dose reduction by a factor of up to 6.
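As a reminder of the detectability criterion cited above, stated here in its textbook form with generic symbols (not necessarily the exact formulation used in the thesis), the Rose model relates the SNR of a uniform object to its contrast, its area, and the background photon density:

```latex
% Rose model: SNR of a uniform object of area A and contrast C against a
% background with mean photon density \bar{N} (photons per unit area)
\mathrm{SNR}_{\mathrm{Rose}} = C\,\sqrt{\bar{N}\,A},
\qquad \text{object considered detectable when } \mathrm{SNR}_{\mathrm{Rose}} \gtrsim 5 .
```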
185

An Algorithm for Image Quality Assessment

Ivkovic, Goran 10 July 2003 (has links)
Image quality measures are used to optimize image processing algorithms and evaluate their performance. The only reliable way to assess image quality is subjective evaluation by human observers, where the mean value of their scores is used as the quality measure. This is known as the mean opinion score (MOS). In addition to this measure there are various objective (quantitative) measures. The most widely used quantitative measures are mean squared error (MSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR). Since these simple measures do not always produce results that are in agreement with subjective evaluation, many other quality measures have been proposed. They are mostly various modifications of MSE, which try to take into account some properties of the human visual system (HVS), such as the nonlinear character of brightness perception, the contrast sensitivity function (CSF) and texture masking. In these approaches the quality measure is computed as the MSE of input image intensities or of frequency-domain coefficients obtained after some transform (DFT, DCT, etc.), weighted by coefficients which account for the mentioned properties of the HVS. These measures have some advantages over MSE, but their ability to predict image quality is still limited. A different approach is presented here. The quality measure proposed here uses a simple model of the HVS with one user-defined parameter, whose value depends on the reference image. This quality measure is based on the average value of locally computed correlation coefficients. This takes into account structural similarity between the original and distorted images, which cannot be measured by MSE or any kind of weighted MSE. The proposed measure also differentiates between random and signal-dependent distortion, because these two have different effects on a human observer. This is achieved by computing the average correlation coefficient between the reference image and the error image. The performance of the proposed quality measure is illustrated by examples involving images with different types of degradation.
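A minimal sketch of the kind of measure described, assuming 8x8 non-overlapping blocks and a simple convention for flat blocks (both choices are assumptions, not the author's specification):

```python
# Illustrative sketch (not the author's exact measure): average of Pearson
# correlation coefficients computed over local blocks of the reference and
# distorted images.
import numpy as np

def mean_local_correlation(ref, dist, block=8):
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    coeffs = []
    for i in range(0, ref.shape[0] - block + 1, block):
        for j in range(0, ref.shape[1] - block + 1, block):
            r = ref[i:i + block, j:j + block].ravel()
            d = dist[i:i + block, j:j + block].ravel()
            if r.std() < 1e-12 or d.std() < 1e-12:
                coeffs.append(1.0 if np.allclose(r, d) else 0.0)  # flat-block convention
                continue
            coeffs.append(np.corrcoef(r, d)[0, 1])
    return float(np.mean(coeffs))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
noisy = np.clip(img + rng.normal(0, 20, img.shape), 0, 255)
print(mean_local_correlation(img, noisy))  # below 1.0 for the distorted image
```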
186

[en] IMAGE QUALITY METRICS FOR FACE RECOGNITION / [pt] MEDIDAS DE QUALIDADE DE IMAGENS PARA RECONHECIMENTO FACIAL

JOSÉ LUIZ BUONOMO DE PINHO 09 April 2014 (has links)
[pt] O Reconhecimento Facial é o processo de identificação de uma pessoa a partir da imagem de sua face. Na forma mais usual, o processo de identificação consiste em extrair informações dessa imagem e compará-las com informações relativas a outras imagens armazenadas numa base de dados e por fim indicar na saída a imagem da base mais similar à imagem de entrada. O desempenho desse processo está diretamente ligado à qualidade das imagens, tanto das que estão armazenadas na base de dados, quanto da imagem do indivíduo cuja identidade está sendo determinada. Por isso, convém que a qualidade das imagens faciais seja avaliada antes que estas sejam submetidas ao procedimento de reconhecimento. A maioria dos métodos apresentados até o momento na literatura baseia-se em um conjunto de critérios, cada um voltado a um atributo isolado da imagem. A qualidade da imagem é considerada adequada se aprovada por todos os critérios individualmente. Desconsidera-se, portanto, o efeito cumulativo de diversos fatores que afetam a qualidade das imagens e, por conseguinte, o desempenho do reconhecimento facial. Essa monografia propõe uma metodologia para o projeto de métricas de qualidade de imagens faciais que expressem num único índice o efeito combinado de diversos fatores que afetam o reconhecimento. Tal índice é dado por uma função de um conjunto de atributos extraídos diretamente da imagem. O presente estudo analisa experimentalmente uma função linear e uma rede neural do tipo back-propagation como alternativas para a estimativa de qualidade a partir dos atributos. Experimentos conduzidos sobre a base de dados IMM para o algoritmo de reconhecimento baseado em padrões binários locais comprovam o bom desempenho da metodologia. / [en] Face Recognition is the process of identifying people based on facial images. In its most usual form, the identification procedure consists of extracting information from an input face image and comparing it to the records of other face images stored in a face database, and finally indicating the stored image most similar to the input image. The performance of this process is directly dependent on the quality of the input image, as well as on the images in the database. Thus, it is important that the quality of a face image is tested before it is given to the recognition procedure, either as an input image or as a new record in the face database. Most methods proposed thus far are based on a set of criteria, each one devoted to an isolated attribute. The image quality is considered adequate if approved by all criteria individually. Thus, the cumulative effect of different factors affecting the image quality is not taken into account. This dissertation proposes a methodology for the design of quality metrics for facial images that express in a single scalar the combined effect of multiple factors affecting the quality. Such a score is given by a function of attributes extracted directly from the image. This study investigates a linear and a non-linear approach for quality assessment. Experiments conducted on the IMM face database with a Local Binary Pattern face recognition algorithm demonstrate the good performance of the proposed methodology.
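A toy sketch of the linear variant, assuming a handful of hand-picked attributes and a least-squares fit; the attribute names, data and target scores below are purely illustrative and not from the thesis:

```python
# Hypothetical sketch of a linear face-image quality metric: a single score
# obtained as a learned linear combination of extracted image attributes.
import numpy as np

# Each row: [sharpness, mean_brightness, contrast, pose_deviation]; target: recognition score
X = np.array([[0.9, 0.55, 0.40, 0.05],
              [0.4, 0.30, 0.20, 0.30],
              [0.7, 0.60, 0.35, 0.10],
              [0.2, 0.80, 0.10, 0.45]])
y = np.array([0.95, 0.40, 0.80, 0.20])

# Fit weights (with bias) by least squares, then score a new face image
A = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
new_face = np.array([0.8, 0.50, 0.30, 0.08, 1.0])
print(float(new_face @ w))  # predicted quality score for the new face image
```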
187

A Scalable, Secure, and Energy-Efficient Image Representation for Wireless Systems

Woo, Tim January 2004 (has links)
The recent growth in wireless communications presents a new challenge to multimedia communications. Digital image transmission is a very common form of multimedia communication. Due to the limited bandwidth and broadcast nature of the wireless medium, it is necessary to compress and encrypt images before they are sent. On the other hand, it is important to efficiently utilize the limited energy in wireless devices. In a wireless device, two major sources of energy consumption are energy used for computation and energy used for transmission. Computation energy can be reduced by minimizing the time spent on compression and encryption. Transmission energy can be reduced by sending a smaller image file that is obtained by compressing the original highest-quality image. Image quality is often sacrificed in the compression process. Therefore, users should have the flexibility to control the image quality to determine whether such a tradeoff is acceptable. It is also desirable for users to have control over image quality in different areas of the image so that less important areas can be compressed more, while retaining the details in important areas. To reduce computations for encryption, a partial encryption scheme can be employed to encrypt only the critical parts of an image file, without sacrificing security. This thesis proposes a scalable and secure image representation scheme that allows users to select different image quality and security levels. The binary space partitioning (BSP) tree representation is selected because this representation allows convenient compression and scalable encryption. The Advanced Encryption Standard (AES) is chosen as the encryption algorithm because it is fast and secure. Our experimental results show that our new tree construction method and our pruning formula reduce execution time, hence computation energy, by about 90%. Our image quality prediction model accurately predicts image quality to within 2-3 dB of the actual image PSNR.
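A hedged sketch of the partial-encryption idea, assuming the "critical part" is the serialized set of BSP partition lines and using AES in CTR mode via the Python cryptography package; this illustrates the concept only and is not the thesis' implementation or file format:

```python
# Illustrative partial-encryption sketch: AES-CTR encrypts only the serialized
# BSP partition structure, while leaf colour data would be left in the clear,
# so encryption cost scales with the "critical" part of the representation.
# Requires the 'cryptography' package.
import os, struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_partitions(partition_lines, key):
    # partition_lines: list of (a, b, c) coefficients of splitting lines ax + by + c = 0
    payload = b"".join(struct.pack("<3f", *line) for line in partition_lines)
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce, enc.update(payload) + enc.finalize()

key = os.urandom(32)                       # AES-256 key
lines = [(1.0, 0.0, -12.0), (0.3, 0.7, 4.5)]
nonce, ciphertext = encrypt_partitions(lines, key)
print(len(ciphertext))                     # 24 bytes: only the tree structure is encrypted
```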
189

An algorithm for image quality assessment [electronic resource] / by Goran Ivkovic.

Ivkovic, Goran. January 2003 (has links)
Thesis (M.S.E.E.)--University of South Florida, 2003. / Includes bibliographical references. / ABSTRACT: Image quality measures are used to optimize image processing algorithms and evaluate their performance. The only reliable way to assess image quality is subjective evaluation by human observers, where the mean value of their scores is used as the quality measure. This is known as the mean opinion score (MOS). In addition to this measure there are various objective (quantitative) measures. The most widely used quantitative measures are mean squared error (MSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR). Since these simple measures do not always produce results that are in agreement with subjective evaluation, many other quality measures have been proposed. They are mostly various modifications of MSE, which try to take into account some properties of the human visual system (HVS), such as the nonlinear character of brightness perception, the contrast sensitivity function (CSF) and texture masking. In these approaches the quality measure is computed as the MSE of input image intensities or of frequency-domain coefficients obtained after some transform (DFT, DCT, etc.), weighted by coefficients which account for the mentioned properties of the HVS. These measures have some advantages over MSE, but their ability to predict image quality is still limited. A different approach is presented here. The quality measure proposed here uses a simple model of the HVS with one user-defined parameter, whose value depends on the reference image. This quality measure is based on the average value of locally computed correlation coefficients. This takes into account structural similarity between the original and distorted images, which cannot be measured by MSE or any kind of weighted MSE. The proposed measure also differentiates between random and signal-dependent distortion, because these two have different effects on a human observer. This is achieved by computing the average correlation coefficient between the reference image and the error image. The performance of the proposed quality measure is illustrated by examples involving images with different types of degradation.
190

Objective Quality Assessment and Optimization for High Dynamic Range Image Tone Mapping

Ma, Kede 03 June 2014 (has links)
Tone mapping operators aim to compress high dynamic range (HDR) images to low dynamic range ones so as to visualize HDR images on standard displays. Most existing works were demonstrated on specific examples without being thoroughly tested on well-established and subject-validated image quality assessment models. A recent tone-mapped image quality index (TMQI) made the first attempt at objective quality assessment of tone-mapped images. TMQI consists of two fundamental building blocks: structural fidelity and statistical naturalness. In this thesis, we propose an enhanced tone-mapped image quality index (eTMQI) by 1) constructing an improved nonlinear mapping function to better account for the local contrast visibility of HDR images and 2) developing an image-dependent statistical naturalness model to quantify the unnaturalness of tone-mapped images based on a subjective study. Experiments show that the modified structural fidelity and statistical naturalness terms in eTMQI better correlate with subjective quality evaluations. Furthermore, we propose an iterative optimization algorithm for tone mapping. The advantages of this algorithm are twofold: 1) eTMQI and TMQI can be compared in a more straightforward way; 2) better-quality tone-mapped images can be automatically generated by using eTMQI as the optimization goal. Numerical and subjective experiments demonstrate that eTMQI is a superior objective quality assessment metric for tone-mapped images and consistently outperforms TMQI.
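For context, indices of this family combine the two building blocks named above roughly as follows; the weight and exponents are fixed constants fitted in the original TMQI work and are omitted here, so this shows only the structural form, not the exact published parameterization:

```latex
% Generic form of a tone-mapped image quality index combining structural
% fidelity S(X, Y) between HDR image X and tone-mapped image Y with a
% statistical naturalness term N(Y):
Q(X, Y) = a\, S(X, Y)^{\alpha} + (1 - a)\, N(Y)^{\beta},
\qquad 0 \le a \le 1,\; \alpha, \beta > 0 .
```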
