  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Evaluation of Attenuation and Scattering Correction in SPECT Images of a Cerebral Protocol

Käsemodel, Thays Berretta 22 September 2014 (has links)
Single photon emission computed tomography (SPECT) is a nuclear medicine diagnostic modality in which the radiation emitted by a radiopharmaceutical previously administered to the patient is detected. Because the emitted photons interact with the patient's body, attenuation and scatter corrections are needed to better represent the distribution of the radiopharmaceutical and thus produce more accurate images. The aim of this study is to evaluate the parameters adopted as the standard for tomographic image reconstruction, and the attenuation and scatter corrections applied to SPECT images, at the Hospital das Clínicas da Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, through qualitative and quantitative analysis of the images reconstructed from the tomographic acquisitions. Using a cerebral SPECT-CT protocol modified for two acquisition windows, SPECT and SPECT-CT images were acquired (BrightView XCT, Philips) with a Jaszczak phantom and reconstructed with the FBP, MLEM, and OSEM methods. The results show that the FBP method yields images of low precision because of low SNR. The evaluation supports adopting the iterative MLEM and OSEM methods with attenuation correction as the standard reconstruction method for cerebral perfusion images. Based on the evaluation of the Jaszczak phantom images and the contrast analysis between a cold sphere and the background, we propose observational analysis and evaluation of clinical images reconstructed with OSEM using 3 iterations, 16 subsets, and a Butterworth filter with cutoff frequency 0.34 and order 1 as the new standard image reconstruction parameters.
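The cold-sphere/background contrast used in the evaluation above can be illustrated with a minimal sketch. The convention C = (B − S)/B is one common definition for cold lesions and is an assumption here, as are the simulated ROI counts; the thesis itself may define contrast differently.

```python
import numpy as np

def cold_sphere_contrast(sphere_roi, background_roi):
    """Cold-sphere contrast from ROI pixel values.

    Uses C = (B - S) / B, a common convention for cold lesions;
    the thesis may use a different definition.
    """
    s = np.mean(sphere_roi)      # mean counts inside the cold sphere
    b = np.mean(background_roi)  # mean counts in the uniform background
    return (b - s) / b

# Hypothetical ROI values standing in for a reconstructed Jaszczak phantom slice
rng = np.random.default_rng(0)
sphere = rng.poisson(40, size=200)        # cold region: fewer counts
background = rng.poisson(100, size=2000)  # warm, uniform background
print(f"contrast = {cold_sphere_contrast(sphere, background):.2f}")
```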
42

Evaluation of the image quality criteria and study of doses in a mammography department

Alcântara, Marcela Costa 30 October 2009 (has links)
The mammographic image quality criteria published by the European Commission were implemented on three mammography units of the same radiology department in a hospital in the city of São Paulo. Two of the units use a screen-film system and one uses an indirect digital system. During data collection, the need for a study of the image rejection rate on each unit became apparent. This study was carried out, and the units were then compared in terms of image rejection rate and the percentage of images meeting each image quality criterion. In parallel, the entrance skin dose and the average glandular dose were studied. These doses were estimated, following different methodologies published by different research groups, for all anode-filter combinations available on the equipment. To estimate the entrance skin dose by the method published in the ANVISA guide and the average glandular dose by the Wu method, a phantom with a shape close to that of a breast was built in different thicknesses of PMMA. Finally, image quality was associated with the dose received by the patient. The digital equipment showed better results in the evaluation of the quality criteria, a lower image rejection rate, and lower values of average glandular dose and entrance skin dose for all methods studied. However, it is not sufficient, because it does not accommodate patients with large breasts.
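The dose estimates described above combine a measured incident air kerma with published conversion factors. The sketch below shows the general shape of such a calculation only; the backscatter factor and the DgN conversion coefficient are placeholder values, not the figures tabulated in the ANVISA guide or by Wu.

```python
def entrance_skin_dose(output_mGy_per_mAs, mAs, d_ref_cm, d_skin_cm, backscatter=1.09):
    """Entrance skin dose: tube output scaled by inverse-square law and backscatter.

    backscatter=1.09 is a placeholder; guides tabulate it per beam quality.
    """
    incident_air_kerma = output_mGy_per_mAs * mAs * (d_ref_cm / d_skin_cm) ** 2
    return incident_air_kerma * backscatter

def average_glandular_dose(incident_air_kerma_mGy, dgn_mGy_per_mGy=0.20):
    """Wu-style MGD: incident air kerma times a DgN conversion factor.

    DgN depends on kVp, anode/filter combination, and breast thickness;
    0.20 is illustrative only.
    """
    return incident_air_kerma_mGy * dgn_mGy_per_mGy

esd = entrance_skin_dose(output_mGy_per_mAs=0.080, mAs=63, d_ref_cm=60.0, d_skin_cm=58.0)
mgd = average_glandular_dose(esd / 1.09)  # remove backscatter before applying DgN
print(f"ESD ~ {esd:.2f} mGy, MGD ~ {mgd:.2f} mGy")
```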
43

COLOR HALFTONING BASED ON NEUGEBAUER PRIMARY AREA COVERAGE AND NOVEL COLOR HALFTONING ALGORITHM FOR INK SAVINGS

Wanling Jiang (6631334) 11 June 2019 (has links)
A halftoning method based on Neugebauer Primary Area Coverage direct binary search (NPAC-DBS) is developed. With an optimized human visual system (HVS) model, we are able to obtain homogeneous and smooth color halftone images. The halftoning separates the color image, represented in Neugebauer Primaries, into three channels based on the human visual system; with swap-only DBS, the dots are arranged to drive the error metric to its minimum, and the optimized halftone image is obtained. Separating the chrominance HVS filters into red-green and blue-yellow channels allows the HVS to be represented more accurately. Color halftone images generated with this method are compared against those produced by traditional screening methods.

To speed up the halftoning process while keeping quality similar to NPAC-DBS, we developed PARAWACS screens for color halftoning. A PARAWACS screen is designed level by level using DBS. With a PARAWACS screen, a halftone can be created by simple pixel-by-pixel comparison while retaining the merits of DBS. We further optimized the screen to achieve the best quality.

Next, a novel halftoning method that we call Ink-Saving, Single-Frequency, Single-Angle, Multi-Drop (IS-SF-SA-MD) halftoning is introduced. The target application for our algorithm is high-volume production ink-jet printing, in which the user will value a reduction in ink usage. Unlike commercial offset printing, in which four-colorant printing is achieved by rotating a single screen to four different angles, our method uses a single-frequency screen at a single angle and depends on accurate registration between colorant planes to minimize dot overlap, especially between the black (K) colorant and the other colorants (C, M, and Y). To increase the number of gray levels for each colorant, we exploit the multi-drop capabilities of the target writing system. We also use a hybrid screening method to yield improved halftone texture in the highlights and shadows. The proposed method can save ink significantly.
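The pixel-by-pixel screening step that a PARAWACS-style screen enables can be sketched as a simple threshold comparison. The 4x4 threshold matrix below is an arbitrary ordered-dither example used for illustration, not a screen built level by level with DBS.

```python
import numpy as np

def screen_halftone(gray, screen):
    """Binarize a grayscale image by comparing each pixel against a tiled threshold screen."""
    h, w = gray.shape
    sh, sw = screen.shape
    tiled = np.tile(screen, (h // sh + 1, w // sw + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)  # 1 = leave paper, 0 = place ink

# Arbitrary 4x4 ordered-dither thresholds scaled to 0..255 (illustrative only)
bayer4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) * 16

gradient = np.tile(np.linspace(0, 255, 64), (16, 1))  # simple test ramp
halftone = screen_halftone(gradient, bayer4)
print(halftone.shape, halftone.mean())  # mean approximates the overall coverage
```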
45

Objective Assessment of Image Quality: Extension of Numerical Observer Models to Multidimensional Medical Imaging Studies

Lorsakul, Auranuch January 2015 (has links)
Spanning the fields of engineering and medical image quality, this dissertation proposes a novel framework for diagnostic performance evaluation based on objective image quality assessment, an important step in the development of new imaging devices, acquisitions, and image-processing techniques used by clinicians and researchers. The objective of this dissertation is to develop computational modeling tools that allow comprehensive task-based assessment, including clinical interpretation of images, regardless of image dimensionality. With advances in medical imaging devices, several techniques have improved image quality in settings where the resulting images become multidimensional (e.g., 3D+time, or 4D). To evaluate the performance of new imaging devices, or to optimize various design parameters and algorithms, quality should be measured with an appropriate image quality figure of merit (FOM). Classical FOMs such as bias and variance, or mean-square error, have been broadly used in the past. Unfortunately, they do not reflect the fact that the principal agent in medical decision-making is frequently a human observer, nor are they aware of the specific diagnostic task. The accepted standard for image quality assessment is a task-based approach in which one evaluates human observer performance on a specified diagnostic task (e.g., detection of lesions). However, having a human observer perform the tasks is costly and time-consuming. To make task-based assessment of image quality practical, a numerical observer is required as a surrogate for human observers. Numerical observers for detection tasks have been studied both in research and in industry; however, little effort has been devoted to developing one for multidimensional imaging studies (e.g., 4D). Without numerical observer tools that accommodate all the information embedded in a series of images, performance assessment of a new technique that generates multidimensional data is complex and limited. Consequently, key questions remain unanswered about how much image quality improves when these new multidimensional images are used for a specific clinical task. To address this gap, this dissertation proposes a new numerical observer methodology to assess the improvement achieved by newly developed imaging technologies. This numerical observer approach can be generalized to exploit pertinent statistical information in multidimensional images and accurately predict the performance of a human observer across image domains of varying complexity. Part I of this dissertation develops a numerical observer that accommodates multidimensional images, processing correlated signal components and appropriately incorporating them into an absolute FOM. Part II applies the model developed in Part I to selected clinical applications with multidimensional images, including: 1) respiratory-gated positron emission tomography (PET) in lung cancer (3D+t), 2) kinetic parametric PET in head-and-neck cancer (3D+k), and 3) spectral computed tomography (CT) of atherosclerotic plaque (3D+e).
The author compares the task-based performance of the proposed approach to that of conventional methods, evaluated with the widely used signal-known-exactly/background-known-exactly paradigm, in which the properties of a target object (e.g., a lesion) are specified on highly realistic clinical backgrounds. A realistic target object is generated with specific properties and applied to a set of images to create pathological scenarios for the performance evaluation, e.g., lesions in the lungs or plaques in the artery. The regions of interest (ROIs) of the target objects are formed over an ensemble of data measurements acquired under identical conditions and evaluated for the inclusion of useful information from the different complex domains (i.e., 3D+t, 3D+k, 3D+e). This work provides an image quality assessment metric with no dimensional limitation that could substantially improve the assessment of performance achieved by new imaging developments that make use of high-dimensional data.
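A numerical observer of the kind described above is often built as a linear discriminant. The sketch below shows a generic Hotelling-style template applied to vectorized ROIs under a signal-known-exactly/background-known-exactly assumption; it illustrates the general idea only, not the specific multidimensional model developed in the dissertation, and all data are simulated.

```python
import numpy as np

def hotelling_observer(signal_rois, background_rois):
    """Generic linear (Hotelling-style) observer for an SKE/BKE detection task.

    Each ROI is flattened to a vector; the template is K^-1 (mean_s - mean_b).
    Returns the template and the observer SNR (detectability index).
    """
    s = np.array([r.ravel() for r in signal_rois], dtype=float)
    b = np.array([r.ravel() for r in background_rois], dtype=float)
    delta = s.mean(axis=0) - b.mean(axis=0)                          # mean signal difference
    cov = np.cov(np.vstack([s, b]).T) + 1e-6 * np.eye(delta.size)    # regularized covariance
    template = np.linalg.solve(cov, delta)
    snr = np.sqrt(delta @ template)
    return template, snr

# Hypothetical 8x8 ROIs with and without a faint central lesion
rng = np.random.default_rng(1)
bg = [rng.normal(0, 1, (8, 8)) for _ in range(200)]
lesion = np.zeros((8, 8)); lesion[3:5, 3:5] = 0.8
sig = [rng.normal(0, 1, (8, 8)) + lesion for _ in range(200)]
_, d = hotelling_observer(sig, bg)
print(f"observer SNR = {d:.2f}")
```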
46

Full-reference objective visual quality assessment for images and videos. / CUHK electronic theses & dissertations collection

January 2012 (has links)
Visual quality assessment (VQA) plays a fundamental role in multimedia applications. Since the human visual system (HVS) is the ultimate viewer of the visual information, subjective VQA is considered the most reliable way to evaluate visual quality. However, subjective VQA is time-consuming, expensive, and not feasible for on-line manipulation. Therefore, automatic objective VQA algorithms, namely visual quality metrics, have been developed and widely used in practical applications. It is well known, however, that popular visual quality metrics such as Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) correlate poorly with the human perception of visual quality. The development of more accurate objective VQA algorithms is therefore of paramount importance to future visual information processing and communication applications.

In this thesis, full-reference objective VQA algorithms are investigated. The work falls into three parts, briefly summarized below.

The first part concerns image quality assessment. It starts with an investigation of a popular image quality metric, the Structural Similarity Index (SSIM). A novel weighting function is proposed and incorporated into SSIM, leading to a substantial performance improvement in matching subjective ratings. Inspired by this work, a novel image quality metric is developed that separately evaluates two distinct types of spatial distortions: detail losses and additive impairments. The proposed method demonstrates state-of-the-art predictive performance on most of the publicly available subjective quality image databases.

The second part investigates video quality assessment. We extend the proposed image quality metric to assess video quality by exploiting motion information and temporal HVS characteristics, e.g., an eye-movement spatio-velocity contrast sensitivity function, temporal masking using motion vectors, and temporal pooling that accounts for human cognitive behavior. It has been experimentally verified that the proposed video quality metric achieves good performance on both standard-definition and high-definition video databases. We also propose a novel method to measure temporal inconsistency, an essential type of temporal video distortion. It is incorporated into the MSE for video quality assessment, and experiments show that it can significantly enhance MSE's predictive performance.

The aforementioned algorithms analyze only luminance distortions. In the last part, we investigate chrominance distortions for a specific application: anaglyph image generation. The anaglyph image is one of the 3D display techniques, enabling stereoscopic perception on traditional TVs, PC monitors, projectors, and even paper. Three perceptual color attributes are taken into account for the color distortion measure, namely lightness, saturation, and hue, based on which a novel anaglyph image generation algorithm is developed via approximation in the CIELAB color space.

Li, Songnan. Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. Includes bibliographical references (leaves 122-130).
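The full-reference baselines this thesis builds on can be computed in a few lines; the sketch below evaluates PSNR and SSIM for a reference/distorted pair using scikit-image. The specific weighting function and the detail-loss/additive-impairment decoupling proposed in the thesis are not reproduced here, and the images are synthetic stand-ins.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(2)
reference = rng.random((128, 128))                                   # stand-in for a natural image
distorted = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, distorted, data_range=1.0)
ssim = structural_similarity(reference, distorted, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```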
47

Evaluation of Image Quality and Exposure Rate in Interventional Cardiology

Pitorri, Roberto Contreras 14 October 2013 (has links)
Fluoroscopy is an X-ray imaging technique that obtains images through a dynamic image detector, allowing organ examinations to be followed in real time. The detectors currently used are image intensifiers (II) and flat panels (FP); the former (vacuum-tube devices) mainly increase image brightness, while the latter (solid-state devices), adopted more recently in fluoroscopy equipment, improve image quality (contrast and detail) by reducing noise and artifacts. General fluoroscopy examinations cover the head, thorax, and abdomen and used to be performed on a single type of equipment; with the evolution of the technology, dedicated machines are now used for these examinations. The aim of this work is to analyze two groups of fluoroscopy units (with II and FP detectors), dedicated to cardiac fluoroscopy and drawn from different institutions and manufacturers, in order to assess how the parameters of contrast, detail, and exposure rate at the detector entrance behave relative to their group averages and to international reference values, a quality-control result that is also of interest to maintenance services. For this purpose, a PMMA phantom and a test protocol were developed, comprising: (a) preliminary tests for accepting equipment into the sample; (b) detail and contrast tests (using the developed phantom), whose product defines the figure of merit (FOM); (c) measurements of the exposure rate at the detector entrance (TEEDI); and (d) analysis of the distributions of results obtained with the two detector groups, comparing their averages with each other and with reference values from the international literature. The work showed that the phantom and the protocol, together with the methodology applied, were adequate to support quality control of the selected equipment, classifying the units by their potential for optimization of FOM and TEEDI. The average FOMs of the II and FP groups differ from the reference FOM by 35.5% and 35.0%, respectively, and the average TEEDIs differ from the reference TEEDI by 13.8% and 24.9%, respectively. The latter should be adjusted by the maintenance service (mainly for the FP group) to bring them closer to the references used in the distributions obtained.
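The comparison against reference values reported above amounts to a relative deviation of each group's mean FOM and entrance exposure rate. The sketch below shows that computation; the numeric reference values and group means are placeholders, not the figures measured in the thesis.

```python
def percent_deviation(group_mean, reference):
    """Relative distance of a group average from a reference value, in percent."""
    return abs(group_mean - reference) / reference * 100.0

# FOM taken here as the product of a contrast score and a detail score,
# as described in the abstract; all numbers below are illustrative only.
fom_reference = 0.80 * 1.00
fom_ii_mean = 0.62 * 0.83
teedi_reference_uGy_s = 0.30   # hypothetical entrance exposure rate reference
teedi_ii_mean_uGy_s = 0.26

print(f"FOM deviation (II): {percent_deviation(fom_ii_mean, fom_reference):.1f}%")
print(f"TEEDI deviation (II): {percent_deviation(teedi_ii_mean_uGy_s, teedi_reference_uGy_s):.1f}%")
```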
48

Studies on the salient properties of digital imagery that impact on human target acquisition and the implications for image measures.

Ewing, Gary John January 1999 (has links)
Electronically displayed images are becoming increasingly important as an interface between people and information systems, and lengthy periods of intense observation are no longer unusual. There is a growing awareness that specific demands should be made on displayed images in order to achieve an optimum match with the perceptual properties of the human visual system. These demands may vary greatly, depending on the task for which the displayed image is to be used and the ambient conditions. Optimal image specifications are clearly not the same for a home TV, a radar signal monitor, or an infrared targeting display. There is, therefore, a growing need for objective measurement of image quality, where "image quality" is used in a very broad sense, defined in the thesis, and includes any impact of image properties on human performance in relation to specified visual tasks. The aim of this thesis is to consolidate and comment on the image measure literature, to find through experiment the salient properties of electronically displayed real-world complex imagery that impact on human performance for well-specified visual tasks of real relevance, and to consider the appropriate application of image measures to this imagery to predict human performance. An introduction to certain aspects of image quality measures is given, and clutter metrics are integrated into this concept. A brief introduction to the human visual system (HVS) is given, with some basic models. The literature on image measures is analysed, resulting in a classification of image measures according to the features they attempt to quantify. A series of experiments was performed to evaluate the effects of image properties on human performance, using appropriate measures of performance. The concept of image similarity was explored by objectively measuring the subjective perception of imagery of the same scene obtained through different sensors and subjected to different luminance transformations. Controlled degradations were introduced using image compression; both still and video compression were used to investigate spatial and temporal aspects of HVS processing, and the effects of various compression schemes on human target acquisition performance were quantified. A study was carried out to determine the "local" extent to which the clutter around a target affects its detectability. It was found that the accepted wisdom of setting the local domain (the support of the metric) to twice the expected target size was incorrect: the local extent of clutter was found to be much greater, with implications for the application of clutter metrics. An image quality metric called the gradient energy measure (GEM), for quantifying the effect of filtering on Nuclear Medicine derived images, was developed and evaluated. It proved to be a reliable measure of image smoothing and noise level, and in preliminary studies it agreed with human perception. The final study discussed in this thesis determined the performance of human image analysts, in terms of their receiver operating characteristic, when using Synthetic Aperture Radar (SAR) derived images in a surveillance context. In particular, the effects of target contrast and background clutter on analyst target detection performance were quantified.
In the final chapter, suggestions to extend the work of this thesis are made, and in this context a system to predict human visual performance from input imagery is proposed. This system intelligently selects image metrics based on the particular visual task, human expectations, and human visual system performance parameters. / Thesis (Ph.D.)--Medical School; School of Computer Science, 1999.
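The gradient energy measure mentioned above can be illustrated with a short sketch: the mean squared gradient magnitude of an image drops as the image is smoothed and rises with noise. Taking the mean of the squared gradient magnitude is an assumption about the exact definition, which the thesis may normalize differently, and the test image is simulated.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_energy_measure(image):
    """Mean squared gradient magnitude; decreases with smoothing, increases with noise."""
    gy, gx = np.gradient(image.astype(float))
    return np.mean(gx**2 + gy**2)

rng = np.random.default_rng(3)
raw = rng.poisson(50, (128, 128)).astype(float)   # noisy stand-in for a scintigram
for sigma in (0.0, 1.0, 2.0):
    filtered = gaussian_filter(raw, sigma) if sigma else raw
    print(f"sigma={sigma:.1f}  GEM={gradient_energy_measure(filtered):.1f}")
```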
49

Perceptual Criteria on Image Compression

Moreno Escobar, Jesús Jaime 01 July 2011 (has links)
Nowadays, digital images are used in many areas of everyday life, but they tend to be large, and this growing amount of information leads to the problem of image data storage. For example, it is common to represent a color pixel as a 24-bit number, with 8 bits each for the red, green, and blue channels. Such a color pixel can therefore specify one of 2^24 ≈ 16.78 million colors, and an image at a resolution of 512 × 512 that allocates 24 bits per pixel occupies 786,432 bytes. That is why image compression is important. An important feature of image compression is that it can be lossy or lossless. A compressed image is acceptable provided the losses of image information are not perceived by the eye, which is possible because a portion of this information is redundant. Lossless image compression is defined as mathematically decoding the same image that was encoded. Lossy image compression needs to identify two features in the image: the redundancy and the irrelevancy of information. Lossy compression thus modifies the image data in such a way that, when they are encoded and decoded, the recovered image is similar enough to the original one; how similar the recovered image must be is defined prior to the compression process and depends on the implementation. In lossy compression, current image compression schemes remove information considered irrelevant using mathematical criteria. One problem with these schemes is that, although the numerical quality of the compressed image is low, it may show high visual quality, i.e., it does not exhibit many visible artifacts, because the mathematical criteria used to remove information do not take into account whether the discarded information is perceived by the Human Visual System. Therefore, the aim of an image compression scheme designed to obtain images that do not show artifacts, even though their numerical quality may be low, is to eliminate the information that is not visible to the Human Visual System. Hence, this Ph.D. thesis proposes to exploit the visual redundancy existing in an image by reducing those features that are imperceptible to the Human Visual System. First, we define an image quality assessment that is highly correlated with the psychophysical experiments performed by human observers. The proposed CwPSNR metric weights the well-known PSNR using a perceptual low-level model of the Human Visual System, the Chromatic Induction Wavelet Model (CIWaM). Second, we propose an image compression algorithm (called Hi-SET), which exploits the high correlation and self-similarity of pixels in a given area or neighborhood by means of a fractal function. Hi-SET possesses the main features of modern image compressors: it is an embedded coder, which allows progressive transmission. Third, we propose a perceptual quantizer (½SQ), a modification of the uniform dead-zone scalar quantizer. The classical quantizer is applied to the entire pixel set of a given Wavelet sub-band, i.e., a global quantization; the proposed modification instead performs a local, pixel-by-pixel forward and inverse quantization, introducing a perceptual distortion that depends directly on the spatial information surrounding the pixel. Combining the ½SQ method with the Hi-SET image compressor defines a perceptual image compressor, called ©SET. Finally, a coding method for Region of Interest areas is presented, ½GBbBShift, which perceptually weights pixels inside these areas and keeps only the most perceptually important features in the rest of the image (the background). Results presented in this thesis show that CwPSNR is the best-ranked image quality method for the most common image compression distortions, such as JPEG and JPEG2000: CwPSNR shows the best correlation with the judgement of human observers, based on the psychophysical experiments of the most relevant image quality databases in this field, namely TID2008, LIVE, CSIQ, and IVC. Furthermore, the Hi-SET coder obtains better results, in both compression ratio and perceptual image quality, than the JPEG2000 coder and other coders that use a Hilbert fractal for image compression. When the proposed perceptual quantization is introduced into the Hi-SET coder, the compressor improves both its numerical and its perceptual efficiency. When the ½GBbBShift method applied to Hi-SET is compared against the MaxShift method applied to the JPEG2000 standard and to Hi-SET, the images coded by our ROI method obtain the best results in overall image quality. Both the proposed perceptual quantization ½SQ and the ½GBbBShift method are general algorithms that can be applied to other Wavelet-based image compression algorithms such as JPEG2000, SPIHT, or SPECK.
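The quantization step the thesis modifies can be sketched as a standard dead-zone scalar quantizer. In the sketch below, a per-coefficient weight scales the step size; that weighting is only a schematic stand-in for the CIWaM-driven perceptual distortion described above, and the coefficients are simulated.

```python
import numpy as np

def deadzone_quantize(coeffs, step, weights=None):
    """Dead-zone scalar quantization of wavelet coefficients.

    q = sign(c) * floor(|c| / step); an optional per-coefficient weight
    enlarges the effective step where detail is deemed less perceivable.
    """
    eff_step = step * (weights if weights is not None else 1.0)
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / eff_step)

def deadzone_dequantize(indices, step, weights=None, delta=0.5):
    """Mid-point reconstruction of the quantized coefficients."""
    eff_step = step * (weights if weights is not None else 1.0)
    return np.sign(indices) * (np.abs(indices) + delta) * eff_step * (indices != 0)

rng = np.random.default_rng(4)
coeffs = rng.laplace(0, 4, 16)        # stand-in for a wavelet sub-band
weights = 1 + rng.random(16)          # hypothetical per-pixel perceptual weights
rec = deadzone_dequantize(deadzone_quantize(coeffs, 2.0, weights), 2.0, weights)
print(np.round(coeffs - rec, 2))      # reconstruction error per coefficient
```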
50

Natural scene statistics based blind image quality assessment and repair

Moorthy, Anush Krishna, 1986- 11 July 2012 (has links)
Progress in multimedia technologies has resulted in a plethora of services and devices that capture, compress, transmit, and display audiovisual stimuli. Humans, the ultimate receivers of such stimuli, now have access to visual entertainment at home, at work, and on mobile devices. With increasing visual signals being received by human observers, in the face of degradations that occur due to the capture, compression, and transmission processes, an important aspect of the quality of experience of such stimuli is the perceived visual quality. This dissertation focuses on algorithm development for assessing the visual quality of natural images without the 'pristine' reference image, i.e., we develop computational models for no-reference image quality assessment (NR IQA). Our NR IQA model stems from the theory that natural images have certain statistical properties that are violated in the presence of degradations, and quantifying such deviations from naturalness leads to a blind estimate of quality. The proposed modular and easily extensible framework is distortion-agnostic: it does not need knowledge of the distortion afflicting the image (contrary to most present-day NR IQA algorithms), and it is not only capable of quality assessment with high correlation with human perception but can also identify the distortion afflicting the image. This distortion identification, coupled with blind quality assessment, leads to a framework for blind general-purpose image repair, which is the second major contribution of this dissertation. The blind general-purpose image repair framework, and the exemplar algorithm described here, stem from a revolutionary perspective on image repair in which the framework does not simply attempt to ameliorate the distortion in the image, but does so such that visual quality at the output is maximized. Lastly, this dissertation describes a large-scale human subjective study conducted at UT to assess human behavior and opinion on the visual quality of videos viewed on mobile devices. The study led to a database of 200 distorted videos, incorporating previously studied distortions such as compression and wireless packet loss, as well as dynamically varying distortions that change as a function of time, such as frame freezes and temporally varying compression rates. This study, the first of its kind, involved over 50 human subjects and resulted in 5,300 summary subjective scores and time-sampled subjective traces of quality for multiple displays. The last part of this dissertation analyzes human behavior and opinion on time-varying video quality, opening up an extremely interesting and relevant field for future research in the area of quality assessment and human behavior.
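The natural-scene-statistics idea underlying the NR IQA model above is often operationalized through locally normalized luminance coefficients, whose empirical distribution deviates from its characteristic shape when an image is distorted. The sketch below computes such mean-subtracted, contrast-normalized (MSCN) coefficients; it illustrates the general principle rather than the exact features used in the dissertation, and the images are synthetic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7 / 6, c=1.0):
    """Mean-subtracted, contrast-normalized luminance coefficients.

    For pristine natural images their histogram is close to Gaussian;
    distortions change its shape, which a blind model can quantify.
    """
    image = image.astype(float)
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter(image**2, sigma) - mu**2
    return (image - mu) / (np.sqrt(np.clip(var, 0, None)) + c)

rng = np.random.default_rng(5)
pristine = gaussian_filter(rng.random((256, 256)) * 255, 3)   # smooth stand-in image
blurred = gaussian_filter(pristine, 4)                        # simulated degradation
for name, im in (("pristine", pristine), ("blurred", blurred)):
    print(name, round(float(np.var(mscn_coefficients(im))), 4))
```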
