211 |
Estudo semianalítico da qualidade de imagem e dose em mamografia / Semianalytical study of image quality and dose in mammography. Tomal, Alessandra, 24 February 2011 (has links)
Neste trabalho, foram desenvolvidos modelos semianalíticos para estudar os parâmetros de qualidade da imagem (contraste objeto, SC, e razão contraste-ruído, CNR) e a dose glandular normalizada (DgN ) em mamografia convencional e digital. As características de 161 amostras de tecidos mamários (coeficiente de atenuação linear e densidade) e os espectros de raios X mamográficos foram determinados experimentalmente, visando construir uma base de dados consistente destas grandezas para serem utilizadas nos modelos. Os coeficientes de atenuação linear foram determinados utilizando um feixe de raios X polienergético e um detector de Si(Li), e as densidades foram medidas utilizando o método da pesagem hidrostática. Os espectros de raios X de um equipamento industrial, que simula as qualidades de radiação de mamografia, foram medidos utilizando detectores de Si(Li), CdTe e SDD. A resposta de cada detector foi determinada por simulação Monte Carlo (MC). Os modelos semianalíticos desenvolvidos neste trabalho permitem calcular a deposição de energia na mama e no receptor de imagem, e foram utilizados para estudar o SC, a CNR e a DgN, para diferentes tipos de mama (espessura e glandularidade) e características do espectro incidente (combinação ânodo/filtro, potencial do tubo e camada semirredutora), bem como permitem avaliar a figura de mérito (FOM) em mamografia convencional e digital. Os resultados de coeficiente de atenuação e densidade para os diferentes grupos de tecidos mamários, mostram que os tecidos normais fibroglandulares e neoplásicos possuem características similares, enquanto tecidos normais adiposos apresentam menores valores destas grandezas. Os espectros medidos com cada detector, e devidamente corrigidos por suas respostas, mostram que os três tipos de detectores podem ser usados para determinar espectros mamográficos. Com base nos resultados de SC e CNR, foram estimados limites de detecção de nódulos em mamografia convencional e digital, que se mostraram similares entre si. Os resultados de SC, CNR e DgN obtidos também destacam a importância da escolha do modelo da mama e da base de dados de coeficiente de atenuação e espectros de raios X utilizados, uma vez que estes são responsáveis por uma grande variação nas grandezas estudadas. Além disso, os resultados de FOM mostram que, para mamas finas, a combinação Mo/Mo, tradicionalmente utilizada, apresenta o melhor desempenho, enquanto as combinações W/Rh e W/Ag são as mais indicadas para mamas espessas. Para mamas de espessuras médias, a melhor combinação depende da técnica utilizada (convencional ou digital). Finalmente, verificou-se que os modelos semianalíticos desenvolvidos permitem a obtenção de resultados de forma prática e rápida, com valores similares aos obtidos por simulação MC. Desta forma, estes modelos permitirão estudos futuros a respeito da otimização da mamografia, para outros tipos de mama e condições de irradiação. / In this work, semianalytical models were developed to study the image quality parameters (subject contrast, SC, and contrast-to-noise ratio, CNR) and the normalized average glandular dose (DgN) in conventional and digital mammography. The characteristics of 161 breast tissue samples (linear attenuation coefficient and density), and the mammographic x-ray spectra were determined experimentally, aiming to establish a consistent experimental database of these quantities to be used in the models. 
The linear attenuation coefficients were determined using a polyenergetic x-ray beam and a Si(Li) detector, and the densities were measured using the buoyancy method. The x-ray spectra from an industrial x-ray unit, which reproduces the mammographic radiation qualities, were measured using Si(Li), CdTe and SDD detectors. The responses of the detectors were determined using Monte Carlo (MC) simulation. The semianalytical models developed in this work allow computing the energy deposited in the breast and in the image receptor, and they were employed to study the SC, CNR and DgN for different types of breast (thickness and glandularity) and incident x-ray spectra (anode/filter combination, tube potential and half-value layer). These models also allow evaluating the figure of merit (FOM) for conventional and digital mammography. The results of attenuation coefficient and density for the tissues analyzed show similar characteristics for the normal fibroglandular and neoplastic breast tissues, while the adipose tissue presents lower values of these quantities. From the x-ray spectra obtained using each detector, and corrected by their respective responses, it is observed that all three types of detectors can be used to determine mammographic spectra. Detection limits for nodules were estimated from the results of SC and CNR, and they were similar for conventional and digital mammography. The results of SC, CNR and DgN also show the importance of the choice of the breast model and of the database of attenuation coefficients of breast tissues and x-ray spectra, since they largely influence the studied quantities. In addition, the results for FOM show that, for thin breasts, the Mo/Mo spectrum exhibits the best performance, while the W/Rh and W/Ag spectra are recommended for thicker breasts. For breasts of average thickness, the most suitable spectrum depends on the technique employed (conventional or digital). Finally, it was verified that the semianalytical models developed in this work provide results in a fast and simple way, in good agreement with those obtained by MC simulation. Therefore, these models allow further studies on the optimization of mammography for other breast characteristics and irradiation parameters.
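To make the quantities concrete, the following is a minimal numerical sketch of subject contrast and contrast-to-noise ratio under a simplified monoenergetic, scatter-free, ideal-detector model; the attenuation coefficients, thicknesses and fluence below are assumed illustrative values, not the measured data or the semianalytical models of this thesis.

```python
import numpy as np

# Simplified, monoenergetic, scatter-free sketch of subject contrast (SC)
# and contrast-to-noise ratio (CNR). All numbers are illustrative only.

mu_breast = 0.08   # mm^-1, effective attenuation of the surrounding breast (assumed)
mu_nodule = 0.095  # mm^-1, effective attenuation of the nodule (assumed)
t_breast = 45.0    # mm, compressed breast thickness (assumed)
t_nodule = 5.0     # mm, nodule thickness (assumed)
n0 = 1.0e6         # photons incident per detector element (assumed)

# Primary fluence behind the background and behind the nodule (Beer-Lambert law)
n_bg = n0 * np.exp(-mu_breast * t_breast)
n_nod = n0 * np.exp(-mu_breast * (t_breast - t_nodule) - mu_nodule * t_nodule)

# Subject contrast: relative signal difference between background and nodule
sc = (n_bg - n_nod) / n_bg

# CNR assuming Poisson (quantum) noise only in an ideal counting detector
cnr = (n_bg - n_nod) / np.sqrt(n_bg)

print(f"SC  = {sc:.4f}")
print(f"CNR = {cnr:.1f}")
```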
|
212 |
Feature Extraction and Image Analysis with the Applications to Print Quality Assessment, Streak Detection, and Pedestrian Detection. Xing Liu (5929994), 02 January 2019 (has links)
Feature extraction is the main driving force behind the advancement of image processing techniques in fields such as image quality assessment, object detection, and object recognition. In this work, we perform a comprehensive and in-depth study on feature extraction for the following applications: image macro-uniformity assessment, 2.5D printing quality assessment, streak defect detection, and pedestrian detection. Firstly, a set of multi-scale wavelet-based features is proposed, and a quality predictor is trained to predict the perceived macro-uniformity. Secondly, the 2.5D printing quality is characterized by a set of merits that focus on the surface structure. Thirdly, a set of features is proposed to describe the streaks, based on which two detectors are developed: the first one uses a Support Vector Machine (SVM) to train a binary classifier to detect streaks; the second one adopts a Hidden Markov Model (HMM) to incorporate the row dependency information within a single streak. Finally, a novel set of pixel-difference features is proposed to develop a computationally efficient feature extraction method for pedestrian detection.
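As a rough sketch of the SVM-based streak detector idea (not the thesis implementation), the example below summarizes each image row by a small, hypothetical feature vector and trains a binary classifier on synthetic data; the `row_features` descriptors and the simulated streaks are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy sketch of an SVM-based streak detector: each image row is summarized by a
# small feature vector and classified as streak / no-streak. Synthetic data only.

rng = np.random.default_rng(0)

def row_features(row):
    """Simple per-row descriptors: mean level, variability, and edge strength."""
    return np.array([row.mean(), row.std(), np.abs(np.diff(row)).mean()])

# Synthetic training data: "clean" rows vs. rows with a dark streak added
clean_rows = rng.normal(0.8, 0.02, size=(200, 512))
streak_rows = clean_rows[:100].copy()
streak_rows[:, 200:210] -= 0.15  # simulated streak defect

X = np.array([row_features(r) for r in np.vstack([clean_rows[100:], streak_rows])])
y = np.array([0] * 100 + [1] * 100)  # 0 = clean, 1 = streak

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```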
|
213 |
Identificação da correlação entre as características das imagens de documentos e os impactos na fidelidade visual em função da taxa de compressão. / Identification of the correlation between the characteristics of document images and its impact on visual fidelity as a function of compression rate. Vitor Hitoshi Tsujiguchi, 11 October 2011 (has links)
Imagens de documentos são documentos digitalizados com conteúdo textual. Estes documentos são compostos de caracteres e diagramação, apresentando características comuns entre si, como a presença de bordas e limites no formato de cada caractere. A relação entre as características das imagens de documentos e os impactos do processo de compressão com respeito à fidelidade visual são analisadas nesse trabalho. Métricas objetivas são empregadas na análise das características das imagens de documentos, como a medida da atividade da imagem (IAM) no domínio espacial dos pixels, e a verificação da medida de atividade espectral (SAM) no domínio espectral. Os desempenhos das técnicas de compressão de imagens baseada na transformada discreta de cosseno (DCT) e na transformada discreta de Wavelet (DWT) são avaliados sobre as imagens de documentos ao aplicar diferentes níveis de compressão sobre as mesmas, para cada técnica. Os experimentos são realizados sobre imagens digitais de documentos impressos e manuscritos de livros e periódicos, explorando texto escritos entre os séculos 16 ao século 19. Este material foi coletado na biblioteca Brasiliana Digital (www.brasiliana.usp.br), no Brasil. Resultados experimentais apontam que as medidas de atividade nos domínios espacial e espectral influenciam diretamente a fidelidade visual das imagens comprimidas para ambas as técnicas baseadas em DCT e DWT. Para uma taxa de compressão fixa de uma imagem comprimida em ambas técnicas, a presença de valores superiores de IAM e níveis menores de SAM na imagem de referência resultam em menor fidelidade visual, após a compressão. / Document images are digitized documents with textual content. These documents are composed of characters and their layout, with common characteristics among them, such as the presence of borders and boundaries in the shape of each character. The relationship between the characteristics of document images and the impact of the compression process on visual fidelity is analyzed herein. Objective metrics are employed to analyze the characteristics of document images, such as the Image Activity Measure (IAM) in the spatial domain and the Spectral Activity Measure (SAM) in the spectral domain. The performances of image compression techniques based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are evaluated on document images by applying different compression levels to these images for each technique. The experiments are performed on digital images of printed documents and manuscripts from books and periodicals, exploring texts written from the 16th to the 19th century. This material was collected from the Brasiliana Digital Library (www.brasiliana.usp.br) in Brazil. Experimental results show that the activity measures in the spatial and spectral domains directly influence the visual fidelity of the compressed images for both the DCT- and DWT-based techniques. For a fixed compression ratio with either technique, higher values of IAM and lower levels of SAM in the reference image result in lower visual fidelity after compression.
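For illustration, a minimal numpy sketch of one common formulation of such activity measures: a gradient-based IAM in the spatial domain and a spectral-flatness-style SAM in the frequency domain. The exact definitions adopted in this work may differ, so the functions below are assumptions, not the thesis metrics.

```python
import numpy as np

def image_activity_measure(img):
    """Gradient-based IAM: mean absolute difference between adjacent pixels
    in the horizontal and vertical directions (one common formulation)."""
    img = img.astype(np.float64)
    dh = np.abs(np.diff(img, axis=1)).sum()
    dv = np.abs(np.diff(img, axis=0)).sum()
    return (dh + dv) / img.size

def spectral_activity_measure(img):
    """Spectral-flatness-style SAM: ratio of the arithmetic mean to the
    geometric mean of the squared DFT magnitudes (illustrative definition)."""
    spec = np.abs(np.fft.fft2(img.astype(np.float64))) ** 2
    spec = spec.ravel() + 1e-12           # avoid log(0)
    arithmetic = spec.mean()
    geometric = np.exp(np.log(spec).mean())
    return arithmetic / geometric

# Toy usage on a synthetic "document-like" image with sharp character edges
rng = np.random.default_rng(1)
page = np.full((256, 256), 255.0)
page[64:192:8, 32:224] = 0.0             # crude horizontal strokes
page += rng.normal(0, 2.0, page.shape)   # scanner noise

print("IAM:", image_activity_measure(page))
print("SAM:", spectral_activity_measure(page))
```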
|
214 |
Photon Counting X-ray Detector Systems. Norlin, Börje, January 2005 (has links)
This licentiate thesis concerns the development and characterisation of X-ray imaging detector systems. “Colour” X-ray imaging opens up new perspectives within the fields of medical X-ray diagnosis and also in industrial X-ray quality control. The difference in absorption for different “colours” can be used to discern materials in the object. For instance, this information might be used to identify diseases such as brittle-bone disease. The “colour” of the X-rays can be identified if the detector system can process each X-ray photon individually. Such a detector system is called a “single photon processing” system or, less precisely, a “photon counting system”.
With modern technology it is possible to construct photon counting detector systems that can resolve details to a level of approximately 50 µm. However, with such small pixels a problem occurs. In a semiconductor detector each absorbed X-ray photon creates a cloud of charge which contributes to the recorded picture. For high photon energies the size of the charge cloud is comparable to 50 µm and might be distributed between several pixels in the picture. Charge sharing is a key problem since it not only degrades the resolution but also destroys the “colour” information in the picture.
The problem of charge sharing, which limits “colour” X-ray imaging, is discussed in this thesis. Image quality, detector effectiveness and “colour correctness” are studied on pixellated detectors from the MEDIPIX collaboration. Characterisation measurements and simulations are compared in order to understand the physical processes that take place in the detector. Simulations can provide pointers for the future development of photon counting X-ray systems. Charge sharing can be suppressed by introducing 3D-detector structures or by developing readout systems which can correct the crosstalk between pixels.
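A toy one-dimensional sketch of the charge-sharing effect described above: a Gaussian charge cloud is integrated over neighbouring pixel boundaries. The pixel pitch and cloud width are assumed values for illustration, not detector measurements from this work.

```python
import numpy as np
from scipy.stats import norm

# Toy 1-D illustration of charge sharing: the Gaussian charge cloud created by an
# absorbed photon is integrated over neighbouring pixels. Numbers are illustrative.

pitch = 55.0          # um, pixel pitch (Medipix-like geometry, assumed)
sigma = 15.0          # um, charge-cloud width after drift/diffusion (assumed)

def pixel_fractions(hit_x, pitch, sigma, n_pixels=5):
    """Fraction of the cloud collected by each pixel for a hit at position hit_x
    (um, measured from the centre of the central pixel)."""
    edges = (np.arange(n_pixels + 1) - n_pixels / 2) * pitch
    cdf = norm.cdf(edges, loc=hit_x, scale=sigma)
    return np.diff(cdf)

# Photon absorbed at the pixel centre vs. near a pixel boundary
print("centre hit:  ", np.round(pixel_fractions(0.0, pitch, sigma), 3))
print("boundary hit:", np.round(pixel_fractions(pitch / 2, pitch, sigma), 3))
```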
|
215 |
Novel Approaches for Application of Principal Component Analysis on Dynamic PET Images for Improvement of Image Quality and Clinical Diagnosis. Razifar, Pasha, January 2005 (has links)
Positron Emission Tomography, PET, can be used for dynamic studies in humans. In such studies a selected part of the body, often the whole brain, is imaged repeatedly after administration of a radiolabelled tracer. Such studies are performed to provide sequences of images reflecting the tracer’s kinetic behaviour, which may be related to physiological, biochemical and functional properties of tissues. This information can be obtained by analyzing the distribution and kinetic behaviour of the administered tracers in different regions, tissues and organs. Each image in the sequence thus contains part of the kinetic information about the administered tracer.
Several factors make analysis of PET images difficult, such as a high noise magnitude and correlation between image elements, in conjunction with a high level of non-specific binding to the target and a sometimes small difference in target expression between pathological and healthy regions. It is therefore important to understand how these factors affect the derived quantitative measurements when using different methods such as kinetic modelling and multivariate image analysis.
In this thesis, a new method to explore the properties of the noise in dynamic PET images was introduced and implemented. The method is based on an analysis of the autocorrelation function of the images. This was followed by proposing and implementing three novel approaches for the application of Principal Component Analysis, PCA, to dynamic human PET studies. The common underlying idea of these approaches was that the images need to be normalized before application of PCA to ensure that the PCA is signal driven, not noise driven. Different ways to estimate and correct for the noise variance were investigated. Normalizations were carried out Slice-Wise (SW) and for the whole volume at once, in both the image domain and the sinogram domain. We also investigated the value of masking out and removing the area outside the brain for the analysis.
The results were very encouraging. We could demonstrate that, for phantoms as well as for real image data, the applied normalizations allow PCA to reveal the signal much more clearly than what can be seen in the original image data sets. Using our normalizations, PCA can thus be used as a multivariate analysis technique that, without any modelling assumptions, can separate important kinetic information into different component images. Furthermore, these images exhibited an optimized signal-to-noise ratio (SNR) and low levels of noise, and thus showed improved quality and contrast. This should allow more accurate visualization and better precision in the discrimination between pathological and healthy regions. Hopefully this can in turn lead to improved clinical diagnosis.
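A hedged sketch of the normalization idea on synthetic data: each frame of a dynamic sequence is divided by an estimate of its noise standard deviation before PCA, so that the decomposition is driven by kinetics rather than by frame-dependent noise. The noise estimate from a background region, and all shapes and kinetic curves, are assumptions for illustration and do not reproduce the approaches of the thesis.

```python
import numpy as np

# Hedged sketch: noise-normalized PCA on a synthetic dynamic PET sequence.
# Shapes: n_frames time frames, each with n_voxels voxels (flattened slice/volume).
rng = np.random.default_rng(2)
n_frames, n_voxels = 20, 4096
t = np.linspace(0.5, 60.0, n_frames)

# Two synthetic kinetic patterns (fast washout vs. slow accumulation)
tac_fast = np.exp(-t / 10.0)
tac_slow = 1.0 - np.exp(-t / 30.0)
mask = rng.random(n_voxels) < 0.3            # voxels following the "fast" kinetics
signal = np.where(mask, tac_fast[:, None], tac_slow[:, None])

# Frame-dependent noise (early short frames are noisier), plus a background region
noise_sd = 0.5 / np.sqrt(t)
frames = signal + rng.normal(0.0, noise_sd[:, None], size=(n_frames, n_voxels))
background = rng.normal(0.0, noise_sd[:, None], size=(n_frames, 512))

# Normalize each frame by the noise SD estimated from the background region
sd_est = background.std(axis=1, ddof=1)
frames_norm = frames / sd_est[:, None]

# PCA via SVD on mean-centred data (voxels as observations, frames as variables)
X = frames_norm.T - frames_norm.T.mean(axis=0)
_, s, vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 components:", np.round(explained[:3], 3))
```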
|
217 |
Algorithms to Process and Measure Biometric Information Content in Low Quality Face and Iris Images. Youmaran, Richard, 02 February 2011 (has links)
Biometric systems allow identification of persons based on physiological or behavioral characteristics, such as voice, handprint, iris or facial characteristics. The use of face and iris recognition as a way to authenticate users’ identities has been a topic of research for years. Present iris recognition systems require that subjects stand close (<2m) to the imaging camera and look for a period of about three seconds until the data are captured. This cooperative behavior is required in order to capture quality images for accurate recognition. This restricts the range of practical applications where iris recognition can be applied, especially in uncontrolled environments where subjects, such as criminals and terrorists, are not expected to cooperate. For this reason, this thesis develops a collection of methods for dealing with low quality face and iris images that can be applied to face and iris recognition in a non-cooperative environment. This thesis makes the following main contributions: I. For eye and face tracking in low quality images, a new robust method is developed. The proposed system consists of three parts: face localization, eye detection and eye tracking. This is accomplished using traditional image-based passive techniques, such as shape information of the eye, and active methods which exploit the spectral properties of the pupil under IR illumination. The developed method is also tested on underexposed images where the subject shows large head movements. II. For iris recognition, a new technique is developed for accurate iris segmentation in low quality images where a major portion of the iris is occluded. Most existing methods perform generally quite well but tend to overestimate the occluded regions, and thus lose iris information that could be used for identification. This information loss is potentially important in the covert surveillance applications we consider in this thesis. Once the iris region is properly segmented using the developed method, the biometric feature information is calculated for the iris region using the relative entropy technique. Iris biometric feature information is calculated using two different feature decomposition algorithms based on Principal Component Analysis (PCA) and Independent Component Analysis (ICA). III. For face recognition, a new approach is developed to measure biometric feature information and the changes in biometric sample quality resulting from image degradations. A definition of biometric feature information is introduced and an algorithm to measure it is proposed, based on a set of population and individual biometric features, as measured by a biometric algorithm under test. Examples of its application are shown for two different face recognition algorithms based on PCA (Eigenface) and Fisher Linear Discriminant (FLD) feature decompositions.
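As a sketch of the relative entropy idea for biometric feature information, the snippet below evaluates the standard closed-form KL divergence between two Gaussian feature distributions (an individual versus the population); the Gaussian assumption, feature count and synthetic data are illustrative only and do not reproduce the thesis algorithm.

```python
import numpy as np

def gaussian_relative_entropy(mu_p, cov_p, mu_q, cov_q):
    """KL divergence D(p || q) between two multivariate Gaussians, in bits.
    A standard closed form, used here as a stand-in for the 'biometric feature
    information' of an individual (p) relative to the population (q)."""
    k = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    nats = 0.5 * (np.trace(cov_q_inv @ cov_p)
                  + diff @ cov_q_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))
    return nats / np.log(2.0)

# Toy example: population feature distribution vs. one individual's distribution
rng = np.random.default_rng(3)
k = 8                                    # number of (e.g. PCA/ICA) features, assumed
population = rng.normal(0.0, 1.0, size=(5000, k))
individual = rng.normal(0.5, 0.3, size=(200, k))

d_bits = gaussian_relative_entropy(individual.mean(axis=0), np.cov(individual.T),
                                   population.mean(axis=0), np.cov(population.T))
print(f"biometric feature information (illustrative): {d_bits:.1f} bits")
```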
|
219 |
SSIM-Inspired Quality Assessment, Compression, and Processing for Visual Communications. Rehman, Abdul, January 2013 (has links)
Objective Image and Video Quality Assessment (I/VQA) measures predict image/video quality as perceived by human beings - the ultimate consumers of visual data. Existing research in the area is mainly limited to benchmarking and monitoring of visual data. The use of I/VQA measures in the design and optimization of image/video processing algorithms and systems is more desirable, challenging and fruitful but has not been well explored. Among the recently proposed objective I/VQA approaches, the structural similarity (SSIM) index and its variants have emerged as promising measures that show superior performance as compared to the widely used mean squared error (MSE) and are computationally simple compared with other state-of-the-art perceptual quality measures. In addition, SSIM has a number of desirable mathematical properties for optimization tasks. The goal of this research is to break the tradition of using MSE as the optimization criterion for image and video processing algorithms. We tackle several important problems in visual communication applications by exploiting SSIM-inspired design and optimization to achieve significantly better performance.
Firstly, the original SSIM is a Full-Reference IQA (FR-IQA) measure that requires access to the original reference image, making it impractical in many visual communication applications. We propose a general purpose Reduced-Reference IQA (RR-IQA) method that can estimate SSIM with high accuracy with the help of a small number of RR features extracted from the original image. Furthermore, we introduce and demonstrate the novel idea of partially repairing an image using RR features. Secondly, image processing algorithms such as image de-noising and image super-resolution are required at various stages of visual communication systems, starting from image acquisition to image display at the receiver. We incorporate SSIM into the framework of sparse signal representation and non-local means methods and demonstrate improved performance in image de-noising and super-resolution. Thirdly, we incorporate SSIM into the framework of perceptual video compression. We propose an SSIM-based rate-distortion optimization scheme and an SSIM-inspired divisive optimization method that transforms the DCT domain frame residuals to a perceptually uniform space. Both approaches demonstrate the potential to largely improve the rate-distortion performance of state-of-the-art video codecs. Finally, in real-world visual communications, it is a common experience that end-users receive video with significantly time-varying quality due to the variations in video content/complexity, codec configuration, and network conditions. How human visual quality of experience (QoE) changes with such time-varying video quality is not yet well-understood. We propose a quality adaptation model that is asymmetrically tuned to increasing and decreasing quality. The model improves upon the direct SSIM approach in predicting subjective perceptual experience of time-varying video quality.
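For context, the structural similarity index mentioned above can be written, for a single window, as SSIM(x, y) = ((2·μx·μy + C1)(2·σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2)). The sketch below evaluates this single-window form globally; standard implementations average it over local windows, and the constants are the usual defaults, taken here as assumptions.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image (illustrative simplification;
    the standard index averages a local, windowed version of this quantity)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))

# Toy usage: compare a reference patch with itself and with a noisy version
rng = np.random.default_rng(4)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0, 20, ref.shape), 0, 255)
print("SSIM(ref, ref)   =", round(global_ssim(ref, ref), 4))
print("SSIM(ref, noisy) =", round(global_ssim(ref, noisy), 4))
```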
|
220 |
Image Dynamic Range Enhancement. Ozyurek, Serkan, 01 September 2011 (has links) (PDF)
In this thesis, image dynamic range enhancement methods are studied in order to solve the problem of representing high dynamic range scenes with low dynamic range images. For this purpose, two main image dynamic range enhancement methods, high dynamic range imaging and exposure fusion, are studied. A more detailed analysis of exposure fusion algorithms is carried out because the whole enhancement process in exposure fusion is performed in the low dynamic range, and these algorithms do not need any prior information about the input images. In order to evaluate the performance of exposure fusion algorithms, both objective and subjective quality metrics are used. Moreover, the correlation between the objective quality metrics and subjective ratings is studied in the experiments.
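As a sketch of what an exposure fusion algorithm does, the example below performs a simplified single-scale weighted blend with contrast, saturation and well-exposedness weights in the spirit of Mertens-style fusion; real implementations blend in a multiresolution pyramid, which is omitted here, and all parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def exposure_fuse(images, sigma=0.2, eps=1e-12):
    """Naive single-scale exposure fusion of a list of aligned float RGB images
    in [0, 1]. Weights follow the usual contrast / saturation / well-exposedness
    heuristics; a real implementation would blend with Laplacian pyramids."""
    weights = []
    for img in images:
        gray = img.mean(axis=2)
        contrast = np.abs(laplace(gray))                      # local contrast
        saturation = img.std(axis=2)                          # colour saturation
        well_exposed = np.prod(np.exp(-((img - 0.5) ** 2) / (2 * sigma**2)), axis=2)
        weights.append(contrast * saturation * well_exposed + eps)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)             # normalize per pixel
    return sum(w[..., None] * img for w, img in zip(weights, images))

# Toy usage: fuse an under-, mid-, and over-exposed rendering of the same scene
rng = np.random.default_rng(5)
scene = rng.random((120, 160, 3))
exposures = [np.clip(scene * g, 0.0, 1.0) for g in (0.4, 1.0, 2.5)]
fused = exposure_fuse(exposures)
print(fused.shape, float(fused.min()), float(fused.max()))
```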
|