1 |
Structural characterization of liver fibrosis in magnetic resonance images. Szilágyi, Anna Tünde, January 2014.
The overall clinical motivation of this thesis is to differentiate between the stages of liver disease, stratified into no disease, mild disease, and severe fibrosis, using Magnetic Resonance Imaging (MRI). As a related aim, we seek to differentiate as far as possible between pericellular and non-pericellular fibrosis. The latter distinction is clinically important, but no method currently exists that can make it. We quickly realised that these aims push low-level image analysis techniques beyond their current bounds, and so a great deal of the thesis is dedicated to extending such techniques before they can be applied. To work on the most fundamental low-level image analysis concepts and algorithms we chose one of the most recent developments, continuous intrinsic dimensionality (ciD), which allows the continuous classification of image patches from homogeneous regions through intrinsically 1D structures to intrinsically 2D structures. We show that the current formalism has several fundamental limitations and propose a number of developments to improve on them. We re-evaluated the feature energy statistics originally proposed for ciD, and additionally examined the confidence one may have in state-of-the-art methods for estimating the orientation of features. We show that new statistical methods are required for feature energy, and that the predictability of an orientation estimate is more important than its correctness. This evaluation led us to the local orientation of the monogenic signal. Analysis of feature or texture energy is also a main contribution of this thesis. Within this framework we propose the Riesz-weighted phase congruency model. It is able to detect internal texture structures but not to delineate boundaries; nevertheless, it proves an appropriate basis for texture quantification. Finally, we show that, in contrast to the standard established Kovesi approach, the developed texture measure leads to good results on the suboptimal T1-weighted MRI liver staging images. We show that we are able to differentiate automatically between the separate disease scores and between pericellular and non-pericellular fibrosis.
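For context on the monogenic signal local orientation referred to above, the following is a minimal numpy sketch that computes local phase, orientation, and energy from a radial log-Gabor band-pass combined with the two Riesz transfer functions; the wavelength and bandwidth values are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def monogenic_phase_orientation(image, wavelength=16.0, sigma_on_f=0.55):
    """Monogenic-signal local phase, orientation and energy of a 2-D image.

    A radial log-Gabor band-pass and the two Riesz transfer functions are
    applied in the frequency domain; wavelength/sigma_on_f are illustrative.
    """
    rows, cols = image.shape
    U, V = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.sqrt(U ** 2 + V ** 2)
    radius[0, 0] = 1.0                      # avoid division by zero at DC

    # Radial log-Gabor band-pass (zero response at DC).
    log_gabor = np.exp(-np.log(radius * wavelength) ** 2
                       / (2 * np.log(sigma_on_f) ** 2))
    log_gabor[0, 0] = 0.0

    # Riesz transfer functions H1 = -i*u/|f|, H2 = -i*v/|f|.
    H1 = -1j * U / radius
    H2 = -1j * V / radius

    F = np.fft.fft2(image.astype(float))
    even = np.real(np.fft.ifft2(F * log_gabor))        # band-passed (even) part
    odd1 = np.real(np.fft.ifft2(F * log_gabor * H1))   # Riesz (odd) components
    odd2 = np.real(np.fft.ifft2(F * log_gabor * H2))

    orientation = np.arctan2(odd2, odd1)                 # local orientation
    phase = np.arctan2(np.hypot(odd1, odd2), even)       # local phase
    energy = np.sqrt(even ** 2 + odd1 ** 2 + odd2 ** 2)  # local (feature) energy
    return phase, orientation, energy
```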
|
2 |
A Fully Automatic Segmentation Method for Breast Ultrasound Images. Shan, Juan, 01 May 2011.
Breast cancer is the second leading cause of death among women worldwide. Accurate lesion boundary detection is important for breast cancer diagnosis. Since many crucial features for discriminating benign and malignant lesions are based on the contour, shape, and texture of the lesion, an accurate segmentation method is essential for a successful diagnosis. Ultrasound is an effective screening tool, primarily useful for differentiating benign and malignant lesions. However, due to the inherent speckle noise and low contrast of breast ultrasound imaging, automatic lesion segmentation remains a challenging task. This research focuses on developing a novel, effective, and fully automatic lesion segmentation method for breast ultrasound images. By incorporating empirical domain knowledge of breast structure, a region of interest is generated. Then, a novel enhancement algorithm (using a novel phase feature) and a newly developed neutrosophic clustering method are used to detect the precise lesion boundary. Neutrosophy is a recently introduced branch of philosophy that deals with paradoxes, contradictions, antitheses, and antinomies; when it is used to segment images with vague boundaries, its unique ability to deal with uncertainty is brought to bear. In this work, we apply neutrosophy to breast ultrasound image segmentation and propose a new clustering method named neutrosophic l-means. We compare the proposed method with traditional fuzzy c-means clustering and three other well-developed segmentation methods for breast ultrasound images, using the same database. Both accuracy and time complexity are analyzed. The proposed method achieves the best accuracy (TP rate 94.36%, FP rate 8.08%, and similarity rate 87.39%) with a fairly rapid processing speed (about 20 seconds). A sensitivity analysis shows the robustness of the proposed method as well. Cases with multiple lesions and severe shadowing effects (shadow areas whose intensity values are similar to the lesion and which are tightly connected to it) are not included in this study.
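The neutrosophic l-means method itself is not reproduced here; as a point of reference, below is a minimal sketch of the traditional fuzzy c-means baseline that the abstract compares against, applied to a one-dimensional pixel-intensity feature. The fuzzifier m, cluster count, and stopping tolerance are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(values, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means on a 1-D feature (e.g. pixel intensities).

    Returns the membership matrix (n_samples x n_clusters) and cluster centers.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).reshape(-1, 1)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                  # fuzzy memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]     # weighted cluster means
        dist = np.abs(x - centers.T) + 1e-12               # (n_samples, n_clusters)
        # Standard membership update: u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        inv = dist ** (-2.0 / (m - 1.0))
        new_u = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return u, centers

# Usage: hard labels per pixel, reshaped back to the image grid.
# labels = np.argmax(fuzzy_c_means(image.ravel())[0], axis=1).reshape(image.shape)
```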
|
3 |
Local Phase Coherence Measurement for Image Analysis and Processing. Hassen, Rania Khairy Mohammed, January 2013.
The ability to perceive the significant patterns and structures of an image is something humans take for granted. We can recognize objects and patterns independently of changes in image contrast and illumination. Over the past decades, it has been widely recognized in both biology and computer vision that phase contains critical information for characterizing the structures in images.
Despite the importance of local phase information and its significant success in many computer vision and image processing applications, the coherence behavior of local phases across scale-space is not well understood. This thesis concentrates on developing an invariant image representation method based on local phase information. In particular, considerable effort is devoted to studying the coherence relationship between local phases at different scales in the vicinity of image features and to developing robust methods to measure the strength of this relationship. A computational framework has been developed that computes local phase coherence (LPC) strength with arbitrary selections of the number of coefficients, the scales, and the scale ratios between them. Specifically, we formulate local phase prediction as an optimization problem, where the objective function measures the closeness between the true local phase and the phase predicted by the LPC relation. The proposed framework not only facilitates flexible and reliable computation of LPC, but also broadens the potential of LPC in many applications.
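To make the cross-scale phase relationship concrete, the toy 1-D sketch below measures phase coherence over three scales with dyadic ratios 1:2:4, where an ideal sharp feature satisfies phi1 ≈ 3*phi2 − 2*phi3; the thesis framework generalizes this to arbitrary numbers of coefficients, scales, and scale ratios via the optimization formulation described above. The filters, weighting, and wavelength are illustrative assumptions.

```python
import numpy as np

def log_gabor_response(signal, wavelength, sigma_on_f=0.55):
    """Complex (one-sided) log-Gabor response of a 1-D signal via the FFT."""
    n = signal.size
    f = np.fft.fftfreq(n)
    filt = np.zeros(n)
    pos = f > 0                                       # one-sided -> analytic-like output
    filt[pos] = np.exp(-np.log(f[pos] * wavelength) ** 2
                       / (2 * np.log(sigma_on_f) ** 2))
    return np.fft.ifft(np.fft.fft(signal) * filt)

def lpc_toy_score(signal, base_wavelength=8.0):
    """Toy sharpness-like score: cross-scale local phase coherence (scales 1:2:4)."""
    c1 = log_gabor_response(signal, base_wavelength)        # finest scale
    c2 = log_gabor_response(signal, base_wavelength * 2)
    c3 = log_gabor_response(signal, base_wavelength * 4)    # coarsest scale
    phi1, phi2, phi3 = np.angle(c1), np.angle(c2), np.angle(c3)
    # Near an ideal sharp feature phi1 ~ 3*phi2 - 2*phi3; blurring breaks this relation.
    coherence = np.cos(phi1 - 3 * phi2 + 2 * phi3)
    weight = np.abs(c1)                                      # trust strong responses more
    return float(np.sum(weight * coherence) / (np.sum(weight) + 1e-12))
```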
We demonstrate the potential of LPC in a number of image processing applications. First, we have developed a novel no-reference sharpness assessment algorithm, the LPC Sharpness Index (LPC-SI), which does not require the original image. LPC-SI is tested on four subject-rated, publicly available image databases and demonstrates competitive performance compared with state-of-the-art algorithms. Second, a new fusion quality assessment algorithm has been developed to objectively assess the performance of existing fusion algorithms. Validation on our subject-rated multi-exposure, multi-focus image database shows good correlation between subjective ranking scores and the proposed image fusion quality index. Third, the invariance properties of the LPC measure have been employed to solve image registration problems in which inconsistencies in intensity or contrast patterns are the major challenge. The LPC map is used to estimate the image-plane transformation by maximizing a weighted mutual information objective function over a range of possible transformations. Finally, the disruption of phase coherence caused by blurring is exploited in a multi-focus image fusion algorithm. The algorithm uses two activity measures: LPC as a sharpness activity measure and local energy as a contrast activity measure. We show that combining these two activity measures results in a notable performance improvement, achieving both maximal contrast and maximal sharpness simultaneously at each spatial location.
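As a simplified illustration of activity-driven multi-focus fusion, the sketch below selects, per pixel, the source image with the larger local activity; local variance stands in for the LPC sharpness and local-energy contrast measures discussed above, so it shows only the per-pixel selection logic, not the proposed algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_by_max_activity(img_a, img_b, win=7):
    """Per-pixel multi-focus fusion: keep the source with the larger local activity.

    Local variance in a win x win window is used as a placeholder activity measure.
    """
    def local_variance(img):
        img = img.astype(float)
        mean = uniform_filter(img, win)
        return uniform_filter(img ** 2, win) - mean ** 2

    mask = local_variance(img_a) >= local_variance(img_b)
    return np.where(mask, img_a, img_b)
```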
|
4 |
Implementation and evaluation of motion correction for quantitative MRI. Larsson, Jonatan, January 2010.
Image registration is the process of aligning two images such that their mutual features overlap. This is of great importance in several medical applications. In 2008 a novel method for simultaneous T1, T2 and proton density quantification was proposed. The method belongs to the field of quantitative Magnetic Resonance Imaging (qMRI). In qMRI, parameters are quantified by a pixel-to-pixel fit of the image intensity as a function of different MR scanner settings. The quantification therefore requires several volumes of different intensities to be aligned. If a patient moves during data acquisition the datasets will not be aligned and the results are degraded. Since the quantification takes several minutes, there is a considerable risk of patient movement. In this master's thesis three image registration methods are presented and compared in terms of robustness and speed. The phase-based algorithm was suited to this problem and limited to finding rigid motion. The other two registration algorithms, taken from the Statistical Parametric Mapping (SPM) package, were used as references. The results show that the pixel-to-pixel fit is greatly improved in the datasets where motion was found. In the comparison between the methods, the phase-based algorithm turned out to be both the fastest and the most robust.
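As a minimal example of phase-based motion estimation, the sketch below estimates an integer translation between two image slices with phase correlation; the phase-based algorithm evaluated in the thesis also handles rotation and sub-pixel rigid motion, so this is only a simplified stand-in.

```python
import numpy as np

def phase_correlation_shift(fixed, moving):
    """Estimate the integer translation that aligns `moving` with `fixed`.

    Returns (row_shift, col_shift) such that np.roll(moving, shifts, axis=(0, 1))
    best matches `fixed` (circular shifts, whole-pixel accuracy only).
    """
    F_fixed = np.fft.fft2(fixed.astype(float))
    F_moving = np.fft.fft2(moving.astype(float))
    cross_power = F_fixed * np.conj(F_moving)
    cross_power /= np.abs(cross_power) + 1e-12         # keep phase only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Indices above N/2 correspond to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))
```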
|
5 |
Estimation of a Coronary Vessel Wall Deformation with High-Frequency Ultrasound Elastography. Kasimoglu, Ismail Hakki, 08 November 2007.
Elastography, which is based on applying pressure and estimating the resulting deformation, involves a forward problem, obtaining the strain distributions, and an inverse problem, reconstructing the elastic distributions consistent with the strains obtained at observation points. This thesis focuses on the former problem, whose solution is used as an input to the latter. The aim is to provide the inverse problem community with accurate strain estimates of a coronary artery vessel wall. In doing so, a new ultrasonic image-based elastography approach is developed. Because the accuracy and quality of the estimated strain fields depend on the resolution of the ultrasound image, and the best resolution levels obtained in the literature to date are not sufficient to clearly see all boundaries of the artery, one of the main goals is to acquire high-resolution coronary vessel wall ultrasound images at different pressures. For this purpose, an experimental setup is first designed to collect radio frequency (RF) signals, and then an image formation algorithm is developed to obtain ultrasound images from the collected signals. To segment the resulting noisy ultrasound images, a geodesic active contour-based segmentation algorithm is developed with a novel stopping function that incorporates the local phase of the image. Region-based information is then added to make the segmentation more robust to noise. Finally, an elliptical deformable template is applied so that a priori information about the shape of the arteries can be taken into account, yielding more stable and accurate results. The use of this template also implicitly provides boundary point correspondences from which high-resolution, size-independent, non-rigid, and local strain fields of the coronary vessel wall are obtained.
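For reference, the sketch below shows the classical gradient-based stopping function used by geodesic active contours; the thesis' novel stopping function replaces the gradient term with a local-phase-based feature (better behaved under speckle), which is not reproduced here. The parameters sigma and k are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def gradient_stopping_function(image, sigma=2.0, k=1.0):
    """Classical edge-stopping function g = 1 / (1 + (|grad(G_sigma * I)| / k)^2).

    g is close to 1 in homogeneous regions and decays towards 0 at strong edges,
    slowing the geodesic active contour there.
    """
    grad = gaussian_gradient_magnitude(image.astype(float), sigma)
    return 1.0 / (1.0 + (grad / k) ** 2)
```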
|
6 |
Contributions à la sonification d’image et à la classification de sons. Toffa, Ohini Kafui, 11 1900.
The objective of this thesis is to study, on the one hand, the problem of image sonification and to solve it through new models of mapping between the visual and sound domains, and, on the other hand, to study the problem of sound classification and to solve it with methods that have a proven track record in the field of image recognition.
Image sonification is the translation of image data (shape, color, texture, objects) into sounds. It is used in vision assistance and image accessibility for visually impaired people. Due to its complexity, an image sonification system that properly conveys image data into sound in an intuitive way is not easy to design.
Our first contribution is a new low-level image sonification system that uses a hierarchical, visual-feature-based approach to translate most of the properties of an image (color, gradient, edge, texture, region) to the audio domain using musical notes, in a very predictable way that is then easily decodable by a human listener.
Our second contribution is a high-level sonification Android application that complements our first contribution by implementing the translation of the objects and semantic content of an image to the audio domain. It also provides a dataset for image sonification.
Finally, in the audio domain, our third contribution generalizes the Local Binary Pattern (LBP) to 1D and combines it with audio features for an environmental sound classification task. The proposed method outperforms methods that use handcrafted features with classical machine learning algorithms and is faster than convolutional neural network methods. It represents a better choice when data is scarce or computing power is minimal.
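To illustrate the 1D generalization of the Local Binary Pattern, the sketch below computes per-sample binary codes over an audio frame and pools them into a normalized histogram descriptor; the neighborhood size and pooling choices are illustrative assumptions, and the thesis' exact formulation, and the audio features it is combined with, may differ.

```python
import numpy as np

def lbp_1d(frame, radius=1):
    """1-D LBP codes: each sample is compared with `radius` neighbours on each side;
    a neighbour >= the centre sample contributes a 1-bit to the code."""
    frame = np.asarray(frame, dtype=float)
    n = frame.size
    centre = frame[radius:n - radius]
    codes = np.zeros(n - 2 * radius, dtype=np.int64)
    bit = 0
    for r in range(1, radius + 1):
        left = frame[radius - r:n - radius - r]
        right = frame[radius + r:n - radius + r]
        codes |= (left >= centre).astype(np.int64) << bit
        codes |= (right >= centre).astype(np.int64) << (bit + 1)
        bit += 2
    return codes

def lbp_histogram(frame, radius=1):
    """Normalized histogram of 1-D LBP codes, a fixed-length frame descriptor."""
    hist = np.bincount(lbp_1d(frame, radius), minlength=2 ** (2 * radius)).astype(float)
    return hist / (hist.sum() + 1e-12)
```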
|