101

Medical Image Processing Techniques for the Objective Quantification of Pathology in Magnetic Resonance Images of the Brain

Khademi, April 16 August 2013 (has links)
This thesis focuses on automatic detection of white matter lesions (WML) in Fluid-Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Images (MRI) of the brain. There is growing interest in WML within the medical community, since the total WML volume per patient (lesion load) has been shown to be related to future stroke as well as carotid disease. Manual segmentation of WML is time-consuming, laborious, observer-dependent and error-prone. Automatic WML segmentation algorithms can be used instead, since they enable lesion load computation in a quantitative, efficient, reproducible and reliable manner. FLAIR MRI are affected by at least two types of degradation, additive noise and the partial volume averaging (PVA) artifact, which reduce the accuracy of automated algorithms. Model-based methods that rely on Gaussian distributions have been used extensively to handle these two distortions, but are not applicable to FLAIR with WML: the distribution of noise in multicoil FLAIR MRI is non-Gaussian, and the presence of WML modifies tissue distributions in a manner that is difficult to model. To this end, the current thesis presents a novel way to model PVA artifacts in the presence of noise. The method is a generalized and adaptive approach that was applied to a variety of MRI weightings (with and without pathology) for robust PVA quantification and tissue segmentation. No a priori assumptions are needed regarding class distributions, and no training samples or initialization parameters are required. Segmentation experiments were completed using simulated and real FLAIR MRI. Simulated images were generated with noise and PVA distortions using realistic brain and pathology models. Real images were obtained from Sunnybrook Health Sciences Centre, and WML ground truth was generated through a manual segmentation experiment. The average Dice similarity coefficient (DSC) was found to be 0.99 and 0.83 for simulated and real images, respectively.
A lesion load study was performed that examined interhemispheric WML volume for each patient. To show the generalized nature of the approach, the proposed technique was also employed on pathology-free T1 and T2 MRI. Validation studies show that the proposed framework classifies PVA robustly and segments tissue classes with good results.
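The reported DSC figures (0.99 simulated, 0.83 real) are Dice similarity coefficients between the automatic and manual segmentations. As a minimal illustration (not the thesis's implementation; the tiny masks are invented), the metric can be computed from two binary masks as:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with 1.0 for two empty masks."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Hypothetical automatic vs. manual lesion masks
auto = np.array([[0, 1, 1], [0, 1, 0]])
manual = np.array([[0, 1, 0], [0, 1, 0]])
print(dice(auto, manual))  # 0.8
```

A DSC of 1.0 means perfect overlap; values above roughly 0.7 are conventionally taken as good agreement for lesion segmentation.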
103

A comparison of three methods of ultrasound to computed tomography registration

Mackay, Neilson 22 January 2009 (has links)
During orthopaedic surgery, preoperative CT scans can be aligned to the patient to assist the guidance of surgical instruments and the placement of implants. Registration (i.e. alignment) can be accomplished in many ways: by registering implanted fiducial markers, by touching a probe to the bone surface, or by aligning intraoperative two-dimensional fluoroscopic images with the three-dimensional CT data. These approaches have problems: they require exposure of the bone, subject the patient and surgeons to ionizing radiation, or both. Ultrasound can also be used to register a preoperative CT scan to the patient: the ultrasound probe is tracked as it passes over the patient, and the ultrasound images are aligned to the CT data. This method eliminates the problems of bone exposure and ionizing radiation, but is computationally more difficult because the ultrasound images contain incomplete and unclear bone surfaces. In this work, we compare three methods of registering a set of ultrasound images to a CT scan: Iterated Closest Point, Mutual Information, and a novel method, Points-to-Image. The average Target Registration Error and speed of each method are presented, along with a brief summary of their strengths and weaknesses. / Thesis (Master, Computing) -- Queen's University, 2009-01-22 04:21:22.569
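Target Registration Error measures how far points of clinical interest land under the estimated transform compared with the ground-truth one. A hedged sketch of the mean-TRE computation for a rigid transform (the points and transforms are invented, not taken from the thesis):

```python
import numpy as np

def mean_tre(points, R_est, t_est, R_true, t_true):
    """Mean Target Registration Error: average distance between target
    points mapped by the estimated rigid transform (R_est, t_est) and
    by the ground-truth transform (R_true, t_true)."""
    points = np.asarray(points, float)
    est = points @ R_est.T + t_est      # apply estimated transform
    ref = points @ R_true.T + t_true    # apply ground-truth transform
    return np.linalg.norm(est - ref, axis=1).mean()

# Hypothetical targets on a bone surface; estimate is off by 1 mm along x
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
I = np.eye(3)
tre = mean_tre(pts, I, np.array([1.0, 0.0, 0.0]), I, np.zeros(3))
print(tre)  # 1.0
```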
104

Quantification of regional cardiac function : clinically-motivated algorithm development and application to cardiac magnetic resonance and computed tomography

Vigneault, Davis Marc January 2017 (has links)
Techniques described to date for the reproducible and noninvasive quantification of regional cardiac function have been largely relegated to research settings due to time-consuming and cumbersome image acquisition and analysis. In this thesis, feature tracking algorithms are developed for 2-D+Time cardiac magnetic resonance (CMR) and 3-D+Time cardiac computed tomography (CCT) image sequences that are easily acquired clinically, while emphasising reproducibility and automation in their design. First, a commercially-implemented CMR feature tracking algorithm for the analysis of steady state free precession (SSFP) cine series is evaluated in patients with hypertrophic cardiomyopathy (HCM) and arrhythmogenic right ventricular cardiomyopathy (ARVC), which primarily affect the left ventricle (LV) and right ventricle (RV), respectively, and functional impairment compared with control populations is found in both cases. The limitations of this implementation are then used to guide development of an automated algorithm for the same purpose, making use of fully convolutional neural networks (CNN) for segmentation and spline registration across all frames simultaneously for tracking. This study is performed in the HCM subjects, and functional impairment is again identified in disease subjects. Finally, as myocardial contraction is inherently a 3-D phenomenon, a technique is developed for quantification of regional function from 3-D+Time functional CCT studies using simultaneous registration of automatically generated Loop subdivision surface models for tracking. This study is performed in canine mongrels and compared with the current state-of-the-art technique for CCT functional analysis. This work demonstrates the feasibility of automated, reproducible cardiac functional analysis from CMR and CCT image sequences.
While work remains to be done in extending the principles demonstrated and modular components described to fully automated whole-heart analysis, it is hoped that this thesis will accelerate the clinical adoption of regional functional analysis.
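The abstract does not name its regional-function metric, but Lagrangian strain is the quantity feature tracking conventionally reports, so a one-line illustration may help (an assumption about the metric, not the thesis's code; the segment lengths are invented):

```python
def lagrangian_strain(l0, l):
    """Lagrangian strain of a tracked myocardial segment: fractional
    length change relative to the reference (end-diastolic) length l0.
    Negative values indicate shortening (contraction)."""
    return (l - l0) / l0

# Hypothetical segment: 10 mm at end-diastole, 8.5 mm at end-systole
print(lagrangian_strain(10.0, 8.5))  # -0.15, i.e. 15% shortening
```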
105

Analyse d'images 3D par méthodes variationnelles et ondelettes : application à l'imagerie médicale / 3D image analysis with variational methods and wavelets : applications to medical image processing

Tran, Minh-Phuong 28 September 2012 (has links)
Medical imaging plays an increasingly important role with the development of numerous acquisition techniques. The main needs are to restore (denoise) images and to segment them, so that all the qualitative and quantitative information becomes available to refine diagnoses. In this thesis we propose a contribution to this analysis in a 3D context. We study two broad families of methods: variational methods and wavelet methods. We begin by presenting second-order variational models, which prove more effective than the classical first-order Rudin-Osher-Fatemi method; we use them for denoising and segmentation, after a brief state of the art of medical image acquisition processes. We then introduce the wavelet transform and present algorithms based on this method. Numerical results show that these methods are effective and competitive. The core of our work is the development of 3D representations well suited to complex medical data, such as undersampled, low-contrast MRI images (mouse cerebella) or angiographic MRI images (mouse brains). Each technique has its advantages and drawbacks, so we propose a mixed model combining a second-order variational term with wavelet thresholding. This model behaves particularly well: noise is correctly removed while contours and textures are preserved. Finally, we adapt several contour-closing methods (hysteresis and chamfer distance) to a 3D context. The manuscript ends with a synthesis of the results and a presentation of future research directions. / Medical procedures have become a critical application area that makes substantial use of image processing.
Medical image processing tasks mainly deal with image restoration and image segmentation, which bring out medical image details and quantitatively measure medical conditions. The diagnosis of a health problem is now highly dependent on the quality and credibility of the image analysis, so the practical contributions of this thesis apply in many directions within the medical domain. This manuscript addresses 3D image analysis with variational methods and the wavelet transform in the context of medical image processing. We first survey the second-order variational minimization model, which has been shown to outperform the classical Rudin-Osher-Fatemi model. This method is applied to image denoising and image segmentation, alongside a short state of the art of medical image processing techniques. Then we introduce the wavelet transform and present some algorithms that are also used in this domain. Experimental results show that these tools are very useful and competitive. The core of this research is the development of new 3D representations that are well adapted to complicated medical data and filament structures in 3D volumes: the cerebellum and mouse vessel networks. Since each of the two underlying methods has advantages and disadvantages, we propose a new modified model that combines both schemes. With the new decomposition model, noise can be removed successfully from the reconstructed image while contours and textures are well preserved, leading to further improvements in denoising performance. Finally, the last part of the thesis is devoted to extending some classical contour-closing methods, namely hysteresis thresholding and contour closing based on the chamfer distance transform, to the 3D context.
The thesis concludes with a review of our main results and with a discussion of a few of many open problems and promising directions for further research and application.
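The wavelet-thresholding half of the mixed model can be sketched in one dimension with the Haar transform (a drastic simplification of the thesis's 3D scheme; the signal and threshold are invented):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform (even-length input)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low frequency)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high frequency)
    return s, d

def haar_idwt(s, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

def denoise(x, thresh):
    """Soft-threshold the detail coefficients, keep the approximation."""
    s, d = haar_dwt(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    return haar_idwt(s, d)

x = np.array([1.0, 1.1, 2.0, 1.9])   # piecewise-constant signal + noise
print(denoise(x, thresh=0.1))        # [1.05 1.05 1.95 1.95]
```

Small detail coefficients (mostly noise) are suppressed while the piecewise structure, carried by the approximation band, survives; this is the behaviour the mixed variational/wavelet model exploits.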
106

Level set segmentation of retinal structures

Wang, Chuang January 2016 (has links)
Changes in retinal structure are related to different eye diseases. Various retinal imaging techniques, such as fundus imaging and optical coherence tomography (OCT), have been developed for non-intrusive ophthalmic diagnosis according to vasculature changes. However, it is time-consuming or even impossible for ophthalmologists to manually label all the retinal structures in fundus and OCT images. Computer-aided diagnosis systems for retinal imaging therefore play an important role in the assessment of ophthalmologic diseases and cardiovascular disorders. The aim of this PhD thesis is to develop segmentation methods that extract clinically useful information from retinal images acquired with different imaging modalities; in other words, methods that extract important structures from both 2D fundus images and 3D OCT images. In the first part of my PhD project, two novel level-set-based methods were proposed for detecting blood vessels and optic discs in fundus images. The first integrates Chan-Vese's energy-minimizing active contour method with an edge constraint term and a Gaussian Mixture Model based term for blood vessel segmentation, while the second combines the edge constraint term, a distance regularisation term and a shape-prior term for locating the optic disc. Both methods include a pre-processing stage, used for removing noise and enhancing the contrast between object and background. Three automated layer segmentation methods were built for segmenting intra-retinal layers in 3D OCT macular and optic nerve head images in the second part of my PhD project. The first two methods combine different approaches according to the data characteristics. First, eight boundaries of the intra-retinal layers were detected in the 3D OCT macular images and thickness maps of the seven layers were produced.
Second, four boundaries of the intra-retinal layers were located in 3D optic nerve head images and thickness maps of the Retinal Nerve Fiber Layer (RNFL) were plotted. Finally, a choroidal layer segmentation method based on the level set framework was designed, embedding a distance regularisation term, an edge constraint term and a Markov Random Field modelled region term. The thickness map of the choroidal layer was calculated and shown.
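Once two layer boundaries have been segmented, the thickness map described above is simply their per-A-scan index difference scaled by the axial resolution. A minimal sketch (the boundary indices and the 3.5 µm/pixel resolution are assumptions, not values from the thesis):

```python
import numpy as np

def thickness_map_um(top, bottom, axial_res_um):
    """Layer thickness map in micrometres from two boundary surfaces,
    each given as per-A-scan pixel indices (bottom >= top)."""
    return (np.asarray(bottom) - np.asarray(top)) * axial_res_um

# Hypothetical 2x2 grid of A-scans: upper and lower boundary indices
top = np.array([[10, 12], [11, 13]])
bottom = np.array([[30, 31], [29, 33]])
tmap = thickness_map_um(top, bottom, axial_res_um=3.5)
print(tmap)  # [[70.  66.5] [63.  70. ]]
```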
107

A New Image Quantitative Method for Diagnosis and Therapeutic Response

January 2016 (has links)
abstract: Accurate quantitative information about tumor/lesion volume plays a critical role in diagnosis and treatment assessment. Current clinical practice emphasizes efficiency but sacrifices accuracy (bias and precision). On the other hand, many computational algorithms focus on improving accuracy but are often time-consuming and cumbersome to use, and most lack validation studies on real clinical data. All of this hinders the translation of these advanced methods from bench to bedside. In this dissertation, I present a user-interactive image application to rapidly extract accurate quantitative information about abnormalities (tumor/lesion) from multi-spectral medical images, such as measuring brain tumor volume from MRI. This is enabled by a GPU level set method, an intelligent algorithm to learn image features from user inputs, and a simple and intuitive graphical user interface with 2D/3D visualization. In addition, a comprehensive workflow is presented to validate image quantitative methods for clinical studies. This application has been evaluated and validated in multiple cases, including quantifying healthy brain white matter volume from MRI and brain lesion volume from CT or MRI. The evaluation studies show that this application achieves results comparable to state-of-the-art computer algorithms. More importantly, a retrospective validation study on measuring intracerebral hemorrhage volume from CT scans demonstrates that the measurement attributes are not only superior to the current practice method in terms of bias and precision but are also achieved without a significant delay in acquisition time. In other words, it could be useful to clinical trials and clinical practice, especially when intervention and prognostication rely upon an accurate baseline lesion volume or upon detecting change in serial lesion volumetric measurements.
This application should also be useful to biomedical research areas that desire accurate quantitative information about anatomies from medical images. In addition, morphological information is retained, which is useful for research requiring an accurate delineation of anatomic structures, such as surgery simulation and planning. / Dissertation/Thesis / Doctoral Dissertation Biomedical Informatics 2016
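The quantity at stake throughout this entry, lesion volume, follows directly from a binary segmentation mask and the scan's voxel spacing. A hedged sketch (the mask and the anisotropic CT-like spacing are invented):

```python
import numpy as np

def lesion_volume_ml(mask, spacing_mm):
    """Lesion volume in millilitres from a binary 3D mask and per-axis
    voxel spacing in mm (1 mL = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Hypothetical 4x4x4-voxel lesion in a 10x10x10 volume,
# 0.5 x 0.5 mm in-plane, 5 mm slice thickness
mask = np.zeros((10, 10, 10), dtype=np.uint8)
mask[2:6, 2:6, 2:6] = 1
print(lesion_volume_ml(mask, (0.5, 0.5, 5.0)))  # 64 voxels * 1.25 mm^3 = 0.08 mL
```

Bias and precision of such a measurement then come down to how faithfully the mask follows the true lesion boundary, which is what the validation studies above assess.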
108

De la segmentation au moyen de graphes d’images de muscles striés squelettiques acquises par RMN / Graph- based segmentation of skeletal striated muscles in NMR images

Baudin, Pierre-Yves 23 May 2013 (has links)
Segmentation of anatomical MRI images of skeletal striated muscles is of great interest for the study of myopathies. It is often a necessary prerequisite for studying the mechanisms of a disease or for the therapeutic follow-up of patients. However, manually delineating the muscles is long and tedious work, to the point of holding back the clinical research that depends on it, so this step must be automated. Automatic segmentation methods generally rely on differences in the visual appearance of the objects to be separated and on precise detection of contours or of relevant anatomical landmarks. Since muscle MRI permits none of these approaches, automatic segmentation is a major challenge for researchers. In this thesis, we present several methods for segmenting muscle images, all related to the so-called Random Walker (RW) algorithm. The RW algorithm, which uses a graph representation of the image, is known for its robustness when object contours are missing or incomplete and for its fast, global numerical optimization. In its original version, the user must first segment small portions of each image region, called seeds, before running the algorithm to complete the segmentation. Our first contribution is an algorithm that automatically generates and labels all the seeds needed for the segmentation, using a Markov random field formulation that integrates prior anatomical knowledge and a prior detection of contours between pairs of seeds. A second contribution incorporates prior knowledge of muscle shape directly into the RW method.
This approach preserves the probabilistic interpretation of the original algorithm, so that a segmentation can be generated by numerically solving a large sparse linear system. As a final contribution, we propose a learning framework for estimating the optimal set of parameters governing the influence of the contrast term of the RW algorithm and of the various prior models. The main difficulty is that the training data are not fully supervised: the user can only provide a deterministic segmentation of the image, not a probabilistic segmentation of the kind the RW algorithm produces. This leads us to treat the optimal probabilistic segmentation as a latent variable, and thus to formulate the estimation problem as a latent support vector machine (latent SVM). All the proposed methods are tested and validated on skeletal muscle MRI volumes acquired in a clinical setting. / Segmentation of magnetic resonance images (MRI) of skeletal striated muscles is of crucial interest when studying myopathies. Understanding diseases, therapeutic follow-up of patients, etc. rely on discriminating the muscles in anatomical MRI images. However, delineating the muscle contours manually is an extremely long and tedious task, and thus often a bottleneck in clinical research. Typical automatic segmentation methods rely on finding discriminative visual properties between objects of interest, accurate contour detection or clinically interesting anatomical points. Skeletal muscles show none of these features in MRI, making automatic segmentation a challenging problem. In spite of recent advances in segmentation methods, their application in clinical settings is difficult, and most of the time manual segmentation and correction is still the only option.
In this thesis, we propose several approaches for segmenting skeletal muscles automatically in MRI, all related to the popular graph-based Random Walker (RW) segmentation algorithm. The strength of the RW method lies in its robustness in the case of weak contours and its fast, global optimization. Originally, the RW algorithm was developed for interactive segmentation: the user had to pre-segment small regions of the image – called seeds – before running the algorithm, which would then complete the segmentation. Our first contribution is a method for automatically generating and labeling all the appropriate seeds, based on a Markov Random Field formulation integrating prior knowledge of the relative positions of the muscles and prior detection of contours between pairs of seeds. A second contribution amounts to incorporating prior knowledge of shape directly into the RW framework. Such a formulation retains the probabilistic interpretation of the RW algorithm and thus allows the segmentation to be computed by solving a large but simple sparse linear system, as in the original method. In a third contribution, we propose a learning framework to estimate the optimal set of parameters balancing the contrast term of the RW algorithm and the different existing prior models. The main challenge we face is that the training samples are not fully supervised: specifically, they provide a hard segmentation of the medical images instead of the optimal probabilistic segmentation, which corresponds to the desired output of the RW algorithm. We overcome this challenge by treating the optimal probabilistic segmentation as a latent variable, which allows us to employ the latent Support Vector Machine (latent SVM) formulation for parameter estimation. All proposed methods are tested and validated on real clinical datasets of MRI volumes of the lower limbs.
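The "large but simple sparse linear system" mentioned above is the combinatorial Dirichlet problem of the Random Walker. A toy dense-matrix sketch on a 5-node chain graph may make this concrete (illustrative only; real images need sparse solvers, image-derived edge weights, and one system per label):

```python
import numpy as np

def random_walker_probs(W, seeds, seed_probs):
    """Random Walker on a graph with symmetric edge-weight matrix W:
    probability that a walker starting at each unseeded node first
    reaches a foreground seed. Seed probabilities are fixed boundary
    conditions; unseeded values solve L_u x_u = -B x_s, where L is the
    graph Laplacian restricted to unseeded (L_u) and mixed (B) blocks."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                    # graph Laplacian
    u = [i for i in range(n) if i not in seeds]       # unseeded nodes
    Lu = L[np.ix_(u, u)]
    B = L[np.ix_(u, seeds)]
    xu = np.linalg.solve(Lu, -B @ np.asarray(seed_probs, float))
    probs = np.empty(n)
    probs[seeds] = seed_probs
    probs[u] = xu
    return probs

# 5-node chain with unit weights; foreground seed at node 0, background at node 4
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
probs = random_walker_probs(W, [0, 4], [1.0, 0.0])
print(probs)  # [1.   0.75 0.5  0.25 0.  ]
```

With uniform weights the probabilities interpolate linearly between the seeds; image contrast enters by lowering the weights of edges that cross strong intensity gradients, which is what the learned contrast term balances against the shape priors.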
109

Método para avaliação dos algoritmos utilizados no processamento de imagens médicas / Method for evaluation of the algorithms used in the processing of medical images

Silvia Cristina Martini Rodrigues 24 September 1999 (has links)
This work presents, as part of its results, a broad survey that identified the most important research groups in the world working on medical image processing, more specifically image processing aimed at identifying mammary microcalcifications. The extensive collection, selection and organization culminated in gathering more than one hundred articles, published in the most important journals in the area, which clearly show how the research groups present the results found by their algorithms. Those results should assist the physician in the diagnosis of breast cancer. We demonstrate in this work why the techniques used to present these results are unsatisfactory, and we propose a new method for evaluating them. The proposed method is based on the χ² (chi-square) test, on ROC (Receiver Operating Characteristic) curves and on the agreement test, which together allow the relations between true positives and false positives, true negatives and false negatives, and the sensitivity and specificity of the analyzed algorithm to be presented clearly and objectively. The new method is precise and rests on statistical foundations known to physicians and researchers, facilitating its acceptance. / This work presents, as part of its results, a wide investigation that allowed us to identify the principal research groups in the world that have medical image processing in common, more specifically image processing that searches for the identification of mammary microcalcifications. The vast collection, selection and organization culminated in gathering more than a hundred articles, published in the most important journals of the area, which clearly show the forms used by the research groups to present the results found by their algorithms. Those results should assist the doctor in the diagnosis of breast cancer.
We demonstrated in this work that the techniques used for presenting the results are unsatisfactory, and we proposed a new method for evaluating those results. The proposed method is based on the χ² (chi-square) test, on the ROC (Receiver Operating Characteristic) curve and on the agreement test, which together allow the relationships among true positives and false positives, true negatives and false negatives, and the sensitivity and specificity of the analyzed algorithm to be presented in a clear and objective way. The new method is precise and has statistical bases known by clinicians and researchers, facilitating its acceptance.
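The quantities the proposed evaluation method rests on can be sketched directly from a 2x2 confusion table (the counts below are invented, and the chi-square shown is the plain Pearson statistic without continuity correction, not necessarily the thesis's exact variant):

```python
import numpy as np

def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity (true positive rate) and specificity (true negative
    rate) from confusion-table counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

def chi_square(observed):
    """Pearson chi-square statistic for a contingency table: sum of
    (observed - expected)^2 / expected, with expected counts from the
    row and column marginals."""
    obs = np.asarray(observed, float)
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row * col / obs.sum()
    return ((obs - expected) ** 2 / expected).sum()

# Hypothetical detector: 45 TP, 5 FN, 10 FP, 40 TN
sens, spec = confusion_metrics(tp=45, fp=10, tn=40, fn=5)
print(sens, spec)                        # 0.9 0.8
print(chi_square([[45, 5], [10, 40]]))   # ~49.49
```

Sweeping the detector's decision threshold and plotting sensitivity against (1 - specificity) yields the ROC curve the abstract refers to.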
110

Medical Image Fusion Based on Wavelet Transform

Ma, Yanjun January 2012 (has links)
Medical imaging is a core part of medical diagnosis and has been widely applied in the modern medical domain. Modern medical imaging technology is increasingly mature and can present images in different modes with different features. Medical image fusion is the technology of combining two complementary images into one according to certain rules, to achieve a clear visual effect. By observing the fused image, a doctor can more easily locate an illness. Exploiting the complementary features of CT and MRI medical images, and based on the wavelet transform, this thesis presents two effective, practical medical image fusion methods. The first method is based on the features of a local area. The principle is to construct a weighted factor and matching degree from certain related parameters to fuse the high-frequency subbands, which carry the detailed information; for the low-frequency subband, the maximum-absolute-value rule is selected. Finally the fused image is obtained by wavelet reconstruction. Subjective and objective evaluation shows that the method yields excellent visual results and good quantitative metrics. The other method is based on the lifting wavelet. It decomposes the original images into low-frequency and high-frequency subbands and then fuses them with different rules: weighted fusion for the low-frequency subband and a maximum-standard-deviation rule for the high-frequency subbands. Finally the fused image is obtained by wavelet reconstruction. Subjective and objective evaluation shows the method is practical and effective, preserving detailed information and presenting clear contours, while its execution time is shorter than that of other methods.
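A simplified sketch of wavelet-domain fusion rules on already-decomposed coefficients: weighted averaging of the low-frequency band and maximum-absolute selection for the high-frequency band. (This is a generic stand-in, not the thesis's region-based or standard-deviation rules; all coefficient values are invented.)

```python
import numpy as np

def fuse_coeffs(low_a, high_a, low_b, high_b, w=0.5):
    """Fuse one level of wavelet coefficients from two source images:
    weighted average of the approximation (low-frequency) bands, and
    per-coefficient maximum-absolute selection of the detail
    (high-frequency) bands, which carry edges and texture."""
    low = w * low_a + (1 - w) * low_b
    pick_a = np.abs(high_a) >= np.abs(high_b)
    high = np.where(pick_a, high_a, high_b)
    return low, high

# Hypothetical coefficients from a CT image (a) and an MRI image (b)
low_a, high_a = np.array([4.0, 8.0]), np.array([1.0, -3.0])
low_b, high_b = np.array([6.0, 2.0]), np.array([-2.0, 1.0])
low, high = fuse_coeffs(low_a, high_a, low_b, high_b)
print(low, high)  # [5. 5.] [-2. -3.]
```

The fused image would then come from applying the inverse wavelet transform to the fused coefficient bands, as in the reconstruction step both methods share.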
