  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Morphometric analysis of brain structures in MRI

González Ballester, Miguel Ángel January 1999 (has links)
Medical computer vision is a novel research discipline based on the application of computer vision methods to data sets acquired via medical imaging techniques. This work focuses on magnetic resonance imaging (MRI) data sets, particularly in studies of schizophrenia and multiple sclerosis. Research on these diseases is challenged by the lack of appropriate morphometric tools to accurately quantify lesion growth, assess the effectiveness of a drug treatment, or investigate anatomical information believed to be evidence of schizophrenia. Thus, most hypotheses involving these conditions remain unproven. This thesis contributes towards the development of such morphometric techniques. A framework combining several tools is established, allowing for compensation of bias fields, boundary detection by modelling partial volume effects (PVE), and a combined statistical and geometrical segmentation method. Most importantly, it also allows for the computation of confidence bounds on the location of the object being segmented by bounding PVE voxels. Bounds obtained in this fashion encompass a significant percentage of the volume of the object (typically 20-60%). A statistical model of the intensities contained in PVE voxels is used to provide insight into the contents of PVE voxels and further narrow the confidence bounds. This not only permits a reduction by an order of magnitude in the width of the confidence intervals, but also establishes a statistical mechanism to obtain probability distributions on shape descriptors (e.g. volume), instead of just a raw magnitude or a set of confidence bounds. A challenging and clinically highly relevant study is performed using these tools: investigating differences in the asymmetry of the temporal horns in schizophrenia. 
The results show that our tools are sufficiently accurate for studies of this kind, thus providing clinicians, for the first time, with the means to corroborate unproven hypotheses or reliably assess patient evolution.
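The bounding idea above can be sketched in a few lines. This is an illustrative sketch, not the thesis's actual algorithm: `volume_bounds` and `narrowed_bounds` are hypothetical helpers, and the ±0.1 accuracy assumed for the per-voxel fractional estimates is an invented figure standing in for the statistical intensity model.

```python
# Sketch of volume confidence bounds from partial-volume-effect (PVE)
# voxels. Voxels labelled "interior" surely belong to the object; "PVE"
# voxels on the boundary may belong only partially.

def volume_bounds(n_interior, n_pve, voxel_volume=1.0):
    """Hard bounds: the object volume lies between the interior-only
    count and interior plus all PVE voxels."""
    lower = n_interior * voxel_volume
    upper = (n_interior + n_pve) * voxel_volume
    return lower, upper

def narrowed_bounds(n_interior, pve_fractions, voxel_volume=1.0):
    """Narrow the bounds using per-voxel estimates of the fraction of
    each PVE voxel occupied by the object (e.g. from an intensity model).
    Assumption (invented for illustration): each fractional estimate is
    accurate to +/- 0.1."""
    est = sum(pve_fractions)
    n = len(pve_fractions)
    lower = (n_interior + max(est - 0.1 * n, 0.0)) * voxel_volume
    upper = (n_interior + min(est + 0.1 * n, float(n))) * voxel_volume
    return lower, upper
```

With 100 interior voxels and 40 PVE voxels estimated half-full, the hard bounds span 40 voxel volumes while the narrowed bounds span 8, illustrating the order-of-magnitude reduction described above.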
42

Computer-aided analysis of fetal cardiac ultrasound videos

Bridge, Christopher January 2017 (has links)
This thesis addresses the task of developing automatic algorithms for analysing the two-dimensional ultrasound video footage obtained from fetal heart screening scans. These scans are typically performed in the second trimester of pregnancy to check for congenital heart anomalies and require significant training and anatomical knowledge to perform. The aim is to develop a tool that runs at high frame rates with no user initialisation and infers the visibility, position, orientation, view classification, and cardiac phase of the heart, and additionally the locations of cardiac structures of interest (such as valves and vessels) in a manner that is robust to the various sources of variation that occur in real-world ultrasound scanning. This is the first work to attempt such a detailed automated analysis of these videos. The problem is posed as a Bayesian filtering problem, which provides a principled framework for aggregating uncertain measurements across a number of frames whilst exploiting the constraints imposed by anatomical feasibility. The resulting inference problem is solved approximately with a particle filter, whose state space is partitioned to reduce the problems associated with filtering in high-dimensional spaces. Rotation-invariant features are captured from the videos in an efficient way in order to tackle the problem of unknown orientation. These are used within random forest learning models, including a novel formulation to predict circular-valued variables. The algorithm is validated on an annotated clinical dataset, and the results are compared to estimates of inter- and intra-observer variation, which are significant in both cases due to the inherent ambiguity in the imagery. The results suggest that the algorithm's output approaches these benchmarks in several respects, and falls slightly behind in others. The work presented here is an important first step towards developing automated clinical tools for the detection of congenital heart disease.
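The Bayesian filtering step can be illustrated with a minimal bootstrap particle filter. This is a generic sketch (scalar random-walk state, Gaussian observation noise), not the thesis's partitioned, rotation-invariant filter:

```python
import math
import random

def particle_filter(observations, n_particles=500, process_std=0.5, obs_std=1.0):
    """Minimal bootstrap particle filter: track a scalar state observed
    in Gaussian noise. Returns the posterior-mean estimate per frame."""
    # initialise particles from a broad prior
    particles = [random.gauss(0.0, 5.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # predict: propagate each particle through the motion model
        particles = [p + random.gauss(0.0, process_std) for p in particles]
        # update: weight particles by the observation likelihood
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # estimate: posterior mean over the weighted particle set
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # resample: draw particles in proportion to their weights
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates
```

The same predict/weight/resample loop carries over to the high-dimensional heart state, where partitioning the state space (as in the thesis) keeps the number of particles manageable.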
43

Random Regression Forests for Fully Automatic Multi-Organ Localization in CT Images / Localisation automatique et multi-organes d'images scanner : utilisation de forêts d'arbres décisionnels (Random Regression Forests)

Samarakoon, Prasad 30 September 2016 (has links)
La localisation d'un organe dans une image médicale en délimitant cet organe spécifique par rapport à une entité telle qu'une boite ou sphère englobante est appelée localisation d'organes. La localisation multi-organes a lieu lorsque plusieurs organes sont localisés simultanément. La localisation d'organes est l'une des étapes les plus cruciales qui est impliquée dans toutes les phases du traitement du patient, de la phase de diagnostic jusqu'à la phase finale de suivi. L'utilisation de la technique d'apprentissage supervisé appelée forêts aléatoires (Random Forests) a montré des résultats très encourageants dans de nombreuses sous-disciplines de l'analyse d'images médicales. De même, les Random Regression Forests (RRF), une spécialisation des forêts aléatoires pour la régression, ont produit des résultats de l'état de l'art pour la localisation automatique multi-organes. Bien que les RRF aient montré des résultats à l'état de l'art dans la localisation automatique de plusieurs organes, la nouveauté relative de cette méthode dans ce domaine soulève encore de nombreuses questions sur la façon d'optimiser ses paramètres pour une utilisation cohérente et efficace. Basé sur une connaissance approfondie des rouages des RRF, le premier objectif de cette thèse est de proposer une paramétrisation cohérente et automatique des RRF. Dans un second temps, nous étudions empiriquement l'hypothèse d'indépendance spatiale utilisée par les RRF. Enfin, nous proposons une nouvelle spécialisation des RRF appelée "Light Random Regression Forests" pour améliorer l'empreinte mémoire et l'efficacité calculatoire. / Locating an organ in a medical image by bounding that particular organ with respect to an entity such as a bounding box or sphere is termed organ localization. Multi-organ localization takes place when multiple organs are localized simultaneously. 
Organ localization is one of the most crucial steps involved in all phases of patient treatment, from the diagnosis phase to the final follow-up phase. The use of the supervised machine learning technique called random forests has shown very encouraging results in many sub-disciplines of medical image analysis. Similarly, Random Regression Forests (RRF), a specialization of random forests for regression, have produced state-of-the-art results for fully automatic multi-organ localization. Although RRF have produced state-of-the-art results in multi-organ localization, the relative novelty of the method in this field still raises numerous questions about how to optimize its parameters for consistent and efficient usage. The first objective of this thesis is to acquire a thorough knowledge of the inner workings of RRF. After achieving the above-mentioned goal, we proposed a consistent and automatic parametrization of RRF. Then, we empirically studied the spatial independence hypothesis used by RRF. Finally, we proposed a novel RRF specialization called Light Random Regression Forests that improves the memory footprint and computational efficiency of multi-organ localization.
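The regression-forest idea — many randomised trees whose predictions are averaged — can be sketched with depth-1 trees. This is a toy illustration only (real RRF use deep trees, many voxel features, and regress offsets to organ bounding-box walls rather than a scalar target); `RegressionStump` and `rrf_predict` are hypothetical names:

```python
import random

class RegressionStump:
    """One randomised regression tree of depth 1: fit on a bootstrap
    sample, split on a randomly chosen feature at its median value,
    and predict the mean target of the chosen leaf."""

    def fit(self, X, y):
        idx = [random.randrange(len(X)) for _ in X]   # bootstrap sample
        self.f = random.randrange(len(X[0]))          # random feature
        vals = sorted(x[self.f] for x in X)
        self.t = vals[len(vals) // 2]                 # median threshold
        left = [y[i] for i in idx if X[i][self.f] <= self.t]
        right = [y[i] for i in idx if X[i][self.f] > self.t]
        mean = sum(y) / len(y)
        self.left = sum(left) / len(left) if left else mean
        self.right = sum(right) / len(right) if right else mean
        return self

    def predict(self, x):
        return self.left if x[self.f] <= self.t else self.right

def rrf_predict(forest, x):
    """Forest prediction = average of the trees' individual votes."""
    return sum(tree.predict(x) for tree in forest) / len(forest)
```

In the localization setting, each "x" would be a feature vector computed around a CT voxel and each tree would vote for the displacement from that voxel to an organ's bounding box; averaging the votes gives the localization.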
44

Segmentation Methods for Medical Image Analysis : Blood vessels, multi-scale filtering and level set methods

Läthén, Gunnar January 2010 (has links)
Image segmentation is the problem of partitioning an image into meaningful parts, often consisting of an object and background. As an important part of many imaging applications, e.g. face recognition or tracking of moving cars and people, it is of general interest to design robust and fast segmentation algorithms. However, it is well accepted that there is no general method for solving all segmentation problems. Instead, the algorithms have to be highly adapted to the application in order to achieve good performance. In this thesis, we will study segmentation methods for blood vessels in medical images. The need for accurate segmentation tools in medical applications is driven by the increased capacity of the imaging devices. Common modalities such as CT and MRI generate images which simply cannot be examined manually, due to high resolutions and a large number of image slices. Furthermore, it is very difficult to visualize complex structures in three-dimensional image volumes without cutting away large portions of, perhaps important, data. Tools, such as segmentation, can aid the medical staff in browsing through such large images by highlighting objects of particular importance. In addition, segmentation in particular can output models of organs, tumors, and other structures for further analysis, quantification or simulation. We have divided the segmentation of blood vessels into two parts. First, we model the vessels as a collection of lines and edges (linear structures) and use filtering techniques to detect such structures in an image. Second, the output from this filtering is used as input for segmentation tools. Our contributions mainly lie in the design of a multi-scale filtering and integration scheme for detecting vessels of varying widths and the modification of optimization schemes for finding better segmentations than traditional methods do. 
We validate our ideas on synthetic images mimicking typical blood vessel structures, and show proof-of-concept results on real medical images.
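The multi-scale idea — filter at several widths and keep the strongest response — can be sketched in 1D. This is a hedged illustration, not the thesis's actual scheme: a negated second-derivative-of-Gaussian ridge kernel, applied at several scales with a σ² normalisation and maximised per sample, so vessels of different widths are picked up by whichever scale fits best:

```python
import math

def ridge_kernel(sigma):
    """Negated second derivative of a Gaussian ("Mexican hat"): a
    positive centre lobe of width ~sigma with negative flanks, so it
    responds maximally to bright ridges about sigma wide."""
    r = int(3 * sigma) + 1
    return [(1 - (x / sigma) ** 2) * math.exp(-x * x / (2 * sigma ** 2))
            for x in range(-r, r + 1)]

def convolve(signal, kernel):
    """Direct convolution with zero padding at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                acc += signal[idx] * kv
        out.append(acc)
    return out

def multiscale_ridge(signal, sigmas=(1.0, 2.0, 4.0)):
    """Per-sample maximum of scale-normalised ridge responses."""
    responses = [[sigma ** 2 * v for v in convolve(signal, ridge_kernel(sigma))]
                 for sigma in sigmas]
    return [max(col) for col in zip(*responses)]
```

In 2D or 3D the same integration-over-scales step is applied to Hessian-based line filters, and the combined response then drives the level set segmentation.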
45

Machine learning methods for brain tumor segmentation / Méthodes d'apprentissage automatique pour la segmentation de tumeurs au cerveau

Havaei, Seyed Mohammad January 2017 (has links)
Abstract : Malignant brain tumors are the second leading cause of cancer-related deaths in children under 20. There are nearly 700,000 people in the U.S. living with a brain tumor, and 17,000 people are likely to lose their lives to primary malignant brain and central nervous system tumors every year. To identify whether a patient has a brain tumor in a non-invasive way, an MRI scan of the brain is acquired and manually examined by an expert who looks for lesions (i.e. clusters of cells which deviate from healthy tissue). For treatment purposes, the tumor and its sub-regions are outlined in a procedure known as brain tumor segmentation. Although brain tumor segmentation is primarily done manually, it is very time-consuming and the segmentation is subject to variations both between observers and within the same observer. To address these issues, a number of automatic and semi-automatic methods have been proposed over the years to help physicians in the decision-making process. Methods based on machine learning have been subjects of great interest in brain tumor segmentation. With the advent of deep learning methods and their success in many computer vision applications such as image classification, these methods have also started to gain popularity in medical image analysis. In this thesis, we explore different machine learning and deep learning methods applied to brain tumor segmentation. / Résumé : Les tumeurs malignes au cerveau sont la deuxième cause principale de décès chez les enfants de moins de 20 ans. Il y a près de 700 000 personnes aux États-Unis vivant avec une tumeur au cerveau, et 17 000 personnes risquent chaque année de perdre la vie suite à une tumeur maligne primaire du cerveau ou du système nerveux central. Pour identifier de façon non invasive si un patient est atteint d'une tumeur au cerveau, une image IRM du cerveau est acquise et analysée à la main par un expert pour trouver des lésions (c.-à-d. un groupement de cellules qui diffère du tissu sain). Une tumeur et ses régions doivent être détectées à l'aide d'une segmentation pour aider son traitement. La segmentation de tumeur cérébrale est principalement faite à la main ; c'est une procédure qui demande beaucoup de temps, et les variations intra- et inter-expert pour un même cas sont importantes. Pour répondre à ces problèmes, de nombreuses méthodes automatiques et semi-automatiques ont été proposées ces dernières années pour aider les praticiens à prendre des décisions. Les méthodes basées sur l'apprentissage automatique ont suscité un fort intérêt dans le domaine de la segmentation des tumeurs cérébrales. L'avènement des méthodes de Deep Learning et leurs succès dans maintes applications telles que la classification d'images a contribué à mettre de l'avant le Deep Learning dans l'analyse d'images médicales. Dans cette thèse, nous explorons diverses méthodes d'apprentissage automatique et de Deep Learning appliquées à la segmentation des tumeurs cérébrales.
46

Apprentissage automatique pour simplifier l’utilisation de banques d’images cardiaques / Machine Learning for Simplifying the Use of Cardiac Image Databases

Margeta, Ján 14 December 2015 (has links)
L'explosion récente de données d'imagerie cardiaque a été phénoménale. L'utilisation intelligente des grandes bases de données annotées pourrait constituer une aide précieuse au diagnostic et à la planification de thérapie. En plus des défis inhérents à la grande taille de ces banques de données, elles sont difficilement utilisables en l'état. Les données ne sont pas structurées, le contenu des images est variable et mal indexé, et les métadonnées ne sont pas standardisées. L'objectif de cette thèse est donc le traitement, l'analyse et l'interprétation automatique de ces bases de données afin de faciliter leur utilisation par les spécialistes de cardiologie. Dans ce but, la thèse explore les outils d'apprentissage automatique supervisé, ce qui aide à exploiter ces grandes quantités d'images cardiaques et trouver de meilleures représentations. Tout d'abord, la visualisation et l'interprétation d'images est améliorée en développant une méthode de reconnaissance automatique des plans d'acquisition couramment utilisés en imagerie cardiaque. La méthode se base sur l'apprentissage par forêts aléatoires et par réseaux de neurones à convolution, en utilisant de larges banques d'images où les types de vues cardiaques sont préalablement établis. La thèse s'attache dans un deuxième temps au traitement automatique des images cardiaques, avec en perspective l'extraction d'indices cliniques pertinents. La segmentation des structures cardiaques est une étape clé de ce processus. À cet effet, une méthode basée sur les forêts aléatoires qui exploite des attributs spatio-temporels originaux pour la segmentation automatique dans des images 3D et 3D+t est proposée. En troisième partie, l'apprentissage supervisé de sémantique cardiaque est enrichi grâce à une méthode de collecte en ligne d'annotations d'usagers. 
Enfin, la dernière partie utilise l'apprentissage automatique basé sur les forêts aléatoires pour cartographier des banques d'images cardiaques, tout en établissant les notions de distance et de voisinage d'images. Une application est proposée afin de retrouver dans une banque de données, les images les plus similaires à celle d'un nouveau patient. / The recent growth of data in cardiac databases has been phenomenal. Clever use of these databases could help find supporting evidence for better diagnosis and treatment planning. In addition to the challenges inherent to the large quantity of data, the databases are difficult to use in their current state. Data coming from multiple sources are often unstructured, the image content is variable and the metadata are not standardised. The objective of this thesis is therefore to simplify the use of large databases for cardiology specialists with automated image processing, analysis and interpretation tools. The proposed tools are largely based on supervised machine learning techniques, i.e. algorithms which can learn from large quantities of cardiac images with ground-truth annotations and which automatically find the best representations. First, the inconsistent metadata are cleaned, and the interpretation and visualisation of images is improved by automatically recognising commonly used cardiac magnetic resonance imaging views from image content. The method is based on decision forests and convolutional neural networks trained on a large image dataset. Second, the thesis explores ways to use machine learning for extraction of relevant clinical measures (e.g. volumes and masses) from 3D and 3D+t cardiac images. New spatio-temporal image features are designed and classification forests are trained to learn how to automatically segment the main cardiac structures (left ventricle and left atrium) into voxel-wise label maps. 
Third, a web interface is designed to collect pairwise image comparisons and to learn how to describe the hearts with semantic attributes (e.g. dilation, kineticity). In the last part of the thesis, a forest-based machine learning technique is used to map cardiac images to establish distances and neighborhoods between images. One application is retrieval of the most similar images.
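The retrieval step described above reduces to nearest-neighbour search once a distance between image representations is available. A minimal stand-in using plain Euclidean distance on feature vectors (the thesis learns the distance with forests; `retrieve_most_similar` is a hypothetical helper):

```python
import math

def retrieve_most_similar(query_features, database, k=3):
    """Rank database entries, given as (id, feature-vector) pairs, by
    Euclidean distance to the query's feature vector and return the
    ids of the k closest entries."""
    scored = sorted(database, key=lambda item: math.dist(query_features, item[1]))
    return [item_id for item_id, _ in scored[:k]]
```

In practice the interesting part is the representation: with a learned, clinically meaningful distance, the same two-line retrieval surfaces patients whose hearts are similar in ways that matter for diagnosis.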
47

Représentation réduite de la segmentation et du suivi des images cardiaques pour l’analyse longitudinale de groupe / Reduced representation of segmentation and tracking in cardiac images for group-wise longitudinal analysis

Rohé, Marc-Michel 03 July 2017 (has links)
Cette thèse présente des méthodes d’imagerie pour l’analyse du mouvement cardiaque afin de permettre des statistiques groupées, un diagnostic automatique et une étude longitudinale. Ceci est réalisé en combinant des méthodes d’apprentissage et de modélisation statistique. En premier lieu, une méthode automatique de segmentation du myocarde est définie. Pour ce faire, nous développons une méthode de recalage très rapide basée sur des réseaux neuronaux convolutifs qui sont entraînés à apprendre le recalage cardiaque inter-sujet. Ensuite, nous intégrons cette méthode de recalage dans un pipeline de segmentation multi-atlas. Puis, nous améliorons des méthodes de suivi du mouvement cardiaque afin de définir des représentations à faible dimension. Deux méthodes différentes sont développées, l’une s’appuyant sur des sous-espaces barycentriques construits sur des images de référence de la séquence et une autre basée sur une représentation d’ordre réduit du mouvement avec des transformations polyaffines. Enfin, nous appliquons la représentation précédemment définie au problème du diagnostic et de l’analyse longitudinale. Nous montrons que ces représentations encodent des caractéristiques pertinentes permettant le diagnostic des patients atteints d’infarctus et de Tétralogie de Fallot ainsi que l’analyse de l’évolution dans le temps du mouvement cardiaque des patients atteints de cardiomyopathies ou d’obésité. Ces trois axes forment un cadre pour l’étude du mouvement cardiaque de bout en bout de l’acquisition des images médicales jusqu’à leur analyse automatique afin d’améliorer la prise de décision clinique grâce à un traitement personnalisé assisté par ordinateur. / This thesis presents image-based methods for the analysis of cardiac motion to enable group-wise statistics, automatic diagnosis and longitudinal study. This is achieved by combining advanced medical image processing with machine learning methods and statistical modelling. 
The first axis of this work is to define an automatic method for the segmentation of the myocardium. We develop a very fast registration method based on convolutional neural networks that is trained to learn inter-subject heart registration. Then, we embed this registration method into a multi-atlas segmentation pipeline. The second axis of this work is focused on the improvement of cardiac motion tracking methods in order to define relevant low-dimensional representations. Two different methods are developed, one relying on Barycentric Subspaces built on reference frames of the sequence, and another based on a reduced-order representation of the motion from polyaffine transformations. Finally, in the last axis, we apply the previously defined representation to the problem of diagnosis and longitudinal analysis. We show that these representations encode relevant features allowing the diagnosis of infarcted and Tetralogy of Fallot patients versus controls and the analysis of the evolution through time of the cardiac motion of patients with either cardiomyopathies or obesity. These three axes form an end-to-end framework for the study of cardiac motion, starting from the acquisition of the medical images to their automatic analysis. Such a framework could be used for diagnosis and therapy planning in order to improve the clinical decision making with a more personalised computer-aided medicine.
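The multi-atlas segmentation pipeline mentioned above ends with a label-fusion step; the classic variant is per-voxel majority voting across the registered atlases. A minimal sketch of that fusion step (the registration aligning the atlases to the target image is assumed already done, e.g. by the CNN method described above):

```python
from collections import Counter

def majority_vote_fusion(atlas_labels):
    """Fuse per-voxel labels from several registered atlases by majority
    vote. atlas_labels is a list of equally long label lists, one per
    atlas; returns the fused label for each voxel position."""
    fused = []
    for votes in zip(*atlas_labels):
        # most_common(1) returns [(label, count)] for the winning label
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused
```

More sophisticated fusion schemes weight each atlas's vote by its local registration quality, but the voting structure is the same.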
48

Radiomics risk modelling using machine learning algorithms for personalised radiation oncology

Leger, Stefan 18 June 2019 (has links)
One major objective in radiation oncology is the personalisation of cancer treatment. The implementation of this concept requires the identification of biomarkers, which precisely predict therapy outcome. Besides molecular characterisation of tumours, a new approach known as radiomics aims to characterise tumours using imaging data. In the context of the presented thesis, radiomics was established at OncoRay to improve the performance of imaging-based risk models. Two software-based frameworks were developed for image feature computation and risk model construction. A novel data-driven approach for the correction of intensity non-uniformity in magnetic resonance imaging data was evolved to improve image quality prior to feature computation. Further, different feature selection methods and machine learning algorithms for time-to-event survival data were evaluated to identify suitable algorithms for radiomics risk modelling. An improved model performance could be demonstrated using computed tomography data, which were acquired during the course of treatment. Subsequently, tumour sub-volumes were analysed and it was shown that the tumour rim contains the most relevant prognostic information compared to the corresponding core. The incorporation of such spatial diversity information is a promising way to improve the performance of risk models.

Contents:
1. Introduction
2. Theoretical background
   2.1. Basic physical principles of image modalities
        2.1.1. Computed tomography
        2.1.2. Magnetic resonance imaging
   2.2. Basic principles of survival analyses
        2.2.1. Semi-parametric survival models
        2.2.2. Full-parametric survival models
   2.3. Radiomics risk modelling
        2.3.1. Feature computation framework
        2.3.2. Risk modelling framework
   2.4. Performance assessments
   2.5. Feature selection methods and machine learning algorithms
        2.5.1. Feature selection methods
        2.5.2. Machine learning algorithms
3. A physical correction model for automatic correction of intensity non-uniformity in magnetic resonance imaging
   3.1. Intensity non-uniformity correction methods
   3.2. Physical correction model
        3.2.1. Correction strategy and model definition
        3.2.2. Model parameter constraints
   3.3. Experiments
        3.3.1. Phantom and simulated brain data set
        3.3.2. Clinical brain data set
        3.3.3. Abdominal data set
   3.4. Summary and discussion
4. Comparison of feature selection methods and machine learning algorithms for radiomics time-to-event survival models
   4.1. Motivation
   4.2. Patient cohort and experimental design
        4.2.1. Characteristics of patient cohort
        4.2.2. Experimental design
   4.3. Results of feature selection methods and machine learning algorithms evaluation
   4.4. Summary and discussion
5. Characterisation of tumour phenotype using computed tomography imaging during treatment
   5.1. Motivation
   5.2. Patient cohort and experimental design
        5.2.1. Characteristics of patient cohort
        5.2.2. Experimental design
   5.3. Results of computed tomography imaging during treatment
   5.4. Summary and discussion
6. Tumour phenotype characterisation using tumour sub-volumes
   6.1. Motivation
   6.2. Patient cohort and experimental design
        6.2.1. Characteristics of patient cohorts
        6.2.2. Experimental design
   6.3. Results of tumour sub-volumes evaluation
   6.4. Summary and discussion
7. Summary and further perspectives
8. Zusammenfassung
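Time-to-event survival models of the kind this thesis compares are conventionally assessed with Harrell's concordance index. A minimal pure-Python version of that standard metric (illustrative, not the thesis's evaluation code):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index. A pair (i, j) is comparable when the earlier
    time belongs to an observed event (events[i] == 1, not censored);
    it is concordant when the shorter survival time carries the higher
    predicted risk. 0.5 = chance level, 1.0 = perfect ranking."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5   # ties count half, by convention
    return concordant / comparable if comparable else 0.0
```

Because the C-index only uses the ranking of risk scores, it allows semi-parametric (Cox-type) and full-parametric survival models, as well as machine-learning risk models, to be compared on a common scale.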
49

Fourier Based Method for Simultaneous Segmentation and Nonlinear Registration

ATTA-FOSU, THOMAS 02 June 2017 (has links)
No description available.
50

Role of Elasticity in Respiratory and Cardiovascular Flow

Subramaniam, Dhananjay Radhakrishnan 23 July 2018 (has links)
No description available.
