  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Towards label-efficient deep learning for medical image analysis

Sun, Li 11 September 2024 (has links)
Deep learning methods have achieved state-of-the-art performance in various tasks of medical image analysis. However, this success relies heavily on the expensive and time-consuming collection of large quantities of labeled data, which is not always available. This dissertation investigates the use of self-supervised and generative methods to enhance the label efficiency of deep learning models for 3D medical image analysis. Unlike natural images, medical images contain consistent, domain-specific anatomical contexts, which can be exploited as self-supervision signals to pre-train the model. Furthermore, generative methods can be utilized to synthesize additional samples, thereby increasing sample diversity. In the first part of the dissertation, we introduce self-supervised learning frameworks that learn anatomy-aware and disease-related representations. To learn disease-related representations, we propose two domain-specific contrastive strategies that leverage anatomical similarity across patients to create hard negative samples, which incentivize learning fine-grained pathological features. To learn anatomy-aware representations, we develop a novel 3D convolutional layer whose kernels are conditionally parameterized based on anatomical location. We perform extensive experiments on large-scale datasets of CT scans, which show that our method improves performance on many downstream tasks. In the second part of the dissertation, we introduce generative models capable of synthesizing high-resolution, anatomy-guided 3D medical images. Current generative models are typically limited to low-resolution outputs due to memory constraints, despite clinicians' need for high-resolution detail in diagnosis. To overcome this, we present a hierarchical architecture that efficiently manages memory demands, enabling the generation of high-resolution images.
In addition, diffusion-based generative models are becoming more prevalent in medical imaging. However, existing state-of-the-art methods often under-utilize the extensive information found in radiology reports and anatomical structures. To address these limitations, we propose a text-guided 3D image diffusion model that preserves anatomical details. We conduct experiments on downstream tasks and blind evaluation by radiologists, which demonstrate the clinical value of our proposed methodologies.
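A convolutional layer whose kernels are conditionally parameterized on anatomical location, as described above, can be illustrated with a minimal numpy sketch. The mixture-of-base-kernels formulation, the function names, and all shapes below are illustrative assumptions, not the dissertation's actual layer design:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def conditional_kernel(bases, location_embedding, routing):
    """Mix a bank of base kernels into one kernel whose weights depend
    on an encoding of the voxel's anatomical position (hypothetical
    formulation, in the spirit of conditionally parameterized convs).

    bases:              (n_bases, k, k, k) bank of 3D kernel weights
    location_embedding: (d,) encoding of the anatomical location
    routing:            (n_bases, d) routing weights (learned in practice)
    """
    alpha = softmax(routing @ location_embedding)  # (n_bases,), sums to 1
    # Convex combination of base kernels -> location-specific kernel
    return np.tensordot(alpha, bases, axes=1)
```

With a uniform routing output, the result is simply the average of the base kernels; a trained router would instead specialize the kernel per anatomical region.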
142

Deep Learning Informed Assistive Technologies for Biomedical and Human Activity Applications

Bayat, Nasrin 01 January 2024 (has links) (PDF)
This dissertation presents a comprehensive exploration and implementation of attention mechanisms and transformers in several healthcare-related and assistive applications. The overarching goal is to demonstrate successful implementation of state-of-the-art approaches and to provide validated models, with their superior performance, to inform future research and development. In Chapter 1, attention mechanisms are harnessed for the fine-grained classification of white blood cells (WBCs), showcasing their efficacy in medical diagnostics. The proposed multi-attention framework ensures accurate WBC subtype classification by capturing discriminative features from various layers, leading to superior performance compared with approaches used in previous work. More importantly, the attention-based method showed consistently better results than its attention-free counterpart in all three backbone architectures tested (ResNet, XceptionNet, and EfficientNet). Chapter 2 introduces a self-supervised framework leveraging vision transformers for object detection and semantic segmentation, together with custom algorithms for collision prediction, in an assistive technology for the visually impaired. In addition, a multimodal sensory feedback system was designed and fabricated to convey environmental information and potential collisions to the user for real-time navigation and grasping assistance. Chapter 3 presents a transformer-based method for operation-relevant human activity recognition (HAR) and demonstrates its performance over another deep learning model, the long short-term memory (LSTM) network. In addition, feature engineering (principal component analysis) was used to extract the most discriminative and representative motion features from the instrumented sensors, indicating that joint angle features are more important than body segment orientations.
Further, a minimal number and placement of wearable sensors for use in real-world data collection and activity recognition were identified, addressing a critical gap in the field and enhancing the practicality and utility of wearable sensors for HAR. The premise and efficacy of attention-based mechanisms and transformers were confirmed through their demonstrated classification accuracy compared with the LSTM. These research outcomes from three distinct applications, with demonstrated performance over existing models and methods, support the utility and applicability of attention-based mechanisms and transformers across various biomedical and human activity research fields. Sharing the custom-designed model architectures, implementation methods, and resulting classification performance has a direct impact on the related fields by allowing direct adoption and implementation of the developed methods.
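The core idea behind attention-based feature aggregation for fine-grained classification can be sketched in a few lines of numpy. This is a generic attention-pooling operation, not the dissertation's multi-attention framework; the scoring vector `w` stands in for weights that would be learned:

```python
import numpy as np

def attention_pool(features, w):
    """Attention-weighted pooling over spatial positions.

    features: (n, d) one d-dimensional feature per spatial location
    w:        (d,) scoring vector (learned in a real model)
    Returns the pooled (d,) descriptor and the (n,) attention map.
    """
    scores = features @ w
    scores = scores - scores.max()              # numerical stability
    att = np.exp(scores) / np.exp(scores).sum() # softmax over locations
    return att @ features, att
```

Locations with higher scores contribute more to the pooled descriptor, which is what lets the classifier focus on discriminative cell regions rather than averaging them away.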
143

Pruning of U-Nets : For Faster and Smaller Machine Learning Models in Medical Image Segmentation

Hassler, Ture January 2024 (has links)
Accurate medical image segmentation is crucial for safely and effectively administering radiation therapy in cancer treatment. State-of-the-art methods for automatic segmentation of 3D images are currently based on the U-net machine learning architecture. Current U-net models are large, often containing millions of parameters. However, the size of these models can be decreased by removing parts of them, in what is called pruning. One algorithm, called simultaneous training and pruning (STAMP), has been shown to reduce model sizes by upwards of 80% while keeping similar or higher levels of performance on medical image segmentation tasks. This thesis investigates the impact of using the STAMP algorithm to reduce model size and inference time for medical image segmentation on 3D images, using one MRI and two CT datasets. Surprisingly, we show that pruning convolutional filters randomly achieves performance comparable to, if not better than, STAMP, provided that the filters are always removed from the largest parts of the U-net. Inspired by these results, a modified "Flat U-net" is proposed, in which an equal number of convolutional filters is used in all parts of the U-net, similar to what was obtained after pruning with our simplified pruning algorithm. The modified U-net achieves test Dice scores similar to both a regular U-net and the STAMP pruning algorithm on multiple datasets, while avoiding pruning altogether. In addition, the proposed modification reduces the model size by more than a factor of 12, and the number of computations by around 35%, compared to a normal U-net with the same number of input-layer convolutional filters.
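Filter pruning of the kind discussed above can be sketched with a simple magnitude criterion: drop the convolutional filters with the smallest L2 norm. Note this is a common stand-in for illustration only; STAMP interleaves pruning with training and uses its own selection criterion:

```python
import numpy as np

def prune_filters(weights, fraction):
    """Remove the conv filters with the smallest L2 norm (magnitude
    pruning, a simplified stand-in for STAMP-style pruning).

    weights:  (n_filters, in_ch, k, k) conv weight tensor
    fraction: fraction of filters to remove
    Returns (pruned_weights, kept_indices).
    """
    norms = np.sqrt((weights ** 2).sum(axis=(1, 2, 3)))
    n_keep = weights.shape[0] - int(round(fraction * weights.shape[0]))
    # Keep the n_keep filters with the largest norm, in original order
    kept = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[kept], kept
```

In a real U-net, the corresponding input channels of the next layer must be removed as well, which is where most of the bookkeeping in pruning implementations goes.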
144

Image Processing Methods for Myocardial Scar Analysis from 3D Late-Gadolinium Enhanced Cardiac Magnetic Resonance Images

Usta, Fatma 25 July 2018 (has links)
Myocardial scar, non-viable tissue that forms in the myocardium due to insufficient blood supply to the heart muscle, is one of the leading causes of life-threatening heart disorders, including arrhythmias. Analysis of myocardial scar is important for predicting the risk of arrhythmia and the locations of re-entrant circuits in patients' hearts. For applications such as computational modeling of cardiac electrophysiology aimed at stratifying patient risk for post-infarction arrhythmias, reconstruction of the intact geometry of the scar is required. Currently, 2D multi-slice late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) is widely used to detect and quantify myocardial scar regions of the heart. However, due to the anisotropic spatial dimensions of 2D LGE-MR images, creating scar geometry from these images results in substantial reconstruction errors. For applications requiring reconstruction of the intact geometry of scar surfaces, 3D LGE-MR images are better suited, as they are isotropic in voxel dimensions and have a higher resolution. While many techniques have been reported for segmentation of scar in 2D LGE-MR images, equivalent studies for 3D LGE-MRI are limited. Most of these 2D and 3D techniques are basic intensity-threshold-based methods. However, for lack of an optimal threshold value, such methods are not robust in dealing with complex scar segmentation problems. In this study, we propose an algorithm for segmentation of myocardial scar from 3D LGE-MR images based on a Markov random field-based continuous max-flow (CMF) method. We use the segmented myocardium as the region of interest for our algorithm. We evaluated the accuracy of our CMF method by comparing its results to manual delineations on 3D LGE-MR images of 34 patients. We also compared the results of the CMF technique to those of the conventional full-width-at-half-maximum (FWHM) and signal-threshold-to-reference-mean (STRM) methods.
The CMF method yields a Dice similarity coefficient (DSC) of 71 ± 8.7% and an absolute volume error (|VE|) of 7.56 ± 7 cm³. Overall, the CMF method outperformed the conventional methods on almost all reported metrics. We also present a comparison study of scar geometries obtained from 2D versus 3D LGE-MRI. Because myocardial scar geometry greatly influences the sensitivity of risk prediction, we compare the reconstructed geometries of scar generated from 2D versus 3D LGE-MR images in addition to the segmentation study. We use a retrospectively acquired dataset of 24 patients with myocardial scar who underwent both 2D and 3D LGE-MR imaging, with manually segmented scar volumes from both. We then reconstruct the 2D scar segmentation boundaries into 3D surfaces using a LogOdds-based interpolation method. We quantify and analyze the scar geometry using several metrics, including fractal dimension, the number of connected components, and mean volume difference. The higher fractal dimension of the 3D results indicates that 3D LGE-MRI produces a more complex surface geometry that better captures the sparse nature of the scar. Finally, 3D LGE-MRI produces a larger scar surface volume (27.49 ± 20.38 cm³) than 2D-reconstructed LGE-MRI (25.07 ± 16.54 cm³).
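The FWHM baseline and the Dice metric mentioned above are both simple to state. A minimal numpy sketch of the standard formulations (simplified to a single threshold over the myocardial region of interest; the thesis's CMF method is far more involved):

```python
import numpy as np

def fwhm_scar_mask(image, myocardium_mask):
    """Full-width-at-half-maximum scar segmentation: label as scar the
    myocardial voxels brighter than half the maximal myocardial intensity."""
    roi = image[myocardium_mask]
    threshold = 0.5 * roi.max()
    return myocardium_mask & (image > threshold)

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

The method's fragility is visible here: a single bright outlier voxel in the myocardium shifts the threshold for the entire scan, which is one motivation for the optimization-based approach.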
145

Deep Neural Network for Classification of H&E-stained Colorectal Polyps : Exploring the Pipeline of Computer-Assisted Histopathology

Brunzell, Stina January 2024 (has links)
Colorectal cancer is one of the most prevalent malignancies globally, and the recent introduction of digital pathology enables the use of machine learning as an aid for fast diagnostics. This project aimed to develop a deep neural network model to identify and differentiate dysplasia in the epithelium of colorectal polyps, posed as a binary classification problem. The available dataset consisted of 80 whole-slide images of different H&E-stained polyp sections, which were divided into smaller patches annotated by a pathologist. The best-performing model was a pre-trained ResNet-18 fine-tuned with a weighted sampler, weight decay, and augmentation. Reaching an area under the precision-recall curve of 0.9989 and 97.41% accuracy on previously unseen data, the model's performance fell short of the task's intra-observer variability but was in line with its inter-observer variability. The final model is publicly available at https://github.com/stinabr/classification-of-colorectal-polyps.
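The weighted sampler mentioned above typically draws each patch with probability inversely proportional to its class frequency, so that rare dysplastic patches are seen as often as normal ones during fine-tuning. A minimal sketch of that weight computation (the exact scheme used in the thesis is an assumption here):

```python
import numpy as np

def sampler_weights(labels):
    """Per-sample weights inversely proportional to class frequency, so
    a weighted sampler draws each class with equal total probability."""
    labels = np.asarray(labels)
    counts = np.bincount(labels)   # patches per class
    return 1.0 / counts[labels]    # weight for each individual patch
```

These weights would then be handed to a sampler (e.g. PyTorch's `WeightedRandomSampler`) so each minibatch is approximately class-balanced.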
146

Deformable lung registration for pulmonary image analysis of MRI and CT scans

Heinrich, Mattias Paul January 2013 (has links)
Medical imaging has seen a rapid development in its clinical use in assessment of treatment outcome, disease monitoring and diagnosis over the last few decades. Yet, the vast amount of available image data limits the practical use of this potentially very valuable source of information for radiologists and physicians. Therefore, the design of computer-aided medical image analysis is of great importance to imaging in clinical practice. This thesis deals with the problem of deformable image registration in the context of lung imaging, and addresses three of its major challenges, namely: designing an image similarity measure for multi-modal scans or scans of locally changing contrast; modelling complex lung motion, which includes sliding motion; and approximately globally optimal mathematical optimisation to deal with large motion of small anatomical features. The two most important contributions made in this thesis are: the formulation of a multi-dimensional structural image representation, which is independent of modality, robust to intensity distortions and very discriminative for different image features; and a discrete optimisation framework, based on an image-adaptive graph structure, which enables very efficient optimisation of large dense displacement spaces and deals well with sliding motion. The derived methods are applied to two different clinical applications in pulmonary image analysis: motion correction for breathing-cycle computed tomography (CT) volumes, and deformable multi-modal fusion of CT and magnetic resonance imaging chest scans. The experimental validation demonstrates improved registration accuracy, a high quality of the estimated deformations, and much lower computational complexity, all compared to several state-of-the-art deformable registration techniques.
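The "large dense displacement space" idea above can be illustrated in its most naive form: for one patch, score every integer displacement in a search window and keep the best. This sketch uses plain SSD and decouples patches, both simplifications; the thesis's framework uses a modality-independent structural representation and couples neighbouring patches on an image-adaptive graph:

```python
import numpy as np

def best_displacement(fixed, moving, center, radius, patch=3):
    """Exhaustive search over a dense 2D integer displacement space for
    one patch, minimizing sum-of-squared-differences (illustrative only)."""
    y, x = center
    h = patch // 2
    ref = fixed[y - h:y + h + 1, x - h:x + h + 1]
    best_cost, best_disp = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = moving[y + dy - h:y + dy + h + 1,
                          x + dx - h:x + dx + h + 1]
            cost = ((ref - cand) ** 2).sum()
            if cost < best_cost:
                best_cost, best_disp = cost, (dy, dx)
    return best_disp
```

The cost of this brute force grows with the cube of the search radius in 3D, which is why efficient discrete optimisation over the displacement space matters.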
147

Automatic Brain Segmentation into Substructures Using Quantitative MRI

Stacke, Karin January 2016 (has links)
Segmentation of the brain into sub-volumes has many clinical applications. Many neurological diseases are connected with brain atrophy (tissue loss). By dividing the brain into smaller compartments, volume comparisons between the compartments can be made, as well as monitoring of local volume changes over time. The former is especially interesting for the left and right cerebral hemispheres, due to their symmetric appearance. By using automatic segmentation, the time-consuming step of manually labelling the brain is removed, allowing for larger-scale research. In this thesis, three automatic methods for segmenting the brain from magnetic resonance (MR) images are implemented and evaluated. Since none of the evaluated methods resulted in segmentations good enough to be clinically relevant, a novel segmentation method, called SB-GC (shape bottleneck detection incorporated in graph cuts), is also presented. SB-GC uses quantitative MRI data as input, together with shape bottleneck detection and graph cuts, to segment the brain into the left and right cerebral hemispheres, the cerebellum and the brain stem. SB-GC shows promise of highly accurate and repeatable results for both healthy adult brains and more challenging cases such as children and brains containing pathologies.
148

Guidance and Visualization for Brain Tumor Surgery

Maria Marreiros, Filipe Miguel January 2016 (has links)
Image guidance and visualization play an important role in modern surgery to help surgeons perform their surgical procedures. Here, the focus is on neurosurgery applications, in particular brain tumor surgery where a craniotomy (opening of the skull) is performed to access directly the brain region to be treated. In this type of surgery, once the skull is opened the brain can change its shape, and this deformation is known as brain shift. Moreover, the boundaries of many types of tumors are difficult to identify by the naked eye from healthy tissue. The main goal of this work was to study and develop image guidance and visualization methods for tumor surgery in order to overcome the problems faced in this type of surgery. Due to brain shift the magnetic resonance dataset acquired before the operation (preoperatively) no longer corresponds to the anatomy of the patient during the operation (intraoperatively). For this reason, in this work methods were studied and developed to compensate for this deformation. To guide the deformation methods, information of the superficial vessel centerlines of the brain was used. A method for accurate (approximately 1 mm) reconstruction of the vessel centerlines using a multiview camera system was developed. It uses geometrical constraints, relaxation labeling, thin plate spline filtering and finally mean shift to find the correct correspondences between the camera images. A complete non-rigid deformation pipeline was initially proposed and evaluated with an animal model. From these experiments it was observed that although the traditional non-rigid registration methods (in our case coherent point drift) were able to produce satisfactory vessel correspondences between preoperative and intraoperative vessels, in some specific areas the results were suboptimal. For this reason a new method was proposed that combined the coherent point drift and thin plate spline semilandmarks. 
This combination resulted in an accurate (below 1 mm) non-rigid registration method, evaluated on simulated data with artificial deformations. Besides the non-rigid registration methods, a new rigid registration method was also developed to obtain the rigid transformation between the magnetic resonance dataset and the neuronavigation coordinate system. Once the rigid transformation and the vessel correspondences are known, the thin plate spline can be used to perform the brain shift deformation. To do so, we used two approaches, a direct and an indirect one. With the direct approach, an image is created that directly represents the deformed data; with the indirect approach, a new volume is first constructed, and only then is the deformed image created. The two approaches, both implemented for graphics processing units, were compared in terms of performance and image quality. The indirect method was superior in performance when the sampling along the ray was dense relative to the voxel grid, while the direct method was superior otherwise. The image quality analysis seemed to indicate that the direct method is superior. Furthermore, visualization studies were performed to understand how different rendering methods and parameters influence the perception of the spatial position of enclosed objects (the typical situation of a tumor enclosed in the brain). To test these methods, a new single-monitor-mirror stereoscopic display was constructed. Using this display, stereo images simulating a tumor inside the brain were presented to users with two rendering methods (illustrative rendering and simple alpha blending) and different levels of opacity. For simple alpha blending an optimal opacity level was found, while for illustrative rendering all the opacity levels tested performed similarly.
In conclusion, this work developed and evaluated 3D reconstruction, registration (rigid and non-rigid) and deformation methods with the purpose of minimizing the brain shift problem. Stereoscopic perception of the spatial position of enclosed objects was also studied using different rendering methods and parameter values.
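The thin plate spline deformation used throughout the work above has a compact closed form: given matched control points (here, vessel correspondences), one linear solve yields a smooth interpolating warp. A minimal 2D numpy sketch of the standard TPS fit (the thesis applies the same machinery in 3D with semilandmarks):

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin plate spline mapping src control points to dst.
    Returns a function that warps arbitrary (n, 2) point arrays."""
    def U(r2):  # TPS radial basis, r^2 * log(r), with U(0) = 0
        with np.errstate(divide="ignore", invalid="ignore"):
            out = 0.5 * r2 * np.log(r2)
        return np.nan_to_num(out)

    n = len(src)
    d2 = ((src[:, None] - src[None, :]) ** 2).sum(-1)
    K = U(d2)
    P = np.hstack([np.ones((n, 1)), src])      # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    coef = np.linalg.solve(A, b)               # one linear solve
    w, a = coef[:n], coef[n:]

    def warp(pts):
        r2 = ((pts[:, None] - src[None, :]) ** 2).sum(-1)
        return U(r2) @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
    return warp
```

By construction the warp passes exactly through the control points and is the smoothest (minimum bending energy) interpolant doing so, which is why it is a natural choice for brain shift compensation from sparse vessel matches.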
149

Automatic Detection of Anatomical Landmarks in Three-Dimensional MRI

Järrendahl, Hannes January 2016 (has links)
Detection and positioning of anatomical landmarks, also called points of interest (POI), is a recurring task in medical image processing. Various measurements and automatic image analyses are based directly on the positions of such points, e.g. in organ segmentation or tissue quantification. Manual positioning of these landmarks is a time-consuming and resource-demanding process. In this thesis, a general method for positioning of anatomical landmarks is outlined, implemented and evaluated. The evaluation of the method is limited to three POI: the left femoral head, the right femoral head and vertebra T9. These POI are used to define the extent of the abdomen in order to measure the amount of abdominal fat in 3D data acquired with quantitative magnetic resonance imaging (MRI). With more detailed information about abdominal fat composition, medical diagnoses can be issued with higher confidence. Example applications include identifying patients at high risk of developing metabolic or catabolic disease and characterizing the effects of different interventions, e.g. training, bariatric surgery and medication. The proposed method is shown to be highly robust and accurate for positioning of the left and right femoral heads. Due to insufficient performance on T9 detection, a modified method is proposed for T9 positioning. The modified method shows promise of accurate and repeatable results but has to be evaluated more extensively before further conclusions can be drawn.
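One of the simplest baselines for landmark positioning is template matching: slide a reference patch over the image and report the location of the best match. The abstract does not specify the thesis's actual detector, so the sketch below is purely a generic 2D baseline for orientation:

```python
import numpy as np

def detect_landmark(image, template):
    """Locate a landmark as the top-left position minimizing the
    sum-of-squared-differences between a reference template and every
    same-sized window of the image (brute-force baseline)."""
    th, tw = template.shape
    best_cost, best_pos = np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            cost = ((image[y:y + th, x:x + tw] - template) ** 2).sum()
            if cost < best_cost:
                best_cost, best_pos = cost, (y, x)
    return best_pos
```

Real anatomical landmark detectors typically replace the raw-intensity template with learned features and add a shape or atlas prior, precisely because appearance alone is ambiguous for structures like vertebrae.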
150

Processamento e análise de imagens histológicas de pólipos para o auxílio ao diagnóstico de câncer colorretal / Processing and analysis of histological images of polyps to aid in the diagnosis of colorectal cancer

Lopes, Antonio Alex 22 March 2019 (has links)
According to the Brazilian National Cancer Institute (INCA), colorectal cancer is the third most common cancer among men and the second most common among women. Currently, visual evaluation by a pathologist is the main method used for diagnosing disease from microscopic images of samples obtained in conventional biopsy exams. The use of computational image processing techniques enables the identification of elements and the extraction of features, which contributes to the study of the structural organization of tissues and their pathological variations, increasing precision in the decision-making process. Concepts and techniques involving complex networks are valuable resources for developing methods of structural analysis of components in medical images. From this perspective, the general objective of this work was to develop a method capable of processing and analyzing images obtained from biopsies of colon polyp tissue in order to classify the degree of atypia of the sample, which may be: without atypia, low grade, high grade or cancer. Processing techniques, including a set of morphological operators, were used to segment and identify glandular structures. Next, a structural analysis based on the identified glands was performed using complex network techniques. The networks were created by transforming the nuclei of the cells that make up the glands into vertices, connecting them with 1 to 20 edges, and extracting network measurements to create a feature vector.
In order to comparatively evaluate the proposed method, classical image feature extractors were used, namely Haralick descriptors, Hu moments, the Hough transform and SampEn2D. After evaluating the proposed method in different analysis scenarios, its overall accuracy was 82.0%, surpassing the classical methods. It is concluded that the proposed method for classifying histological images of polyps using structural analysis based on complex networks is promising for increasing the accuracy of colorectal cancer diagnosis.
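The nuclei-to-network construction described above can be sketched minimally: treat each nucleus position as a vertex, connect nearby pairs, and summarize the graph with a few measurements. The radius-based connection rule and the particular statistics below are illustrative simplifications of the thesis's 1-to-20-edge scheme:

```python
import numpy as np

def graph_features(points, radius):
    """Build a network from nuclei positions (vertices) by connecting
    every pair closer than `radius`, then summarize it with simple
    degree statistics as a feature vector."""
    d = np.sqrt(((points[:, None] - points[None, :]) ** 2).sum(-1))
    adj = (d < radius) & ~np.eye(len(points), dtype=bool)
    deg = adj.sum(axis=1)
    return {
        "mean_degree": deg.mean(),
        "max_degree": int(deg.max()),
        "density": adj.sum() / (len(points) * (len(points) - 1)),
    }
```

Repeating the extraction at several connection thresholds, as the thesis does with 1 to 20 edges, turns one tissue sample into a multi-scale feature vector suitable for a classifier.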
