61

Development of a Remote Medical Image Browsing and Interaction System

Ye, Wei 09 July 2010 (has links)
No description available.
62

An Analysis of Context Channel Integration Strategies for Deep Learning-Based Medical Image Segmentation / Strategier för kontextkanalintegrering inom djupinlärningsbaserad medicinsk bildsegmentering

Stoor, Joakim January 2020 (has links)
This master's thesis investigates different approaches for integrating prior information into a neural network for segmentation of medical images. In the study, liver and liver tumor segmentation is performed in a cascading fashion. Context channels in the form of previous segmentations are integrated into a segmentation network at multiple positions and network depths using different integration strategies. Comparisons are made with the traditional integration approach, where an input image is concatenated with context channels at the network's input layer. The aim is to analyze whether context information is lost in the upper network layers when the traditional approach is used, and whether better results can be achieved if prior information is propagated to deeper layers. The intention is to support further improvements in interactive image segmentation, where extra input channels are common. The results, however, are inconclusive. The methods cannot be differentiated from each other based on the quantitative results, and all of them show the ability to generalize to an unseen object class after training. Compared to the other evaluated methods, there is no indication that the traditional concatenation approach underperforms, and it cannot be concluded that meaningful context information is lost in the deeper network layers.
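As an illustration of the two integration strategies compared in this abstract, the following PyTorch-style sketch contrasts the traditional input-layer concatenation with re-injecting the context channel at a deeper encoder stage. It is a minimal, hypothetical example: the layer sizes, channel counts, and the liver-mask context are assumptions, not taken from the thesis.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Minimal encoder sketch: a context channel (e.g. a previous segmentation)
    is concatenated at the input and, optionally, re-injected at a deeper stage."""
    def __init__(self, deep_context: bool = True):
        super().__init__()
        self.deep_context = deep_context
        # Traditional strategy: image (1 ch) + context mask (1 ch) at the input layer
        self.enc1 = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU())
        in_ch = 32 + 1 if deep_context else 32
        self.enc2 = nn.Sequential(nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, image, context):
        x = self.enc1(torch.cat([image, context], dim=1))  # input-level fusion
        if self.deep_context:
            # Deeper integration strategy: concatenate the context again with the
            # feature maps (same resolution at this stage, so no resampling needed)
            x = torch.cat([x, context], dim=1)
        return self.enc2(x)

# Example: two 256x256 CT slices with a previous liver mask as the context channel
img = torch.randn(2, 1, 256, 256)
ctx = torch.randint(0, 2, (2, 1, 256, 256)).float()
print(ContextEncoder(deep_context=True)(img, ctx).shape)  # torch.Size([2, 64, 128, 128])
```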
63

ASSESSMENT OF HIP FRACTURE RISK IN OLDER ADULTS BY CONSIDERING THE EFFECT OF GEOMETRY AND BONE MINERAL DENSITY DISTRIBUTION IN THE FEMUR USING SINGLE DUAL-ENERGY X-RAY ABSORPTIOMETRY SCANS / ASSESSMENT OF HIP FRACTURE RISK IN OLDER ADULTS

JAZINIZADEH, FATEMEH January 2020 (has links)
Hip fractures in older adults have severe effects on patients' morbidity as well as mortality, so it is crucial to avoid this injury through the early identification of patients at high risk. Currently, the diagnosis of osteoporosis, and consequently of hip fracture risk, is made through the measurement of bone mineral density by a dual-energy X-ray absorptiometry (DXA) scan. However, studies show that this method is not accurate enough, and a high percentage of patients who sustain a hip fracture had non-osteoporotic DXA scans less than a year before the fracture. In this research, to enhance hip fracture risk prediction, the effects of a femur's geometry and bone mineral density distribution were considered in the hip fracture risk estimation. This was done through 2D and 3D statistical shape and appearance modeling of the proximal femur using standard clinical DXA scans. To assess the proposed techniques, destructive mechanical tests were performed on 16 isolated cadaveric femurs. In addition, through collaboration with the Canadian Multicentre Osteoporosis Study (CaMos), the proposed statistical techniques for predicting hip fracture risk were evaluated in a clinical population as well. The results of this study showed that the new techniques can enhance hip fracture risk estimation; in the clinical study, 2D and 3D statistical modeling improved the identification of patients at high risk by 40% and 44%, respectively, over the clinical standard method. Moreover, the percentage of correct predictions using 2D statistical models did not differ significantly from the 3D predictions. Therefore, by applying these techniques in clinical practice, it could be possible to identify patients at high risk of sustaining a hip fracture more accurately and eventually reduce the incidence of hip fractures and the pain and the social and economic burden that come with them. / Thesis / Doctor of Philosophy (PhD) / Diagnosis of osteoporosis, and consequently of hip fracture risk, is based on the measurement of bone mineral density in a clinical imaging procedure called DXA scanning. However, studies have shown that this method is not sufficient to identify all patients at high risk of sustaining a hip fracture. The purpose of this work was to incorporate the geometry and bone mineral density distribution of the proximal femur into hip fracture risk prediction through image processing of DXA scans. Two algorithms, 2D and 3D statistical shape and appearance modeling, were implemented and evaluated in a cadaveric study (comparing predicted fracture loads to measured ones) as well as a clinical study (comparing fracture predictions to the fracture history of patients). The results indicated that the new techniques can enhance hip fracture risk estimation compared to the clinical standard method, and hence this devastating injury could be prevented by applying protective measures.
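For readers unfamiliar with statistical shape and appearance modelling, the sketch below illustrates the core idea with a plain PCA over feature vectors built from landmark coordinates and sampled density values. It is a simplified stand-in under assumed data shapes, not the 2D/3D models implemented in the thesis.

```python
import numpy as np

# Hypothetical training data: each row describes one femur, built from DXA-derived
# landmark coordinates concatenated with sampled bone-density values.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))          # 40 training femurs, 500 features each

mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

k = 5                                    # keep the first few modes of variation
modes = Vt[:k]                           # principal shape/appearance modes
explained = (s[:k] ** 2) / (s ** 2).sum()

# Project a new femur onto the model and reconstruct it from its mode scores;
# the compact score vector is the kind of descriptor a risk model could use.
new_femur = rng.normal(size=500)
scores = modes @ (new_femur - mean)
reconstruction = mean + scores @ modes

print(explained.round(3), scores.round(2))
```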
64

AI MEET BIOINFORMATICS: INTERPRETING BIOMEDICAL DATA USING DEEP LEARNING

Ziyang Tang (6593525) 20 May 2024 (has links)
Artificial intelligence approaches, especially those based on deep learning, provide an alternative way of summarizing common features in large-scale, complex datasets and aid human professionals in discovering novel features in cross-domain research. In this dissertation, the author develops AI-driven algorithms to reveal latent relations in complex medical data: first identifying abnormal structures in radiology images, then modelling the domain layers and cell phenotypes of specific tissues, and finally evaluating cell-cell communication for downstream tasks.

In the first study, the author applied IResNet, a two-stage prediction-interpretation convolutional neural network, to assist clinicians in the early diagnosis of Autism Spectrum Disorder (ASD). IResNet first classifies an input sMRI scan into one of two categories, the ASD group or the normal control group, then interprets the prediction with a post-hoc approach and visualizes the abnormal structures on top of the raw input. The proposed method can also be applied to other neurological diseases such as Alzheimer's disease.

Once an abnormal structure is detected, the author proposes a method to reveal latent relations at the tissue level: SiGra, an unsupervised learning paradigm that identifies domain layers and cellular phenotypes in a tissue slide from the corresponding gene expression matrix and morphology representations. SiGra outperformed other benchmarking algorithms on three tissue slides from three commercial single-cell platforms.

Finally, the author measured potential interactions between cells. The proposed spaCI measures the correlation of a ligand-receptor interaction in a high-dimensional latent space and predicts interactive L-R pairs for downstream analysis.

In summary, the author presents three end-to-end AI-driven frameworks that help clinicians and pathologists better understand the latent connections of complex diseases and tissues.
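To make the "prediction-interpretation" idea concrete, here is a generic occlusion-sensitivity sketch: it shows one common post-hoc way to visualize which regions drive a classifier's output. It is only an illustration with a toy model; it is not IResNet's actual interpretation method, and the patch size and class indexing are assumptions.

```python
import torch

def occlusion_map(model, scan, patch=8, baseline=0.0):
    """Generic post-hoc interpretation sketch: slide an occluding patch over the
    input and record how much the target-class score drops."""
    model.eval()
    with torch.no_grad():
        ref = model(scan)[0, 1].item()          # score of the hypothetical "ASD" class
        heat = torch.zeros(scan.shape[-2:])
        for y in range(0, scan.shape[-2], patch):
            for x in range(0, scan.shape[-1], patch):
                occluded = scan.clone()
                occluded[..., y:y + patch, x:x + patch] = baseline
                heat[y:y + patch, x:x + patch] = ref - model(occluded)[0, 1].item()
    return heat                                  # high values ~ regions driving the prediction

# Toy 2-class classifier on a single-channel "slice" (stand-in for an sMRI volume)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 2))
print(occlusion_map(model, torch.randn(1, 1, 64, 64)).shape)  # torch.Size([64, 64])
```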
65

Finding Corresponding Regions In Different Mammography Projections Using Convolutional Neural Networks / Prediktion av Motsvarande Regioner i Olika Mammografiprojektioner med Faltningsnätverk

Eriksson, Emil January 2022 (has links)
Mammography screenings are performed regularly on women in order to detect early signs of breast cancer, the most common form of cancer. During an exam, X-ray images (called mammograms) are taken from two different angles and reviewed by a radiologist. If they find a suspicious lesion in one of the views, they confirm it by finding the corresponding region in the other view. Finding the corresponding region is a non-trivial task, due to the different image projections of the breast and the different angles of compression used during the exam. This thesis explores the possibility of using deep learning, a data-driven approach, to solve the corresponding-regions problem. Specifically, a convolutional neural network (CNN) called U-net is developed and trained on scanned mammograms, and evaluated on both scanned and digital mammograms. A model-based method called the arc model is developed for comparison. Results show that the best U-net outperformed the arc model on all evaluated metrics and succeeded in finding the corresponding area 83.9% of the time, compared to 72.6%. Generalization to digital images was excellent, achieving an even higher score of 87.6%, compared to 83.5% for the arc model.
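The arc model referenced above is commonly based on the observation that a lesion lies at roughly the same distance from the nipple in both views, so its correspondence in the other view is an annular band at that radius. The sketch below encodes that assumption; the tolerance, nipple coordinates, and image sizes are illustrative, not values from the thesis.

```python
import numpy as np

def arc_band(lesion_xy, nipple_cc, nipple_mlo, mlo_shape, tolerance=25.0):
    """Candidate corresponding region in the MLO view: all pixels whose distance
    to the MLO nipple matches the lesion's nipple distance in the CC view,
    within a tolerance (all coordinates and the tolerance are in pixels)."""
    radius = np.hypot(*(np.asarray(lesion_xy) - np.asarray(nipple_cc)))
    yy, xx = np.mgrid[:mlo_shape[0], :mlo_shape[1]]
    dist = np.hypot(yy - nipple_mlo[0], xx - nipple_mlo[1])
    return np.abs(dist - radius) <= tolerance

# Lesion annotated in the CC view; candidate band generated in the MLO view
band = arc_band(lesion_xy=(400, 260), nipple_cc=(512, 20),
                nipple_mlo=(600, 30), mlo_shape=(1024, 512))
print(band.mean())  # fraction of the MLO image covered by the candidate band
```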
66

Empirical Analysis of Learnable Image Resizer for Large-Scale Medical Classification and Segmentation

Rahman, M M Shaifur 07 August 2023 (has links)
No description available.
67

GAN-Based Synthesis of Brain Tumor Segmentation Data : Augmenting a dataset by generating artificial images

Foroozandeh, Mehdi January 2020 (has links)
Machine learning applications within medical imaging often suffer from a lack of data, as a consequence of restrictions that hinder the free distribution of patient information. In this project, GANs (generative adversarial networks) are used to generate data synthetically, in an effort to circumvent this issue. The GAN framework PGAN is trained on the brain tumor segmentation dataset BraTS to generate new, synthetic brain tumor masks with the same visual characteristics as the real samples. The image-to-image translation network SPADE is subsequently trained on the image pairs in the real dataset to learn a transformation from segmentation masks to brain MR images, and is in turn used to map the artificial segmentation masks generated by PGAN to corresponding artificial MR images. The images generated by these networks form a new, synthetic dataset, which is used to augment the original dataset. Different quantities of real and synthetic data are then evaluated in three different brain tumor segmentation tasks, where the image segmentation network U-Net is trained on this data to segment (real) MR images into the classes in question. The final segmentation performance of each training instance is evaluated over test data from the real dataset with the Weighted Dice Loss metric. The results indicate a slight increase in performance across all segmentation tasks evaluated in this project when some quantity of synthetic images is included. However, the differences were largest when the experiments were restricted to using only 20 % of the real data, and less pronounced when the full dataset was made available. A majority of the generated segmentation masks appear visually convincing to an extent (although somewhat noisy with regard to the intra-tumoral classes), while a relatively large proportion appear heavily noisy and corrupted. The translation of segmentation masks to MR images via SPADE, however, proved more reliable and consistent.
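The Weighted Dice Loss used for evaluation can be sketched as a per-class soft Dice score combined with class weights. The exact weighting scheme in the project is not specified in the abstract, so the weights and tensor shapes below are assumptions for illustration.

```python
import torch

def weighted_dice_loss(probs, target, class_weights, eps=1e-6):
    """Soft Dice loss averaged over classes with per-class weights.

    probs:  (N, C, H, W) softmax outputs of the segmentation network
    target: (N, C, H, W) one-hot ground-truth masks
    class_weights: (C,) tensor, e.g. up-weighting rare intra-tumoral classes
    """
    dims = (0, 2, 3)
    intersection = (probs * target).sum(dims)
    cardinality = probs.sum(dims) + target.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - (class_weights * dice_per_class).sum() / class_weights.sum()

# Example: 3 tumor sub-classes; the weights are illustrative, not the project's values
probs = torch.softmax(torch.randn(2, 3, 64, 64), dim=1)
target = torch.nn.functional.one_hot(torch.randint(0, 3, (2, 64, 64)), 3)
target = target.permute(0, 3, 1, 2).float()
print(weighted_dice_loss(probs, target, torch.tensor([0.2, 0.4, 0.4])))
```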
68

Fast Methods for Vascular Segmentation Based on Approximate Skeleton Detection

Lidayová, Kristína January 2017 (has links)
Modern medical imaging techniques have revolutionized health care over the last decades, providing clinicians with high-resolution 3D images of the inside of the patient's body without the need for invasive procedures. Detailed images of the vascular anatomy can be captured by angiography, providing a valuable source of information when deciding whether a vascular intervention is needed, for planning treatment, and for analyzing the success of therapy. However, the increasing level of detail in the images, together with the wide availability of imaging devices, leads to an urgent need for automated techniques for image segmentation and analysis in order to assist clinicians in performing fast and accurate examinations. To reduce the need for user interaction and increase the speed of vascular segmentation, we propose a fast and fully automatic vascular skeleton extraction algorithm. The algorithm first analyzes the volume's intensity histogram in order to automatically adapt its internal parameters to each patient, and then produces an approximate skeleton of the patient's vasculature. The skeleton can serve as a seed region for subsequent surface extraction algorithms. Further improvements of the skeleton extraction algorithm include an extension to detect the skeleton of diseased arteries and a convolutional neural network classifier that reduces false-positive detections of vascular cross-sections. In addition to the complete skeleton extraction algorithm, the thesis presents a segmentation algorithm based on modified onion-kernel region growing. It initiates the growing from the previously extracted skeleton and provides a rapid binary segmentation of tubular structures. To make precise measurements possible from this segmentation, we introduce a method for obtaining a segmentation with subpixel precision from the binary segmentation and the original image. This method is especially suited for thin and elongated structures, such as vessels, since it does not shrink long protrusions. The method supports both 2D and 3D image data. The methods were validated on real computed tomography datasets and are primarily intended for applications in vascular segmentation; however, they are robust enough to work with other anatomical tree structures after adequate parameter adjustment, which was demonstrated on an airway-tree segmentation.
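As a rough illustration of growing a binary segmentation outward from skeleton seeds, the sketch below uses a plain 6-connected flood fill constrained by an intensity window. It is a simplified stand-in, not the modified onion-kernel scheme described in the thesis, and the intensity window and toy volume are assumptions.

```python
from collections import deque
import numpy as np

def region_grow(volume, seeds, low, high):
    """Grow a binary segmentation from skeleton seed voxels, accepting
    6-connected neighbours whose intensity lies in [low, high]."""
    seg = np.zeros(volume.shape, dtype=bool)
    queue = deque(seeds)
    for s in seeds:
        seg[s] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not seg[n]:
                if low <= volume[n] <= high:
                    seg[n] = True
                    queue.append(n)
    return seg

# Toy example: a bright synthetic "vessel" along one axis, seeded on its centre line
vol = np.zeros((20, 20, 20))
vol[10, 10, :] = 300.0
vol[9:12, 9:12, :] += 100.0
mask = region_grow(vol, seeds=[(10, 10, 0)], low=150.0, high=500.0)
print(mask.sum())  # number of voxels reached from the seed
```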
69

Machine learning methods for brain tumor segmentation / Méthodes d'apprentissage automatique pour la segmentation de tumeurs au cerveau

Havaei, Seyed Mohammad January 2017 (has links)
Malignant brain tumors are the second leading cause of cancer-related deaths in children under 20. There are nearly 700,000 people in the U.S. living with a brain tumor, and 17,000 people are likely to lose their lives to primary malignant brain and central nervous system tumors every year. To identify non-invasively whether a patient has a brain tumor, an MRI scan of the brain is acquired and manually examined by an expert who looks for lesions (i.e. clusters of cells that deviate from healthy tissue). For treatment purposes, the tumor and its sub-regions are outlined in a procedure known as brain tumor segmentation. Although brain tumor segmentation is primarily done manually, it is very time consuming and the segmentation is subject to variation both between observers and within the same observer. To address these issues, a number of automatic and semi-automatic methods have been proposed over the years to help physicians in the decision-making process. Methods based on machine learning have been the subject of great interest in brain tumor segmentation. With the advent of deep learning methods and their success in many computer vision applications such as image classification, these methods have also started to gain popularity in medical image analysis. In this thesis, we explore different machine learning and deep learning methods applied to brain tumor segmentation.
70

Segmentering av medicinska bilder med inspiration från en quantum walk algoritm / Segmentation of Medical Images Inspired by a Quantum Walk Algorithm

Altuni, Bestun, Aman Ali, Jasin January 2023 (has links)
Currently, quantum walks are being explored as a potential method for analyzing medical images. Taking inspiration from Grady's random walk algorithm for image processing, we have developed an approach that leverages the quantum-mechanical advantages inherent in quantum walks to detect and segment structures in medical images. The segmented images have then been evaluated in terms of clinical relevance. Theoretically, quantum walk algorithms could offer a more efficient method for medical image analysis than traditional image segmentation methods such as the classical random walk, which does not rely on quantum mechanics. There is significant potential for development within this field, and it is of utmost importance to continue exploring and refining these methods. However, there is still a long way to go before this can be applied in a clinical environment.
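For context, the classical baseline that inspired this work, Grady's random walker, is available in scikit-image; the sketch below runs it on a synthetic image with two seed labels. The synthetic data, seed placement, and beta value are illustrative only, and this is the classical algorithm, not the quantum-walk variant developed here.

```python
import numpy as np
from skimage.segmentation import random_walker  # classical baseline, not the quantum variant

# Synthetic "medical" image: a bright circular structure on a noisy background
rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.1, (128, 128))
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2] += 1.0

# Seeds: label 1 inside the structure, label 2 in the background, 0 = unlabeled
labels = np.zeros(img.shape, dtype=int)
labels[64, 64] = 1
labels[5, 5] = 2

# Grady's algorithm assigns each unlabeled pixel the label whose seed a random
# walker, biased by image gradients via beta, is most likely to reach first.
seg = random_walker(img, labels, beta=130, mode='bf')
print((seg == 1).sum(), "pixels assigned to the foreground structure")
```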
