  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

Mutual Enhancement of Environment Recognition and Semantic Segmentation in Indoor Environment

Challa, Venkata Vamsi January 2024 (has links)
Background: The dynamic field of computer vision and artificial intelligence has continually evolved, pushing the boundaries in areas like semantic segmentation and environmental recognition, pivotal for indoor scene analysis. This research investigates the integration of these two technologies, examining their synergy and implications for enhancing indoor scene understanding. The application of this integration spans various domains, including smart home systems for enhanced ambient living, navigation assistance for cleaning robots, and advanced surveillance for security. Objectives: The primary goal is to assess the impact of integrating semantic segmentation data on the accuracy of environmental recognition algorithms in indoor environments. Additionally, the study explores how environmental context can enhance the precision and accuracy of contour-aware semantic segmentation. Methods: The research employed an extensive methodology, utilizing various machine learning models, including standard algorithms, Long Short-Term Memory networks, and ensemble methods. Transfer learning with models like EfficientNet B3, MobileNetV3, and Vision Transformer was a key aspect of the experimentation. The experiments were designed to measure the effect of semantic segmentation on environmental recognition and its reciprocal influence. Results: The findings indicated that the integration of semantic segmentation data significantly enhanced the accuracy of environmental recognition algorithms. Conversely, incorporating environmental context into contour-aware semantic segmentation led to notable improvements in precision and accuracy, reflected in metrics such as Mean Intersection over Union (MIoU). Conclusion: This research underscores the mutual enhancement between semantic segmentation and environmental recognition, demonstrating how each technology significantly boosts the effectiveness of the other in indoor scene analysis.
The integration of semantic segmentation data notably elevates the accuracy of environmental recognition algorithms, while the incorporation of environmental context into contour-aware semantic segmentation substantially improves its precision and accuracy. The results also open avenues for advancements in automated annotation processes, paving the way for smarter environmental interaction.
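The abstract above reports gains in Mean Intersection over Union (MIoU). As a point of reference, not taken from the thesis itself, here is a minimal sketch of how MIoU is typically computed for integer label maps (the example maps are illustrative):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes.

    pred, target: integer label maps of the same shape.
    Classes absent from both maps are skipped so they do not distort the mean.
    """
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class not present in either map
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# tiny illustrative label maps (2x3 "images", 3 classes)
pred = np.array([[0, 0, 1], [1, 1, 2]])
target = np.array([[0, 1, 1], [1, 1, 2]])
print(mean_iou(pred, target, num_classes=3))  # → 0.75
```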
472

ENHANCED DATA REDUCTION, SEGMENTATION, AND SPATIAL MULTIPLEXING METHODS FOR HYPERSPECTRAL IMAGING

Ergin, Leanna N. 07 August 2017 (has links)
No description available.
473

SEGMENTATION OF WHITE MATTER, GRAY MATTER, AND CSF FROM MR BRAIN IMAGES AND EXTRACTION OF VERTEBRAE FROM MR SPINAL IMAGES

PENG, ZHIGANG 02 October 2006 (has links)
No description available.
474

Comparative Studies of Contouring Algorithms for Cardiac Image Segmentation

Ali, Syed Farooq January 2011 (has links)
No description available.
475

Dealing With Speckle Noise in Deep Neural Network Segmentation of Medical Ultrasound Images / Hantering av brus i segmentering med djupinlärning i medicinska ultraljudsbilder

Daniel, Olmo January 2022 (has links)
Segmentation of ultrasonic images is a common task in healthcare that requires time and attention from healthcare professionals. Automation of medical image segmentation using deep learning is a fast-growing field and has been shown to be capable of near-human performance. Ultrasonic images suffer from a low signal-to-noise ratio and speckle patterns; noise filtering is therefore a common pre-processing step in non-deep-learning segmentation methods, used to improve segmentation results. This thesis investigates the effect of speckle filtering of echocardiographic images on deep learning segmentation using U-Net. When trained with speckle-reduced and despeckled datasets, a U-Net model with 0.5·10^6 trainable parameters saw an average Dice score improvement of +0.15 in the 17 out of 32 categories found to be statistically different from the same network trained with unfiltered images. The U-Net model with 1.9·10^6 trainable parameters saw a decrease in performance in only 5 out of 32 categories, and the U-Net model with 31·10^6 trainable parameters saw a decrease in performance in 10 out of 32 categories when trained with the speckle-filtered datasets. No definite differences in performance between speckle suppression and full speckle removal were observed. These results show potential for speckle filtering as a means to reduce the complexity required of deep learning models in ultrasound segmentation tasks. The use of the wavelet transform as a down- and up-sampling layer in U-Net was also investigated. The speckle patterns in ultrasonic images can contain information about the tissue, and the wavelet transform is capable of lossless down- and up-sampling, in contrast to commonly used down-sampling methods, which could enable the network to make use of textural information and improve segmentations.
The U-Net modified with the wavelet transform shows slightly improved results when trained with despeckled datasets compared to the unfiltered dataset, suggesting that it was not capable of extracting any information from the speckle. The experiments with the wavelet transform were far from exhaustive, and more research is needed for a proper assessment.
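The lossless down- and up-sampling property of the wavelet transform mentioned above can be illustrated with a one-level 1-D Haar transform. This is a generic sketch of the mathematical property, not the thesis's U-Net layer:

```python
import numpy as np

def haar_down(x):
    """One level of 1-D Haar analysis: halves the length, keeping
    an approximation (low-pass) and a detail (high-pass) band."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_up(a, d):
    """Inverse Haar synthesis: perfectly reconstructs the original signal,
    unlike max-pooling, which discards information."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 2.0, 5.0, 5.0])
a, d = haar_down(x)      # two half-length bands
x_rec = haar_up(a, d)    # exact reconstruction
assert np.allclose(x, x_rec)
```

In a U-Net, the detail bands can be carried alongside the approximation band so that no textural (e.g. speckle) information is lost at each resolution step.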
476

GAN-based Automatic Segmentation of Thoracic Aorta from Non-contrast-Enhanced CT Images / GAN-baserad automatisk segmentering av thoraxaorta från icke-kontrastförstärkta CT-bilder

Xu, Libo January 2021 (has links)
Deep learning-based automatic segmentation methods have developed rapidly in recent years and show promising performance in medical image segmentation tasks, providing clinical medicine with an accurate and fast computer-aided diagnosis method. Generative adversarial networks and their extended frameworks have achieved encouraging results on image-to-image translation problems. In this report, the proposed hybrid network combined a cycle-consistent adversarial network, which translated contrast-enhanced images from computed tomography angiography (CTA) into conventional low-contrast CT scans, with a segmentation network, and trained them simultaneously in an end-to-end manner. The trained segmentation network was tested on the non-contrast-enhanced CT images. The synthesis and segmentation processes were also implemented in a two-stage manner. The two-stage process achieved a higher Dice similarity coefficient than the baseline U-Net on the test data, but the proposed hybrid network did not outperform the baseline, due to the difference in field of view between the two training data sets.
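The cycle-consistent translation described above rests on a cycle-consistency loss: mapping an image A→B and back B→A should recover the input. A toy numpy sketch of that loss; the affine "generators" are stand-ins for illustration, not the thesis's networks:

```python
import numpy as np

def cycle_loss(x, g, f):
    """L1 cycle-consistency loss: ||f(g(x)) - x||_1 averaged over pixels.
    g maps domain A -> B, f maps B -> A."""
    return float(np.abs(f(g(x)) - x).mean())

# toy "generators": an intensity shift and its inverse (stand-ins only)
g = lambda x: 2.0 * x + 1.0     # e.g. CTA -> low-contrast CT
f = lambda x: (x - 1.0) / 2.0   # e.g. low-contrast CT -> CTA

x = np.array([0.1, 0.5, 0.9])   # toy "image"
print(cycle_loss(x, g, f))      # ≈ 0 for a perfect inverse pair
```

During CycleGAN training this term is added to the adversarial losses, penalizing translations that lose anatomical content.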
477

Resource-efficient image segmentation using self-supervision and active learning

Max, Muriel January 2021 (has links)
Neural networks have been demonstrated to perform well in computer vision tasks, especially in the field of semantic segmentation, where classification is performed on a per-pixel level. Using deep learning can reduce time and effort compared to manual segmentation; however, the performance of neural networks depends heavily on data quality and quantity, which are costly and time-consuming to obtain, especially for image segmentation tasks. In this work, this problem is addressed by investigating a combined approach of self-supervised pre-training and active learning aimed at selecting the most informative training samples. Experiments were performed using the Gland Segmentation and BraTS 2020 datasets. The results indicate that active learning can increase performance for both datasets when only a small percentage of labeled data is used. Furthermore, self-supervised pre-training improves model robustness and in some cases additionally boosts model performance.
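A common way to "select the most informative training samples", as the abstract puts it, is uncertainty sampling by predictive entropy. A minimal sketch under the assumption of softmax outputs from the current model; the thesis may use a different acquisition function:

```python
import numpy as np

def most_informative(probs, k):
    """Return indices of the k samples with highest predictive entropy.

    probs: (n_samples, n_classes) softmax outputs from the current model.
    High entropy = the model is uncertain, so labeling helps most.
    """
    eps = 1e-12  # avoid log(0)
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(entropy)[::-1][:k]

probs = np.array([
    [0.98, 0.02],   # confident prediction
    [0.55, 0.45],   # uncertain prediction
    [0.90, 0.10],
])
print(most_informative(probs, k=1))  # → [1], the most uncertain sample
```

In an active learning loop, the selected samples are sent for annotation, added to the labeled pool, and the model is retrained.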
478

Multi-Modal Learning for Abdominal Organ Segmentation / Multimodalt lärande för segmentering av bukorgan

Mali, Shruti Atul January 2020 (has links)
Deep learning techniques are widely used across various medical imaging applications. However, they are often fine-tuned for a specific modality and do not generalize to new modalities or datasets. One of the main reasons for this is large data variation: for example, the dynamic range of intensity values differs widely across multi-modal images. The goal of the project is to develop a multi-modal learning method that segments the liver from computed tomography (CT) images and abdominal organs from magnetic resonance (MR) images using deep learning techniques. In this project, a self-supervised approach is adapted to attain domain adaptation across images while retaining important 3D information from the medical images, using a simple 3D U-Net with a few auxiliary tasks. The method comprises two main steps: representation learning via self-supervised learning (pre-training) and fully supervised learning (fine-tuning). Pre-training uses a 3D U-Net as a base model along with auxiliary data augmentation tasks to learn representations of texture, geometry, and appearance. The second step fine-tunes the same network, without the auxiliary tasks, to perform the segmentation tasks on CT and MR images. Annotations of all organs are not available in both modalities; thus, the first step learns a general representation from both image modalities, while the second step fine-tunes the representations to the available annotations of each modality. Results obtained for each modality were submitted online, and one of the evaluation metrics was the Dice score: the highest Dice score obtained was 0.966 for CT liver prediction and 0.7 for MRI abdominal segmentation. This project shows the potential to achieve the desired results by combining self- and fully-supervised approaches.
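The Dice score used for evaluation above is, for binary masks, twice the overlap divided by the total mask sizes. A generic sketch, not tied to the thesis's evaluation code:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|).
    eps guards against division by zero when both masks are empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum() + eps))

# toy 2x3 binary masks
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # ≈ 0.667 = 2·2 / (3 + 3)
```

A score of 1.0 means perfect overlap; the 0.966 reported for CT liver segmentation therefore indicates near-complete agreement with the reference masks.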
479

Automatic Segmentation Using 3D Volumes of the Nerve Fiber Layer Waist at the Optic Nerve Head / Automatisk segmentering med hjälp av 3D-volymer av nervfiberskiktets midja vid synnervshuvudet

Cao, Qiran January 2024 (has links)
Glaucoma, a leading cause of blindness worldwide, results in gradual vision loss if untreated, due to retinal ganglion cell degeneration. Optical coherence tomography (OCT) measures the retinal nerve fiber layers and the optic nerve head (ONH); Považay et al. introduced the Pigment Epithelium – Inner limit of the retina Minimal Distance averaged over 2π radians (PIMD-2π) for quantifying nerve fiber cross-sections. PIMD, defined as the distance between the Optic nerve head Pigment epithelium Central Limit (OPCL) and the Inner limit of the Retina Closest Point (IRCP), shows promise for earlier glaucoma detection compared to visual field assessments. The objective of this research is to enhance the Auto-PIMD program for calculating PIMD lengths in OCT images, aiding healthcare professionals in diagnosing glaucoma. Originally based on the 2D U-Net framework, this study proposes a replacement of the deep learning model framework and introduces a novel experimental procedure aimed at refining the accuracy of OPCL calculation. Leveraging the nnU-Net model, commonly employed for semantic segmentation in medical imaging, the computational process entails segmenting vitreous masks and OPCLs. Using a dataset of 78 OCT images provided by Uppsala University, experiments were conducted in both the cylindrical domain (using the 2D U-Net and nnU-Net cylindrical architectures) and the Cartesian domain (nnU-Net Cartesian architecture). Qualitative and graphical analysis of the obtained OPCL coordinate points demonstrates the nnU-Net frameworks' ability to yield points close to the true voxel values (mean Euclidean distance of the nnU-Net cylindrical architecture: 1.665; of the nnU-Net Cartesian architecture: 2.4495), in contrast with the higher uncertainty of the 2D U-Net architecture (mean Euclidean distance: 10.6827). Moreover, the nnU-Net Cartesian architecture eliminates human bias stemming from manual ONH center selection for cylindrical coordinate expansion.
Examination of PIMD length calculations reveals that all three methods effectively distinguish between glaucoma patients and healthy subjects, with the nnU-Net-based methods displaying greater stability. This study contributes to refining OPCL coordinate point accuracy and underscores the potential of the Auto-PIMD program in glaucoma diagnosis.
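The mean Euclidean distance used above to compare predicted and true OPCL coordinate points can be sketched as follows; the coordinates are illustrative, not from the thesis's dataset:

```python
import numpy as np

def mean_euclidean(pred_pts, true_pts):
    """Mean Euclidean distance between paired predicted and true
    coordinate points, each of shape (n_points, n_dims)."""
    return float(np.linalg.norm(pred_pts - true_pts, axis=1).mean())

# toy paired 2-D points (illustrative)
pred = np.array([[3.0, 4.0], [0.0, 0.0]])
true = np.array([[0.0, 0.0], [0.0, 1.0]])
print(mean_euclidean(pred, true))  # → 3.0, i.e. (5 + 1) / 2
```

Lower values mean predicted landmark points lie closer to the reference voxels, which is how the reported 1.665 (nnU-Net cylindrical) versus 10.6827 (2D U-Net) figures are compared.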
480

Deep Learning Based Image Segmentation for Tumor Cell Death Characterization

Forsberg, Elise, Resare, Alexander January 2024 (has links)
This report presents a deep learning based approach for segmenting and characterizing tumor cell death using images provided by the Önfelt lab, which contain NK cells and HL60 leukemia cells. We explore the efficiency of convolutional neural networks (CNNs) in distinguishing between live and dead tumor cells, as well as between different classes of cell death. Three CNN architectures, MobileNetV2, ResNet-18, and ResNet-50, were employed, utilizing transfer learning to optimize performance given the limited size of the available datasets. The networks were trained using two loss functions, weighted cross-entropy and generalized Dice loss, and two optimizers, adaptive moment estimation (Adam) and stochastic gradient descent with momentum (SGDM), with performance evaluated on metrics such as mean accuracy, intersection over union (IoU), and BF score. Our results indicate that MobileNetV2 with cross-entropy loss and the Adam optimizer outperformed the other configurations, demonstrating high mean accuracy. Challenges such as class imbalance, annotation bias, and dataset limitations are discussed, alongside potential future directions to enhance model robustness and accuracy. The successful training of networks capable of classifying all identified types of cell death demonstrates the potential of a deep learning approach as a tool for analyzing immunotherapeutic strategies and enhancing understanding of NK cell behavior in cancer treatment.
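Weighted cross-entropy, one of the two loss functions mentioned above, counters class imbalance by up-weighting rare classes. A minimal numpy sketch; the weights shown are illustrative, not the thesis's values:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Weighted cross-entropy averaged over pixels.

    probs: (n_pixels, n_classes) softmax outputs
    labels: (n_pixels,) integer ground-truth classes
    class_weights: (n_classes,) per-class weights, e.g. inverse frequency
    """
    eps = 1e-12  # avoid log(0)
    picked = probs[np.arange(len(labels)), labels]  # prob of the true class
    w = class_weights[labels]
    return float((-w * np.log(picked + eps)).sum() / w.sum())

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels = np.array([0, 1, 1])
weights = np.array([1.0, 3.0])  # illustrative: up-weight the rarer class
print(weighted_cross_entropy(probs, labels, weights))
```

With uniform weights this reduces to the ordinary cross-entropy; raising a class's weight makes errors on that class contribute more to the gradient, which is the usual remedy when, for example, dead-cell pixels are far rarer than background.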
