About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Laterality Classification of X-Ray Images : Using Deep Learning

Björn, Martin January 2021 (has links)
When radiologists examine X-rays, it is crucial that they are aware of the laterality of the examined body part. The laterality refers to which side of the body is being examined, e.g., Left or Right. The consequences of a mistake based on incorrect laterality information could be disastrous. This thesis aims to address this problem by providing a deep neural network model that classifies X-rays based on their laterality. X-ray images contain markers that indicate the laterality of the image. In this thesis, both a classification model and a detection model have been trained to detect these markers and identify the laterality. The models have been trained and evaluated on four body parts: knees, feet, hands, and shoulders. The images can be divided into three laterality classes: Bilateral, Left, and Right. The model proposed in this thesis is a combination of two classification models: one for distinguishing between Bilateral and Unilateral images, and one for classifying Unilateral images as Left or Right. The latter uses the confidence of the predictions to categorize some of them as less accurate (Uncertain), which includes images where the marker is not visible or very hard to identify. The model was able to correctly distinguish Bilateral from Unilateral with an accuracy of 100.0 %. Of the Unilateral images, 5.00 % were categorized as Uncertain; of the remaining images, 99.99 % were classified correctly as Left or Right.
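The abstract's two-stage decision, with an Uncertain category driven by prediction confidence, can be sketched as follows. The probability inputs, the 0.5 cut-offs, and the confidence threshold are illustrative assumptions, not the thesis's actual values:

```python
def classify_laterality(bilateral_prob, left_prob, threshold=0.9):
    """Two-stage laterality decision: first Bilateral vs Unilateral,
    then Left vs Right, with an Uncertain category for low-confidence
    predictions. Probabilities are assumed outputs of two trained
    classifiers (hypothetical values here)."""
    if bilateral_prob >= 0.5:
        return "Bilateral"
    # Stage 2: Left vs Right; flag low-confidence predictions.
    confidence = max(left_prob, 1.0 - left_prob)
    if confidence < threshold:
        return "Uncertain"
    return "Left" if left_prob >= 0.5 else "Right"
```

Routing borderline predictions to Uncertain is what allows the remaining Left/Right decisions to reach the very high accuracy reported above.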
12

Towards non-invasive Gleason grading of prostate cancer using diffusion weighted MRI / Mot icke-invasiv Gleason gradering av prostatacancer med hjälp av diffusionsviktad MRI

Hillergren, Pierre January 2020 (has links)
Prostate cancer is one of the most common cancer diagnoses in men. This project aimed to support the characterization and treatment planning of prostate cancer by producing a Gleason grade probability based on the apparent diffusion coefficient (ADC). In the study from which this project received the patient data, the patients were first imaged using magnetic resonance imaging (MRI) in a 3T positron emission tomography MRI (PET/MRI) scanner. The prostates were then surgically removed and placed in patient-specific molds. While inside the mold, each prostate was imaged using the same scanner, producing ex-vivo images of the prostates. Lastly, the prostates were cut into histopathology slices and Gleason graded by a pathologist. To correlate ADC with Gleason grade, all images needed to be correctly related to each other. This was done by three image registrations, which were the main part of this project. The histopathology slices were first registered to the ex-vivo images of the prostate, and then to the in-vivo T2-weighted (T2w) images. The in-vivo T2w images were matched to images depicting the diffusion of water in the prostates, known as ADC maps. The ADC values were collected and matched to their possible Gleason grade. Information from 149 images, from 22 different patients, was used. 3D pixels, known as voxels, with a corresponding Gleason grade annotation measured a lower average ADC value. These voxels also showed more variation, with a larger standard deviation, and covered a larger range of ADC values compared to voxels without a corresponding Gleason grade, but the probability of a Gleason grade was mainly seen for ADC values below 1200 mm²/s. Filtering the ADC map before collecting the information showed less spread in the measurements and a larger total probability of Gleason grade annotation for lower ADC values. To test the validity of the result, the Gleason grade map was shifted to simulate registration errors.
No large impact was observed for small shifts, but the change was more obvious for large ones. The results indicate that this method is promising for predicting regions with a probability of Gleason grade 3 or 4; however, it was less accurate in separating the two. Gleason grade 5 showed very low probability, mainly as a result of the small sample size, since only two patients had such tumors. Further research with better-optimized filtering is recommended.
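The central quantity of this abstract, a probability of Gleason-grade annotation as a function of ADC, could be computed along these lines. The histogram-ratio estimator and the bin edges are assumptions for illustration, not the thesis's exact analysis:

```python
import numpy as np

def gleason_probability_by_adc(adc_values, has_gleason, bins):
    """Per-ADC-bin probability of a Gleason-grade annotation, as the
    ratio of annotated voxels to all voxels in each bin (illustrative
    recreation, not the thesis pipeline)."""
    adc_values = np.asarray(adc_values, dtype=float)
    has_gleason = np.asarray(has_gleason, dtype=bool)
    total, edges = np.histogram(adc_values, bins=bins)
    annotated, _ = np.histogram(adc_values[has_gleason], bins=edges)
    # Guard against empty bins before dividing.
    with np.errstate(divide="ignore", invalid="ignore"):
        prob = np.where(total > 0, annotated / total, 0.0)
    return prob, edges
```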
13

Automatic evaluation of breast density in mammographic images

Björklund, Tomas January 2012 (has links)
The goal of this master thesis is to develop a computerized method for automatic estimation of the mammographic density of images from 5 different types of mammography units. Mammographic density is a measure of the amount of fibroglandular tissue in a breast. It is the single most attributable risk factor for breast cancer; an accurate measurement of the mammographic density can increase the accuracy of cancer prediction in mammography. Today it is commonly estimated through visual inspection by a radiologist, which is subjective and results in inter-reader variation. The developed method estimates the density as the ratio of the number of pixels containing dense tissue to the number of pixels containing any breast tissue, and also according to the BI-RADS density categories. To achieve this, each mammographic image is: corrected for breast thickness and normalized so that a global threshold can separate dense from non-dense tissue; iteratively thresholded until a good threshold is found, a process monitored and automatically stopped by a classifier trained on sample segmentations, using features based on image intensity characteristics in specified image regions; and filtered to remove noise such as blood vessels from the segmentation. Finally, the ratio of dense tissue is calculated and a BI-RADS density class is assigned based on a calibrated scale (after averaging the ratings of both craniocaudal images for each patient). The calibration is based on density ratio estimations of over 1300 training samples against radiologists' ratings of the same images. The method was tested on craniocaudal images (not included in the training) of 703 patients, acquired with different mammography units and rated by radiologists according to the BI-RADS density classes. The agreement with the radiologist rating in terms of Cohen's weighted kappa is substantial (0.73).
In 68 % of the cases the agreement is exact; in only 1.2 % of the cases is the disagreement more than one class.
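Once a threshold has been found, the final density ratio reduces to a pixel count. A minimal sketch, where `threshold` is a hypothetical global value (the thesis finds it iteratively under classifier supervision):

```python
import numpy as np

def density_ratio(image, breast_mask, threshold):
    """Mammographic density as the ratio of dense-tissue pixels to all
    breast-tissue pixels, given a global intensity threshold
    (simplified stand-in for the classifier-monitored thresholding)."""
    breast = np.asarray(image)[np.asarray(breast_mask, dtype=bool)]
    if breast.size == 0:
        return 0.0
    return float(np.count_nonzero(breast >= threshold)) / breast.size
```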
14

Application for Deriving 2D Images from 3D CT Image Data for Research Purposes / Programvara för att härleda 2D-bilder från 3D CT bilddata för forskningsändamål

Agerskov, Niels, Carrizo, Gabriel January 2016 (has links)
Karolinska University Hospital, Huddinge, Sweden, has long desired to plan hip prostheses with Computed Tomography (CT) scans instead of plain radiographs, to save time and patient discomfort. This has not been possible previously, as their current software is limited to prosthesis planning on traditional 2D X-ray images. The purpose of this project was therefore to create an application (software) that allows medical professionals to derive a 2D image from CT images that can be used for prosthesis planning. To create the application, the NumPy and The Visualization Toolkit (VTK) Python code libraries were utilised and tied together with a graphical user interface library called PyQt4. The application includes a graphical interface and methods for optimizing the images for prosthesis planning. The application was finished and serves its purpose, but the quality of the images needs to be evaluated with a larger sample group.
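At its core, deriving a radiograph-like 2D image from a CT volume is a projection along one axis. A minimal sketch, assuming a simple average-intensity (ray-sum) projection rather than the VTK-based pipeline the application actually uses:

```python
import numpy as np

def project_ct_volume(volume, axis=1):
    """Derive a plain-radiograph-like 2D image from a 3D CT volume by
    averaging attenuation along one axis (a simplified ray-sum
    projection; the real application adds further optimizations)."""
    volume = np.asarray(volume, dtype=float)
    return volume.mean(axis=axis)
```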
15

Needle Localization in Ultrasound Images : FULL NEEDLE AXIS AND TIP LOCALIZATION IN ULTRASOUND IMAGES USING GPS DATA AND IMAGE PROCESSING

Demeulemeester, Kilian January 2015 (has links)
Many medical interventions involve ultrasound-based imaging systems to safely localize and navigate instruments inside the patient's body. To facilitate visual tracking of the instruments, we investigate the techniques and methodologies best suited to solving the problem of needle localization in ultrasound images. We propose a robust procedure that automatically determines the position of a needle in 2D ultrasound images. The task is decomposed into the localization of the needle axis and of its tip. A first estimate of the axis position is computed with the help of multiple position sensors, including one embedded in the transducer and another in the needle. Based on this, the needle axis is computed using a RANSAC algorithm. The tip is detected by analyzing the intensity along the axis, and a Kalman filter is added to compensate for measurement uncertainties. The algorithms were experimentally verified on real ultrasound images acquired by a 2D scanner imaging a portion of a cryogel phantom containing a thin metallic needle. The experiments show that the algorithms can detect a needle with millimeter accuracy. The computational time, on the order of milliseconds, permits real-time needle localization.
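The RANSAC axis-fitting step can be sketched generically: repeatedly sample two points, count inliers within a distance tolerance, and keep the best candidate line. The iteration count and tolerance below are illustrative assumptions, not the thesis's parameters:

```python
import random
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=1.0, seed=0):
    """Fit a 2D line (e.g. a needle axis) with RANSAC and return the
    inlier mask of the best candidate line."""
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.sample(range(len(pts)), 2)
        p, q = pts[i], pts[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm == 0:
            continue  # degenerate sample: identical points
        # Perpendicular distance of every point to the candidate line.
        dist = np.abs(d[0] * (pts[:, 1] - p[1])
                      - d[1] * (pts[:, 0] - p[0])) / norm
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```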
16

Evaluation of ULM for sub-wavelength imaging of microvasculature in skeletal muscles : A simulation study

Selin, Andreas January 2020 (has links)
A vital part of the human anatomy is the circulatory system, which branches out in a vast network of vessels delivering oxygen and other nutrients to all parts of the body. In a human adult, there are about 40 billion capillaries with a diameter of about 10 µm. The behavior of the blood flow in the capillaries can be used to identify, for example, diabetes or cancer. The current method for analyzing capillaries involves removing a section of the tissue and looking at it through a microscope. To avoid having to remove tissue from the patient, a method for imaging the capillaries inside living tissue is desired. A possible candidate for the future of capillary imaging is ultrasound localization microscopy (ULM). ULM attempts to overcome a well-known limitation in ultrasound imaging, the diffraction limit. The classical diffraction limit bounds the achievable resolution based on the wavelength of the transmitted sound wave. The best possible resolution is roughly half the transmitted wavelength, which means that objects smaller than that cannot be imaged accurately. A standard clinical ultrasound system uses wavelengths in the hundreds of micrometers when imaging deep organs. Capillaries, which are much smaller than that, cannot be imaged accurately with standard ultrasound systems. ULM detects individual microbubbles injected into the bloodstream and pinpoints each microbubble's location with much higher precision than the diffraction limit would allow. By combining the localizations of hundreds of microbubbles, an image of the capillaries is achieved. In this study, we investigate the performance of ULM for imaging the sub-wavelength structures of capillaries in skeletal muscle. A simulation model of capillaries in skeletal muscle was built to produce the necessary images. The model was built in Vantage 4.2.0 (Verasonics Inc.), which runs in MATLAB.
The simulation model was designed to simulate microbubbles moving in capillaries in the image plane. From the results in this study, we can conclude that ULM is a viable option for imaging capillaries in skeletal muscle and can achieve a resolution that far surpasses the diffraction limit. We show that the capillaries' shape and their proximity to each other can affect the final image. The intensity of background noise relative to the microbubble signal also substantially impacts the performance of ULM, but its effect can be mitigated by the high contrast between background noise and microbubble signal. Furthermore, we show that, if the background is stationary, the background tissue signal can easily be removed with singular value decomposition (SVD). Notice: The full text of this report has been censored due to confidentiality and will not be available to the public.
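The SVD-based removal of stationary tissue signal operates on the space-time (Casorati) matrix of the frame stack: the largest singular components, dominated by stationary tissue, are zeroed out. A minimal sketch, with the number of removed components as an assumed parameter:

```python
import numpy as np

def svd_clutter_filter(frames, n_remove=1):
    """Suppress stationary background in an ultrasound frame stack
    (time, height, width) by zeroing the largest singular components
    of the space-time Casorati matrix."""
    frames = np.asarray(frames, dtype=float)
    n_frames = frames.shape[0]
    casorati = frames.reshape(n_frames, -1).T  # (pixels, time)
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    s = s.copy()
    s[:n_remove] = 0.0  # stationary tissue dominates the first components
    filtered = (u * s) @ vt
    return filtered.T.reshape(frames.shape)
```

For a perfectly stationary background the filtered stack is exactly zero, which mirrors the abstract's observation that stationary tissue signal is easily removed.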
17

Prioritization of Informative Regions in PET Scans for Classification of Alzheimer's Disease

Mårtensson, Fredrik, Westberg, Erik January 2021 (has links)
Alzheimer’s Disease (AD) is a widespread neurodegenerative disease. The disease causes brain atrophy, resulting in memory loss, decreased cognitive ability, and eventually death. There is currently no cure for the disease, but treatment may delay the onset. Therefore, it is crucial to detect the disease at an early stage. Medical imaging techniques, such as Positron Emission Tomography (PET), are heavily applied for this task. In recent years, machine learning approaches have shown success in identifying AD from such images. The thesis presents a pipeline approach to detect, extract, and evaluate Regions of Interest (ROIs) for prioritization of informative regions in PET scans for classification of Alzheimer’s disease. The pipeline uses data acquired from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Weakly-Supervised Object Localization (WSOL) is analyzed for the detection of informative regions particularly indicative of AD. WSOL analyzes the original full-volume 18F-fluorodeoxyglucose (18F-FDG)-PET scan to categorize subjects as Cognitively Normal (CN), Mild Cognitive Impairment (MCI), or AD based on the informative regions. The detected informative regions are processed by two approaches to extract ROIs from the full-volume 18F-FDG-PET scan: Bounding-Box (BBox) Generation and Automated Anatomical Labeling (AAL) Generation. BBox Generation restricts the 18F-FDG-PET scans fed to a Convolutional Neural Network (CNN) to BBox proposals with particularly informative regions. The second approach ranks the anatomical regions of the brain through brain parcellation with the pre-defined atlas AAL3 and restricts a CNN to the highest-ranked regions. The results evaluate whether ROIs increase the robustness of classification relative to the full-volume 18F-FDG-PET scan. The results suggest that heavily restricting the image size of the full-volume 18F-FDG-PET scan does not decrease classification performance.
Instead, BBox Generation results in a significant classification performance improvement on the test set, from an Area under the ROC Curve (AUC) score of 70.08 % to 97.73 % and an accuracy from 51.79 % to 88.03 %. AAL Generation suggests that the middle and inferior regions of the temporal lobe and the fusiform gyrus are essential to the classification. In addition, several regions of the frontal lobe were found to be highly important but could not alone discriminate between CN, MCI, and AD.
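The BBox-based ROI extraction ultimately reduces to cropping the full-volume scan to proposed voxel coordinates before classification. A minimal sketch with an assumed box format:

```python
import numpy as np

def crop_to_bbox(volume, bbox):
    """Restrict a full-volume PET scan to a bounding box given as
    (z0, z1, y0, y1, x0, x1) voxel indices, the ROI-extraction step
    before the CNN (a minimal sketch; the box format is assumed)."""
    z0, z1, y0, y1, x0, x1 = bbox
    return np.asarray(volume)[z0:z1, y0:y1, x0:x1]
```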
18

Organ Segmentation Using Deep Multi-task Learning with Anatomical Landmarks / Segmentering av organ med multi-task learning och anatomiska landmärken

Carrizo, Gabriel January 2018 (has links)
This master thesis studies multi-task learning for training a neural network to segment medical images and predict anatomical landmarks. It presents the results of experiments that use medical landmarks in an attempt to help the network learn the important organ structures more quickly. The results of this study are inconclusive: rather than demonstrating the efficiency of the multi-task learning framework, they tell a story about the importance of choosing tasks and datasets wisely. The study also reflects on the general difficulties and pitfalls of performing a project of this type.
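A multi-task objective of this kind is typically a weighted sum of a segmentation loss and a landmark-regression loss. The sketch below assumes a soft Dice loss and an MSE landmark loss with an illustrative weight; the thesis's actual losses and weights may differ:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary segmentation map."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def multitask_loss(pred_seg, true_seg, pred_lm, true_lm, lm_weight=0.5):
    """Weighted sum of a segmentation (Dice) loss and a landmark
    regression (MSE) loss -- the usual shape of a multi-task
    objective (loss choices and weight are assumptions)."""
    pred_lm = np.asarray(pred_lm, dtype=float)
    true_lm = np.asarray(true_lm, dtype=float)
    mse = float(np.mean((pred_lm - true_lm) ** 2))
    return dice_loss(pred_seg, true_seg) + lm_weight * mse
```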
19

Automatic Quality Assessment of Dermatology Images : A Comparison Between Machine Learning and Hand-Crafted Algorithms

Zahra, Hasseli, Raamen, Anwia Odisho January 2022 (has links)
In recent years, pictures from handheld devices such as smartphones have been increasingly utilized as a documentation tool by medical practitioners not trained to take professional photographs. As with other image modalities, the images should be taken in a way that captures the vital information in the region of interest. Nevertheless, image capturing cannot always be done as desired, so images may exhibit different types of blur in the region of interest. Blurry images do not serve medical purposes; therefore, patients might have to schedule a second appointment several days later to retake the images. A solution to this problem is an algorithm which, immediately after an image is captured, determines whether it is medically useful and notifies the user of the result. The algorithm needs to perform the analysis at a reasonable speed and, at best, with a limited number of operations so that the calculations can run directly on the smartphone. A large number of medical images must be available to create such an algorithm. Medical images are difficult to acquire, and it is especially difficult to acquire blurry images, since they are usually deleted. The main objective of this thesis is to determine the medical usefulness of images taken with smartphone cameras, using both machine learning and hand-crafted algorithms, with a low number of floating-point operations and high performance. Seven different algorithms (one hand-crafted and six machine-learned) are created and compared with regard to both the number of floating-point operations and performance. Fast Walsh-Hadamard transforms are the basis of the hand-crafted algorithm. The employed machine learning algorithms are based both on common convolutional neural networks (MobileNetV3 and ResNet50) and on our own designs.
The issue of the low number of acquired medical images is solved by training the machine learning models on a synthetic dataset, where the non-medically useful images are generated by applying blur to the medically useful images. The models are, however, evaluated on a real dataset containing both medically useful and non-medically useful images. Our results indicate that real-time determination of the medical usefulness of images is possible on handheld devices, since our machine-learned model DeepLAD-Net reaches the highest accuracy with 42 · 10⁶ floating-point operations. In terms of accuracy, MobileNetV3-Large is the second-best model, with 31 times as many floating-point operations as our best model.
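A transform-based sharpness measure in the spirit of the hand-crafted algorithm can be sketched as the fraction of Walsh-Hadamard energy outside the DC coefficient: blur concentrates energy in low-order coefficients, so sharp patches score higher. This is a crude stand-in, not the thesis's algorithm; the patch side must be a power of two:

```python
import numpy as np

def hadamard_matrix(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.kron(h, np.array([[1.0, 1.0], [1.0, -1.0]]))
    return h

def sharpness_score(patch):
    """Fraction of 2D Walsh-Hadamard transform energy outside the DC
    term for a square patch: 0 for a flat (maximally blurry) patch,
    higher for patches with fine structure."""
    patch = np.asarray(patch, dtype=float)
    h = hadamard_matrix(patch.shape[0])
    coeffs = h @ patch @ h.T  # 2D Walsh-Hadamard transform
    energy = float((coeffs ** 2).sum())
    dc = float(coeffs[0, 0] ** 2)
    return (energy - dc) / energy if energy > 0 else 0.0
```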
20

Automatisering av skjuvvågselastografidata för kärldiagnostisk applikation. / Automatization of Shear Wave Elastography Data for Arterial Application

Boltshauser, Rasmus, Zheng, Jimmy January 2018 (has links)
Cardiovascular diseases are the leading cause of death in the world. One of the most common cardiovascular diseases is atherosclerosis. The disease is characterized by hardening of and plaque accumulation in the vessels, and it contributes to stroke and myocardial infarction. Information about the stiffness of the vessel wall can play an important role in the diagnosis of, among other conditions, atherosclerosis. Shear wave elastography (SWE) is a non-invasive ultrasound-based method used today to measure the elasticity and stiffness of larger soft tissues such as liver and breast tissue. However, the method is not used in vascular applications, since few thorough studies have been performed on SWE for vessels. The goal of the project is to automate the quantification of the shear wave speed for SWE and to investigate how the capability and limitations of the automation depend on the automation settings. Using tools obtained from CBH (the School of Chemistry, Biotechnology and Health), a MATLAB program with this capability was created. The program was applied to two phantom models. The automation settings affected the automation of these models differently, which meant that generally optimal settings could not be found. The optimal settings depend on what the automation is intended to investigate. / Medical imaging
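The quantity being automated, the shear wave speed, is commonly estimated as the slope of a linear fit of lateral position against wave arrival time. A generic sketch under that assumption (not the CBH toolchain, which is MATLAB-based):

```python
import numpy as np

def shear_wave_speed(lateral_positions_mm, arrival_times_ms):
    """Estimate shear wave speed (m/s) as the slope of a least-squares
    linear fit of lateral position (mm) against arrival time (ms).
    Note 1 mm/ms == 1 m/s, so no unit conversion is needed."""
    x = np.asarray(lateral_positions_mm, dtype=float)
    t = np.asarray(arrival_times_ms, dtype=float)
    slope = np.polyfit(t, x, 1)[0]
    return float(slope)
```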
